A Study on the Amplitude Comparison Monopulse Algorithm

2020 ◽  
Vol 10 (11) ◽  
pp. 3966
Author(s):  
Minjeong Kim ◽  
Daseon Hong ◽  
Sungsu Park

This paper presents two amplitude comparison monopulse algorithms and their covariance prediction equation. The proposed algorithms are based on the iterated least-squares estimation method and include the conventional monopulse algorithm as a special case. The proposed covariance equation is simple yet predicts RMS errors very accurately. It quantifies the estimation accuracy in terms of the major parameters of an amplitude comparison monopulse radar, and it is also applicable to the conventional monopulse algorithm. The proposed algorithms and covariance prediction equation are validated by numerical simulations with 100,000 Monte Carlo runs.
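As a rough illustration only (not the authors' iterated least-squares algorithm), the conventional amplitude-comparison estimate the paper generalizes can be sketched with two Gaussian beams squinted by ±θ_s, where the log-amplitude difference is linear in the target angle; all numeric parameters below are assumptions:

```python
import numpy as np

# Sketch of conventional amplitude-comparison monopulse: two Gaussian
# beams squinted by +/- theta_s; the log-amplitude difference of the
# two channel outputs is linear in the target angle theta.
rng = np.random.default_rng(0)

theta_s = 2.0        # squint angle (deg), assumed
k = 0.1              # Gaussian beam-shape constant (1/deg^2), assumed
theta_true = 0.7     # true target angle (deg)
noise_sigma = 0.01   # additive amplitude-noise std, assumed

def beam(theta, squint):
    """Gaussian beam voltage pattern centered at the squint angle."""
    return np.exp(-k * (theta - squint) ** 2)

n_runs = 100_000     # Monte Carlo runs, as in the paper's validation
a1 = beam(theta_true, +theta_s) + noise_sigma * rng.standard_normal(n_runs)
a2 = beam(theta_true, -theta_s) + noise_sigma * rng.standard_normal(n_runs)

# Conventional monopulse estimate: theta = (ln A1 - ln A2) / (4 k theta_s)
theta_hat = (np.log(a1) - np.log(a2)) / (4.0 * k * theta_s)

rmse = np.sqrt(np.mean((theta_hat - theta_true) ** 2))
print(np.mean(theta_hat), rmse)
```

The Monte Carlo RMS error computed this way is the quantity a covariance prediction equation of the kind described above would aim to predict in closed form.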

2018 ◽  
Vol 2018 ◽  
pp. 1-15 ◽  
Author(s):  
Vladimir Shin ◽  
Rebbecca T. Y. Thien ◽  
Yoonsoo Kim

This paper presents a noise covariance estimation method for dynamical models with rectangular noise gain matrices. A novel receding horizon least-squares criterion is proposed to achieve high estimation accuracy and stability under environmental uncertainties and experimental errors. The solution to the optimization problem for the proposed criterion yields equations for a novel covariance estimator. The estimator uses a set of recent information with appropriately chosen horizon conditions. Of special interest is the case of constant rectangular noise gain matrices, for which the key theoretical results are obtained. They include the derivation of a recursive form for the receding horizon covariance estimator and an iteration procedure for selecting the best horizon length. The efficiency of the covariance estimator is demonstrated through its implementation and performance on dynamical systems with an arbitrary number of process and measurement noises.
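The receding-horizon idea itself can be illustrated with a minimal sketch: a covariance is estimated only from the most recent N samples, so data gathered under changed conditions ages out of the window. This is a generic illustration under simple assumptions, not the paper's least-squares estimator:

```python
import numpy as np

# Illustrative sketch: sample covariance over a receding (sliding)
# horizon of the last N zero-mean samples. In a filtering context these
# samples would be residuals/innovations rather than raw noise draws.
rng = np.random.default_rng(1)

N = 200                              # horizon length, assumed
true_R = np.array([[2.0, 0.5],
                   [0.5, 1.0]])      # true noise covariance, assumed
L = np.linalg.cholesky(true_R)

# a stream of correlated zero-mean noise samples, shape (5000, 2)
samples = (L @ rng.standard_normal((2, 5000))).T

def receding_horizon_cov(data, k, horizon):
    """Sample covariance over the last `horizon` samples ending at index k."""
    window = data[max(0, k - horizon + 1): k + 1]
    return (window.T @ window) / len(window)   # zero mean assumed

R_hat = receding_horizon_cov(samples, 4999, N)
print(np.round(R_hat, 2))
```

Choosing the horizon length trades variance (short windows are noisy) against responsiveness to changing noise statistics, which is why an iteration procedure for selecting it matters.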


Author(s):  
Kantaro Shimomura ◽  
Kazushi Ikeda

The covariance matrix of signals is one of the most essential pieces of information in multivariate analysis and other signal processing techniques. The estimation accuracy of a covariance matrix degrades when some eigenvalues of the matrix are nearly duplicated. Although this degradation has been theoretically analyzed in the asymptotic case of infinitely many variables and observations, the finite case remains open. This paper tackles the problem using the Bayesian approach, where the learning coefficient represents the generalization error. The learning coefficient is derived in a special case, i.e., the covariance matrix is spiked (all eigenvalues take the same value except one) and a shrinkage estimation method is employed. Our theoretical analysis shows a non-monotonic property: the learning coefficient increases as the difference between the eigenvalues grows until a critical point, then decreases beyond that point and converges to the value of the distinct-eigenvalue case. The result is validated by numerical experiments.
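To make the setting concrete, a spiked covariance and a simple shrinkage estimator can be sketched as follows. The shrinkage target (a scaled identity) and the ad hoc weight are assumptions for illustration, not the paper's Bayesian construction:

```python
import numpy as np

# Sketch of shrinkage estimation for a spiked covariance: all
# eigenvalues equal except one "spike". The sample covariance is
# shrunk toward a scaled identity, pulling its eigenvalues together.
rng = np.random.default_rng(2)

d, n = 10, 50
eigvals = np.ones(d)
eigvals[0] = 4.0                     # the single spiked eigenvalue
Sigma = np.diag(eigvals)             # true spiked covariance

X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
S = (X.T @ X) / n                    # sample covariance (known zero mean)

alpha = 0.3                          # shrinkage weight, chosen ad hoc
target = (np.trace(S) / d) * np.eye(d)
S_shrunk = (1 - alpha) * S + alpha * target

print(np.ptp(np.linalg.eigvalsh(S)), np.ptp(np.linalg.eigvalsh(S_shrunk)))
```

Shrinking toward a scaled identity contracts the eigenvalue spread by the factor (1 − alpha), which is exactly the regime the analysis above studies: behavior as the gap between the spike and the remaining eigenvalues varies.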
