Detection performance of two efficient source tracking algorithms for matched-field processing

1998 ◽ Vol 104 (6) ◽ pp. 3351-3355
Author(s): Michael J. Wilmut, John M. Ozard

1995 ◽ Vol 03 (04) ◽ pp. 311-326
Author(s): Michael J. Wilmut, John M. Ozard, Peter Brouwer

A tracking algorithm may be used to reduce the ambiguity in the position of a moving underwater acoustic source. The objective of this paper is to evaluate two efficient target-tracking algorithms suitable for use with Matched-Field Processing (MFP). The detection-after-tracking algorithms described here are applicable to targets moving linearly at constant speed and depth that produce low Signal-to-Noise Ratio (SNR) signals at the receivers. The input to the tracker consists of the positions of the largest Bartlett statistics, or peaks, on the MFP ambiguity surfaces. Even at very low SNRs, these largest peaks include the match at or near the source position sufficiently often that the detection performance of the efficient tracker rivals that of an exhaustive tracker. In this paper, efficient algorithms are developed and evaluated based on examining either the sum of the uniformly weighted Bartlett array outputs or the sum of the Bartlett array outputs weighted by the predicted received signal strength. This sum is evaluated along a set of linear tracks that connect the largest peaks. Detection performance is shown to be better for the signal-strength-weighted tracker, and the performance difference is largest for tracks along a constant bearing from the array.
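A minimal sketch of the track-summing step described above is given below, assuming a stack of gridded ambiguity surfaces and lists of candidate peak locations as inputs; the function name, array shapes, and peak-selection interface are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def best_track_sum(surfaces, start_peaks, end_peaks, weights=None):
    """Sum Bartlett outputs along straight-line (constant-velocity) tracks
    that connect candidate start and end peaks, and return the best sum.

    surfaces    : (T, R, D) array of ambiguity surfaces (time x range x depth)
    start_peaks : list of (range_idx, depth_idx) peaks on the first surface
    end_peaks   : list of (range_idx, depth_idx) peaks on the last surface
    weights     : optional length-T predicted-signal-strength weights;
                  uniform weighting is used when None
    """
    T = surfaces.shape[0]
    w = np.ones(T) if weights is None else np.asarray(weights, dtype=float)
    t = np.arange(T)
    best = -np.inf
    for r0, d0 in start_peaks:
        for r1, d1 in end_peaks:
            # Linear track: interpolate range/depth indices between the peaks.
            rs = np.round(np.linspace(r0, r1, T)).astype(int)
            ds = np.round(np.linspace(d0, d1, T)).astype(int)
            best = max(best, float(np.sum(w * surfaces[t, rs, ds])))
    return best
```

Comparing the result with uniform weights against the result with predicted-signal-strength weights, each thresholded for detection, mirrors the two trackers compared in the paper.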


1996 ◽ Vol 04 (04) ◽ pp. 371-383
Author(s): Zoi-Heleni Michalopoulou, Michael B. Porter

In the Hudson Canyon experiment, a broadband source, transmitting simultaneously at four frequencies, moved in range at a constant depth and bearing. Using broadband matched-field processing, we demonstrate that the source can be localized and tracked. Incoherent broadband approaches for matched-field processing, based on averaging the ambiguity surfaces obtained with the narrowband Bartlett and Minimum Variance processors, are compared to a new coherent variant. Localization is very successful when either the incoherent or the coherent Bartlett estimator is used. The Minimum Variance processor's performance in determining the source location is very poor in the incoherent case but improves significantly with the introduction of the coherent scheme.
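To illustrate the incoherent broadband Bartlett approach (narrowband ambiguity surfaces averaged over frequency), a short single-snapshot sketch follows; the array shapes and names are assumptions rather than the authors' code, and the coherent variant is not shown.

```python
import numpy as np

def incoherent_bartlett(data, replicas):
    """Incoherent broadband Bartlett ambiguity surface.

    data     : (F, N) complex array, one N-hydrophone snapshot per frequency
    replicas : (F, G, N) complex modeled replica vectors for G candidate
               source positions at each of the F frequencies
    Returns a (G,) array of narrowband Bartlett power averaged over frequency.
    """
    F, G, N = replicas.shape
    surface = np.zeros(G)
    for f in range(F):
        d = data[f] / np.linalg.norm(data[f])
        w = replicas[f] / np.linalg.norm(replicas[f], axis=1, keepdims=True)
        surface += np.abs(w.conj() @ d) ** 2   # narrowband Bartlett power
    return surface / F                          # average over frequencies
```

The localization estimate is the candidate grid point that maximizes the averaged surface.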


2020 ◽ Vol 2020 (10) ◽ pp. 310-1-310-7
Author(s): Khalid Omer, Luca Caucci, Meredith Kupinski

This work reports on convolutional neural network (CNN) performance on an image texture classification task as a function of linear image processing and the number of training images. Detection performance of single- and multi-layer CNNs (sCNN/mCNN) is compared to that of optimal observers. Performance is quantified by the area under the receiver operating characteristic (ROC) curve, also known as the AUC; AUC = 1.0 corresponds to perfect detection and AUC = 0.5 to guessing. The Ideal Observer (IO) maximizes AUC but is prohibitive in practice because it depends on high-dimensional image likelihoods. IO performance is invariant to any full-rank, invertible linear image processing. This work demonstrates the existence of full-rank, invertible linear transforms that can degrade both sCNN and mCNN performance even in the limit of large quantities of training data. A subsequent invertible linear transform changes the images' correlation structure again and can improve this AUC. Stationary textures sampled from zero-mean, unequal-covariance Gaussian distributions allow closed-form analytic expressions for the IO and for optimal linear compression. Linear compression is a mitigation technique for high-dimension, low-sample-size (HDLSS) applications. By definition, compression strictly decreases or maintains IO detection performance. For small quantities of training data, linear image compression prior to the sCNN architecture can increase AUC from 0.56 to 0.93. Results indicate an optimal compression ratio for CNNs that depends on task difficulty, compression method, and number of training images.
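Because zero-mean Gaussian textures with unequal covariances admit a closed-form Ideal Observer, a brief sketch of that log-likelihood-ratio statistic and of an empirical AUC estimate is given below; the variable names and the Mann-Whitney AUC estimator are illustrative choices, not the paper's code.

```python
import numpy as np

def io_statistic(x, K0_inv, K1_inv):
    """Ideal-observer log-likelihood ratio for a zero-mean Gaussian
    detection task with class covariances K0 (class 0) and K1 (class 1);
    x is the vectorized image.  The constant log-determinant term is
    dropped because it shifts every score equally and leaves the ROC
    curve (and hence the AUC) unchanged."""
    return 0.5 * (x @ K0_inv @ x - x @ K1_inv @ x)

def empirical_auc(scores0, scores1):
    """AUC estimated as P(class-1 score > class-0 score), i.e. the
    normalized Mann-Whitney U statistic over all score pairs."""
    s0 = np.asarray(scores0)[:, None]
    s1 = np.asarray(scores1)[None, :]
    return float(np.mean((s1 > s0) + 0.5 * (s1 == s0)))
```

Applying `io_statistic` to held-out samples from both classes and passing the two score sets to `empirical_auc` yields the IO benchmark against which the CNN AUCs are compared.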

