Intrinsic feature extraction in the COI of wavelet power spectra of climatic signals

Author(s):  
Zhihua Zhang ◽  
John Moore
Computation ◽  
2021 ◽  
Vol 9 (7) ◽  
pp. 78
Author(s):  
Shengkun Xie

Feature extraction plays an important role in machine learning for signal processing, particularly for low-dimensional data visualization and predictive analytics. Data from real-world complex systems are often high-dimensional, multi-scale, and non-stationary, which makes extracting their key features challenging. This work proposes a novel approach to analyzing epileptic EEG signals using both wavelet power spectra and functional principal component analysis. We focus on how the feature extraction method can improve the separation of signals in a low-dimensional feature subspace. Transforming the EEG signals into wavelet power spectra significantly enhances their functional structure and makes functional principal component analysis well suited to extracting key signal features. We therefore refer to this approach as a double feature extraction method, since both the wavelet transform and functional PCA act as feature extractors. To demonstrate the applicability of the proposed method, we tested it on a set of publicly available epileptic EEGs and on patient-specific, multi-channel EEG signals, for both ictal and pre-ictal signals. The results demonstrate that combining wavelet power spectra and functional principal component analysis is promising for feature extraction from epileptic EEGs, and that the extracted features can be useful in computer-based medical systems for epilepsy diagnosis and epileptic seizure detection.
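
As a rough sketch of the double feature extraction idea described above, the following Python snippet computes continuous wavelet power spectra with PyWavelets and then projects the time-averaged spectra into a low-dimensional subspace. Ordinary PCA on a fixed scale grid stands in for functional PCA here, and the EEG array, scales, and wavelet choice are illustrative assumptions, not the authors' exact settings.

```python
# Minimal sketch: wavelet power spectra followed by PCA over the discretized
# spectra. The EEG data, scales, and wavelet are assumptions for illustration.
import numpy as np
import pywt
from sklearn.decomposition import PCA

def wavelet_power_features(signals, scales, wavelet="morl", n_components=3):
    """signals: (n_signals, n_samples) array of EEG segments."""
    spectra = []
    for x in signals:
        coeffs, _ = pywt.cwt(x, scales, wavelet)   # (n_scales, n_samples)
        power = np.abs(coeffs) ** 2                # wavelet power spectrum
        spectra.append(power.mean(axis=1))         # time-averaged power per scale
    spectra = np.array(spectra)                    # (n_signals, n_scales)
    # PCA over the spectra stands in for functional PCA on a fixed grid.
    return PCA(n_components=n_components).fit_transform(spectra)

# Hypothetical usage: 100 short EEG segments of 174 samples each.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((100, 174))
features = wavelet_power_features(eeg, scales=np.arange(1, 64))
print(features.shape)  # (100, 3): the low-dimensional feature subspace
```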


Electronics ◽  
2020 ◽  
Vol 9 (10) ◽  
pp. 1698
Author(s):  
Iordanis Thoidis ◽  
Lazaros Vrysis ◽  
Dimitrios Markou ◽  
George Papanikolaou

Perceptually motivated audio signal processing and feature extraction have played a key role in the determination of high-level semantic processes and in the development of emerging systems and applications such as mobile telephony and hearing aids. In the era of deep learning, speech enhancement methods based on neural networks have seen great success, mainly operating on log-power spectra. Although these approaches remove the need for exhaustive feature extraction and selection, it is still unclear whether they target the sound characteristics important to speech perception. In this study, we propose a novel set of auditory-motivated features for single-channel speech enhancement that fuses temporal envelope and temporal fine structure information in the context of vocoder-like processing. A causal gated recurrent unit (GRU) neural network is employed to recover the low-frequency amplitude modulations of speech. Experimental results indicate that the proposed system achieves considerable gains for normal-hearing and hearing-impaired listeners in terms of objective intelligibility and quality metrics. The proposed auditory-motivated feature set achieved better objective intelligibility than conventional log-magnitude spectrogram features, while mixed results were observed for simulated listeners with hearing loss. Finally, we demonstrate that the proposed analysis/synthesis framework provides satisfactory reconstruction accuracy of speech signals.
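
As a loose illustration of the envelope / temporal-fine-structure decomposition that underlies such vocoder-like processing, the sketch below extracts both components from one sub-band via the Hilbert transform. The band edges, filter order, and sampling rate are assumptions; the paper's filterbank and the GRU training loop are not reproduced.

```python
# Envelope and temporal fine structure (TFS) of one sub-band via the analytic
# signal. Band edges, filter order, and fs are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def envelope_tfs(x, fs, band=(300.0, 600.0), order=4):
    sos = butter(order, band, btype="bandpass", fs=fs, output="sos")
    analytic = hilbert(sosfiltfilt(sos, x))   # analytic signal of the sub-band
    envelope = np.abs(analytic)               # temporal envelope (slow modulations)
    tfs = np.cos(np.angle(analytic))          # temporal fine structure (carrier)
    return envelope, tfs

# Hypothetical usage on a synthetic amplitude-modulated tone.
fs = 16000
t = np.arange(fs) / fs
speech_like = np.sin(2 * np.pi * 440 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
env, tfs = envelope_tfs(speech_like, fs)
print(env.shape, tfs.shape)
```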


2012 ◽  
Vol 5 (2) ◽  
pp. 746-768 ◽  
Author(s):  
Tsz Wai Wong ◽  
Lok Ming Lui ◽  
Paul M. Thompson ◽  
Tony F. Chan

Author(s):  
Fengchun Tian ◽  
Simon X. Yang ◽  
Xuntao Xu ◽  
Tao Liu

This chapter considers how the characteristics of the sensors used in electronic nose (e-nose) systems affect the repeatability of measurements. The noise performance of the different types of sensors available for e-nose use is first examined. Following the theoretical background, the probability density functions and power spectra of noise from real sensors are presented. The impact of sensor imperfections, including noise, on repeatability forms the basis of the remainder of the chapter, which considers the impact of the sensors themselves, the effect of data pre-processing methods, and the influence of the feature extraction algorithm on repeatability.
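
A minimal sketch of the kind of noise characterization described above, assuming a simulated sensor trace in place of real e-nose recordings: an empirical probability density via a histogram and a noise power spectrum via Welch's method.

```python
# Empirical PDF and Welch power spectral density of a sensor noise record.
# The trace and sampling rate are simulated stand-ins for real measurements.
import numpy as np
from scipy.signal import welch

fs = 100.0                                  # assumed sampling rate, Hz
rng = np.random.default_rng(1)
noise = rng.standard_normal(10_000)         # stand-in for a sensor noise record

pdf, bin_edges = np.histogram(noise, bins=50, density=True)  # empirical PDF
freqs, psd = welch(noise, fs=fs, nperseg=1024)               # noise power spectrum
print(freqs.shape, psd.shape)
```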


Author(s):  
J.P. Fallon ◽  
P.J. Gregory ◽  
C.J. Taylor

Quantitative image analysis systems have been used for several years in research and quality control applications in various fields, including metallurgy and medicine. The technique has been applied as an extension of subjective microscopy to problems that require quantitative results and are amenable to automatic methods of interpretation.

Feature extraction. In the most general sense, a feature can be defined as a portion of the image which differs in some consistent way from the background. A feature may be characterized by the density difference between itself and the background, by an edge gradient, or by the spatial frequency content (texture) within its boundaries. The task of feature extraction includes recognition of features and encoding of the associated information for quantitative analysis.

Quantitative analysis. Quantitative analysis is the determination of one or more physical measurements of each feature. These measurements may be straightforward ones, such as area, length, or perimeter, or more complex stereological measurements, such as convex perimeter or Feret's diameter.
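
The sketch below illustrates feature extraction by a density threshold followed by quantitative measurement of area, perimeter, and Feret's diameter per feature. It uses scikit-image for convenience, which is an anachronism relative to the original work; the image and threshold are synthetic.

```python
# Extract features as regions differing from the background by a density
# (intensity) threshold, then measure each region quantitatively.
import numpy as np
from skimage.measure import label, regionprops

image = np.zeros((64, 64))
image[10:20, 10:30] = 1.0                   # a synthetic bright feature
mask = image > 0.5                          # density difference vs. background
regions = regionprops(label(mask))          # recognize connected features
for r in regions:
    # area, perimeter, and Feret's diameter of each extracted feature
    print(r.area, r.perimeter, r.feret_diameter_max)
```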


Author(s):  
Karen F. Han

The primary focus in our laboratory is the study of higher-order chromatin structure using three-dimensional electron microscope tomography. Three-dimensional tomography involves the reconstruction of an object by combining multiple projection views of the object taken at different tilt angles. However, image intensities are not always accurate representations of the projected object mass density, due to the effects of electron-specimen interactions and microscope lens aberrations. Therefore, an understanding of the mechanism of image formation is important for interpreting the images. Image formation for thick biological specimens has been analyzed using both energy filtering and Ewald sphere constructions. Surprisingly, there is a significant amount of coherent transfer for our thick specimens. The relative amount of coherent transfer is correlated with the relative proportion of elastically scattered electrons, as measured by electron energy loss spectroscopy and imaging techniques.

Electron-specimen interactions include single and multiple, elastic and inelastic scattering. Multiple and inelastic scattering events give rise to nonlinear imaging effects which complicate the interpretation of collected images.
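
As a toy illustration of combining projection views at different tilt angles into a reconstruction, the snippet below uses the 2-D Radon transform and filtered back-projection as a stand-in for the 3-D case. The phantom, the number of views, and the full 180-degree coverage are idealizations; real EM tilt series are limited in angular range, and this sketch ignores the electron-specimen interaction effects discussed above.

```python
# 2-D projection/reconstruction toy model of tilt-series tomography.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

phantom = rescale(shepp_logan_phantom(), 0.25)        # small test object
angles = np.linspace(0.0, 180.0, 60, endpoint=False)  # tilt angles in degrees
sinogram = radon(phantom, theta=angles)               # projection views
recon = iradon(sinogram, theta=angles, filter_name="ramp")  # filtered back-projection
print(np.abs(recon - phantom).mean())                 # mean reconstruction error
```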


Author(s):  
P. Fraundorf ◽  
B. Armbruster

Optical interferometry, confocal light microscopy, stereo-pair scanning electron microscopy, scanning tunneling microscopy, and scanning force microscopy can produce topographic images of surfaces on size scales ranging from centimeters to Angstroms. Second-moment (height variance) statistics of surface topography can be very helpful in quantifying “visually suggested” differences from one surface to the next. The two most common methods for displaying this information are the Fourier power spectrum and its direct-space transform, the autocorrelation function or interferogram. Unfortunately, for a surface exhibiting lateral structure over several orders of magnitude in size, both the power spectrum and the autocorrelation function will find most of the information they contain pressed into the plot’s origin. This suggests plotting power against LOG(frequency) ≡ -LOG(period); rather than add this logarithmic re-scaling as another layer of abstraction in the analysis of power spectra, however, we further recommend a shift in paradigm.
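
A small sketch of the suggestion above: estimate the Fourier power spectrum of a (synthetic) 1-D height profile and index it by LOG(frequency), so that structure spanning several orders of magnitude in lateral size is spread out rather than pressed into the origin. The profile, spacing, and 1/f roughness model are assumptions.

```python
# Power spectrum of a synthetic rough profile, indexed by log10(frequency).
import numpy as np

n, dx = 4096, 1.0                           # samples and spacing (arbitrary units)
rng = np.random.default_rng(2)
freqs = np.fft.rfftfreq(n, d=dx)

# Synthetic multi-scale surface: 1/f amplitude falloff with random phases.
amp = np.zeros_like(freqs)
amp[1:] = freqs[1:] ** -1.0
phase = np.exp(2j * np.pi * rng.random(freqs.size))
profile = np.fft.irfft(amp * phase, n)      # height profile h(x)

power = np.abs(np.fft.rfft(profile)) ** 2 / n   # Fourier power spectrum
log_f = np.log10(freqs[1:])                     # plot power vs LOG(frequency)
print(log_f.min(), log_f.max(), power[1:].mean())
```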

