A comparison of linear and nonlinear dimensionality reduction methods applied to synthetic speech

Author(s):  
Andrew Errity ◽  
John McKenna
2017 ◽  
Vol 18 (1) ◽  
Author(s):  
Jiaoyun Yang ◽  
Haipeng Wang ◽  
Huitong Ding ◽  
Ning An ◽  
Gil Alterovitz

Author(s):  
Amir Hossein Karimi ◽  
Mohammad Javad Shafiee ◽  
Ali Ghodsi ◽  
Alexander Wong

Dimensionality reduction methods are widely used in information processing systems to better understand the underlying structures of datasets, and to improve the efficiency of algorithms for big data applications. Methods such as linear random projections have proven to be simple and highly efficient in this regard; however, there is limited theoretical and experimental analysis of nonlinear random projections. In this study, we review the theoretical framework for random projections and nonlinear rectified random projections, and introduce an ensemble of nonlinear maximum random projections. We empirically evaluate the embedding performance on three commonly used natural datasets and compare with linear random projections and traditional techniques such as PCA, highlighting the superior generalization performance and stable embedding of the proposed method.
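The contrast the abstract draws between linear and rectified (nonlinear) random projections can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's method: the data, dimensions, and the 1/sqrt(k) scaling convention are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 samples in 50 dimensions (a placeholder, not one of
# the natural datasets evaluated in the paper).
X = rng.standard_normal((100, 50))

def linear_random_projection(X, k, rng):
    """Project X onto k dimensions with a Gaussian random matrix,
    scaled by 1/sqrt(k) so pairwise distances are approximately
    preserved (Johnson-Lindenstrauss style)."""
    R = rng.standard_normal((X.shape[1], k)) / np.sqrt(k)
    return X @ R

def rectified_random_projection(X, k, rng):
    """Nonlinear variant: apply a ReLU (rectification) after the
    linear random projection."""
    return np.maximum(linear_random_projection(X, k, rng), 0.0)

Z_lin = linear_random_projection(X, 10, rng)   # linear embedding
Z_relu = rectified_random_projection(X, 10, rng)  # rectified embedding
```

An ensemble variant, as named in the abstract, would draw several independent random matrices and aggregate the resulting embeddings; the aggregation rule is not specified here, so it is omitted from the sketch.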


2019 ◽  
Vol 2019 ◽  
pp. 1-8
Author(s):  
Hui Xu ◽  
Yongguo Yang ◽  
Xin Wang ◽  
Mingming Liu ◽  
Hongxia Xie ◽  
...  

Traditional supervised multiple kernel learning (MKL) for dimensionality reduction is generally an extension of kernel discriminant analysis (KDA), which rests on some restrictive assumptions, and such methods are typically built on the graph embedding framework. A more general multiple kernel-based dimensionality reduction algorithm, called multiple kernel marginal Fisher analysis (MKL-MFA), is presented for supervised nonlinear dimensionality reduction, combined with a ratio-trace optimization problem. MKL-MFA aims at relaxing the restrictive assumption that the data of each class follows a Gaussian distribution, and at finding an appropriate convex combination of several base kernels. To improve the efficiency of multiple kernel dimensionality reduction, the spectral regression framework is incorporated into the optimization model. Furthermore, the optimal weights of the predefined base kernels can be obtained by solving a convex optimization problem. Experimental results on benchmark datasets demonstrate that MKL-MFA outperforms state-of-the-art supervised multiple kernel dimensionality reduction methods.
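The "convex combination of several base kernels" at the heart of MKL can be illustrated directly. This is a hedged sketch, not the MKL-MFA algorithm: the kernel choices and weights below are fixed by hand for illustration, whereas MKL-MFA would learn the weights by solving the convex optimization problem described above.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((30, 5))  # toy data: 30 samples, 5 features

def rbf_kernel(X, gamma):
    """Gram matrix of the RBF kernel exp(-gamma * ||x_i - x_j||^2)."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

# Predefined base kernels: RBF at three bandwidths plus a linear kernel.
base_kernels = [rbf_kernel(X, g) for g in (0.1, 1.0, 10.0)] + [X @ X.T]

# Hypothetical weights: nonnegative and summing to one, as a convex
# combination requires (fixed here; learned in MKL-MFA).
weights = np.array([0.4, 0.3, 0.2, 0.1])

# Combined kernel: a weighted sum of PSD Gram matrices is itself a
# valid (symmetric PSD) kernel, which is why the convex-combination
# formulation is well posed.
K = sum(w * Km for w, Km in zip(weights, base_kernels))
```

Any downstream graph-embedding step (such as marginal Fisher analysis) would then operate on the combined Gram matrix `K` instead of a single fixed kernel.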
