Robust Image Representation via Low Rank Locality Preserving Projection

2021 ◽  
Vol 15 (4) ◽  
pp. 1-22
Author(s):  
Shuai Yin ◽  
Yanfeng Sun ◽  
Junbin Gao ◽  
Yongli Hu ◽  
Boyue Wang ◽  
...  

Locality preserving projection (LPP) is a dimensionality reduction algorithm that preserves the neighborhood graph structure of the data. However, conventional LPP is sensitive to outliers in the data. This article proposes a novel low-rank LPP model called LR-LPP. In this new model, the original data are decomposed into a clean intrinsic component and a noise component. The projective matrix is then learned from the clean intrinsic component, which is encoded in low-rank features. The noise component is constrained by the ℓ1-norm, which is more robust to outliers. Finally, the LR-LPP model is extended to LR-FLPP, in which the low-dimensional features are measured by the F-norm. LR-FLPP reduces aggregated error and weakens the effect of outliers, making it even more robust to outliers. Experimental results on public image databases demonstrate the effectiveness of the proposed LR-LPP and LR-FLPP.
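For orientation, the conventional LPP baseline that LR-LPP robustifies can be sketched as follows: build a heat-kernel weight matrix over a k-nearest-neighbor graph and solve the generalized eigenproblem XᵀLXa = λXᵀDXa. This is a minimal illustration, not the paper's low-rank model; `n_neighbors`, `t`, and the small regularizer are illustrative choices.

```python
import numpy as np
from scipy.linalg import eigh

def lpp(X, n_neighbors=5, t=1.0, d=2):
    """Conventional LPP sketch. X: (n_samples, n_features)."""
    n = X.shape[0]
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(sq[i])[1:n_neighbors + 1]       # nearest neighbors, skip self
        W[i, idx] = np.exp(-sq[i, idx] / t)              # heat-kernel weights
    W = np.maximum(W, W.T)                               # symmetrize the graph
    D = np.diag(W.sum(1))
    L = D - W                                            # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])          # regularize for stability
    _, vecs = eigh(A, B)                                 # ascending eigenvalues
    return vecs[:, :d]                                   # projection matrix (n_features, d)
```

Low-dimensional features are then obtained as `Z = X @ lpp(X)`.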

2021 ◽  
Author(s):  
Ren Wang ◽  
Pengzhi Gao ◽  
Meng Wang

Abstract This paper studies the robust matrix completion problem for time-varying models. Leveraging the low-rank property and the temporal information of the data, we develop novel methods to recover the original data from partially observed and corrupted measurements. We show that the reconstruction performance can be improved if one leverages the information in the sparse corruptions in addition to the temporal correlations among a sequence of matrices. The dynamic robust matrix completion problem is formulated as a nonconvex optimization problem; the recovery error is quantified analytically and proved to decay at the same rate as that of the state-of-the-art method when there is no corruption. A fast iterative algorithm with a guarantee of convergence to a stationary point is proposed to solve the nonconvex problem. Experiments on synthetic data and a real video dataset demonstrate the effectiveness of our method.
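The low-rank-plus-sparse decomposition underlying this kind of formulation can be sketched, without the temporal term that is the paper's contribution, as an alternation between a truncated SVD (low-rank step) and soft-thresholding (sparse-corruption step) on the observed entries. The `rank` and `lam` parameters below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def robust_complete(M, mask, rank=2, lam=0.1, n_iter=50):
    """Static robust matrix completion sketch: M observed on `mask`,
    decomposed as low-rank L plus sparse corruption S."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # Low-rank step: truncated SVD of the sparse-corrected data,
        # keeping the current estimate on unobserved entries.
        R = np.where(mask, M - S, L)
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Sparse step: soft-threshold the residual on observed entries.
        R = np.where(mask, M - L, 0.0)
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, S
```

With a few large corruptions on a genuinely low-rank matrix, the recovered `L` is typically much closer to the clean data than the corrupted observations are.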


2019 ◽  
Vol 11 (12) ◽  
pp. 168781401988977 ◽  
Author(s):  
Yanfeng Peng ◽  
Yanfei Liu ◽  
Junsheng Cheng ◽  
Yu Yang ◽  
Kuanfang He ◽  
...  

There are two difficulties in predicting the remaining useful life of rolling bearings. First, the vibration signals are always contaminated by noise. Second, some of the extracted features contain useless information that may decrease prediction accuracy. To solve these problems, corresponding methods are employed in this article. First, adaptive sparsest narrow-band decomposition is utilized to extract the degradation information from noise. Compared with the commonly used empirical mode decomposition method, it avoids problems such as mode mixing and the boundary effect caused by computing extrema. Second, locality preserving projection is applied to merge the meaningful information from the original data and reduce the dimensionality of the features. Based on adaptive sparsest narrow-band decomposition and locality preserving projection, a novel approach to remaining useful life prediction is employed. The prediction procedure is as follows. First, the signals are analyzed by adaptive sparsest narrow-band decomposition and the feature vectors are constructed. The features are then fused by locality preserving projection to merge the useful information they contain. Finally, a least squares support vector machine is applied for the remaining useful life prediction. The analysis results indicate that the proposed approach is reliable for rolling bearing remaining useful life prediction.
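The final prediction step, a least squares support vector machine, has a convenient closed form: the dual variables solve a single linear system rather than a quadratic program. Below is a minimal sketch of LS-SVM regression with an RBF kernel; `gamma` and `sigma` are illustrative hyperparameters, and the signal decomposition and feature fusion steps are not reproduced here.

```python
import numpy as np

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """LS-SVM regression sketch: solve the dual linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))          # RBF kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma           # ridge term from the LS loss
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    b, alpha = sol[0], sol[1:]

    def predict(Xq):
        sq = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * sigma ** 2)) @ alpha + b

    return predict
```

In the paper's pipeline, `X` would hold the LPP-fused features and `y` the remaining-useful-life targets.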


2021 ◽  
Vol 11 (19) ◽  
pp. 9063
Author(s):  
Ümit Öztürk ◽  
Atınç Yılmaz

Manifold learning tries to find low-dimensional manifolds in high-dimensional data, which makes it useful for omitting redundant information from the input. Linear manifold learning algorithms are applicable to out-of-sample data and are fast and practical, especially for classification purposes. Locality preserving projection (LPP) and orthogonal locality preserving projection (OLPP) are two well-known linear manifold learning algorithms. In this study, scatter information from a distance matrix is used to construct a weight matrix with a supervised approach for the LPP and OLPP algorithms to improve classification accuracy rates. The low-dimensional data are classified with an SVM, and the results of the proposed method are compared with several other important linear manifold learning methods. Class-based enhancements and the coefficients proposed for the formulation are reported visually. Furthermore, the changes in the weight matrices, band information, and correlation matrices with p-values are extracted and visualized to understand the effect of the proposed method. Experiments are conducted on hyperspectral imaging (HSI) with two different datasets. According to the experimental results, applying the proposed method with the LPP or OLPP algorithms outperformed the traditional LPP, OLPP, neighborhood preserving embedding (NPE), and orthogonal neighborhood preserving embedding (ONPE) algorithms. Furthermore, the analytical findings from the visualizations are consistent with the obtained classification accuracy enhancements.
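To make the supervised-weight idea concrete, here is one simple way to inject label information into an LPP-style weight matrix: keep heat-kernel similarities only between same-class samples, so the projection pulls same-class neighbors together. This is a generic illustration; the paper's weights are derived from scatter information of the distance matrix, which this sketch does not reproduce.

```python
import numpy as np

def supervised_weights(X, labels, t=1.0):
    """Supervised weight matrix sketch: heat-kernel similarity,
    zeroed for pairs from different classes."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    same = labels[:, None] == labels[None, :]   # same-class mask
    W = np.where(same, np.exp(-sq / t), 0.0)
    np.fill_diagonal(W, 0.0)                    # no self-loops
    return W
```

The resulting `W` drops into the standard LPP Laplacian construction (D − W) unchanged.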


2019 ◽  
Vol 9 (10) ◽  
pp. 2161
Author(s):  
Lin He ◽  
Xianjun Chen ◽  
Jun Li ◽  
Xiaofeng Xie

Manifold learning is a powerful dimensionality reduction tool for hyperspectral image (HSI) classification that relieves the curse of dimensionality and reveals the intrinsic low-dimensional manifold. However, a specific characteristic of HSIs, i.e., irregular spatial dependency, which can yield many spatially homogeneous subregions in an HSI scene, is usually not taken into consideration in method design. Conventional manifold learning methods, such as locality preserving projection (LPP), pursue a unified projection on the entire HSI, neglecting the local homogeneities on the HSI manifold caused by those spatially homogeneous subregions. In this work, we propose a novel multiscale superpixelwise LPP (MSuperLPP) for HSI classification to overcome this challenge. First, we partition an HSI into homogeneous subregions with multiscale superpixel segmentation. Then, on each scale, subregion-specific LPPs and the associated preliminary classifications are performed. Finally, we aggregate the classification results from all scales using a decision fusion strategy to obtain the final result. Experimental results on three real hyperspectral data sets validate the effectiveness of our method.
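The final aggregation step can be sketched with the simplest decision fusion rule, a per-pixel majority vote over the per-scale label maps. This is an assumed, illustrative rule; the paper's actual fusion strategy may weight scales differently.

```python
import numpy as np

def fuse_decisions(label_maps):
    """Majority-vote fusion sketch: label_maps is a list of equal-shape
    integer label arrays (one per scale); returns the per-position mode."""
    stacked = np.stack(label_maps)              # (n_scales, ...)
    flat = stacked.reshape(len(label_maps), -1)
    fused = np.array([np.bincount(col).argmax() for col in flat.T])
    return fused.reshape(label_maps[0].shape)
```

Ties resolve to the smallest label via `argmax` on the count vector, a deliberate deterministic choice.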


Author(s):  
Yang Liu ◽  
Quanxue Gao ◽  
Jin Li ◽  
Jungong Han ◽  
Ling Shao

Zero-shot learning (ZSL) has been widely studied and has achieved success in machine learning. Most existing ZSL methods aim to accurately recognize objects of unseen classes by learning a shared mapping from the feature space to a semantic space. However, such methods do not investigate in depth whether the mapping can precisely reconstruct the original visual features. Motivated by the fact that data often have low intrinsic dimensionality (e.g., they lie in a low-dimensional subspace), we formulate in this paper a novel framework named Low-rank Embedded Semantic AutoEncoder (LESAE) to jointly seek a low-rank mapping that links visual features with their semantic representations. Following the encoder-decoder paradigm, the encoder learns a low-rank mapping from the visual features to the semantic space, while the decoder reconstructs the original data with the learned mapping. In addition, a non-greedy iterative algorithm is adopted to solve our model. Extensive experiments on six benchmark datasets demonstrate its superiority over several state-of-the-art algorithms.
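A plain linear semantic autoencoder with a tied decoder (encoder W, decoder Wᵀ), the paradigm LESAE builds on, admits a closed-form solution via a Sylvester equation: minimizing ‖X − WᵀWX‖² + λ‖WX − S‖² leads to (SSᵀ)W + W(λXXᵀ) = (1 + λ)SXᵀ. The sketch below shows only this plain autoencoder step, not LESAE's low-rank constraint or its non-greedy solver.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def semantic_autoencoder(X, S, lam=0.2):
    """Linear tied-weight semantic autoencoder sketch.
    X: (d, n) visual features; S: (k, n) semantic representations.
    Returns the encoder W of shape (k, d)."""
    A = S @ S.T                  # (k, k)
    B = lam * (X @ X.T)          # (d, d)
    C = (1.0 + lam) * (S @ X.T)  # (k, d)
    return solve_sylvester(A, B, C)   # solves A W + W B = C
```

Projecting an unseen sample's features through `W` yields its semantic embedding; `W.T` maps semantic vectors back toward the visual space.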


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Joshua T. Vogelstein ◽  
Eric W. Bridgeford ◽  
Minh Tang ◽  
Da Zheng ◽  
Christopher Douville ◽  
...  

Abstract To solve key biomedical problems, experimentalists now routinely measure millions or billions of features (dimensions) per sample, with the hope that data science techniques will be able to build accurate data-driven inferences. Because sample sizes are typically orders of magnitude smaller than the dimensionality of these data, valid inferences require finding a low-dimensional representation that preserves the discriminating information (e.g., whether the individual suffers from a particular disease). There is a lack of interpretable supervised dimensionality reduction methods that scale to millions of dimensions with strong statistical theoretical guarantees. We introduce an approach that extends principal components analysis by incorporating class-conditional moment estimates into the low-dimensional projection. The simplest version, Linear Optimal Low-Rank Projection, incorporates the class-conditional means. We prove, and substantiate with both synthetic and real data benchmarks, that Linear Optimal Low-Rank Projection and its generalizations lead to improved data representations for subsequent classification, while maintaining computational efficiency and scalability. Using multiple brain imaging datasets consisting of more than 150 million features, and several genomics datasets with more than 500,000 features, Linear Optimal Low-Rank Projection outperforms other scalable linear dimensionality reduction techniques in terms of accuracy, while requiring only a few minutes on a standard desktop computer.
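The simplest version described above can be sketched in a few lines: stack the class-conditional mean-difference direction(s) on top of the leading principal directions of the class-centered data, then orthonormalize. This is a minimal reading of "incorporating the class-conditional means" and omits the scalability machinery of the actual method.

```python
import numpy as np

def lol_project(X, y, d):
    """Sketch of a class-conditional-mean-augmented PCA projection.
    X: (n_samples, n_features); y: integer labels; returns (n_features, d)."""
    classes = np.unique(y)
    means = np.stack([X[y == c].mean(0) for c in classes])
    deltas = means[1:] - means[0]                 # mean-difference directions
    class_idx = (y[:, None] == classes).argmax(1)
    Xc = X - means[class_idx]                     # center each sample by its class mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # principal directions
    A = np.vstack([deltas, Vt])[:d]               # means first, then PCs
    Q, _ = np.linalg.qr(A.T)                      # orthonormalize the columns
    return Q
```

Low-dimensional features for a classifier are then `X @ lol_project(X, y, d)`.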


2020 ◽  
Vol 523 ◽  
pp. 14-37 ◽  
Author(s):  
Huafeng Li ◽  
Xiaoge He ◽  
Zhengtao Yu ◽  
Jiebo Luo

2021 ◽  
Vol 12 (4) ◽  
pp. 1-25
Author(s):  
Stanley Ebhohimhen Abhadiomhen ◽  
Zhiyang Wang ◽  
Xiangjun Shen ◽  
Jianping Fan

Multi-view subspace clustering (MVSC) finds a shared structure in latent low-dimensional subspaces of multi-view data to enhance clustering performance. Nonetheless, we observe that most existing MVSC methods neglect the diversity in multi-view data by considering only the common knowledge to find a shared structure, either directly or by merging different similarity matrices learned for each view. In the presence of noise, this predefined shared structure becomes a biased representation of the different views. Thus, in this article, we propose an MVSC method based on coupled low-rank representation to address this limitation. Our method first obtains a low-rank representation for each view, constrained to be a linear combination of the view-specific representation and the shared representation, while simultaneously encouraging the sparsity of the view-specific one. Then, it uses the k-block diagonal regularizer to learn a manifold recovery matrix for each view through the respective low-rank matrices, recovering more of their manifold structure. In this way, the proposed method can find an ideal similarity matrix by approximating the clustering projection matrices obtained from the recovered structures. This similarity matrix denotes our clustering structure with exactly k connected components, obtained by applying a rank constraint on the similarity matrix's relaxed Laplacian matrix, which avoids spectral post-processing of the low-dimensional embedding matrix. The core of our idea is to introduce dynamic approximation into the low-rank representation so that the clustering structure and the shared representation guide each other toward cleaner low-rank matrices, which in turn lead to a better clustering structure. Our approach is therefore notably different from existing methods, in which the local manifold structure of the data is captured in advance. Extensive experiments on six benchmark datasets show that our method outperforms 10 similar state-of-the-art methods on six evaluation metrics.
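The rank constraint on the Laplacian exploits a standard fact from spectral graph theory: the number of connected components of a similarity graph equals the multiplicity of the zero eigenvalue of its graph Laplacian, so forcing exactly k zero eigenvalues forces exactly k clusters. A small helper illustrating this fact (not the paper's optimization):

```python
import numpy as np

def count_components(W, tol=1e-8):
    """Count connected components of a symmetric nonnegative similarity
    matrix W as the number of (near-)zero Laplacian eigenvalues."""
    D = np.diag(W.sum(1))
    L = D - W                       # graph Laplacian (self-loops cancel out)
    vals = np.linalg.eigvalsh(L)    # real, nonnegative eigenvalues
    return int(np.sum(vals < tol))
```

A block-diagonal similarity matrix with k blocks therefore yields exactly k zero eigenvalues, which is the structure the rank constraint drives the learned similarity toward.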

