matrix recovery
Recently Published Documents


TOTAL DOCUMENTS: 302 (five years: 90)

H-INDEX: 24 (five years: 4)

Sensors ◽  
2022 ◽  
Vol 22 (1) ◽  
pp. 343
Author(s):  
Yanbin Zhang ◽  
Long-Ting Huang ◽  
Yangqing Li ◽  
Kai Zhang ◽  
Changchuan Yin

In order to reduce the volume of hyperspectral imaging (HSI) data that must be transmitted in hyperspectral remote sensing (HRS), we propose a structured low-rank and joint-sparse (L&S) data compression and reconstruction method. The proposed method exploits the spatial and spectral correlations in HSI data using sparse Bayesian learning and compressive sensing (CS). By utilizing a simultaneously L&S data model, we employ principal-component information and Bayesian learning to reconstruct the hyperspectral images. Simulation results demonstrate that the proposed method is superior to the LRMR and SS&LR methods in terms of reconstruction accuracy and computational burden under the same signal-to-noise ratio (SNR) and compression ratio.
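The simultaneously low-rank and joint-sparse (L&S) structure can be illustrated with a minimal sketch on a synthetic pixels-by-bands matrix, reconstructed by a generic structured iterative-hard-thresholding loop rather than the paper's sparse Bayesian learning procedure; all dimensions and variable names here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

n, bands = 64, 16      # spatial samples x spectral bands
s, r = 8, 2            # joint row-sparsity and rank of the L&S model
m = 48                 # compressed measurements per band

# Simultaneously low-rank and joint-sparse (L&S) ground truth:
# only s rows are active, and the active block has rank r.
rows = rng.choice(n, size=s, replace=False)
X = np.zeros((n, bands))
X[rows] = rng.standard_normal((s, r)) @ rng.standard_normal((r, bands))

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # CS measurement matrix
Y = Phi @ X                                      # compressed data

# Reconstruction by projected gradient descent that alternates the two
# structural projections: keep the s most energetic rows (joint sparsity),
# then take the best rank-r approximation (low rank).
step = 1.0 / np.linalg.norm(Phi, 2) ** 2
Xh = np.zeros((n, bands))
for _ in range(500):
    Xh = Xh + step * Phi.T @ (Y - Phi @ Xh)
    # joint-sparse projection: zero all but the s largest rows
    keep = np.argsort(np.linalg.norm(Xh, axis=1))[-s:]
    mask = np.zeros(n, dtype=bool)
    mask[keep] = True
    Xh[~mask] = 0.0
    # low-rank projection: truncate to rank r (preserves the zero rows)
    U, sv, Vt = np.linalg.svd(Xh, full_matrices=False)
    Xh = (U[:, :r] * sv[:r]) @ Vt[:r]

rel_err = np.linalg.norm(Xh - X) / np.linalg.norm(X)
print(f"relative reconstruction error: {rel_err:.3e}")
```

Alternating the two projections enforces both structural priors at once; the paper's Bayesian approach additionally infers the support and principal components from the data rather than assuming s and r are known.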


2021 ◽  
Author(s):  
Hang Xu ◽  
Song Li ◽  
Junhong Lin

Abstract Many problems in data science can be treated as recovering a low-rank matrix from a small number of random linear measurements, possibly corrupted by adversarial noise and dense noise. Recently, a number of theories have been developed for variants of this model under different noise types, but fewer address adversarial noise. In this paper, we study the low-rank matrix recovery problem from linear measurements perturbed by $\ell_1$-bounded noise and sparse noise that can arbitrarily change an adversarially chosen $\omega$-fraction of the measurement vector. For Gaussian measurements with a nearly optimal number of measurements, we show that the nuclear-norm constrained least absolute deviation (LAD) estimator successfully recovers the ground-truth matrix for any $\omega<0.239$. Similar robust recovery results are established for an iterative hard thresholding algorithm applied to the rank-constrained LAD with geometrically decaying step sizes, and for the unconstrained LAD based on matrix factorization together with its subgradient descent solver.
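The rank-constrained LAD with geometrically decaying step sizes can be sketched as follows; this is a minimal illustration of the algorithmic template described above, with the corruption level, step schedule, and problem dimensions chosen for demonstration rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

n1, n2, r = 8, 8, 1
m = 400                       # number of Gaussian measurements
omega = 0.1                   # fraction of adversarially corrupted entries

# Rank-1 ground truth with unit Frobenius norm.
Xstar = np.outer(rng.standard_normal(n1), rng.standard_normal(n2))
Xstar /= np.linalg.norm(Xstar)

A = rng.standard_normal((m, n1 * n2))
y = A @ Xstar.ravel()
bad = rng.choice(m, size=int(omega * m), replace=False)
y[bad] += 5.0 * rng.standard_normal(bad.size)   # gross sparse outliers

def proj_rank(M, r):
    """Project onto the set of matrices of rank at most r."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

# Subgradient-type iterative hard thresholding on the rank-constrained LAD
#   min_{rank(X) <= r}  (1/m) * || A vec(X) - y ||_1
# with geometrically decaying step sizes.
X = np.zeros((n1, n2))
step, q = 1.0, 0.97
best = np.linalg.norm(X - Xstar)
for _ in range(400):
    g = A.T @ np.sign(A @ X.ravel() - y) / m     # subgradient of the L1 loss
    X = proj_rank(X - step * g.reshape(n1, n2), r)
    step *= q
    best = min(best, np.linalg.norm(X - Xstar))

print(f"best relative error: {best:.3f}")
```

The L1 loss keeps the subgradient bounded regardless of how large the outliers are, which is what makes the iteration robust to the sparsely corrupted measurements; the geometric step decay lets the iterates settle near the ground truth.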


2021 ◽  
Author(s):  
Zhengqin Xu ◽  
Huasong Xing ◽  
Shun Fang ◽  
Shiqian Wu ◽  
Shoulie Xie

2021 ◽  
pp. 108273
Author(s):  
Lili Yang ◽  
Jie Li ◽  
Fangjiong Chen ◽  
Yuwei Wei ◽  
Fei Ji ◽  
...  

Author(s):  
Hengyou Wang ◽  
Wen Li ◽  
Lujin Hu ◽  
Changlun Zhang ◽  
Qiang He

Author(s):  
Xuan Vinh Doan ◽  
Stephen Vavasis

Abstract The low-rank matrix recovery problem is difficult due to its non-convexity and is usually solved via convex relaxation. In this paper, we formulate the non-convex low-rank matrix recovery problem exactly using novel Ky Fan 2-k-norm-based models. A general difference-of-convex-functions algorithm (DCA) is developed to solve these models. A proximal point algorithm (PPA) framework is proposed to solve the sub-problems within the DCA, which allows us to handle large instances. Numerical results show that the proposed models achieve high recoverability rates compared with the truncated nuclear norm method and the alternating bilinear optimization approach. The results also demonstrate that the proposed DCA with the PPA framework is efficient in handling larger instances.
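The DCA idea can be sketched on a closely related DC formulation, nuclear norm minus Ky Fan k-norm (a penalty that vanishes exactly when rank(X) ≤ k), applied to a simple denoising problem whose convex subproblem has a closed form; this illustrates the general DCA template only, not the paper's Ky Fan 2-k-norm models or their PPA solver, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

n, k = 8, 2
# Rank-k ground truth plus small dense noise.
Xstar = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))
M = Xstar + 0.05 * rng.standard_normal((n, n))

lam = 0.5  # regularization weight

def svt(Z, tau):
    """Singular value thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# DCA on the DC objective
#   0.5 * ||X - M||_F^2 + lam * ( ||X||_*  -  sum of k largest sing. values )
# Each iteration linearizes the concave part at the current iterate
# (subgradient W = U_k V_k^T of the Ky Fan k-norm) and solves the
# remaining convex subproblem in closed form via SVT.
X = M.copy()
for _ in range(30):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    W = U[:, :k] @ Vt[:k]          # subgradient of the Ky Fan k-norm
    X = svt(M + lam * W, lam)      # convex subproblem: one prox step

rel_err = np.linalg.norm(X - Xstar) / np.linalg.norm(Xstar)
print(f"rank: {np.linalg.matrix_rank(X)}, relative error: {rel_err:.3f}")
```

Because the linearization adds lam back to the top-k singular directions before thresholding, those directions survive the shrinkage while the noise directions are driven to zero, so the iterates are pushed toward rank k without the uniform bias of plain nuclear-norm minimization.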

