A convex relaxation of a dimension reduction problem using the nuclear norm

Author(s):  
Christian Lyzell ◽  
Martin Andersen ◽  
Martin Enqvist

2019 ◽  
Vol 63 (2) ◽  
pp. 319-345 ◽  
Author(s):  
Assaf Naor ◽  
Gilles Pisier ◽  
Gideon Schechtman

2018 ◽  
Vol 18 (5-6) ◽  
pp. 388-410 ◽  
Author(s):  
Scott Powers ◽  
Trevor Hastie ◽  
Robert Tibshirani

We propose the nuclear norm penalty as an alternative to the ridge penalty for regularized multinomial regression. This convex relaxation of reduced-rank multinomial regression has the advantage of leveraging underlying structure among the response categories to make better predictions. We apply our method, nuclear penalized multinomial regression (NPMR), to Major League Baseball play-by-play data to predict outcome probabilities based on batter–pitcher matchups. The interpretation of the results meshes well with subject-area expertise and also suggests a novel understanding of what differentiates players.
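The core idea of the abstract, a nuclear norm penalty on the multinomial coefficient matrix, can be illustrated with a generic proximal-gradient sketch. This is not the authors' algorithm: `npmr_sketch` and all step sizes and parameters are illustrative assumptions; only the penalty (singular-value soft-thresholding, the prox of the nuclear norm) is taken from the abstract.

```python
import numpy as np

def soft_threshold_singular_values(B, lam):
    """Proximal operator of the nuclear norm: shrink singular values by lam."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

def npmr_sketch(X, Y, lam=0.1, step=0.01, iters=200):
    """Illustrative proximal-gradient loop for nuclear-penalized multinomial
    regression: a gradient step on the multinomial negative log-likelihood,
    followed by singular-value soft-thresholding of the p x K coefficient
    matrix B (Y is one-hot encoded, n x K)."""
    n, p = X.shape
    K = Y.shape[1]
    B = np.zeros((p, K))
    for _ in range(iters):
        Z = X @ B
        Z -= Z.max(axis=1, keepdims=True)       # numerical stability
        P = np.exp(Z)
        P /= P.sum(axis=1, keepdims=True)       # predicted class probabilities
        grad = X.T @ (P - Y) / n                # gradient of the NLL
        B = soft_threshold_singular_values(B - step * grad, step * lam)
    return B
```

The low-rank structure of the fitted B is what lets the method share information across response categories.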


2019 ◽  
Vol 1 (1) ◽  
pp. 341-358 ◽  
Author(s):  
Guoqing Chao ◽  
Yuan Luo ◽  
Weiping Ding

Recently, we have witnessed explosive growth in both the quantity and the dimensionality of the data being generated, which aggravates the challenge of high dimensionality in tasks such as predictive modeling and decision support. To date, a large number of unsupervised dimension reduction methods have been proposed and studied; however, there is no review focusing specifically on the supervised dimension reduction problem. Most studies perform classification or regression after an unsupervised dimension reduction step, yet learning the low-dimensional representation and the classification/regression model simultaneously offers two advantages: higher accuracy and a more effective representation. Taking classification or regression as the main goal of dimension reduction, the purpose of this paper is to summarize and organize the current developments in the field into three main classes, namely PCA-based, Non-negative Matrix Factorization (NMF)-based, and manifold-based supervised dimension reduction methods, and to provide elaborated discussions of their advantages and disadvantages. Moreover, we outline a dozen open problems that can be further explored to advance the development of this topic.
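The survey's central point, that an unsupervised projection can discard the label-relevant directions a supervised one keeps, is easy to demonstrate. The toy data below is our own construction: the class signal lives in a low-variance feature while the high-variance feature is pure noise, so the top principal component (unsupervised) chases the noise, whereas a Fisher-style supervised direction recovers the signal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Toy design: the discriminative signal lives in feature 0 (small variance),
# while feature 1 carries large label-independent noise.
y = np.r_[np.zeros(n), np.ones(n)]
X = np.c_[np.where(y == 0, -1.0, 1.0) + 0.1 * rng.standard_normal(2 * n),
          5.0 * rng.standard_normal(2 * n)]

# Unsupervised: the top principal component maximizes variance, i.e. noise.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Vt[0]

# Supervised (Fisher-style): within-class covariance and class-mean
# difference together recover the signal axis.
mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)
w = np.linalg.solve(Sw, mu1 - mu0)
w /= np.linalg.norm(w)
```

Here `pc1` points (up to sign) along the noisy second axis, while `w` aligns with the discriminative first axis.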


Author(s):  
Holger Rauhut ◽  
Željka Stojanac

Abstract: We study extensions of compressive sensing and low-rank matrix recovery to the recovery of low-rank tensors from incomplete linear information. While the reconstruction of low-rank matrices via nuclear norm minimization is rather well understood by now, almost no theory is available for the extension to higher-order tensors, owing to various theoretical and computational difficulties arising in tensor decompositions. In fact, nuclear norm minimization for matrix recovery is a tractable convex relaxation approach, but the extension of the nuclear norm to tensors is in general NP-hard to compute. In this article, we introduce convex relaxations of the tensor nuclear norm that are computable in polynomial time via semidefinite programming. Our approach is based on theta bodies, a concept from real computational algebraic geometry similar to that of the better-known Lasserre relaxations. We introduce polynomial ideals generated by the second-order minors corresponding to different matricizations of the tensor (where the tensor entries are treated as variables), such that the nuclear norm ball is the convex hull of the algebraic variety of the ideal. The theta body of order k for such an ideal generates a new norm, which we call the θk-norm. We show that in the matrix case these norms reduce to the standard nuclear norm; for tensors of order three or higher, however, we obtain genuinely new norms. The sequence of the corresponding unit θk-norm balls converges asymptotically to the unit tensor nuclear norm ball. By providing Gröbner bases for the ideals, we give explicit semidefinite programs for computing the θk-norm and for minimizing the θk-norm under an affine constraint. Finally, numerical experiments on order-three tensor recovery via θ1-norm minimization suggest that our approach successfully reconstructs low-rank tensors from incomplete linear (random) measurements.
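The matricizations that generate the ideals above are ordinary mode-k unfoldings, and while the tensor nuclear norm itself is NP-hard, the nuclear norm of each unfolding is a polynomial-time computable lower bound on it. The sketch below (ours, not the paper's semidefinite programs) just computes those three matrix nuclear norms for an order-3 tensor; for a rank-one tensor of unit-norm factors all three coincide with the tensor nuclear norm, which equals 1.

```python
import numpy as np

def matricization_nuclear_norms(T):
    """Nuclear norms of the three mode-k unfoldings of an order-3 tensor.
    Each is computable in polynomial time (one SVD) and lower-bounds the
    NP-hard tensor nuclear norm."""
    norms = []
    for k in range(3):
        Tk = np.moveaxis(T, k, 0).reshape(T.shape[k], -1)  # mode-k unfolding
        norms.append(np.linalg.svd(Tk, compute_uv=False).sum())
    return norms
```

For higher-order or tighter relaxations (the θk-norms) one needs the semidefinite programs the paper derives; this numpy check only covers the matricization bound.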


Test ◽  
2021 ◽  
Author(s):  
Andrea Bergesio ◽  
María Eugenia Szretter Noste ◽  
Víctor J. Yohai

Author(s):  
Haoyang Cheng ◽  
Wenquan Cui

Heteroscedasticity often appears in high-dimensional data analysis. In order to obtain a sparse dimension reduction direction for high-dimensional data with heteroscedasticity, we propose a new sparse sufficient dimension reduction method, called Lasso-PQR. From the candidate matrix derived from the principal quantile regression (PQR) method, we construct a new artificial response variable made up of the top eigenvectors of the candidate matrix. We then apply a Lasso regression to obtain sparse dimension reduction directions. For the "large p, small n" case, where p > n, we use principal projection to solve the dimension reduction problem in a lower-dimensional subspace and then project back to the original dimension reduction problem. Theoretical properties of the methodology are established. Comparisons with several existing methods in simulations and real data analysis demonstrate the advantages of our method for high-dimensional data with heteroscedasticity.
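The two-stage construction in the abstract (top eigenvector of a candidate matrix as an artificial response, then a Lasso fit) can be sketched generically. This is not the authors' Lasso-PQR: the candidate matrix is assumed symmetric and given, `sparse_direction` and `lasso_ista` are our hypothetical names, and the ISTA Lasso solver is a plain stand-in, not the estimator analyzed in the paper.

```python
import numpy as np

def lasso_ista(X, y, lam, iters=500):
    """Plain ISTA solver for the Lasso (illustrative, not tuned)."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n          # Lipschitz constant of the smooth part
    beta = np.zeros(p)
    for _ in range(iters):
        g = X.T @ (X @ beta - y) / n           # gradient of the least-squares loss
        z = beta - g / L
        beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return beta

def sparse_direction(X, M, lam=0.05):
    """Sketch of the artificial-response idea: take the top eigenvector v of
    a (symmetric) candidate matrix M, form the artificial response X @ v,
    and fit a Lasso to obtain a sparse dimension reduction direction."""
    _, V = np.linalg.eigh(M)
    v = V[:, -1]                    # eigenvector of the largest eigenvalue
    y_art = X @ v                   # artificial response variable
    return lasso_ista(X, y_art, lam)
```

When the leading eigenvector of M is itself sparse, the fitted direction recovers its support with the remaining coordinates shrunk to zero.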


2013 ◽  
Vol 718-720 ◽  
pp. 2308-2313
Author(s):  
Lu Liu ◽  
Wei Huang ◽  
Di Rong Chen

Minimizing the nuclear norm has recently been considered as the convex relaxation of the rank minimization problem, which arises in many applications such as the Netflix challenge. A tighter non-convex relaxation, Schatten norm minimization, has been proposed to replace the NP-hard rank minimization. In this paper, an algorithm based on Majorization Minimization is proposed to solve Schatten norm minimization. The numerical experiments show that Schatten norm minimization recovers low-rank matrices from fewer measurements than nuclear norm minimization. The numerical results also indicate that our algorithm gives a more accurate reconstruction.
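One common way a Majorization Minimization step for the Schatten-p quasi-norm can look, in the matrix completion setting, is to majorize the Schatten term at the current spectrum by a weighted nuclear norm (weights p(σᵢ + ε)^(p-1)), which yields a weighted singular-value shrinkage on the filled-in matrix. The sketch below is our own illustration under those assumptions, not the paper's algorithm; the function name and parameters are hypothetical.

```python
import numpy as np

def schatten_mm_complete(M_obs, mask, p=0.5, lam=0.1, iters=200, eps=1e-6):
    """Illustrative MM scheme for Schatten-p matrix completion: iterate
    (1) fill in missing entries with the current estimate, (2) majorize the
    Schatten-p term by a weighted nuclear norm at the current singular
    values, (3) apply the resulting weighted singular-value shrinkage."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(iters):
        Z = np.where(mask, M_obs, X)            # keep observed entries fixed
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        w = p * (s + eps) ** (p - 1)            # MM weights from current spectrum
        s_new = np.maximum(s - lam * w, 0.0)    # weighted shrinkage step
        X = (U * s_new) @ Vt
    return X
```

Because p < 1 makes the weights large for small singular values and small for large ones, the scheme shrinks noise directions aggressively while leaving the dominant spectrum nearly unbiased, which is the intuition behind the improved reconstruction reported in the abstract.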


2020 ◽  
Vol 12 (14) ◽  
pp. 2264
Author(s):  
Hongyi Liu ◽  
Hanyang Li ◽  
Zebin Wu ◽  
Zhihui Wei

Low-rank tensors have received increasing attention in hyperspectral image (HSI) recovery. Minimizing the tensor nuclear norm, as a low-rank approximation method, often introduces modeling bias. To achieve an unbiased approximation and improve robustness, this paper develops a non-convex relaxation approach for low-rank tensor approximation. First, a non-convex approximation of the tensor nuclear norm (NCTNN) is introduced for low-rank tensor completion. Second, a non-convex tensor robust principal component analysis (NCTRPCA) method is proposed, which aims at exactly recovering a low-rank tensor corrupted by mixed noise. Both proposed models are solved efficiently by the alternating direction method of multipliers (ADMM). Three HSI datasets are employed to exhibit the superiority of the proposed models over low-rank penalization methods in terms of accuracy and robustness.
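The bias the abstract refers to is easy to see at the level of the singular-value shrinkage step that an ADMM solver applies each iteration. Below we contrast the convex prox (uniform soft-thresholding, which biases every singular value downward by λ) with one generic non-convex alternative that shrinks large singular values less; this particular reweighted rule is our illustration, not necessarily the NCTNN surrogate used in the paper.

```python
import numpy as np

def svt_soft(Y, lam):
    """Convex prox of the nuclear norm: shrink every singular value by lam,
    biasing large (signal) singular values as much as small (noise) ones."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - lam, 0.0)) @ Vt

def svt_nonconvex(Y, lam, eps=1e-3):
    """Illustrative non-convex alternative: shrinkage lam/(sigma + eps)
    vanishes for large singular values, reducing the modeling bias."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - lam / (s + eps), 0.0)) @ Vt
```

In a tensor ADMM solver these operators would be applied to unfoldings (or to frontal slices in the Fourier domain for the t-SVD-based tensor nuclear norm); on a strong rank-one signal the non-convex rule leaves the reconstruction much closer to the input than uniform soft-thresholding does.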

