Events and Seminars : 2012 Seminars

Joint Variable And Rank Selection For Parsimonious Estimation of High Dimensional Matrices

The talk discusses joint variable and rank selection for supervised dimension reduction in predictive learning. When the number of responses and/or predictors exceeds the sample size, one has to consider shrinkage methods for estimation and prediction. We propose to apply sparsity and reduced-rank techniques jointly to attain simultaneous feature selection and feature extraction. A class of estimators is introduced, based on novel penalties that impose both row and rank restrictions on the coefficient matrix. We show that these estimators adapt to the unknown matrix sparsity and have faster rates of convergence than the LASSO and reduced-rank regression. A computational algorithm is developed and applied to real-world applications in machine learning, cognitive neuroscience, and macroeconometric forecasting.
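The idea of combining row (variable) selection with rank reduction can be illustrated with a minimal sketch. This is not the authors' estimator or algorithm; it is a toy two-step procedure (threshold the rows of a least-squares estimate, then truncate its SVD) under assumed data dimensions, intended only to show what "row and rank restrictions on the coefficient matrix" mean in practice:

```python
# Illustrative sketch only (not the method from the talk): row selection
# followed by rank truncation in multivariate linear regression Y = X B + E.
import numpy as np

rng = np.random.default_rng(0)
n, p, q, r = 200, 10, 8, 2   # samples, predictors, responses, target rank

# Ground-truth coefficient matrix: only the first 3 rows are nonzero,
# and the matrix has rank at most r.
B = np.zeros((p, q))
B[:3] = rng.standard_normal((3, r)) @ rng.standard_normal((r, q))

X = rng.standard_normal((n, p))
Y = X @ B + 0.1 * rng.standard_normal((n, q))

# Step 1: ordinary least-squares estimate of the coefficient matrix.
B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Step 2: row restriction -- zero out rows with small Euclidean norm
# (a crude stand-in for a row-sparsity penalty).
row_norms = np.linalg.norm(B_ols, axis=1)
keep = row_norms > 0.25 * row_norms.max()
B_rows = np.where(keep[:, None], B_ols, 0.0)

# Step 3: rank restriction -- truncate the SVD to the target rank
# (a crude stand-in for a rank penalty).
U, s, Vt = np.linalg.svd(B_rows, full_matrices=False)
B_hat = (U[:, :r] * s[:r]) @ Vt[:r]

print("selected rows:", np.flatnonzero(keep))
print("rank of estimate:", np.linalg.matrix_rank(B_hat))
```

The penalties discussed in the talk perform both restrictions jointly within one optimization, rather than in separate post-processing steps as above.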
