
Hedging parameter selection for basis pursuit

Research output: Contribution to Journal/Magazine › Journal article

Journal publication date: 4/05/2018
Number of pages: 12
Publication status: Published
Original language: English


In Compressed Sensing and high-dimensional estimation, signal recovery often relies on sparsity assumptions, and estimation is performed via ℓ1-penalized least-squares optimization, a.k.a. the LASSO. The ℓ1 penalization is usually controlled by a weight, also called the "relaxation parameter", denoted by λ. It is commonly thought that the practical efficiency of the LASSO for prediction crucially relies on accurate selection of λ. In this short note, we consider the hyper-parameter selection problem from a new perspective that combines the Hedge online learning method of Freund and Schapire with the stochastic Frank-Wolfe method for the LASSO. Using the Hedge algorithm, we show that our simple selection rule can achieve prediction results comparable to cross-validation at a potentially much lower computational cost.
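The Hedge-based idea described above can be sketched as follows: treat a grid of candidate λ values as "experts", solve the LASSO for each, and exponentially down-weight candidates with poor held-out prediction loss. This is an illustrative toy, not the paper's method: the solver below is plain ISTA rather than stochastic Frank-Wolfe, and the data, grid, learning rate `eta`, and number of rounds are all assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse-recovery data (illustrative, not from the paper).
n, p, k = 100, 50, 5
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = 1.0
y = X @ beta + 0.1 * rng.standard_normal(n)

# Candidate relaxation parameters play the role of Hedge "experts".
lambdas = np.logspace(-3, 1, 10)
weights = np.ones(len(lambdas)) / len(lambdas)
eta = 0.5  # Hedge learning rate (assumed value)

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(A, b, lam, iters=200):
    # Plain ISTA as a stand-in LASSO solver
    # (the paper uses a stochastic Frank-Wolfe method instead).
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
    return x

# Hedge rounds: score each candidate λ by held-out prediction loss,
# then multiply its weight by exp(-eta * loss) and renormalize.
for _ in range(5):
    held = rng.choice(n, size=n // 2, replace=False)
    train = np.setdiff1d(np.arange(n), held)
    losses = np.array([
        np.mean((X[held] @ lasso_ista(X[train], y[train], lam) - y[held]) ** 2)
        for lam in lambdas
    ])
    weights *= np.exp(-eta * losses / losses.max())  # rescale losses to [0, 1]
    weights /= weights.sum()

best_lam = lambdas[np.argmax(weights)]
print("selected lambda:", best_lam)
```

Each Hedge round costs one LASSO solve per candidate, so the total cost is comparable to a single cross-validation sweep; the claimed savings come from replacing full solves with cheap stochastic Frank-Wolfe iterations, which this sketch does not attempt.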