

Hedging parameter selection for basis pursuit

Research output: Contribution to Journal/Magazine › Journal article

Published

Standard

Hedging parameter selection for basis pursuit. / Chretien, Stephane; Gibberd, Alex; Roy, Sandipan.
In: arXiv, 04.05.2018.


Harvard

Chretien, S, Gibberd, A & Roy, S 2018, 'Hedging parameter selection for basis pursuit', arXiv.

APA

Chretien, S., Gibberd, A., & Roy, S. (2018). Hedging parameter selection for basis pursuit. arXiv.

Vancouver

Chretien S, Gibberd A, Roy S. Hedging parameter selection for basis pursuit. arXiv. 2018 May 4.

Author

Chretien, Stephane ; Gibberd, Alex ; Roy, Sandipan. / Hedging parameter selection for basis pursuit. In: arXiv. 2018.

Bibtex

@article{588f5e5a798342efa2e681632932221a,
title = "Hedging parameter selection for basis pursuit",
abstract = "In compressed sensing and high-dimensional estimation, signal recovery often relies on sparsity assumptions, and estimation is performed via ℓ1-penalized least-squares optimization, a.k.a. the LASSO. The ℓ1 penalization is usually controlled by a weight, also called the {"}relaxation parameter{"}, denoted by λ. It is commonly thought that the practical efficiency of the LASSO for prediction crucially relies on accurate selection of λ. In this short note, we consider the hyper-parameter selection problem from a new perspective which combines the Hedge online learning method of Freund and Schapire with the stochastic Frank-Wolfe method for the LASSO. Using the Hedge algorithm, we show that our simple selection rule can achieve prediction results comparable to cross-validation at a potentially much lower computational cost.",
keywords = "stat.CO",
author = "Stephane Chretien and Alex Gibberd and Sandipan Roy",
year = "2018",
month = may,
day = "4",
language = "English",
journal = "arXiv",

}
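
For reference, the two ingredients named in the abstract have standard textbook forms (stated here for the reader's convenience, not quoted from the paper): the LASSO with relaxation parameter λ, and the Hedge update maintaining weights w_{k,t} over K candidate values of λ, with learning rate η > 0 and per-round losses ℓ_{k,t}:

\hat{\beta}(\lambda) \in \arg\min_{\beta \in \mathbb{R}^p} \tfrac{1}{2}\,\|y - X\beta\|_2^2 + \lambda\,\|\beta\|_1

w_{k,t+1} = \frac{w_{k,t}\,\exp(-\eta\,\ell_{k,t})}{\sum_{j=1}^{K} w_{j,t}\,\exp(-\eta\,\ell_{j,t})}, \qquad k = 1,\dots,K

Hedge concentrates weight on the candidate λ whose cumulative loss is smallest, which is what makes it a plausible stand-in for cross-validation at lower cost.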

RIS

TY - JOUR

T1 - Hedging parameter selection for basis pursuit

AU - Chretien, Stephane

AU - Gibberd, Alex

AU - Roy, Sandipan

PY - 2018/5/4

Y1 - 2018/5/4

N2 - In compressed sensing and high-dimensional estimation, signal recovery often relies on sparsity assumptions, and estimation is performed via ℓ1-penalized least-squares optimization, a.k.a. the LASSO. The ℓ1 penalization is usually controlled by a weight, also called the "relaxation parameter", denoted by λ. It is commonly thought that the practical efficiency of the LASSO for prediction crucially relies on accurate selection of λ. In this short note, we consider the hyper-parameter selection problem from a new perspective which combines the Hedge online learning method of Freund and Schapire with the stochastic Frank-Wolfe method for the LASSO. Using the Hedge algorithm, we show that our simple selection rule can achieve prediction results comparable to cross-validation at a potentially much lower computational cost.

AB - In compressed sensing and high-dimensional estimation, signal recovery often relies on sparsity assumptions, and estimation is performed via ℓ1-penalized least-squares optimization, a.k.a. the LASSO. The ℓ1 penalization is usually controlled by a weight, also called the "relaxation parameter", denoted by λ. It is commonly thought that the practical efficiency of the LASSO for prediction crucially relies on accurate selection of λ. In this short note, we consider the hyper-parameter selection problem from a new perspective which combines the Hedge online learning method of Freund and Schapire with the stochastic Frank-Wolfe method for the LASSO. Using the Hedge algorithm, we show that our simple selection rule can achieve prediction results comparable to cross-validation at a potentially much lower computational cost.

KW - stat.CO

M3 - Journal article

JO - arXiv

JF - arXiv

ER -
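
The abstract sketches a concrete recipe: run Frank-Wolfe on the constrained form of the LASSO for several candidate regularisation levels at once, and let Hedge reweight the candidates by their prediction losses. The following is a minimal sketch of that idea, assuming a grid of ℓ1-ball radii, held-out squared error as the Hedge losses, and illustrative function names throughout; it is not the authors' implementation.

import numpy as np

def fw_step(X, y, beta, t, step):
    # One Frank-Wolfe step for: min 0.5*||y - X@beta||^2  s.t.  ||beta||_1 <= t.
    grad = X.T @ (X @ beta - y)            # gradient of the least-squares loss
    i = int(np.argmax(np.abs(grad)))       # linear oracle over the l1-ball:
    s = np.zeros_like(beta)                # the minimiser is a signed vertex t*e_i
    s[i] = -t * np.sign(grad[i])
    return (1.0 - step) * beta + step * s  # convex-combination update

def hedge_fw_lasso(X, y, X_val, y_val, radii, n_rounds=200, eta=0.1):
    # One Frank-Wolfe chain per candidate radius; Hedge reweights the
    # candidates each round according to their held-out prediction loss.
    K = len(radii)
    betas = [np.zeros(X.shape[1]) for _ in range(K)]
    w = np.ones(K) / K                     # uniform Hedge weights at the start
    for r in range(1, n_rounds + 1):
        step = 2.0 / (r + 2.0)             # classical Frank-Wolfe step size
        losses = np.empty(K)
        for k, t in enumerate(radii):
            betas[k] = fw_step(X, y, betas[k], t, step)
            resid = y_val - X_val @ betas[k]
            losses[k] = resid @ resid / len(y_val)
        w *= np.exp(-eta * losses / max(losses.max(), 1e-12))  # Hedge update
        w /= w.sum()                       # renormalise the weights
    return radii[int(np.argmax(w))], w     # favoured radius and final weights

On synthetic data (random design, sparse ground truth, an 80/20 train/validation split), a call such as hedge_fw_lasso(X_tr, y_tr, X_va, y_va, radii=[0.5, 1, 2, 4, 8]) selects a constraint level in a single sweep of Frank-Wolfe rounds, whereas K-fold cross-validation would solve one LASSO path per fold.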