Ivanov-Regularised Least-Squares Estimators over Large RKHSs and Their Interpolation Spaces

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Ivanov-Regularised Least-Squares Estimators over Large RKHSs and Their Interpolation Spaces. / Page, Stephen; Grunewalder, Steffen.
In: Journal of Machine Learning Research, Vol. 20, 01.09.2019, p. 1-49.

Bibtex

@article{c90f71dd75e24a20800ddaeb4b2510f8,
title = "Ivanov-Regularised Least-Squares Estimators over Large RKHSs and Their Interpolation Spaces",
abstract = "We study kernel least-squares estimation under a norm constraint. This form of regularisation is known as Ivanov regularisation, and it provides better control of the norm of the estimator than the well-established Tikhonov regularisation. Ivanov regularisation can be studied under minimal assumptions. In particular, we assume only that the RKHS is separable with a bounded and measurable kernel. We provide rates of convergence for the expected squared L2 error of our estimator under the weak assumption that the variance of the response variables is bounded and the unknown regression function lies in an interpolation space between L2 and the RKHS. We then obtain faster rates of convergence when the regression function is bounded, by clipping the estimator. In fact, we attain the optimal rate of convergence. Furthermore, we provide a high-probability bound under the stronger assumption that the response variables have subgaussian errors and that the regression function lies in an interpolation space between L∞ and the RKHS. Finally, we derive adaptive results for the settings in which the regression function is bounded.",
author = "Stephen Page and Steffen Grunewalder",
year = "2019",
month = sep,
day = "1",
language = "English",
volume = "20",
pages = "1--49",
journal = "Journal of Machine Learning Research",
issn = "1532-4435",
publisher = "Microtome Publishing",

}

RIS

TY - JOUR

T1 - Ivanov-Regularised Least-Squares Estimators over Large RKHSs and Their Interpolation Spaces

AU - Page, Stephen

AU - Grunewalder, Steffen

PY - 2019/9/1

Y1 - 2019/9/1

N2 - We study kernel least-squares estimation under a norm constraint. This form of regularisation is known as Ivanov regularisation, and it provides better control of the norm of the estimator than the well-established Tikhonov regularisation. Ivanov regularisation can be studied under minimal assumptions. In particular, we assume only that the RKHS is separable with a bounded and measurable kernel. We provide rates of convergence for the expected squared L2 error of our estimator under the weak assumption that the variance of the response variables is bounded and the unknown regression function lies in an interpolation space between L2 and the RKHS. We then obtain faster rates of convergence when the regression function is bounded, by clipping the estimator. In fact, we attain the optimal rate of convergence. Furthermore, we provide a high-probability bound under the stronger assumption that the response variables have subgaussian errors and that the regression function lies in an interpolation space between L∞ and the RKHS. Finally, we derive adaptive results for the settings in which the regression function is bounded.

AB - We study kernel least-squares estimation under a norm constraint. This form of regularisation is known as Ivanov regularisation, and it provides better control of the norm of the estimator than the well-established Tikhonov regularisation. Ivanov regularisation can be studied under minimal assumptions. In particular, we assume only that the RKHS is separable with a bounded and measurable kernel. We provide rates of convergence for the expected squared L2 error of our estimator under the weak assumption that the variance of the response variables is bounded and the unknown regression function lies in an interpolation space between L2 and the RKHS. We then obtain faster rates of convergence when the regression function is bounded, by clipping the estimator. In fact, we attain the optimal rate of convergence. Furthermore, we provide a high-probability bound under the stronger assumption that the response variables have subgaussian errors and that the regression function lies in an interpolation space between L∞ and the RKHS. Finally, we derive adaptive results for the settings in which the regression function is bounded.

M3 - Journal article

VL - 20

SP - 1

EP - 49

JO - Journal of Machine Learning Research

JF - Journal of Machine Learning Research

SN - 1532-4435

ER -
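
Illustrative sketch

For readers who want a concrete picture of the estimator described in the abstract, the following is a minimal Python sketch of norm-constrained (Ivanov-regularised) kernel least squares together with the clipping step. It is not the authors' code: the Gaussian kernel, the bisection-based solver that exploits the standard duality with Tikhonov regularisation, and all parameter names (gamma, bandwidth, clip) are illustrative assumptions.

import numpy as np

def gaussian_kernel(X, Z, bandwidth=1.0):
    """Gaussian RBF kernel matrix between rows of X and rows of Z."""
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * bandwidth ** 2))

def ivanov_krr(X, y, gamma, bandwidth=1.0, tol=1e-8, max_iter=100):
    """
    Norm-constrained kernel least squares (Ivanov regularisation):
        minimise  sum_i (f(x_i) - y_i)^2   subject to  ||f||_H <= gamma.
    By the representer theorem f = sum_j a_j k(., x_j), so the problem becomes
        min_a ||K a - y||^2   subject to  a' K a <= gamma^2.
    When the constraint is active, the solution coincides with a Tikhonov
    solution a = (K + lam I)^{-1} y for the lam at which a' K a = gamma^2;
    we locate that lam by bisection.
    """
    n = X.shape[0]
    K = gaussian_kernel(X, X, bandwidth)

    def coeffs(lam):
        return np.linalg.solve(K + lam * np.eye(n), y)

    def rkhs_norm_sq(a):
        return float(a @ K @ a)

    # Try the (numerically stabilised) unconstrained solution first.
    a = coeffs(1e-12)
    if rkhs_norm_sq(a) <= gamma ** 2:
        return a, K

    # Constraint is active: the RKHS norm decreases as lam grows,
    # so bisect on lam until the norm matches gamma (up to tol).
    lo, hi = 1e-12, 1.0
    while rkhs_norm_sq(coeffs(hi)) > gamma ** 2:
        hi *= 2.0
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if rkhs_norm_sq(coeffs(mid)) > gamma ** 2:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return coeffs(hi), K

def predict(a, X_train, X_test, bandwidth=1.0, clip=None):
    """Evaluate the estimator; optionally clip predictions to [-clip, clip]."""
    f = gaussian_kernel(X_test, X_train, bandwidth) @ a
    return np.clip(f, -clip, clip) if clip is not None else f

For example, with training inputs X (shape n x d) and responses y, a, K = ivanov_krr(X, y, gamma=5.0) fits the constrained estimator, and predict(a, X, X_test, clip=1.0) evaluates the clipped version; the clipping constant plays the role of the known bound on the regression function discussed in the abstract.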