
Electronic data

  • 2019pagephd

    Final published version, 924 KB, PDF document

    Available under license: CC BY-NC-ND

Text available via DOI: 10.17635/lancaster/thesis/689


Reproducing-Kernel Hilbert space regression with notes on the Wasserstein Distance

Research output: Thesis › Doctoral Thesis

Published

Standard

Reproducing-Kernel Hilbert space regression with notes on the Wasserstein Distance. / Page, Stephen.
Lancaster University, 2019. 262 p.

Research output: Thesis › Doctoral Thesis

Vancouver

Page S. Reproducing-Kernel Hilbert space regression with notes on the Wasserstein Distance. Lancaster University, 2019. 262 p. doi: 10.17635/lancaster/thesis/689

BibTeX

@phdthesis{82be8052012d4fafa1f30cbf252ea4a8,
title = "Reproducing-Kernel Hilbert space regression with notes on the Wasserstein Distance",
abstract = "We study kernel least-squares estimators for the regression problem subject to a norm constraint. We bound the squared L2 error of our estimators with respect to the covariate distribution. We also bound the worst-case squared L2 error of our estimators with respect to a Wasserstein ball of probability measures centred at the covariate distribution. This leads us to investigate the extreme points of Wasserstein balls.In Chapter 3, we provide bounds on our estimators both when the regression function is unbounded and when the regression function is bounded. When the regression function is bounded, we clip the estimators so that they are closer to the regression function. In this setting, we also use training and validation to adaptively select a size for our norm constraint based on the data.In Chapter 4, we study a different adaptive estimation procedure called the Goldenshluger--Lepski method. Unlike training and validation, this method uses all of the data to create estimators for a range of sizes of norm constraint before using pairwise comparisons to select a final estimator. We are able to adaptively select both a size for our norm constraint and a kernel.In Chapter 5, we examine the extreme points of Wasserstein balls. We show that the only extreme points which are not on the surface of the ball are the Dirac measures. This is followed by finding conditions under which points on the surface of the ball are extreme points or not extreme points.In Chapter 6, we provide bounds on the worst-case squared L2 error of our estimators with respect to a Wasserstein ball of probability measures centred at the covariate distribution. We prove bounds both when the regression function is unbounded and when the regression function is bounded. We also investigate the analysis and computation of alternative estimators.",
keywords = "adaptive estimation, covariate shift, extreme points, Goldenshluger–Lepski method, interpolation space, Ivanov regularisation, regression, RKHS, training and validation, Wasserstein distance",
author = "Stephen Page",
year = "2019",
doi = "10.17635/lancaster/thesis/689",
language = "English",
publisher = "Lancaster University",
school = "Lancaster University",

}
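The abstract above describes kernel least-squares estimation under an RKHS-norm constraint (Ivanov regularisation), with the estimator clipped when the regression function is known to be bounded. The Python sketch below illustrates one way such an estimator can be computed; it is not taken from the thesis. It assumes a Gaussian kernel and uses the fact that the norm-constrained minimiser coincides with a kernel ridge solution for some multiplier, found here by bisection on the RKHS norm. All function names and parameter choices are illustrative.

import numpy as np

def gaussian_kernel(X, Z, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and the rows of Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def ivanov_kls(K, y, radius, tol=1e-8, max_iter=100):
    """Kernel least squares subject to the RKHS-norm constraint ||f||_H <= radius.

    When the constraint binds, the minimiser is a ridge solution
    a(lam) = (K + lam*I)^{-1} y for some lam >= 0, so we bisect on lam
    until the RKHS norm sqrt(a^T K a) matches the radius.
    """
    n = K.shape[0]
    def solve(lam):
        return np.linalg.solve(K + lam * np.eye(n), y)
    def hnorm(a):
        return float(np.sqrt(max(a @ K @ a, 0.0)))

    a = solve(1e-12)                     # (near-)unconstrained fit
    if hnorm(a) <= radius:
        return a                         # constraint inactive
    lo, hi = 1e-12, 1.0
    while hnorm(solve(hi)) > radius:     # find an upper bracket for lam
        hi *= 2.0
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        a = solve(mid)
        if abs(hnorm(a) - radius) <= tol:
            break
        if hnorm(a) > radius:
            lo = mid
        else:
            hi = mid
    return a

def predict(a, X_train, X_new, bandwidth=1.0, clip_bound=None):
    """Evaluate the estimator; optionally clip to [-clip_bound, clip_bound],
    as in the bounded-regression-function setting."""
    f = gaussian_kernel(X_new, X_train, bandwidth) @ a
    return np.clip(f, -clip_bound, clip_bound) if clip_bound is not None else f

# Example: fit on noisy samples of a smooth function (illustrative values only).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(50)
K = gaussian_kernel(X, X, bandwidth=0.3)
a = ivanov_kls(K, y, radius=5.0)
f_hat = predict(a, X, X, bandwidth=0.3, clip_bound=1.0)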

RIS

TY - BOOK

T1 - Reproducing-Kernel Hilbert space regression with notes on the Wasserstein Distance

AU - Page, Stephen

PY - 2019

Y1 - 2019

N2 - We study kernel least-squares estimators for the regression problem subject to a norm constraint. We bound the squared L2 error of our estimators with respect to the covariate distribution. We also bound the worst-case squared L2 error of our estimators with respect to a Wasserstein ball of probability measures centred at the covariate distribution. This leads us to investigate the extreme points of Wasserstein balls. In Chapter 3, we provide bounds on our estimators both when the regression function is unbounded and when the regression function is bounded. When the regression function is bounded, we clip the estimators so that they are closer to the regression function. In this setting, we also use training and validation to adaptively select a size for our norm constraint based on the data. In Chapter 4, we study a different adaptive estimation procedure called the Goldenshluger–Lepski method. Unlike training and validation, this method uses all of the data to create estimators for a range of sizes of norm constraint before using pairwise comparisons to select a final estimator. We are able to adaptively select both a size for our norm constraint and a kernel. In Chapter 5, we examine the extreme points of Wasserstein balls. We show that the only extreme points which are not on the surface of the ball are the Dirac measures. This is followed by finding conditions under which points on the surface of the ball are extreme points or not extreme points. In Chapter 6, we provide bounds on the worst-case squared L2 error of our estimators with respect to a Wasserstein ball of probability measures centred at the covariate distribution. We prove bounds both when the regression function is unbounded and when the regression function is bounded. We also investigate the analysis and computation of alternative estimators.

AB - We study kernel least-squares estimators for the regression problem subject to a norm constraint. We bound the squared L2 error of our estimators with respect to the covariate distribution. We also bound the worst-case squared L2 error of our estimators with respect to a Wasserstein ball of probability measures centred at the covariate distribution. This leads us to investigate the extreme points of Wasserstein balls. In Chapter 3, we provide bounds on our estimators both when the regression function is unbounded and when the regression function is bounded. When the regression function is bounded, we clip the estimators so that they are closer to the regression function. In this setting, we also use training and validation to adaptively select a size for our norm constraint based on the data. In Chapter 4, we study a different adaptive estimation procedure called the Goldenshluger–Lepski method. Unlike training and validation, this method uses all of the data to create estimators for a range of sizes of norm constraint before using pairwise comparisons to select a final estimator. We are able to adaptively select both a size for our norm constraint and a kernel. In Chapter 5, we examine the extreme points of Wasserstein balls. We show that the only extreme points which are not on the surface of the ball are the Dirac measures. This is followed by finding conditions under which points on the surface of the ball are extreme points or not extreme points. In Chapter 6, we provide bounds on the worst-case squared L2 error of our estimators with respect to a Wasserstein ball of probability measures centred at the covariate distribution. We prove bounds both when the regression function is unbounded and when the regression function is bounded. We also investigate the analysis and computation of alternative estimators.

KW - adaptive estimation

KW - covariate shift

KW - extreme points

KW - Goldenshluger–Lepski method

KW - interpolation space

KW - Ivanov regularisation

KW - regression

KW - RKHS

KW - training and validation

KW - Wasserstein distance

U2 - 10.17635/lancaster/thesis/689

DO - 10.17635/lancaster/thesis/689

M3 - Doctoral Thesis

PB - Lancaster University

ER -
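Two further ingredients of the abstract lend themselves to short sketches: the training-and-validation rule of Chapter 3 for choosing the size of the norm constraint, and the Wasserstein distance that defines the balls of covariate distributions studied in Chapters 5 and 6. The Python below is a hedged illustration, not the thesis's own procedure: the grid-search interface, the function names, and the restriction of the Wasserstein computation to equal-length one-dimensional samples are assumptions made for the sketch.

import numpy as np

def select_radius(fit, radii, X_train, y_train, X_val, y_val):
    """Training-and-validation selection of the norm-constraint radius.

    Fit each candidate radius on the training half and keep the radius
    whose predictor has the smallest squared error on the validation half.
    `fit(X, y, radius)` is a stand-in for a norm-constrained kernel
    least-squares fit and should return a function X_new -> predictions.
    """
    best_radius, best_err = None, np.inf
    for r in radii:
        predictor = fit(X_train, y_train, r)
        err = float(np.mean((predictor(X_val) - y_val) ** 2))
        if err < best_err:
            best_radius, best_err = r, err
    return best_radius, best_err

def wasserstein_1d(x, y, p=1):
    """p-Wasserstein distance between the empirical measures of two
    equal-length 1-D samples: the optimal coupling matches sorted
    order statistics, so the distance is a mean of matched gaps."""
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    if x.shape != y.shape:
        raise ValueError("this sketch assumes samples of equal length")
    return float(np.mean(np.abs(x - y) ** p) ** (1.0 / p))

# Example: distance between two empirical covariate samples (illustrative only).
rng = np.random.default_rng(1)
print(wasserstein_1d(rng.normal(0.0, 1.0, 200), rng.normal(0.5, 1.0, 200)))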