
Sparse multiscale Gaussian process regression

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Sparse multiscale Gaussian process regression. / Walder, Christian; Kim, Kwang In; Schölkopf, Bernhard.
Proc. International Conference on Machine Learning (ICML) 2008. Tuebingen, Germany: Max Planck Institute for Biological Cybernetics, 2008. p. 1112-1119.

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Harvard

Walder, C, Kim, KI & Schölkopf, B 2008, Sparse multiscale Gaussian process regression. in Proc. International Conference on Machine Learning (ICML) 2008. Max Planck Institute for Biological Cybernetics, Tuebingen, Germany, pp. 1112-1119. <http://icml2008.cs.helsinki.fi/papers/599.pdf>

APA

Walder, C., Kim, K. I., & Schölkopf, B. (2008). Sparse multiscale Gaussian process regression. In Proc. International Conference on Machine Learning (ICML) 2008 (pp. 1112-1119). Max Planck Institute for Biological Cybernetics. http://icml2008.cs.helsinki.fi/papers/599.pdf

Vancouver

Walder C, Kim KI, Schölkopf B. Sparse multiscale Gaussian process regression. In Proc. International Conference on Machine Learning (ICML) 2008. Tuebingen, Germany: Max Planck Institute for Biological Cybernetics. 2008. p. 1112-1119

Author

Walder, Christian ; Kim, Kwang In ; Schölkopf, Bernhard. / Sparse multiscale Gaussian process regression. Proc. International Conference on Machine Learning (ICML) 2008. Tuebingen, Germany : Max Planck Institute for Biological Cybernetics, 2008. pp. 1112-1119

Bibtex

@inproceedings{f6a4fc2ff89c46e58133ebb87e3fa50a,
title = "Sparse multiscale Gaussian process regression",
abstract = "Most existing sparse Gaussian process (g.p.) models seek computational advantages by basing their computations on a set of m basis functions that are the covariance function of the g.p. with one of its two inputs fixed. Wegeneralise this for the case of Gaussian covariance function, by basing our computations on m Gaussian basis functions with arbitrary diagonal covariance matrices (or length scales). For a fixed number of basis functions and any given criteria, this additional flexibility permits approximations no worse and typicallybetter than was previously possible. We perform gradient based optimisation ofthe marginal likelihood, which costs O(m2n) time where n is the number of data points, and compare the method to various other sparse g.p. methods. Although we focus on g.p. regression, the central idea is applicableto all kernel based algorithms, and we also provide some results for the support vector machine (s.v.m.) and kernel ridge regression (k.r.r.). Our approach outperforms the other methods, particularly for the case of very few basis functions, i.e. a very high sparsity ratio.",
author = "Christian Walder and Kim, {Kwang In} and Bernhard Sch{\"o}lkopf",
year = "2008",
language = "English",
isbn = "9781605582054",
pages = "1112--1119",
booktitle = "Proc. International Conference on Machine Learning (ICML) 2008",
publisher = "Max Planck Institute for Biological Cybernetics",

}

RIS

TY - GEN

T1 - Sparse multiscale Gaussian process regression

AU - Walder, Christian

AU - Kim, Kwang In

AU - Schölkopf, Bernhard

PY - 2008

Y1 - 2008

N2 - Most existing sparse Gaussian process (g.p.) models seek computational advantages by basing their computations on a set of m basis functions that are the covariance function of the g.p. with one of its two inputs fixed. We generalise this for the case of the Gaussian covariance function, by basing our computations on m Gaussian basis functions with arbitrary diagonal covariance matrices (or length scales). For a fixed number of basis functions and any given criterion, this additional flexibility permits approximations no worse and typically better than was previously possible. We perform gradient-based optimisation of the marginal likelihood, which costs O(m^2 n) time where n is the number of data points, and compare the method to various other sparse g.p. methods. Although we focus on g.p. regression, the central idea is applicable to all kernel-based algorithms, and we also provide some results for the support vector machine (s.v.m.) and kernel ridge regression (k.r.r.). Our approach outperforms the other methods, particularly for the case of very few basis functions, i.e. a very high sparsity ratio.

AB - Most existing sparse Gaussian process (g.p.) models seek computational advantages by basing their computations on a set of m basis functions that are the covariance function of the g.p. with one of its two inputs fixed. We generalise this for the case of the Gaussian covariance function, by basing our computations on m Gaussian basis functions with arbitrary diagonal covariance matrices (or length scales). For a fixed number of basis functions and any given criterion, this additional flexibility permits approximations no worse and typically better than was previously possible. We perform gradient-based optimisation of the marginal likelihood, which costs O(m^2 n) time where n is the number of data points, and compare the method to various other sparse g.p. methods. Although we focus on g.p. regression, the central idea is applicable to all kernel-based algorithms, and we also provide some results for the support vector machine (s.v.m.) and kernel ridge regression (k.r.r.). Our approach outperforms the other methods, particularly for the case of very few basis functions, i.e. a very high sparsity ratio.

M3 - Conference contribution/Paper

SN - 9781605582054

SP - 1112

EP - 1119

BT - Proc. International Conference on Machine Learning (ICML) 2008

PB - Max Planck Institute for Biological Cybernetics

CY - Tuebingen, Germany

ER -
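
Illustrative sketch

The abstract describes a sparse approximation built from m Gaussian basis functions, each with its own diagonal covariance matrix (per-dimension length scales), with computations costing O(m^2 n) time. The Python sketch below is not the authors' implementation: the function names and data are hypothetical, and a ridge-regularised least-squares fit stands in for the gradient-based marginal-likelihood optimisation described in the paper. It only illustrates what a multiscale Gaussian basis expansion looks like and where the O(m^2 n) cost arises.

import numpy as np

def multiscale_features(X, centres, length_scales):
    """Evaluate m Gaussian basis functions, each with its own per-dimension
    length scales, at the rows of X.

    X             : (n, d) inputs
    centres       : (m, d) basis-function centres
    length_scales : (m, d) per-basis, per-dimension length scales
    Returns Phi   : (n, m) design matrix with
                    Phi[i, j] = exp(-0.5 * sum_k ((X[i,k] - centres[j,k]) / length_scales[j,k])**2)
    """
    diff = X[:, None, :] - centres[None, :, :]        # (n, m, d)
    sq = (diff / length_scales[None, :, :]) ** 2      # scaled squared distances
    return np.exp(-0.5 * sq.sum(axis=-1))             # (n, m)

def fit_weights(X, y, centres, length_scales, noise_var=1e-2):
    """Ridge-regularised least-squares weights for the basis expansion.
    Forming Phi^T Phi costs O(m^2 n), matching the cost quoted in the
    abstract (here for a single fit, not marginal-likelihood gradients)."""
    Phi = multiscale_features(X, centres, length_scales)   # (n, m)
    A = Phi.T @ Phi + noise_var * np.eye(Phi.shape[1])     # (m, m)
    b = Phi.T @ y
    return np.linalg.solve(A, b)

def predict(X_new, centres, length_scales, weights):
    return multiscale_features(X_new, centres, length_scales) @ weights

if __name__ == "__main__":
    # Hypothetical toy data for illustration only.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

    m = 10
    centres = np.linspace(-3, 3, m)[:, None]     # (m, 1) basis centres
    length_scales = np.full((m, 1), 0.8)         # per-basis length scales (free parameters)
    w = fit_weights(X, y, centres, length_scales)
    print(predict(np.array([[0.5]]), centres, length_scales, w))

In the paper, the centres and the per-basis length scales are treated as free parameters and tuned by gradient-based optimisation of the marginal likelihood; the fixed values above are placeholders for illustration.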