
Electronic data

  • sparseKL_AISTATS

    Accepted author manuscript, 261 KB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

On sparse variational methods and the Kullback-Leibler divergence between stochastic processes

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

On sparse variational methods and the Kullback-Leibler divergence between stochastic processes. / Matthews, Alexander G. de G.; Hensman, James; Turner, Richard et al.
In: Journal of Machine Learning Research, Vol. 51, 2016, p. 231-239.

Harvard

Matthews, AGDG, Hensman, J, Turner, R & Ghahramani, Z 2016, 'On sparse variational methods and the Kullback-Leibler divergence between stochastic processes', Journal of Machine Learning Research, vol. 51, pp. 231-239. <http://www.jmlr.org/proceedings/papers/v51/matthews16.pdf>

APA

Matthews, A. G. D. G., Hensman, J., Turner, R., & Ghahramani, Z. (2016). On sparse variational methods and the Kullback-Leibler divergence between stochastic processes. Journal of Machine Learning Research, 51, 231-239. http://www.jmlr.org/proceedings/papers/v51/matthews16.pdf

Vancouver

Matthews AGDG, Hensman J, Turner R, Ghahramani Z. On sparse variational methods and the Kullback-Leibler divergence between stochastic processes. Journal of Machine Learning Research. 2016;51:231-239.

Author

Matthews, Alexander G. de G. ; Hensman, James ; Turner, Richard et al. / On sparse variational methods and the Kullback-Leibler divergence between stochastic processes. In: Journal of Machine Learning Research. 2016 ; Vol. 51. pp. 231-239.

Bibtex

@article{ea8d2d8ab9da4769a470d04d704bb865,
title = "On sparse variational methods and the Kullback-Leibler divergence between stochastic processes",
abstract = "The variational framework for learning inducing variables (Titsias, 2009a) has had a large impact on the Gaussian process literature.The framework may be interpreted as minimizing a rigorously defined Kullback-Leibler divergence between the approximating and posterior processes. To our knowledge this connection has thus far gone unremarked in the literature. In this paper we give a substantial generalization of the literature on this topic. We give a new proof of the result for infinite index sets which allowsinducing points that are not data points and likelihoods that depend on all function values.We then discuss augmented index sets and show that, contrary to previous works, marginal consistency of augmentation is not enough to guarantee consistency of variational inference with the original model. We then characterize an extra condition where such a guarantee is obtainable. Finally we show how our framework sheds light on interdomain sparse approximations and sparseapproximations for Cox processes.",
author = "Matthews, {Alexander G. de G.} and James Hensman and Richard Turner and Zoubin Ghahramani",
year = "2016",
language = "English",
volume = "51",
pages = "231--239",
journal = "Journal of Machine Learning Research",
issn = "1532-4435",
publisher = "Microtome Publishing",

}

RIS

TY - JOUR

T1 - On sparse variational methods and the Kullback-Leibler divergence between stochastic processes

AU - Matthews, Alexander G. de G.

AU - Hensman, James

AU - Turner, Richard

AU - Ghahramani, Zoubin

PY - 2016

Y1 - 2016

N2 - The variational framework for learning inducing variables (Titsias, 2009a) has had a large impact on the Gaussian process literature. The framework may be interpreted as minimizing a rigorously defined Kullback-Leibler divergence between the approximating and posterior processes. To our knowledge this connection has thus far gone unremarked in the literature. In this paper we give a substantial generalization of the literature on this topic. We give a new proof of the result for infinite index sets which allows inducing points that are not data points and likelihoods that depend on all function values. We then discuss augmented index sets and show that, contrary to previous works, marginal consistency of augmentation is not enough to guarantee consistency of variational inference with the original model. We then characterize an extra condition where such a guarantee is obtainable. Finally we show how our framework sheds light on interdomain sparse approximations and sparse approximations for Cox processes.

AB - The variational framework for learning inducing variables (Titsias, 2009a) has had a large impact on the Gaussian process literature. The framework may be interpreted as minimizing a rigorously defined Kullback-Leibler divergence between the approximating and posterior processes. To our knowledge this connection has thus far gone unremarked in the literature. In this paper we give a substantial generalization of the literature on this topic. We give a new proof of the result for infinite index sets which allows inducing points that are not data points and likelihoods that depend on all function values. We then discuss augmented index sets and show that, contrary to previous works, marginal consistency of augmentation is not enough to guarantee consistency of variational inference with the original model. We then characterize an extra condition where such a guarantee is obtainable. Finally we show how our framework sheds light on interdomain sparse approximations and sparse approximations for Cox processes.

M3 - Journal article

VL - 51

SP - 231

EP - 239

JO - Journal of Machine Learning Research

JF - Journal of Machine Learning Research

SN - 1532-4435

ER -
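
Note

As an informal aid to reading the abstract above, the following LaTeX fragment sketches the sparse variational objective it refers to, using standard sparse Gaussian process notation not defined on this page (f the latent function, u its values at inducing inputs Z, y the observations). It is a minimal sketch of the standard bound, not reproduced from the paper itself.

% Sparse variational bound (Titsias-style, standard notation; a sketch only).
\[
  \mathcal{L}(q) \;=\; \mathbb{E}_{q(f)}\bigl[\log p(y \mid f)\bigr]
                 \;-\; \mathrm{KL}\bigl[\,q(u)\,\|\,p(u)\,\bigr],
  \qquad
  q(f) \;=\; \int p(f \mid u)\, q(u)\, \mathrm{d}u .
\]
% The interpretation highlighted in the abstract: the slack in this bound is a
% rigorously defined Kullback-Leibler divergence between stochastic processes,
\[
  \log p(y) \;-\; \mathcal{L}(q) \;=\; \mathrm{KL}\bigl[\,Q \,\|\, \hat{P}\,\bigr],
\]
% where Q denotes the approximating process and \hat{P} the posterior process.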