Research output: Contribution to Journal/Magazine › Journal article › peer-review
TY - JOUR
T1 - On sparse variational methods and the Kullback-Leibler divergence between stochastic processes
AU - Matthews, Alexander G. de G.
AU - Hensman, James
AU - Turner, Richard
AU - Ghahramani, Zoubin
PY - 2016
Y1 - 2016
N2 - The variational framework for learning inducing variables (Titsias, 2009a) has had a large impact on the Gaussian process literature. The framework may be interpreted as minimizing a rigorously defined Kullback-Leibler divergence between the approximating and posterior processes. To our knowledge this connection has thus far gone unremarked in the literature. In this paper we give a substantial generalization of the literature on this topic. We give a new proof of the result for infinite index sets which allows inducing points that are not data points and likelihoods that depend on all function values. We then discuss augmented index sets and show that, contrary to previous works, marginal consistency of augmentation is not enough to guarantee consistency of variational inference with the original model. We then characterize an extra condition where such a guarantee is obtainable. Finally we show how our framework sheds light on interdomain sparse approximations and sparse approximations for Cox processes.
AB - The variational framework for learning inducing variables (Titsias, 2009a) has had a large impact on the Gaussian process literature. The framework may be interpreted as minimizing a rigorously defined Kullback-Leibler divergence between the approximating and posterior processes. To our knowledge this connection has thus far gone unremarked in the literature. In this paper we give a substantial generalization of the literature on this topic. We give a new proof of the result for infinite index sets which allows inducing points that are not data points and likelihoods that depend on all function values. We then discuss augmented index sets and show that, contrary to previous works, marginal consistency of augmentation is not enough to guarantee consistency of variational inference with the original model. We then characterize an extra condition where such a guarantee is obtainable. Finally we show how our framework sheds light on interdomain sparse approximations and sparse approximations for Cox processes.
M3 - Journal article
VL - 51
SP - 231
EP - 239
JO - Journal of Machine Learning Research
JF - Journal of Machine Learning Research
SN - 1532-4435
ER -
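
A minimal sketch of the bound the abstract refers to, in standard sparse-GP notation (the symbols q, u, Z, and f are conventional choices, not taken from this record): with inducing variables u = f(Z) and an approximate posterior of the form q(f, u) = p(f | u) q(u), the Titsias-style variational objective is the evidence lower bound

\log p(y) \;\geq\; \mathcal{L} \;=\; \mathbb{E}_{q(f)}\!\left[\log p(y \mid f)\right] \;-\; \mathrm{KL}\!\left[q(u)\,\|\,p(u)\right],

and the slack in this bound,

\log p(y) - \mathcal{L} \;=\; \mathrm{KL}\!\left[q(f, u)\,\|\,p(f, u \mid y)\right],

is the Kullback-Leibler divergence that the paper shows is rigorously defined between the approximating and posterior stochastic processes.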