
Smooth Operators

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Smooth Operators. / Grunewalder, S.; Gretton, A.; Shawe-Taylor, J.
Proceedings of the 30th International Conference on Machine Learning. Vol. 28 PMLR, 2013. p. 1184-1192.


Harvard

Grunewalder, S, Gretton, A & Shawe-Taylor, J 2013, Smooth Operators. in Proceedings of the 30th International Conference on Machine Learning. vol. 28, PMLR, pp. 1184-1192.

APA

Grunewalder, S., Gretton, A., & Shawe-Taylor, J. (2013). Smooth Operators. In Proceedings of the 30th International Conference on Machine Learning (Vol. 28, pp. 1184-1192). PMLR.

Vancouver

Grunewalder S, Gretton A, Shawe-Taylor J. Smooth Operators. In Proceedings of the 30th International Conference on Machine Learning. Vol. 28. PMLR. 2013. p. 1184-1192

Author

Grunewalder, S. ; Gretton, A. ; Shawe-Taylor, J. / Smooth Operators. Proceedings of the 30th International Conference on Machine Learning. Vol. 28 PMLR, 2013. pp. 1184-1192

Bibtex

@inproceedings{1deace9f83914f5da2357c94ae86f8f0,
title = "Smooth Operators",
abstract = "We develop a generic approach to form smooth versions of basic mathematical operations like multiplication, composition, change of measure, and conditional expectation, among others. Operations which result in functions outside the reproducing kernel Hilbert space (such as the product of two RKHS functions) are approximated via a natural cost function, such that the solution is guaranteed to be in the targeted RKHS. This approximation problem is reduced to a regression problem using an adjoint trick, and solved in a vector-valued RKHS, consisting of continuous, linear, smooth operators which map from an input, real-valued RKHS to the desired target RKHS. Important constraints, such as an almost everywhere positive density, can be enforced or approximated naturally in this framework, using convex constraints on the operators. Finally, smooth operators can be composed to accomplish more complex machine learning tasks, such as the sum rule and kernelized approximate Bayesian inference, where state-of-the-art convergence rates are obtained.",
author = "S. Grunewalder and A. Gretton and J. Shawe-Taylor",
year = "2013",
month = jun,
day = "16",
language = "English",
volume = "28",
pages = "1184--1192",
booktitle = "Proceedings of the 30th International Conference on Machine Learning",
publisher = "PMLR",

}

RIS

TY - GEN
T1 - Smooth Operators
AU - Grunewalder, S.
AU - Gretton, A.
AU - Shawe-Taylor, J.
PY - 2013/6/16
Y1 - 2013/6/16
N2 - We develop a generic approach to form smooth versions of basic mathematical operations like multiplication, composition, change of measure, and conditional expectation, among others. Operations which result in functions outside the reproducing kernel Hilbert space (such as the product of two RKHS functions) are approximated via a natural cost function, such that the solution is guaranteed to be in the targeted RKHS. This approximation problem is reduced to a regression problem using an adjoint trick, and solved in a vector-valued RKHS, consisting of continuous, linear, smooth operators which map from an input, real-valued RKHS to the desired target RKHS. Important constraints, such as an almost everywhere positive density, can be enforced or approximated naturally in this framework, using convex constraints on the operators. Finally, smooth operators can be composed to accomplish more complex machine learning tasks, such as the sum rule and kernelized approximate Bayesian inference, where state-of-the-art convergence rates are obtained.
AB - We develop a generic approach to form smooth versions of basic mathematical operations like multiplication, composition, change of measure, and conditional expectation, among others. Operations which result in functions outside the reproducing kernel Hilbert space (such as the product of two RKHS functions) are approximated via a natural cost function, such that the solution is guaranteed to be in the targeted RKHS. This approximation problem is reduced to a regression problem using an adjoint trick, and solved in a vector-valued RKHS, consisting of continuous, linear, smooth operators which map from an input, real-valued RKHS to the desired target RKHS. Important constraints, such as an almost everywhere positive density, can be enforced or approximated naturally in this framework, using convex constraints on the operators. Finally, smooth operators can be composed to accomplish more complex machine learning tasks, such as the sum rule and kernelized approximate Bayesian inference, where state-of-the-art convergence rates are obtained.
M3 - Conference contribution/Paper
VL - 28
SP - 1184
EP - 1192
BT - Proceedings of the 30th International Conference on Machine Learning
PB - PMLR
ER -
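
Illustrative sketch

As context for the abstract above: the paper's core idea is to reduce the approximation of an operation to a regression problem solved through a regularized kernel system. The NumPy snippet below is not the authors' implementation; it only sketches the simplest special case, a conditional expectation estimated by kernel ridge regression over a sample. The Gaussian kernel, the bandwidth sigma, the regularizer lam, and the toy data are all assumptions made purely for demonstration.

import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian kernel matrix between sample arrays A (n, d) and B (m, d).
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-sq / (2.0 * sigma**2))

def fit_operator_weights(X, lam=1e-3, sigma=1.0):
    # Regularized kernel system: W = (K_X + n*lam*I)^{-1}. Applying W to the
    # kernel column of a new point yields coefficients over the training sample.
    n = X.shape[0]
    K = rbf_kernel(X, X, sigma)
    return np.linalg.solve(K + n * lam * np.eye(n), np.eye(n))

def apply_operator(W, X_train, Y_train, g, x_new, sigma=1.0):
    # Estimate E[g(Y) | X = x_new]: pair the data-dependent coefficients with
    # the evaluations g(y_i) of a function g from the target space.
    alpha = W @ rbf_kernel(X_train, x_new[None, :], sigma)   # shape (n, 1)
    return (g(Y_train).T @ alpha).item()

# Toy usage (assumed data): Y = sin(X) + noise; estimate E[Y^2 | X = 0.5].
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
Y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)
W = fit_operator_weights(X)
print(apply_operator(W, X, Y, lambda y: y**2, np.array([0.5])))

In the paper, this regularized-regression template is generalized: other operations (products of RKHS functions, change of measure, composition) are handled via an adjoint trick in a vector-valued RKHS of operators, with convex constraints where needed; the sketch above covers only the conditional-expectation instance.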