Electronic data

  • 19icml_kalman

    Accepted author manuscript, 522 KB, PDF document

    Available under license: Unspecified

  • 19icml_kalman_supp

Rights statement: Please note that this is the Appendix to the article.

    Other version, 416 KB, PDF document

Recurrent Kalman networks: factorized inference in high-dimensional deep feature spaces

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Recurrent Kalman networks: factorized inference in high-dimensional deep feature spaces. / Becker, P.; Pandya, H.; Gebhardt, G. et al.
Proceedings of Machine Learning Research (PMLR). Vol. 97, 2019. p. 544-552 (Proceedings of Machine Learning Research (PMLR); Vol. 97).

Harvard

Becker, P, Pandya, H, Gebhardt, G, Zhao, C, Taylor, CJ & Neumann, G 2019, Recurrent Kalman networks: factorized inference in high-dimensional deep feature spaces. in Proceedings of Machine Learning Research (PMLR). vol. 97, Proceedings of Machine Learning Research (PMLR), vol. 97, pp. 544-552. <http://proceedings.mlr.press/v97/becker19a.html>

APA

Becker, P., Pandya, H., Gebhardt, G., Zhao, C., Taylor, C. J., & Neumann, G. (2019). Recurrent Kalman networks: factorized inference in high-dimensional deep feature spaces. In Proceedings of Machine Learning Research (PMLR) (Vol. 97, pp. 544-552). (Proceedings of Machine Learning Research (PMLR); Vol. 97). http://proceedings.mlr.press/v97/becker19a.html

Vancouver

Becker P, Pandya H, Gebhardt G, Zhao C, Taylor CJ, Neumann G. Recurrent Kalman networks: factorized inference in high-dimensional deep feature spaces. In Proceedings of Machine Learning Research (PMLR). Vol. 97. 2019. p. 544-552. (Proceedings of Machine Learning Research (PMLR)).

Author

Becker, P.; Pandya, H.; Gebhardt, G. et al. / Recurrent Kalman networks: factorized inference in high-dimensional deep feature spaces. Proceedings of Machine Learning Research (PMLR). Vol. 97, 2019. pp. 544-552 (Proceedings of Machine Learning Research (PMLR)).

Bibtex

@inproceedings{4e83b1b5dc45410b92a4a62fad255c9e,
title = "Recurrent Kalman networks: factorized inference in high-dimensional deep feature spaces",
abstract = "In order to integrate uncertainty estimates into deep time-series modelling, Kalman Filters (KFs) have been integrated with deep learning models; however, such approaches typically rely on approximate inference techniques such as variational inference, which makes learning more complex and often less scalable due to approximation errors. We propose a new deep approach to Kalman filtering which can be learned directly in an end-to-end manner using backpropagation without additional approximations. Our approach uses a high-dimensional factorized latent state representation for which the Kalman updates simplify to scalar operations, and thus avoids hard-to-backpropagate, computationally heavy and potentially unstable matrix inversions. Moreover, we use locally linear dynamic models to efficiently propagate the latent state to the next time step. The resulting network architecture, which we call the Recurrent Kalman Network (RKN), can be used for any time-series data, similar to an LSTM (Hochreiter & Schmidhuber, 1997), but uses an explicit representation of uncertainty. As shown by our experiments, the RKN obtains much more accurate uncertainty estimates than an LSTM or Gated Recurrent Units (GRUs) (Cho et al., 2014), while also showing slightly improved prediction performance, and outperforms various recent generative models on an image imputation task.",
keywords = "Kalman Filter, state estimation, robot dynamics",
author = "P. Becker and H. Pandya and G. Gebhardt and C. Zhao and Taylor, {C. James} and G. Neumann",
year = "2019",
month = jun,
day = "13",
language = "English",
volume = "97",
series = "Proceedings of Machine Learning Research (PMLR)",
pages = "544--552",
booktitle = "Proceedings of Machine Learning Research (PMLR)",
url = "http://proceedings.mlr.press/v97/becker19a.html",
}
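
An illustrative aside on the abstract above: its key point is that a factorized, i.e. diagonal-covariance, latent state makes the Kalman gain a vector of scalars, so the observation update involves no matrix inversion. The Python sketch below shows this in the simplest setting, assuming each latent dimension is observed directly with its own noise variance; it is a toy reading of the abstract, not the authors' implementation.

import numpy as np

def factorized_kalman_update(mu, var, obs, obs_var):
    # Per-dimension Kalman gain: a vector of scalars (no matrix inversion).
    gain = var / (var + obs_var)
    # Posterior mean and variance via purely elementwise operations.
    mu_post = mu + gain * (obs - mu)
    var_post = (1.0 - gain) * var
    return mu_post, var_post

# Toy usage with a 3-dimensional factorized latent state.
mu = np.zeros(3)                  # prior mean
var = np.ones(3)                  # prior variance (diagonal covariance)
obs = np.array([0.5, -0.2, 1.0])  # observation of the latent state
obs_var = np.full(3, 0.25)        # per-dimension observation noise
print(factorized_kalman_update(mu, var, obs, obs_var))

Because every operation is elementwise, the update stays cheap and numerically stable to backpropagate through even for a high-dimensional latent state, which is the scalability argument the abstract makes.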

RIS

TY - GEN

T1 - Recurrent Kalman networks

T2 - factorized inference in high-dimensional deep feature spaces

AU - Becker, P.

AU - Pandya, H.

AU - Gebhardt, G.

AU - Zhao, C.

AU - Taylor, C. James

AU - Neumann, G.

PY - 2019/6/13

Y1 - 2019/6/13

N2 - In order to integrate uncertainty estimates into deep time-series modelling, Kalman Filters (KFs) have been integrated with deep learning models; however, such approaches typically rely on approximate inference techniques such as variational inference, which makes learning more complex and often less scalable due to approximation errors. We propose a new deep approach to Kalman filtering which can be learned directly in an end-to-end manner using backpropagation without additional approximations. Our approach uses a high-dimensional factorized latent state representation for which the Kalman updates simplify to scalar operations, and thus avoids hard-to-backpropagate, computationally heavy and potentially unstable matrix inversions. Moreover, we use locally linear dynamic models to efficiently propagate the latent state to the next time step. The resulting network architecture, which we call the Recurrent Kalman Network (RKN), can be used for any time-series data, similar to an LSTM (Hochreiter & Schmidhuber, 1997), but uses an explicit representation of uncertainty. As shown by our experiments, the RKN obtains much more accurate uncertainty estimates than an LSTM or Gated Recurrent Units (GRUs) (Cho et al., 2014), while also showing slightly improved prediction performance, and outperforms various recent generative models on an image imputation task.

AB - In order to integrate uncertainty estimates into deep time-series modelling, Kalman Filters (KFs) have been integrated with deep learning models; however, such approaches typically rely on approximate inference techniques such as variational inference, which makes learning more complex and often less scalable due to approximation errors. We propose a new deep approach to Kalman filtering which can be learned directly in an end-to-end manner using backpropagation without additional approximations. Our approach uses a high-dimensional factorized latent state representation for which the Kalman updates simplify to scalar operations, and thus avoids hard-to-backpropagate, computationally heavy and potentially unstable matrix inversions. Moreover, we use locally linear dynamic models to efficiently propagate the latent state to the next time step. The resulting network architecture, which we call the Recurrent Kalman Network (RKN), can be used for any time-series data, similar to an LSTM (Hochreiter & Schmidhuber, 1997), but uses an explicit representation of uncertainty. As shown by our experiments, the RKN obtains much more accurate uncertainty estimates than an LSTM or Gated Recurrent Units (GRUs) (Cho et al., 2014), while also showing slightly improved prediction performance, and outperforms various recent generative models on an image imputation task.

KW - Kalman Filter

KW - state estimation

KW - robot dynamics

M3 - Conference contribution/Paper

VL - 97

T3 - Proceedings of Machine Learning Research (PMLR)

SP - 544

EP - 552

BT - Proceedings of Machine Learning Research (PMLR)

ER -
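
The abstract also mentions locally linear dynamic models for propagating the latent state between time steps. One standard realisation, and a plausible reading of that phrase, forms the transition matrix as a state-dependent convex combination of a few learned basis matrices. The sketch below follows that assumption; the names coeff_net and basis are illustrative, not taken from the paper.

import numpy as np

def locally_linear_predict(mu, basis, coeff_net):
    # State-dependent mixing coefficients over the K basis matrices.
    alpha = coeff_net(mu)                   # shape (K,), non-negative, sums to 1
    # Transition matrix as a convex combination of the learned bases.
    A = np.tensordot(alpha, basis, axes=1)  # shape (D, D)
    return A @ mu                           # propagated latent mean

# Toy setup: K = 2 basis matrices near the identity for a 3-d state.
rng = np.random.default_rng(0)
basis = np.eye(3) + 0.1 * rng.normal(size=(2, 3, 3))

def coeff_net(mu):
    # Placeholder for a small learned network: softmax over two logits.
    logits = np.array([mu.sum(), -mu.sum()])
    e = np.exp(logits - logits.max())
    return e / e.sum()

mu = np.array([0.5, -0.2, 1.0])
print(locally_linear_predict(mu, basis, coeff_net))

A complete filter would also propagate the per-dimension variances and add process noise; only the mean propagation is sketched here to keep the example short.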