
On convergence of the EM algorithm and the Gibbs sampler.

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

On convergence of the EM algorithm and the Gibbs sampler. / Sahu, Sujit K.; Roberts, Gareth O.
In: Statistics and Computing, Vol. 9, No. 1, 04.1999, p. 55-64.

Harvard

Sahu, SK & Roberts, GO 1999, 'On convergence of the EM algorithm and the Gibbs sampler.', Statistics and Computing, vol. 9, no. 1, pp. 55-64. https://doi.org/10.1023/A:1008814227332

APA

Sahu, S. K., & Roberts, G. O. (1999). On convergence of the EM algorithm and the Gibbs sampler. Statistics and Computing, 9(1), 55-64. https://doi.org/10.1023/A:1008814227332

Vancouver

Sahu SK, Roberts GO. On convergence of the EM algorithm and the Gibbs sampler. Statistics and Computing. 1999 Apr;9(1):55-64. doi: 10.1023/A:1008814227332

Author

Sahu, Sujit K. ; Roberts, Gareth O. / On convergence of the EM algorithm and the Gibbs sampler. In: Statistics and Computing. 1999 ; Vol. 9, No. 1. pp. 55-64.

BibTeX

@article{2e9e6ffa84274bbeb377d474481eab11,
title = "On convergence of the EM algorithm and the Gibbs sampler.",
abstract = "In this article we investigate the relationship between the EM algorithm and the Gibbs sampler. We show that the approximate rate of convergence of the Gibbs sampler by Gaussian approximation is equal to that of the corresponding EM-type algorithm. This helps in implementing either of the algorithms as improvement strategies for one algorithm can be directly transported to the other. In particular, by running the EM algorithm we know approximately how many iterations are needed for convergence of the Gibbs sampler. We also obtain a result that under certain conditions, the EM algorithm used for finding the maximum likelihood estimates can be slower to converge than the corresponding Gibbs sampler for Bayesian inference. We illustrate our results in a number of realistic examples all based on the generalized linear mixed models.",
keywords = "Gaussian distribution - Generalized linear mixed models - Markov chain Monte Carlo - Parameterization - Rate of convergence",
author = "Sahu, {Sujit K.} and Roberts, {Gareth O.}",
year = "1999",
month = apr,
doi = "10.1023/A:1008814227332",
language = "English",
volume = "9",
pages = "55--64",
journal = "Statistics and Computing",
issn = "0960-3174",
publisher = "Springer Netherlands",
number = "1",

}
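
The abstract's headline claim, that the (Gaussian-approximation) rate of convergence of the Gibbs sampler matches that of the corresponding EM-type algorithm, can be seen numerically on a toy model. The sketch below is not taken from the paper: it assumes a balanced one-way Gaussian random-effects model y_ij = beta + u_i + e_ij with known variance components s2u and s2e and a flat prior on beta, where both the EM error-contraction rate and the lag-1 autocorrelation of the Gibbs chain for beta equal w = n*s2u / (n*s2u + s2e). All variable names and parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 5            # groups, observations per group (illustrative)
s2u, s2e = 1.0, 2.0     # known variance components (assumed for the sketch)
w = n * s2u / (n * s2u + s2e)   # theoretical rate of convergence

# Simulate data from y_ij = beta + u_i + e_ij with true beta = 3.
u = rng.normal(0.0, np.sqrt(s2u), m)
y = 3.0 + u[:, None] + rng.normal(0.0, np.sqrt(s2e), (m, n))
ybar_i = y.mean(axis=1)         # group means
ybar = y.mean()                 # grand mean (the MLE of beta in this balanced case)

# EM for beta, treating u as missing data:
#   E-step: E[u_i | y, beta] = w * (ybar_i - beta)
#   M-step: beta <- mean_ij(y_ij - E[u_i]) = (1 - w) * ybar + w * beta
beta = 0.0
errs = [abs(beta - ybar)]
for _ in range(15):
    beta = (1 - w) * ybar + w * beta
    errs.append(abs(beta - ybar))
em_rate = errs[10] / errs[9]    # successive error ratio: the geometric EM rate

# Deterministic-scan Gibbs sampler for (u, beta) under a flat prior on beta.
v = 1.0 / (n / s2e + 1.0 / s2u)         # Var(u_i | y, beta)
beta_g, chain = 0.0, []
for _ in range(20000):
    u_g = rng.normal(w * (ybar_i - beta_g), np.sqrt(v))             # u | beta, y
    beta_g = rng.normal(ybar - u_g.mean(), np.sqrt(s2e / (m * n)))  # beta | u, y
    chain.append(beta_g)
chain = np.array(chain[2000:])          # drop burn-in
rho1 = np.corrcoef(chain[:-1], chain[1:])[0, 1]  # lag-1 autocorrelation of beta

print(f"theoretical rate w          : {w:.3f}")
print(f"EM error-ratio estimate     : {em_rate:.3f}")
print(f"Gibbs lag-1 autocorrelation : {rho1:.3f}")

All three printed numbers should agree closely (here w = 5/7, about 0.714, up to Monte Carlo error in the autocorrelation estimate), consistent with the abstract's point that running the EM algorithm tells you approximately how many iterations the Gibbs sampler needs to converge.
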

RIS

TY - JOUR

T1 - On convergence of the EM algorithm and the Gibbs sampler.

AU - Sahu, Sujit K.

AU - Roberts, Gareth O.

PY - 1999/4

Y1 - 1999/4

N2 - In this article we investigate the relationship between the EM algorithm and the Gibbs sampler. We show that the approximate rate of convergence of the Gibbs sampler, obtained by Gaussian approximation, is equal to that of the corresponding EM-type algorithm. This helps in implementing either algorithm, as improvement strategies for one can be directly transported to the other. In particular, by running the EM algorithm we know approximately how many iterations are needed for convergence of the Gibbs sampler. We also show that, under certain conditions, the EM algorithm used for finding maximum likelihood estimates can be slower to converge than the corresponding Gibbs sampler used for Bayesian inference. We illustrate our results with a number of realistic examples, all based on generalized linear mixed models.

AB - In this article we investigate the relationship between the EM algorithm and the Gibbs sampler. We show that the approximate rate of convergence of the Gibbs sampler, obtained by Gaussian approximation, is equal to that of the corresponding EM-type algorithm. This helps in implementing either algorithm, as improvement strategies for one can be directly transported to the other. In particular, by running the EM algorithm we know approximately how many iterations are needed for convergence of the Gibbs sampler. We also show that, under certain conditions, the EM algorithm used for finding maximum likelihood estimates can be slower to converge than the corresponding Gibbs sampler used for Bayesian inference. We illustrate our results with a number of realistic examples, all based on generalized linear mixed models.

KW - Gaussian distribution

KW - Generalized linear mixed models

KW - Markov chain Monte Carlo

KW - Parameterization

KW - Rate of convergence

U2 - 10.1023/A:1008814227332

DO - 10.1023/A:1008814227332

M3 - Journal article

VL - 9

SP - 55

EP - 64

JO - Statistics and Computing

JF - Statistics and Computing

SN - 0960-3174

IS - 1

ER -