

Proximity penalty priors for Bayesian mixture models

Research output: Working paper

Unpublished

Standard

Proximity penalty priors for Bayesian mixture models. / Sperrin, Matthew.
2011.


Bibtex

@techreport{ff84355b2d9444bdba0eb272963c7c75,
title = "Proximity penalty priors for Bayesian mixture models",
abstract = "When using mixture models it may be the case that the modeller has a priori beliefs or desires about what the components of the mixture should represent. For example, if a mixture of normal densities is to be fitted to some data, it may be desirable for components to focus on capturing differences in location rather than scale. We introduce a framework called proximity penalty priors (PPPs) that allows this preference to be made explicit in the prior information. The approach is scale-free and imposes minimal restrictions on the posterior; in particular no arbitrary thresholds need to be set. We show the theoretical validity of the approach, and demonstrate the effects of using PPPs on posterior distributions with simulated and real data.",
keywords = "stat.ME",
author = "Matthew Sperrin",
note = "14 pages, 6 figures",
year = "2011",
month = jul,
day = "27",
language = "Undefined/Unknown",
type = "WorkingPaper",
}

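The abstract's idea of a prior that discourages components from coinciding in location, in a scale-free way, can be made concrete with a toy sketch. The penalty form below is purely illustrative and is not the PPP construction from the paper: the function name and the specific pairwise term are assumptions chosen only to show the qualitative behaviour (the log-prior tends to minus infinity as two component means coincide, and to zero as they separate, with the separation measured relative to component scales so the penalty is invariant to rescaling the data).

```python
import numpy as np

def log_proximity_penalty(mu, sigma):
    """Toy pairwise log-prior penalty on mixture components.

    mu    : list of component means
    sigma : list of component standard deviations

    Illustrative stand-in only; NOT the PPP construction from the paper.
    """
    logp = 0.0
    k = len(mu)
    for i in range(k):
        for j in range(i + 1, k):
            # Standardised separation between component centres.
            # Scaling the data by c scales |mu_i - mu_j| and each sigma
            # by c, so d (and hence the penalty) is unchanged: scale-free.
            d = abs(mu[i] - mu[j]) / np.sqrt(sigma[i] * sigma[j])
            # log(1 - exp(-d)) -> -inf as d -> 0 (means coincide),
            # and -> 0 as d -> inf (well-separated components).
            logp += np.log1p(-np.exp(-d))
    return logp
```

In an MCMC or optimisation scheme this term would simply be added to the usual mixture log-posterior, so nearly-coincident location configurations are penalised without any hard threshold being imposed.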