Conjugate prior
In Bayesian probability theory, a class of prior probability distributions p(θ) is said to be conjugate to a class of likelihood functions p(x|θ) if the resulting posterior distributions p(θ|x) are in the same family as p(θ). For example, the Gaussian family is conjugate to itself (or self-conjugate): if the likelihood function is Gaussian, choosing a Gaussian prior ensures that the posterior distribution is also Gaussian. The concept, as well as the term "conjugate prior", was introduced by Howard Raiffa and Robert Schlaifer in their work on Bayesian decision theory.^{[1]} A similar concept had been discovered independently by George Alfred Barnard.^{[2]}
Consider the general problem of inferring a distribution for a parameter θ given some datum or data x. From Bayes' theorem, the posterior distribution is calculated from the prior p(θ) and the likelihood function as

p(θ|x) = p(x|θ) p(θ) / ∫ p(x|θ′) p(θ′) dθ′.
Let the likelihood function be considered fixed; the likelihood function is usually well-determined from a statement of the data-generating process. It is clear that different choices of the prior distribution p(θ) may make the integral more or less difficult to calculate, and the product p(x|θ) × p(θ) may take one algebraic form or another. For certain choices of the prior, the posterior has the same algebraic form as the prior (generally with different parameters). Such a choice is a conjugate prior.
A conjugate prior is an algebraic convenience, giving a closed-form expression for the posterior; otherwise a difficult numerical integration may be necessary.
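The convenience can be checked numerically. The following Python sketch (function names are illustrative, not from any particular library) evaluates a Bernoulli posterior under a Beta prior two ways: via the conjugate closed form, and via brute-force normalisation of likelihood × prior on a grid, i.e. the numerical integration that conjugacy avoids.

```python
import math

def beta_pdf(q, a, b):
    """Density of the Beta(a, b) distribution at q (0 < q < 1)."""
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_norm + (a - 1) * math.log(q) + (b - 1) * math.log(1 - q))

def posterior_by_integration(q, a, b, s, f, grid=50_000):
    """Posterior density at q obtained by normalising likelihood * prior
    on a grid: the numerical integration a conjugate prior avoids."""
    def unnorm(t):
        return t ** s * (1 - t) ** f * beta_pdf(t, a, b)
    h = 1.0 / grid
    # midpoint rule for the evidence  Z = integral of p(x|q') p(q') dq'
    Z = sum(unnorm((i + 0.5) * h) for i in range(grid)) * h
    return unnorm(q) / Z

# Beta(2, 2) prior, then 7 successes and 3 failures are observed.
closed_form = beta_pdf(0.6, 2 + 7, 2 + 3)                # conjugate update: Beta(9, 5)
numeric = posterior_by_integration(0.6, 2.0, 2.0, 7, 3)  # brute force
print(closed_form, numeric)  # the two agree to several decimal places
```

The closed form is a one-line parameter update; the grid version repeats the whole integral for every new batch of data.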
All members of the exponential family have conjugate priors. See Gelman et al.^{[3]} for a catalog.
Example
For a random variable which is a Bernoulli trial with unknown probability of success q in [0,1], the usual conjugate prior is the beta distribution with density

p(q) = q^{α−1} (1−q)^{β−1} / Β(α, β),

where α and β are chosen to reflect any existing belief or information (α = 1 and β = 1 would give a uniform distribution) and Β(α, β) is the Beta function acting as a normalising constant.

If we then sample this random variable and get s successes and f failures, we have

p(q|s, f) = q^{s+α−1} (1−q)^{f+β−1} / Β(s + α, f + β),
which is another Beta distribution with a simple change to the parameters. This posterior distribution could then be used as the prior for more samples, with the parameters simply adding each extra piece of information as it comes.
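The sequential updating described above can be sketched in a few lines of Python (names are illustrative): each batch of observations simply adds its success and failure counts to the Beta parameters, and each posterior serves as the prior for the next batch.

```python
def update_beta(alpha, beta, successes, failures):
    """Conjugate update for Bernoulli data: a Beta(alpha, beta) prior
    becomes a Beta(alpha + successes, beta + failures) posterior."""
    return alpha + successes, beta + failures

# Start from a uniform prior, Beta(1, 1).
a, b = 1.0, 1.0

# Each posterior is reused as the prior for the next batch of samples.
for s, f in [(3, 1), (2, 2), (5, 0)]:
    a, b = update_beta(a, b, s, f)

print(a, b)         # 11.0 4.0 -> the posterior is Beta(11, 4)
print(a / (a + b))  # posterior mean of q
```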
Table of conjugate distributions
Let n denote the number of observations; in the tables below, x_1, …, x_n denote the observed data.
Discrete likelihood distributions
Likelihood | Model parameters | Conjugate prior distribution | Prior hyperparameters | Posterior hyperparameters |
---|---|---|---|---|
Bernoulli | p (probability) | Beta | α, β ^{[4]} | α + Σx_i, β + n − Σx_i |
Binomial (with N_i trials) | p (probability) | Beta | α, β ^{[4]} | α + Σx_i, β + ΣN_i − Σx_i |
Poisson | λ (rate) | Gamma | α, β ^{[4]} | α + Σx_i, β + n |
Multinomial | p (probability vector) | Dirichlet | α (vector) | α + Σx_i, where the x_i are count vectors |
Geometric | p_{0} (probability) | Beta | α, β ^{[4]} | α + n, β + Σx_i (x_i counting failures before the first success) |
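As a worked instance of one row of the table above, here is a short Python sketch (names are my own) of the Poisson–Gamma update: with a Gamma(α, β) prior on the rate λ (β a rate parameter) and n observed counts, the posterior is Gamma(α + Σx_i, β + n).

```python
def update_gamma_poisson(alpha, beta, counts):
    """Conjugate update for Poisson counts: a Gamma(alpha, beta) prior
    on the rate (beta is a rate parameter) becomes
    Gamma(alpha + sum of counts, beta + number of counts)."""
    return alpha + sum(counts), beta + len(counts)

alpha, beta = 2.0, 1.0       # prior belief about the rate
counts = [3, 5, 4, 2, 6]     # n = 5 observations, sum = 20
alpha, beta = update_gamma_poisson(alpha, beta, counts)

print(alpha, beta)   # 22.0 6.0
print(alpha / beta)  # posterior mean of the rate
```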
Continuous likelihood distributions
Likelihood | Model parameters | Conjugate prior distribution | Prior hyperparameters | Posterior hyperparameters |
---|---|---|---|---|
Uniform on [0, θ] | θ | Pareto | x_m, k | max{x_1, …, x_n, x_m}, k + n |
Exponential | λ (rate) | Gamma | α, β ^{[4]} | α + n, β + Σx_i |
Normal with known variance σ^{2} | μ (mean) | Normal | μ_0, σ_0^{2} | (μ_0/σ_0^{2} + Σx_i/σ^{2}) / (1/σ_0^{2} + n/σ^{2}), (1/σ_0^{2} + n/σ^{2})^{−1} |
Normal with known mean μ | σ^{2} (variance) | Scaled inverse chi-square | ν, σ_0^{2} | ν + n, (νσ_0^{2} + Σ(x_i − μ)^{2})/(ν + n) |
Normal with known mean μ | τ (precision) | Gamma | α, β ^{[4]} | α + n/2, β + Σ(x_i − μ)^{2}/2 |
Normal with known mean μ | σ^{2} (variance) | Inverse gamma | α, β ^{[4]} | α + n/2, β + Σ(x_i − μ)^{2}/2 |
Normal | μ and σ^{2}, assuming dependence | Normal-scaled inverse gamma | μ_0, ν, α, β | (νμ_0 + n x̄)/(ν + n), ν + n, α + n/2, β + (1/2)Σ(x_i − x̄)^{2} + nν(x̄ − μ_0)^{2}/(2(ν + n)), where x̄ is the sample mean |
Normal | μ and τ, assuming dependence | Normal-gamma | μ_0, λ, α, β | (λμ_0 + n x̄)/(λ + n), λ + n, α + n/2, β + (1/2)Σ(x_i − x̄)^{2} + λn(x̄ − μ_0)^{2}/(2(λ + n)), where x̄ is the sample mean |
Multivariate normal with known covariance matrix Σ | μ (mean vector) | Multivariate normal | μ_0, Σ_0 | (Σ_0^{−1} + nΣ^{−1})^{−1}(Σ_0^{−1}μ_0 + nΣ^{−1}x̄), (Σ_0^{−1} + nΣ^{−1})^{−1}, where x̄ is the sample mean |
Multivariate normal with known mean μ | Σ (covariance matrix) | Inverse-Wishart | ν, Ψ | n + ν, Ψ + Σ(x_i − μ)(x_i − μ)^{T} |
Pareto with known minimum x_m | k (shape) | Gamma | α, β | α + n, β + Σ ln(x_i/x_m) |
Pareto with known shape k | x_m (location) | Pareto | | |
Gamma with known shape α | β (inverse scale) | Gamma | α_0, β_0 | α_0 + nα, β_0 + Σx_i |
Gamma ^{[5]} | α (shape), β (inverse scale) | No standard named family; see Fink ^{[5]} | | |
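As a worked instance of the "Normal with known variance" row, a small Python sketch (names are illustrative) computing the posterior mean and variance as the precision-weighted combination of prior and data.

```python
def update_normal_known_variance(mu0, var0, var, data):
    """Conjugate update for a Normal mean with known variance `var`:
    an N(mu0, var0) prior yields a Normal posterior whose precision is
    the sum of prior and data precisions and whose mean is the
    precision-weighted average of prior mean and data."""
    n = len(data)
    post_var = 1.0 / (1.0 / var0 + n / var)
    post_mean = post_var * (mu0 / var0 + sum(data) / var)
    return post_mean, post_var

# Vague N(0, 4) prior; data assumed to come from a unit-variance normal.
mu, var = update_normal_known_variance(0.0, 4.0, 1.0, [2.1, 1.9, 2.0, 2.2])
print(mu, var)  # posterior mean pulled toward the data, variance shrunk
```

With four observations the data precision (4) dominates the prior precision (0.25), so the posterior mean lands close to the sample mean of 2.05.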
Notes
- ↑ Howard Raiffa and Robert Schlaifer. Applied Statistical Decision Theory. Division of Research, Graduate School of Business Administration, Harvard University, 1961.
- ↑ Jeff Miller et al. Earliest Known Uses of Some of the Words of Mathematics, "conjugate prior distributions". Electronic document, revision of November 13, 2005, retrieved December 2, 2005.
- ↑ Andrew Gelman, John B. Carlin, Hal S. Stern, and Donald B. Rubin. Bayesian Data Analysis, 2nd edition. CRC Press, 2003. ISBN 1-58488-388-X.
- ↑ ^{4.0} ^{4.1} ^{4.2} ^{4.3} ^{4.4} ^{4.5} ^{4.6} β is inverse scale or rate
- ↑ Daniel Fink. A Compendium of Conjugate Priors. In progress report: Extension and Enhancement of Methods for Setting Data Quality Objectives, 1995. DOE contract 95‑831.