Likelihood function

A likelihood function (often simply called the likelihood) measures how well a statistical model explains observed data by calculating the probability of seeing that data under different parameter values of the model. It is constructed from the joint probability distribution of the random variable that (presumably) generated the observations.[1][2][3] When evaluated on the actual data points, it becomes a function solely of the model parameters.
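As a minimal illustration (the notation and the Bernoulli example below are assumptions for exposition, not taken from the cited sources): for independent observations x = (x₁, …, xₙ) from a model with density or mass function p_θ, the likelihood is the joint probability of the data read as a function of θ with x held fixed.

```latex
% Likelihood: joint density/mass of the observed data, viewed as a function of \theta.
\mathcal{L}(\theta \mid x) \;=\; p_\theta(x) \;=\; \prod_{i=1}^{n} p_\theta(x_i)
\qquad \text{(independent observations)}

% Assumed example: n Bernoulli trials with k observed successes.
\mathcal{L}(\theta \mid x) \;=\; \theta^{k}\,(1-\theta)^{\,n-k},
\qquad 0 \le \theta \le 1 .
```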

In maximum likelihood estimation, the argument that maximizes the likelihood function serves as a point estimate for the unknown parameter, while the Fisher information (often approximated by the negative Hessian of the log-likelihood at the maximum) gives an indication of the estimate's precision.
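A minimal numerical sketch of this idea (the coin-flip data, grid search, and helper names below are illustrative assumptions, not part of the article):

```python
# Sketch: maximum likelihood estimation for a Bernoulli parameter, with the
# observed Fisher information taken as minus the curvature of the
# log-likelihood at its maximum.
import numpy as np

data = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])  # hypothetical coin flips

def log_likelihood(theta, x):
    """Log-likelihood of independent Bernoulli(theta) observations x."""
    return np.sum(x * np.log(theta) + (1 - x) * np.log(1 - theta))

# Maximize over a fine grid (a simple stand-in for a numerical optimizer).
grid = np.linspace(0.001, 0.999, 9999)
ll = np.array([log_likelihood(t, data) for t in grid])
theta_hat = grid[np.argmax(ll)]           # point estimate (here k/n = 0.7)

# Observed Fisher information: minus the second derivative of the
# log-likelihood at the maximum, via a central finite difference.
h = 1e-5
curvature = (log_likelihood(theta_hat + h, data)
             - 2 * log_likelihood(theta_hat, data)
             + log_likelihood(theta_hat - h, data)) / h**2
fisher_info = -curvature
std_error = 1.0 / np.sqrt(fisher_info)    # approximate standard error

print(theta_hat, std_error)
```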

In contrast, in Bayesian statistics, the quantity of interest is the posterior probability of the parameter given the observed data, which reverses the conditioning of the likelihood and is calculated via Bayes' rule.[4]
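In symbols (a standard statement of Bayes' rule, with notation assumed here rather than drawn from the cited source):

```latex
% Bayes' rule: the posterior is proportional to the likelihood times the prior.
p(\theta \mid x) \;=\;
\frac{\mathcal{L}(\theta \mid x)\, p(\theta)}
     {\int \mathcal{L}(\vartheta \mid x)\, p(\vartheta)\, d\vartheta}
\;\propto\; \mathcal{L}(\theta \mid x)\, p(\theta).
```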

  1. ^ Casella, George; Berger, Roger L. (2002). Statistical Inference (2nd ed.). Duxbury. p. 290. ISBN 0-534-24312-6.
  2. ^ Wakefield, Jon (2013). Frequentist and Bayesian Regression Methods (1st ed.). Springer. p. 36. ISBN 978-1-4419-0925-1.
  3. ^ Lehmann, Erich L.; Casella, George (1998). Theory of Point Estimation (2nd ed.). Springer. p. 444. ISBN 0-387-98502-6.
  4. ^ Zellner, Arnold (1971). An Introduction to Bayesian Inference in Econometrics. New York: Wiley. pp. 13–14. ISBN 0-471-98165-6.
