Likelihood principle

In statistics, the likelihood principle is the proposition that, given a statistical model, all the evidence in a sample relevant to model parameters is contained in the likelihood function.

A likelihood function arises from a probability density function considered as a function of its distributional parameterization argument. For example, consider a model which gives the probability density function f_X(x | θ) of an observable random variable X as a function of a parameter θ. Then for a specific value x of X, the function L(θ | x) = f_X(x | θ) is a likelihood function of θ: it gives a measure of how "likely" any particular value of θ is, if we know that X has the value x. The density function may be a density with respect to counting measure, i.e. a probability mass function.
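As a minimal illustrative sketch (not part of the original article), the following Python code evaluates such a likelihood function for a hypothetical binomial model; the data values (7 successes in 10 trials) are invented for the example:

# Sketch assuming a binomial model: the probability mass function of the
# data, read as a function of the parameter theta with the data held fixed.
from scipy.stats import binom

n, x = 10, 7  # hypothetical data: 7 successes in 10 trials

def likelihood(theta):
    # f_X(x | theta) viewed as a function of theta: the likelihood L(theta | x)
    return binom.pmf(x, n, theta)

for theta in (0.3, 0.5, 0.7):
    print(f"L({theta} | x={x}) = {likelihood(theta):.4f}")

Each value reports how probable the fixed observation would be under that candidate value of θ; for likelihood-based inference only the relative sizes of these values matter.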

Two likelihood functions are equivalent if one is a scalar multiple of the other. The likelihood principle is this: All information from the data that is relevant to inferences about the value of the model parameters is in the equivalence class to which the likelihood function belongs. The strong likelihood principle applies this same criterion to cases such as sequential experiments where the sample of data that is available results from applying a stopping rule to the observations earlier in the experiment.[1]
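To illustrate the equivalence-class statement, a hedged Python sketch (the numbers are an assumed example, not taken from this excerpt) compares two stopping rules that yield proportional likelihoods: observing 3 successes in 12 Bernoulli trials with the number of trials fixed in advance (binomial), versus sampling until the 3rd success (negative binomial). Because the two likelihood functions differ only by a constant factor, the likelihood principle says they carry the same evidence about θ.

# Illustrative sketch: proportional likelihoods under two stopping rules.
from scipy.stats import binom, nbinom

successes, trials = 3, 12      # hypothetical data
failures = trials - successes

for theta in (0.2, 0.25, 0.3, 0.5):
    L_fixed_n = binom.pmf(successes, trials, theta)        # trials fixed at 12
    L_stop_at_3 = nbinom.pmf(failures, successes, theta)   # stop at 3rd success
    print(f"theta={theta}: L_fixed_n / L_stop_at_3 = {L_fixed_n / L_stop_at_3:.3f}")

The printed ratio is the same constant for every θ, so the two likelihood functions lie in the same equivalence class.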



  1. Dodge, Y. (2003). The Oxford Dictionary of Statistical Terms. Oxford University Press. ISBN 0-19-920613-9.
