Bias (statistics)
In statistics, a biased estimator is one that, on average, over- or underestimates what is being estimated. The word bias has at least two different senses in statistics: one refers to something considered very bad, the other to something that can at times produce results more useful, and closer to the truth, than an insistence on being "unbiased."

The bad kind

One meaning is involved in what is called a biased sample: if some members of the population are more likely to be included in the sample than others, and those members tend to have higher or lower values of the quantity being estimated, then the result will be correspondingly higher or lower than the true value.

A famous case of what can go wrong when using a biased sample is found in the 1936 US presidential election polls. The Literary Digest held a poll that forecast that Alfred M. Landon would defeat Franklin Delano Roosevelt by 57% to 43%. George Gallup, using a much smaller sample (300,000 rather than 2,000,000), predicted that Roosevelt would win, and he was right. What went wrong with the Literary Digest poll? It had used lists of telephone and automobile owners to select its sample. In those days these were luxuries, so the sample consisted mainly of middle- and upper-class citizens, the majority of whom voted for Landon, while the lower classes largely voted for Roosevelt. Because the sample was biased towards wealthier citizens, the result was incorrect.
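To make the mechanism concrete, here is a minimal simulation sketch in Python. The population proportions and preference rates are made up for illustration (they are not the actual 1936 polling data): when wealthier members of a population both favour one candidate and are more likely to be reached by the pollster, the poll overstates that candidate's support.

import random

random.seed(0)

# Hypothetical population: 40% "wealthy" voters, 60% "other" voters.
# Wealthy voters favour candidate A; others favour candidate B.
# (Illustrative numbers only -- not the real 1936 electorate.)
population = (
    [{"wealthy": True,  "votes_A": random.random() < 0.70} for _ in range(40_000)] +
    [{"wealthy": False, "votes_A": random.random() < 0.35} for _ in range(60_000)]
)

true_share_A = sum(p["votes_A"] for p in population) / len(population)

# Biased sample: wealthy voters are far more likely to be reached
# (like telephone and automobile owners in the Literary Digest poll).
def biased_sample(pop, size):
    weights = [5.0 if p["wealthy"] else 1.0 for p in pop]
    return random.choices(pop, weights=weights, k=size)

sample = biased_sample(population, 10_000)
polled_share_A = sum(p["votes_A"] for p in sample) / len(sample)

print(f"true support for A:   {true_share_A:.3f}")
print(f"biased poll estimate: {polled_share_A:.3f}")  # systematically too high

Enlarging the biased sample would not help: the overweighting of wealthy voters, not the sample size, drives the error.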

This kind of bias is usually regarded as a worse problem than statistical noise: problems with statistical noise can be lessened by enlarging the sample, but a biased sample will not go away that easily. In particular, a meta-analysis will distill good data from studies that themselves suffer from statistical noise, but a meta-analysis of biased studies will itself be biased.

The sometimes-good kind

Another kind of bias in statistics does not involve biased samples, but does involve the use of a statistic whose average value differs from the value of the quantity being estimated. Suppose we are trying to estimate the parameter θ using an estimator θ̂ (that is, some function of the observed data). Then the bias of θ̂ is defined to be

\operatorname{E}(\hat{\theta}) - \theta.

In words, this would be "the expected value of the estimator θ̂ minus the true value θ". This may be rewritten as

\operatorname{E}(\hat{\theta} - \theta),

which would read "the expected value of the difference between the estimator and the true value" (the two forms are equivalent because θ is a constant, so its expected value is θ itself).
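The definition can be checked numerically. The sketch below is a Python illustration, not part of the original article: the helper name estimate_bias and the Monte Carlo setup are assumptions. It approximates E(θ̂) − θ by simulating many samples, applying the estimator to each, and averaging; for the sample mean as an estimator of μ the result is close to zero, as expected for an unbiased estimator.

import random
import statistics

random.seed(1)

def estimate_bias(estimator, draw_sample, theta, trials=20_000):
    """Monte Carlo approximation of E(theta_hat) - theta."""
    estimates = [estimator(draw_sample()) for _ in range(trials)]
    return statistics.fmean(estimates) - theta

# Example: the sample mean as an estimator of mu for N(mu, sigma^2) data.
mu, sigma, n = 3.0, 2.0, 10
draw = lambda: [random.gauss(mu, sigma) for _ in range(n)]

print(estimate_bias(statistics.fmean, draw, theta=mu))  # approximately 0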

For example, suppose X1, ..., Xn are independent and identically distributed random variables, each with a normal distribution with expectation μ and variance σ². Let

\overline{X} = (X_1 + \cdots + X_n)/n

be the "sample average", and let

S^2 = \frac{1}{n} \sum_{i=1}^{n} (X_i - \overline{X})^2

be a "sample variance". Then S² is a "biased estimator" of σ² because

\operatorname{E}(S^2) = \frac{n-1}{n}\sigma^2 \neq \sigma^2.

However, this biased estimator is, by the commonly used criterion of "mean squared error", actually better (but only very slightly) than the unbiased estimator that results from putting n − 1 in the denominator where n appears in the definition of S² above. Even then, the square root of the unbiased estimator of the population variance is not an unbiased estimator of the population standard deviation; for a non-linear function f and an unbiased estimator U of a parameter p, f(U) is usually not an unbiased estimator of f(p).
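A small simulation sketch in Python illustrates both claims; the particular values of μ, σ, n and the number of trials are arbitrary choices, not from the article. The 1/n estimator is biased downward by roughly σ²/n, yet its mean squared error for normal data is slightly smaller than that of the 1/(n − 1) estimator.

import random
import statistics

random.seed(2)

mu, sigma, n, trials = 0.0, 1.0, 10, 200_000
true_var = sigma ** 2

def sample_var(xs, denom):
    m = statistics.fmean(xs)
    return sum((x - m) ** 2 for x in xs) / denom

bias = {"1/n": 0.0, "1/(n-1)": 0.0}
mse  = {"1/n": 0.0, "1/(n-1)": 0.0}

for _ in range(trials):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    for label, denom in (("1/n", n), ("1/(n-1)", n - 1)):
        est = sample_var(xs, denom)
        bias[label] += (est - true_var) / trials
        mse[label]  += (est - true_var) ** 2 / trials

print(bias)  # 1/n is about -sigma^2/n; 1/(n-1) is about 0
print(mse)   # the 1/n estimator's MSE is slightly smaller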

A far more extreme case of a biased estimator being better than any unbiased estimator is well-known: Suppose X has a Poisson distribution with expectation λ. It is desired to estimate

\operatorname{P}(X = 0)^2 = e^{-2\lambda}.

The only function of the data constituting an unbiased estimator is

\delta(X) = (-1)^X.

If the observed value of X is 100, then the estimate is 1, although the true value of the quantity being estimated is obviously very likely to be near 0, which is the opposite extreme. And if X is observed to be 101, then the estimate is even more absurd: it is -1, although the quantity being estimated obviously must be positive. The (biased) maximum-likelihood estimator

e^{-2X}

is better than this unbiased estimator in the sense that the mean squared error

e^{-4\lambda} - 2e^{\lambda(1/e^2 - 3)} + e^{\lambda(1/e^4 - 1)}

is smaller. Compare the unbiased estimator's MSE of

1 - e^{-4\lambda}.

The MSE is a function of the true value λ. The bias of the maximum-likelihood estimator is:

e^{\lambda(1/e^2 - 1)} - e^{-2\lambda}.
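The following sketch in Python compares the two estimators of e^{-2λ} by simulation; the value λ = 0.5, the number of trials, and the simple Poisson sampler are assumptions made for illustration. The unbiased estimator (−1)^X averages to roughly the right value but only ever takes the absurd values ±1, while the biased maximum-likelihood estimator e^{−2X} has a much smaller mean squared error.

import math
import random

random.seed(3)

lam = 0.5                      # arbitrary true Poisson mean
target = math.exp(-2 * lam)    # quantity to estimate: P(X = 0)^2 = e^(-2*lambda)
trials = 200_000

def poisson(lmbda):
    """Draw one Poisson variate (Knuth's simple method; fine for small lambda)."""
    l, k, p = math.exp(-lmbda), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

mse_unbiased = mse_mle = mean_unbiased = mean_mle = 0.0
for _ in range(trials):
    x = poisson(lam)
    unbiased, mle = (-1.0) ** x, math.exp(-2 * x)
    mean_unbiased += unbiased / trials
    mean_mle      += mle / trials
    mse_unbiased  += (unbiased - target) ** 2 / trials
    mse_mle       += (mle - target) ** 2 / trials

print(f"target e^(-2*lam) = {target:.4f}")
print(f"mean of (-1)^X    = {mean_unbiased:.4f}  (unbiased)")
print(f"mean of e^(-2X)   = {mean_mle:.4f}  (biased upward)")
print(f"MSE of (-1)^X     = {mse_unbiased:.4f}")
print(f"MSE of e^(-2X)    = {mse_mle:.4f}  (much smaller)")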

The bias of maximum-likelihood estimators can be substantial. Consider a case where n tickets numbered from 1 through n are placed in a box and one is selected at random, giving a value X. If n is unknown, then the maximum-likelihood estimator of n is X, even though the expectation of X is only (n + 1)/2; we can only be certain that n is at least X and is probably more. In this case, the natural unbiased estimator is 2X − 1.
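A quick simulation sketch in Python (the choice n = 100 and the number of trials are arbitrary) shows this: the maximum-likelihood estimator X comes out near (n + 1)/2 on average, while 2X − 1 averages to n, though its individual estimates are much more variable.

import random
import statistics

random.seed(4)

n, trials = 100, 100_000        # arbitrary true number of tickets
draws = [random.randint(1, n) for _ in range(trials)]

mle_estimates      = draws                       # maximum-likelihood estimator: X
unbiased_estimates = [2 * x - 1 for x in draws]  # unbiased estimator: 2X - 1

print(statistics.fmean(mle_estimates))       # about (n + 1)/2, i.e. roughly 50.5
print(statistics.fmean(unbiased_estimates))  # about n, i.e. roughly 100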
