
Behrens–Fisher distribution


In statistics, the Behrens–Fisher distribution, named after Ronald Fisher and Walter Behrens, is a parameterized family of probability distributions arising from the solution of the Behrens–Fisher problem proposed first by Behrens and several years later by Fisher. The Behrens–Fisher problem is that of statistical inference concerning the difference between the means of two normally distributed populations when the ratio of their variances is not known (and in particular, it is not known that their variances are equal).

Definition

The Behrens–Fisher distribution is the distribution of a random variable of the form

T_{2}\cos\theta - T_{1}\sin\theta

where T1 and T2 are independent random variables, each with a Student's t-distribution, with respective degrees of freedom ν1 = n1 − 1 and ν2 = n2 − 1, and θ is a constant. Thus the family of Behrens–Fisher distributions is parametrized by ν1, ν2, and θ.
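A draw from this distribution can be simulated directly from the definition. The following is a minimal sketch using NumPy; the function name and the parameter values in the example are illustrative, not part of the article.

```python
import numpy as np

def behrens_fisher_sample(nu1, nu2, theta, size, rng=None):
    """Draw samples of T2*cos(theta) - T1*sin(theta), where T1 and T2
    are independent Student-t variates with nu1 and nu2 degrees of freedom."""
    rng = np.random.default_rng(rng)
    t1 = rng.standard_t(nu1, size)
    t2 = rng.standard_t(nu2, size)
    return t2 * np.cos(theta) - t1 * np.sin(theta)

# The distribution is symmetric about 0, so the sample mean should be near 0.
samples = behrens_fisher_sample(5, 7, np.pi / 6, 100_000, rng=0)
```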

Derivation

Suppose it were known that the two population variances are equal, and samples of sizes n1 and n2 are taken from the two populations:

\begin{aligned}
X_{1,1},\ldots,X_{1,n_1} &\sim \operatorname{i.i.d.} N(\mu_1, \sigma^2),\\
X_{2,1},\ldots,X_{2,n_2} &\sim \operatorname{i.i.d.} N(\mu_2, \sigma^2),
\end{aligned}

where "i.i.d." stands for independent and identically distributed random variables and N denotes the normal distribution. The two sample means are

\begin{aligned}
\bar{X}_1 &= (X_{1,1} + \cdots + X_{1,n_1})/n_1 \\
\bar{X}_2 &= (X_{2,1} + \cdots + X_{2,n_2})/n_2
\end{aligned}

The usual "pooled" unbiased estimate of the common variance σ² is then

S_{\mathrm{pooled}}^2 = \frac{\sum_{k=1}^{n_1}(X_{1,k} - \bar{X}_1)^2 + \sum_{k=1}^{n_2}(X_{2,k} - \bar{X}_2)^2}{n_1 + n_2 - 2} = \frac{(n_1 - 1)S_1^2 + (n_2 - 1)S_2^2}{n_1 + n_2 - 2}

where S1² and S2² are the usual unbiased (Bessel-corrected) estimates of the two population variances.

Under these assumptions, the pivotal quantity

\frac{(\mu_2 - \mu_1) - (\bar{X}_2 - \bar{X}_1)}{\sqrt{\dfrac{S_{\mathrm{pooled}}^2}{n_1} + \dfrac{S_{\mathrm{pooled}}^2}{n_2}}}

has a t-distribution with n1 + n2 − 2 degrees of freedom. Accordingly, one can find a confidence interval for μ2 − μ1 whose endpoints are

\bar{X}_2 - \bar{X}_1 \pm A \cdot S_{\mathrm{pooled}} \sqrt{\frac{1}{n_1} + \frac{1}{n_2}},

where A is an appropriate quantile of the t-distribution.
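The equal-variance interval above is straightforward to compute. A minimal sketch in Python (the function name and the sample data are illustrative; the t quantile A is taken from SciPy):

```python
import numpy as np
from scipy import stats

def pooled_t_interval(x1, x2, conf=0.95):
    """Equal-variance confidence interval for mu2 - mu1, using the
    pooled variance estimate and the t-distribution with n1+n2-2 df."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    n1, n2 = len(x1), len(x2)
    # Pooled unbiased variance estimate (Bessel-corrected sample variances).
    s2 = ((n1 - 1) * np.var(x1, ddof=1) + (n2 - 1) * np.var(x2, ddof=1)) / (n1 + n2 - 2)
    a = stats.t.ppf(1 - (1 - conf) / 2, df=n1 + n2 - 2)   # quantile A
    half = a * np.sqrt(s2) * np.sqrt(1 / n1 + 1 / n2)
    diff = x2.mean() - x1.mean()
    return diff - half, diff + half
```

The interval is symmetric about the observed difference of sample means, as the formula above requires.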

However, in the Behrens–Fisher problem, the two population variances are not known to be equal, nor is their ratio known. Fisher considered the pivotal quantity

\frac{(\mu_2 - \mu_1) - (\bar{X}_2 - \bar{X}_1)}{\sqrt{\dfrac{S_1^2}{n_1} + \dfrac{S_2^2}{n_2}}}.

This can be written as

T_{2}\cos\theta - T_{1}\sin\theta,

where

T_i = \frac{\mu_i - \bar{X}_i}{S_i/\sqrt{n_i}} \quad \text{for } i = 1, 2

are the usual one-sample t-statistics and

\tan\theta = \frac{S_1/\sqrt{n_1}}{S_2/\sqrt{n_2}}

and one takes θ to be in the first quadrant. The algebraic details are as follows:

\begin{aligned}
\frac{(\mu_2 - \mu_1) - (\bar{X}_2 - \bar{X}_1)}{\sqrt{\dfrac{S_1^2}{n_1} + \dfrac{S_2^2}{n_2}}}
&= \frac{\mu_2 - \bar{X}_2}{\sqrt{\dfrac{S_1^2}{n_1} + \dfrac{S_2^2}{n_2}}} - \frac{\mu_1 - \bar{X}_1}{\sqrt{\dfrac{S_1^2}{n_1} + \dfrac{S_2^2}{n_2}}} \\
&= \underbrace{\frac{\mu_2 - \bar{X}_2}{S_2/\sqrt{n_2}}}_{\text{This is } T_2} \cdot \underbrace{\left(\frac{S_2/\sqrt{n_2}}{\sqrt{\dfrac{S_1^2}{n_1} + \dfrac{S_2^2}{n_2}}}\right)}_{\text{This is } \cos\theta} - \underbrace{\frac{\mu_1 - \bar{X}_1}{S_1/\sqrt{n_1}}}_{\text{This is } T_1} \cdot \underbrace{\left(\frac{S_1/\sqrt{n_1}}{\sqrt{\dfrac{S_1^2}{n_1} + \dfrac{S_2^2}{n_2}}}\right)}_{\text{This is } \sin\theta}. \qquad (1)
\end{aligned}

The fact that the sum of the squares of the expressions in parentheses above is 1 implies that they are the cosine and sine of some angle.
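The decomposition (1) can be checked numerically: with tan θ = (S1/√n1)/(S2/√n2) and θ in the first quadrant, the pivotal quantity equals T2 cos θ − T1 sin θ for any data. A sketch with illustrative (assumed) data values:

```python
import numpy as np

# Illustrative data and true means; any values would do for this identity check.
rng = np.random.default_rng(1)
x1, x2 = rng.normal(0.0, 1.0, 8), rng.normal(0.5, 2.0, 12)
mu1, mu2 = 0.0, 0.5

se1 = np.std(x1, ddof=1) / np.sqrt(len(x1))   # S1 / sqrt(n1)
se2 = np.std(x2, ddof=1) / np.sqrt(len(x2))   # S2 / sqrt(n2)

# Left-hand side of (1): the pivotal quantity.
pivot = ((mu2 - mu1) - (x2.mean() - x1.mean())) / np.hypot(se1, se2)

# Right-hand side of (1): T2*cos(theta) - T1*sin(theta).
t1 = (mu1 - x1.mean()) / se1
t2 = (mu2 - x2.mean()) / se2
theta = np.arctan2(se1, se2)   # first-quadrant angle with tan(theta) = se1/se2

assert np.isclose(pivot, t2 * np.cos(theta) - t1 * np.sin(theta))
```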

The Behrens–Fisher distribution is actually the conditional distribution of the quantity (1) above, given the values of the quantities labeled cos θ and sin θ. In effect, Fisher conditions on ancillary information.

Fisher then found the "fiducial interval" whose endpoints are

\bar{X}_2 - \bar{X}_1 \pm A\sqrt{\frac{S_1^2}{n_1} + \frac{S_2^2}{n_2}}

where A is the appropriate percentage point of the Behrens–Fisher distribution. Fisher claimed that the probability that μ2 − μ1 lies in this interval, given the data (ultimately the Xs), is the probability that a Behrens–Fisher-distributed random variable lies between −A and A.

Fiducial intervals versus confidence intervals

Bartlett showed that this "fiducial interval" is not a confidence interval, because its coverage rate is not constant. Fisher did not consider that a cogent objection to the use of the fiducial interval.


Further reading

  • Kendall, Maurice G., Stuart, Alan (1973) The Advanced Theory of Statistics, Volume 2: Inference and Relationship, 3rd Edition, Griffin. ISBN 0-85264-215-6 (Chapter 21)