Generalized chi-squared distribution

[Plots: probability density function and cumulative distribution function of the generalized chi-square distribution]

Notation: {\displaystyle {\tilde {\chi }}({\boldsymbol {w}},{\boldsymbol {k}},{\boldsymbol {\lambda }},s,m)}
Parameters: {\displaystyle {\boldsymbol {w}}} , vector of weights of noncentral chi-square components
{\displaystyle {\boldsymbol {k}}} , vector of degrees of freedom of noncentral chi-square components
{\displaystyle {\boldsymbol {\lambda }}} , vector of non-centrality parameters of chi-square components
{\displaystyle s} , scale of normal term
{\displaystyle m} , offset
Support: {\displaystyle x\in {\begin{cases}[m,+\infty )&{\text{if }}w_{i}\geq 0,s=0,\\(-\infty ,m]&{\text{if }}w_{i}\leq 0,s=0,\\\mathbb {R} &{\text{otherwise.}}\end{cases}}}
PDF: no closed-form expression
CDF: no closed-form expression
Mean: {\displaystyle \sum _{j}w_{j}(k_{j}+\lambda _{j})+m}
Variance: {\displaystyle 2\sum _{j}w_{j}^{2}(k_{j}+2\lambda _{j})+s^{2}}
MGF: {\displaystyle {\frac {\exp \left[t\left(m+\sum _{j}{\frac {w_{j}\lambda _{j}}{1-2w_{j}t}}\right)+{\frac {s^{2}t^{2}}{2}}\right]}{\prod _{j}\left(1-2w_{j}t\right)^{k_{j}/2}}}}
CF: {\displaystyle {\frac {\exp \left[it\left(m+\sum _{j}{\frac {w_{j}\lambda _{j}}{1-2iw_{j}t}}\right)-{\frac {s^{2}t^{2}}{2}}\right]}{\prod _{j}\left(1-2iw_{j}t\right)^{k_{j}/2}}}}

In probability theory and statistics, the generalized chi-squared distribution (or generalized chi-square distribution) is the distribution of a quadratic form of a multinormal variable (normal vector), or of a linear combination of different normal variables and squares of normal variables. Equivalently, it is the distribution of a linear combination of independent noncentral chi-square variables and a normal variable. There are several other generalizations for which the same term is sometimes used; some of them are special cases of the family discussed here, for example the gamma distribution.

Definition

The generalized chi-squared variable may be described in multiple ways. One is to write it as a weighted sum of independent noncentral chi-square variables χ 2 {\displaystyle {{\chi }'}^{2}} and a standard normal variable z {\displaystyle z} :

χ ~ ( w , k , λ , s , m ) = i w i χ 2 ( k i , λ i ) + s z + m . {\displaystyle {\tilde {\chi }}({\boldsymbol {w}},{\boldsymbol {k}},{\boldsymbol {\lambda }},s,m)=\sum _{i}w_{i}{{\chi }'}^{2}(k_{i},\lambda _{i})+sz+m.}

Here the parameters are the weights w i {\displaystyle w_{i}} , the degrees of freedom k i {\displaystyle k_{i}} and non-centralities λ i {\displaystyle \lambda _{i}} of the constituent non-central chi-squares, and the coefficients s {\displaystyle s} and m {\displaystyle m} of the normal. Some important special cases of this have all weights w i {\displaystyle w_{i}} of the same sign, or have central chi-squared components, or omit the normal term.
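
Sampling from the distribution follows directly from this definition. Below is a minimal Python sketch (the function name gx2_sample and the example parameter values are illustrative, not from a published library):

import numpy as np

rng = np.random.default_rng(0)

def gx2_sample(n, w, k, lam, s, m):
    # weighted sum of noncentral chi-squares, plus a scaled normal and an offset
    total = np.zeros(n)
    for wi, ki, li in zip(w, k, lam):
        total += wi * rng.noncentral_chisquare(ki, li, size=n)
    return total + s * rng.standard_normal(n) + m

# 10^5 draws; the sample mean should be near sum(w*(k+lam)) + m = 3.5 here
samples = gx2_sample(100_000, w=[1, -0.5], k=[2, 1], lam=[1, 0], s=0.5, m=1)
print(samples.mean())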

Since a non-central chi-squared variable is a sum of squares of normal variables with different means, the generalized chi-square variable can also be written as a sum of squares of independent normal variables plus an independent normal variable: that is, as a quadratic in normal variables.

Another equivalent way is to formulate it as a quadratic form of a normal vector x {\displaystyle {\boldsymbol {x}}} :

χ ~ = q ( x ) = x Q 2 x + q 1 x + q 0 {\displaystyle {\tilde {\chi }}=q({\boldsymbol {x}})={\boldsymbol {x}}'\mathbf {Q_{2}} {\boldsymbol {x}}+{\boldsymbol {q_{1}}}'{\boldsymbol {x}}+q_{0}} .

Here Q 2 {\displaystyle \mathbf {Q_{2}} } is a matrix, q 1 {\displaystyle {\boldsymbol {q_{1}}}} is a vector, and q 0 {\displaystyle q_{0}} is a scalar. These, together with the mean μ {\displaystyle {\boldsymbol {\mu }}} and covariance matrix Σ {\displaystyle \mathbf {\Sigma } } of the normal vector x {\displaystyle {\boldsymbol {x}}} , parameterize the distribution.

For the most general case, a reduction towards a common standard form can be made by using a representation of the following form:

X = ( z + a ) T A ( z + a ) + c T z = ( x + b ) T D ( x + b ) + d T x + e , {\displaystyle X=(z+a)^{\mathrm {T} }A(z+a)+c^{\mathrm {T} }z=(x+b)^{\mathrm {T} }D(x+b)+d^{\mathrm {T} }x+e,}

where D is a diagonal matrix and where x represents a vector of uncorrelated standard normal random variables.

Parameter conversions

A generalized chi-square variable or distribution can be parameterized in two ways. The first is in terms of the weights w i {\displaystyle w_{i}} , the degrees of freedom k i {\displaystyle k_{i}} and non-centralities λ i {\displaystyle \lambda _{i}} of the constituent non-central chi-squares, and the coefficients s {\displaystyle s} and m {\displaystyle m} of the added normal term. The second parameterization uses the quadratic form of a normal vector, where the parameters are the matrix Q 2 {\displaystyle \mathbf {Q_{2}} } , the vector q 1 {\displaystyle {\boldsymbol {q_{1}}}} , and the scalar q 0 {\displaystyle q_{0}} , together with the mean μ {\displaystyle {\boldsymbol {\mu }}} and covariance matrix Σ {\displaystyle \mathbf {\Sigma } } of the normal vector.

The parameters of each expression can be computed from those of the other: the parameters of the first expression (in terms of non-central chi-squares, a normal and a constant) can be calculated from the parameters of the second expression (quadratic form of a normal vector), and vice versa.

There exists Matlab code to convert from one set of parameters to another.
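
As an illustration of this conversion, here is a minimal Python sketch of the quadratic-form-to-weights direction, obtained by standardizing the normal vector, diagonalizing the quadratic term, and completing the square. It assumes NumPy-array inputs and a nonsingular covariance matrix; the function name quad_form_to_gx2_params is made up for this example:

import numpy as np

def quad_form_to_gx2_params(Q2, q1, q0, mu, Sigma):
    # q(x) = x' Q2 x + q1' x + q0, with x ~ N(mu, Sigma); Sigma assumed positive definite
    S = np.linalg.cholesky(Sigma)            # x = mu + S z, z ~ N(0, I)
    Q2 = (Q2 + Q2.T) / 2                     # symmetrize the quadratic term
    d, R = np.linalg.eigh(S.T @ Q2 @ S)      # rotate so the quadratic part is diagonal
    b = R.T @ S.T @ (2 * Q2 @ mu + q1)       # linear coefficients in rotated coordinates
    c = mu @ Q2 @ mu + q1 @ mu + q0          # constant term

    nz = np.abs(d) > 1e-10                   # nonzero eigenvalues -> chi-square terms
    w, idx = np.unique(np.round(d[nz], 10), return_inverse=True)
    k = np.bincount(idx).astype(float)       # degrees of freedom: eigenvalue multiplicities
    lam = np.bincount(idx, weights=(b[nz] / (2 * d[nz])) ** 2)  # noncentralities
    s = np.linalg.norm(b[~nz])               # zero-eigenvalue directions -> normal term
    m = c - np.sum(w * lam)                  # offset left over after completing the square
    return w, k, lam, s, m

The zero-eigenvalue directions contribute only a linear term in standard normal variables, which is why they are collected into the single normal coefficient s.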

Computing the PDF/CDF/inverse CDF/random numbers

The probability density, cumulative distribution, and inverse cumulative distribution functions of a generalized chi-squared variable do not have simple closed-form expressions. But there exist several methods to compute them numerically: Ruben's method, Imhof's method, IFFT method, ray method, and ellipse approximation.
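
As a sketch of the idea behind Imhof-type methods, the CDF can be computed by numerically inverting the characteristic function given above, using the Gil-Pelaez formula F(x) = 1/2 − (1/π) ∫₀^∞ Im[e^(−itx) φ(t)]/t dt. The minimal Python example below (function names gx2_cf and gx2_cdf are illustrative, not the published implementations) omits the careful error control of the published algorithms:

import numpy as np
from scipy.integrate import quad

def gx2_cf(t, w, k, lam, s, m):
    # characteristic function of the generalized chi-square (see the formula above)
    w, k, lam = (np.asarray(a, dtype=float) for a in (w, k, lam))
    d = 1 - 2j * w * t
    return np.exp(1j * t * (m + np.sum(w * lam / d)) - s**2 * t**2 / 2) \
        / np.prod(d ** (k / 2))

def gx2_cdf(x, w, k, lam, s, m):
    # Gil-Pelaez inversion: F(x) = 1/2 - (1/pi) * int_0^inf Im[e^{-itx} cf(t)]/t dt
    integrand = lambda t: (np.exp(-1j * t * x) * gx2_cf(t, w, k, lam, s, m)).imag / t
    val, _ = quad(integrand, 0, np.inf, limit=200)
    return 0.5 - val / np.pi

# CDF at x = 3.5 (the mean) of the example distribution used earlier
print(gx2_cdf(3.5, w=[1, -0.5], k=[2, 1], lam=[1, 0], s=0.5, m=1))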

Numerical algorithms and computer code (Fortran and C, Matlab, R, Python, Julia) have been published that implement some of these methods to compute the PDF, CDF, and inverse CDF, and to generate random numbers.

The following table shows the best methods to use to compute the CDF and PDF for the different parts of the generalized chi-square distribution in different cases:

type                                       | part           | best CDF/PDF method(s)
ellipse: all w_i of the same sign, s = 0   | body           | Ruben, Imhof, IFFT, ray
                                           | finite tail    | Ruben, ray (if all λ_i = 0), ellipse
                                           | infinite tail  | Ruben, ray
not ellipse: mixed-sign w_i, and/or s ≠ 0  | body           | Imhof, IFFT, ray
                                           | infinite tails | ray

Applications

The generalized chi-squared is the distribution of statistical estimates in cases where the usual statistical theory does not hold, as in the examples below.

In model fitting and selection

If a predictive model is fitted by least squares, but the residuals have either autocorrelation or heteroscedasticity, then alternative models can be compared (in model selection) by relating changes in the sum of squares to an asymptotically valid generalized chi-squared distribution.

Classifying normal vectors using Gaussian discriminant analysis

If x {\displaystyle {\boldsymbol {x}}} is a normal vector, its log likelihood is a quadratic form of x {\displaystyle {\boldsymbol {x}}} , and is hence distributed as a generalized chi-squared. The log likelihood ratio that x {\displaystyle {\boldsymbol {x}}} arises from one normal distribution versus another is also a quadratic form, so distributed as a generalized chi-squared.
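
For example, if {\displaystyle {\boldsymbol {x}}} has mean {\displaystyle {\boldsymbol {\mu }}} and covariance matrix {\displaystyle \mathbf {\Sigma } } , expanding its log density makes the quadratic-form parameters explicit:

{\displaystyle \ln p({\boldsymbol {x}})=-{\tfrac {1}{2}}({\boldsymbol {x}}-{\boldsymbol {\mu }})'\mathbf {\Sigma } ^{-1}({\boldsymbol {x}}-{\boldsymbol {\mu }})-{\tfrac {1}{2}}\ln |2\pi \mathbf {\Sigma } |,}

which is {\displaystyle q({\boldsymbol {x}})} with {\displaystyle \mathbf {Q_{2}} =-{\tfrac {1}{2}}\mathbf {\Sigma } ^{-1}} , {\displaystyle {\boldsymbol {q_{1}}}=\mathbf {\Sigma } ^{-1}{\boldsymbol {\mu }}} and {\displaystyle q_{0}=-{\tfrac {1}{2}}{\boldsymbol {\mu }}'\mathbf {\Sigma } ^{-1}{\boldsymbol {\mu }}-{\tfrac {1}{2}}\ln |2\pi \mathbf {\Sigma } |} .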

In Gaussian discriminant analysis, samples from multinormal distributions are optimally separated by using a quadratic classifier, a boundary that is a quadratic function (e.g. the curve defined by setting the likelihood ratio between two Gaussians to 1). The classification error rates of different types (false positives and false negatives) are integrals of the normal distributions within the quadratic regions defined by this classifier. Since this is mathematically equivalent to integrating a quadratic form of a normal vector, each error rate is given by the cumulative distribution of a generalized chi-squared variable.
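
As an illustration, and continuing the sketches above (quad_form_to_gx2_params and gx2_cdf are the illustrative helpers defined earlier, and the class parameters below are arbitrary examples), the error rate for one class is the generalized chi-square CDF of the log likelihood ratio, evaluated at zero:

import numpy as np

# two hypothetical classes (all numbers here are made up for illustration)
mu_a, Sigma_a = np.array([0., 0.]), np.eye(2)
mu_b, Sigma_b = np.array([2., 1.]), np.array([[2., 0.5], [0.5, 1.]])
ia, ib = np.linalg.inv(Sigma_a), np.linalg.inv(Sigma_b)

# log likelihood ratio log p_a(x) - log p_b(x) as a quadratic form in x
Q2 = 0.5 * (ib - ia)
q1 = ia @ mu_a - ib @ mu_b
q0 = 0.5 * (mu_b @ ib @ mu_b - mu_a @ ia @ mu_a) \
     + 0.5 * np.log(np.linalg.det(Sigma_b) / np.linalg.det(Sigma_a))

# a class-a sample is misclassified when its log likelihood ratio is negative
w, k, lam, s, m = quad_form_to_gx2_params(Q2, q1, q0, mu_a, Sigma_a)
p_error_a = gx2_cdf(0.0, w, k, lam, s, m)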

In signal processing

The following application arises in the context of Fourier analysis in signal processing, renewal theory in probability theory, and multi-antenna systems in wireless communication. The common factor of these areas is that the sum of exponentially distributed variables is of importance (or equivalently, the sum of squared magnitudes of circularly-symmetric centered complex Gaussian variables).

If Z i {\displaystyle Z_{i}} are k independent, circularly-symmetric complex Gaussian random variables with mean 0 and variances σ i 2 {\displaystyle \sigma _{i}^{2}} , then the random variable

Q ~ = i = 1 k | Z i | 2 {\displaystyle {\tilde {Q}}=\sum _{i=1}^{k}|Z_{i}|^{2}}

has a generalized chi-squared distribution of a particular form. The difference from the standard chi-squared distribution is that Z i {\displaystyle Z_{i}} are complex and can have different variances, and the difference from the more general generalized chi-squared distribution is that the relevant scaling matrix A is diagonal. If μ = σ i 2 {\displaystyle \mu =\sigma _{i}^{2}} for all i, then Q ~ {\displaystyle {\tilde {Q}}} , scaled down by μ / 2 {\displaystyle \mu /2} (i.e. multiplied by 2 / μ {\displaystyle 2/\mu } ), has a chi-squared distribution, χ 2 ( 2 k ) {\displaystyle \chi ^{2}(2k)} , also known as an Erlang distribution. If σ i 2 {\displaystyle \sigma _{i}^{2}} have distinct values for all i, then Q ~ {\displaystyle {\tilde {Q}}} has the pdf

f ( x ; k , σ 1 2 , , σ k 2 ) = i = 1 k e x σ i 2 σ i 2 j = 1 , j i k ( 1 σ j 2 σ i 2 ) for  x 0. {\displaystyle f(x;k,\sigma _{1}^{2},\ldots ,\sigma _{k}^{2})=\sum _{i=1}^{k}{\frac {e^{-{\frac {x}{\sigma _{i}^{2}}}}}{\sigma _{i}^{2}\prod _{j=1,j\neq i}^{k}\left(1-{\frac {\sigma _{j}^{2}}{\sigma _{i}^{2}}}\right)}}\quad {\text{for }}x\geq 0.}
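
This closed form, valid only when the variances are all distinct, is straightforward to evaluate; below is a minimal Python transcription (the function name is illustrative):

import numpy as np

def sum_abs_sq_pdf(x, variances):
    # direct transcription of the closed-form pdf above (distinct variances only)
    v = np.asarray(variances, dtype=float)
    out = 0.0
    for i in range(len(v)):
        others = np.delete(v, i)
        out += np.exp(-x / v[i]) / (v[i] * np.prod(1 - others / v[i]))
    return out if x >= 0 else 0.0

# e.g. three complex Gaussians with variances 1, 2, 4
print(sum_abs_sq_pdf(2.0, [1.0, 2.0, 4.0]))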

If there are sets of repeated variances among σ i 2 {\displaystyle \sigma _{i}^{2}} , assume that they are divided into M sets, each representing a distinct variance value. Denote r = ( r 1 , r 2 , , r M ) {\displaystyle \mathbf {r} =(r_{1},r_{2},\dots ,r_{M})} to be the number of repetitions in each set, so that the mth set contains r m {\displaystyle r_{m}} variables that have variance σ m 2 {\displaystyle \sigma _{m}^{2}} . Then Q ~ {\displaystyle {\tilde {Q}}} represents a linear combination of independent χ 2 {\displaystyle \chi ^{2}} -distributed random variables with different degrees of freedom:

{\displaystyle {\tilde {Q}}=\sum _{m=1}^{M}{\frac {\sigma _{m}^{2}}{2}}Q_{m},\quad Q_{m}\sim \chi ^{2}(2r_{m}).}

The pdf of Q ~ {\displaystyle {\tilde {Q}}} is

f ( x ; r , σ 1 2 , σ M 2 ) = m = 1 M 1 σ m 2 r m k = 1 M l = 1 r k Ψ k , l , r ( r k l ) ! ( x ) r k l e x σ k 2 ,  for  x 0 , {\displaystyle f(x;\mathbf {r} ,\sigma _{1}^{2},\dots \sigma _{M}^{2})=\prod _{m=1}^{M}{\frac {1}{\sigma _{m}^{2r_{m}}}}\sum _{k=1}^{M}\sum _{l=1}^{r_{k}}{\frac {\Psi _{k,l,\mathbf {r} }}{(r_{k}-l)!}}(-x)^{r_{k}-l}e^{-{\frac {x}{\sigma _{k}^{2}}}},\quad {\text{ for }}x\geq 0,}

where

Ψ k , l , r = ( 1 ) r k 1 i Ω k , l j k ( i j + r j 1 i j ) ( 1 σ j 2 1 σ k 2 ) ( r j + i j ) , {\displaystyle \Psi _{k,l,\mathbf {r} }=(-1)^{r_{k}-1}\sum _{\mathbf {i} \in \Omega _{k,l}}\prod _{j\neq k}{\binom {i_{j}+r_{j}-1}{i_{j}}}\left({\frac {1}{\sigma _{j}^{2}}}\!-\!{\frac {1}{\sigma _{k}^{2}}}\right)^{-(r_{j}+i_{j})},}

with {\displaystyle \mathbf {i} =[i_{1},\ldots ,i_{M}]^{T}} from the set {\displaystyle \Omega _{k,l}} of all partitions of {\displaystyle l-1} (with {\displaystyle i_{k}=0} ) defined as

{\displaystyle \Omega _{k,l}=\left\{[i_{1},\ldots ,i_{M}]\in \mathbb {Z} ^{M};\;\sum _{j=1}^{M}i_{j}=l-1,\;i_{k}=0,\;i_{j}\geq 0{\text{ for all }}j\right\}.}
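
The nested sums above can be transcribed directly into code. Below is a minimal Python sketch (the names compositions and repeated_var_pdf are illustrative; it enumerates Ω_{k,l} by brute force, so it is only practical for small degrees of freedom):

import numpy as np
from math import comb, factorial

def compositions(total, parts):
    # all tuples of `parts` nonnegative integers summing to `total`
    if parts == 0:
        if total == 0:
            yield ()
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def repeated_var_pdf(x, r, var):
    # direct transcription of the pdf and Psi formulas above;
    # r[m] = repetitions of variance var[m] (the var[m] themselves are distinct)
    if x < 0:
        return 0.0
    M = len(r)
    pref = np.prod([var[m] ** (-r[m]) for m in range(M)])
    total = 0.0
    for k in range(M):
        others = [j for j in range(M) if j != k]
        for l in range(1, r[k] + 1):
            psi = 0.0
            for comp in compositions(l - 1, M - 1):   # the set Omega_{k,l}
                term = 1.0
                for j, ij in zip(others, comp):
                    term *= comb(ij + r[j] - 1, ij) * \
                            (1 / var[j] - 1 / var[k]) ** (-(r[j] + ij))
                psi += term
            psi *= (-1) ** (r[k] - 1)
            total += psi / factorial(r[k] - l) * (-x) ** (r[k] - l) \
                     * np.exp(-x / var[k])
    return pref * total

# e.g. two variables with variance 1 and one with variance 3
print(repeated_var_pdf(2.0, r=[2, 1], var=[1.0, 3.0]))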

References

  1. Davies, R. B. (1973). "Numerical inversion of a characteristic function". Biometrika. 60 (2): 415–417. doi:10.1093/biomet/60.2.415.
  2. Davies, R. B. (1980). "Algorithm AS155: The distribution of a linear combination of χ² random variables". Journal of the Royal Statistical Society, Series C (Applied Statistics). 29: 323–333. doi:10.2307/2346911.
  3. Jones, D. A. (1983). "Statistical analysis of empirical models fitted by optimisation". Biometrika. 70 (1): 67–88. doi:10.1093/biomet/70.1.67.
  4. Das, Abhranil; Geisler, Wilson S. (2020). "Methods to integrate multinormals and compute classification measures". arXiv:2012.14331.
  5. Sheil, J.; O'Muircheartaigh, I. (1977). "Algorithm AS106: The distribution of non-negative quadratic forms in normal variables". Journal of the Royal Statistical Society, Series C (Applied Statistics). 26 (1): 92–98. doi:10.2307/2346884.
  6. Das, Abhranil (2024). "New methods to compute the generalized chi-square distribution". arXiv:2404.05062.
  7. Ruben, Harold (1962). "Probability content of regions under spherical normal distributions, IV: The distribution of homogeneous and non-homogeneous quadratic functions of normal variables". The Annals of Mathematical Statistics: 542–570.
  8. Imhof, J. P. (1961). "Computing the distribution of quadratic forms in normal variables". Biometrika. 48 (3/4): 419–426. doi:10.2307/2332763. JSTOR 2332763.
  9. Hammarwall, D.; Bengtsson, M.; Ottersten, B. (2008). "Acquiring Partial CSI for Spatially Selective Transmission by Instantaneous Channel Norm Feedback". IEEE Transactions on Signal Processing. 56: 1188–1204.
  10. Björnson, E.; Hammarwall, D.; Ottersten, B. (2009). "Exploiting Quantized Channel Norm Feedback through Conditional Statistics in Arbitrarily Correlated MIMO Systems". IEEE Transactions on Signal Processing. 57: 4027–4041.
