
Complex normal distribution

Statistical distribution of complex random variables
Complex normal

Parameters: location $\mu \in \mathbb{C}^{n}$; covariance matrix $\Gamma \in \mathbb{C}^{n\times n}$ (positive semi-definite); relation matrix $C \in \mathbb{C}^{n\times n}$ (complex symmetric)
Support: $\mathbb{C}^{n}$
PDF: complicated, see text
Mean: $\mu$
Mode: $\mu$
Variance: $\Gamma$
CF: $\exp\!\big\{\, i\operatorname{Re}(\overline{w}'\mu) - \tfrac{1}{4}\big( \overline{w}'\Gamma w + \operatorname{Re}(\overline{w}'C\overline{w}) \big) \big\}$

In probability theory, the family of complex normal distributions, denoted $\mathcal{CN}$ or $\mathcal{N}_{\mathcal{C}}$, characterizes complex random variables whose real and imaginary parts are jointly normal. The complex normal family has three parameters: the location parameter $\mu$, the covariance matrix $\Gamma$, and the relation matrix $C$. The standard complex normal is the univariate distribution with $\mu = 0$, $\Gamma = 1$, and $C = 0$.

An important subclass of the complex normal family is the circularly-symmetric (central) complex normal, which corresponds to the case of zero mean and zero relation matrix: $\mu = 0$ and $C = 0$. This case is used extensively in signal processing, where it is sometimes referred to simply as the complex normal.

Definitions

Complex standard normal random variable

The standard complex normal random variable or standard complex Gaussian random variable is a complex random variable $Z$ whose real and imaginary parts are independent normally distributed random variables with mean zero and variance $1/2$. Formally,

$$Z \sim \mathcal{CN}(0,1) \quad \iff \quad \Re(Z) \perp\!\!\!\perp \Im(Z) \ \text{ and } \ \Re(Z) \sim \mathcal{N}(0,1/2) \ \text{ and } \ \Im(Z) \sim \mathcal{N}(0,1/2)$$ (Eq.1)

where $Z \sim \mathcal{CN}(0,1)$ denotes that $Z$ is a standard complex normal random variable.
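
As a quick numerical illustration, here is a minimal sketch (assuming NumPy; the seed and sample size are arbitrary) that draws standard complex normal samples from two independent real normals with variance $1/2$ and checks the second moments empirically:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 100_000
    # Z = X + iY with X, Y independent N(0, 1/2)
    z = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

    print(np.var(z.real), np.var(z.imag))   # each close to 1/2
    print(np.mean(np.abs(z) ** 2))          # E|Z|^2 = 1 (this is Gamma)
    print(np.mean(z ** 2))                  # E[Z^2] close to 0 (this is C)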

Complex normal random variable

Suppose $X$ and $Y$ are real random variables such that $(X, Y)^{\mathrm T}$ is a 2-dimensional normal random vector. Then the complex random variable $Z = X + iY$ is called a complex normal random variable or complex Gaussian random variable.

$$Z \text{ complex normal random variable} \quad \iff \quad (\Re(Z), \Im(Z))^{\mathrm T} \text{ real normal random vector}$$ (Eq.2)

Complex standard normal random vector

An n-dimensional complex random vector $\mathbf{Z} = (Z_1, \ldots, Z_n)^{\mathrm T}$ is a complex standard normal random vector or complex standard Gaussian random vector if its components are independent and all of them are standard complex normal random variables as defined above. That $\mathbf{Z}$ is a standard complex normal random vector is denoted $\mathbf{Z} \sim \mathcal{CN}(0, \boldsymbol{I}_n)$.

$$\mathbf{Z} \sim \mathcal{CN}(0, \boldsymbol{I}_n) \quad \iff \quad (Z_1, \ldots, Z_n) \text{ independent and for } 1 \le i \le n : Z_i \sim \mathcal{CN}(0,1)$$ (Eq.3)

Complex normal random vector

Suppose $\mathbf{X} = (X_1, \ldots, X_n)^{\mathrm T}$ and $\mathbf{Y} = (Y_1, \ldots, Y_n)^{\mathrm T}$ are random vectors in $\mathbb{R}^{n}$ such that the stacked vector $(\mathbf{X}^{\mathrm T}, \mathbf{Y}^{\mathrm T})^{\mathrm T}$ is a normal random vector with $2n$ components. Then we say that the complex random vector

$$\mathbf{Z} = \mathbf{X} + i\mathbf{Y}$$

is a complex normal random vector or a complex Gaussian random vector.

$$\mathbf{Z} \text{ complex normal random vector} \quad \iff \quad (\Re(\mathbf{Z}^{\mathrm T}), \Im(\mathbf{Z}^{\mathrm T}))^{\mathrm T} = (\Re(Z_1), \ldots, \Re(Z_n), \Im(Z_1), \ldots, \Im(Z_n))^{\mathrm T} \text{ real normal random vector}$$ (Eq.4)
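
A minimal sketch (assuming NumPy) of this definition: draw the stacked real vector $(\mathbf{X}^{\mathrm T}, \mathbf{Y}^{\mathrm T})^{\mathrm T}$ from any $2n$-dimensional real Gaussian and set $\mathbf{Z} = \mathbf{X} + i\mathbf{Y}$. The $2n \times 2n$ covariance below is an arbitrary illustrative choice, not a prescribed one.

    import numpy as np

    rng = np.random.default_rng(1)
    n, N = 2, 50_000
    A = rng.standard_normal((2 * n, 2 * n))
    cov_real = A @ A.T                        # any symmetric positive semi-definite 2n x 2n matrix
    xy = rng.multivariate_normal(np.zeros(2 * n), cov_real, size=N)
    Z = xy[:, :n] + 1j * xy[:, n:]            # each row is one complex normal random vector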

Mean, covariance, and relation

The complex Gaussian distribution can be described with 3 parameters:

$$\mu = \operatorname{E}[\mathbf{Z}], \quad \Gamma = \operatorname{E}[(\mathbf{Z}-\mu)(\mathbf{Z}-\mu)^{\mathrm H}], \quad C = \operatorname{E}[(\mathbf{Z}-\mu)(\mathbf{Z}-\mu)^{\mathrm T}],$$

where $\mathbf{Z}^{\mathrm T}$ denotes the matrix transpose of $\mathbf{Z}$, and $\mathbf{Z}^{\mathrm H}$ denotes the conjugate transpose.

Here the location parameter $\mu$ is an n-dimensional complex vector; the covariance matrix $\Gamma$ is Hermitian and non-negative definite; and the relation matrix or pseudo-covariance matrix $C$ is symmetric. The complex normal random vector $\mathbf{Z}$ can now be denoted as $\mathbf{Z} \sim \mathcal{CN}(\mu,\ \Gamma,\ C)$. Moreover, the matrices $\Gamma$ and $C$ are such that the matrix

$$P = \overline{\Gamma} - C^{\mathrm H}\Gamma^{-1}C$$

is also non-negative definite, where $\overline{\Gamma}$ denotes the complex conjugate of $\Gamma$.
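
The three parameters and the matrix $P$ can be estimated from samples. Below is a minimal sketch (assuming NumPy; the data are drawn from an arbitrary illustrative real Gaussian for the stacked $(\mathbf{X}, \mathbf{Y})$ vector):

    import numpy as np

    rng = np.random.default_rng(2)
    n, N = 2, 200_000
    A = rng.standard_normal((2 * n, 2 * n))
    xy = rng.multivariate_normal(np.ones(2 * n), A @ A.T, size=N)
    Z = xy[:, :n] + 1j * xy[:, n:]

    mu_hat = Z.mean(axis=0)
    Zc = Z - mu_hat
    Gamma_hat = Zc.T @ Zc.conj() / N          # E[(Z - mu)(Z - mu)^H]
    C_hat = Zc.T @ Zc / N                     # E[(Z - mu)(Z - mu)^T]
    P_hat = Gamma_hat.conj() - C_hat.conj().T @ np.linalg.solve(Gamma_hat, C_hat)
    print(np.linalg.eigvalsh((P_hat + P_hat.conj().T) / 2))   # non-negative, up to sampling noise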

Relationships between covariance matrices

Main article: Complex random vector § Covariance matrix and pseudo-covariance matrix

As for any complex random vector, the matrices $\Gamma$ and $C$ can be related to the covariance matrices of $\mathbf{X} = \Re(\mathbf{Z})$ and $\mathbf{Y} = \Im(\mathbf{Z})$ via the expressions

$$\begin{aligned}
&V_{XX} \equiv \operatorname{E}[(\mathbf{X}-\mu_X)(\mathbf{X}-\mu_X)^{\mathrm T}] = \tfrac{1}{2}\operatorname{Re}[\Gamma + C], \quad
V_{XY} \equiv \operatorname{E}[(\mathbf{X}-\mu_X)(\mathbf{Y}-\mu_Y)^{\mathrm T}] = \tfrac{1}{2}\operatorname{Im}[-\Gamma + C], \\
&V_{YX} \equiv \operatorname{E}[(\mathbf{Y}-\mu_Y)(\mathbf{X}-\mu_X)^{\mathrm T}] = \tfrac{1}{2}\operatorname{Im}[\Gamma + C], \quad
V_{YY} \equiv \operatorname{E}[(\mathbf{Y}-\mu_Y)(\mathbf{Y}-\mu_Y)^{\mathrm T}] = \tfrac{1}{2}\operatorname{Re}[\Gamma - C],
\end{aligned}$$

and conversely

$$\begin{aligned}
&\Gamma = V_{XX} + V_{YY} + i(V_{YX} - V_{XY}), \\
&C = V_{XX} - V_{YY} + i(V_{YX} + V_{XY}).
\end{aligned}$$
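
A minimal sketch (assuming NumPy) of the conversion in both directions; the particular $\Gamma$ (Hermitian, positive definite) and $C$ (complex symmetric) below are just an illustrative valid pair:

    import numpy as np

    Gamma = np.array([[2.0, 0.5 + 0.5j], [0.5 - 0.5j, 1.0]])   # Hermitian, positive definite
    C = np.array([[0.3, 0.2j], [0.2j, 0.1]])                    # complex symmetric

    V_XX = 0.5 * np.real(Gamma + C)
    V_XY = 0.5 * np.imag(-Gamma + C)
    V_YX = 0.5 * np.imag(Gamma + C)
    V_YY = 0.5 * np.real(Gamma - C)

    # and back again
    Gamma_back = V_XX + V_YY + 1j * (V_YX - V_XY)
    C_back = V_XX - V_YY + 1j * (V_YX + V_XY)
    print(np.allclose(Gamma, Gamma_back), np.allclose(C, C_back))   # True True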

Density function

The probability density function for the complex normal distribution can be computed as

$$\begin{aligned}
f(z) &= \frac{1}{\pi^{n}\sqrt{\det(\Gamma)\det(P)}} \exp\!\left\{ -\frac{1}{2} \begin{pmatrix} (\overline{z}-\overline{\mu})^{\intercal}, & (z-\mu)^{\intercal} \end{pmatrix} \begin{pmatrix} \Gamma & C \\ \overline{C} & \overline{\Gamma} \end{pmatrix}^{-1} \begin{pmatrix} z-\mu \\ \overline{z}-\overline{\mu} \end{pmatrix} \right\} \\
&= \frac{\sqrt{\det\!\left(\overline{P^{-1}} - R^{\ast}P^{-1}R\right)\det(P^{-1})}}{\pi^{n}} \, e^{-(z-\mu)^{\ast}\overline{P^{-1}}(z-\mu) + \operatorname{Re}\left((z-\mu)^{\intercal}R^{\intercal}\overline{P^{-1}}(z-\mu)\right)},
\end{aligned}$$

where $R = C^{\mathrm H}\Gamma^{-1}$ and $P = \overline{\Gamma} - RC$.
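
The first form of the density can be evaluated directly from the augmented matrix $\begin{pmatrix}\Gamma & C \\ \overline{C} & \overline{\Gamma}\end{pmatrix}$. A minimal sketch (assuming NumPy; the parameter values are illustrative):

    import numpy as np

    # Illustrative parameters: Gamma Hermitian positive definite, C complex symmetric,
    # chosen so that P (and hence the augmented matrix) is positive definite.
    mu = np.array([1.0 + 1.0j, -0.5j])
    Gamma = np.array([[2.0, 0.5 + 0.5j], [0.5 - 0.5j, 1.0]])
    C = np.array([[0.3, 0.1j], [0.1j, 0.2]])

    def complex_normal_pdf(z, mu, Gamma, C):
        n = len(mu)
        d = z - mu
        aug = np.block([[Gamma, C], [C.conj(), Gamma.conj()]])     # augmented covariance
        P = Gamma.conj() - C.conj().T @ np.linalg.solve(Gamma, C)
        row = np.concatenate([d.conj(), d])                        # row vector (conj(z - mu), z - mu)
        col = np.concatenate([d, d.conj()])                        # column (z - mu ; conj(z - mu))
        quad = row @ np.linalg.solve(aug, col)
        norm = np.pi ** n * np.sqrt(np.linalg.det(Gamma) * np.linalg.det(P)).real
        return np.exp(-0.5 * quad).real / norm

    print(complex_normal_pdf(mu, mu, Gamma, C))                    # density at the mean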

Characteristic function

The characteristic function of the complex normal distribution is given by

$$\varphi(w) = \exp\!\left\{ i\operatorname{Re}(\overline{w}'\mu) - \tfrac{1}{4}\left( \overline{w}'\Gamma w + \operatorname{Re}(\overline{w}'C\overline{w}) \right) \right\},$$

where the argument $w$ is an n-dimensional complex vector.
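
A minimal sketch (assuming NumPy) of evaluating the characteristic function at a given argument $w$; the parameter values are illustrative:

    import numpy as np

    mu = np.array([1.0 + 1.0j, -0.5j])
    Gamma = np.array([[2.0, 0.5 + 0.5j], [0.5 - 0.5j, 1.0]])
    C = np.array([[0.3, 0.1j], [0.1j, 0.2]])

    def complex_normal_cf(w, mu, Gamma, C):
        wc = w.conj()
        # w^H Gamma w is real for Hermitian Gamma; Re() only discards rounding noise
        return np.exp(1j * np.real(wc @ mu)
                      - 0.25 * (np.real(wc @ Gamma @ w) + np.real(wc @ C @ wc)))

    print(complex_normal_cf(np.zeros(2, dtype=complex), mu, Gamma, C))   # equals 1 at w = 0
    print(abs(complex_normal_cf(np.array([1.0, 1.0j]), mu, Gamma, C)))   # magnitude <= 1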

Properties

  • If $\mathbf{Z}$ is a complex normal n-vector, $\boldsymbol{A}$ an m×n matrix, and $b$ a constant m-vector, then the linear transform $\boldsymbol{A}\mathbf{Z} + b$ is also complex-normally distributed (a numerical check is sketched after this list):
$$\mathbf{Z} \sim \mathcal{CN}(\mu,\,\Gamma,\,C) \quad \Rightarrow \quad A\mathbf{Z}+b \sim \mathcal{CN}(A\mu+b,\,A\Gamma A^{\mathrm H},\,ACA^{\mathrm T})$$
  • If $\mathbf{Z}$ is a complex normal n-vector, then
$$2\left[ (\mathbf{Z}-\mu)^{\mathrm H}\overline{P^{-1}}(\mathbf{Z}-\mu) - \operatorname{Re}\left( (\mathbf{Z}-\mu)^{\mathrm T}R^{\mathrm T}\overline{P^{-1}}(\mathbf{Z}-\mu) \right) \right] \ \sim \ \chi^{2}(2n)$$
  • Central limit theorem. If $Z_1, \ldots, Z_T$ are independent and identically distributed complex random variables, then
$$\sqrt{T}\left( \tfrac{1}{T}\textstyle\sum_{t=1}^{T} Z_t - \operatorname{E}[Z_t] \right) \ \xrightarrow{d} \ \mathcal{CN}(0,\,\Gamma,\,C),$$
where $\Gamma = \operatorname{E}[ZZ^{\mathrm H}]$ and $C = \operatorname{E}[ZZ^{\mathrm T}]$.
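
A minimal sketch (assuming NumPy) of a Monte Carlo check of the first property: samples of $\mathbf{Z}$ are drawn through the real 2n-dimensional representation built from the block relations given earlier, transformed by an arbitrary $A$ and $b$, and the empirical $\Gamma$ and $C$ of $A\mathbf{Z}+b$ are compared with $A\Gamma A^{\mathrm H}$ and $ACA^{\mathrm T}$:

    import numpy as np

    rng = np.random.default_rng(3)
    n, m, N = 2, 3, 300_000

    mu = np.array([1.0, -1.0j])
    Gamma = np.array([[2.0, 0.5 + 0.5j], [0.5 - 0.5j, 1.0]])
    C = np.array([[0.3, 0.1j], [0.1j, 0.2]])

    # real covariance of (X, Y) built from Gamma and C via the block relations above
    V = 0.5 * np.block([[np.real(Gamma + C), np.imag(-Gamma + C)],
                        [np.imag(Gamma + C), np.real(Gamma - C)]])
    xy = rng.multivariate_normal(np.concatenate([mu.real, mu.imag]), V, size=N)
    Z = xy[:, :n] + 1j * xy[:, n:]

    A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
    b = np.array([1.0, 2.0j, 0.0])
    W = Z @ A.T + b                                   # rows are samples of A Z + b

    Wc = W - W.mean(axis=0)
    print(np.abs(Wc.T @ Wc.conj() / N - A @ Gamma @ A.conj().T).max())  # small, shrinks with N
    print(np.abs(Wc.T @ Wc / N - A @ C @ A.T).max())                    # small, shrinks with N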

Circularly-symmetric central case

Definition

A complex random vector $\mathbf{Z}$ is called circularly symmetric if for every deterministic $\varphi \in [-\pi, \pi)$ the distribution of $e^{i\varphi}\mathbf{Z}$ equals the distribution of $\mathbf{Z}$.

Main article: Complex random vector § Circular symmetry

Central normal complex random vectors that are circularly symmetric are of particular interest because they are fully specified by the covariance matrix $\Gamma$.

The circularly-symmetric (central) complex normal distribution corresponds to the case of zero mean and zero relation matrix, i.e. $\mu = 0$ and $C = 0$. This is usually denoted

$$\mathbf{Z} \sim \mathcal{CN}(0,\,\Gamma)$$

Distribution of real and imaginary parts

If $\mathbf{Z} = \mathbf{X} + i\mathbf{Y}$ is circularly-symmetric (central) complex normal, then the stacked vector $(\mathbf{X}^{\mathrm T}, \mathbf{Y}^{\mathrm T})^{\mathrm T}$ is multivariate normal with covariance structure

$$\begin{pmatrix} \mathbf{X} \\ \mathbf{Y} \end{pmatrix} \ \sim \ \mathcal{N}\!\left( \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \ \tfrac{1}{2}\begin{bmatrix} \operatorname{Re}\Gamma & -\operatorname{Im}\Gamma \\ \operatorname{Im}\Gamma & \operatorname{Re}\Gamma \end{bmatrix} \right)$$

where $\Gamma = \operatorname{E}[\mathbf{Z}\mathbf{Z}^{\mathrm H}]$.
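
A minimal sketch (assuming NumPy) that samples a circularly-symmetric $\mathcal{CN}(0, \Gamma)$ vector through exactly this real representation and confirms that the empirical relation matrix vanishes; the $\Gamma$ used is illustrative:

    import numpy as np

    rng = np.random.default_rng(4)
    n, N = 2, 200_000
    Gamma = np.array([[2.0, 1.0 + 1.0j], [1.0 - 1.0j, 3.0]])   # Hermitian, positive definite

    V = 0.5 * np.block([[Gamma.real, -Gamma.imag],
                        [Gamma.imag,  Gamma.real]])
    xy = rng.multivariate_normal(np.zeros(2 * n), V, size=N)
    Z = xy[:, :n] + 1j * xy[:, n:]

    print(np.abs(Z.T @ Z.conj() / N - Gamma).max())   # recovers Gamma, up to sampling noise
    print(np.abs(Z.T @ Z / N).max())                  # relation matrix C is ~0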

Probability density function

For a nonsingular covariance matrix $\Gamma$, the distribution of $\mathbf{Z}$ can also be simplified as

$$f_{\mathbf{Z}}(\mathbf{z}) = \frac{1}{\pi^{n}\det(\Gamma)}\, e^{-(\mathbf{z}-\mu)^{\mathrm H}\Gamma^{-1}(\mathbf{z}-\mu)}.$$

Therefore, if the non-zero mean $\mu$ and covariance matrix $\Gamma$ are unknown, a suitable log likelihood function for a single observation vector $z$ would be

$$\ln(L(\mu,\Gamma)) = -\ln(\det(\Gamma)) - \overline{(z-\mu)}'\,\Gamma^{-1}(z-\mu) - n\ln(\pi).$$

The standard complex normal (defined in Eq.1) corresponds to the distribution of a scalar random variable with $\mu = 0$, $C = 0$ and $\Gamma = 1$. Thus, the standard complex normal distribution has density

$$f_Z(z) = \tfrac{1}{\pi} e^{-\overline{z}z} = \tfrac{1}{\pi} e^{-|z|^{2}}.$$
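
A minimal sketch (assuming NumPy) of the circularly-symmetric density in log form, checked against the standard scalar case above, where $\ln f_Z(z) = -|z|^2 - \ln\pi$:

    import numpy as np

    def circular_cn_logpdf(z, mu, Gamma):
        """Log-density of CN(mu, Gamma) with zero relation matrix."""
        d = z - mu
        n = len(mu)
        quad = np.real(d.conj() @ np.linalg.solve(Gamma, d))
        return -np.log(np.real(np.linalg.det(Gamma))) - quad - n * np.log(np.pi)

    z = np.array([0.3 + 0.4j])
    print(circular_cn_logpdf(z, np.zeros(1), np.eye(1)))   # equals -|z|^2 - log(pi)
    print(-abs(z[0]) ** 2 - np.log(np.pi))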

Properties

The above expression demonstrates why the case $C = 0$, $\mu = 0$ is called "circularly-symmetric": the density function depends only on the magnitude of $z$, not on its argument. As such, the magnitude $|z|$ of a standard complex normal random variable has the Rayleigh distribution, the squared magnitude $|z|^{2}$ has the exponential distribution, and the argument is distributed uniformly on $[-\pi, \pi]$.
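
These three facts are easy to check by simulation. A minimal sketch (assuming NumPy; seed and sample size are arbitrary):

    import numpy as np

    rng = np.random.default_rng(5)
    N = 200_000
    z = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

    print(np.mean(np.abs(z) ** 2))                     # |z|^2: exponential, mean 1
    print(np.mean(np.abs(z)))                          # |z|: Rayleigh, mean sqrt(pi)/2 ~ 0.886
    print(np.mean(np.angle(z)), np.var(np.angle(z)))   # arg: uniform on [-pi, pi), mean ~0, var ~pi^2/3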

If $\{\mathbf{Z}_1, \ldots, \mathbf{Z}_k\}$ are independent and identically distributed n-dimensional circular complex normal random vectors with $\mu = 0$, then the random squared norm

$$Q = \sum_{j=1}^{k} \mathbf{Z}_j^{\mathrm H}\mathbf{Z}_j = \sum_{j=1}^{k} \|\mathbf{Z}_j\|^{2}$$

has the generalized chi-squared distribution and the random matrix

$$W = \sum_{j=1}^{k} \mathbf{Z}_j\mathbf{Z}_j^{\mathrm H}$$

has the complex Wishart distribution with $k$ degrees of freedom. This distribution can be described by the density function

$$f(w) = \frac{\det(\Gamma^{-1})^{k}\det(w)^{k-n}}{\pi^{n(n-1)/2}\prod_{j=1}^{n}(k-j)!}\ e^{-\operatorname{tr}(\Gamma^{-1}w)}$$

where $k \geq n$, and $w$ is an $n \times n$ nonnegative-definite matrix.
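
A minimal sketch (assuming NumPy) that forms a complex Wishart matrix from $k$ independent $\mathcal{CN}(0, \Gamma)$ vectors; the $\Gamma$ and $k$ below are illustrative:

    import numpy as np

    rng = np.random.default_rng(6)
    n, k = 2, 10
    Gamma = np.array([[2.0, 1.0 + 1.0j], [1.0 - 1.0j, 3.0]])
    L = np.linalg.cholesky(Gamma)                     # Gamma = L L^H

    # k independent CN(0, Gamma) rows: transform standard complex normal vectors by L
    U = (rng.standard_normal((k, n)) + 1j * rng.standard_normal((k, n))) / np.sqrt(2)
    Zs = U @ L.T                                      # each row has E[z z^H] = L L^H = Gamma

    W = Zs.T @ Zs.conj()                              # sum_j Z_j Z_j^H
    print(W)                                          # Hermitian, nonnegative definite; E[W] = k * Gamma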
