Directional statistics

Article snapshot taken from Wikipedia, available under the Creative Commons Attribution-ShareAlike license.

Directional statistics (also circular statistics or spherical statistics) is the subdiscipline of statistics that deals with directions (unit vectors in Euclidean space R^n), axes (lines through the origin in R^n) or rotations in R^n. More generally, directional statistics deals with observations on compact Riemannian manifolds including the Stiefel manifold.

The overall shape of a protein can be parameterized as a sequence of points on the unit sphere. Shown are two views of the spherical histogram of such points for a large collection of protein structures. The statistical treatment of such data is in the realm of directional statistics.

The fact that 0 degrees and 360 degrees are identical angles, so that for example 180 degrees is not a sensible mean of 2 degrees and 358 degrees, provides one illustration that special statistical methods are required for the analysis of some types of data (in this case, angular data). Other examples of data that may be regarded as directional include statistics involving temporal periods (e.g. time of day, week, month, year, etc.), compass directions, dihedral angles in molecules, orientations, rotations and so on.
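
To see this numerically: averaging 2 degrees and 358 degrees as plain numbers gives 180 degrees, while averaging them as unit vectors gives the expected 0 degrees. A minimal sketch, assuming NumPy is available:

    import numpy as np

    angles_deg = np.array([2.0, 358.0])
    print(angles_deg.mean())  # 180.0 -- the naive arithmetic mean, clearly wrong for directions

    # Treat each angle as a unit vector, average the vectors, then take the angle of the result.
    theta = np.deg2rad(angles_deg)
    mean_vector = np.exp(1j * theta).mean()
    print(np.rad2deg(np.angle(mean_vector)) % 360)  # 0.0, the sensible circular mean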

Circular distributions

Main article: Circular distribution

Any probability density function (pdf) p(x) on the line can be "wrapped" around the circumference of a circle of unit radius. That is, the pdf of the wrapped variable θ = x_w = x mod 2π, taking values in (−π, π], is

p_w(\theta) = \sum_{k=-\infty}^{\infty} p(\theta + 2\pi k).

This concept can be extended to the multivariate context by replacing the single sum with F sums, one for each dimension of the feature space:

p_w(\boldsymbol{\theta}) = \sum_{k_1=-\infty}^{\infty} \cdots \sum_{k_F=-\infty}^{\infty} p(\boldsymbol{\theta} + 2\pi k_1 \mathbf{e}_1 + \dots + 2\pi k_F \mathbf{e}_F)

where \mathbf{e}_k = (0, \dots, 0, 1, 0, \dots, 0)^{\mathsf{T}} is the k-th Euclidean basis vector.
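
In practice the wrapping sum is truncated after a modest number of terms. The sketch below is an illustration of the univariate case, not a library routine; the normal density from SciPy is used only as an example base pdf, and the truncation limit k_max is an arbitrary choice:

    import numpy as np
    from scipy.stats import norm

    def wrapped_pdf(theta, linear_pdf, k_max=50):
        """Approximate p_w(theta) = sum_k p(theta + 2*pi*k) by truncating the sum at |k| <= k_max."""
        k = np.arange(-k_max, k_max + 1)
        return linear_pdf(np.asarray(theta)[..., None] + 2 * np.pi * k).sum(axis=-1)

    theta = np.linspace(-np.pi, np.pi, 2000, endpoint=False)
    pw = wrapped_pdf(theta, norm(loc=0.5, scale=1.0).pdf)

    # The wrapped density is still a density: it integrates to 1 over one period (-pi, pi].
    print(pw.sum() * (2 * np.pi / len(theta)))  # ~1.0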

The following sections show some relevant circular distributions.

von Mises circular distribution

Main article: von Mises distribution

The von Mises distribution is a circular distribution which, like any other circular distribution, may be thought of as a wrapping of a certain linear probability distribution around the circle. The underlying linear probability distribution for the von Mises distribution is mathematically intractable; however, for statistical purposes, there is no need to deal with the underlying linear distribution. The usefulness of the von Mises distribution is twofold: it is the most mathematically tractable of all circular distributions, allowing simpler statistical analysis, and it is a close approximation to the wrapped normal distribution, which, analogously to the linear normal distribution, is important because it is the limiting case for the sum of a large number of small angular deviations. In fact, the von Mises distribution is often known as the "circular normal" distribution because of its ease of use and its close relationship to the wrapped normal distribution.

The pdf of the von Mises distribution is

f(\theta; \mu, \kappa) = \frac{e^{\kappa \cos(\theta - \mu)}}{2\pi I_0(\kappa)}

where I_0 is the modified Bessel function of order 0.
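
A minimal evaluation of this density, using scipy.special.i0 for the Bessel function and NumPy's built-in von Mises sampler as a sanity check (parameter values here are arbitrary):

    import numpy as np
    from scipy.special import i0  # modified Bessel function of the first kind, order 0

    def vonmises_pdf(theta, mu, kappa):
        """f(theta; mu, kappa) = exp(kappa * cos(theta - mu)) / (2*pi*I0(kappa))."""
        return np.exp(kappa * np.cos(theta - mu)) / (2 * np.pi * i0(kappa))

    mu, kappa = 0.5, 4.0
    rng = np.random.default_rng(0)
    samples = rng.vonmises(mu, kappa, size=200_000)  # angles in (-pi, pi]

    # A normalized histogram of the samples should track the analytic density closely.
    hist, edges = np.histogram(samples, bins=72, range=(-np.pi, np.pi), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    print(np.max(np.abs(hist - vonmises_pdf(centers, mu, kappa))))  # small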

Circular uniform distribution

Main article: Circular uniform distribution

The probability density function (pdf) of the circular uniform distribution is given by

U(\theta) = \frac{1}{2\pi}.

It can also be thought of as the κ = 0 limit of the von Mises distribution above.

Wrapped normal distribution

Main article: Wrapped normal distribution

The pdf of the wrapped normal distribution (WN) is

WN(\theta; \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \sum_{k=-\infty}^{\infty} \exp\left[\frac{-(\theta - \mu - 2\pi k)^2}{2\sigma^2}\right] = \frac{1}{2\pi} \vartheta\left(\frac{\theta - \mu}{2\pi}, \frac{i\sigma^2}{2\pi}\right)

where μ and σ are the mean and standard deviation of the unwrapped distribution, respectively, and \vartheta(\theta, \tau) is the Jacobi theta function:

\vartheta(\theta, \tau) = \sum_{n=-\infty}^{\infty} (w^2)^n q^{n^2}

where w \equiv e^{i\pi\theta} and q \equiv e^{i\pi\tau}.
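
Both forms are easy to evaluate numerically. The sketch below is illustrative only: it compares a truncated wrapping sum with the theta-function form, using the fact that for τ = iσ²/(2π) the nome is q = exp(−σ²/2), so that ϑ((θ − μ)/(2π), τ) equals the classical θ₃((θ − μ)/2, q) provided by mpmath (assumed available):

    import numpy as np
    from mpmath import jtheta  # Jacobi theta functions

    def wrapped_normal_sum(theta, mu, sigma, k_max=20):
        """Truncated version of the wrapping sum for the WN density."""
        k = np.arange(-k_max, k_max + 1)
        x = theta - mu - 2 * np.pi * k
        return np.exp(-x**2 / (2 * sigma**2)).sum() / (sigma * np.sqrt(2 * np.pi))

    theta, mu, sigma = 1.2, 0.3, 0.8
    direct = wrapped_normal_sum(theta, mu, sigma)
    via_theta = float(jtheta(3, (theta - mu) / 2, np.exp(-sigma**2 / 2))) / (2 * np.pi)

    print(direct, via_theta)  # the two evaluations agree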

Wrapped Cauchy distribution

Main article: Wrapped Cauchy distribution

The pdf of the wrapped Cauchy distribution (WC) is

WC(\theta; \theta_0, \gamma) = \sum_{n=-\infty}^{\infty} \frac{\gamma}{\pi\left(\gamma^2 + (\theta + 2\pi n - \theta_0)^2\right)} = \frac{1}{2\pi}\,\frac{\sinh\gamma}{\cosh\gamma - \cos(\theta - \theta_0)}

where γ is the scale factor and θ_0 is the peak position.
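
The closed form corresponds to scipy.stats.wrapcauchy with shape parameter c = exp(−γ) on the interval [0, 2π); a quick numerical check (a sketch with arbitrary parameter values):

    import numpy as np
    from scipy.stats import wrapcauchy

    def wrapped_cauchy_pdf(theta, theta0, gamma):
        """Closed-form WC density: sinh(gamma) / (2*pi*(cosh(gamma) - cos(theta - theta0)))."""
        return np.sinh(gamma) / (np.cosh(gamma) - np.cos(theta - theta0)) / (2 * np.pi)

    gamma, theta0 = 0.5, 1.0
    theta = np.linspace(-np.pi, np.pi, 9, endpoint=False)

    ours = wrapped_cauchy_pdf(theta, theta0, gamma)
    # SciPy's wrapcauchy is supported on [0, 2*pi), so shift the angles before comparing.
    scipys = wrapcauchy.pdf((theta - theta0) % (2 * np.pi), c=np.exp(-gamma))
    print(np.max(np.abs(ours - scipys)))  # ~0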

Wrapped Lévy distribution

Main article: Wrapped Lévy distribution

The pdf of the wrapped Lévy distribution (WL) is

f_{WL}(\theta; \mu, c) = \sum_{n=-\infty}^{\infty} \sqrt{\frac{c}{2\pi}}\,\frac{e^{-c/(2(\theta + 2\pi n - \mu))}}{(\theta + 2\pi n - \mu)^{3/2}}

where the value of the summand is taken to be zero when θ + 2πn − μ ≤ 0, c is the scale factor and μ is the location parameter.

Projected normal distribution

Main article: Projected normal distribution

The projected normal distribution is a circular distribution representing the direction of a random variable with a multivariate normal distribution, obtained by radially projecting the variable onto the unit (n − 1)-sphere. Because of this, and unlike other commonly used circular distributions, it is in general neither symmetric nor unimodal.
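
Sampling from a projected normal is straightforward: draw from the underlying multivariate normal and keep only the direction of each draw. A sketch for the circular (n = 2) case, with an arbitrary mean and covariance chosen so that the asymmetry is visible:

    import numpy as np

    rng = np.random.default_rng(1)

    # Bivariate normal with a non-zero mean and correlated, unequal-variance components.
    mean = np.array([0.5, 0.2])
    cov = np.array([[1.0, 0.6],
                    [0.6, 2.0]])
    xy = rng.multivariate_normal(mean, cov, size=100_000)

    # Radial projection onto the unit circle: only the angle of each point is kept.
    angles = np.arctan2(xy[:, 1], xy[:, 0])  # in (-pi, pi]

    hist, edges = np.histogram(angles, bins=72, range=(-np.pi, np.pi), density=True)
    print(edges[hist.argmax()])  # left edge of the histogram's highest bin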

Distributions on higher-dimensional manifolds

Three point sets sampled from different Kent distributions on the sphere.

There also exist distributions on the two-dimensional sphere (such as the Kent distribution), the N-dimensional sphere (the von Mises–Fisher distribution) or the torus (the bivariate von Mises distribution).

The matrix von Mises–Fisher distribution is a distribution on the Stiefel manifold, and can be used to construct probability distributions over rotation matrices.

The Bingham distribution is a distribution over axes in N dimensions, or equivalently, over points on the (N − 1)-dimensional sphere with the antipodes identified. For example, if N = 2, the axes are undirected lines through the origin in the plane. In this case, each axis cuts the unit circle in the plane (which is the one-dimensional sphere) at two points that are each other's antipodes. For N = 4, the Bingham distribution is a distribution over the space of unit quaternions (versors). Since a versor corresponds to a rotation matrix, the Bingham distribution for N = 4 can be used to construct probability distributions over the space of rotations, just like the matrix von Mises–Fisher distribution.

These distributions are used, for example, in geology, crystallography and bioinformatics.

Moments

The raw vector (or trigonometric) moments of a circular distribution are defined as

m_n = \operatorname{E}(z^n) = \int_\Gamma P(\theta) z^n \, d\theta

where Γ is any interval of length 2π, P(θ) is the pdf of the circular distribution, and z = e^{iθ}. Since |z^n| = 1 and the integral of P(θ) over Γ is unity, the moments of any circular distribution are always finite and well defined.

Sample moments are analogously defined:

\overline{m}_n = \frac{1}{N} \sum_{i=1}^{N} z_i^n.

The population resultant vector, length, and mean angle are defined in analogy with the corresponding sample parameters.

\rho = m_1
R = |m_1|
\theta_n = \operatorname{Arg}(m_n).

In addition, the lengths of the higher moments are defined as:

R_n = |m_n|

while the angular parts of the higher moments are just (n θ_n) mod 2π. The lengths of all moments will lie between 0 and 1.
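
In practice the sample moments, resultant length, and mean direction are computed directly from the observed angles. A minimal sketch (the von Mises sample is only example data):

    import numpy as np

    def sample_moment(theta, n):
        """n-th raw trigonometric sample moment of a set of angles (radians)."""
        return np.mean(np.exp(1j * n * np.asarray(theta)))

    rng = np.random.default_rng(2)
    theta = rng.vonmises(0.8, 2.0, size=10_000)

    m1 = sample_moment(theta, 1)
    m2 = sample_moment(theta, 2)

    R1_bar = np.abs(m1)        # mean resultant length, always in [0, 1]
    theta1_bar = np.angle(m1)  # sample mean direction
    R2_bar = np.abs(m2)        # length of the second moment

    print(R1_bar, theta1_bar, R2_bar)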

Measures of location and spread

Various measures of central tendency and statistical dispersion may be defined for both the population and a sample drawn from that population.

Central tendency

Further information: Circular mean

The most common measure of location is the circular mean. The population circular mean is simply the first moment of the distribution while the sample mean is the first moment of the sample. The sample mean will serve as an unbiased estimator of the population mean.

When data is concentrated, the median and mode may be defined by analogy to the linear case, but for more dispersed or multi-modal data, these concepts are not useful.
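
SciPy exposes the circular mean directly; the sketch below checks that scipy.stats.circmean agrees with the first-moment definition above (the high/low arguments only set the interval on which the result is reported):

    import numpy as np
    from scipy.stats import circmean

    rng = np.random.default_rng(3)
    theta = rng.vonmises(1.0, 3.0, size=5_000)

    # First-moment definition: the argument of the mean unit vector.
    mean_from_moment = np.angle(np.mean(np.exp(1j * theta)))

    # Library routine, reported on (-pi, pi] for comparison.
    mean_from_scipy = circmean(theta, high=np.pi, low=-np.pi)

    print(mean_from_moment, mean_from_scipy)  # agree closely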

Dispersion

See also: Yamartino method

The most common measures of circular spread are listed below; the sketch after the list computes all three from a sample.

  • The circular variance. For a sample it is defined as \overline{\operatorname{Var}(z)} = 1 - \overline{R}, and for the population as \operatorname{Var}(z) = 1 - R. Both will have values between 0 and 1.
  • The circular standard deviation, S(z) = \sqrt{\ln(1/R^2)} = \sqrt{-2\ln(R)} for the population and \overline{S}(z) = \sqrt{\ln(1/\overline{R}^2)} = \sqrt{-2\ln(\overline{R})} for a sample, with values between 0 and infinity. This definition of the standard deviation (rather than the square root of the variance) is useful because, for a wrapped normal distribution, it is an estimator of the standard deviation of the underlying normal distribution. It therefore allows the circular distribution to be standardized as in the linear case, for small values of the standard deviation. This also applies to the von Mises distribution, which closely approximates the wrapped normal distribution. Note that for small S(z), we have S(z)^2 = 2\operatorname{Var}(z).
  • The circular dispersion, \delta = \frac{1 - R_2}{2R^2} for the population and \overline{\delta} = \frac{1 - \overline{R}_2}{2\overline{R}^2} for a sample, with values between 0 and infinity. This measure of spread is useful in the statistical analysis of variance.
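
All three measures follow from the first two sample moments, as in this sketch (example data only):

    import numpy as np

    rng = np.random.default_rng(4)
    theta = rng.vonmises(0.0, 2.5, size=20_000)

    m1 = np.mean(np.exp(1j * theta))
    m2 = np.mean(np.exp(2j * theta))
    R_bar, R2_bar = np.abs(m1), np.abs(m2)

    circ_variance = 1.0 - R_bar                         # in [0, 1]
    circ_std = np.sqrt(-2.0 * np.log(R_bar))            # in [0, inf)
    circ_dispersion = (1.0 - R2_bar) / (2.0 * R_bar**2)

    print(circ_variance, circ_std, circ_dispersion)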

Distribution of the mean

Given a set of N measurements z_n = e^{iθ_n}, the mean value of z is defined as:

\overline{z} = \frac{1}{N} \sum_{n=1}^{N} z_n

which may be expressed as

\overline{z} = \overline{C} + i\overline{S}

where

\overline{C} = \frac{1}{N} \sum_{n=1}^{N} \cos(\theta_n) \quad \text{and} \quad \overline{S} = \frac{1}{N} \sum_{n=1}^{N} \sin(\theta_n)

or, alternatively, as:

\overline{z} = \overline{R}\, e^{i\overline{\theta}}

where

\overline{R} = \sqrt{\overline{C}^2 + \overline{S}^2} \quad \text{and} \quad \overline{\theta} = \arctan(\overline{S}/\overline{C}).

The distribution of the mean angle \overline{\theta} for a circular pdf P(θ) will be given by:

P(\overline{C}, \overline{S}) \, d\overline{C} \, d\overline{S} = P(\overline{R}, \overline{\theta}) \, d\overline{R} \, d\overline{\theta} = \int_\Gamma \cdots \int_\Gamma \prod_{n=1}^{N} \left[ P(\theta_n) \, d\theta_n \right]

where Γ is any interval of length 2π, and the integral is subject to the constraint that \overline{S} and \overline{C} are constant or, alternatively, that \overline{R} and \overline{\theta} are constant.

The calculation of the distribution of the mean for most circular distributions is not analytically possible, and in order to carry out an analysis of variance, numerical or mathematical approximations are needed.

The central limit theorem may be applied to the distribution of the sample means (main article: Central limit theorem for directional statistics). It can be shown that the distribution of [\overline{C}, \overline{S}] approaches a bivariate normal distribution in the limit of large sample size.
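
A quick Monte Carlo illustration of this statement: repeatedly draw samples of size N, record (C̄, S̄) for each sample, and inspect the resulting cloud. This is a sketch; the von Mises sampler merely provides example data:

    import numpy as np

    rng = np.random.default_rng(5)
    N, trials = 200, 5_000

    theta = rng.vonmises(0.7, 1.5, size=(trials, N))
    C_bar = np.cos(theta).mean(axis=1)
    S_bar = np.sin(theta).mean(axis=1)

    # For large N the (C_bar, S_bar) pairs are approximately bivariate normal,
    # centred near the population first moment, with covariance shrinking like 1/N.
    print(C_bar.mean(), S_bar.mean())
    print(np.cov(C_bar, S_bar))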

Goodness of fit and significance testing

For cyclic data, the basic questions are goodness of fit and significance: for example, whether the sample is uniformly distributed around the circle. Standard tests of circular uniformity include the Rayleigh test and Kuiper's test; a minimal version of the Rayleigh test is sketched below.
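
As a concrete example, here is a minimal version of the Rayleigh test, using the crude large-sample approximation p ≈ exp(−N R̄²) for the p-value (an approximation only; exact and higher-order versions exist):

    import numpy as np

    def rayleigh_test(theta):
        """Rayleigh test of circular uniformity (crude large-sample p-value)."""
        theta = np.asarray(theta)
        N = theta.size
        R_bar = np.abs(np.mean(np.exp(1j * theta)))
        Z = N * R_bar**2
        return Z, np.exp(-Z)  # test statistic and approximate p-value

    rng = np.random.default_rng(6)
    print(rayleigh_test(rng.uniform(-np.pi, np.pi, 500)))   # moderate p: typically no evidence against uniformity
    print(rayleigh_test(rng.vonmises(0.0, 1.0, size=500)))  # p ~ 0: clearly non-uniform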

See also

References

  1. Hamelryck, Thomas; Kent, John T.; Krogh, Anders (2006). "Sampling realistic protein conformations using local structural bias". PLOS Computational Biology. 2 (9): e131. Bibcode:2006PLSCB...2..131H. doi:10.1371/journal.pcbi.0020131. PMC 1570370. PMID 17002495.
  2. Bahlmann, C. (2006). "Directional features in online handwriting recognition". Pattern Recognition. 39.
  3. Fisher 1993.
  4. Kent, J. (1982). "The Fisher–Bingham distribution on the sphere". J. Royal Stat. Soc. 44: 71–80.
  5. Fisher, R. A. (1953). "Dispersion on a sphere". Proc. Roy. Soc. London Ser. A. 217: 295–305.
  6. Mardia, K. V.; Taylor, C. C.; Subramaniam, G. K. (2007). "Protein Bioinformatics and Mixtures of Bivariate von Mises Distributions for Angular Data". Biometrics. 63 (2): 505–512. doi:10.1111/j.1541-0420.2006.00682.x. PMID 17688502. S2CID 14293602.
  7. Pal, Subhadip; Sengupta, Subhajit; Mitra, Riten; Banerjee, Arunava (September 2020). "Conjugate Priors and Posterior Inference for the Matrix Langevin Distribution on the Stiefel Manifold". Bayesian Analysis. 15 (3): 871–908. doi:10.1214/19-BA1176. ISSN 1936-0975. S2CID 209974627.
  8. Downs (1972). "Orientational statistics". Biometrika. 59 (3): 665–676. doi:10.1093/biomet/59.3.665.
  9. Bingham, C. (1974). "An Antipodally Symmetric Distribution on the Sphere". Ann. Stat. 2 (6): 1201–1225. doi:10.1214/aos/1176342874.
  10. Peel, D.; Whiten, WJ.; McLachlan, GJ. (2001). "Fitting mixtures of Kent distributions to aid in joint set identification" (PDF). J. Am. Stat. Assoc. 96 (453): 56–63. doi:10.1198/016214501750332974. S2CID 11667311.
  11. Krieger Lassen, N. C.; Juul Jensen, D.; Conradsen, K. (1994). "On the statistical analysis of orientation data". Acta Crystallogr. A50 (6): 741–748. Bibcode:1994AcCrA..50..741K. doi:10.1107/S010876739400437X.
  12. Kent, J.T., Hamelryck, T. (2005). Using the Fisher–Bingham distribution in stochastic models for protein structure Archived 2024-01-20 at the Wayback Machine. In S. Barber, P.D. Baxter, K.V.Mardia, & R.E. Walls (Eds.), Quantitative Biology, Shape Analysis, and Wavelets, pp. 57–60. Leeds, Leeds University Press
  13. Boomsma, Wouter; Mardia, Kanti V.; Taylor, Charles C.; Ferkinghoff-Borg, Jesper; Krogh, Anders; Hamelryck, Thomas (2008). "A generative, probabilistic model of local protein structure". Proceedings of the National Academy of Sciences. 105 (26): 8932–8937. Bibcode:2008PNAS..105.8932B. doi:10.1073/pnas.0801715105. PMC 2440424. PMID 18579771.
  14. Jammalamadaka & Sengupta 2001.

Books on directional statistics

  • Fisher, N. I. (1993). Statistical Analysis of Circular Data. Cambridge University Press. (cited above as Fisher 1993)
  • Jammalamadaka, S. Rao; SenGupta, A. (2001). Topics in Circular Statistics. World Scientific. (cited above as Jammalamadaka & Sengupta 2001)
