Ratio distribution

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.

A ratio distribution (also known as a quotient distribution) is a probability distribution constructed as the distribution of the ratio of random variables having two other known distributions. Given two (usually independent) random variables X and Y, the distribution of the random variable Z that is formed as the ratio Z = X/Y is a ratio distribution.

An example is the Cauchy distribution (also called the normal ratio distribution), which comes about as the ratio of two normally distributed variables with zero mean. Two other distributions often used in test statistics are also ratio distributions: the t-distribution arises from a Gaussian random variable divided by an independent chi-distributed random variable (rescaled by its degrees of freedom), while the F-distribution originates from the ratio of two independent chi-squared distributed random variables (each divided by its degrees of freedom). More general ratio distributions have been considered in the literature.

Often the ratio distributions are heavy-tailed, and it may be difficult to work with such distributions and develop an associated statistical test. A method based on the median has been suggested as a "work-around".

Algebra of random variables

Main article: Algebra of random variables

Ratios are one type of algebra of random variables: related to the ratio distribution are the product distribution, sum distribution and difference distribution. More generally, one may speak of combinations of sums, differences, products and ratios. Many of these distributions are described in Melvin D. Springer's 1979 book The Algebra of Random Variables.

The algebraic rules known for ordinary numbers do not apply to the algebra of random variables. For example, if a product is C = AB and a ratio is D = C/A, it does not necessarily mean that the distributions of D and B are the same. Indeed, a peculiar effect is seen for the Cauchy distribution: the product and the ratio of two independent Cauchy distributions (with the same scale parameter and the location parameter set to zero) give the same distribution. This becomes evident when regarding the Cauchy distribution as itself a ratio distribution of two Gaussian distributions with zero means: consider two Cauchy random variables C 1 {\displaystyle C_{1}} and C 2 {\displaystyle C_{2}} , each constructed from two Gaussian distributions, C 1 = G 1 / G 2 {\displaystyle C_{1}=G_{1}/G_{2}} and C 2 = G 3 / G 4 {\displaystyle C_{2}=G_{3}/G_{4}} ; then

C 1 C 2 = G 1 / G 2 G 3 / G 4 = G 1 G 4 G 2 G 3 = G 1 G 2 × G 4 G 3 = C 1 × C 3 , {\displaystyle {\frac {C_{1}}{C_{2}}}={\frac {{G_{1}}/{G_{2}}}{{G_{3}}/{G_{4}}}}={\frac {G_{1}G_{4}}{G_{2}G_{3}}}={\frac {G_{1}}{G_{2}}}\times {\frac {G_{4}}{G_{3}}}=C_{1}\times C_{3},}

where C 3 = G 4 / G 3 {\displaystyle C_{3}=G_{4}/G_{3}} . The first term is the ratio of two Cauchy distributions while the last term is the product of two such distributions.

Derivation

A way of deriving the ratio distribution of Z = X / Y {\displaystyle Z=X/Y} from the joint distribution of the two random variables X , Y , with joint pdf p X , Y ( x , y ) {\displaystyle p_{X,Y}(x,y)} , is by integration of the following form:

p Z ( z ) = + | y | p X , Y ( z y , y ) d y . {\displaystyle p_{Z}(z)=\int _{-\infty }^{+\infty }|y|\,p_{X,Y}(zy,y)\,dy.}

If the two variables are independent then p X Y ( x , y ) = p X ( x ) p Y ( y ) {\displaystyle p_{XY}(x,y)=p_{X}(x)p_{Y}(y)} and this becomes

p Z ( z ) = + | y | p X ( z y ) p Y ( y ) d y . {\displaystyle p_{Z}(z)=\int _{-\infty }^{+\infty }|y|\,p_{X}(zy)p_{Y}(y)\,dy.}
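
For independent variables this integral can be evaluated numerically. The following is a minimal sketch (Python with NumPy/SciPy; the helper name ratio_pdf is ours, not from the literature) that computes p_Z(z) for arbitrary marginal densities:

    import numpy as np
    from scipy import integrate
    from scipy.stats import norm

    def ratio_pdf(z, pdf_x, pdf_y):
        # p_Z(z) = integral of |y| p_X(z y) p_Y(y) dy over the real line,
        # split at y = 0 where the integrand has a kink
        integrand = lambda y: abs(y) * pdf_x(z * y) * pdf_y(y)
        left, _ = integrate.quad(integrand, -np.inf, 0)
        right, _ = integrate.quad(integrand, 0, np.inf)
        return left + right

    # Example: ratio of two standard normals, which should reproduce the
    # Cauchy density 1/(pi(1 + z^2)) derived in the text below
    for z in (0.0, 0.5, 2.0):
        print(z, ratio_pdf(z, norm.pdf, norm.pdf), 1 / (np.pi * (1 + z**2)))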

This may not be straightforward. By way of example take the classical problem of the ratio of two standard Gaussian samples. The joint pdf is

p X , Y ( x , y ) = 1 2 π exp ( x 2 2 ) exp ( y 2 2 ) {\displaystyle p_{X,Y}(x,y)={\frac {1}{2\pi }}\exp \left(-{\frac {x^{2}}{2}}\right)\exp \left(-{\frac {y^{2}}{2}}\right)}

Defining Z = X / Y {\displaystyle Z=X/Y} we have

p Z ( z ) = 1 2 π | y | exp ( ( z y ) 2 2 ) exp ( y 2 2 ) d y = 1 2 π | y | exp ( y 2 ( z 2 + 1 ) 2 ) d y {\displaystyle {\begin{aligned}p_{Z}(z)&={\frac {1}{2\pi }}\int _{-\infty }^{\infty }\,|y|\,\exp \left(-{\frac {\left(zy\right)^{2}}{2}}\right)\,\exp \left(-{\frac {y^{2}}{2}}\right)\,dy\\&={\frac {1}{2\pi }}\int _{-\infty }^{\infty }\,|y|\,\exp \left(-{\frac {y^{2}\left(z^{2}+1\right)}{2}}\right)\,dy\end{aligned}}}

Using the known definite integral 0 x exp ( c x 2 ) d x = 1 2 c {\textstyle \int _{0}^{\infty }\,x\,\exp \left(-cx^{2}\right)\,dx={\frac {1}{2c}}} we get

p Z ( z ) = 1 π ( z 2 + 1 ) {\displaystyle p_{Z}(z)={\frac {1}{\pi (z^{2}+1)}}}

which is the Cauchy distribution, i.e. Student's t distribution with n = 1 degree of freedom.
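
This is easy to confirm by simulation; a minimal Monte Carlo sketch (sample size and bin edges are arbitrary choices):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 1_000_000
    z = rng.standard_normal(n) / rng.standard_normal(n)

    # Empirical bin probabilities vs. increments of the Cauchy cdf
    edges = np.linspace(-5.0, 5.0, 21)
    empirical = np.histogram(z, bins=edges)[0] / n
    expected = np.diff(stats.cauchy.cdf(edges))
    print(np.abs(empirical - expected).max())  # agreement to Monte Carlo accuracy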

The Mellin transform has also been suggested for derivation of ratio distributions.

In the case of positive independent variables, proceed as follows. The diagram shows a separable bivariate distribution f x , y ( x , y ) = f x ( x ) f y ( y ) {\displaystyle f_{x,y}(x,y)=f_{x}(x)f_{y}(y)} which has support in the positive quadrant x , y > 0 {\displaystyle x,y>0} and we wish to find the pdf of the ratio R = X / Y {\displaystyle R=X/Y} . The hatched volume above the line y = x / R {\displaystyle y=x/R} represents the cumulative distribution of f x , y ( x , y ) {\displaystyle f_{x,y}(x,y)} over the region where X / Y ≤ R {\displaystyle X/Y\leq R} . The density is first integrated in horizontal strips; the horizontal strip at height y extends from x = 0 to x = Ry and has incremental probability f y ( y ) d y ∫ 0 R y f x ( x ) d x {\textstyle f_{y}(y)dy\int _{0}^{Ry}f_{x}(x)\,dx} .
Secondly, integrating the horizontal strips upward over all y yields the volume of probability above the line

F R ( R ) = 0 f y ( y ) ( 0 R y f x ( x ) d x ) d y {\displaystyle F_{R}(R)=\int _{0}^{\infty }f_{y}(y)\left(\int _{0}^{Ry}f_{x}(x)dx\right)dy}

Finally, differentiate F R ( R ) {\displaystyle F_{R}(R)} with respect to R {\displaystyle R} to get the pdf f R ( R ) {\displaystyle f_{R}(R)} .

f R ( R ) = d d R [ ∫ 0 ∞ f y ( y ) ( ∫ 0 R y f x ( x ) d x ) d y ] {\displaystyle f_{R}(R)={\frac {d}{dR}}\left[\int _{0}^{\infty }f_{y}(y)\left(\int _{0}^{Ry}f_{x}(x)dx\right)dy\right]}

Move the differentiation inside the integral:

f R ( R ) = 0 f y ( y ) ( d d R 0 R y f x ( x ) d x ) d y {\displaystyle f_{R}(R)=\int _{0}^{\infty }f_{y}(y)\left({\frac {d}{dR}}\int _{0}^{Ry}f_{x}(x)dx\right)dy}

and since

d d R 0 R y f x ( x ) d x = y f x ( R y ) {\displaystyle {\frac {d}{dR}}\int _{0}^{Ry}f_{x}(x)dx=yf_{x}(Ry)}

then

f R ( R ) = 0 f y ( y ) f x ( R y ) y d y {\displaystyle f_{R}(R)=\int _{0}^{\infty }f_{y}(y)\;f_{x}(Ry)\;y\;dy}

As an example, find the pdf of the ratio R when

f x ( x ) = α e α x , f y ( y ) = β e β y , x , y 0 {\displaystyle f_{x}(x)=\alpha e^{-\alpha x},\;\;\;\;f_{y}(y)=\beta e^{-\beta y},\;\;\;x,y\geq 0}
[Figure: evaluating the cumulative distribution of a ratio]

We have

0 R y f x ( x ) d x = e α x | 0 R y = 1 e α R y {\displaystyle \int _{0}^{Ry}f_{x}(x)dx=-e^{-\alpha x}\vert _{0}^{Ry}=1-e^{-\alpha Ry}}

thus

F R ( R ) = ∫ 0 ∞ f y ( y ) ( 1 − e − α R y ) d y = ∫ 0 ∞ β e − β y ( 1 − e − α R y ) d y = 1 − β β + α R = R β α + R {\displaystyle {\begin{aligned}F_{R}(R)&=\int _{0}^{\infty }f_{y}(y)\left(1-e^{-\alpha Ry}\right)dy=\int _{0}^{\infty }\beta e^{-\beta y}\left(1-e^{-\alpha Ry}\right)dy\\&=1-{\frac {\beta }{\beta +\alpha R}}\\&={\frac {R}{{\tfrac {\beta }{\alpha }}+R}}\end{aligned}}}

Differentiation with respect to R yields the pdf of R:

f R ( R ) = d d R ( R β α + R ) = β α ( β α + R ) 2 {\displaystyle f_{R}(R)={\frac {d}{dR}}\left({\frac {R}{{\tfrac {\beta }{\alpha }}+R}}\right)={\frac {\tfrac {\beta }{\alpha }}{\left({\tfrac {\beta }{\alpha }}+R\right)^{2}}}}
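
A quick check of this result by simulation (the rates α = 2 and β = 3 are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(1)
    alpha, beta, n = 2.0, 3.0, 1_000_000
    x = rng.exponential(scale=1 / alpha, size=n)  # pdf alpha e^(-alpha x)
    y = rng.exponential(scale=1 / beta, size=n)   # pdf beta e^(-beta y)
    r = x / y

    for R in (0.5, 1.0, 2.0, 5.0):
        print(R, np.mean(r <= R), R / (beta / alpha + R))  # empirical vs. F_R(R)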

Moments of random ratios

From Mellin transform theory, for distributions existing only on the positive half-line x ≥ 0 {\displaystyle x\geq 0} , we have the product identity E [ ( U V ) p ] = E [ U p ] E [ V p ] {\displaystyle \operatorname {E} [(UV)^{p}]=\operatorname {E} [U^{p}]\;\operatorname {E} [V^{p}]} provided U , V {\displaystyle U,\;V} are independent. For the case of a ratio of samples like E [ ( X / Y ) p ] {\displaystyle \operatorname {E} [(X/Y)^{p}]} , in order to make use of this identity it is necessary to use moments of the inverse distribution. Set 1 / Y = Z {\displaystyle 1/Y=Z} such that E [ ( X Z ) p ] = E [ X p ] E [ Y − p ] {\displaystyle \operatorname {E} [(XZ)^{p}]=\operatorname {E} [X^{p}]\;\operatorname {E} [Y^{-p}]} . Thus, if the moments of X p {\displaystyle X^{p}} and Y − p {\displaystyle Y^{-p}} can be determined separately, then the moments of X / Y {\displaystyle X/Y} can be found. The moments of Y − p {\displaystyle Y^{-p}} are determined from the inverse pdf of Y {\displaystyle Y} , often a tractable exercise. At simplest, E [ Y − p ] = ∫ 0 ∞ y − p f y ( y ) d y {\textstyle \operatorname {E} [Y^{-p}]=\int _{0}^{\infty }y^{-p}f_{y}(y)\,dy} .

To illustrate, let X {\displaystyle X} be sampled from a standard Gamma distribution

x α 1 e x / Γ ( α ) {\displaystyle x^{\alpha -1}e^{-x}/\Gamma (\alpha )} whose p {\displaystyle p} -th moment is Γ ( α + p ) / Γ ( α ) {\displaystyle \Gamma (\alpha +p)/\Gamma (\alpha )} .

Z = Y 1 {\displaystyle Z=Y^{-1}} is sampled from an inverse Gamma distribution with parameter β {\displaystyle \beta } and has pdf Γ 1 ( β ) z ( 1 + β ) e 1 / z {\displaystyle \;\Gamma ^{-1}(\beta )z^{-(1+\beta )}e^{-1/z}} . The moments of this pdf are

E [ Z p ] = E [ Y − p ] = Γ ( β − p ) Γ ( β ) , p < β . {\displaystyle \operatorname {E} [Z^{p}]=\operatorname {E} [Y^{-p}]={\frac {\Gamma (\beta -p)}{\Gamma (\beta )}},\;p<\beta .}

Multiplying the corresponding moments gives

E [ ( X / Y ) p ] = E [ X p ] E [ Y − p ] = Γ ( α + p ) Γ ( α ) Γ ( β − p ) Γ ( β ) , p < β . {\displaystyle \operatorname {E} [(X/Y)^{p}]=\operatorname {E} [X^{p}]\;\operatorname {E} [Y^{-p}]={\frac {\Gamma (\alpha +p)}{\Gamma (\alpha )}}{\frac {\Gamma (\beta -p)}{\Gamma (\beta )}},\;p<\beta .}

Independently, it is known that the ratio of the two Gamma samples R = X / Y {\displaystyle R=X/Y} follows the Beta Prime distribution:

f β ′ ( r , α , β ) = B ( α , β ) − 1 r α − 1 ( 1 + r ) − ( α + β ) {\displaystyle f_{\beta '}(r,\alpha ,\beta )=B(\alpha ,\beta )^{-1}r^{\alpha -1}(1+r)^{-(\alpha +\beta )}} whose moments are E [ R p ] = B ( α + p , β − p ) B ( α , β ) {\displaystyle \operatorname {E} [R^{p}]={\frac {\mathrm {B} (\alpha +p,\beta -p)}{\mathrm {B} (\alpha ,\beta )}}}

Substituting B ( α , β ) = Γ ( α ) Γ ( β ) Γ ( α + β ) {\displaystyle \mathrm {B} (\alpha ,\beta )={\frac {\Gamma (\alpha )\Gamma (\beta )}{\Gamma (\alpha +\beta )}}} we have E [ R p ] = Γ ( α + p ) Γ ( β − p ) Γ ( α + β ) / Γ ( α ) Γ ( β ) Γ ( α + β ) = Γ ( α + p ) Γ ( β − p ) Γ ( α ) Γ ( β ) {\displaystyle \operatorname {E} [R^{p}]={\frac {\Gamma (\alpha +p)\Gamma (\beta -p)}{\Gamma (\alpha +\beta )}}{\Bigg /}{\frac {\Gamma (\alpha )\Gamma (\beta )}{\Gamma (\alpha +\beta )}}={\frac {\Gamma (\alpha +p)\Gamma (\beta -p)}{\Gamma (\alpha )\Gamma (\beta )}}} which is consistent with the product of moments above.
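
The moment product rule is easy to verify numerically; a sketch with arbitrary α, β and p (keeping p < β so the moment exists):

    import numpy as np
    from scipy.special import gamma as G

    rng = np.random.default_rng(2)
    a, b, p, n = 3.0, 4.0, 1.5, 2_000_000
    x = rng.gamma(a, size=n)  # standard Gamma(alpha) samples
    y = rng.gamma(b, size=n)  # standard Gamma(beta) samples

    monte_carlo = np.mean((x / y) ** p)
    exact = G(a + p) / G(a) * G(b - p) / G(b)
    print(monte_carlo, exact)  # the two values agree closely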

Means and variances of random ratios

In the Product distribution section, and derived from Mellin transform theory (see section above), it is found that the mean of a product of independent variables is equal to the product of their means. In the case of ratios, we have

E ( X / Y ) = E ( X ) E ( 1 / Y ) {\displaystyle \operatorname {E} (X/Y)=\operatorname {E} (X)\operatorname {E} (1/Y)}

which, in terms of probability distributions, is equivalent to

E ( X / Y ) = x f x ( x ) d x × y 1 f y ( y ) d y {\displaystyle \operatorname {E} (X/Y)=\int _{-\infty }^{\infty }xf_{x}(x)\,dx\times \int _{-\infty }^{\infty }y^{-1}f_{y}(y)\,dy}

Note that E ( 1 / Y ) 1 E ( Y ) {\displaystyle \operatorname {E} (1/Y)\neq {\frac {1}{\operatorname {E} (Y)}}} i.e., y 1 f y ( y ) d y 1 y f y ( y ) d y {\displaystyle \int _{-\infty }^{\infty }y^{-1}f_{y}(y)\,dy\neq {\frac {1}{\int _{-\infty }^{\infty }yf_{y}(y)\,dy}}}

The variance of a ratio of independent variables is

Var ( X / Y ) = E ( [ X / Y ] 2 ) − E 2 ( X / Y ) = E ( X 2 ) E ( 1 / Y 2 ) − E 2 ( X ) E 2 ( 1 / Y ) {\displaystyle {\begin{aligned}\operatorname {Var} (X/Y)&=\operatorname {E} ([X/Y]^{2})-\operatorname {E} ^{2}(X/Y)\\&=\operatorname {E} (X^{2})\operatorname {E} (1/Y^{2})-\operatorname {E} ^{2}(X)\operatorname {E} ^{2}(1/Y)\end{aligned}}}
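
For positive-valued variables this can be checked directly; a minimal sketch using independent gamma samples (shapes arbitrary, with the denominator shape greater than 2 so that E(1/Y²) exists):

    import numpy as np

    rng = np.random.default_rng(3)
    n = 2_000_000
    x = rng.gamma(5.0, size=n)  # numerator X
    y = rng.gamma(6.0, size=n)  # denominator Y; shape > 2 so E(1/Y^2) is finite

    direct = np.var(x / y)
    moments = np.mean(x**2) * np.mean(1 / y**2) - (np.mean(x) * np.mean(1 / y)) ** 2
    print(direct, moments)  # both estimate Var(X/Y)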

Normal ratio distributions


Uncorrelated central normal ratio

When X and Y are independent and have a Gaussian distribution with zero mean, the form of their ratio distribution is a Cauchy distribution. This can be derived by setting Z = X / Y = tan θ {\displaystyle Z=X/Y=\tan \theta } and then showing that the angle θ {\displaystyle \theta } is uniformly distributed, by circular symmetry. For a bivariate uncorrelated Gaussian distribution we have

p ( x , y ) = 1 2 π e 1 2 x 2 × 1 2 π e 1 2 y 2 = 1 2 π e 1 2 ( x 2 + y 2 ) = 1 2 π e 1 2 r 2  with  r 2 = x 2 + y 2 {\displaystyle {\begin{aligned}p(x,y)&={\tfrac {1}{\sqrt {2\pi }}}e^{-{\frac {1}{2}}x^{2}}\times {\tfrac {1}{\sqrt {2\pi }}}e^{-{\frac {1}{2}}y^{2}}\\&={\tfrac {1}{2\pi }}e^{-{\frac {1}{2}}(x^{2}+y^{2})}\\&={\tfrac {1}{2\pi }}e^{-{\frac {1}{2}}r^{2}}{\text{ with }}r^{2}=x^{2}+y^{2}\end{aligned}}}

If p ( x , y ) {\displaystyle p(x,y)} is a function only of r then θ {\displaystyle \theta } is uniformly distributed on [ 0 , 2 π ] {\displaystyle [0,2\pi ]} with density 1 / 2 π {\displaystyle 1/2\pi } so the problem reduces to finding the probability distribution of Z under the mapping

Z = X / Y = tan θ {\displaystyle Z=X/Y=\tan \theta }

We have, by conservation of probability

p z ( z ) | d z | = p θ ( θ ) | d θ | {\displaystyle p_{z}(z)|dz|=p_{\theta }(\theta )|d\theta |}

and since d z / d θ = 1 / cos 2 θ {\displaystyle dz/d\theta =1/\cos ^{2}\theta }

p z ( z ) = p θ ( θ ) | d z / d θ | = 1 2 π cos 2 θ {\displaystyle p_{z}(z)={\frac {p_{\theta }(\theta )}{|dz/d\theta |}}={\tfrac {1}{2\pi }}{\cos ^{2}\theta }}

and setting cos 2 θ = 1 1 + ( tan θ ) 2 = 1 1 + z 2 {\textstyle \cos ^{2}\theta ={\frac {1}{1+(\tan \theta )^{2}}}={\frac {1}{1+z^{2}}}} we get

p z ( z ) = 1 / 2 π 1 + z 2 {\displaystyle p_{z}(z)={\frac {1/2\pi }{1+z^{2}}}}

The result above is a factor of two too small: two values of θ {\displaystyle \theta } spaced by π {\displaystyle \pi } map onto the same value of z, so the density must be doubled, and the final result is

p z ( z ) = 1 / π 1 + z 2 , < z < {\displaystyle p_{z}(z)={\frac {1/\pi }{1+z^{2}}},\;\;-\infty <z<\infty }

When either of the two Normal distributions is non-central then the result for the distribution of the ratio is much more complicated and is given below in the succinct form presented by David Hinkley. The trigonometric method for a ratio does however extend to radial distributions like bivariate normals or a bivariate Student t in which the density depends only on radius r = x 2 + y 2 {\textstyle r={\sqrt {x^{2}+y^{2}}}} . It does not extend to the ratio of two independent Student t distributions which give the Cauchy ratio shown in a section below for one degree of freedom.

Uncorrelated noncentral normal ratio

In the absence of correlation ( cor ( X , Y ) = 0 ) {\displaystyle (\operatorname {cor} (X,Y)=0)} , the probability density function of the ratio Z = X/Y of the two normal variables X = N(μX, σX) and Y = N(μY, σY) is given exactly by the following expression, derived in several sources:

p Z ( z ) = b ( z ) d ( z ) a 3 ( z ) 1 2 π σ x σ y [ Φ ( b ( z ) a ( z ) ) − Φ ( − b ( z ) a ( z ) ) ] + 1 a 2 ( z ) π σ x σ y e − c 2 {\displaystyle p_{Z}(z)={\frac {b(z)\cdot d(z)}{a^{3}(z)}}{\frac {1}{{\sqrt {2\pi }}\sigma _{x}\sigma _{y}}}\left[\Phi \left({\frac {b(z)}{a(z)}}\right)-\Phi \left(-{\frac {b(z)}{a(z)}}\right)\right]+{\frac {1}{a^{2}(z)\cdot \pi \sigma _{x}\sigma _{y}}}e^{-{\frac {c}{2}}}}

where

a ( z ) = 1 σ x 2 z 2 + 1 σ y 2 {\displaystyle a(z)={\sqrt {{\frac {1}{\sigma _{x}^{2}}}z^{2}+{\frac {1}{\sigma _{y}^{2}}}}}}
b ( z ) = μ x σ x 2 z + μ y σ y 2 {\displaystyle b(z)={\frac {\mu _{x}}{\sigma _{x}^{2}}}z+{\frac {\mu _{y}}{\sigma _{y}^{2}}}}
c = μ x 2 σ x 2 + μ y 2 σ y 2 {\displaystyle c={\frac {\mu _{x}^{2}}{\sigma _{x}^{2}}}+{\frac {\mu _{y}^{2}}{\sigma _{y}^{2}}}}
d ( z ) = e b 2 ( z ) c a 2 ( z ) 2 a 2 ( z ) {\displaystyle d(z)=e^{\frac {b^{2}(z)-ca^{2}(z)}{2a^{2}(z)}}}

and Φ {\displaystyle \Phi } is the normal cumulative distribution function:

Φ ( t ) = t 1 2 π e 1 2 u 2   d u . {\displaystyle \Phi (t)=\int _{-\infty }^{t}\,{\frac {1}{\sqrt {2\pi }}}e^{-{\frac {1}{2}}u^{2}}\ du\,.}
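
The expression codes up directly. Below is a sketch implementing a(z), b(z), c and d(z) exactly as defined above and checking the density against simulation (the means and standard deviations are arbitrary choices):

    import numpy as np
    from scipy.stats import norm

    mx, my, sx, sy = 1.0, 2.5, 1.0, 0.5  # example parameters

    def hinkley_pdf(z):
        a = np.sqrt(z**2 / sx**2 + 1 / sy**2)
        b = mx * z / sx**2 + my / sy**2
        c = mx**2 / sx**2 + my**2 / sy**2
        d = np.exp((b**2 - c * a**2) / (2 * a**2))
        term1 = b * d / a**3 / (np.sqrt(2 * np.pi) * sx * sy)
        term1 *= norm.cdf(b / a) - norm.cdf(-b / a)
        term2 = np.exp(-c / 2) / (a**2 * np.pi * sx * sy)
        return term1 + term2

    rng = np.random.default_rng(4)
    n = 1_000_000
    z = rng.normal(mx, sx, n) / rng.normal(my, sy, n)
    edges = np.linspace(-1.0, 2.0, 61)
    empirical = np.histogram(z, bins=edges)[0] / n / np.diff(edges)
    mids = 0.5 * (edges[:-1] + edges[1:])
    print(np.abs(empirical - hinkley_pdf(mids)).max())  # small
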
  • Under several assumptions (usually fulfilled in practical applications), it is possible to derive a highly accurate solid approximation to the PDF. Its main benefits are reduced formula complexity, a closed-form CDF, a simply defined median and well defined error management. For the sake of simplicity, introduce the parameters p = μ x 2 σ x {\displaystyle p={\frac {\mu _{x}}{{\sqrt {2}}\sigma _{x}}}} , q = μ y 2 σ y {\displaystyle q={\frac {\mu _{y}}{{\sqrt {2}}\sigma _{y}}}} and r = μ x μ y {\displaystyle r={\frac {\mu _{x}}{\mu _{y}}}} . Then the so-called solid approximation p Z † ( z ) {\displaystyle p_{Z}^{\dagger }(z)} to the uncorrelated noncentral normal ratio PDF is given by
p Z † ( z ) = 1 π p e r f ( q ) 1 r 1 + p 2 q 2 z r ( 1 + p 2 q 2 [ z r ] 2 ) 3 2 e − p 2 ( z r − 1 ) 2 1 + p 2 q 2 [ z r ] 2 {\displaystyle p_{Z}^{\dagger }(z)={\frac {1}{\sqrt {\pi }}}\,{\frac {p}{\mathrm {erf} (q)}}\,{\frac {1}{r}}\,{\frac {1+{\frac {p^{2}}{q^{2}}}{\frac {z}{r}}}{\left(1+{\frac {p^{2}}{q^{2}}}\left[{\frac {z}{r}}\right]^{2}\right)^{\frac {3}{2}}}}\,e^{-{\frac {p^{2}\left({\frac {z}{r}}-1\right)^{2}}{1+{\frac {p^{2}}{q^{2}}}\left[{\frac {z}{r}}\right]^{2}}}}}
  • Under certain conditions, a normal approximation is possible, with variance:
σ z 2 = μ x 2 μ y 2 ( σ x 2 μ x 2 + σ y 2 μ y 2 ) {\displaystyle \sigma _{z}^{2}={\frac {\mu _{x}^{2}}{\mu _{y}^{2}}}\left({\frac {\sigma _{x}^{2}}{\mu _{x}^{2}}}+{\frac {\sigma _{y}^{2}}{\mu _{y}^{2}}}\right)}

Correlated central normal ratio

The above expression becomes more complicated when the variables X and Y are correlated. If μ x = μ y = 0 {\displaystyle \mu _{x}=\mu _{y}=0} but σ X σ Y {\displaystyle \sigma _{X}\neq \sigma _{Y}} and ρ 0 {\displaystyle \rho \neq 0} the more general Cauchy distribution is obtained

p Z ( z ) = 1 π β ( z α ) 2 + β 2 , {\displaystyle p_{Z}(z)={\frac {1}{\pi }}{\frac {\beta }{(z-\alpha )^{2}+\beta ^{2}}},}

where ρ is the correlation coefficient between X and Y and

α = ρ σ x σ y , {\displaystyle \alpha =\rho {\frac {\sigma _{x}}{\sigma _{y}}},}
β = σ x σ y 1 ρ 2 . {\displaystyle \beta ={\frac {\sigma _{x}}{\sigma _{y}}}{\sqrt {1-\rho ^{2}}}.}

The complex distribution has also been expressed with Kummer's confluent hypergeometric function or the Hermite function.

Correlated noncentral normal ratio

The exact distribution in this case was shown in Springer 1979, problem 4.28.

A transformation to the log domain was suggested by Katz (1978) (see the binomial section below). Let the ratio be

T μ x + N ( 0 , σ x 2 ) μ y + N ( 0 , σ y 2 ) = μ x + X μ y + Y = μ x μ y 1 + X μ x 1 + Y μ y {\displaystyle T\sim {\frac {\mu _{x}+\mathbb {N} (0,\sigma _{x}^{2})}{\mu _{y}+\mathbb {N} (0,\sigma _{y}^{2})}}={\frac {\mu _{x}+X}{\mu _{y}+Y}}={\frac {\mu _{x}}{\mu _{y}}}{\frac {1+{\frac {X}{\mu _{x}}}}{1+{\frac {Y}{\mu _{y}}}}}} .

Take logs to get

log e ( T ) = log e ( μ x μ y ) + log e ( 1 + X μ x ) log e ( 1 + Y μ y ) . {\displaystyle \log _{e}(T)=\log _{e}\left({\frac {\mu _{x}}{\mu _{y}}}\right)+\log _{e}\left(1+{\frac {X}{\mu _{x}}}\right)-\log _{e}\left(1+{\frac {Y}{\mu _{y}}}\right).}

Since log e ( 1 + δ ) = δ − δ 2 2 + δ 3 3 − ⋯ {\displaystyle \log _{e}(1+\delta )=\delta -{\frac {\delta ^{2}}{2}}+{\frac {\delta ^{3}}{3}}-\cdots } then asymptotically

log e ( T ) log e ( μ x μ y ) + X μ x Y μ y log e ( μ x μ y ) + N ( 0 , σ x 2 μ x 2 + σ y 2 μ y 2 ) . {\displaystyle \log _{e}(T)\approx \log _{e}\left({\frac {\mu _{x}}{\mu _{y}}}\right)+{\frac {X}{\mu _{x}}}-{\frac {Y}{\mu _{y}}}\sim \log _{e}\left({\frac {\mu _{x}}{\mu _{y}}}\right)+\mathbb {N} \left(0,{\frac {\sigma _{x}^{2}}{\mu _{x}^{2}}}+{\frac {\sigma _{y}^{2}}{\mu _{y}^{2}}}\right).}

Alternatively, Geary (1930) suggested that

t μ y T μ x σ y 2 T 2 2 ρ σ x σ y T + σ x 2 {\displaystyle t\approx {\frac {\mu _{y}T-\mu _{x}}{\sqrt {\sigma _{y}^{2}T^{2}-2\rho \sigma _{x}\sigma _{y}T+\sigma _{x}^{2}}}}}

has approximately a standard Gaussian distribution. This transformation has been called the Geary–Hinkley transformation; the approximation is good if Y is unlikely to assume negative values, basically μ y > 3 σ y {\displaystyle \mu _{y}>3\sigma _{y}} .

Exact correlated noncentral normal ratio


This is developed by Dale (Springer 1979, problem 4.28) and Hinkley 1969. Geary showed how the correlated ratio z {\displaystyle z} could be transformed into a near-Gaussian form and developed an approximation for t {\displaystyle t} dependent on the probability of negative denominator values y + μ y < 0 {\displaystyle y+\mu _{y}<0} being vanishingly small. Fieller's later correlated ratio analysis is exact, but care is needed when combining modern math packages with verbal conditions in the older literature. Pham-Gia has exhaustively discussed these methods. Hinkley's correlated results are exact, but it is shown below that the correlated ratio condition can also be transformed into an uncorrelated one, so only the simplified Hinkley equations above are required, not the full correlated ratio version.

Let the ratio be:

z = x + μ x y + μ y {\displaystyle z={\frac {x+\mu _{x}}{y+\mu _{y}}}}

in which x , y {\displaystyle x,y} are zero-mean correlated normal variables with variances σ x 2 , σ y 2 {\displaystyle \sigma _{x}^{2},\sigma _{y}^{2}} and X = x + μ x , Y = y + μ y {\displaystyle X=x+\mu _{x},Y=y+\mu _{y}} have means μ x , μ y . {\displaystyle \mu _{x},\mu _{y}.} Write x ′ = x − ρ y σ x / σ y {\displaystyle x'=x-\rho y\sigma _{x}/\sigma _{y}} such that x ′ , y {\displaystyle x',y} become uncorrelated and x ′ {\displaystyle x'} has standard deviation

σ x = σ x 1 ρ 2 . {\displaystyle \sigma _{x}'=\sigma _{x}{\sqrt {1-\rho ^{2}}}.}

The ratio:

z = x + ρ y σ x / σ y + μ x y + μ y {\displaystyle z={\frac {x'+\rho y\sigma _{x}/\sigma _{y}+\mu _{x}}{y+\mu _{y}}}}

is invariant under this transformation and retains the same pdf. The y {\displaystyle y} term in the numerator appears to be made separable by expanding:

x + ρ y σ x / σ y + μ x = x + μ x ρ μ y σ x σ y + ρ ( y + μ y ) σ x σ y {\displaystyle {x'+\rho y\sigma _{x}/\sigma _{y}+\mu _{x}}=x'+\mu _{x}-\rho \mu _{y}{\frac {\sigma _{x}}{\sigma _{y}}}+\rho (y+\mu _{y}){\frac {\sigma _{x}}{\sigma _{y}}}}

to get

z = x + μ x y + μ y + ρ σ x σ y {\displaystyle z={\frac {x'+\mu _{x}'}{y+\mu _{y}}}+\rho {\frac {\sigma _{x}}{\sigma _{y}}}}

in which μ x ′ = μ x − ρ μ y σ x σ y {\textstyle \mu '_{x}=\mu _{x}-\rho \mu _{y}{\frac {\sigma _{x}}{\sigma _{y}}}} and z has now become a ratio of uncorrelated non-central normal samples with an invariant z-offset (this is not formally proven, though it appears to have been used by Geary).

Finally, to be explicit, the pdf of the ratio z {\displaystyle z} for correlated variables is found by inputting the modified parameters σ x , μ x , σ y , μ y {\displaystyle \sigma _{x}',\mu _{x}',\sigma _{y},\mu _{y}} and ρ = 0 {\displaystyle \rho '=0} into the Hinkley equation above which returns the pdf for the correlated ratio with a constant offset ρ σ x σ y {\displaystyle -\rho {\frac {\sigma _{x}}{\sigma _{y}}}} on z {\displaystyle z} .
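
A numerical sanity check of this reduction (all parameter values are arbitrary): simulate the correlated ratio directly and compare its quantiles with those of the transformed uncorrelated ratio plus the constant offset.

    import numpy as np

    rng = np.random.default_rng(5)
    mx, my, sx, sy, rho, n = 1.0, 4.0, 1.0, 0.8, 0.6, 1_000_000

    # Direct simulation of the correlated ratio
    cov = [[sx**2, rho * sx * sy], [rho * sx * sy, sy**2]]
    xy = rng.multivariate_normal([mx, my], cov, size=n)
    z_direct = xy[:, 0] / xy[:, 1]

    # Uncorrelated construction with modified parameters and offset rho*sx/sy
    sx_p = sx * np.sqrt(1 - rho**2)
    mx_p = mx - rho * my * sx / sy
    z_uncorr = rng.normal(mx_p, sx_p, n) / rng.normal(my, sy, n) + rho * sx / sy

    for q in (0.05, 0.25, 0.5, 0.75, 0.95):
        print(q, np.quantile(z_direct, q), np.quantile(z_uncorr, q))  # match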

[Figures: contours of the correlated bivariate Gaussian distribution (not to scale) giving the ratio x/y; pdf of the ratio z; pdf of the Gaussian ratio z with a simulation (points), all for σ x = σ y = 1 , μ x = 0 , μ y = 0.5 , ρ = 0.975 {\displaystyle \sigma _{x}=\sigma _{y}=1,\mu _{x}=0,\mu _{y}=0.5,\rho =0.975} ]

The figures above show an example of a positively correlated ratio with σ x = σ y = 1 , μ x = 0 , μ y = 0.5 , ρ = 0.975 {\displaystyle \sigma _{x}=\sigma _{y}=1,\mu _{x}=0,\mu _{y}=0.5,\rho =0.975} in which the shaded wedges represent the increment of area selected by a given ratio x / y ∈ [ r , r + δ ] {\displaystyle x/y\in [r,r+\delta ]} , which accumulates probability where they overlap the distribution. The theoretical distribution, derived from the equations under discussion combined with Hinkley's equations, is highly consistent with a simulation result using 5,000 samples. In the top figure it is clear that for a ratio z = x / y ≈ 1 {\displaystyle z=x/y\approx 1} the wedge has almost bypassed the main distribution mass altogether, which explains the local minimum in the theoretical pdf p Z ( x / y ) {\displaystyle p_{Z}(x/y)} . Conversely, as x / y {\displaystyle x/y} moves either toward or away from one, the wedge spans more of the central mass, accumulating a higher probability.

Complex normal ratio

The ratio of correlated zero-mean circularly symmetric complex normal distributed variables was determined by Baxley et al. and has since been extended to the nonzero-mean and nonsymmetric case. In the correlated zero-mean case, the joint distribution of x, y is

f x , y ( x , y ) = 1 π 2 | Σ | exp ( [ x y ] H Σ 1 [ x y ] ) {\displaystyle f_{x,y}(x,y)={\frac {1}{\pi ^{2}|\Sigma |}}\exp \left(-{\begin{bmatrix}x\\y\end{bmatrix}}^{H}\Sigma ^{-1}{\begin{bmatrix}x\\y\end{bmatrix}}\right)}

where

Σ = [ σ x 2 ρ σ x σ y ρ σ x σ y σ y 2 ] , x = x r + i x i , y = y r + i y i {\displaystyle \Sigma ={\begin{bmatrix}\sigma _{x}^{2}&\rho \sigma _{x}\sigma _{y}\\\rho ^{*}\sigma _{x}\sigma _{y}&\sigma _{y}^{2}\end{bmatrix}},\;\;x=x_{r}+ix_{i},\;\;y=y_{r}+iy_{i}}

( ⋅ ) H {\displaystyle (\cdot )^{H}} is the Hermitian transpose and

ρ = ρ r + i ρ i = E ( x y ∗ σ x σ y ) ∈ C , | ρ | ≤ 1 {\displaystyle \rho =\rho _{r}+i\rho _{i}=\operatorname {E} {\bigg (}{\frac {xy^{*}}{\sigma _{x}\sigma _{y}}}{\bigg )}\in \mathbb {C} ,\;\;|\rho |\leq 1}

The PDF of Z = X / Y {\displaystyle Z=X/Y} is found to be

f z ( z r , z i ) = 1 | ρ | 2 π σ x 2 σ y 2 ( | z | 2 σ x 2 + 1 σ y 2 2 ρ r z r ρ i z i σ x σ y ) 2 = 1 | ρ | 2 π σ x 2 σ y 2 ( | z σ x ρ σ y | 2 + 1 | ρ | 2 σ y 2 ) 2 {\displaystyle {\begin{aligned}f_{z}(z_{r},z_{i})&={\frac {1-|\rho |^{2}}{\pi \sigma _{x}^{2}\sigma _{y}^{2}}}{\Biggr (}{\frac {|z|^{2}}{\sigma _{x}^{2}}}+{\frac {1}{\sigma _{y}^{2}}}-2{\frac {\rho _{r}z_{r}-\rho _{i}z_{i}}{\sigma _{x}\sigma _{y}}}{\Biggr )}^{-2}\\&={\frac {1-|\rho |^{2}}{\pi \sigma _{x}^{2}\sigma _{y}^{2}}}{\Biggr (}\;\;{\Biggr |}{\frac {z}{\sigma _{x}}}-{\frac {\rho ^{*}}{\sigma _{y}}}{\Biggr |}^{2}+{\frac {1-|\rho |^{2}}{\sigma _{y}^{2}}}{\Biggr )}^{-2}\end{aligned}}}

In the usual event that σ x = σ y {\displaystyle \sigma _{x}=\sigma _{y}} we get

f z ( z r , z i ) = 1 | ρ | 2 π ( | z ρ | 2 + 1 | ρ | 2 ) 2 {\displaystyle f_{z}(z_{r},z_{i})={\frac {1-|\rho |^{2}}{\pi \left(\;\;|z-\rho ^{*}|^{2}+1-|\rho |^{2}\right)^{2}}}}

Further closed-form results for the CDF are also given.
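
The σx = σy form above can be checked by numerical integration over the complex plane; a sketch (ρ is an arbitrary complex correlation with |ρ| < 1):

    import numpy as np
    from scipy import integrate

    rho = 0.7 * np.exp(1j * np.pi / 4)  # example complex correlation

    def pdf(zr, zi):
        z = zr + 1j * zi
        denom = abs(z - np.conj(rho)) ** 2 + 1 - abs(rho) ** 2
        return (1 - abs(rho) ** 2) / (np.pi * denom**2)

    total, _ = integrate.dblquad(pdf, -np.inf, np.inf, -np.inf, np.inf)
    print(total)  # ~1.0: the density is properly normalized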

The ratio distribution of correlated complex variables, rho = 0.7 exp(i pi/4).

The graph shows the pdf of the ratio of two complex normal variables with a correlation coefficient of ρ = 0.7 exp ( i π / 4 ) {\displaystyle \rho =0.7\exp(i\pi /4)} . The pdf peak occurs at roughly the complex conjugate of a scaled down ρ {\displaystyle \rho } .

Ratio of log-normal

The ratio of independent or correlated log-normals is log-normal. This follows, because if X 1 {\displaystyle X_{1}} and X 2 {\displaystyle X_{2}} are log-normally distributed, then ln ( X 1 ) {\displaystyle \ln(X_{1})} and ln ( X 2 ) {\displaystyle \ln(X_{2})} are normally distributed. If they are independent or their logarithms follow a bivariate normal distribution, then the logarithm of their ratio is the difference of independent or correlated normally distributed random variables, which is normally distributed.

This is important for many applications requiring the ratio of random variables that must be positive, where the joint distribution of X 1 {\displaystyle X_{1}} and X 2 {\displaystyle X_{2}} is adequately approximated by a log-normal. This is a common result of the multiplicative central limit theorem, also known as Gibrat's law, when X i {\displaystyle X_{i}} is the result of an accumulation of many small percentage changes and must be positive and approximately log-normally distributed.
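
A minimal sketch of the independent case (arbitrary log-scale parameters): the log of the ratio is normal, with the difference of the means and the sum of the variances.

    import numpy as np

    rng = np.random.default_rng(6)
    m1, s1, m2, s2, n = 0.5, 0.3, -0.2, 0.4, 1_000_000
    x1 = rng.lognormal(m1, s1, n)
    x2 = rng.lognormal(m2, s2, n)

    log_ratio = np.log(x1 / x2)  # = log(x1) - log(x2), normal by construction
    print(log_ratio.mean(), m1 - m2)       # ~0.7
    print(log_ratio.var(), s1**2 + s2**2)  # ~0.25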

Uniform ratio distribution

With two independent random variables following a uniform distribution, e.g.,

p X ( x ) = { 1 0 < x < 1 0 otherwise {\displaystyle p_{X}(x)={\begin{cases}1&0<x<1\\0&{\text{otherwise}}\end{cases}}}

the ratio distribution becomes

p Z ( z ) = { 1 / 2 0 < z < 1 1 2 z 2 z 1 0 otherwise {\displaystyle p_{Z}(z)={\begin{cases}1/2\qquad &0<z<1\\{\frac {1}{2z^{2}}}\qquad &z\geq 1\\0\qquad &{\text{otherwise}}\end{cases}}}
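
A sketch checking the piecewise density by simulation (bin edges arbitrary; note the density is continuous at z = 1):

    import numpy as np

    rng = np.random.default_rng(7)
    n = 1_000_000
    z = rng.uniform(size=n) / rng.uniform(size=n)

    def pdf(z):
        return np.where(z < 1, 0.5, 0.5 / z**2)  # valid for z > 0

    edges = np.linspace(0.05, 4.0, 40)
    empirical = np.histogram(z, bins=edges)[0] / n / np.diff(edges)
    mids = 0.5 * (edges[:-1] + edges[1:])
    print(np.abs(empirical - pdf(mids)).max())  # small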

Cauchy ratio distribution

If two independent random variables, X and Y each follow a Cauchy distribution with median equal to zero and shape factor a {\displaystyle a}

p X ( x | a ) = a π ( a 2 + x 2 ) {\displaystyle p_{X}(x|a)={\frac {a}{\pi (a^{2}+x^{2})}}}

then the ratio distribution for the random variable Z = X / Y {\displaystyle Z=X/Y} is

p Z ( z | a ) = 1 π 2 ( z 2 1 ) ln ( z 2 ) . {\displaystyle p_{Z}(z|a)={\frac {1}{\pi ^{2}(z^{2}-1)}}\ln(z^{2}).}

This distribution does not depend on a {\displaystyle a} and the result stated by Springer (p158 Question 4.6) is not correct. The ratio distribution is similar to but not the same as the product distribution of the random variable W = X Y {\displaystyle W=XY} :

p W ( w | a ) = a 2 π 2 ( w 2 a 4 ) ln ( w 2 a 4 ) . {\displaystyle p_{W}(w|a)={\frac {a^{2}}{\pi ^{2}(w^{2}-a^{4})}}\ln \left({\frac {w^{2}}{a^{4}}}\right).}

More generally, if two independent random variables X and Y each follow a Cauchy distribution with median equal to zero and shape factor a {\displaystyle a} and b {\displaystyle b} respectively, then:

  1. The ratio distribution for the random variable Z = X / Y {\displaystyle Z=X/Y} is p Z ( z | a , b ) = a b π 2 ( b 2 z 2 a 2 ) ln ( b 2 z 2 a 2 ) . {\displaystyle p_{Z}(z|a,b)={\frac {ab}{\pi ^{2}(b^{2}z^{2}-a^{2})}}\ln \left({\frac {b^{2}z^{2}}{a^{2}}}\right).}
  2. The product distribution for the random variable W = X Y {\displaystyle W=XY} is p W ( w | a , b ) = a b π 2 ( w 2 a 2 b 2 ) ln ( w 2 a 2 b 2 ) . {\displaystyle p_{W}(w|a,b)={\frac {ab}{\pi ^{2}(w^{2}-a^{2}b^{2})}}\ln \left({\frac {w^{2}}{a^{2}b^{2}}}\right).}

The result for the ratio distribution can be obtained from the product distribution by replacing b {\displaystyle b} with 1 b . {\displaystyle {\frac {1}{b}}.}
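
A simulation check of the general ratio density (a, b arbitrary; the window below avoids z = a/b, where the formula is a removable 0/0):

    import numpy as np

    rng = np.random.default_rng(8)
    a, b, n = 1.0, 2.0, 2_000_000
    z = (a * rng.standard_cauchy(n)) / (b * rng.standard_cauchy(n))

    def pdf(z):
        return a * b / (np.pi**2 * (b**2 * z**2 - a**2)) * np.log(b**2 * z**2 / a**2)

    edges = np.linspace(0.6, 3.0, 25)  # stays clear of z = a/b = 0.5
    empirical = np.histogram(z, bins=edges)[0] / n / np.diff(edges)
    mids = 0.5 * (edges[:-1] + edges[1:])
    print(np.abs(empirical - pdf(mids)).max())  # small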

Ratio of standard normal to standard uniform

Main article: Slash distribution

If X has a standard normal distribution and Y has a standard uniform distribution, then Z = X / Y has a distribution known as the slash distribution, with probability density function

p Z ( z ) = { [ φ ( 0 ) − φ ( z ) ] / z 2 z ≠ 0 φ ( 0 ) / 2 z = 0 {\displaystyle p_{Z}(z)={\begin{cases}\left[\varphi (0)-\varphi (z)\right]/z^{2}\quad &z\neq 0\\\varphi (0)/2\quad &z=0,\\\end{cases}}}

where φ(z) is the probability density function of the standard normal distribution.
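
A simulation sketch of the slash density (bin edges arbitrary, chosen so the bin mid-points avoid z = 0, where the by-continuity value φ(0)/2 applies):

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(9)
    n = 1_000_000
    z = rng.standard_normal(n) / rng.uniform(size=n)

    def slash_pdf(z):
        # valid for z != 0; at z = 0 the density is phi(0)/2 by continuity
        return (norm.pdf(0) - norm.pdf(z)) / z**2

    edges = np.linspace(-4.0, 4.0, 41)
    empirical = np.histogram(z, bins=edges)[0] / n / np.diff(edges)
    mids = 0.5 * (edges[:-1] + edges[1:])
    print(np.abs(empirical - slash_pdf(mids)).max())  # small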

Chi-squared, Gamma, Beta distributions

Let G be a standard normal variable, and let Y and Z be chi-squared variables with m and n degrees of freedom respectively, all independent, with f χ ( x , k ) = x k 2 − 1 e − x / 2 2 k / 2 Γ ( k / 2 ) {\displaystyle f_{\chi }(x,k)={\frac {x^{{\frac {k}{2}}-1}e^{-x/2}}{2^{k/2}\Gamma (k/2)}}} . Then

G Y / m t m {\displaystyle {\frac {G}{\sqrt {Y/m}}}\sim t_{m}} the Student's t distribution
Y / m Z / n ∼ F m , n {\displaystyle {\frac {Y/m}{Z/n}}\sim F_{m,n}} i.e. Fisher's F-test distribution
Y Y + Z β ( m 2 , n 2 ) {\displaystyle {\frac {Y}{Y+Z}}\sim \beta ({\tfrac {m}{2}},{\tfrac {n}{2}})} the beta distribution
Y Z β ( m 2 , n 2 ) {\displaystyle \;\;{\frac {Y}{Z}}\sim \beta '({\tfrac {m}{2}},{\tfrac {n}{2}})} the standard beta prime distribution

If V 1 χ k 1 2 ( λ ) {\displaystyle V_{1}\sim {\chi '}_{k_{1}}^{2}(\lambda )} , a noncentral chi-squared distribution, and V 2 χ k 2 2 ( 0 ) {\displaystyle V_{2}\sim {\chi '}_{k_{2}}^{2}(0)} and V 1 {\displaystyle V_{1}} is independent of V 2 {\displaystyle V_{2}} then

V 1 / k 1 V 2 / k 2 F k 1 , k 2 ( λ ) {\displaystyle {\frac {V_{1}/k_{1}}{V_{2}/k_{2}}}\sim F'_{k_{1},k_{2}}(\lambda )} , a noncentral F-distribution.

m n F m , n = β ( m 2 , n 2 )  or  F m , n = β ( m 2 , n 2 , 1 , n m ) {\displaystyle {\frac {m}{n}}F'_{m,n}=\beta '({\tfrac {m}{2}},{\tfrac {n}{2}}){\text{ or }}F'_{m,n}=\beta '({\tfrac {m}{2}},{\tfrac {n}{2}},1,{\tfrac {n}{m}})} defines F m , n {\displaystyle F'_{m,n}} , Fisher's F density distribution, the PDF of the ratio of two Chi-squares with m, n degrees of freedom.

The CDF of the Fisher density, found in F-tables is defined in the beta prime distribution article. If we enter an F-test table with m = 3, n = 4 and 5% probability in the right tail, the critical value is found to be 6.59. This coincides with the integral

F 3 , 4 ( 6.59 ) = 6.59 β ( x ; m 2 , n 2 , 1 , n m ) d x = 0.05 {\displaystyle F_{3,4}(6.59)=\int _{6.59}^{\infty }\beta '(x;{\tfrac {m}{2}},{\tfrac {n}{2}},1,{\tfrac {n}{m}})dx=0.05}
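
The same critical value comes straight from SciPy's implementation of the F distribution:

    from scipy.stats import f

    print(f.isf(0.05, 3, 4))  # upper 5% critical value of F(3, 4): ~6.591
    print(f.sf(6.59, 3, 4))   # right-tail probability at 6.59: ~0.05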

For gamma distributions U and V with arbitrary shape parameters α1 and α2 and their scale parameters both set to unity, that is, U Γ ( α 1 , 1 ) , V Γ ( α 2 , 1 ) {\displaystyle U\sim \Gamma (\alpha _{1},1),V\sim \Gamma (\alpha _{2},1)} , where Γ ( x ; α , 1 ) = x α 1 e x Γ ( α ) {\displaystyle \Gamma (x;\alpha ,1)={\frac {x^{\alpha -1}e^{-x}}{\Gamma (\alpha )}}} , then

U U + V β ( α 1 , α 2 ) ,  expectation  = α 1 α 1 + α 2 {\displaystyle {\frac {U}{U+V}}\sim \beta (\alpha _{1},\alpha _{2}),\qquad {\text{ expectation }}={\frac {\alpha _{1}}{\alpha _{1}+\alpha _{2}}}}
U V β ( α 1 , α 2 ) ,  expectation  = α 1 α 2 1 , α 2 > 1 {\displaystyle {\frac {U}{V}}\sim \beta '(\alpha _{1},\alpha _{2}),\qquad \qquad {\text{ expectation }}={\frac {\alpha _{1}}{\alpha _{2}-1}},\;\alpha _{2}>1}
V U β ( α 2 , α 1 ) ,  expectation  = α 2 α 1 1 , α 1 > 1 {\displaystyle {\frac {V}{U}}\sim \beta '(\alpha _{2},\alpha _{1}),\qquad \qquad {\text{ expectation }}={\frac {\alpha _{2}}{\alpha _{1}-1}},\;\alpha _{1}>1}
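
These identities can be checked with a few lines of simulation (shape parameters arbitrary):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(10)
    a1, a2, n = 2.0, 3.0, 200_000
    u = rng.gamma(a1, size=n)
    v = rng.gamma(a2, size=n)

    # U/(U+V) should be Beta(a1, a2): the KS test should not reject
    print(stats.kstest(u / (u + v), "beta", args=(a1, a2)))
    # U/V is beta prime with mean a1/(a2 - 1) = 1 for these parameters
    print(np.mean(u / v))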

If U ∼ Γ ( x ; α , 1 ) {\displaystyle U\sim \Gamma (x;\alpha ,1)} , then θ U ∼ Γ ( x ; α , θ ) = x α − 1 e − x θ θ α Γ ( α ) {\displaystyle \theta U\sim \Gamma (x;\alpha ,\theta )={\frac {x^{\alpha -1}e^{-{\frac {x}{\theta }}}}{\theta ^{\alpha }\Gamma (\alpha )}}} . Note that here θ is a scale parameter, rather than a rate parameter.

If U Γ ( α 1 , θ 1 ) , V Γ ( α 2 , θ 2 ) {\displaystyle U\sim \Gamma (\alpha _{1},\theta _{1}),\;V\sim \Gamma (\alpha _{2},\theta _{2})} , then by rescaling the θ {\displaystyle \theta } parameter to unity we have

U θ 1 U θ 1 + V θ 2 = θ 2 U θ 2 U + θ 1 V β ( α 1 , α 2 ) {\displaystyle {\frac {\frac {U}{\theta _{1}}}{{\frac {U}{\theta _{1}}}+{\frac {V}{\theta _{2}}}}}={\frac {\theta _{2}U}{\theta _{2}U+\theta _{1}V}}\sim \beta (\alpha _{1},\alpha _{2})}
U θ 1 V θ 2 = θ 2 θ 1 U V β ( α 1 , α 2 ) {\displaystyle {\frac {\frac {U}{\theta _{1}}}{\frac {V}{\theta _{2}}}}={\frac {\theta _{2}}{\theta _{1}}}{\frac {U}{V}}\sim \beta '(\alpha _{1},\alpha _{2})}

Thus

U V ∼ β ′ ( α 1 , α 2 , 1 , θ 1 θ 2 )  and  E [ U V ] = θ 1 θ 2 α 1 α 2 − 1 {\displaystyle {\frac {U}{V}}\sim \beta '(\alpha _{1},\alpha _{2},1,{\frac {\theta _{1}}{\theta _{2}}})\quad {\text{ and }}\operatorname {E} \left[{\frac {U}{V}}\right]={\frac {\theta _{1}}{\theta _{2}}}{\frac {\alpha _{1}}{\alpha _{2}-1}}}

in which β ( α , β , p , q ) {\displaystyle \beta '(\alpha ,\beta ,p,q)} represents the generalised beta prime distribution.

In the foregoing it is apparent that if X β ( α 1 , α 2 , 1 , 1 ) β ( α 1 , α 2 ) {\displaystyle X\sim \beta '(\alpha _{1},\alpha _{2},1,1)\equiv \beta '(\alpha _{1},\alpha _{2})} then θ X β ( α 1 , α 2 , 1 , θ ) {\displaystyle \theta X\sim \beta '(\alpha _{1},\alpha _{2},1,\theta )} . More explicitly, since

β ( x ; α 1 , α 2 , 1 , R ) = 1 R β ( x R ; α 1 , α 2 ) {\displaystyle \beta '(x;\alpha _{1},\alpha _{2},1,R)={\frac {1}{R}}\beta '({\frac {x}{R}};\alpha _{1},\alpha _{2})}

if U Γ ( α 1 , θ 1 ) , V Γ ( α 2 , θ 2 ) {\displaystyle U\sim \Gamma (\alpha _{1},\theta _{1}),V\sim \Gamma (\alpha _{2},\theta _{2})} then

U V 1 R β ( x R ; α 1 , α 2 ) = ( x R ) α 1 1 ( 1 + x R ) α 1 + α 2 1 R B ( α 1 , α 2 ) , x 0 {\displaystyle {\frac {U}{V}}\sim {\frac {1}{R}}\beta '({\frac {x}{R}};\alpha _{1},\alpha _{2})={\frac {\left({\frac {x}{R}}\right)^{\alpha _{1}-1}}{\left(1+{\frac {x}{R}}\right)^{\alpha _{1}+\alpha _{2}}}}\cdot {\frac {1}{\;R\;B(\alpha _{1},\alpha _{2})}},\;\;x\geq 0}

where

R = θ 1 θ 2 , B ( α 1 , α 2 ) = Γ ( α 1 ) Γ ( α 2 ) Γ ( α 1 + α 2 ) {\displaystyle R={\frac {\theta _{1}}{\theta _{2}}},\;\;\;B(\alpha _{1},\alpha _{2})={\frac {\Gamma (\alpha _{1})\Gamma (\alpha _{2})}{\Gamma (\alpha _{1}+\alpha _{2})}}}

Rayleigh distributions

If X, Y are independent samples from the Rayleigh distribution f r ( r ) = ( r / σ 2 ) e r 2 / 2 σ 2 , r 0 {\displaystyle f_{r}(r)=(r/\sigma ^{2})e^{-r^{2}/2\sigma ^{2}},\;\;r\geq 0} , the ratio Z = X/Y follows the distribution

f z ( z ) = 2 z ( 1 + z 2 ) 2 , z 0 {\displaystyle f_{z}(z)={\frac {2z}{(1+z^{2})^{2}}},\;\;z\geq 0}

and has cdf

F z ( z ) = 1 1 1 + z 2 = z 2 1 + z 2 , z 0 {\displaystyle F_{z}(z)=1-{\frac {1}{1+z^{2}}}={\frac {z^{2}}{1+z^{2}}},\;\;\;z\geq 0}

The Rayleigh distribution has a scale as its only parameter. The distribution of the scaled ratio Z = α X / Y {\displaystyle Z=\alpha X/Y} then follows

f z ( z , α ) = 2 α 2 z ( α 2 + z 2 ) 2 , z > 0 {\displaystyle f_{z}(z,\alpha )={\frac {2\alpha ^{2}z}{(\alpha ^{2}+z^{2})^{2}}},\;\;z>0}

and has cdf

F z ( z , α ) = z 2 α 2 + z 2 , z ≥ 0 {\displaystyle F_{z}(z,\alpha )={\frac {z^{2}}{\alpha ^{2}+z^{2}}},\;\;\;z\geq 0}
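
A quick check of this cdf by simulation (α arbitrary):

    import numpy as np

    rng = np.random.default_rng(11)
    alpha, n = 1.7, 1_000_000
    z = alpha * rng.rayleigh(size=n) / rng.rayleigh(size=n)

    for zz in (0.5, 1.0, 2.0, 4.0):
        print(zz, np.mean(z <= zz), zz**2 / (alpha**2 + zz**2))  # empirical vs. cdf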

Fractional gamma distributions (including chi, chi-squared, exponential, Rayleigh and Weibull)

The generalized gamma distribution is

f ( x ; a , d , r ) = r Γ ( d / r ) a d x d 1 e ( x / a ) r x 0 ; a , d , r > 0 {\displaystyle f(x;a,d,r)={\frac {r}{\Gamma (d/r)a^{d}}}x^{d-1}e^{-(x/a)^{r}}\;x\geq 0;\;\;a,\;d,\;r>0}

which includes the regular gamma, chi, chi-squared, exponential, Rayleigh, Nakagami and Weibull distributions involving fractional powers. Note that here a is a scale parameter, rather than a rate parameter; d is a shape parameter.

If U f ( x ; a 1 , d 1 , r ) , V f ( x ; a 2 , d 2 , r )  are independent, and  W = U / V {\displaystyle U\sim f(x;a_{1},d_{1},r),\;\;V\sim f(x;a_{2},d_{2},r){\text{ are independent, and }}W=U/V}
then g ( w ) = r ( a 1 a 2 ) d 2 B ( d 1 r , d 2 r ) w d 2 1 ( 1 + ( a 2 a 1 ) r w r ) d 1 + d 2 r , w > 0 {\textstyle g(w)={\frac {r\left({\frac {a_{1}}{a_{2}}}\right)^{d_{2}}}{B\left({\frac {d_{1}}{r}},{\frac {d_{2}}{r}}\right)}}{\frac {w^{-d_{2}-1}}{\left(1+\left({\frac {a_{2}}{a_{1}}}\right)^{-r}w^{-r}\right)^{\frac {d_{1}+d_{2}}{r}}}},\;\;w>0}
where B ( u , v ) = Γ ( u ) Γ ( v ) Γ ( u + v ) {\displaystyle B(u,v)={\frac {\Gamma (u)\Gamma (v)}{\Gamma (u+v)}}}
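
The ratio density g(w) can be verified by simulation, using the fact that if G ~ Gamma(d/r, 1) then a·G^(1/r) has the generalized gamma density f(x; a, d, r); all parameter values below are arbitrary:

    import numpy as np
    from scipy.special import beta as B

    rng = np.random.default_rng(12)
    a1, d1, a2, d2, r, n = 1.0, 3.0, 2.0, 2.0, 1.5, 1_000_000
    u = a1 * rng.gamma(d1 / r, size=n) ** (1 / r)  # f(x; a1, d1, r) samples
    v = a2 * rng.gamma(d2 / r, size=n) ** (1 / r)  # f(x; a2, d2, r) samples
    w = u / v

    def g(w):
        return (r * (a1 / a2) ** d2 / B(d1 / r, d2 / r) * w ** (-d2 - 1)
                / (1 + (a2 / a1) ** (-r) * w ** (-r)) ** ((d1 + d2) / r))

    edges = np.linspace(0.05, 3.0, 30)
    empirical = np.histogram(w, bins=edges)[0] / n / np.diff(edges)
    mids = 0.5 * (edges[:-1] + edges[1:])
    print(np.abs(empirical - g(mids)).max())  # small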

Modelling a mixture of different scaling factors

In the ratios above, Gamma samples U, V may have differing shape parameters α 1 , α 2 {\displaystyle \alpha _{1},\alpha _{2}} but must be drawn from the same distribution x α − 1 e − x θ θ α Γ ( α ) {\displaystyle {\frac {x^{\alpha -1}e^{-{\frac {x}{\theta }}}}{\theta ^{\alpha }\Gamma (\alpha )}}} with equal scaling θ {\displaystyle \theta } .

In situations where U and V are differently scaled, a variables transformation allows the modified random ratio pdf to be determined. Let X = U U + V = 1 1 + B {\displaystyle X={\frac {U}{U+V}}={\frac {1}{1+B}}} where U Γ ( α 1 , θ ) , V Γ ( α 2 , θ ) , θ {\displaystyle U\sim \Gamma (\alpha _{1},\theta ),V\sim \Gamma (\alpha _{2},\theta ),\theta } arbitrary and, from above, X B e t a ( α 1 , α 2 ) , B = V / U B e t a ( α 2 , α 1 ) {\displaystyle X\sim Beta(\alpha _{1},\alpha _{2}),B=V/U\sim Beta'(\alpha _{2},\alpha _{1})} .

Rescale V arbitrarily, defining Y U U + φ V = 1 1 + φ B , 0 φ {\displaystyle Y\sim {\frac {U}{U+\varphi V}}={\frac {1}{1+\varphi B}},\;\;0\leq \varphi \leq \infty }

We have B = 1 X X {\displaystyle B={\frac {1-X}{X}}} and substitution into Y gives Y = X φ + ( 1 φ ) X , d Y / d X = φ ( φ + ( 1 φ ) X ) 2 {\displaystyle Y={\frac {X}{\varphi +(1-\varphi )X}},dY/dX={\frac {\varphi }{(\varphi +(1-\varphi )X)^{2}}}}

Transforming X to Y gives f Y ( Y ) = f X ( X ) | d Y / d X | = β ( X , α 1 , α 2 ) φ / [ φ + ( 1 − φ ) X ] 2 {\displaystyle f_{Y}(Y)={\frac {f_{X}(X)}{|dY/dX|}}={\frac {\beta (X,\alpha _{1},\alpha _{2})}{\varphi /[\varphi +(1-\varphi )X]^{2}}}}

Noting X = φ Y 1 ( 1 φ ) Y {\displaystyle X={\frac {\varphi Y}{1-(1-\varphi )Y}}} we finally have

f Y ( Y , φ ) = φ [ 1 − ( 1 − φ ) Y ] 2 β ( φ Y 1 − ( 1 − φ ) Y , α 1 , α 2 ) , 0 ≤ Y ≤ 1 {\displaystyle f_{Y}(Y,\varphi )={\frac {\varphi }{[1-(1-\varphi )Y]^{2}}}\beta \left({\frac {\varphi Y}{1-(1-\varphi )Y}},\alpha _{1},\alpha _{2}\right),\;\;\;0\leq Y\leq 1}

Thus, if U Γ ( α 1 , θ 1 ) {\displaystyle U\sim \Gamma (\alpha _{1},\theta _{1})} and V Γ ( α 2 , θ 2 ) {\displaystyle V\sim \Gamma (\alpha _{2},\theta _{2})}
then Y = U U + V {\displaystyle Y={\frac {U}{U+V}}} is distributed as f Y ( Y , φ ) {\displaystyle f_{Y}(Y,\varphi )} with φ = θ 2 θ 1 {\displaystyle \varphi ={\frac {\theta _{2}}{\theta _{1}}}}

The distribution of Y is limited here to the interval [0, 1]. It can be generalized by scaling such that if Y ∼ f Y ( Y , φ ) {\displaystyle Y\sim f_{Y}(Y,\varphi )} then

Θ Y f Y ( Y , φ , Θ ) {\displaystyle \Theta Y\sim f_{Y}(Y,\varphi ,\Theta )}

where f Y ( Y , φ , Θ ) = φ / Θ [ 1 − ( 1 − φ ) Y / Θ ] 2 β ( φ Y / Θ 1 − ( 1 − φ ) Y / Θ , α 1 , α 2 ) , 0 ≤ Y ≤ Θ {\displaystyle f_{Y}(Y,\varphi ,\Theta )={\frac {\varphi /\Theta }{[1-(1-\varphi )Y/\Theta ]^{2}}}\beta \left({\frac {\varphi Y/\Theta }{1-(1-\varphi )Y/\Theta }},\alpha _{1},\alpha _{2}\right),\;\;\;0\leq Y\leq \Theta }

Θ Y {\displaystyle \Theta Y} is then a sample from Θ U U + φ V {\displaystyle {\frac {\Theta U}{U+\varphi V}}}

Reciprocals of samples from beta distributions

Though not ratio distributions of two variables, the following identities for one variable are useful:

If X β ( α , β ) {\displaystyle X\sim \beta (\alpha ,\beta )} then x = X 1 X β ( α , β ) {\displaystyle \mathbf {x} ={\frac {X}{1-X}}\sim \beta '(\alpha ,\beta )}
If Y β ( α , β ) {\displaystyle \mathbf {Y} \sim \beta '(\alpha ,\beta )} then y = 1 Y β ( β , α ) {\displaystyle y={\frac {1}{\mathbf {Y} }}\sim \beta '(\beta ,\alpha )}

combining the latter two equations yields

If X β ( α , β ) {\displaystyle X\sim \beta (\alpha ,\beta )} then x = 1 X 1 β ( β , α ) {\displaystyle \mathbf {x} ={\frac {1}{X}}-1\sim \beta '(\beta ,\alpha )} .
If Y β ( α , β ) {\displaystyle \mathbf {Y} \sim \beta '(\alpha ,\beta )} then y = Y 1 + Y β ( α , β ) {\displaystyle y={\frac {\mathbf {Y} }{1+\mathbf {Y} }}\sim \beta (\alpha ,\beta )}

Corollary

1 1 + Y = Y 1 Y 1 + 1 β ( β , α ) {\displaystyle {\frac {1}{1+\mathbf {Y} }}={\frac {\mathbf {Y} ^{-1}}{\mathbf {Y} ^{-1}+1}}\sim \beta (\beta ,\alpha )}
1 + Y { β ( β , α ) } 1 {\displaystyle 1+\mathbf {Y} \sim \{\;\beta (\beta ,\alpha )\;\}^{-1}} , the distribution of the reciprocals of β ( β , α ) {\displaystyle \beta (\beta ,\alpha )} samples.

If U Γ ( α , 1 ) , V Γ ( β , 1 ) {\displaystyle U\sim \Gamma (\alpha ,1),V\sim \Gamma (\beta ,1)} then U V β ( α , β ) {\displaystyle {\frac {U}{V}}\sim \beta '(\alpha ,\beta )} and

U / V 1 + U / V = U V + U β ( α , β ) {\displaystyle {\frac {U/V}{1+U/V}}={\frac {U}{V+U}}\sim \beta (\alpha ,\beta )}

Further results can be found in the Inverse distribution article.

  • If X , Y {\displaystyle X,\;Y} are independent exponential random variables with mean μ, then X − Y is a double exponential random variable with mean 0 and scale μ.

Binomial distribution

This result was derived by Katz et al.

Suppose X Binomial ( n , p 1 ) {\displaystyle X\sim {\text{Binomial}}(n,p_{1})} and Y Binomial ( m , p 2 ) {\displaystyle Y\sim {\text{Binomial}}(m,p_{2})} and X {\displaystyle X} , Y {\displaystyle Y} are independent. Let T = X / n Y / m {\displaystyle T={\frac {X/n}{Y/m}}} .

Then log ( T ) {\displaystyle \log(T)} is approximately normally distributed with mean log ( p 1 / p 2 ) {\displaystyle \log(p_{1}/p_{2})} and variance ( 1 / p 1 ) 1 n + ( 1 / p 2 ) 1 m {\displaystyle {\frac {(1/p_{1})-1}{n}}+{\frac {(1/p_{2})-1}{m}}} .
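
A simulation sketch of this approximation (n, m and the success probabilities are arbitrary; samples with a zero count are dropped so the log is defined):

    import numpy as np

    rng = np.random.default_rng(13)
    n_, m_, p1, p2, trials = 500, 400, 0.3, 0.2, 200_000
    x = rng.binomial(n_, p1, trials)
    y = rng.binomial(m_, p2, trials)
    ok = (x > 0) & (y > 0)  # log(T) requires positive counts
    logT = np.log((x[ok] / n_) / (y[ok] / m_))

    print(logT.mean(), np.log(p1 / p2))               # ~0.405
    print(logT.var(), (1/p1 - 1)/n_ + (1/p2 - 1)/m_)  # ~0.0147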

The binomial ratio distribution is of significance in clinical trials: if the distribution of T is known as above, the probability of a given ratio arising purely by chance can be estimated, i.e. a false positive trial. A number of papers compare the robustness of different approximations for the binomial ratio.

Poisson and truncated Poisson distributions

In the ratio of Poisson variables R = X/Y there is a problem that Y is zero with finite probability, so R is undefined. To counter this, consider the truncated, or censored, ratio R' = X/Y' where zero samples of Y are discarded. Moreover, in many medical-type surveys, there are systematic problems with the reliability of the zero samples of both X and Y and it may be good practice to ignore the zero samples anyway.

Since the probability of a null Poisson sample is e − λ {\displaystyle e^{-\lambda }} , the generic pdf of a left-truncated Poisson distribution is

p ~ x ( x ; λ ) = 1 1 e λ e λ λ x x ! , x 1 , 2 , 3 , {\displaystyle {\tilde {p}}_{x}(x;\lambda )={\frac {1}{1-e^{-\lambda }}}{\frac {e^{-\lambda }\lambda ^{x}}{x!}},\;\;\;x\in 1,2,3,\cdots }

which sums to unity. Following Cohen, for n independent trials, the multidimensional truncated pdf is

p ~ ( x 1 , x 2 , , x n ; λ ) = 1 ( 1 e λ ) n i = 1 n e λ λ x i x i ! , x i 1 , 2 , 3 , {\displaystyle {\tilde {p}}(x_{1},x_{2},\dots ,x_{n};\lambda )={\frac {1}{(1-e^{-\lambda })^{n}}}\prod _{i=1}^{n}{\frac {e^{-\lambda }\lambda ^{x_{i}}}{x_{i}!}},\;\;\;x_{i}\in 1,2,3,\cdots }

and the log likelihood becomes

L = ln ( p ~ ) = n ln ( 1 e λ ) n λ + ln ( λ ) 1 n x i ln 1 n ( x i ! ) , x i 1 , 2 , 3 , {\displaystyle L=\ln({\tilde {p}})=-n\ln(1-e^{-\lambda })-n\lambda +\ln(\lambda )\sum _{1}^{n}x_{i}-\ln \prod _{1}^{n}(x_{i}!),\;\;\;x_{i}\in 1,2,3,\cdots }

On differentiation we get

d L / d λ = n 1 e λ + 1 λ i = 1 n x i {\displaystyle dL/d\lambda ={\frac {-n}{1-e^{-\lambda }}}+{\frac {1}{\lambda }}\sum _{i=1}^{n}x_{i}}

and setting to zero gives the maximum likelihood estimate λ ^ M L {\displaystyle {\hat {\lambda }}_{ML}}

λ ^ M L 1 e λ ^ M L = 1 n i = 1 n x i = x ¯ {\displaystyle {\frac {{\hat {\lambda }}_{ML}}{1-e^{-{\hat {\lambda }}_{ML}}}}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}={\bar {x}}}

Note that as λ ^ → 0 {\displaystyle {\hat {\lambda }}\to 0} then x ¯ → 1 {\displaystyle {\bar {x}}\to 1} so the truncated maximum likelihood λ {\displaystyle \lambda } estimate, though correct for both truncated and untruncated distributions, gives a truncated mean x ¯ {\displaystyle {\bar {x}}} value which is highly biased relative to the untruncated one. Nevertheless, it appears that x ¯ {\displaystyle {\bar {x}}} is a sufficient statistic for λ {\displaystyle \lambda } since λ ^ M L {\displaystyle {\hat {\lambda }}_{ML}} depends on the data only through the sample mean x ¯ = 1 n ∑ i = 1 n x i {\displaystyle {\bar {x}}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}} in the previous equation, which is consistent with the methodology of the conventional Poisson distribution.

Absent any closed form solutions, the following approximate reversion for truncated λ {\displaystyle \lambda } is valid over the whole range 0 λ ; 1 x ¯ {\displaystyle 0\leq \lambda \leq \infty ;\;1\leq {\bar {x}}\leq \infty } .

λ ^ = x ¯ e ( x ¯ 1 ) 0.07 ( x ¯ 1 ) e 0.666 ( x ¯ 1 ) + ϵ , | ϵ | < 0.006 {\displaystyle {\hat {\lambda }}={\bar {x}}-e^{-({\bar {x}}-1)}-0.07({\bar {x}}-1)e^{-0.666({\bar {x}}-1)}+\epsilon ,\;\;\;|\epsilon |<0.006}

which compares with the non-truncated version which is simply λ ^ = x ¯ {\displaystyle {\hat {\lambda }}={\bar {x}}} . Taking the ratio R = λ ^ X / λ ^ Y {\displaystyle R={\hat {\lambda }}_{X}/{\hat {\lambda }}_{Y}} is a valid operation even though λ ^ X {\displaystyle {\hat {\lambda }}_{X}} may use a non-truncated model while λ ^ Y {\displaystyle {\hat {\lambda }}_{Y}} has a left-truncated one.
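
A sketch that solves the maximum likelihood equation numerically and compares it with the approximate reversion (true λ arbitrary):

    import numpy as np
    from scipy.optimize import brentq

    rng = np.random.default_rng(14)
    lam_true, n = 1.3, 100_000
    x = rng.poisson(lam_true, 4 * n)
    x = x[x > 0][:n]  # left-truncated sample: zeros discarded
    xbar = x.mean()

    # Solve lambda/(1 - exp(-lambda)) = xbar for the truncated MLE
    lam_ml = brentq(lambda L: L / (1 - np.exp(-L)) - xbar, 1e-9, 50.0)

    # Approximate closed-form reversion quoted above
    lam_approx = (xbar - np.exp(-(xbar - 1))
                  - 0.07 * (xbar - 1) * np.exp(-0.666 * (xbar - 1)))
    print(lam_ml, lam_approx, lam_true)  # all close; |error| < 0.006 claimed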

The asymptotic variance of λ ^ {\displaystyle {\hat {\lambda }}} for large n λ {\displaystyle n\lambda } (and the Cramér–Rao bound) is

V a r ( λ ^ ) ≥ − ( E [ δ 2 L δ λ 2 ] λ = λ ^ ) − 1 {\displaystyle \mathbb {Var} ({\hat {\lambda }})\geq -\left(\mathbb {E} \left[{\frac {\delta ^{2}L}{\delta \lambda ^{2}}}\right]_{\lambda ={\hat {\lambda }}}\right)^{-1}}

in which substituting L gives

δ 2 L δ λ 2 = − n [ x ¯ λ 2 − e − λ ( 1 − e − λ ) 2 ] {\displaystyle {\frac {\delta ^{2}L}{\delta \lambda ^{2}}}=-n\left[{\frac {\bar {x}}{\lambda ^{2}}}-{\frac {e^{-\lambda }}{(1-e^{-\lambda })^{2}}}\right]}

Then substituting x ¯ {\displaystyle {\bar {x}}} from the equation above, we get Cohen's variance estimate

V a r ( λ ^ ) λ ^ n ( 1 e λ ^ ) 2 1 ( λ ^ + 1 ) e λ ^ {\displaystyle \mathbb {Var} ({\hat {\lambda }})\geq {\frac {\hat {\lambda }}{n}}{\frac {(1-e^{-{\hat {\lambda }}})^{2}}{1-({\hat {\lambda }}+1)e^{-{\hat {\lambda }}}}}}

The variance of the point estimate of the mean λ {\displaystyle \lambda } , on the basis of n trials, decreases asymptotically to zero as n increases to infinity. For small λ {\displaystyle \lambda } it diverges from the truncated pdf variance in Springael for example, who quotes a variance of

V a r ( λ ) = λ / n 1 − e − λ [ 1 − λ e − λ 1 − e − λ ] {\displaystyle \mathbb {Var} (\lambda )={\frac {\lambda /n}{1-e^{-\lambda }}}\left[1-{\frac {\lambda e^{-\lambda }}{1-e^{-\lambda }}}\right]}

for n samples in the left-truncated pdf shown at the top of this section. Cohen showed that the variance of the estimate relative to the variance of the pdf, V a r ( λ ^ ) / V a r ( λ ) {\displaystyle \mathbb {Var} ({\hat {\lambda }})/\mathbb {Var} (\lambda )} , ranges from 1 for large λ {\displaystyle \lambda } (100% efficient) up to 2 as λ {\displaystyle \lambda } approaches zero (50% efficient).

These mean and variance parameter estimates, together with parallel estimates for X, can be applied to Normal or Binomial approximations for the Poisson ratio. Samples from trials may not be a good fit for the Poisson process; a further discussion of Poisson truncation is by Dietz and Bohning and there is a Zero-truncated Poisson distribution Wikipedia entry.

Double Lomax distribution

This distribution is the ratio of two Laplace distributions. Let X and Y be standard Laplace identically distributed random variables and let z = X / Y. Then the probability distribution of z is

f ( z ) = 1 2 ( 1 + | z | ) 2 {\displaystyle f(z)={\frac {1}{2(1+|z|)^{2}}}}

Let the means of X and Y be a. Then the standard double Lomax distribution is symmetric around a.

This distribution has an infinite mean and variance.

If Z has a standard double Lomax distribution, then 1/Z also has a standard double Lomax distribution.

The standard double Lomax distribution is unimodal and has heavier tails than the Laplace distribution.

For 0 < a < 1, the a-th absolute moment exists:

E ( | Z | a ) = Γ ( 1 + a ) Γ ( 1 − a ) {\displaystyle E(|Z|^{a})=\Gamma (1+a)\,\Gamma (1-a)}

where Γ is the gamma function.
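
A numerical check of this moment (a is arbitrary in (0,1); a < 1/2 keeps the Monte Carlo estimate well behaved because |Z|^(2a) then has finite mean):

    import numpy as np
    from scipy.special import gamma as G

    rng = np.random.default_rng(15)
    n, a = 2_000_000, 0.3
    z = rng.laplace(size=n) / rng.laplace(size=n)  # standard double Lomax

    print(np.mean(np.abs(z) ** a))  # Monte Carlo estimate
    print(G(1 + a) * G(1 - a))      # = pi a / sin(pi a), ~1.165 for a = 0.3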

Ratio distributions in multivariate analysis

Ratio distributions also appear in multivariate analysis. If the random matrices X and Y follow a Wishart distribution then the ratio of the determinants

φ = | X | / | Y | {\displaystyle \varphi =|\mathbf {X} |/|\mathbf {Y} |}

is proportional to the product of independent F random variables. In the case where X and Y are from independent standardized Wishart distributions then the ratio

Λ = | X | / | X + Y | {\displaystyle \Lambda ={|\mathbf {X} |/|\mathbf {X} +\mathbf {Y} |}}

has a Wilks' lambda distribution.

Ratios of Quadratic Forms involving Wishart Matrices

In relation to Wishart matrix distributions if S W p ( Σ , ν + 1 ) {\displaystyle S\sim W_{p}(\Sigma ,\nu +1)} is a sample Wishart matrix and vector V {\displaystyle V} is arbitrary, but statistically independent, corollary 3.2.9 of Muirhead states

V T S V V T Σ V χ ν 2 {\displaystyle {\frac {V^{T}SV}{V^{T}\Sigma V}}\sim \chi _{\nu }^{2}}

The discrepancy of one in the sample numbers arises from estimation of the sample mean when forming the sample covariance, a consequence of Cochran's theorem. Similarly
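
A simulation sketch of the first identity (Σ, V and the degrees of freedom are arbitrary; the sample covariance is formed from ν + 1 centered rows, leaving ν degrees of freedom):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(16)
    p, nu, reps = 3, 10, 20_000
    Sigma = np.array([[2.0, 0.5, 0.0],
                      [0.5, 1.0, 0.3],
                      [0.0, 0.3, 1.5]])
    V = np.array([1.0, -2.0, 0.5])  # arbitrary fixed vector
    L = np.linalg.cholesky(Sigma)

    stat = np.empty(reps)
    for i in range(reps):
        G = rng.standard_normal((nu + 1, p)) @ L.T  # nu + 1 rows ~ N(0, Sigma)
        G -= G.mean(axis=0)                         # centering costs one d.o.f.
        S = G.T @ G                                 # sample Wishart matrix
        stat[i] = V @ S @ V / (V @ Sigma @ V)

    print(stats.kstest(stat, "chi2", args=(nu,)))  # should not reject chi2_nu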

V T Σ 1 V V T S 1 V χ ν p + 1 2 {\displaystyle {\frac {V^{T}\Sigma ^{-1}V}{V^{T}S^{-1}V}}\sim \chi _{\nu -p+1}^{2}}

which is Theorem 3.2.12 of Muirhead.

Notes

  1. Note, however, that X 1 {\displaystyle X_{1}} and X 2 {\displaystyle X_{2}} can be individually log-normally distributed without having a bivariate log-normal distribution. As of 2022-06-08 the Wikipedia article on "Copula (probability theory)" includes a density and contour plot of two Normal marginals joined with a Gumbel copula, where the joint distribution is not bivariate normal.

References

  1. Geary, R. C. (1930). "The Frequency Distribution of the Quotient of Two Normal Variates". Journal of the Royal Statistical Society. 93 (3): 442–446. doi:10.2307/2342070. JSTOR 2342070.
  2. Fieller, E. C. (November 1932). "The Distribution of the Index in a Normal Bivariate Population". Biometrika. 24 (3/4): 428–440. doi:10.2307/2331976. JSTOR 2331976.
  3. Curtiss, J. H. (December 1941). "On the Distribution of the Quotient of Two Chance Variables". The Annals of Mathematical Statistics. 12 (4): 409–421. doi:10.1214/aoms/1177731679. JSTOR 2235953.
  4. Marsaglia, George (April 1964). Ratios of Normal Variables and Ratios of Sums of Uniform Variables. Defense Technical Information Center.
  5. Marsaglia, George (March 1965). "Ratios of Normal Variables and Ratios of Sums of Uniform Variables". Journal of the American Statistical Association. 60 (309): 193–204. doi:10.2307/2283145. JSTOR 2283145. Archived from the original on September 23, 2017.
  6. Hinkley, D. V. (December 1969). "On the Ratio of Two Correlated Normal Random Variables". Biometrika. 56 (3): 635–639. doi:10.2307/2334671. JSTOR 2334671.
  7. Hayya, Jack; Armstrong, Donald; Gressis, Nicolas (July 1975). "A Note on the Ratio of Two Normally Distributed Variables". Management Science. 21 (11): 1338–1341. doi:10.1287/mnsc.21.11.1338. JSTOR 2629897.
  8. Springer, Melvin Dale (1979). The Algebra of Random Variables. Wiley. ISBN 0-471-01406-0.
  9. Pham-Gia, T.; Turkkan, N.; Marchand, E. (2006). "Density of the Ratio of Two Normal Random Variables and Applications". Communications in Statistics – Theory and Methods. 35 (9). Taylor & Francis: 1569–1591. doi:10.1080/03610920600683689. S2CID 120891296.
  10. Brody, James P.; Williams, Brian A.; Wold, Barbara J.; Quake, Stephen R. (October 2002). "Significance and statistical errors in the analysis of DNA microarray data" (PDF). Proc Natl Acad Sci U S A. 99 (20): 12975–12978. Bibcode:2002PNAS...9912975B. doi:10.1073/pnas.162468199. PMC 130571. PMID 12235357.
  11. Šimon, Ján; Ftorek, Branislav (2022-09-15). "Basic Statistical Properties of the Knot Efficiency". Symmetry. 14 (9). MDPI: 1926. Bibcode:2022Symm...14.1926S. doi:10.3390/sym14091926. ISSN 2073-8994.
  12. Díaz-Francés, Eloísa; Rubio, Francisco J. (2012-01-24). "On the existence of a normal approximation to the distribution of the ratio of two independent normal random variables". Statistical Papers. 54 (2). Springer: 309–323. doi:10.1007/s00362-012-0429-2. ISSN 0932-5026. S2CID 122038290.
  13. Baxley, R. T.; Waldenhorst, B. T.; Acosta-Marum, G. (2010). "Complex Gaussian Ratio Distribution with Applications for Error Rate Calculation in Fading Channels with Imperfect CSI". 2010 IEEE Global Telecommunications Conference GLOBECOM 2010. pp. 1–5. doi:10.1109/GLOCOM.2010.5683407. ISBN 978-1-4244-5636-9. S2CID 14100052.
  14. Sourisseau, M.; Wu, H.-T.; Zhou, Z. (October 2022). "Asymptotic analysis of synchrosqueezing transform—toward statistical inference with nonlinear-type time-frequency analysis". Annals of Statistics. 50 (5): 2694–2712. arXiv:1904.09534. doi:10.1214/22-AOS2203.
  15. Of course, any invocation of a central limit theorem assumes suitable, commonly met regularity conditions, e.g., finite variance.
  16. Kermond, John (2010). "An Introduction to the Algebra of Random Variables". Mathematical Association of Victoria 47th Annual Conference Proceedings – New Curriculum. New Opportunities. The Mathematical Association of Victoria: 1–16. ISBN 978-1-876949-50-1.
  17. "SLAPPF". Statistical Engineering Division, National Institute of Science and Technology. Retrieved 2009-07-02.
  18. Hamedani, G. G. (October 2013). "Characterizations of Distribution of Ratio of Rayleigh Random Variables". Pakistan Journal of Statistics. 29 (4): 369–376.
  19. Raja Rao, B.; Garg, M. L. (1969). "A note on the generalized (positive) Cauchy distribution". Canadian Mathematical Bulletin. 12 (6): 865–868. doi:10.4153/CMB-1969-114-2.
  20. Katz, D.; et al. (1978). "Obtaining confidence intervals for the risk ratio in cohort studies". Biometrics. 34: 469–474.
  21. Cohen, A. Clifford (June 1960). "Estimating the Parameter in a Conditional Poisson Distribution". Biometrics. 16 (2): 203–211. doi:10.2307/2527552. JSTOR 2527552.
  22. Springael, Johan (2006). "On the sum of independent zero-truncated Poisson random variables" (PDF). University of Antwerp, Faculty of Business and Economics.
  23. Dietz, Ekkehart; Bohning, Dankmar (2000). "On Estimation of the Poisson Parameter in Zero-Modified Poisson Models". Computational Statistics & Data Analysis. 34 (4): 441–459. doi:10.1016/S0167-9473(99)00111-5.
  24. Bindu, P.; Sangita, K. (2015). "Double Lomax distribution and its applications". Statistica. LXXV (3): 331–342.
  25. Brennan, L. E.; Reed, I. S. (January 1982). "An Adaptive Array Signal Processing Algorithm for Communications". IEEE Transactions on Aerospace and Electronic Systems. AES-18 (1): 124–130. Bibcode:1982ITAES..18..124B. doi:10.1109/TAES.1982.309212. S2CID 45721922.
  26. Muirhead, Robb (1982). Aspects of Multivariate Statistical Theory. USA: Wiley. pp. 96, Theorem 3.2.12.
