Comonotonicity


In probability theory, comonotonicity mainly refers to the perfect positive dependence between the components of a random vector, essentially saying that they can be represented as increasing functions of a single random variable. In two dimensions it is also possible to consider perfect negative dependence, which is called countermonotonicity.

Comonotonicity is also related to the comonotonic additivity of the Choquet integral.

The concept of comonotonicity has applications in financial risk management and actuarial science, see e.g. Dhaene et al. (2002a) and Dhaene et al. (2002b). In particular, the sum of the components X1 + X2 + · · · + Xn is the riskiest when the joint probability distribution of the random vector (X1, X2, . . . , Xn) is comonotonic. Furthermore, the α-quantile of the sum then equals the sum of the α-quantiles of its components; comonotonic random variables are thus quantile-additive. In practical risk-management terms, this means that diversification yields minimal (or possibly no) variance reduction.
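Quantile additivity can be checked numerically by driving two marginals with a single uniform variable, so the pair is comonotonic by construction. A minimal sketch (the exponential and Pareto marginals below are hypothetical choices for illustration, not taken from the references above):

```python
import numpy as np

# Hypothetical marginals: X1 ~ Exp(1) and X2 with a Pareto(alpha = 3) tail,
# both written as increasing functions of the same uniform U, so the pair
# (X1, X2) is comonotonic by construction.
rng = np.random.default_rng(42)
u = rng.uniform(size=1_000_000)

x1 = -np.log1p(-u)              # quantile function of Exp(1)
x2 = (1.0 - u) ** (-1.0 / 3.0)  # quantile function of Pareto(alpha = 3)

alpha = 0.99
q_sum = np.quantile(x1 + x2, alpha)                      # quantile of the sum
q_parts = np.quantile(x1, alpha) + np.quantile(x2, alpha)  # sum of quantiles
print(q_sum, q_parts)  # the two agree up to sampling/interpolation error
```

Because the sum is itself an increasing function of U, its empirical α-quantile matches the sum of the marginal α-quantiles up to Monte Carlo error.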

For extensions of comonotonicity, see Jouini & Napp (2004) and Puccetti & Scarsini (2010).

Definitions

Comonotonicity of subsets of R

A subset S of Rn is called comonotonic (sometimes also nondecreasing) if, for all (x1, x2, . . . , xn) and (y1, y2, . . . , yn) in S with xi < yi for some i ∈ {1, 2, . . . , n}, it follows that xj ≤ yj for all j ∈ {1, 2, . . . , n}.

This means that S is a totally ordered set.
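For a finite set of points, this total-order characterization is easy to test directly. A small sketch (the function name `is_comonotonic` is chosen here for illustration):

```python
from itertools import combinations

def is_comonotonic(points):
    """Return True if every pair of points in this finite subset of R^n
    is componentwise comparable, i.e. the set is totally ordered."""
    for p, q in combinations(points, 2):
        if not (all(a <= b for a, b in zip(p, q))
                or all(b <= a for a, b in zip(p, q))):
            return False
    return True

print(is_comonotonic([(0, 0), (1, 2), (3, 5)]))  # True: a "staircase" set
print(is_comonotonic([(0, 1), (1, 0)]))          # False: incomparable points
```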

Comonotonicity of probability measures on R

Let μ be a probability measure on the n-dimensional Euclidean space Rn and let F denote its multivariate cumulative distribution function, that is,

{\displaystyle F(x_{1},\ldots ,x_{n}):=\mu {\bigl (}\{(y_{1},\ldots ,y_{n})\in \mathbb {R} ^{n}\mid y_{1}\leq x_{1},\ldots ,y_{n}\leq x_{n}\}{\bigr )},\qquad (x_{1},\ldots ,x_{n})\in \mathbb {R} ^{n}.}

Furthermore, let F1, . . . , Fn denote the cumulative distribution functions of the n one-dimensional marginal distributions of μ, that is,

{\displaystyle F_{i}(x):=\mu {\bigl (}\{(y_{1},\ldots ,y_{n})\in \mathbb {R} ^{n}\mid y_{i}\leq x\}{\bigr )},\qquad x\in \mathbb {R} }

for every i ∈ {1, 2, . . . , n}. Then μ is called comonotonic if

{\displaystyle F(x_{1},\ldots ,x_{n})=\min _{i\in \{1,\ldots ,n\}}F_{i}(x_{i}),\qquad (x_{1},\ldots ,x_{n})\in \mathbb {R} ^{n}.}

Note that the probability measure μ is comonotonic if and only if its support S is comonotonic according to the above definition.

Comonotonicity of Rn-valued random vectors

An Rn-valued random vector X = (X1, . . . , Xn) is called comonotonic if its multivariate distribution (the pushforward measure) is comonotonic; that is,

{\displaystyle \Pr(X_{1}\leq x_{1},\ldots ,X_{n}\leq x_{n})=\min _{i\in \{1,\ldots ,n\}}\Pr(X_{i}\leq x_{i}),\qquad (x_{1},\ldots ,x_{n})\in \mathbb {R} ^{n}.}

Properties

An Rn-valued random vector X = (X1, . . . , Xn) is comonotonic if and only if it can be represented as

{\displaystyle (X_{1},\ldots ,X_{n})=_{\text{d}}(F_{X_{1}}^{-1}(U),\ldots ,F_{X_{n}}^{-1}(U)),}

where =d stands for equality in distribution, the functions on the right-hand side are the left-continuous generalized inverses of the cumulative distribution functions FX1, . . . , FXn, and U is a random variable uniformly distributed on the unit interval. More generally, a random vector is comonotonic if and only if it agrees in distribution with a random vector whose components are all non-decreasing functions (or all non-increasing functions) of the same random variable.
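This representation gives a direct recipe for sampling a comonotonic vector: draw one uniform U and push it through each marginal quantile function. A short sketch (the two marginals below are hypothetical choices):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=500_000)   # a single uniform driver U

# Push the same U through two different quantile functions:
x1 = -2.0 * np.log1p(-u)        # quantile function of Exp(mean 2)
x2 = np.sqrt(u)                 # quantile function of Beta(2, 1)

# Both components are increasing in U, so their ranks coincide and the
# Spearman rank correlation is 1.
r1 = np.argsort(np.argsort(x1))  # ranks of x1
r2 = np.argsort(np.argsort(x2))  # ranks of x2
rho = np.corrcoef(r1, r2)[0, 1]
print(rho)  # -> 1.0
```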

Upper bounds

Upper Fréchet–Hoeffding bound for cumulative distribution functions

Main article: Fréchet–Hoeffding copula bounds

Let X = (X1, . . . , Xn) be an Rn-valued random vector. Then, for every i ∈ {1, 2, . . . , n},

{\displaystyle \Pr(X_{1}\leq x_{1},\ldots ,X_{n}\leq x_{n})\leq \Pr(X_{i}\leq x_{i}),\qquad (x_{1},\ldots ,x_{n})\in \mathbb {R} ^{n},}

hence

{\displaystyle \Pr(X_{1}\leq x_{1},\ldots ,X_{n}\leq x_{n})\leq \min _{i\in \{1,\ldots ,n\}}\Pr(X_{i}\leq x_{i}),\qquad (x_{1},\ldots ,x_{n})\in \mathbb {R} ^{n},}

with equality everywhere if and only if (X1, . . . , Xn) is comonotonic.
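A quick numerical sanity check of the bound (a sketch using uniform marginals chosen for illustration): an independent pair falls strictly below the bound at interior points, while a comonotonic pair attains it.

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.uniform(size=200_000)
v = rng.uniform(size=200_000)
a, b = 0.3, 0.7  # evaluation point (x1, x2)

for label, (x, y) in [("independent", (u, v)), ("comonotonic", (u, u))]:
    joint = np.mean((x <= a) & (y <= b))           # empirical joint CDF
    bound = min(np.mean(x <= a), np.mean(y <= b))  # min of marginal CDFs
    print(f"{label}: joint={joint:.3f}  bound={bound:.3f}")
# independent: joint is about 0.21, strictly below the bound of about 0.30
# comonotonic: joint equals the bound
```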

Upper bound for the covariance

Let (X, Y) be a bivariate random vector such that the expected values of X, Y and the product XY exist. Let (X*, Y*) be a comonotonic bivariate random vector with the same one-dimensional marginal distributions as (X, Y). Then it follows from Höffding's formula for the covariance and the upper Fréchet–Hoeffding bound that

{\displaystyle \operatorname {Cov} (X,Y)\leq \operatorname {Cov} (X^{*},Y^{*})}

and, correspondingly,

{\displaystyle \operatorname {E} [XY]\leq \operatorname {E} [X^{*}Y^{*}]}

with equality if and only if (X, Y) is comonotonic.

Note that this result generalizes the rearrangement inequality and Chebyshev's sum inequality.
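The connection to the rearrangement inequality suggests a numerical check: sorting two samples into matched order keeps the empirical marginals fixed but makes the coupling comonotonic, which can only increase the covariance. A minimal sketch with hypothetical marginals:

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.exponential(size=100_000)  # hypothetical marginal for X
y = rng.lognormal(size=100_000)    # hypothetical marginal for Y, drawn independently

cov_xy = np.cov(x, y)[0, 1]                      # covariance under this coupling
cov_star = np.cov(np.sort(x), np.sort(y))[0, 1]  # comonotonic rearrangement (X*, Y*)
print(cov_xy, cov_star)  # cov_xy is near 0; cov_star is strictly larger
```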

Notes

  1. (X*, Y*) always exists: take, for example, (F_X^{-1}(U), F_Y^{-1}(U)); see the section Properties above.

Citations

  1. (Sriboonchitta et al. 2010, pp. 149–152)
  2. (Kaas et al. 2002, Theorem 6)
  3. (Kaas et al. 2002, Theorem 7)
  4. (McNeil, Frey & Embrechts 2005, Proposition 6.15)
  5. (Kaas et al. 2002, Definition 1)
  6. See (Nelsen 2006, Definition 2.5.1) for the case n = 2
  7. See (Nelsen 2006, Theorem 2.5.4) for the case n = 2
  8. (McNeil, Frey & Embrechts 2005, Proposition A.3 (properties of the generalized inverse))
  9. (McNeil, Frey & Embrechts 2005, Proposition 5.16 and its proof)
  10. (McNeil, Frey & Embrechts 2005, Lemma 5.24)
  11. (McNeil, Frey & Embrechts 2005, Theorem 5.25(2))
