
Information theory and measure theory


This article discusses how information theory (a branch of mathematics studying the transmission, processing and storage of information) is related to measure theory (a branch of mathematics concerned with integration and probability).

Measures in information theory

Many of the concepts in information theory have separate definitions and formulas for continuous and discrete cases. For example, entropy H(X) is usually defined for discrete random variables, whereas for continuous random variables the related concept of differential entropy, written h(X), is used (see Cover and Thomas, 2006, chapter 8). Both these concepts are mathematical expectations, but the expectation is defined with an integral for the continuous case and a sum for the discrete case.

These separate definitions can be more closely related in terms of measure theory. For discrete random variables, probability mass functions can be considered density functions with respect to the counting measure. Thinking of both the integral and the sum as integration on a measure space allows for a unified treatment.

Consider the formula for the differential entropy of a continuous random variable X with range ℝ and probability density function f(x):

h(X) = -\int_{\mathbb{R}} f(x)\log f(x)\,dx.

This can be interpreted as the following Lebesgue integral:

h(X) = -\int_{\mathbb{R}} f(x)\log f(x)\,d\mu(x),

where μ is the Lebesgue measure.

If instead X is discrete, with range Ω a finite set, f is a probability mass function on Ω, and ν is the counting measure on Ω, we can write:

\mathrm{H}(X) = -\sum_{x\in\Omega} f(x)\log f(x) = -\int_{\Omega} f(x)\log f(x)\,d\nu(x).

The integral expression and the general concept are identical to those in the continuous case; the only difference is the measure used. In both cases the probability density function f is the Radon–Nikodym derivative of the probability measure with respect to the measure against which the integral is taken.
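
To make the unified treatment concrete, here is a minimal Python sketch (not part of the article; the helper name entropy_wrt_measure, the example distributions, and the grid approximation of the Lebesgue measure are illustrative assumptions). It evaluates -∫ f log f dμ once and recovers the discrete entropy H(X) when μ is the counting measure, and approximately the differential entropy h(X) when μ is the Lebesgue measure:

```python
import math

def entropy_wrt_measure(density, points, weights):
    """Approximate -∫ f log f dμ, with the measure μ represented by sample
    points and their μ-weights: weight 1 per point for a counting measure,
    weight dx per grid point for a Riemann-sum approximation of Lebesgue measure."""
    total = 0.0
    for x, w in zip(points, weights):
        fx = density(x)
        if fx > 0:
            total -= w * fx * math.log(fx)
    return total

# Discrete case: fair six-sided die with the counting measure.
pmf = lambda x: 1 / 6
H = entropy_wrt_measure(pmf, points=range(6), weights=[1] * 6)
print(H, math.log(6))                    # both ≈ 1.7918 nats

# Continuous case: standard normal density with (a discretised) Lebesgue measure.
dx = 0.001
grid = [i * dx for i in range(-8000, 8001)]          # covers [-8, 8]
pdf = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
h = entropy_wrt_measure(pdf, points=grid, weights=[dx] * len(grid))
print(h, 0.5 * math.log(2 * math.pi * math.e))       # both ≈ 1.4189 nats
```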

If P is the probability measure induced by X, then the integral can also be taken directly with respect to P:

h(X) = -\int_{\Omega} \log\frac{\mathrm{d}P}{\mathrm{d}\mu}\,dP.
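
Since this last expression is simply the expectation of -log(dP/dμ) under P, it can also be estimated by sampling from P, with no explicit integration. A rough sketch, assuming a normal distribution and only the Python standard library (the sample size and seed are arbitrary):

```python
import math
import random

random.seed(0)
sigma = 2.0

# dP/dμ for P = N(0, σ²) and μ the Lebesgue measure is just the ordinary density.
def density(x):
    return math.exp(-x * x / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# h(X) = -∫ log(dP/dμ) dP = E_P[-log f(X)], estimated by Monte Carlo.
samples = [random.gauss(0, sigma) for _ in range(200_000)]
h_estimate = -sum(math.log(density(x)) for x in samples) / len(samples)

print(h_estimate)                                         # ≈ 2.11 nats
print(0.5 * math.log(2 * math.pi * math.e * sigma ** 2))  # closed form ≈ 2.112 nats
```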

If instead of the underlying measure μ we take another probability measure Q, we are led to the Kullback–Leibler divergence: let P and Q be probability measures over the same space. Then if P is absolutely continuous with respect to Q, written P ≪ Q, the Radon–Nikodym derivative dP/dQ exists and the Kullback–Leibler divergence can be expressed in its full generality:

D_{\operatorname{KL}}(P\|Q) = \int_{\operatorname{supp} P} \frac{\mathrm{d}P}{\mathrm{d}Q}\log\frac{\mathrm{d}P}{\mathrm{d}Q}\,dQ = \int_{\operatorname{supp} P} \log\frac{\mathrm{d}P}{\mathrm{d}Q}\,dP,

where the integral runs over the support of P. Note that we have dropped the negative sign: the Kullback–Leibler divergence is always non-negative due to Gibbs' inequality.
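
As a sanity check on the two equivalent forms of this integral, the following sketch (the two probability mass functions are arbitrary illustrative choices) computes D_KL(P‖Q) for discrete distributions, where the Radon–Nikodym derivative dP/dQ reduces to the ratio of the mass functions:

```python
import math

# Two pmfs on {0, 1, 2}; P ≪ Q because q(x) > 0 wherever p(x) > 0.
p = {0: 0.5, 1: 0.4, 2: 0.1}
q = {0: 0.2, 1: 0.3, 2: 0.5}

# For discrete distributions dP/dQ is the ratio of the mass functions,
# restricted to the support of P.
rn = {x: p[x] / q[x] for x in p if p[x] > 0}

# Form 1: integrate (dP/dQ) log(dP/dQ) against Q.
dkl_wrt_q = sum(q[x] * rn[x] * math.log(rn[x]) for x in rn)
# Form 2: integrate log(dP/dQ) against P.
dkl_wrt_p = sum(p[x] * math.log(rn[x]) for x in rn)

# Both forms agree (≈ 0.412 nats) and are non-negative, as Gibbs' inequality requires.
print(dkl_wrt_q, dkl_wrt_p)
```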

Entropy as a "measure"

[Figure: Venn diagram for various information measures associated with correlated variables X and Y. The area contained by both circles is the joint entropy H(X,Y). The circle on the left (red and cyan) is the individual entropy H(X), with the red being the conditional entropy H(X|Y). The circle on the right (blue and cyan) is H(Y), with the blue being H(Y|X). The cyan is the mutual information I(X;Y).]
[Figure: Venn diagram of information-theoretic measures for three variables x, y, and z. Each circle represents an individual entropy: H(x) is the lower left circle, H(y) the lower right, and H(z) the upper circle. The intersection of any two circles represents the mutual information of the two associated variables (e.g. I(x;z) is yellow and gray). The union of any two circles is the joint entropy of the two associated variables (e.g. H(x,y) is everything but green). The joint entropy H(x,y,z) of all three variables is the union of all three circles. It is partitioned into seven pieces: red, blue, and green are the conditional entropies H(x|y,z), H(y|x,z), H(z|x,y) respectively; yellow, magenta, and cyan are the conditional mutual informations I(x;z|y), I(y;z|x), and I(x;y|z) respectively; and gray is the multivariate mutual information I(x;y;z). The multivariate mutual information is the only one of them that may be negative.]

There is an analogy between Shannon's basic "measures" of the information content of random variables and a measure over sets. Namely, the joint entropy, conditional entropy, and mutual information can be considered as the measures of a set union, set difference, and set intersection, respectively (Reza, pp. 106–108).

If we associate abstract sets X̃ and Ỹ with arbitrary discrete random variables X and Y, somehow representing the information borne by X and Y, respectively, such that:

  • μ(X̃ ∩ Ỹ) = 0 whenever X and Y are unconditionally independent, and
  • X̃ = Ỹ whenever X and Y are such that either one is completely determined by the other (i.e. by a bijection);

where μ is a signed measure over these sets, and we set:

\begin{aligned}
\mathrm{H}(X) &= \mu(\tilde{X}), \\
\mathrm{H}(Y) &= \mu(\tilde{Y}), \\
\mathrm{H}(X,Y) &= \mu(\tilde{X} \cup \tilde{Y}), \\
\mathrm{H}(X \mid Y) &= \mu(\tilde{X} \setminus \tilde{Y}), \\
\operatorname{I}(X;Y) &= \mu(\tilde{X} \cap \tilde{Y});
\end{aligned}

we find that Shannon's "measure" of information content satisfies all the postulates and basic properties of a formal signed measure over sets, as commonly illustrated in an information diagram. This allows the sum of two measures to be written:

\mu(A) + \mu(B) = \mu(A \cup B) + \mu(A \cap B)

and the analog of Bayes' theorem (μ(A) + μ(B ∖ A) = μ(B) + μ(A ∖ B)) allows the difference of two measures to be written:

\mu(A) - \mu(B) = \mu(A \setminus B) - \mu(B \setminus A)

This can be a handy mnemonic device in some situations, e.g.

\begin{aligned}
\mathrm{H}(X,Y) &= \mathrm{H}(X) + \mathrm{H}(Y \mid X) & \mu(\tilde{X} \cup \tilde{Y}) &= \mu(\tilde{X}) + \mu(\tilde{Y} \setminus \tilde{X}) \\
\operatorname{I}(X;Y) &= \mathrm{H}(X) - \mathrm{H}(X \mid Y) & \mu(\tilde{X} \cap \tilde{Y}) &= \mu(\tilde{X}) - \mu(\tilde{X} \setminus \tilde{Y})
\end{aligned}
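
These correspondences can be verified numerically for any concrete joint distribution. The sketch below (the joint pmf is an arbitrary illustrative choice, not from the article) computes the entropies and the mutual information from a small table and checks the two rows above together with the inclusion–exclusion identity:

```python
import math

# Joint pmf of (X, Y); an arbitrary illustrative example.
joint = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

def H(pmf):
    """Shannon entropy (in nats) of a pmf given as {outcome: probability}."""
    return -sum(p * math.log(p) for p in pmf.values() if p > 0)

# Marginals obtained by summing the joint pmf.
px, py = {}, {}
for (x, y), p in joint.items():
    px[x] = px.get(x, 0) + p
    py[y] = py.get(y, 0) + p

Hx, Hy, Hxy = H(px), H(py), H(joint)
# Conditional entropies and mutual information computed directly from the table.
Hy_given_x = -sum(p * math.log(p / px[x]) for (x, y), p in joint.items())
Hx_given_y = -sum(p * math.log(p / py[y]) for (x, y), p in joint.items())
Ixy = sum(p * math.log(p / (px[x] * py[y])) for (x, y), p in joint.items())

# μ(X̃ ∪ Ỹ) = μ(X̃) + μ(Ỹ ∖ X̃):  H(X,Y) = H(X) + H(Y|X)
print(abs(Hxy - (Hx + Hy_given_x)) < 1e-12)
# μ(X̃ ∩ Ỹ) = μ(X̃) − μ(X̃ ∖ Ỹ):  I(X;Y) = H(X) − H(X|Y)
print(abs(Ixy - (Hx - Hx_given_y)) < 1e-12)
# Inclusion–exclusion: μ(X̃) + μ(Ỹ) = μ(X̃ ∪ Ỹ) + μ(X̃ ∩ Ỹ)
print(abs((Hx + Hy) - (Hxy + Ixy)) < 1e-12)
```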

Note that measures (expected values of the negative logarithm) of true probabilities are called "entropy" and generally represented by the letter H, while other measures are often referred to as "information" or "correlation" and generally represented by the letter I. For notational simplicity, the letter I is sometimes used for all measures.

Multivariate mutual information

Main article: Multivariate mutual information

Certain extensions to the definitions of Shannon's basic measures of information are necessary to deal with the σ-algebra generated by the sets that would be associated with three or more arbitrary random variables. (See Reza, pp. 106–108 for an informal but rather complete discussion.) Namely, H(X, Y, Z, ⋯) needs to be defined in the obvious way as the entropy of a joint distribution, and a multivariate mutual information I(X; Y; Z; ⋯) defined in a suitable manner so that we can set:

\begin{aligned}
\mathrm{H}(X,Y,Z,\cdots) &= \mu(\tilde{X} \cup \tilde{Y} \cup \tilde{Z} \cup \cdots), \\
\operatorname{I}(X;Y;Z;\cdots) &= \mu(\tilde{X} \cap \tilde{Y} \cap \tilde{Z} \cap \cdots);
\end{aligned}

in order to define the (signed) measure over the whole σ-algebra. There is no single universally accepted definition for the multivariate mutual information, but the one that corresponds here to the measure of a set intersection is due to Fano (1966, pp. 57–59). The definition is recursive. As a base case, the mutual information of a single random variable is defined to be its entropy: I(X) = H(X). Then for n ≥ 2 we set

\operatorname{I}(X_1;\cdots;X_n) = \operatorname{I}(X_1;\cdots;X_{n-1}) - \operatorname{I}(X_1;\cdots;X_{n-1} \mid X_n),

where the conditional mutual information is defined as

\operatorname{I}(X_1;\cdots;X_{n-1} \mid X_n) = \mathbb{E}_{X_n}\big(\operatorname{I}(X_1;\cdots;X_{n-1}) \mid X_n\big).

The first step in the recursion yields Shannon's definition I(X_1; X_2) = H(X_1) − H(X_1 | X_2). The multivariate mutual information (the same as interaction information but for a change in sign) of three or more random variables can be negative as well as positive: let X and Y be two independent fair coin flips, and let Z be their exclusive or. Then I(X;Y;Z) = −1 bit.
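
This value can be confirmed by direct enumeration. A small sketch (assuming base-2 logarithms and the standard identity I(X;Y|Z) = H(X,Z) + H(Y,Z) − H(Z) − H(X,Y,Z); the helper names are illustrative) builds the joint distribution of two fair coins and their exclusive or and evaluates the recursion I(X;Y;Z) = I(X;Y) − I(X;Y|Z):

```python
import math
from itertools import product

def H(pmf):
    """Entropy in bits of a pmf given as {outcome: probability}."""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def marginal(joint, idx):
    """Marginal pmf of the coordinates listed in idx."""
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in idx)
        out[key] = out.get(key, 0) + p
    return out

# X, Y independent fair coins, Z = X XOR Y: four equally likely outcomes.
joint = {(x, y, x ^ y): 0.25 for x, y in product((0, 1), repeat=2)}

Hx, Hy, Hz = (H(marginal(joint, (i,))) for i in range(3))
Hxy, Hxz, Hyz = (H(marginal(joint, pair)) for pair in ((0, 1), (0, 2), (1, 2)))
Hxyz = H(joint)

Ixy = Hx + Hy - Hxy                    # I(X;Y) = 0 bits (X and Y are independent)
Ixy_given_z = Hxz + Hyz - Hz - Hxyz    # I(X;Y|Z) = 1 bit
print(Ixy - Ixy_given_z)               # I(X;Y;Z) = I(X;Y) - I(X;Y|Z) = -1.0 bit
```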

Many other variations are possible for three or more random variables: for example, I(X,Y;Z) is the mutual information of the joint distribution of X and Y relative to Z, and can be interpreted as μ((X̃ ∪ Ỹ) ∩ Z̃). Many more complicated expressions can be built this way and still have meaning, e.g. I(X,Y;Z | W) or H(X,Z | W,Y).
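
For instance, with the same XOR distribution as above, the set interpretation gives I(X,Y;Z) = μ((X̃ ∪ Ỹ) ∩ Z̃) = H(X,Y) + H(Z) − H(X,Y,Z). A short self-contained sketch (again an illustrative computation, not from the article):

```python
import math

# Joint pmf of (X, Y, Z) with Z = X XOR Y, as in the previous sketch.
joint = {(0, 0, 0): 0.25, (0, 1, 1): 0.25, (1, 0, 1): 0.25, (1, 1, 0): 0.25}

H = lambda pmf: -sum(p * math.log2(p) for p in pmf.values() if p > 0)

Hxy = H({(x, y): p for (x, y, z), p in joint.items()})  # 2 bits: (X, Y) is uniform on 4 outcomes
Hz = H({0: 0.5, 1: 0.5})                                # 1 bit: Z is itself a fair bit
Hxyz = H(joint)                                         # 2 bits

# I(X,Y;Z) = μ((X̃ ∪ Ỹ) ∩ Z̃) = H(X,Y) + H(Z) - H(X,Y,Z)
print(Hxy + Hz - Hxyz)   # 1.0 bit: (X, Y) together determine Z completely
```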

References

Cover, Thomas M.; Thomas, Joy A. (2006). Elements of Information Theory (2nd ed.). Wiley-Interscience.
Fano, Robert M. Transmission of Information: A Statistical Theory of Communications. MIT Press.
Reza, Fazlollah M. An Introduction to Information Theory. McGraw-Hill (reprinted by Dover Publications).
