
Quantities of information

Article snapshot taken from Wikipedia, licensed under the Creative Commons Attribution-ShareAlike license.
A misleading information diagram showing additive and subtractive relationships among Shannon's basic quantities of information for correlated variables X and Y. The area contained by both circles is the joint entropy H(X,Y). The circle on the left (red and violet) is the individual entropy H(X), with the red being the conditional entropy H(X|Y). The circle on the right (blue and violet) is H(Y), with the blue being H(Y|X). The violet is the mutual information I(X;Y).

The mathematical theory of information is based on probability theory and statistics, and measures information with several quantities of information. The choice of logarithmic base in the following formulae determines the unit of information entropy that is used. The most common unit of information is the bit, or more correctly the shannon, based on the binary logarithm. Although "bit" is more frequently used in place of "shannon", its name is not distinguished from the bit as used in data processing to refer to a binary value or stream regardless of its entropy (information content). Other units include the nat, based on the natural logarithm, and the hartley, based on the base-10 or common logarithm.

In what follows, an expression of the form p log p is considered by convention to be equal to zero whenever p is zero. This is justified because the limit of p log p as p approaches zero from above is 0 for any logarithmic base.
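To make the convention concrete, here is a minimal Python sketch (not part of the original article; the helper name plogp is made up) that treats the p = 0 term as contributing nothing:

    import math

    def plogp(p: float, base: float = 2.0) -> float:
        """Return p * log(p), using the convention that 0 * log(0) = 0."""
        if p == 0.0:
            return 0.0
        return p * math.log(p, base)

    print(plogp(0.0))    # 0.0 by convention
    print(plogp(1e-12))  # roughly -4e-11, already close to 0, matching the limit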

Self-information

Shannon derived a measure of information content called the self-information or "surprisal" of a message m:

\operatorname{I}(m) = \log\left(\frac{1}{p(m)}\right) = -\log(p(m))

where p(m) = Pr(M = m) is the probability that message m is chosen from all possible choices in the message space M. The base of the logarithm only affects a scaling factor and, consequently, the units in which the measured information content is expressed. If the logarithm is base 2, the measure of information is expressed in units of shannons or, more often, simply "bits" (a bit in other contexts is rather defined as a "binary digit", whose average information content is at most 1 shannon).

Information from a source is gained by a recipient only if the recipient did not already have that information to begin with. Messages that convey information about a certain (P = 1) event, or one which is known with certainty (for instance, through a back-channel), provide no information, as the above equation indicates. Infrequently occurring messages contain more information than more frequently occurring messages.

It can also be shown that a compound message of two (or more) unrelated messages has a quantity of information that is the sum of the measures of information of each message individually. That can be derived from this definition by considering a compound message m&n providing information regarding the values of two random variables M and N, using a message which is the concatenation of the elementary messages m and n, whose information contents are given by I(m) and I(n) respectively. If the messages m and n each depend only on M and N, and the processes M and N are independent, then since P(m&n) = P(m)P(n) (the definition of statistical independence) it follows from the above definition that I(m&n) = I(m) + I(n).

An example: The weather forecast broadcast is: "Tonight's forecast: Dark. Continued darkness until widely scattered light in the morning." This message contains almost no information. However, a forecast of a snowstorm would certainly contain information, since snowstorms do not happen every evening. There would be an even greater amount of information in an accurate forecast of snow for a warm location, such as Miami. The amount of information in a forecast of snow for a location where it never snows (impossible event) is the highest (infinity).
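The following Python sketch illustrates these points numerically. The probabilities are made-up values chosen only for illustration (not real weather statistics), and the function name self_information is an assumption, not from the article:

    import math

    def self_information(p: float, base: float = 2.0) -> float:
        """Self-information I(m) = log(1/p(m)); shannons ("bits") for base 2."""
        return math.log(1.0 / p, base)

    # Made-up probabilities, for illustration only:
    p_dark_tonight = 1.0     # a certain event carries no information
    p_snow_boston = 0.1      # a plausible winter event
    p_snow_miami = 0.0001    # a very unlikely event is highly informative

    print(self_information(p_dark_tonight))  # 0.0 bits
    print(self_information(p_snow_boston))   # about 3.32 bits
    print(self_information(p_snow_miami))    # about 13.29 bits

    # Additivity for independent messages: I(m & n) = I(m) + I(n)
    p_m, p_n = 0.5, 0.25
    assert math.isclose(self_information(p_m * p_n),
                        self_information(p_m) + self_information(p_n))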

Entropy

The entropy of a discrete message space M is a measure of the amount of uncertainty one has about which message will be chosen. It is defined as the average self-information of a message m from that message space:

\mathrm{H}(M) = \mathbb{E}\left[\operatorname{I}(M)\right] = \sum_{m\in M} p(m)\,\operatorname{I}(m) = -\sum_{m\in M} p(m)\log p(m)

where

E[·] denotes the expected value operation.

An important property of entropy is that it is maximized when all the messages in the message space are equiprobable, i.e. p(m) = 1/|M|. In this case H(M) = log |M|.

Sometimes the function H is expressed in terms of the probabilities of the distribution:

\mathrm{H}(p_{1},p_{2},\ldots ,p_{k}) = -\sum_{i=1}^{k} p_{i}\log p_{i}, \quad \text{where each } p_{i}\geq 0 \text{ and } \sum_{i=1}^{k} p_{i}=1

An important special case of this is the binary entropy function:

\mathrm{H}_{\text{b}}(p) = \mathrm{H}(p,1-p) = -p\log p - (1-p)\log(1-p)
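As an illustration (a sketch not taken from the article; the function names are made up), a short Python snippet can compute the entropy of a finite distribution and the binary entropy function, and confirm that the uniform distribution maximizes entropy:

    import math

    def entropy(probs, base: float = 2.0) -> float:
        """Shannon entropy H = -sum p_i log p_i, with 0*log(0) treated as 0."""
        return -sum(p * math.log(p, base) for p in probs if p > 0.0)

    def binary_entropy(p: float, base: float = 2.0) -> float:
        """Binary entropy function H_b(p) = H(p, 1 - p)."""
        return entropy([p, 1.0 - p], base)

    # The uniform distribution maximizes entropy: H = log2(4) = 2 shannons.
    print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
    print(entropy([0.7, 0.1, 0.1, 0.1]))      # about 1.36, less than the maximum
    print(binary_entropy(0.5))                # 1.0, the maximum of H_b
    print(binary_entropy(0.1))                # about 0.47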

Joint entropy

The joint entropy of two discrete random variables X and Y is defined as the entropy of the joint distribution of X and Y:

\mathrm{H}(X,Y) = \mathbb{E}_{X,Y}\left[-\log p(x,y)\right] = -\sum_{x,y} p(x,y)\log p(x,y)

If X {\displaystyle X} and Y {\displaystyle Y} are independent, then the joint entropy is simply the sum of their individual entropies.

(Note: The joint entropy should not be confused with the cross entropy, despite similar notations.)
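As a small check of the independence property above, here is a hedged Python sketch with a made-up joint distribution (the names p_x, p_y, p_xy are illustrative assumptions, not from the article):

    import math

    def entropy(probs, base: float = 2.0) -> float:
        """Shannon entropy of an iterable of probabilities (0*log 0 treated as 0)."""
        return -sum(p * math.log(p, base) for p in probs if p > 0.0)

    # Made-up marginals for two independent variables X and Y:
    p_x = {"a": 0.5, "b": 0.5}
    p_y = {"0": 0.25, "1": 0.75}
    p_xy = {(x, y): p_x[x] * p_y[y] for x in p_x for y in p_y}

    joint = entropy(p_xy.values())
    sum_of_marginals = entropy(p_x.values()) + entropy(p_y.values())

    # For independent variables, H(X,Y) = H(X) + H(Y):
    assert math.isclose(joint, sum_of_marginals)
    print(joint)  # about 1.81 shannons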

Conditional entropy (equivocation)

Given a particular value of a random variable Y, the conditional entropy of X given Y = y is defined as:

\mathrm{H}(X|y) = \mathbb{E}_{X|Y=y}\left[-\log p(x|y)\right] = -\sum_{x\in X} p(x|y)\log p(x|y)

where p(x|y) = p(x,y) / p(y) is the conditional probability of x given y.

The conditional entropy of X given Y, also called the equivocation of X about Y, is then given by:

\mathrm{H}(X|Y) = \mathbb{E}_{Y}\left[\mathrm{H}(X|y)\right] = -\sum_{y\in Y} p(y)\sum_{x\in X} p(x|y)\log p(x|y) = \sum_{x,y} p(x,y)\log\frac{p(y)}{p(x,y)}.

This uses the conditional expectation from probability theory.

A basic property of the conditional entropy is that:

\mathrm{H}(X|Y) = \mathrm{H}(X,Y) - \mathrm{H}(Y).
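A brief numerical check of this chain-rule property, using a made-up joint distribution (the code and names below are illustrative assumptions, not the article's):

    import math

    def entropy(probs, base: float = 2.0) -> float:
        """Shannon entropy, with 0*log(0) treated as 0."""
        return -sum(p * math.log(p, base) for p in probs if p > 0.0)

    # A made-up joint distribution p(x, y) over correlated X and Y:
    p_xy = {("rain", "wet"): 0.30, ("rain", "dry"): 0.10,
            ("sun", "wet"): 0.05, ("sun", "dry"): 0.55}

    p_y = {}
    for (x, y), p in p_xy.items():
        p_y[y] = p_y.get(y, 0.0) + p

    # Equivocation computed directly as H(X|Y) = sum_y p(y) * H(X | Y = y):
    h_x_given_y = 0.0
    for y, py in p_y.items():
        conditional = [p / py for (x, y2), p in p_xy.items() if y2 == y]
        h_x_given_y += py * entropy(conditional)

    # Basic property: H(X|Y) = H(X,Y) - H(Y)
    assert math.isclose(h_x_given_y, entropy(p_xy.values()) - entropy(p_y.values()))
    print(h_x_given_y)  # about 0.61 shannons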

Kullback–Leibler divergence (information gain)

The Kullback–Leibler divergence (or information divergence, information gain, or relative entropy) is a way of comparing two distributions: a "true" probability distribution p and an arbitrary probability distribution q. If we compress data in a manner that assumes q is the distribution underlying some data when, in reality, p is the correct distribution, the Kullback–Leibler divergence is the average number of additional bits per datum necessary for compression, or, mathematically,

D_{\mathrm{KL}}\bigl(p(X)\,\|\,q(X)\bigr) = \sum_{x\in X} p(x)\log\frac{p(x)}{q(x)}.

It is in some sense the "distance" from q to p, although it is not a true metric because it is not symmetric.
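A minimal sketch of the formula above, with made-up distributions, showing both the extra-bits interpretation and the asymmetry (the helper name kl_divergence is an assumption, not from the article):

    import math

    def kl_divergence(p, q, base: float = 2.0) -> float:
        """D_KL(p || q) = sum_x p(x) log(p(x)/q(x)); assumes q(x) > 0 wherever p(x) > 0."""
        return sum(px * math.log(px / qx, base) for px, qx in zip(p, q) if px > 0.0)

    # A "true" distribution p and an assumed coding distribution q (made-up numbers):
    p = [0.5, 0.25, 0.25]
    q = [1 / 3, 1 / 3, 1 / 3]

    print(kl_divergence(p, q))  # about 0.085 extra bits per datum when coding with q
    print(kl_divergence(q, p))  # about 0.082 bits: not symmetric, hence not a true metric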

Mutual information (transinformation)

It turns out that one of the most useful and important measures of information is the mutual information, or transinformation. This is a measure of how much information can be obtained about one random variable by observing another. The mutual information of X relative to Y (which conceptually represents the average amount of information about X that can be gained by observing Y) is given by:

\operatorname{I}(X;Y) = \sum_{y\in Y} p(y)\sum_{x\in X} p(x|y)\log\frac{p(x|y)}{p(x)} = \sum_{x,y} p(x,y)\log\frac{p(x,y)}{p(x)\,p(y)}.

A basic property of the mutual information is that:

\operatorname{I}(X;Y) = \mathrm{H}(X) - \mathrm{H}(X|Y).

That is, knowing Y, we can save an average of I(X;Y) bits in encoding X compared to not knowing Y. Mutual information is symmetric:

\operatorname{I}(X;Y) = \operatorname{I}(Y;X) = \mathrm{H}(X) + \mathrm{H}(Y) - \mathrm{H}(X,Y).


Mutual information can be expressed as the average Kullback–Leibler divergence (information gain) of the posterior probability distribution of X given the value of Y to the prior distribution on X:

\operatorname{I}(X;Y) = \mathbb{E}_{p(y)}\left[D_{\mathrm{KL}}\bigl(p(X|Y=y)\,\|\,p(X)\bigr)\right].

In other words, this is a measure of how much, on the average, the probability distribution on X will change if we are given the value of Y. This is often recalculated as the divergence from the product of the marginal distributions to the actual joint distribution:

\operatorname{I}(X;Y) = D_{\mathrm{KL}}\bigl(p(X,Y)\,\|\,p(X)\,p(Y)\bigr).

Mutual information is closely related to the log-likelihood ratio test in the context of contingency tables and the multinomial distribution, and to Pearson's χ² test: mutual information can be considered a statistic for assessing independence between a pair of variables, and has a well-specified asymptotic distribution.
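To tie the identities above together, here is a hedged Python sketch with a made-up joint distribution; it computes I(X;Y) directly as the divergence of the joint from the product of the marginals and checks it against the entropy identity. The closing comment on the G-statistic is a standard relation stated as an aside, not a claim taken from the article:

    import math

    def entropy(probs, base: float = 2.0) -> float:
        """Shannon entropy, with 0*log(0) treated as 0."""
        return -sum(p * math.log(p, base) for p in probs if p > 0.0)

    # A made-up joint distribution of correlated binary variables X and Y:
    p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
    p_x = {0: 0.5, 1: 0.5}
    p_y = {0: 0.5, 1: 0.5}

    # I(X;Y) as the divergence of the joint from the product of the marginals:
    mi = sum(p * math.log(p / (p_x[x] * p_y[y]), 2)
             for (x, y), p in p_xy.items() if p > 0.0)

    # Check the identity I(X;Y) = H(X) + H(Y) - H(X,Y):
    assert math.isclose(mi, entropy(p_x.values()) + entropy(p_y.values())
                        - entropy(p_xy.values()))
    print(mi)  # about 0.278 shannons

    # For a contingency table of N observations with these empirical frequencies,
    # the G-statistic of the log-likelihood ratio test equals 2 * N * I(X;Y) in nats.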

Differential entropy

Main article: Differential entropy

The basic measures of discrete entropy have been extended by analogy to continuous spaces by replacing sums with integrals and probability mass functions with probability density functions. Although, in both cases, mutual information expresses the number of bits of information common to the two sources in question, the analogy does not imply identical properties; for example, differential entropy may be negative.

The differential analogues of entropy, joint entropy, conditional entropy, and mutual information are defined as follows:

h(X) = -\int_{X} f(x)\log f(x)\,dx
h(X,Y) = -\int_{Y}\int_{X} f(x,y)\log f(x,y)\,dx\,dy
h(X|y) = -\int_{X} f(x|y)\log f(x|y)\,dx
h(X|Y) = \int_{Y}\int_{X} f(x,y)\log\frac{f(y)}{f(x,y)}\,dx\,dy
\operatorname{I}(X;Y) = \int_{Y}\int_{X} f(x,y)\log\frac{f(x,y)}{f(x)\,f(y)}\,dx\,dy

where f(x,y) is the joint density function, f(x) and f(y) are the marginal distributions, and f(x|y) is the conditional distribution.
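As an illustration of these definitions and of the fact that differential entropy can be negative, the sketch below numerically approximates h(X) for a Gaussian density and compares it with the known closed form ½ ln(2πeσ²); the integration bounds and step count are arbitrary assumptions chosen for this example:

    import math

    def gaussian_pdf(x: float, sigma: float) -> float:
        return math.exp(-x * x / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))

    def differential_entropy(sigma: float, lo: float = -50.0, hi: float = 50.0,
                             n: int = 200_000) -> float:
        """Midpoint-rule approximation of h(X) = -integral f(x) ln f(x) dx, in nats."""
        dx = (hi - lo) / n
        total = 0.0
        for i in range(n):
            f = gaussian_pdf(lo + (i + 0.5) * dx, sigma)
            if f > 0.0:
                total -= f * math.log(f) * dx
        return total

    for sigma in (1.0, 0.1):
        closed_form = 0.5 * math.log(2.0 * math.pi * math.e * sigma ** 2)
        print(sigma, differential_entropy(sigma), closed_form)
    # For sigma = 0.1 both values are about -0.88 nats: differential entropy,
    # unlike discrete entropy, can be negative.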
