Conditional dependence

See also: Conditional independence
[Figure: A Bayesian network illustrating conditional dependence]

In probability theory, conditional dependence is a relationship between two or more events that are dependent when a third event occurs. For example, if $A$ and $B$ are two events that individually increase the probability of a third event $C$, and do not directly affect each other, then initially (when it has not been observed whether or not the event $C$ occurs)
$$\operatorname{P}(A \mid B) = \operatorname{P}(A) \quad\text{ and }\quad \operatorname{P}(B \mid A) = \operatorname{P}(B)$$
($A$ and $B$ are independent).

But suppose that now $C$ is observed to occur. If event $B$ occurs, then the probability of occurrence of the event $A$ will decrease, because its positive relation to $C$ is less necessary as an explanation for the occurrence of $C$ (similarly, event $A$ occurring will decrease the probability of occurrence of $B$). Hence, the two events $A$ and $B$ are now conditionally negatively dependent on each other, because the probability of occurrence of each is negatively dependent on whether the other occurs. We have
$$\operatorname{P}(A \mid C \text{ and } B) < \operatorname{P}(A \mid C).$$

Conditional dependence of $A$ and $B$ given $C$ is the logical negation of conditional independence $((A \perp\!\!\!\perp B) \mid C)$. In conditional independence, two events (which may be dependent or not) become independent given the occurrence of a third event.
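In terms of probabilities, the contrast can be written out explicitly (a standard formulation of the definitions above): $A$ and $B$ are conditionally independent given $C$ exactly when
$$\operatorname{P}(A \text{ and } B \mid C) = \operatorname{P}(A \mid C)\,\operatorname{P}(B \mid C),$$
and they are conditionally dependent given $C$ when this equality fails; equivalently (provided $\operatorname{P}(B \text{ and } C) > 0$), when
$$\operatorname{P}(A \mid C \text{ and } B) \neq \operatorname{P}(A \mid C).$$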

Example

In essence, probability is influenced by a person's information about the possible occurrence of an event. For example, let the event $A$ be 'I have a new phone'; event $B$ be 'I have a new watch'; and event $C$ be 'I am happy'; and suppose that having either a new phone or a new watch increases the probability of my being happy. Let us assume that the event $C$ has occurred, meaning 'I am happy'. Now if another person sees my new watch, they will reason that my likelihood of being happy was increased by my new watch, so there is less need to attribute my happiness to a new phone.

To make the example more numerically specific, suppose that there are four possible states $\Omega = \{s_1, s_2, s_3, s_4\}$, given in the middle four columns of the following table, in which the occurrence of event $A$ is signified by a $1$ in row $A$ and its non-occurrence is signified by a $0$, and likewise for $B$ and $C$. That is, $A = \{s_2, s_4\}$, $B = \{s_3, s_4\}$, and $C = \{s_2, s_3, s_4\}$. The probability of $s_i$ is $1/4$ for every $i$.

Event | P(s_1) = 1/4 | P(s_2) = 1/4 | P(s_3) = 1/4 | P(s_4) = 1/4 | Probability of event
A     | 0            | 1            | 0            | 1            | 1/2
B     | 0            | 0            | 1            | 1            | 1/2
C     | 0            | 1            | 1            | 1            | 3/4

and so

Event         | s_1 | s_2 | s_3 | s_4 | Probability of event
A ∩ B         | 0   | 0   | 0   | 1   | 1/4
A ∩ C         | 0   | 1   | 0   | 1   | 1/2
B ∩ C         | 0   | 0   | 1   | 1   | 1/2
A ∩ B ∩ C     | 0   | 0   | 0   | 1   | 1/4

In this example, $C$ occurs if and only if at least one of $A$, $B$ occurs. Unconditionally (that is, without reference to $C$), $A$ and $B$ are independent of each other, because $\operatorname{P}(A)$, the sum of the probabilities associated with a $1$ in row $A$, is $\tfrac{1}{2}$, while
$$\operatorname{P}(A \mid B) = \operatorname{P}(A \text{ and } B)/\operatorname{P}(B) = \frac{1/4}{1/2} = \frac{1}{2} = \operatorname{P}(A).$$
But conditional on $C$ having occurred (the last three columns in the table), we have
$$\operatorname{P}(A \mid C) = \operatorname{P}(A \text{ and } C)/\operatorname{P}(C) = \frac{1/2}{3/4} = \frac{2}{3}$$
while
$$\operatorname{P}(A \mid C \text{ and } B) = \operatorname{P}(A \text{ and } C \text{ and } B)/\operatorname{P}(C \text{ and } B) = \frac{1/4}{1/2} = \frac{1}{2} < \operatorname{P}(A \mid C).$$
Since in the presence of $C$ the probability of $A$ is affected by the presence or absence of $B$, the events $A$ and $B$ are mutually dependent conditional on $C$.
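The numbers above can be checked mechanically. The following short Python sketch (not part of the original article) enumerates the four equally likely states from the table and recomputes the relevant probabilities:

from fractions import Fraction

# Each state records whether A, B, and C occur (1) or not (0),
# matching the rows of the table above.
states = {
    "s1": {"A": 0, "B": 0, "C": 0},
    "s2": {"A": 1, "B": 0, "C": 1},
    "s3": {"A": 0, "B": 1, "C": 1},
    "s4": {"A": 1, "B": 1, "C": 1},
}
p = Fraction(1, 4)  # each state has probability 1/4

def prob(*events):
    # Probability that all of the named events occur.
    return sum(p for s in states.values() if all(s[e] for e in events))

def cond(event, *given):
    # Conditional probability P(event | given events).
    return prob(event, *given) / prob(*given)

print(prob("A"))            # 1/2
print(cond("A", "B"))       # 1/2 = P(A): A and B are unconditionally independent
print(cond("A", "C"))       # 2/3
print(cond("A", "C", "B"))  # 1/2 < P(A | C): A and B are dependent given C

Running the sketch prints 1/2, 1/2, 2/3 and 1/2, matching the calculation above.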

