Confusion of the inverse

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.

Confusion of the inverse, also called the conditional probability fallacy or the inverse fallacy, is a logical fallacy in which a conditional probability is equated with its inverse; that is, given two events A and B, the probability of A happening given that B has happened is assumed to be about the same as the probability of B given A, when there is actually no evidence for this assumption. More formally, P(A|B) is assumed to be approximately equal to P(B|A).
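
As a brief sketch of why the two quantities generally differ, Bayes' theorem (a standard identity, not specific to this article) relates them through the marginal probabilities of the two events:

\[
P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)},
\]

so, except in degenerate cases, P(A | B) equals P(B | A) only when P(A) = P(B).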

Examples

Example 1

Relative size (%)    Malignant                Benign                   Total
Test positive        0.8  (true positive)     9.9  (false positive)    10.7
Test negative        0.2  (false negative)    89.1 (true negative)     89.3
Total                1                        99                       100

In one study, physicians were asked to estimate the probability of malignancy for a condition with a 1% prior probability of occurring, given a test that detects 80% of malignancies and has a 10% false positive rate: what is the probability of malignancy given a positive test result? Approximately 95 out of 100 physicians responded that the probability would be about 75%, apparently because they believed the chance of malignancy given a positive test result to be approximately the same as the chance of a positive test result given malignancy.

The correct probability of malignancy given a positive test result, with the numbers given above, is approximately 7.5%, derived via Bayes' theorem:

\[
\begin{aligned}
P(\text{malignant}\mid\text{positive})
&= \frac{P(\text{positive}\mid\text{malignant})\,P(\text{malignant})}{P(\text{positive}\mid\text{malignant})\,P(\text{malignant}) + P(\text{positive}\mid\text{benign})\,P(\text{benign})} \\
&= \frac{0.80 \cdot 0.01}{0.80 \cdot 0.01 + 0.10 \cdot 0.99} \approx 0.075
\end{aligned}
\]
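
The same arithmetic can be checked numerically; the following minimal Python sketch (variable names are illustrative, not from the article) reproduces the calculation:

    # Example 1: P(malignant | positive) via Bayes' theorem.
    # Inputs taken from the text: 1% prevalence, 80% detection rate, 10% false positive rate.
    p_malignant = 0.01
    p_pos_given_malignant = 0.80
    p_pos_given_benign = 0.10

    p_positive = (p_pos_given_malignant * p_malignant
                  + p_pos_given_benign * (1 - p_malignant))
    p_malignant_given_pos = p_pos_given_malignant * p_malignant / p_positive

    print(round(p_malignant_given_pos, 4))  # 0.0748, i.e. about 7.5%

The result matches the table above: 0.8 true positives out of 10.7 total positives per 100 patients.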

Other examples of confusion include:

  • Hard drug users tend to use marijuana; therefore, marijuana users tend to use hard drugs (the first probability is marijuana use given hard drug use, the second is hard drug use given marijuana use).
  • Most accidents occur within 25 miles of home; therefore, you are safest when you are far from home.
  • Terrorists tend to have an engineering background; therefore, engineers have a tendency towards terrorism.

For other errors in conditional probability, see the Monty Hall problem and the base rate fallacy. Compare to illicit conversion.

Example 2

See also: False positive paradox
Relative size (%)    Ill                       Well                      Total
Test positive        0.99 (true positive)      0.99  (false positive)    1.98
Test negative        0.01 (false negative)     98.01 (true negative)     98.02
Total                1                         99                        100

In order to identify individuals having a serious disease in an early curable form, one may consider screening a large group of people. While the benefits are obvious, an argument against such screenings is the disturbance caused by false positive screening results: If a person not having the disease is incorrectly found to have it by the initial test, they will most likely be distressed, and even if they subsequently take a more careful test and are told they are well, their lives may still be affected negatively. If they undertake unnecessary treatment for the disease, they may be harmed by the treatment's side effects and costs.

The magnitude of this problem is best understood in terms of conditional probabilities.

Suppose 1% of the group suffer from the disease, and the rest are well. Choosing an individual at random,

\[
P(\text{ill}) = 1\% = 0.01 \quad\text{and}\quad P(\text{well}) = 99\% = 0.99.
\]

Suppose that when the screening test is applied to a person not having the disease, there is a 1% chance of getting a false positive result (and hence 99% chance of getting a true negative result, a number known as the specificity of the test), i.e.

\[
P(\text{positive}\mid\text{well}) = 1\%, \quad\text{and}\quad P(\text{negative}\mid\text{well}) = 99\%.
\]

Finally, suppose that when the test is applied to a person having the disease, there is a 1% chance of a false negative result (and 99% chance of getting a true positive result, known as the sensitivity of the test), i.e.

\[
P(\text{negative}\mid\text{ill}) = 1\% \quad\text{and}\quad P(\text{positive}\mid\text{ill}) = 99\%.
\]

Calculations

The fraction of individuals in the whole group who are well and test negative (true negative):

\[
P(\text{well}\cap\text{negative}) = P(\text{well}) \times P(\text{negative}\mid\text{well}) = 99\% \times 99\% = 98.01\%.
\]

The fraction of individuals in the whole group who are ill and test positive (true positive):

\[
P(\text{ill}\cap\text{positive}) = P(\text{ill}) \times P(\text{positive}\mid\text{ill}) = 1\% \times 99\% = 0.99\%.
\]

The fraction of individuals in the whole group who have false positive results:

\[
P(\text{well}\cap\text{positive}) = P(\text{well}) \times P(\text{positive}\mid\text{well}) = 99\% \times 1\% = 0.99\%.
\]

The fraction of individuals in the whole group who have false negative results:

\[
P(\text{ill}\cap\text{negative}) = P(\text{ill}) \times P(\text{negative}\mid\text{ill}) = 1\% \times 1\% = 0.01\%.
\]

Furthermore, the fraction of individuals in the whole group who test positive:

\[
P(\text{positive}) = P(\text{well}\cap\text{positive}) + P(\text{ill}\cap\text{positive}) = 0.99\% + 0.99\% = 1.98\%.
\]

Finally, the probability that an individual actually has the disease, given that the test result is positive:

\[
P(\text{ill}\mid\text{positive}) = \frac{P(\text{ill}\cap\text{positive})}{P(\text{positive})} = \frac{0.99\%}{1.98\%} = 50\%.
\]
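
As with Example 1, the full chain of calculations can be reproduced in a few lines of Python; the sketch below assumes the prevalence, sensitivity, and specificity stated above (variable names are illustrative, not from the article):

    # Example 2: joint fractions and the posterior P(ill | positive).
    p_ill = 0.01            # prevalence
    p_well = 1 - p_ill
    sensitivity = 0.99      # P(positive | ill)
    specificity = 0.99      # P(negative | well)

    true_positive  = p_ill * sensitivity          # P(ill and positive),  0.99% of the group
    false_positive = p_well * (1 - specificity)   # P(well and positive), 0.99% of the group
    false_negative = p_ill * (1 - sensitivity)    # P(ill and negative),  0.01% of the group
    true_negative  = p_well * specificity         # P(well and negative), 98.01% of the group

    p_positive = true_positive + false_positive   # about 1.98% of the group
    print(round(true_positive / p_positive, 4))   # 0.5, i.e. 50%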

Conclusion

In this example, it should be easy to see the difference between the conditional probabilities P(positive | ill), which with the assumed probabilities is 99%, and P(ill | positive), which is 50%: the first is the probability that an individual who has the disease tests positive; the second is the probability that an individual who tests positive actually has the disease. Thus, with the probabilities chosen in this example, roughly the same number of individuals receive the benefits of early treatment as are distressed by false positives. These positive and negative effects can then be weighed in deciding whether to carry out the screening at all, or, if possible, whether to adjust the test criteria to decrease the number of false positives (possibly at the expense of more false negatives).
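
To make the last trade-off concrete, the hypothetical helper below (not part of the original article) shows how P(ill | positive) responds if the test criteria are tightened so that specificity rises while sensitivity falls; the alternative operating point is purely illustrative:

    # Hypothetical sketch: posterior P(ill | positive) at different test operating points.
    def posterior_ill_given_positive(prevalence, sensitivity, specificity):
        true_positive = prevalence * sensitivity
        false_positive = (1 - prevalence) * (1 - specificity)
        return true_positive / (true_positive + false_positive)

    # The article's settings: 1% prevalence, 99% sensitivity, 99% specificity.
    print(round(posterior_ill_given_positive(0.01, 0.99, 0.99), 2))   # 0.5

    # Illustrative stricter criterion: fewer false positives, more false negatives.
    print(round(posterior_ill_given_positive(0.01, 0.90, 0.999), 2))  # 0.9

With the stricter (illustrative) criterion, a positive result becomes far more informative, but 10% of ill individuals would now test negative.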

References

  1. Plous, Scott (1993). The Psychology of Judgment and Decisionmaking. pp. 131–134. ISBN 978-0-07-050477-6.
  2. Villejoubert, Gaëlle; Mandel, David (2002). "The inverse fallacy: An account of deviations from Bayes's Theorem and the additivity principle". Memory & Cognition. 30 (5): 171–178. doi:10.3758/BF03195278. PMID 12035879.
  3. Eddy, David M. (1982). "Probabilistic reasoning in clinical medicine: Problems and opportunities". In Kahneman, D.; Slovic, P.; Tversky, A. (eds.). Judgment under uncertainty: Heuristics and biases. New York: Cambridge University Press. pp. 249–267. ISBN 0-521-24064-6. Description simplified as in Plous (1993).
  4. Eddy (1982, p. 253). "Unfortunately, most physicians (approximately 95 out of 100 in an informal sample taken by the author) misinterpret the statements about the accuracy of the test and estimate P(ca|pos) to be about 75%."
  5. Hastie, Reid; Dawes, Robyn (2001). Rational Choice in an Uncertain World. pp. 122–123. ISBN 978-0-7619-2275-9.
  6. See "Engineers make good terrorists?". Slashdot. 2008-04-03. Retrieved 2008-04-25.