
False positive rate


In statistics, when performing multiple comparisons, a false positive ratio (also known as fall-out or false alarm ratio) is the probability of falsely rejecting the null hypothesis for a particular test. The false positive rate is calculated as the ratio between the number of negative events wrongly categorized as positive (false positives) and the total number of actual negative events (regardless of classification).

The false positive rate (or "false alarm rate") usually refers to the expectation of the false positive ratio.

Definition

The false positive rate is $\mathrm{FPR} = \frac{\mathrm{FP}}{\mathrm{FP}+\mathrm{TN}}$

where $\mathrm{FP}$ is the number of false positives, $\mathrm{TN}$ is the number of true negatives, and $N = \mathrm{FP} + \mathrm{TN}$ is the total number of ground-truth negatives.
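
As a minimal sketch of this definition, the rate can be computed directly from the two counts (the function name and the example numbers below are illustrative, not from the article):

    # Minimal sketch; names and numbers are illustrative.
    def false_positive_rate(fp, tn):
        # FPR = FP / (FP + TN): the fraction of ground-truth negatives
        # that were wrongly classified as positive.
        return fp / (fp + tn)

    # Example: 10 false positives among 10 + 190 = 200 actual negatives.
    print(false_positive_rate(10, 190))  # 0.05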

The significance level used to test each hypothesis is set according to the form of inference (simultaneous inference vs. selective inference) and its supporting criterion (for example, FWER or FDR), as pre-determined by the researcher.

When performing multiple comparisons in a statistical framework such as the one above, the false positive ratio (also known as the false alarm ratio, as opposed to the false positive rate / false alarm rate) usually refers to the probability of falsely rejecting the null hypothesis for a particular test. Using the terminology suggested here, it is simply $V/m_{0}$.

Since $V$ is a random variable and $m_{0}$ is a constant ($V \leq m_{0}$), the false positive ratio is also a random variable, ranging between 0 and 1.
The false positive rate (or "false alarm rate") usually refers to the expectation of the false positive ratio, expressed by $E(V/m_{0})$.

It is worth noting that the two terms ("false positive ratio" / "false positive rate") are somewhat interchangeable. For example, in the referenced article $V/m_{0}$ serves as the false positive "rate" rather than as its "ratio".
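
For intuition, here is a short simulation sketch (all parameter values are assumptions, and it additionally assumes independent tests, each wrongly rejecting its true null with probability alpha, so that V is Binomial(m0, alpha)); the ratio V/m0 fluctuates from run to run while its mean stays near alpha:

    import numpy as np

    rng = np.random.default_rng(0)
    m0, alpha, n_sims = 1000, 0.05, 10_000   # assumed values

    # Under independence, V ~ Binomial(m0, alpha); the false positive ratio
    # V/m0 is a random variable in [0, 1] with expectation alpha.
    V = rng.binomial(m0, alpha, size=n_sims)
    print((V / m0).mean())   # close to alpha = 0.05, i.e. E(V/m0)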

Classification of multiple hypothesis tests

Main article: Classification of multiple hypothesis tests

The following table defines the possible outcomes when testing multiple null hypotheses. Suppose we have a number $m$ of null hypotheses, denoted by $H_{1}, H_{2}, \ldots, H_{m}$. Using a statistical test, we reject the null hypothesis if the test is declared significant, and we do not reject the null hypothesis if the test is non-significant. Summing each type of outcome over all $H_{i}$ yields the following random variables:

                                   | Null hypothesis is true (H0) | Alternative hypothesis is true (HA) | Total
Test is declared significant       | V                            | S                                   | R
Test is declared non-significant   | U                            | T                                   | m − R
Total                              | m0                           | m − m0                              | m

In $m$ hypothesis tests of which $m_{0}$ are true null hypotheses, $R$ is an observable random variable, and $S$, $T$, $U$, and $V$ are unobservable random variables.
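
The following hypothetical simulation (every parameter value here is an assumption made for illustration) tallies these quantities for a batch of two-sided z-tests; note that only R could be computed without knowing which nulls are actually true:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    m, m0, alpha = 200, 150, 0.05            # assumed: 150 true nulls out of 200

    # True nulls have zero effect; false nulls are shifted by 3 standard errors.
    effects = np.concatenate([np.zeros(m0), np.full(m - m0, 3.0)])
    z = rng.normal(effects, 1.0)
    p = 2 * stats.norm.sf(np.abs(z))         # two-sided p-values
    reject = p < alpha

    V = int(reject[:m0].sum())               # false positives (true nulls rejected)
    S = int(reject[m0:].sum())               # true positives (false nulls rejected)
    U, T, R = m0 - V, (m - m0) - S, int(reject.sum())
    print(dict(V=V, S=S, U=U, T=T, R=R))     # in practice only R is observable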

Comparison with other error rates


While the false positive rate is mathematically equal to the type I error rate, it is viewed as a separate term for the following reasons:

  • The type I error rate is often associated with the a-priori setting of the significance level by the researcher: the significance level represents an acceptable error rate given that all null hypotheses are true (the "global null" hypothesis). The choice of a significance level may thus be somewhat arbitrary (e.g. 10% (0.1), 5% (0.05), 1% (0.01), etc.).
In contrast, the false positive rate is associated with an a-posteriori result: the expected number of false positives divided by the number of true null hypotheses under the actual combination of true and non-true null hypotheses (disregarding the "global null" hypothesis). Since the false positive rate is a parameter that is not controlled by the researcher, it cannot be identified with the significance level.
  • Moreover, the term false positive rate is usually used in connection with a medical test or diagnostic device (e.g. "the false positive rate of a certain diagnostic device is 1%"), while type I error is associated with statistical tests, where the meaning of the word "positive" is not as clear (e.g. "the type I error of a test is 1%").

The false positive rate should also not be confused with the family-wise error rate, which is defined as $\mathrm{FWER} = \Pr(V \geq 1)$. As the number of tests grows, the family-wise error rate usually converges to 1 while the false positive rate remains fixed.
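
Under the additional assumption of independent tests (not made in the article, but standard for illustration), FWER = 1 − (1 − α)^m, which makes the contrast explicit:

    # Assuming m independent tests, each at level alpha: the false positive
    # rate stays at alpha while FWER = 1 - (1 - alpha)**m approaches 1.
    alpha = 0.05
    for m in (1, 10, 100, 1000):
        fwer = 1 - (1 - alpha) ** m
        print(f"m={m:5d}  FPR={alpha:.2f}  FWER={fwer:.4f}")
    # m=    1  FPR=0.05  FWER=0.0500
    # m=   10  FPR=0.05  FWER=0.4013
    # m=  100  FPR=0.05  FWER=0.9941
    # m= 1000  FPR=0.05  FWER=1.0000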

Lastly, it is important to note the profound difference between the false positive rate and the false discovery rate: while the former is defined as $E(V/m_{0})$, the latter is defined as $E(V/R)$.
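
A single set of hypothetical counts (assumed purely for illustration) shows how the two quantities differ through their denominators:

    # Assumed counts: 100 true nulls, 25 rejections, 5 of them false.
    V, m0, R = 5, 100, 25
    print(V / m0)   # 0.05: false positive ratio, false positives per true null
    print(V / R)    # 0.20: false discovery proportion, false positives per rejection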

References

  1. Burke, Donald; Brundage, John; Redfield, Robert (1988). "Measurement of the False Positive Rate in a Screening Program for Human Immunodeficiency Virus Infections". The New England Journal of Medicine. 319 (15): 961–964. doi:10.1056/NEJM198810133191501. PMID 3419477.