Classical test theory

Classical test theory (CTT) is a body of related psychometric theory that predicts outcomes of psychological testing such as the difficulty of items or the ability of test-takers. It is a theory of testing based on the idea that a person's observed or obtained score on a test is the sum of a true score (error-free score) and an error score. Generally speaking, the aim of classical test theory is to understand and improve the reliability of psychological tests.

Classical test theory may be regarded as roughly synonymous with true score theory. The term "classical" refers not only to the chronology of these models but also contrasts with the more recent psychometric theories, generally referred to collectively as item response theory, which sometimes bear the appellation "modern" as in "modern latent trait theory".

Classical test theory as we know it today was codified by Novick (1966) and described in classic texts such as Lord & Novick (1968) and Allen & Yen (1979/2002). The description of classical test theory below follows these seminal publications.

History

Classical test theory was born only after the following three achievements or ideas were conceptualized:

1. a recognition of the presence of errors in measurements,

2. a conception of that error as a random variable,

3. a conception of correlation and how to index it.

In 1904, Charles Spearman worked out how to correct a correlation coefficient for attenuation due to measurement error and how to obtain the index of reliability needed to make that correction. Some regard Spearman's finding as the beginning of classical test theory (Traub, 1997). Others who influenced the framework of classical test theory over the following decades include George Udny Yule, Truman Lee Kelley, Fritz Kuder and Marion Richardson (who developed the Kuder–Richardson formulas), Louis Guttman, and, more recently, Melvin Novick.

Definitions

Classical test theory assumes that each person has a true score, T, that would be obtained if there were no errors in measurement. A person's true score is defined as the expected number-correct score over an infinite number of independent administrations of the test. Unfortunately, test users never observe a person's true score, only an observed score, X. It is assumed that the observed score equals the true score plus some error:

X = T + E
where X is the observed score, T is the true score, and E is the error score.
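
To make this decomposition concrete, here is a minimal simulation sketch in Python (the numerical values are hypothetical; NumPy assumed): it "administers" the same test many times to one examinee and shows that the mean of the observed scores approaches the true score, i.e. the expected value defined above.

    import numpy as np

    rng = np.random.default_rng(0)

    true_score = 25.0   # hypothetical true score T for one examinee
    error_sd = 3.0      # hypothetical standard deviation of the error term E

    # Simulate many independent administrations: X = T + E, with E ~ N(0, error_sd^2)
    n_administrations = 100_000
    observed = true_score + rng.normal(0.0, error_sd, size=n_administrations)

    print(round(observed.mean(), 2))  # close to 25.0: the mean approaches the true score
    print(round(observed.std(), 2))   # close to 3.0: the spread reflects measurement error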

Classical test theory is concerned with the relations between the three variables X, T, and E in the population. These relations are used to say something about the quality of test scores. In this regard, the most important concept is that of reliability. The reliability of the observed test scores X, denoted ρ_XT², is defined as the ratio of the true score variance σ_T² to the observed score variance σ_X²:

\rho_{XT}^{2} = \frac{\sigma_{T}^{2}}{\sigma_{X}^{2}}

Because the variance of the observed scores can be shown to equal the sum of the variance of true scores and the variance of error scores, this is equivalent to

\rho_{XT}^{2} = \frac{\sigma_{T}^{2}}{\sigma_{X}^{2}} = \frac{\sigma_{T}^{2}}{\sigma_{T}^{2} + \sigma_{E}^{2}}
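
The variance decomposition used in this step follows from the standard classical assumptions that the errors have zero mean and are uncorrelated with the true scores; a brief sketch:

\sigma_{X}^{2} = \operatorname{Var}(T + E) = \sigma_{T}^{2} + \sigma_{E}^{2} + 2\,\operatorname{Cov}(T, E) = \sigma_{T}^{2} + \sigma_{E}^{2}, \quad \text{since } \operatorname{Cov}(T, E) = 0.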

This equation, which formulates a signal-to-noise ratio, has intuitive appeal: The reliability of test scores becomes higher as the proportion of error variance in the test scores becomes lower and vice versa. The reliability is equal to the proportion of the variance in the test scores that we could explain if we knew the true scores. The square root of the reliability is the absolute value of the correlation between true and observed scores.
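
The reliability ratio and its relation to the true–observed correlation can also be illustrated with a small simulation sketch (hypothetical population values; NumPy assumed):

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical population: true scores T and independent errors E
    n = 200_000
    T = rng.normal(50.0, 10.0, size=n)   # sigma_T = 10, so sigma_T^2 = 100
    E = rng.normal(0.0, 5.0, size=n)     # sigma_E = 5,  so sigma_E^2 = 25
    X = T + E                            # observed scores

    # Reliability as the ratio of true-score variance to observed-score variance
    reliability = T.var() / X.var()
    print(round(reliability, 3))         # close to 100 / 125 = 0.8

    # The square root of the reliability equals the correlation between T and X
    print(round(np.corrcoef(T, X)[0, 1], 3), round(reliability ** 0.5, 3))  # both about 0.894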

Evaluating tests and scores: Reliability

Main article: Reliability (psychometrics)

Reliability cannot be estimated directly, since that would require knowing the true scores, which according to classical test theory is impossible. However, estimates of reliability can be obtained by diverse means. One way of estimating reliability is by constructing a so-called parallel test. The fundamental property of a parallel test is that, for every individual, it yields the same true score and the same error variance (and hence the same observed-score variance) as the original test. If we have parallel tests X and X', this means that

\mathbb{E}[X_{i}] = \mathbb{E}[X'_{i}]

and

\sigma_{E_{i}}^{2} = \sigma_{E'_{i}}^{2}

Under these assumptions, it follows that the correlation between parallel test scores is equal to reliability (see Lord & Novick, 1968, Ch. 2, for a proof).

\rho_{XX'} = \frac{\sigma_{XX'}}{\sigma_{X}\sigma_{X'}} = \frac{\sigma_{T}^{2}}{\sigma_{X}^{2}} = \rho_{XT}^{2}
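
A brief sketch of why this correlation reduces to the reliability, using the assumptions that errors are uncorrelated with true scores and with each other, and that parallel tests have equal observed-score variances (σ_X = σ_X'):

\sigma_{XX'} = \operatorname{Cov}(T + E,\, T + E') = \operatorname{Var}(T) = \sigma_{T}^{2}, \quad \text{so} \quad \rho_{XX'} = \frac{\sigma_{T}^{2}}{\sigma_{X}\sigma_{X'}} = \frac{\sigma_{T}^{2}}{\sigma_{X}^{2}} = \rho_{XT}^{2}.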

Using parallel tests to estimate reliability is cumbersome because parallel tests are very hard to come by. In practice the method is rarely used. Instead, researchers use a measure of internal consistency known as Cronbach's α. Consider a test consisting of k items u_j, j = 1, …, k. The total test score is defined as the sum of the individual item scores, so that for individual i

X_{i} = \sum_{j=1}^{k} U_{ij}

Then Cronbach's alpha equals

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{j=1}^{k} \sigma_{U_{j}}^{2}}{\sigma_{X}^{2}}\right)
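
As a concrete illustration of this formula, here is a minimal Python sketch (the function name and the item-score matrix are hypothetical; NumPy assumed) that computes α from an examinees-by-items score matrix:

    import numpy as np

    def cronbach_alpha(item_scores: np.ndarray) -> float:
        """Cronbach's alpha for an (examinees x items) matrix of item scores."""
        k = item_scores.shape[1]
        item_variances = item_scores.var(axis=0, ddof=1)      # variance of each item U_j
        total_variance = item_scores.sum(axis=1).var(ddof=1)  # variance of the total score X
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Hypothetical example: 5 examinees, 4 dichotomously scored items
    scores = np.array([
        [1, 1, 1, 0],
        [1, 0, 1, 1],
        [0, 0, 1, 0],
        [1, 1, 1, 1],
        [0, 0, 0, 0],
    ])
    print(round(cronbach_alpha(scores), 3))  # about 0.79 for these data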

Cronbach's α can be shown to provide a lower bound for reliability under rather mild assumptions. Thus, the reliability of test scores in a population is at least as high as the value of Cronbach's α in that population. This method is empirically feasible and, as a result, very popular among researchers. Calculation of Cronbach's α is included in many standard statistical packages such as SPSS and SAS.

As noted above, the entire exercise of classical test theory is directed at arriving at a suitable definition of reliability. Reliability is supposed to say something about the general quality of the test scores in question. The general idea is that the higher the reliability, the better. Classical test theory does not say how high reliability is supposed to be. Too high a value for α, say over .90, indicates redundancy of items. Around .80 is recommended for personality research, while .90+ is desirable for individual high-stakes testing. These 'criteria' are not based on formal arguments, but rather are the result of convention and professional practice; the extent to which they can be mapped onto formal principles of statistical inference is unclear.

Evaluating items: P and item-total correlations

Reliability provides a convenient index of test quality in a single number. However, it does not provide any information for evaluating single items. Item analysis within the classical approach often relies on two statistics: the P-value (proportion) and the item-total correlation (point-biserial correlation coefficient). The P-value represents the proportion of examinees responding in the keyed direction and is typically referred to as item difficulty. The item-total correlation provides an index of the discrimination or differentiating power of the item and is typically referred to as item discrimination. In addition, these statistics are calculated for each response option of the commonly used multiple-choice item, and they are used to evaluate items and diagnose possible issues, such as a confusing distractor. Such analysis is often provided by specially designed psychometric software.
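
Both statistics are straightforward to compute from an item-score matrix. The following Python sketch (hypothetical 0/1 data; NumPy assumed) computes each item's P-value and its corrected item-total correlation, i.e. the correlation with the total score excluding the item itself, which is one common convention:

    import numpy as np

    # Hypothetical 0/1 item-score matrix: rows are examinees, columns are items
    scores = np.array([
        [1, 1, 1, 0],
        [1, 0, 1, 1],
        [0, 0, 1, 0],
        [1, 1, 1, 1],
        [0, 0, 0, 0],
        [1, 0, 1, 0],
    ])

    # P-value (item difficulty): proportion of examinees answering in the keyed direction
    p_values = scores.mean(axis=0)

    # Corrected item-total correlation: Pearson (point-biserial) correlation between each
    # item and the total score with that item removed
    totals = scores.sum(axis=1)
    item_total_r = [np.corrcoef(scores[:, j], totals - scores[:, j])[0, 1]
                    for j in range(scores.shape[1])]

    print(p_values)
    print(np.round(item_total_r, 3))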

Alternatives

Classical test theory is an influential theory of test scores in the social sciences. In psychometrics, the theory has been superseded by the more sophisticated models of item response theory (IRT) and generalizability theory (G-theory). IRT is not included in standard statistical packages like SPSS, but SAS can estimate IRT models via PROC IRT and PROC MCMC, and there are IRT packages for the open-source statistical programming language R (e.g., ltm, mirt). While commercial packages routinely provide estimates of Cronbach's α, specialized psychometric software may be preferred for IRT or G-theory. However, general statistical packages often do not provide a complete classical analysis (Cronbach's α is only one of many important statistics), and in many cases specialized software for classical analysis is also necessary.

Shortcomings

One of the most important and well-known shortcomings of classical test theory is that examinee characteristics and test characteristics cannot be separated: each can only be interpreted in the context of the other. Another shortcoming lies in the definition of reliability used in classical test theory, which states that reliability is "the correlation between test scores on parallel forms of a test". The problem is that there are differing opinions of what parallel tests are; various reliability coefficients therefore provide either lower-bound estimates of reliability or reliability estimates with unknown biases. A third shortcoming involves the standard error of measurement: according to classical test theory, the standard error of measurement is assumed to be the same for all examinees. However, as Hambleton explains, scores on any test are unequally precise measures for examinees of different ability, making the assumption of equal errors of measurement for all examinees implausible (Hambleton, Swaminathan, & Rogers, 1991, p. 4). A fourth and final shortcoming is that classical test theory is test-oriented rather than item-oriented: it cannot help us predict how well an individual, or even a group of examinees, might do on a single test item.

Notes

  1. National Council on Measurement in Education. http://www.ncme.org/ncme/NCME/Resource_Center/Glossary/NCME/Resource_Center/Glossary1.aspx?hkey=4bb87415-44dc-4088-9ed9-e8515326a061#anchorC Archived 22 July 2017 at the Wayback Machine.
  2. Traub, R. (1997). "Classical Test Theory in Historical Perspective". Educational Measurement: Issues and Practice, 16(4), 8–14. doi:10.1111/j.1745-3992.1997.tb00603.x.
  3. Lei, Pui-Wa; Wu, Qiong (2007). "CTTITEM: SAS macro and SPSS syntax for classical item analysis". Behavior Research Methods, 39(3), 527–530. doi:10.3758/BF03193021. PMID 17958163.
  4. Streiner, D. L. (2003). "Starting at the Beginning: An Introduction to Coefficient Alpha and Internal Consistency". Journal of Personality Assessment, 80(1), 99–103. doi:10.1207/S15327752JPA8001_18. hdl:11655/5356. PMID 12584072. S2CID 3679277.
  5. Hambleton, R.; Swaminathan, H.; Rogers, H. (1991). Fundamentals of Item Response Theory. Newbury Park, CA: Sage Publications.

References

  • Allen, M. J., & Yen, W. M. (2002). Introduction to Measurement Theory. Long Grove, IL: Waveland Press.
  • Novick, M. R. (1966). The axioms and principal results of classical test theory. Journal of Mathematical Psychology, 3(1), 1–18.
  • Lord, F. M., & Novick, M. R. (1968). Statistical Theories of Mental Test Scores. Reading, MA: Addison-Wesley.

Further reading

  • Gregory, Robert J. (2011). Psychological Testing: History, Principles, and Applications (Sixth ed.). Boston: Allyn & Bacon. ISBN 978-0-205-78214-7.
  • Hogan, Thomas P.; Brooke Cannon (2007). Psychological Testing: A Practical Introduction (Second ed.). Hoboken (NJ): John Wiley & Sons. ISBN 978-0-471-73807-7.
