Maintaining Content Validity in Computerized Adaptive Testing

Adv Health Sci Educ Theory Pract. 1998;3(1):29-41. doi: 10.1023/A:1009789314011.

Abstract

A major advantage of computerized adaptive testing (CAT) is improved measurement efficiency: better score reliability or mastery decisions can result from targeting item selections to the abilities of examinees. However, this type of engineering solution can result in differential content for examinees at different levels of ability. This paper empirically demonstrates some of the trade-offs that can occur when content balancing is imposed in CAT forms or, conversely, when it is ignored. In particular, the content validity of a CAT form can actually change across the score scale when content balancing is ignored. On the other hand, efficiency and score precision can be severely reduced by over-specifying content restrictions in a CAT form. The results of two simulation studies are presented to highlight some of the trade-offs that can occur between content and statistical considerations in CAT form assembly.
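
To make the content-balancing idea concrete, the following is a minimal sketch of one common constrained-CAT selection rule (in the spirit of Kingsbury-Zara spiraling): pick the content area furthest below its target proportion, then administer the most informative unused item within that area under a 2PL model. This is an illustration, not the procedure used in the paper; the item pool, content areas, function names, and 2PL parameterization are all assumptions for the example.

```python
import numpy as np

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

def select_item(theta, pool, administered, targets, counts):
    """Content-balanced selection: choose the content area with the
    largest deficit relative to its target proportion, then the most
    informative unused item in that area."""
    total = max(sum(counts.values()), 1)
    deficits = {c: targets[c] - counts.get(c, 0) / total for c in targets}
    for area in sorted(deficits, key=deficits.get, reverse=True):
        candidates = [i for i in pool
                      if i["area"] == area and i["id"] not in administered]
        if candidates:
            return max(candidates,
                       key=lambda i: info_2pl(theta, i["a"], i["b"]))
    return None  # pool exhausted

# Illustrative use with a made-up pool and 50/50 content targets.
rng = np.random.default_rng(0)
pool = [{"id": k, "a": rng.uniform(0.8, 2.0), "b": rng.normal(),
         "area": ["algebra", "geometry"][k % 2]} for k in range(40)]
targets = {"algebra": 0.5, "geometry": 0.5}

theta, administered, counts = 0.0, set(), {"algebra": 0, "geometry": 0}
for _ in range(10):
    item = select_item(theta, pool, administered, targets, counts)
    administered.add(item["id"])
    counts[item["area"]] += 1
```

Dropping the deficit step and selecting purely on information reproduces the unbalanced case the paper describes, in which the content mix of administered items can drift with examinee ability; tightening the targets into many narrow constraints illustrates the opposite trade-off, shrinking the candidate set and reducing the information of each selected item.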