Binary items and beyond: a simulation of computer adaptive testing using the Rasch partial credit model
- PMID: 18180552
Abstract
Past research on Computer Adaptive Testing (CAT) has focused almost exclusively on the use of binary items and on minimizing the number of items to be administered. To address this situation, extensive computer simulations were performed using partial credit items with two, three, four, and five response categories. Other manipulated variables included the number of available items, the number of respondents used to calibrate the items, and various manipulations of respondents' true locations. Three item selection strategies were used: the theoretically optimal Maximum Information method was compared to random item selection and a Bayesian Maximum Falsification approach. The Rasch partial credit model proved quite robust to various imperfections; systematic distortions occurred mainly in the absence of sufficient numbers of items located near the trait or performance levels of interest. The findings further indicate that having small numbers of items is more problematic in practice than having small numbers of respondents to calibrate those items. Most importantly, increasing the number of response categories consistently improved CAT's efficiency as well as the general quality of the results. In fact, increasing the number of response categories had a greater positive impact than did the choice of item selection method, as the Maximum Information approach performed only slightly better than the Maximum Falsification approach. Accordingly, issues related to the efficiency of item selection methods are far less important than is commonly suggested in the literature. However, being based on computer simulations only, the preceding presumes that actual respondents behave according to the Rasch model. CAT research could thus benefit from empirical studies aimed at determining whether, and if so how, selection strategies impact performance.
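To make the Maximum Information strategy discussed above concrete, the sketch below (an illustration, not the authors' simulation code; all function names are hypothetical) computes category probabilities and Fisher information for an item under the Rasch partial credit model, then selects the unadministered item that is most informative at the current ability estimate. For Rasch-family models the item information at a given ability level equals the model-implied variance of the item score.

```python
import math

def pcm_probs(theta, thresholds):
    """Category probabilities under the Rasch partial credit model.

    thresholds: step difficulties delta_1..delta_m; category 0 has no step.
    The log-numerator of category x is the cumulative sum of (theta - delta_j)
    for j = 1..x; probabilities follow by normalizing the exponentials.
    """
    log_nums = [0.0]  # category 0
    s = 0.0
    for d in thresholds:
        s += theta - d
        log_nums.append(s)
    mx = max(log_nums)  # subtract max for numerical stability
    exps = [math.exp(n - mx) for n in log_nums]
    total = sum(exps)
    return [e / total for e in exps]

def item_information(theta, thresholds):
    """Fisher information of a PCM item = variance of its score at theta."""
    p = pcm_probs(theta, thresholds)
    mean = sum(x * px for x, px in enumerate(p))
    return sum((x - mean) ** 2 * px for x, px in enumerate(p))

def select_max_info(theta_hat, item_bank, administered):
    """Maximum Information selection: the unused item with the largest
    information at the current ability estimate theta_hat."""
    best, best_info = None, -1.0
    for i, thresholds in enumerate(item_bank):
        if i in administered:
            continue
        info = item_information(theta_hat, thresholds)
        if info > best_info:
            best, best_info = i, info
    return best
```

A binary Rasch item is the special case with a single threshold, so a mixed bank such as `[[0.0], [-1.0, 1.0], [2.0]]` lets one see the abstract's point directly: at `theta_hat = 0.0` the three-category item (index 1) carries more information than either binary item and is selected first.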
Similar articles
- Rasch fit statistics as a test of the invariance of item parameter estimates. J Appl Meas. 2003;4(2):153-63. PMID: 12748407
- Computerized adaptive testing: a mixture item selection approach for constrained situations. Br J Math Stat Psychol. 2005 Nov;58(Pt 2):239-57. doi: 10.1348/000711005X62945. PMID: 16293199
- Using the dichotomous Rasch model to analyze polytomous items. J Appl Meas. 2013;14(1):44-56. PMID: 23442327
- Computer adaptive testing. J Appl Meas. 2005;6(1):109-27. PMID: 15701948. Review.
- Safety and nutritional assessment of GM plants and derived food and feed: the role of animal feeding trials. Food Chem Toxicol. 2008 Mar;46 Suppl 1:S2-70. doi: 10.1016/j.fct.2008.02.008. Epub 2008 Feb 13. PMID: 18328408. Review.
Cited by
- Sample Size Requirements for Applying Mixed Polytomous Item Response Models: Results of a Monte Carlo Simulation Study. Front Psychol. 2019 Nov 13;10:2494. doi: 10.3389/fpsyg.2019.02494. PMID: 31798490. Free PMC article.
- Self efficacy for fruit, vegetable and water intakes: Expanded and abbreviated scales from item response modeling analyses. Int J Behav Nutr Phys Act. 2010 Mar 29;7:25. doi: 10.1186/1479-5868-7-25. PMID: 20350316. Free PMC article.