Background: Cognitive function is an important outcome in brain-tumor clinical trials. Cognitive examiners, many of whom have no prior testing experience, are often needed across multiple sites. To ensure quality, we examined examiner errors in administering a commonly used cognitive test battery, determined whether the errors were correctable upon central review, and considered whether the same errors would be detected with onsite electronic data entry.
Methods: We reviewed 500 cognitive exams administered in brain-tumor trials led by the Alliance for Clinical Trials in Oncology (Alliance). Of the 2277 tests examined, 32 noncorrectable errors were detected by routine central review (1.4% of tests administered), and the affected tests were removed from the respective trial databases. Invalidation rates by test were 0.8% for each part of the Hopkins Verbal Learning Test-Revised, 0.8% for Controlled Oral Word Association, 1.8% for Trail Making Test-A, and 2.6% for Trail Making Test-B. We estimated that, with onsite data entry and no central review, 4.9% of entered tests would have uncorrected errors and 1.3% would be frankly invalid but not removed.
Conclusions: Cognitive test results are useful and robust outcome measures for brain-tumor clinical trials. Error rates are extremely low, and almost all errors are correctable with central review of scoring, which is easy to accomplish. We caution that many errors could be missed if onsite electronic entry is used instead of central review, and it would be important to mitigate the risk of invalid scores being entered.
Keywords: clinical trials; cognitive testing; neurocognitive.