Background and purpose: The purpose of this study was to evaluate student and faculty perceptions of the transition to a required computer-based testing format and to identify any impact of the change on student exam performance.
Educational activity and setting: Separate questionnaires sent to students and faculty asked about their perceptions of, and problems with, computer-based testing. Exam results from program-required courses for the two years before and the two years after the adoption of computer-based testing were compared to determine whether the testing format affected student performance.
Findings: Responses to Likert-type questions about perceived ease of use showed no difference between students with one semester and those with three semesters of experience with computer-based testing. Of 223 student-reported problems, 23% related to faculty training with the testing software. Students most commonly reported improved feedback (46% of responses) and ease of exam-taking (17% of responses) as benefits of computer-based testing. Faculty-reported difficulties most commonly involved problems with student computers during an exam (38% of responses), while the most commonly identified benefit was collecting assessment data (32% of responses). Neither faculty nor students perceived that computer-based testing affected exam performance, and an analysis of exam grades confirmed that there was no consistent difference in performance between the paper-based and computer-based formats.
Discussion and summary: Both faculty and students adapted rapidly to computer-based testing, and there was no evidence that the switch to this format affected student exam performance.
Keywords: Assessment; Computer-based testing; Educational technology; Faculty development.