Introduction: Two converging studies investigated the validity of a simulator for measuring driving performance and skill. STUDY 1: A concurrent validity study compared novice drivers' performance on an on-road driving test with their performance on a comparable simulated driving test.
Results: Results showed a reasonable degree of concordance between the distribution of driving errors on-road and on the simulator. Moreover, there was a significant relationship between the two tests when driver performance was rank ordered according to errors, further establishing the relative validity of the simulator. However, specific driving errors on the two tasks were not closely related, suggesting that absolute validity could not be established and that overall performance, rather than individual errors, is needed to establish level of skill. STUDY 2: A discriminant validity study compared driving performance on the simulator across three groups of drivers who differed in their level of experience--a group of true beginners who had no driving experience, a group of novice drivers who had completed driver education and held a learner's permit, and a group of fully licensed, experienced drivers.
Results: The findings showed significant differences among the groups in the expected direction--on the various measures of driving errors, beginners performed worse than novice drivers, and experienced drivers made the fewest errors. Collectively, the results of the concurrent and discriminant validity studies support the use of the simulator as a valid measure of driving performance for research purposes.
Impact on industry: These findings support the use of a driving simulator as a valid measure of driving performance for research purposes. Future research should continue to examine the correspondence between on-road and simulated driving performance, as well as the use of simulated driving tests in evaluating driver education and training programs.
Copyright © 2011 Elsevier Ltd. All rights reserved.