Purpose: Although most agree that supportive learning environments (LEs) are essential for effective medical education, accurately assessing LE quality has proven challenging for educators and administrators. Two previous reviews assessed LE tools used in the health professions; however, both have shortcomings. The primary goal of this systematic review was to explore the validity evidence for the interpretation of scores from LE tools.
Method: The authors searched ERIC, PsycINFO, and PubMed for peer-reviewed studies, published through 2012 in the United States and internationally, that provided quantitative data on medical students' and/or residents' perceptions of the LE. They also searched Scopus and the reference lists of included studies for subsequent publications that assessed the LE tools. From each study, the authors extracted descriptive information, sample characteristics, and validity evidence (content, response process, internal structure, and relationship to other variables). They calculated a total validity evidence score for each tool.
Results: The authors identified 15 tools that assessed the LE in medical school and 13 that did so in residency. The majority of studies (17; 61%) provided some form of content validity evidence. Studies were less likely to provide evidence of internal structure, response process, or relationship to other variables.
Conclusions: Given the limited validity evidence for scores from existing LE tools, new tools may be needed to assess medical students' and residents' perceptions of the LE. Any new tool would require robust testing of validity evidence, with sampling across multiple institutions and multiple training levels, to establish its utility.