Purpose: Seeking a valid, reliable, and feasible method of collecting data on the performance of practicing family physicians, the authors compare the measurement characteristics of a multiple-station examination (MSE) using standardized patients with those of a video assessment of regular consultations in daily practice (practice video assessment, PVA).
Method: In a cross-sectional study, consultations of 90 family physicians were videotaped both in an MSE and in their daily practices. Peer observers used a validated instrument (MAAS-Global) to assess the physicians' communication with patients and their medical performance. The physicians were randomly divided into two groups, comparable in demographic characteristics; half underwent the assessments in reverse order to test for time-order effects. Content validity, criterion validity, reliability, and feasibility of the two methods were compared.
Results: Content validity of the PVA was superior to that of the MSE, since it better covered the domain of general family practice care. Participants judged the videotaped practice consultations to be "natural," whereas hardly any family physician, after reviewing his or her videotaped MSE consultations, recognized his or her usual working style. Specific criteria made it possible to standardize the real-practice assessment. Concerning criterion validity, only the medical-performance components of the two methods correlated; no correlation was found for the communication components. Real-practice performance proved less influenced by observation than did performance during the MSE. The reliabilities of the two methods, expected to be higher in the controlled MSE setting, were in fact comparable. Administration of the PVA was more flexible, less costly, and better accepted by the family physicians than that of the MSE.
Conclusion: For quality-improvement assessment of family physicians' practices, video observation in daily practice is superior to video assessment in a simulated setting using standardized patients.