Radiographic misinterpretation rates have been suggested as a quality assurance tool for assessing emergency departments and individual physicians, but they have not been defined for emergency medicine residency programs. A study was conducted to define misinterpretation rates for an emergency medicine residency program, to compare misinterpretation rates among various radiographic studies, and to determine differences with respect to level of training. A total of 12,395 radiographic studies interpreted by emergency physicians during a consecutive 12-month period were entered into a computerized database as part of our quality assurance program. The radiologist's interpretation was defined as correct, and the clinical significance of all discrepancies was determined prospectively by ED faculty. Four hundred seventy-five (3.4%) total errors and 350 (2.8%) clinically significant errors were found. Clinically significant misinterpretation rates differed among the seven most frequently obtained radiographic studies (P < .0005, χ²), a difference accounted for by the 9% misinterpretation rate for facial films. No difference (P = .421) was noted among full-time, part-time, third-year, second-year, and "other" physicians; this finding is likely due to faculty review of residents' readings. Evaluation of misinterpretation rates as a quality assurance tool is necessary to determine the role of radiographic quality assurance in emergency medicine resident training. Educational activities should be directed toward radiographic studies with higher-than-average reported misinterpretation rates.
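The comparison of clinically significant misinterpretation rates across the seven study types uses a chi-square test on a 2 × k contingency table (error vs. no error, by study type). The sketch below illustrates that calculation with entirely hypothetical counts; the figures are invented for demonstration and are not the study's data.

```python
# Hypothetical illustration of a 2 x k chi-square test, as used to compare
# clinically significant misinterpretation rates across radiographic study
# types. All counts below are invented, not taken from the study.

def chi_square_2xk(errors, totals):
    """Chi-square statistic for a 2 x k table of (error, no-error) counts.

    errors[i] = misinterpreted studies of type i
    totals[i] = all studies of type i
    """
    assert len(errors) == len(totals)
    grand_total = sum(totals)
    overall_rate = sum(errors) / grand_total  # pooled error rate
    stat = 0.0
    for e, n in zip(errors, totals):
        # Expected counts under the null hypothesis of one common rate
        for observed, expected in ((e, n * overall_rate),
                                   (n - e, n * (1 - overall_rate))):
            stat += (observed - expected) ** 2 / expected
    return stat

# Invented counts: one study type (e.g. facial films) with a ~9% error rate
errors = [9, 3, 2, 3, 2, 3, 2]
totals = [100, 150, 120, 130, 110, 140, 125]

stat = chi_square_2xk(errors, totals)
df = len(errors) - 1  # degrees of freedom for a 2 x k table: k - 1
print(f"chi-square = {stat:.2f} on {df} df")
```

A large statistic relative to the chi-square distribution with k − 1 degrees of freedom would correspond to a small P value, as in the reported P < .0005 result.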