Physicians as a group have neither consistently defined nor systematically measured the quality of medical practice. To referring clinicians and patients, a good radiologist is one who is accessible, recommends appropriate imaging studies, and provides timely consultation and reports with high interpretation accuracy. For determining interpretation accuracy in cases with pathologic or surgical proof, the author proposes tracking positive predictive value, disease detection rate, and abnormal interpretation rate for individual radiologists. For imaging studies with no pathologic proof or adequate clinical follow-up, the author proposes measuring the concordance and discordance of interpretations within a peer group. Interpretation accuracy can be monitored through periodic imaging-pathologic correlation, regular peer review of randomly selected cases, or subscription to the American College of Radiology (ACR) RADPEER system. Challenges to implementing an effective peer-review system include physician time, subjectivity in assessing discordant interpretations, lengthy and equivocal interpretations, and the potential misassignment of false-positive interpretations.
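The proposed metrics reduce to simple ratios over a radiologist's interpretation log. The sketch below illustrates how they might be computed; the function names and all counts are hypothetical and for illustration only, not values or definitions taken from the article.

```python
# Hypothetical sketch of the proposed accuracy metrics for an individual
# radiologist. All counts below are illustrative, not from the article.

def positive_predictive_value(true_positives: int, abnormal_reads: int) -> float:
    """Fraction of abnormal interpretations confirmed by pathologic or surgical proof."""
    return true_positives / abnormal_reads

def detection_rate_per_1000(true_positives: int, total_exams: int) -> float:
    """Proven cases of disease detected per 1000 examinations interpreted."""
    return 1000 * true_positives / total_exams

def abnormal_interpretation_rate(abnormal_reads: int, total_exams: int) -> float:
    """Fraction of all examinations interpreted as abnormal."""
    return abnormal_reads / total_exams

def concordance_rate(concordant_reviews: int, reviewed_cases: int) -> float:
    """For studies without proof: fraction of peer-reviewed cases judged concordant."""
    return concordant_reviews / reviewed_cases

if __name__ == "__main__":
    # Illustrative one-year tallies for a single radiologist.
    print(positive_predictive_value(24, 60))       # 24 proven / 60 called abnormal
    print(detection_rate_per_1000(24, 4800))       # 24 proven / 4800 exams
    print(abnormal_interpretation_rate(60, 4800))  # 60 abnormal / 4800 exams
    print(concordance_rate(470, 500))              # 470 concordant / 500 reviewed
```

Tracked over time and compared across a peer group, such ratios give a denominator-based view of performance that individual anecdotes of missed or overcalled findings cannot.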