Purpose: Virtually all radiologists participate in peer review, but to our knowledge, this is the first detailed study of their opinions about various aspects of the process.
Methods: The study qualified for quality assurance exemption from the institutional review board. A questionnaire sent to all radiology faculty at our institution assessed their views about peer review in general, as well as case selection and scoring, consensus section review for rating and presentation of errors, and impact on radiologist performance.
Results: Of 52 questionnaires sent, 50 were completed (response rate, 96.2%). Of these, 44% agreed that our RADPEER-like system is a waste of time, and 58% believed it is done merely to meet hospital/regulatory requirements. Conversely, 46% agreed that peer review improves radiologist performance, 32% agreed that it decreases medical error, and 42% believed that peer review results are valuable to protect radiologists in cases referred to the medical board. A large majority perform all peer reviews close to the deadline, and substantial minorities frequently or almost always select more than one previous examination for a single medical record number (28%), consciously select "less time intensive" cases (22%), and intentionally avoid cases requiring more time to peer review (30%).
Discussion: Almost one-half of respondents agreed that peer review has value but that, as currently performed, it is a waste of time. The method for selecting cases raises serious questions regarding selection bias. A new approach is needed that stresses education of all radiologists by learning from the mistakes of others.
Keywords: Peer review; performance improvement.
Copyright © 2014 American College of Radiology. Published by Elsevier Inc. All rights reserved.