Background: The goal of our investigation was to facilitate research on clinical negotiation between patients and physicians by developing a reliable and valid classification system for patients' requests in office practice.
Methods: We developed the Taxonomy of Requests by Patients (TORP) using input from researchers, clinicians, and patient focus groups. To assess the system's reliability and validity, we applied TORP to audiotaped encounters between 139 patients and 6 northern California internists. Reliability was assessed with the kappa statistic as a measure of interrater agreement. Face validity was assessed through expert and patient judgment of the coding system. Content validity was examined by monitoring the incidence of unclassifiable requests. Construct validity was evaluated by examining the relationship between patient requests and patient health status; patient request fulfillment and patient satisfaction; and patient requests and physician perceptions of the visit.
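The kappa statistic mentioned above corrects raw percentage agreement for agreement expected by chance. As an illustration only (the study's actual coding data are not reproduced here), a minimal sketch of Cohen's kappa for two raters assigning hypothetical request categories might look like this:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters coded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each rater's marginal category frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes for 10 patient requests (not data from the study).
rater_1 = ["info"] * 6 + ["action"] * 4
rater_2 = ["info"] * 5 + ["action"] * 5
print(cohens_kappa(rater_1, rater_2))  # → 0.8
```

With 9 of 10 codes matching (90% raw agreement) but only two categories, chance agreement is high, so kappa drops to 0.8; this is why the study reports kappa alongside percentage agreement.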
Results: The 139 patients made 772 requests (619 requests for information and 153 requests for physician action). Average interrater agreement across a sample of 40 cases was 94% (kappa = 0.93; P <.001). Patients with better health status made fewer requests (r = -0.17; P = .048). Having more chronic diseases was associated with more requests for physician action (r = 0.32; P = .0002). Patients with more unfulfilled requests had lower visit satisfaction (r = -0.32; P <.001). More patient requests were also associated with physician reports of longer visit times (P = .016) and increased visit demands (P = .006).
Conclusions: Our study provides evidence that TORP is a reliable and valid system for capturing and categorizing patients' requests in adult primary care. Further research is needed to confirm the system's validity, expand its applicability, and explore its usefulness as a tool for studying clinical negotiation.