Caution! AI Bot Has Entered the Patient Chat: ChatGPT Has Limitations in Providing Accurate Urologic Healthcare Advice

Urology. 2023 Oct;180:278-284. doi: 10.1016/j.urology.2023.07.010. Epub 2023 Jul 17.

Abstract

Objective: To conduct the first study examining the accuracy of patient counseling responses generated by ChatGPT, an artificial intelligence (AI) chatbot, against clinical care guidelines in urology, using a validated questionnaire.

Methods: We posed a set of 13 urological guideline-based questions to ChatGPT, each three times. Answers were evaluated for appropriateness and scored using Brief DISCERN (BD), a validated instrument for assessing healthcare information. Data analysis included descriptive statistics and Student's t test (SAS Studio).

Results: 60% (115/195) of ChatGPT responses were deemed appropriate. Variability existed between responses to the same prompt, with 25% of the 13 question sets receiving discordant appropriateness designations. The average BD score was 16.8 ± 3.59. Only 7 (54%) of 13 topics and 21 (54%) of 39 responses met the BD cut-off score of ≥16 denoting good-quality content. Appropriateness was associated with higher overall and Relevance domain scores (both P < .01). The lowest BD domain scores were in the Source categories, as ChatGPT does not provide references by default; when prompted to supply them, 92.3% of responses contained ≥1 incorrect, misinterpreted, or nonfunctional citation.

Conclusion: While ChatGPT provides appropriate responses to urological questions more than half of the time, it misinterprets clinical care guidelines, dismisses important contextual information, conceals its sources, and provides inappropriate references. Chatbot models hold great promise, but users should be cautious when interpreting healthcare-related advice from existing AI tools. Additional training and modifications are needed before these models will be ready for reliable use by patients and providers.

MeSH terms

  • Artificial Intelligence*
  • Data Analysis
  • Health Facilities
  • Humans
  • Software
  • Urology*