Accuracy, comprehensiveness and understandability of AI-generated answers to questions from people with COPD: the AIR-COPD Study

Respir Res. 2025 Dec 16;27(1):19. doi: 10.1186/s12931-025-03438-9.

Abstract

Background: Chronic obstructive pulmonary disease (COPD) remains an underestimated and underdiagnosed condition due to low disease awareness. Generative Artificial Intelligence (AI) chatbots are convenient and accessible sources of medical information, but the quality of their answers to patient-generated questions about COPD has not been evaluated to date.

Objective: To assess and compare accuracy, comprehensiveness, understandability and reliability of different AI chatbots in response to patient-generated questions on the clinical management of COPD.

Methods: A cross-sectional study was conducted in collaboration with the European Respiratory Society (ERS), the European Lung Foundation (ELF), and the ERS CONNECT Clinical Research Collaboration (CRC). Fifteen real questions formulated by ELF COPD patient representatives were divided into three difficulty tiers (easy, medium, difficult) and submitted to ChatGPT (version 3.5), Bard, and Copilot. Experts assessed accuracy and comprehensiveness on a 0–10 scale; patients assessed understandability using the same scale. Reliability was assessed by two investigators. Reviewers were blinded to which AI system generated the answers, and only those who completed all evaluations were included in the analysis.

Results: ChatGPT responses were the most reliable (14/15), followed by Copilot (12/15) and Bard (11/15). ChatGPT scored higher for accuracy (8.0 [7.0–9.0]) and comprehensiveness (8.0 [6.8–9.0]) than Bard (6.0 [5.0–8.0] and 6.0 [5.0–7.0]) and Copilot (6.0 [5.0–7.3] and 6.0 [5.0–8.0]) (both P < 0.001). Understandability was similar across all chatbots (ChatGPT: 8.0 [8.0–10.0]; Bard: 9.0 [8.0–10.0]; Copilot: 9.0 [8.0–10.0]) (P = 0.53). No significant effect of question difficulty was detected.

Conclusion: Our findings suggest that AI chatbots, particularly ChatGPT, can provide accurate, comprehensive and understandable answers to patients’ questions.

Keywords: AI; Accuracy; Artificial intelligence; COPD; Comprehensiveness; Disease awareness; Reliability; Understandability.