This study examines the quality, completeness, accuracy, and readability of ChatGPT-generated responses on Myofascial Pain Syndrome (MPS), a common chronic pain condition characterized by muscle pain and tenderness. Given the growing reliance on AI chatbots for health information, the study evaluates ChatGPT's suitability for providing accessible and reliable information on MPS. Using Google Trends data, we identified the most frequently searched MPS-related keywords and entered them into the GPT-4 version of ChatGPT. The responses were assessed with the Ensuring Quality Information for Patients (EQIP) scale, Likert scales, and Flesch-Kincaid readability metrics. Results indicated that while ChatGPT's responses generally scored well on accuracy, their readability varied, suggesting uneven accessibility across audience segments. The Philippines, Thailand, and the United States were the top three countries searching for MPS-related information. Despite promising results in information accessibility, ChatGPT's responses lacked the depth required for comprehensive patient care and cannot substitute for professional medical consultation. Improved quality control, together with reliance on authoritative medical sources, is recommended to strengthen the chatbot's capacity to provide accurate and comprehensible health information. The study underscores the importance of integrating human oversight in AI systems to better serve the public's health information needs.
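For readers unfamiliar with the readability metrics used in the study, the Flesch Reading Ease and Flesch-Kincaid Grade Level scores are simple functions of average sentence length and average syllables per word. The sketch below is an illustrative implementation, not the study's actual scoring pipeline; it uses a rough vowel-group heuristic for syllable counting, whereas published readability tools apply more refined syllable rules.

```python
import re

def flesch_metrics(text):
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level) for a text.

    Higher Reading Ease means easier text; Grade Level approximates the
    U.S. school grade needed to understand it.
    """
    # Sentence count: runs of terminal punctuation (crude but adequate here).
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))

    def syllables(word):
        # Count groups of consecutive vowels as a rough syllable estimate.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    n_syll = sum(syllables(w) for w in words)
    wps = n_words / sentences   # average words per sentence
    spw = n_syll / n_words      # average syllables per word

    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return reading_ease, grade_level
```

Applied to a short, plain sentence versus a dense clinical sentence, the function yields a much higher Reading Ease and lower Grade Level for the plain text, which is the kind of variability across responses the study reports.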
Keywords: AI chatbot; ChatGPT; Myofascial pain syndrome; Patient information.
Copyright © 2025. Published by Elsevier Ltd.