Background: ChatGPT excels in natural language tasks, but its performance on the Chinese National Medical Licensing Examination (NMLE) and its role in Chinese medical education remain underexplored. Meanwhile, Chinese corpus-based large language models (LLMs) such as ERNIE Bot, Tongyi Qianwen, Doubao, and DeepSeek have emerged, yet their effectiveness on the NMLE awaits systematic evaluation.
Objective: This study aimed to quantitatively compare the performance of 6 LLMs (GPT-3.5, GPT-4, ERNIE Bot, Tongyi Qianwen, Doubao, and DeepSeek) in answering NMLE questions from 2018 to 2024 and analyze their feasibility as supplementary tools in Chinese medical education.
Methods: We selected questions from the 4 content units of the NMLE's General Written test (2018-2024), preprocessed image- and table-based content into standardized text, and input the questions into each model. We evaluated the accuracy, comprehensiveness, and logical coherence of the responses, with quantitative comparison centered on scores and accuracy rates against the official answer keys (passing score: 360/600).
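For illustration only, a minimal scoring routine of the kind described above might look as follows; the function name, data structures, one-point-per-item weighting, and passing threshold handling are hypothetical assumptions and not the study's actual pipeline.

    # Illustrative sketch only: score one model's responses against an official
    # answer key and check the 360/600 passing threshold. All names and the
    # one-point-per-item assumption are hypothetical, not the study's code.
    def score_model(responses, answer_key, points_per_item=1, passing_score=360):
        correct = sum(1 for qid, ans in responses.items()
                      if answer_key.get(qid) == ans)
        accuracy = correct / len(answer_key)   # proportion of items answered correctly
        score = correct * points_per_item      # raw score on the exam's 600-point scale
        return {"correct": correct, "accuracy": accuracy,
                "score": score, "passed": score >= passing_score}

    # Tiny hypothetical example
    key = {"q1": "A", "q2": "C", "q3": "B"}
    responses = {"q1": "A", "q2": "D", "q3": "B"}
    print(score_model(responses, key))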
Results: GPT-4 outperformed GPT-3.5 across all units, with average accuracies of 66.57% (SD 3.21%) in unit 1, 69.05% (SD 2.87%) in unit 2, 71.71% (SD 2.53%) in unit 3, and 80.67% (SD 2.19%) in unit 4, and scores consistently above the passing threshold. Among the Chinese models, DeepSeek demonstrated the highest overall performance, with an average score of 454.8 (SD 17.3) and average accuracies of 73.2% (SD 2.89%) in unit 1, 70.3% (SD 3.02%) in unit 2, 71.5% (SD 2.64%) in unit 3, and 78.2% (SD 2.47%) in unit 4. ERNIE Bot (mean score 442.3, SD 19.6; unit accuracies of 70.8%, SD 3.01%; 68.7%, SD 3.15%; 69.1%, SD 2.93%; and 68.3%, SD 2.76%, for units 1-4, respectively), Tongyi Qianwen (mean score 426.5, SD 21.4; unit accuracies of 67.4%, SD 3.22%; 65.9%, SD 3.31%; 66.2%, SD 3.08%; and 67.2%, SD 2.89%), and Doubao (mean score 413.7, SD 23.1; unit accuracies of 65.2%, SD 3.45%; 63.8%, SD 3.52%; 64.1%, SD 3.27%; and 62.8%, SD 3.11%) also all exceeded the passing score. DeepSeek's overall average accuracy (75.8%, SD 2.73%) was significantly higher than that of each of the other Chinese models (χ²₁=11.4, P=.001 vs ERNIE Bot; χ²₁=28.7, P<.001 vs Tongyi Qianwen; χ²₁=45.3, P<.001 vs Doubao). GPT-4's overall average accuracy (77.0%, SD 2.58%) was slightly higher than DeepSeek's, but the difference was not statistically significant (χ²₁=2.2, P=.14); both outperformed GPT-3.5 (overall accuracy 68.5%, SD 3.67%; χ²₁=89.8, P<.001 for GPT-4 vs GPT-3.5; χ²₁=76.3, P<.001 for DeepSeek vs GPT-3.5).
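The pairwise accuracy comparisons above are chi-square tests with 1 degree of freedom; a minimal sketch of such a test on 2×2 correct/incorrect counts follows. The scipy-based helper and the example counts are illustrative assumptions, not the study's actual analysis code or data.

    # Illustrative sketch: chi-square test (1 df) comparing two models'
    # correct/incorrect counts; the counts below are placeholders, not study data.
    from scipy.stats import chi2_contingency

    def compare_accuracy(correct_a, total_a, correct_b, total_b):
        table = [[correct_a, total_a - correct_a],
                 [correct_b, total_b - correct_b]]
        chi2, p, dof, _ = chi2_contingency(table)  # 2x2 table -> 1 degree of freedom
        return chi2, p, dof

    chi2, p, dof = compare_accuracy(462, 600, 411, 600)  # hypothetical counts
    print(f"chi-square({dof})={chi2:.1f}, P={p:.3f}")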
Conclusions: GPT-4 and Chinese-developed LLMs such as DeepSeek show potential as supplementary tools in Chinese medical education given their solid performance on the NMLE. However, further optimization is required for complex reasoning, multimodal processing, and dynamic knowledge updates, with human medical expertise remaining central to clinical practice and education.
Keywords: AI; ChatGPT; Chinese National Medical Licensing Examination; ERNIE Bot; Tongyi Qianwen; artificial intelligence; medical student.