Background and objectives: The Congress of Neurological Surgeons Self-Assessment for Neurological Surgeons questions are widely used by neurosurgical residents to prepare for written board examinations. Recently, these questions have also served as benchmarks for evaluating large language models' (LLMs) neurosurgical knowledge. LLMs show significant promise for transforming neurosurgical practice; however, they are susceptible to in-text distractions and confounding factors. Given the increasing use of generative artificial intelligence and ambient dictation technologies, clinical text is at greater risk of including extraneous details. The aim of this study was to assess the performance of state-of-the-art LLMs on neurosurgery board-like questions and to evaluate their robustness to the inclusion of distractor statements.
Methods: A comprehensive evaluation was conducted using 28 state-of-the-art LLMs. These models were tested on 2904 neurosurgery board examination questions derived from the Congress of Neurological Surgeons Self-Assessment for Neurological Surgeons. In addition, the study introduced a distraction framework to assess the fragility of these models. The framework incorporated simple, irrelevant distractor statements containing polysemous words with clinical meanings used in nonclinical contexts to determine the extent to which such distractions degrade model performance on standard medical benchmarks.
Results: Six of the 28 tested LLMs achieved board-passing outcomes, with the top-performing models scoring more than 15.7% above the passing threshold. When exposed to distractions, accuracy across various model architectures was significantly reduced, by as much as 20.4%, and 1 model that had previously passed now failed. Both general-purpose and medical open-source models experienced greater performance declines than proprietary variants when subjected to the added distractors.
Conclusion: While current LLMs demonstrate an impressive ability to answer neurosurgery board-like examination questions, their performance is markedly vulnerable to extraneous, distracting information. These findings underscore the critical need for developing novel mitigation strategies aimed at bolstering LLM resilience against in-text distractions, particularly for safe and effective clinical deployment.
Keywords: Benchmarks; LLMs; Neurosurgical board questions; SANS.
Copyright © 2025 The Author(s). Published by Wolters Kluwer Health, Inc. on behalf of the Congress of Neurological Surgeons.