Medicine for artificial intelligence: applying a medical framework to AI anomalies

Front Artif Intell. 2025 Oct 9;8:1698717. doi: 10.3389/frai.2025.1698717. eCollection 2025.

Abstract

We propose Medicine for Artificial Intelligence (MAI), a clinical framework that reconceptualizes AI anomalies as diseases requiring systematic screening, differential diagnosis, treatment, and follow-up. Contemporary discourse on AI failures (e.g., "hallucination") is ad hoc and fragmented across domains, impeding cumulative knowledge and reproducible management. MAI adapts medical nosology to AI by formalizing core constructs (disease, symptom, diagnosis, treatment, and classification) and mapping a clinical workflow (examination → diagnosis → intervention) onto the AI lifecycle. As a proof of concept, we developed DSA-1, a prototype taxonomy of 45 disorders across nine functional chapters. This approach clarifies ambiguous failure modes (e.g., by distinguishing hallucination subtypes), links diagnoses to actionable interventions and evaluation metrics, and supports lifecycle practices such as triage and "AI health checks." MAI further maps epidemiology, severity, and detectability onto risk-assessment constructs, complementing top-down governance with bottom-up technical resolution. By aligning clinical methodology with AI engineering and coordinating researchers, clinicians, and regulators, MAI offers a reproducible foundation for safer, more resilient, and auditable AI systems.

Keywords: AI anomaly; classification; failure taxonomy; medical analogy; risk assessment.
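To make the abstract's taxonomy-plus-workflow idea concrete, the sketch below models a DSA-1-style entry (disorder code, functional chapter, symptoms, interventions) and a screening step that returns a differential from an observed symptom. All codes, names, chapters, symptoms, and interventions here are invented illustrations, not entries from the actual DSA-1; this is a minimal data-structure sketch, not the authors' implementation.

```python
from dataclasses import dataclass


@dataclass
class Disorder:
    """One entry in a hypothetical DSA-1-style taxonomy (all values illustrative)."""
    code: str            # invented identifier, e.g. "HAL-01"
    name: str
    chapter: str         # one of the taxonomy's functional chapters (name assumed)
    symptoms: list[str]  # observable failure descriptions used for screening
    interventions: list[str]  # candidate treatments linked to the diagnosis


# A toy two-entry taxonomy distinguishing hallucination subtypes,
# mirroring the abstract's point that "hallucination" is ambiguous.
TAXONOMY = [
    Disorder("HAL-01", "Factual hallucination", "Knowledge and Recall",
             ["asserts unverifiable facts"],
             ["retrieval grounding", "citation verification"]),
    Disorder("HAL-02", "Instruction hallucination", "Task Compliance",
             ["invents constraints not present in the prompt"],
             ["prompt audit", "instruction-following evaluation"]),
]


def differential(symptom_keyword: str) -> list[Disorder]:
    """Screening step: return disorders whose symptoms mention the keyword."""
    return [d for d in TAXONOMY
            if any(symptom_keyword in s for s in d.symptoms)]
```

Under this sketch, observing a model "asserts unverifiable facts" screens to `HAL-01`, whose record then points directly to candidate interventions, illustrating the examination → diagnosis → intervention mapping.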