Purpose of review: The integration of artificial intelligence (AI) into toxicology marks a profound paradigm shift in chemical safety science. No longer limited to automating traditional workflows, AI is redefining how we assess risk, interpret complex biological data, and inform regulatory decision-making. This article explores the convergence of AI and other new approach methodologies (NAMs), emphasizing key trends such as multimodal learning, causal inference, explainable AI (xAI), generative modeling, and federated learning.
Recent findings: These technologies enable more human-relevant, mechanistically grounded, and ethically aligned toxicological predictions, surpassing the reproducibility and scalability of animal-based methods. However, the dynamic nature of AI models challenges traditional validation paradigms. To address this, we introduce the e-validation framework, which operationalizes the TREAT principles (Trustworthiness, Reproducibility, Explainability, Applicability, Transparency) and incorporates AI-powered modules for reference chemical selection, virtual study simulation, mechanistic cross-validation, and post-validation surveillance through companion agents. Ethical considerations, including bias audits, equity audits, and participatory governance, are also foregrounded as critical elements for responsible AI adoption. The emergence of a co-pilot model, in which AI augments but does not replace human judgment, offers a pragmatic path forward. Supported by evidence from the 2025 Stanford AI Index and recent regulatory advances, we argue that the infrastructure, economics, and policy momentum are now aligned for global-scale deployment of AI-based toxicology. The future of the field lies not in replicating legacy practices but in reinventing toxicology as an adaptive, transparent, and ethically grounded science that delivers more accurate, inclusive, and human-centric safety assessments.
Plain language summary: Artificial intelligence (AI) is changing how we test chemicals for safety. Instead of using animals, new computer-based tools can predict how substances affect human health more quickly, accurately, and ethically. This article looks at how these technologies (smart data systems, models that explain their reasoning, and even AI "agents" that run simulations) can improve toxicology. We also introduce a new idea called "e-validation", which uses AI to help validate these methods in real time, not just once. This ensures the models stay up to date and reliable.
But using AI safely means tackling big questions: Can we trust results we don't fully understand? How do we prevent unfairness or bias in the data? We suggest a "co-pilot" model, where AI supports, but doesn't replace, human experts. With better data sharing, strong ethics, and smarter oversight, AI can help make chemical safety testing more human-focused, fair, and effective.
Keywords: Artificial Intelligence (AI); Bias audit; Causal modeling; Chemical risk assessment; Digital twins; Ethical toxicology; Explainable AI (xAI); Federated learning; Human relevance; New Approach Methodologies (NAM); Regulatory science; Responsible AI; TREAT principles; Toxicology; e-Validation.
© 2025. The Author(s).