Evidence, ethics and the promise of artificial intelligence in psychiatry

J Med Ethics. 2023 Aug;49(8):573-579. doi: 10.1136/jme-2022-108447. Epub 2022 Dec 29.

Abstract

Researchers are studying how artificial intelligence (AI) can be used to better detect, prognosticate and subgroup diseases. The idea that AI might advance medicine's understanding of biological categories of psychiatric disorders, as well as provide better treatments, is appealing given the historical challenges with prediction, diagnosis and treatment in psychiatry. Given the power of AI to analyse vast amounts of information, some clinicians may feel obligated to align their clinical judgements with the outputs of the AI system. However, a potential epistemic privileging of AI in clinical judgements may lead to unintended consequences that could negatively affect patient treatment, well-being and rights. The implications are also relevant to precision medicine, digital twin technologies and predictive analytics generally. We propose that a commitment to epistemic humility can help promote judicious clinical decision-making at the interface of big data and AI in psychiatry.

Keywords: Decision Making; Ethics, Medical; Mental Health; Psychiatry.
