Artificial intelligence and increasing misinformation

Br J Psychiatry. 2024 Feb;224(2):33-35. doi: 10.1192/bjp.2023.136.

Abstract

With recent advances in artificial intelligence (AI), patients are increasingly exposed to misleading medical information. Generative AI models, including large language models such as ChatGPT, create and modify text, images, audio and video based on their training data. Commercial use of generative AI is expanding rapidly, and the public will routinely encounter messages created by generative AI. However, generative AI models may be unreliable, routinely making errors and spreading misinformation widely. Misinformation about mental illness created by generative AI may include factual errors, nonsense, fabricated sources and dangerous advice. Psychiatrists need to recognise that patients may receive misinformation online, including about medicine and psychiatry.

Keywords: Generative artificial intelligence; artificial intelligence; disinformation; misinformation; technology.

MeSH terms

  • Artificial Intelligence
  • Communication
  • Humans
  • Mental Disorders*
  • Psychiatrists
  • Psychiatry*