Artificial intelligence in laboratory medicine: fundamental ethical issues and normative key-points

Clin Chem Lab Med. 2022 Apr 12;60(12):1867-1874. doi: 10.1515/cclm-2022-0096. Print 2022 Nov 25.


The contribution of laboratory medicine to delivering value-based care depends on active cooperation and trust between pathologist and clinician. The effectiveness of medicine more generally depends in turn on active cooperation and trust between clinician and patient. Since the second half of the 20th century, the art of medicine has been challenged by the spread of artificial intelligence (AI) technologies, which have recently shown performance comparable to flesh-and-bone doctors in some diagnostic specialties. Being the principal source of data in medicine, the laboratory is a natural ground where AI technologies can disclose the best of their potential. In order to maximize the expected outcomes and minimize risks, it is crucial to define ethical requirements for data collection and interpretation by design, clarify whether they are enhanced or challenged by specific uses of AI technologies, and preserve these data under rigorous but feasible norms. From 2018 onwards, the European Commission (EC) has been making efforts to lay the foundations of sustainable AI development among European countries and partners, from both a cultural and a normative perspective. Alongside the work of the EC, the United Kingdom has provided complementary advice worth considering in order to put science and technology at the service of patients and doctors. In this paper we discuss the main ethical challenges associated with the use of AI technologies in pathology and laboratory medicine, and summarize the most pertinent key points from the aforementioned guidelines and frameworks.

Keywords: European Commission; artificial intelligence; data protection; informed consent; loop thinking; medical responsibility.

Publication types

  • Review

MeSH terms

  • Artificial Intelligence*
  • Europe
  • Humans
  • United Kingdom