Boosting wisdom of the crowd for medical image annotation using training performance and task features

Cogn Res Princ Implic. 2024 May 20;9(1):31. doi: 10.1186/s41235-024-00558-6.

Abstract

A crucial bottleneck in medical artificial intelligence (AI) is the availability of high-quality labeled medical datasets. In this paper, we test a large variety of wisdom of the crowd algorithms to label medical images that were initially classified by individuals recruited through an app-based platform. Individuals classified skin lesions from the International Skin Lesion Challenge 2018 into 7 different categories. The recruited individuals varied widely in geographical location, experience, training, and performance. We tested several wisdom of the crowd algorithms of varying complexity, from a simple unweighted average to more complex Bayesian models that account for individual patterns of errors. Using a switchboard analysis, we observe that the best-performing algorithms rely on selecting top performers, weighting decisions by training accuracy, and taking the task environment into account. These algorithms far exceed expert performance. We conclude by discussing the implications of these approaches for the development of medical AI.
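To make the simpler end of the aggregation spectrum concrete, the sketch below contrasts an unweighted plurality vote with a vote weighted by each annotator's accuracy on training items, optionally restricted to the top performers. This is a minimal illustration under assumed inputs, not the paper's Bayesian models; the 7-class codes follow the ISIC 2018 lesion categories, while the annotator IDs, accuracy values, and the `accuracy_weighted_vote` helper are hypothetical.

```python
# Illustrative sketch (not the paper's exact models): aggregating crowd labels
# for a 7-class skin-lesion task by (a) unweighted plurality vote and
# (b) a vote weighted by each annotator's accuracy on training items.
from collections import defaultdict

CLASSES = ["MEL", "NV", "BCC", "AKIEC", "BKL", "DF", "VASC"]  # ISIC 2018 categories


def plurality_vote(labels):
    """Unweighted crowd consensus: the most frequent label wins."""
    counts = defaultdict(int)
    for lab in labels:
        counts[lab] += 1
    return max(counts, key=counts.get)


def accuracy_weighted_vote(labels, annotator_ids, training_accuracy, top_k=None):
    """Weight each annotator's vote by their accuracy on training items.

    If top_k is given, only the top_k most accurate annotators contribute,
    mimicking a 'select top performers' strategy.
    """
    voters = list(zip(annotator_ids, labels))
    if top_k is not None:
        voters.sort(key=lambda v: training_accuracy[v[0]], reverse=True)
        voters = voters[:top_k]
    scores = defaultdict(float)
    for ann, lab in voters:
        scores[lab] += training_accuracy[ann]
    return max(scores, key=scores.get)


# Hypothetical example: three annotators label one image.
train_acc = {"a1": 0.85, "a2": 0.55, "a3": 0.60}
labels = ["MEL", "NV", "NV"]
annotators = ["a1", "a2", "a3"]
print(plurality_vote(labels))                                    # -> "NV"
print(accuracy_weighted_vote(labels, annotators, train_acc))     # -> "NV" (1.15 vs 0.85)
print(accuracy_weighted_vote(labels, annotators, train_acc, 1))  # -> "MEL" (top annotator only)
```

More elaborate schemes, such as the Bayesian models referenced in the abstract, additionally model each annotator's class-specific error patterns rather than a single accuracy weight.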

MeSH terms

  • Adult
  • Algorithms
  • Artificial Intelligence*
  • Bayes Theorem
  • Crowdsourcing
  • Humans