The potential of artificial intelligence (AI) to reduce health care disparities and inequities is widely recognized, but AI can also exacerbate these problems if it is not implemented equitably. This perspective identifies potential biases at each stage of the AI life cycle: data collection, annotation, machine learning model development, evaluation, deployment, operationalization, monitoring, and feedback integration. To mitigate these biases, we suggest involving a diverse group of stakeholders and applying human-centered AI principles. Human-centered AI can help ensure that AI systems are designed and used in ways that benefit patients and society, thereby reducing health disparities and inequities. By recognizing and addressing biases at each stage of the AI life cycle, AI can achieve its potential in health care.
Keywords: AI; application; artificial intelligence; benefits; biases; biomedical; care; design; development; health; human-centered; human-centered AI; patient; research.
©You Chen, Ellen Wright Clayton, Laurie Lovett Novak, Shilo Anders, Bradley Malin. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 24.03.2023.