A unique color-coded visualization system with multimodal information fusion and deep learning in a longitudinal study of Alzheimer's disease

Artif Intell Med. 2023 Jun;140:102543. doi: 10.1016/j.artmed.2023.102543. Epub 2023 Apr 7.

Abstract

Purpose: Automated diagnosis and prognosis of Alzheimer's disease (AD) remain challenging problems that machine learning (ML) techniques have attempted to resolve over the last decade. This study introduces a first-of-its-kind color-coded visualization mechanism, driven by an integrated ML model, to predict the disease trajectory in a 2-year longitudinal study. The main aim is to capture the diagnosis and prognosis of AD visually, in 2D and 3D renderings, thereby augmenting our understanding of the processes of multiclass classification and regression analysis.

Method: The proposed method, Machine Learning for Visualizing AD (ML4VisAD), is designed to predict disease progression through a visual output. The model takes baseline measurements as input and generates a color-coded visual image that reflects disease progression at different time points. The network architecture relies on convolutional neural networks. With 1123 subjects selected from the ADNI QT-PAD dataset, we evaluate the method using 10-fold cross-validation. Multimodal inputs include neuroimaging data (MRI, PET); neuropsychological test scores (excluding MMSE, CDR-SB, and ADAS to avoid bias); cerebrospinal fluid (CSF) biomarkers, namely amyloid beta (ABETA), phosphorylated tau (PTAU), and total tau (TAU); and risk factors including age, gender, years of education, and the ApoE4 gene.
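To make the idea of a CNN that maps baseline tabular measurements to a color-coded output image concrete, the following is a minimal sketch, not the authors' released code: the feature count, layer sizes, and 23 × 23 output resolution are assumptions chosen for illustration only.

```python
# Minimal sketch (not the ML4VisAD release) of a network that maps baseline
# multimodal features to a small color-coded "progression" image.
import torch
import torch.nn as nn

class ML4VisADSketch(nn.Module):
    def __init__(self, n_features: int = 32, out_size: int = 23):
        super().__init__()
        self.out_size = out_size
        # Project tabular baseline measurements to a coarse spatial map.
        self.fc = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 16 * 6 * 6), nn.ReLU(),
        )
        # Convolutional decoder producing a 3-channel (RGB) color-coded image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 16, kernel_size=4, stride=2, padding=1),  # 6 -> 12
            nn.ReLU(),
            nn.ConvTranspose2d(16, 8, kernel_size=4, stride=2, padding=1),   # 12 -> 24
            nn.ReLU(),
            nn.Conv2d(8, 3, kernel_size=3, padding=1),
            nn.Sigmoid(),  # pixel intensities in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.fc(x).view(-1, 16, 6, 6)
        img = self.decoder(z)
        # Resize to the target rendering resolution (e.g., 23x23 or 45x45).
        return nn.functional.interpolate(
            img, size=(self.out_size, self.out_size),
            mode="bilinear", align_corners=False,
        )

# Example: one subject's baseline feature vector -> a 23x23 RGB rendering.
model = ML4VisADSketch(n_features=32, out_size=23)
features = torch.randn(1, 32)
image = model(features)  # shape: (1, 3, 23, 23)
```

In such a design, each forward pass is a single small decoder evaluation, which is consistent with sub-millisecond rendering times for the output image sizes reported below.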

Findings/results: Based on subjective scores from three raters, the results showed an accuracy of 0.82 ± 0.03 for 3-way classification and 0.68 ± 0.05 for 5-way classification. The visual renderings were generated in 0.08 ms for a 23 × 23 output image and 0.17 ms for a 45 × 45 output image. Through visualization, this study (1) demonstrates that the ML visual output improves the prospects for a more accurate diagnosis and (2) highlights why multiclass classification and regression analysis remain so challenging. An online survey was conducted to gauge the merits of this visualization platform and obtain feedback from users. All implementation code is shared on GitHub.
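As a toy illustration of how a color-coded rendering can encode a disease trajectory over the 2-year window, the sketch below blends per-visit class probabilities into RGB colors; the class-to-color assignment, the three time points, and the probability values are assumptions for demonstration, not the paper's actual scheme or data.

```python
# Illustrative only: map per-visit class probabilities (CN / MCI / AD at
# baseline, month 12, month 24) to a color-coded trajectory strip.
import numpy as np
import matplotlib.pyplot as plt

# Rows = visits; columns = P(CN), P(MCI), P(AD). Synthetic example values.
probs = np.array([
    [0.70, 0.25, 0.05],   # baseline
    [0.40, 0.45, 0.15],   # month 12
    [0.15, 0.45, 0.40],   # month 24
])

# Blend class colors (CN = green, MCI = yellow, AD = red) by probability.
class_colors = np.array([
    [0.0, 0.8, 0.2],   # CN
    [0.9, 0.8, 0.1],   # MCI
    [0.9, 0.1, 0.1],   # AD
])
rgb = probs @ class_colors          # (visits, 3) blended colors
strip = rgb[np.newaxis, :, :]       # 1 x visits x 3 image strip

plt.imshow(strip, aspect="auto")
plt.xticks(range(3), ["baseline", "12 mo", "24 mo"])
plt.yticks([])
plt.title("Toy color-coded disease-trajectory strip")
plt.savefig("trajectory_strip.png", dpi=150)
```

A strip like this shows why intermediate or mixed colors are hard to score: when probability mass is spread across classes, the rendering conveys uncertainty that a single hard label would hide.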

Conclusion: This approach makes it possible to visualize the many nuances that lead to a specific classification or prediction in the disease trajectory, all in the context of the multimodal measurements taken at baseline. The model can serve as a multiclass classification and prediction model while reinforcing diagnosis and prognosis capabilities through its visualization platform.

Keywords: Alzheimer's disease; Deep learning; Diagnosis; Prognosis; Trustfulness visualization.

Publication types

  • Research Support, U.S. Gov't, Non-P.H.S.
  • Research Support, N.I.H., Extramural

MeSH terms

  • Alzheimer Disease* / diagnostic imaging
  • Amyloid beta-Peptides / cerebrospinal fluid
  • Cognitive Dysfunction* / diagnosis
  • Deep Learning*
  • Disease Progression
  • Humans
  • Longitudinal Studies
  • Magnetic Resonance Imaging / methods
  • tau Proteins / cerebrospinal fluid

Substances

  • tau Proteins
  • Amyloid beta-Peptides