Outlier Detection and Cross-Modal Representation Learning for Multimodal Alzheimer's Disease Diagnosis

IEEE Trans Neural Syst Rehabil Eng. 2025;33:4646-4656. doi: 10.1109/TNSRE.2025.3634138.

Abstract

Early diagnosis of Alzheimer's disease (AD) is crucial: many individuals first experience mild cognitive impairment (MCI), which can progress to AD, so detecting the disease at this stage enables timely intervention, slows disease progression, and advances the understanding of AD pathology. However, existing methods face two major challenges: first, they lack effective mechanisms for handling abnormal samples in neuroimaging data, which can distort model learning; second, they do not fully exploit complementary structural information across modalities, yielding insufficient discriminative power. To tackle these problems, we propose a model for outlier detection and cross-modal representation learning. The model leverages graph fusion to exploit cross-modal information effectively and introduces multiple latent space mappings. Additionally, an outlier detection vector assigns lower learning weights to more anomalous samples, mitigating their impact. An alternating optimization algorithm optimizes the objective function and ensures convergence. Experimental comparisons with related algorithms on AD datasets demonstrate our method's superiority. These results confirm that explicitly handling abnormal data and enhancing cross-modal fusion are essential for improving both the robustness and the accuracy of early AD diagnosis.
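The abstract does not give the exact weighting rule, but the core idea of the outlier detection vector, assigning smaller learning weights to more anomalous samples, can be sketched as follows. This is a minimal illustration, assuming a simple distance-from-centroid anomaly score; the paper's actual formulation is learned jointly within the alternating optimization.

```python
import numpy as np

def outlier_weights(X, eps=1e-8):
    """Hypothetical sketch: down-weight anomalous samples.

    Scores each sample by its Euclidean distance from the
    feature-wise mean, then maps larger distances to smaller
    normalized weights. The paper's actual outlier detection
    vector is not specified in the abstract.
    """
    center = X.mean(axis=0)                  # per-feature centroid
    dist = np.linalg.norm(X - center, axis=1)  # anomaly score per sample
    w = 1.0 / (dist + eps)                   # inverse-distance weights
    return w / w.sum()                       # normalize to sum to 1

# Toy data: three typical samples and one clear outlier.
X = np.array([[0.0, 0.0],
              [0.1, 0.0],
              [0.0, 0.1],
              [5.0, 5.0]])
w = outlier_weights(X)
```

In a weighted loss such as sum_i w_i * loss_i, the outlying fourth sample would then contribute least to model learning, mitigating its distorting effect.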

MeSH terms

  • Aged
  • Algorithms
  • Alzheimer Disease* / diagnosis
  • Alzheimer Disease* / diagnostic imaging
  • Cognitive Dysfunction / diagnostic imaging
  • Databases, Factual
  • Early Diagnosis
  • Female
  • Humans
  • Machine Learning*
  • Magnetic Resonance Imaging
  • Male
  • Multimodal Imaging
  • Neuroimaging / methods
  • Reproducibility of Results