Self-supervised graph contrastive learning with diffusion augmentation for functional MRI analysis and brain disorder detection

Med Image Anal. 2024 Nov 29:101:103403. doi: 10.1016/j.media.2024.103403. Online ahead of print.

Abstract

Resting-state functional magnetic resonance imaging (rs-fMRI) provides a non-invasive imaging technique for studying patterns of brain activity, and is increasingly used to facilitate automated brain disorder analysis. Existing fMRI-based learning methods often rely on labeled data to construct learning models, yet the data annotation process typically requires substantial time and resources. Graph contrastive learning offers a promising solution to this limited-label problem by augmenting fMRI time series for self-supervised learning. However, the data augmentation strategies employed in these approaches may damage the original blood-oxygen-level-dependent (BOLD) signals, thus hindering subsequent fMRI feature extraction. In this paper, we propose a self-supervised graph contrastive learning framework with diffusion augmentation (GCDA) for functional MRI analysis. GCDA consists of a pretext model and a task-specific model. In the pretext model, we first augment each brain functional connectivity network derived from fMRI through a graph diffusion augmentation (GDA) module, and then use two graph isomorphism networks with shared parameters to extract features in a self-supervised contrastive learning manner. The pretext model can be optimized without labeled training data, and the GDA module perturbs only graph edges and nodes, thus preserving the integrity of the original BOLD signals. The task-specific model involves fine-tuning the trained pretext model to adapt it to downstream tasks. Experimental results on two rs-fMRI cohorts with a total of 1,230 subjects demonstrate the effectiveness of our method compared with several state-of-the-art approaches.

Keywords: Contrastive learning; Data augmentation; Diffusion model; Functional MRI.
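
To make the pretext stage described above more concrete, the following is a minimal sketch (not the authors' code) of a shared-parameter GIN encoder trained with a contrastive (NT-Xent) objective on two perturbed views of a functional connectivity graph. The random edge/node perturbation used here is only a simple stand-in for the paper's diffusion-based GDA module, and all names, shapes, and hyper-parameters are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class GINLayer(nn.Module):
    """One Graph Isomorphism Network layer on a dense adjacency matrix."""

    def __init__(self, dim):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x, adj):
        # GIN update: h' = MLP((1 + eps) * h + aggregated neighbor features)
        return self.mlp((1 + self.eps) * x + adj @ x)


class GINEncoder(nn.Module):
    """Stacked GIN layers followed by mean pooling to a graph-level embedding."""

    def __init__(self, dim, num_layers=2):
        super().__init__()
        self.layers = nn.ModuleList(GINLayer(dim) for _ in range(num_layers))
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, adj):
        for layer in self.layers:
            x = F.relu(layer(x, adj))
        return self.proj(x.mean(dim=1))  # (batch, dim) graph embeddings


def perturb_graph(x, adj, edge_drop=0.1, node_mask=0.1):
    """Toy augmentation: randomly drop edges and mask node features.

    This only mimics the edge/node perturbation role of the GDA module; the
    actual method uses a diffusion-based augmentation, which is not shown here.
    """
    edge_keep = (torch.rand_like(adj) > edge_drop).float()
    node_keep = (torch.rand(x.shape[:2], device=x.device) > node_mask).float()
    return x * node_keep.unsqueeze(-1), adj * edge_keep


def nt_xent(z1, z2, tau=0.5):
    """Normalized temperature-scaled cross-entropy over two batches of views."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    sim = z @ z.t() / tau
    n = z1.size(0)
    sim.fill_diagonal_(float("-inf"))  # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


# Toy pretext-training step: batch of 8 connectivity graphs with 90 ROIs each.
x = torch.randn(8, 90, 90)            # node features (e.g., connectivity profiles)
adj = torch.rand(8, 90, 90)           # functional connectivity (adjacency) matrices
encoder = GINEncoder(dim=90)
optim = torch.optim.Adam(encoder.parameters(), lr=1e-3)

optim.zero_grad()
z1 = encoder(*perturb_graph(x, adj))  # view 1
z2 = encoder(*perturb_graph(x, adj))  # view 2 (same shared-parameter encoder)
loss = nt_xent(z1, z2)                # no labels needed for this pretext objective
loss.backward()
optim.step()

After such label-free pretext training, the task-specific stage would fine-tune the trained encoder with a small classification head on the downstream disorder-detection labels; that stage is omitted here for brevity.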