Background and objective: Accurate evaluation of axillary lymph node (ALN) status in early-stage breast cancer is crucial for prognosis and for guiding treatment decisions, while also preventing unnecessary surgeries and postoperative complications. Numerous methods have been proposed for ALN status classification from radiomic images, but they suffer from low diagnostic accuracy. This study aims to explore the potential of a multi-modality approach that integrates clinical parameters with dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), providing information complementary to the image features and thereby enhancing the model's ability to predict ALN metastasis.
Methods: We propose a Graph-based Multi-Modality network (GMM-Net) that combines DCE-MRI and clinical parameters for preoperative prediction. GMM-Net consists of a Text Encoder, a Local-Global Graph Neural Module (LGGNM), and a Multi-Modality Feature Fusion module (MMF). The Text Encoder extracts features from the clinical parameters, while the LGGNM captures MRI features by learning cross-region feature correlations within individual slices and by connecting tumor-related regions across adjacent slices to capture the complete spatial distribution of the lesion. Finally, the MMF computes the similarity between the MRI and clinical-parameter modalities for each patient.
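The abstract does not specify how the MMF computes cross-modal similarity or how the modalities are combined. A minimal sketch of one plausible scheme, assuming cosine similarity between pooled per-modality embeddings of equal dimension and a similarity-weighted blend (the function names `cosine_similarity` and `fuse` are illustrative, not from the paper):

```python
import math

def cosine_similarity(u, v):
    # Cosine similarity between two equal-length feature vectors,
    # a hypothetical stand-in for the MMF's cross-modal similarity.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def fuse(mri_feat, clin_feat):
    # Blend the two modality embeddings, weighting the MRI features
    # by the cross-modal similarity (an assumed fusion rule).
    s = cosine_similarity(mri_feat, clin_feat)
    return [s * m + (1 - s) * c for m, c in zip(mri_feat, clin_feat)]

# Identical embeddings give similarity 1.0; orthogonal ones give 0.0.
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # → 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # → 0.0
```

The fused vector would then feed a classification head that outputs the ALN metastasis probability.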
Results: A paired breast tumor classification dataset containing 260 cases with 13 clinical indicators was used to develop and evaluate the proposed method. Experimental results demonstrate that GMM-Net achieves an accuracy of 0.8482 and an AUC of 0.8461, outperforming single-modality approaches.
Conclusion: In this study, a graph-based multi-modality framework was proposed to extract clinical and DCE-MRI features for the preoperative assessment of ALN status. Extensive experimental results demonstrated the potential of GMM-Net for non-invasive, preoperative prediction of ALN status in early-stage breast cancer.
Keywords: Axillary lymph node; Breast cancer; DCE-MRI; Graph; Multi-modality.
Copyright © 2025 Elsevier B.V. All rights reserved.