Background: Machine learning shows great potential in science but struggles with complex, high-dimensional multimodal data. Parkinson's disease (PD) progresses over a long course and is diagnosed mainly from clinical signs. This paper proposes a novel decision-fusion method that combines imaging and clinical data to improve the precision of PD progression classification.
Methods: A Cross-Modal Fusion Prediction model (CMFP) is proposed, comprising three key steps: data preparation, modelling, and prediction. The data span three modalities: clinical, diffusion tensor imaging (DTI), and dopamine transporter (DAT) imaging, with Lasso used for feature selection. Each modality is classified with AdaBoost, and the per-modality results are integrated by the new fusion strategy, CMF, to form the final model, which is then used for prediction.
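The pipeline described above can be sketched roughly as follows. This is a minimal illustration using synthetic data, not the authors' implementation: the CMF fusion rule is not specified in the abstract, so a simple average of per-modality predicted probabilities stands in for it, and the modality dimensions and Lasso penalty are arbitrary choices.

```python
# Hypothetical sketch of the CMFP pipeline: Lasso feature selection and an
# AdaBoost classifier per modality, then decision fusion of the outputs.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.feature_selection import SelectFromModel
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)  # synthetic progression labels
# Synthetic stand-ins for the three modalities (sizes are illustrative)
modalities = {
    "clinical": rng.normal(size=(n, 30)) + y[:, None] * 0.3,
    "DTI": rng.normal(size=(n, 50)) + y[:, None] * 0.2,
    "DAT": rng.normal(size=(n, 20)) + y[:, None] * 0.2,
}

idx_train, idx_test = train_test_split(
    np.arange(n), test_size=0.3, random_state=0, stratify=y
)

probas = []
for name, X in modalities.items():
    # Lasso-based feature selection fitted on the training split only
    selector = SelectFromModel(Lasso(alpha=0.01)).fit(X[idx_train], y[idx_train])
    Xs = selector.transform(X)
    # One AdaBoost classifier per modality
    clf = AdaBoostClassifier(random_state=0).fit(Xs[idx_train], y[idx_train])
    probas.append(clf.predict_proba(Xs[idx_test])[:, 1])

# Stand-in decision fusion: unweighted average of per-modality probabilities
fused = np.mean(probas, axis=0)
print(f"fused AUC: {roc_auc_score(y[idx_test], fused):.3f}")
```

In practice the fusion step would use the learned CMF strategy rather than a plain average, and the Lasso penalty would be tuned per modality.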
Results: CMFP achieved an AUC of 77.91% in predicting PD progression, improvements of 24.48%, 30.78%, and 32.70% over predictions from clinical, DTI, and DAT data alone, respectively. The fused prediction from clinical and DTI data was statistically significant compared with prediction from clinical data alone (p = 9.183e-4). The method also identified key brain regions and important clinical metrics associated with PD. Notably, predicting and evaluating PD progression with the DTI analysis along the perivascular space (DTI-ALPS) metric offered an advantage over DTI-clinical fusion prediction, increasing accuracy (ACC) by 3.85%.
Conclusion: These results indicate that CMFP is effective, helping to overcome the low predictive performance of single-modality data and improving the accuracy of PD progression prediction.
Copyright: © 2025 Wen et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.