[A multi-modal feature fusion classification model based on distance matching and discriminative representation learning for differentiation of high-grade glioma from solitary brain metastasis]

Nan Fang Yi Ke Da Xue Xue Bao. 2024 Jan 20;44(1):138-145. doi: 10.12122/j.issn.1673-4254.2024.01.16.
[Article in Chinese]

Abstract

Objective: To explore the performance of a new multimodal feature fusion classification model based on distance matching and discriminative representation learning for differentiating high-grade glioma (HGG) from solitary brain metastasis (SBM).

Methods: We collected multi-parametric magnetic resonance imaging (MRI) data from 121 patients (61 with HGG and 60 with SBM) and delineated regions of interest (ROI) on the T1WI, T2WI, T2-weighted fluid-attenuated inversion recovery (T2_FLAIR), and contrast-enhanced T1WI (CE_T1WI) images. Radiomics features were extracted from each sequence using Pyradiomics and fused with the proposed multimodal feature fusion classification model based on distance matching and discriminative representation learning to obtain the final classifier. The discriminative performance of the model for differentiating HGG from SBM was evaluated with five-fold cross-validation in terms of specificity, sensitivity, accuracy, and area under the ROC curve (AUC), and was quantitatively compared with that of other feature fusion models. Scatter-plot visualization experiments on the fused features were conducted to validate the feasibility and effectiveness of the proposed model.
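
As a rough illustration of the feature-extraction step only, the sketch below shows how radiomics features might be pulled from the four co-registered sequences with Pyradiomics. The file-naming convention, default extractor settings, and the helper function are assumptions for illustration; they are not taken from the paper, and the distance-matching/discriminative fusion itself is not shown here.

```python
# Minimal sketch of per-sequence radiomics extraction with Pyradiomics.
# Paths, naming convention, and settings are hypothetical; the study's actual
# extraction parameters are not reported in the abstract.
from radiomics import featureextractor

SEQUENCES = ["T1WI", "T2WI", "T2_FLAIR", "CE_T1WI"]

extractor = featureextractor.RadiomicsFeatureExtractor()  # default feature classes

def extract_patient_features(patient_dir):
    """Return {sequence: {feature_name: value}} for one patient.

    Assumes each sequence image and its ROI mask are stored as
    <patient_dir>/<seq>.nii.gz and <patient_dir>/<seq>_mask.nii.gz.
    """
    features = {}
    for seq in SEQUENCES:
        image = f"{patient_dir}/{seq}.nii.gz"
        mask = f"{patient_dir}/{seq}_mask.nii.gz"
        result = extractor.execute(image, mask)
        # Keep numeric radiomics features; drop Pyradiomics diagnostic entries.
        features[seq] = {k: float(v) for k, v in result.items()
                         if not k.startswith("diagnostics_")}
    return features
```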

Results: The five-fold cross-validation results showed that the proposed multimodal feature fusion classification model achieved a specificity of 0.871, a sensitivity of 0.817, an accuracy of 0.843, and an AUC of 0.930 for distinguishing HGG from SBM, and the proposed feature fusion method also performed well in the visualization experiments.
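
For readers who want to reproduce this style of evaluation, the following sketch runs a stratified five-fold cross-validation and computes the four reported metrics on an arbitrary fused-feature matrix. The plain logistic-regression classifier is only a stand-in; it is an assumption, not the paper's distance-matching and discriminative representation learning fusion model.

```python
# Sketch of five-fold evaluation reporting specificity, sensitivity, accuracy, AUC.
# LogisticRegression below is a placeholder classifier, NOT the paper's fusion model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import StratifiedKFold

def evaluate(X, y, n_splits=5, seed=0):
    """X: (n_samples, n_features) fused features; y: 0 = SBM, 1 = HGG."""
    spec, sens, acc, auc = [], [], [], []
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in cv.split(X, y):
        clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        prob = clf.predict_proba(X[test_idx])[:, 1]
        pred = (prob >= 0.5).astype(int)
        tn, fp, fn, tp = confusion_matrix(y[test_idx], pred, labels=[0, 1]).ravel()
        spec.append(tn / (tn + fp))
        sens.append(tp / (tp + fn))
        acc.append((tp + tn) / (tp + tn + fp + fn))
        auc.append(roc_auc_score(y[test_idx], prob))
    return {name: float(np.mean(vals)) for name, vals in
            zip(("specificity", "sensitivity", "accuracy", "AUC"),
                (spec, sens, acc, auc))}
```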

Conclusion: The proposed multimodal feature fusion classification model can effectively differentiate HGG from SBM and offers clear advantages over other feature fusion classification models in this discrimination task.

Objective: To explore the discriminative ability and application value of a multimodal feature fusion classification model based on distance matching and discriminative representation learning for differentiating high-grade glioma (HGG) from solitary brain metastasis (SBM).

Methods: Multi-parametric MRI images of 121 patients (61 with HGG and 60 with SBM) were collected, and regions of interest (ROI) were delineated on four conventional axial MRI sequences: T1WI, T2WI, T2-weighted fluid-attenuated inversion recovery (T2_FLAIR), and contrast-enhanced T1WI (CE_T1WI). Radiomics features were extracted from each of the four sequences using the open-source radiomics tool Pyradiomics. The features of the four sequences were fused with the proposed multimodal feature fusion classification model based on distance matching and discriminative representation learning to obtain the classification model. Its discriminative performance was evaluated with five-fold cross-validation using specificity (SPE), sensitivity (SEN), accuracy (ACC), and the area under the ROC curve (AUC). The proposed model was quantitatively compared with other feature fusion classification models for discriminating HGG from SBM, and scatter-plot visualization experiments on the fused features were performed to verify the feasibility and effectiveness of the proposed model.

Results: Five-fold cross-validation showed that the proposed multimodal feature fusion classification model based on distance matching and discriminative representation learning achieved a SPE of 0.871, SEN of 0.817, ACC of 0.843, and AUC of 0.930 for differentiating high-grade glioma from solitary brain metastasis, and the feature fusion method performed excellently in the visualization experiments.

Conclusion: The multimodal feature fusion classification model based on distance matching and discriminative representation learning has excellent discriminative ability and high application value for differentiating high-grade glioma from solitary brain metastasis, and compared with other feature fusion classification models, it shows a clear advantage in the task of discriminating HGG from SBM.

Keywords: discriminant analysis; feature fusion; high-grade glioma; shared representation learning; solitary brain metastasis.

Publication types

  • English Abstract

MeSH terms

  • Area Under Curve
  • Brain Neoplasms* / pathology
  • Glioma* / pathology
  • Humans
  • Magnetic Resonance Imaging / methods
  • Radiomics
  • Retrospective Studies

Grants and funding

National Natural Science Foundation of China (81874216, 62106058, 81971574); Natural Science Foundation of Guangdong Province (2022A1515011410); Guangzhou Municipal Science and Technology Project (202201011662); Guangzhou Key Laboratory Construction Project (202201020376)