Modality (human-computer interaction)
Computer science
Artificial intelligence
Pattern recognition (psychology)
Magnetic resonance imaging
Margin (machine learning)
Mode
Feature (linguistics)
Radiology
Machine learning
Medicine
Social science
Linguistics
Philosophy
Sociology
Authors
Mengyun Qiao,Chencheng Liu,Zeju Li,Jin Zhou,Qin Xiao,Shichong Zhou,Cai Chang,Yajia Gu,Yi Guo,Yuanyuan Wang
Identifier
DOI:10.1109/jbhi.2022.3140236
Abstract
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and ultrasound (US), two common modalities for clinical breast tumor diagnosis besides mammography, provide different and complementary information about the same tumor regions. Although many machine learning methods have been proposed for breast tumor classification based on a single modality, it remains unclear how to further boost classification performance by utilizing paired multi-modality information with different dimensions. In this paper, we propose the MRI-US multi-modality network (MUM-Net) to classify breast tumors into different subtypes based on 3D MR and 2D US images. The key insight of MUM-Net is that we explicitly distill modality-agnostic features for tumor classification. Specifically, we first adopt a discrimination-adaption module to decompose features into modality-agnostic and modality-specific ones with a min-max training strategy. We then propose a feature fusion module that increases the compactness of the modality-agnostic features by utilizing an affinity matrix with nearest-neighbour selection. To validate the proposed method, we build a paired MRI-US breast tumor classification dataset containing 502 cases with three clinical indicators. On three tasks, namely lymph node metastasis, histological grade, and Ki-67 level, MUM-Net achieves AUC scores of 0.8581, 0.8965, and 0.8577, outperforming counterparts based on a single task or a single modality by a wide margin. In addition, we find that the extracted modality-agnostic features help the network focus on the tumor regions in both modalities.
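The abstract describes two mechanisms: an adversarial min-max strategy that makes the distilled features modality-agnostic, and a fusion module built on an affinity matrix with nearest-neighbour selection. The PyTorch sketch below shows one way these ideas could be wired together. It is not the authors' implementation: the module names, layer sizes, the gradient-reversal realisation of the min-max training, and the cosine-affinity fusion are all assumptions made for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates gradients in the backward
    pass, so the discriminator minimises its loss while the feature
    extractor maximises it (one common form of min-max training)."""
    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad):
        return -grad


class MUMNetSketch(nn.Module):
    """Toy two-branch network: a 3D encoder for DCE-MRI, a 2D encoder
    for US, a shared projection producing modality-agnostic features,
    a modality discriminator trained adversarially, and affinity-based
    fusion. The modality-specific branch is omitted for brevity."""
    def __init__(self, feat_dim=128, num_classes=2):
        super().__init__()
        self.mri_enc = nn.Sequential(  # hypothetical 3D backbone
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, feat_dim))
        self.us_enc = nn.Sequential(   # hypothetical 2D backbone
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat_dim))
        self.agnostic = nn.Linear(feat_dim, feat_dim)  # shared projection
        self.disc = nn.Linear(feat_dim, 2)             # modality discriminator
        self.cls = nn.Linear(2 * feat_dim, num_classes)

    def fuse(self, f_mri, f_us, k=1):
        """Cosine affinity between the two agnostic feature sets; each
        MRI sample attends only to its k nearest US neighbours."""
        a = F.normalize(f_mri, dim=1) @ F.normalize(f_us, dim=1).t()
        nn_idx = a.topk(k, dim=1).indices                 # neighbour selection
        mask = torch.zeros_like(a).scatter_(1, nn_idx, 1.0)
        w = F.softmax(a.masked_fill(mask == 0, -1e9), dim=1)
        return torch.cat([f_mri, w @ f_us], dim=1)

    def forward(self, mri, us):
        ag_m = self.agnostic(self.mri_enc(mri))
        ag_u = self.agnostic(self.us_enc(us))
        # Reversed gradients push the agnostic features toward being
        # indistinguishable across modalities.
        mod_logits = self.disc(GradReverse.apply(torch.cat([ag_m, ag_u])))
        return self.cls(self.fuse(ag_m, ag_u)), mod_logits


# Example with paired inputs: a 3D MRI volume and a 2D US image per case.
net = MUMNetSketch()
cls_logits, mod_logits = net(torch.randn(4, 1, 8, 32, 32),
                             torch.randn(4, 1, 32, 32))

A gradient-reversal layer is just one way to realise the min-max game in a single backward pass: the discriminator learns to tell the modalities apart, while the reversed gradient drives the shared projection toward features the discriminator cannot separate, which is the stated goal of the discrimination-adaption module.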