Computer science
Grading (engineering)
Artificial intelligence
Fusion
Concatenation (mathematics)
Feature (linguistics)
Pattern recognition (psychology)
Modality
Multimodal
Image fusion
Machine learning
Authors
Shangxuan Li, Yanyan Xie, Guangyi Wang, Lijuan Zhang, Wu Zhou
Source
Journal: IEEE Journal of Biomedical and Health Informatics
[Institute of Electrical and Electronics Engineers]
Date: 2022-03-28
Volume/Issue: PP (early access)
Identifier
DOI: 10.1109/jbhi.2022.3161466
Abstract
Multimodal medical imaging plays a crucial role in the diagnosis and characterization of lesions, yet lesion characterization based on multimodal feature fusion remains challenging. First, current fusion methods have not thoroughly studied the relative importance of the individual modalities to characterization. In addition, existing multimodal feature fusion cannot quantify how much each modality's information contributes to critical decision-making. In this study, we propose an adaptive multimodal fusion method with an attention-guided deep supervision net for grading hepatocellular carcinoma (HCC). Specifically, the proposed framework comprises two modules: attention-based adaptive feature fusion and an attention-guided deep supervision net. The former applies an attention mechanism at the feature-fusion level to generate weights for adaptive feature concatenation, balancing the importance of features across modalities. The latter uses the weights generated by the attention mechanism as the coefficients of the per-modality losses, balancing each modality's contribution to the total loss function. Experimental results on grading clinical HCC with contrast-enhanced MRI demonstrate the effectiveness of the proposed method, with a significant performance improvement over existing fusion methods. In addition, the attention weight coefficients in multimodal fusion prove highly valuable for clinical interpretation.
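The abstract describes two coupled mechanisms: attention weights that scale per-modality features before concatenation, and the same weights reused as coefficients on per-modality deep-supervision losses. The following PyTorch sketch is a minimal illustration of that idea, not the authors' implementation; `AttentionFusion`, `total_loss`, and all dimensions (`feat_dim`, `num_modalities`) are hypothetical names introduced here, and the paper's actual attention and backbone designs may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusion(nn.Module):
    """Hypothetical sketch: softmax attention over modalities gates
    per-modality features before concatenation, and the same weights
    later scale each modality's auxiliary (deep supervision) loss."""

    def __init__(self, num_modalities: int, feat_dim: int, num_classes: int):
        super().__init__()
        # Maps each modality's pooled feature vector to a scalar score.
        self.score = nn.Linear(feat_dim, 1)
        # Classifier on the weighted, concatenated multimodal features.
        self.fused_head = nn.Linear(num_modalities * feat_dim, num_classes)
        # One auxiliary classifier per modality for deep supervision.
        self.aux_heads = nn.ModuleList(
            nn.Linear(feat_dim, num_classes) for _ in range(num_modalities)
        )

    def forward(self, feats):
        # feats: list of (batch, feat_dim) per-modality feature vectors.
        scores = torch.cat([self.score(f) for f in feats], dim=1)  # (B, M)
        weights = F.softmax(scores, dim=1)                         # (B, M)
        # Adaptive concatenation: scale each modality by its weight.
        fused = torch.cat(
            [w.unsqueeze(1) * f for w, f in zip(weights.unbind(1), feats)],
            dim=1,
        )
        main_logits = self.fused_head(fused)
        aux_logits = [head(f) for head, f in zip(self.aux_heads, feats)]
        return main_logits, aux_logits, weights

def total_loss(main_logits, aux_logits, weights, target):
    """Main loss plus attention-weighted per-modality auxiliary losses."""
    loss = F.cross_entropy(main_logits, target)
    for m, logits in enumerate(aux_logits):
        # The attention weight of modality m scales its loss term,
        # so low-importance modalities contribute less supervision.
        per_sample = F.cross_entropy(logits, target, reduction="none")
        loss = loss + (weights[:, m] * per_sample).mean()
    return loss
```

Tying the same softmax weights to both the concatenation and the loss coefficients is what lets the learned weights double as an interpretability signal: a modality that dominates the fused representation also dominates the supervision, so its weight can be read as a per-case contribution score, consistent with the clinical-interpretation claim above.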