Modality (human–computer interaction)
Computer science
Artificial intelligence
Modal verb
Feature (linguistics)
Fuse (electrical)
Feature extraction
Pattern
Pattern recognition (psychology)
Deep learning
Image fusion
Image (mathematics)
Electrical engineering
Engineering
Philosophy
Sociology
Chemistry
Polymer chemistry
Linguistics
Social science
Authors
Tingting Chen,Xinjun Ma,Xingde Ying,Wenzhe Wang,Chunnv Yuan,Weiguo Lü,Danny Z. Chen,Jian Wu
Identifier
DOI:10.1109/isbi.2019.8759303
Abstract
Fusion of multi-modal information from a patient's screening tests can help improve the diagnostic accuracy of cervical dysplasia. In this paper, we present a novel multi-modal deep learning fusion network, called MultiFuseNet, for cervical dysplasia diagnosis, utilizing multi-modal data from cervical screening results. To exploit the relations among different image modalities, we propose an Attention Mutual-Enhance (AME) module to fuse features of each modality at the feature extraction stage. Specifically, we first develop the Fused Faster R-CNN with AME modules for automatic cervix region detection and fused image feature learning, and then incorporate non-image information into the learning model to jointly learn non-linear correlations among all the modalities. To effectively train the Fused Faster R-CNN, we employ an alternating training scheme. Experimental results show the effectiveness of our method, which achieves an average accuracy of 87.4% (88.6% sensitivity and 86.1% specificity) on a large dataset, outperforming the methods using any single modality alone and the known multi-modal methods.
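The abstract describes fusing features of two image modalities with an Attention Mutual-Enhance (AME) module, where each modality's features are enhanced by attention derived from the other, before concatenating non-image information. The paper does not give the module's equations; the following is a minimal numpy sketch of one plausible reading, in which each modality produces channel-attention weights that re-scale the other modality's features. All names (`ame_fuse`, the projection matrices `w_a`/`w_b`) are hypothetical, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def ame_fuse(feat_a, feat_b, w_a, w_b):
    """Hypothetical Attention Mutual-Enhance sketch: each modality's
    features are re-weighted by channel attention computed from the
    OTHER modality, added residually, then concatenated."""
    attn_on_a = softmax(feat_b @ w_b)      # attention over A's channels, driven by B
    attn_on_b = softmax(feat_a @ w_a)      # attention over B's channels, driven by A
    enhanced_a = feat_a + attn_on_a * feat_a  # residual mutual enhancement
    enhanced_b = feat_b + attn_on_b * feat_b
    return np.concatenate([enhanced_a, enhanced_b], axis=-1)

rng = np.random.default_rng(0)
d = 8
feat_a = rng.normal(size=(4, d))  # e.g., features from one cervigram modality (batch of 4)
feat_b = rng.normal(size=(4, d))  # e.g., features from another image modality
w_a = rng.normal(size=(d, d))     # hypothetical learned projections
w_b = rng.normal(size=(d, d))
fused = ame_fuse(feat_a, feat_b, w_a, w_b)
print(fused.shape)  # (4, 16): the fused vector could then be joined with non-image features
```

In a trained network these projections would be learned jointly, and the fused vector would be concatenated with non-image screening results before the final classifier, mirroring the two-stage joint learning the abstract describes.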