Concepts
Computer science, Elastography, Artificial intelligence, Ultrasound, Modal verb, Radiology, Medicine, Pattern recognition (psychology), Modality (human–computer interaction), Weighting, Machine learning, Chemistry, Polymer chemistry
Authors
Ruobing Huang, Zehui Lin, Haoran Dou, Jian Wang, Juzheng Miao, Guangquan Zhou, Xiaohong Jia, Wenwen Xu, Zihan Mei, Yijie Dong, Xin Yang, JianQiao Zhou, Dong Ni
Identifier
DOI: 10.1016/j.media.2021.102137
Abstract
Recently, more clinicians have realized the diagnostic value of multi-modal ultrasound in breast cancer identification and have begun to incorporate Doppler imaging and elastography into routine examination. However, accurately recognizing patterns of malignancy in different types of sonography requires expertise. Furthermore, an accurate and robust diagnosis requires proper weighting of multi-modal information as well as the ability to handle missing data in practice. These two aspects are often overlooked by existing computer-aided diagnosis (CAD) approaches. To overcome these challenges, we propose a novel framework (called AW3M) that jointly utilizes four types of sonography (i.e., B-mode, Doppler, shear-wave elastography, and strain elastography) to assist breast cancer diagnosis. It extracts both modality-specific and modality-invariant features using a multi-stream CNN model equipped with a self-supervised consistency loss. Instead of assigning the weights of the different streams empirically, AW3M automatically learns the optimal weights using reinforcement learning techniques. Furthermore, we design a lightweight recovery block that can be inserted into a trained model to handle different modality-missing scenarios. Experimental results on a large multi-modal dataset demonstrate that our method achieves promising performance compared with state-of-the-art methods. The AW3M framework is also tested on another independent B-mode dataset to prove its efficacy in general settings. Results show that the proposed recovery block can learn from the joint distribution of multi-modal features to further boost classification accuracy given single-modality input at test time.
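To make the multi-stream fusion idea concrete, the sketch below is a minimal, hypothetical PyTorch illustration of a four-stream CNN with a consistency term that pulls the per-modality features toward a shared representation. It is not the authors' AW3M implementation: the backbone, the learnable softmax stream weights (a simple stand-in for the reinforcement-learning-based weighting described in the abstract), the form of the consistency loss, and the 0.1 loss weight are all assumptions for illustration only.

```python
# Minimal sketch (not the authors' code) of a four-stream CNN classifier with a
# self-supervised consistency loss. Backbone, stream weights, and loss form are
# illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

MODALITIES = ["b_mode", "doppler", "swe", "se"]  # the four sonography types named in the abstract


class StreamBackbone(nn.Module):
    """One modality-specific CNN stream (a small stand-in backbone)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))


class MultiStreamNet(nn.Module):
    """Four parallel streams; weighted-sum fusion feeds a shared classifier head."""
    def __init__(self, feat_dim=128, num_classes=2):
        super().__init__()
        self.streams = nn.ModuleDict({m: StreamBackbone(feat_dim) for m in MODALITIES})
        # Learnable per-stream weights: a simple substitute for the RL-learned weights in the paper.
        self.stream_logits = nn.Parameter(torch.zeros(len(MODALITIES)))
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, inputs):
        feats = [self.streams[m](inputs[m]) for m in MODALITIES]
        w = torch.softmax(self.stream_logits, dim=0)
        fused = sum(wi * f for wi, f in zip(w, feats))
        return self.head(fused), feats


def consistency_loss(feats):
    """Encourage modality-invariant structure by pulling normalized stream
    features toward their mean (one plausible form of a consistency loss)."""
    z = torch.stack([F.normalize(f, dim=1) for f in feats])  # (modalities, batch, dim)
    return ((z - z.mean(dim=0, keepdim=True)) ** 2).mean()


if __name__ == "__main__":
    # Random tensors stand in for the four co-registered ultrasound images.
    model = MultiStreamNet()
    batch = {m: torch.randn(4, 1, 64, 64) for m in MODALITIES}
    labels = torch.randint(0, 2, (4,))
    logits, feats = model(batch)
    loss = F.cross_entropy(logits, labels) + 0.1 * consistency_loss(feats)
    loss.backward()
```

In this sketch the classification loss drives the fused prediction while the consistency term aligns the stream features, which is one way a model could separate modality-specific from modality-invariant information; the paper's recovery block for missing modalities is not shown here.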