Computer science
Leverage (statistics)
Parameterized complexity
Modal verb
Artificial intelligence
Construct (python library)
Pattern recognition (psychology)
Machine learning
Algorithm
Chemistry
Polymer chemistry
Programming language
Authors
Yang Li, Beiji Zou, Jing Wu, Yulan Dai, Harrison X. Bai, Zhicheng Jiao
Identifier
DOI: 10.1109/bibm55620.2022.9995556
Abstract
Accurate ovarian tumor differentiation is a challenging task because benign and malignant tumors share similar T1C and T2WI MRI appearances. It is therefore necessary to leverage additional multi-modal data, e.g., age, CA125 level, and other clinical information, which are helpful but rarely exploited. In this paper, we propose a dynamic fusion network that adaptively makes full use of multi-modal data, including MRI and clinical information, to achieve precise ovarian tumor differentiation. Specifically, we design a dynamic nonlinear module (D-Non-L module) on top of the image representation. The D-Non-L module is formulated as an iterative nonlinear projection parameterized by the learned features of the patient-wise clinical information. With the help of this module, the interaction between clinical features and image features can be exploited to adaptively improve the discrimination of the visual representations. Moreover, we construct a dual-path architecture to fully exploit the complementary information from T1C and T2WI MRIs. Extensive experiments on a locally collected ovarian tumor dataset demonstrate that our method is superior to single-modal and single-path baselines, and the proposed dynamic nonlinear module achieves the best performance among the compared multi-modal fusion strategies.
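The abstract describes the D-Non-L module as an iterative nonlinear projection whose parameters are generated from learned clinical features, applied to each of the two MRI paths. The snippet below is a minimal PyTorch sketch of that general idea under stated assumptions, not the authors' implementation: the class name, feature dimensions, residual formulation, number of iterations, and the concatenation-based dual-path fusion are all illustrative choices.

```python
import torch
import torch.nn as nn

class DynamicNonlinearModule(nn.Module):
    """Hypothetical sketch: a projection of image features whose weights
    are generated per patient from encoded clinical information."""

    def __init__(self, img_dim=512, clin_dim=32, n_iters=3):
        super().__init__()
        self.img_dim = img_dim
        self.n_iters = n_iters
        # Hypernetwork: maps clinical features to a per-patient weight matrix
        # (dimensions and structure are assumptions for illustration).
        self.weight_gen = nn.Linear(clin_dim, img_dim * img_dim)

    def forward(self, img_feat, clin_feat):
        # img_feat: (B, img_dim), clin_feat: (B, clin_dim)
        w = self.weight_gen(clin_feat).view(-1, self.img_dim, self.img_dim)
        x = img_feat
        for _ in range(self.n_iters):
            # Iterative nonlinear projection conditioned on clinical data,
            # with a residual connection added here for stability (assumed).
            x = x + torch.tanh(torch.bmm(w, x.unsqueeze(-1)).squeeze(-1))
        return x

# Usage sketch: each MRI path is refined by the clinical features, then the
# two paths are concatenated for benign/malignant classification (assumed).
module = DynamicNonlinearModule()
feat_t1c = torch.randn(4, 512)   # features from a T1C image encoder (assumed)
feat_t2w = torch.randn(4, 512)   # features from a T2WI image encoder (assumed)
clin = torch.randn(4, 32)        # encoded age, CA125 level, etc. (assumed)
fused = torch.cat([module(feat_t1c, clin), module(feat_t2w, clin)], dim=1)
logits = nn.Linear(1024, 2)(fused)
```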