Keywords
Modality, Computer science, Mode, Grading (engineering), Artificial intelligence, Machine learning, Natural language processing, Data mining, Social science, Engineering, Sociology, Civil engineering, Chemistry, Polymer chemistry
Authors
Zilin Lu, Mengkang Lu, Yong Xia
Identifier
DOI: 10.1007/978-3-031-18814-5_1
Abstract
Clinical decision-making in oncology draws on multi-modal information, such as morphological information from histopathology and molecular profiles from genomics. Most existing multi-modal learning models achieve better performance than single-modal models; however, they focus only on the interactive information between modalities and ignore the internal relationship between multiple tasks. Both the survival analysis task and the tumor grading task can provide reliable information for pathologists in the diagnosis and prognosis of cancer. In this work, we present a Multi-modal and Multi-task Fusion ($\mathrm{M^{2}F}$) model that exploits the potential connections between modalities and between tasks. The co-attention module in the multi-modal transformer extractor mines the intrinsic information between modalities more effectively than earlier fusion methods. Joint training of the tumor grading branch and the survival analysis branch, instead of training them separately, makes full use of the complementary information between tasks to improve model performance. We validate our $\mathrm{M^{2}F}$ model on glioma datasets from The Cancer Genome Atlas (TCGA). Experimental results show that our $\mathrm{M^{2}F}$ model is superior to existing multi-modal models, demonstrating its effectiveness.
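To make the described design concrete, below is a minimal PyTorch sketch of the two ideas the abstract highlights: bidirectional co-attention between histopathology and genomic token sequences, and a shared fused representation feeding joint tumor-grading and survival heads. All class names, dimensions, pooling choices, and head definitions here are illustrative assumptions, not the authors' published $\mathrm{M^{2}F}$ implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of co-attention multi-modal fusion with two task heads
# (grading + survival). Dimensions and layer choices are assumptions made for
# illustration, not the configuration reported in the paper.

class CoAttentionFusion(nn.Module):
    """Cross-attends histopathology tokens and genomic tokens in both directions."""
    def __init__(self, dim=256, num_heads=4):
        super().__init__()
        self.path_to_gene = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gene_to_path = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_path = nn.LayerNorm(dim)
        self.norm_gene = nn.LayerNorm(dim)

    def forward(self, path_tokens, gene_tokens):
        # Pathology tokens attend to genomic tokens, and vice versa.
        path_attn, _ = self.path_to_gene(path_tokens, gene_tokens, gene_tokens)
        gene_attn, _ = self.gene_to_path(gene_tokens, path_tokens, path_tokens)
        path_fused = self.norm_path(path_tokens + path_attn)
        gene_fused = self.norm_gene(gene_tokens + gene_attn)
        return path_fused, gene_fused


class MultiTaskHead(nn.Module):
    """Joint tumor-grading (classification) and survival (risk score) heads
    sharing one fused patient-level representation."""
    def __init__(self, dim=256, num_grades=3):
        super().__init__()
        self.grade_head = nn.Linear(dim, num_grades)  # grading logits
        self.hazard_head = nn.Linear(dim, 1)          # survival risk score

    def forward(self, fused):
        return self.grade_head(fused), self.hazard_head(fused).squeeze(-1)


if __name__ == "__main__":
    fusion = CoAttentionFusion()
    heads = MultiTaskHead()
    # Toy inputs: 2 patients, 16 pathology patch tokens and 8 genomic tokens each.
    path_tokens = torch.randn(2, 16, 256)
    gene_tokens = torch.randn(2, 8, 256)
    path_fused, gene_fused = fusion(path_tokens, gene_tokens)
    # Pool both modalities into one patient-level vector before the task heads.
    fused = torch.cat([path_fused, gene_fused], dim=1).mean(dim=1)
    grade_logits, risk = heads(fused)
    print(grade_logits.shape, risk.shape)  # torch.Size([2, 3]) torch.Size([2])
```

Under a joint-training setup of this kind, one would typically minimize a weighted sum of a cross-entropy grading loss and a survival loss (for example, a Cox partial-likelihood or discrete-hazard negative log-likelihood); the specific losses and weights used by $\mathrm{M^{2}F}$ are not stated in the abstract.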