Keywords
Computer science
Transformer
Image (mathematics)
Domain (mathematical analysis)
Artificial intelligence
Computer vision
Data mining
Information retrieval
Electrical engineering
Mathematics
Voltage
Engineering
Mathematical analysis
Authors
Huaibo Hao,Jie Xue,Pu Huang,Liwen Ren,Dengwang Li
Identifier
DOI:10.1016/j.eswa.2024.123318
Abstract
Domain missing poses a common challenge in medical clinical practice, limiting diagnostic accuracy compared to complete multi-domain images that provide complementary information. We propose QGFormer to address this issue by flexibly imputing missing domains from any available source domains using a single model, which is challenging due to (1) the inherent limitation of CNNs in capturing long-range dependencies, (2) the difficulty in modeling the inter- and intra-domain dependencies of multi-domain images, and (3) inefficiencies in fusing domain-specific features associated with missing domains. To tackle these challenges, we introduce two spatial-domanial attentions (SDAs), which establish intra-domain (spatial dimension) and inter-domain (domain dimension) dependencies independently or jointly. QGFormer, built on SDAs, comprises three components: Encoder, Decoder, and Fusion. The Encoder and Decoder form the backbone, modeling contextual dependencies to create a hierarchical representation of features. The QGFormer Fusion then adaptively aggregates these representations to synthesize the specified missing domains from coarse to fine, guided by learnable domain queries. This process is interpretable because the attention scores in Fusion indicate how much attention the target domains pay to different inputs and regions. In addition, the scalable architecture enables QGFormer to segment tumors under domain missing by replacing domain queries with segment queries. Extensive experiments demonstrate that our approach achieves consistent improvements in multi-domain imputation, cross-domain image translation, and joint synthesis and segmentation.
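The abstract describes a fusion stage in which learnable domain queries cross-attend to features aggregated from the available source domains, with the attention weights exposing which inputs and regions each target domain draws from. The following is a minimal conceptual sketch of that idea (not the authors' implementation); the module name QueryGuidedFusion, its parameters, and the tensor shapes are illustrative assumptions.

import torch
import torch.nn as nn

class QueryGuidedFusion(nn.Module):
    # Hypothetical sketch: one learnable query per target domain cross-attends
    # to tokens pooled from the available source domains.
    def __init__(self, num_domains: int, dim: int, num_heads: int = 8):
        super().__init__()
        self.domain_queries = nn.Parameter(torch.randn(num_domains, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats: torch.Tensor, target_ids: torch.Tensor):
        # feats:      (B, N, C) feature tokens from the available source domains
        # target_ids: (B, T)    indices of the missing domains to synthesize
        q = self.domain_queries[target_ids]            # (B, T, C)
        out, attn = self.cross_attn(q, feats, feats)   # attn: (B, T, N)
        # Inspecting `attn` shows how much each target domain attends to each
        # input token/region, which is the interpretability the abstract notes.
        return self.norm(out + q), attn

# Usage (shapes only): fuse features for two missing domains per sample.
fusion = QueryGuidedFusion(num_domains=4, dim=256)
feats = torch.randn(2, 1024, 256)            # B=2, N=1024 tokens, C=256
target_ids = torch.tensor([[1, 3], [0, 2]])  # missing-domain indices
fused, attn = fusion(feats, target_ids)
print(fused.shape, attn.shape)               # (2, 2, 256) (2, 2, 1024)

In the paper's coarse-to-fine scheme, such a fusion step would be applied at multiple decoder scales; the sketch shows a single scale only.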