Computer science
Artificial intelligence
Segmentation
Image segmentation
Domain (mathematical analysis)
Adversarial system
Class (philosophy)
Pattern recognition (psychology)
Computer vision
Image (mathematics)
Domain adaptation
Adaptation (eye)
Dual (grammatical number)
Matching (statistics)
Machine learning
Mathematics
Art
Mathematical analysis
Literature
Physics
Optics
Statistics
Classifier (UML)
Authors
Xu Chen,Tianshu Kuang,Han Deng,Steve H. Fung,Jaime Gateño,James J. Xia,Pew‐Thian Yap
Identifier
DOI: 10.1109/tmi.2022.3186698
Abstract
Domain adaptation techniques have been demonstrated to be effective in addressing label deficiency challenges in medical image segmentation. However, conventional domain-adaptation-based approaches often concentrate on matching global marginal distributions between different domains in a class-agnostic fashion. In this paper, we present a dual-attention domain-adaptive segmentation network (DADASeg-Net) for cross-modality medical image segmentation. The key contribution of DADASeg-Net is a novel dual adversarial attention mechanism, which regularizes the domain adaptation module with two attention maps, one from the spatial perspective and one from the class perspective. Specifically, the spatial attention map guides the domain adaptation module to focus on regions that are challenging to align during adaptation. The class attention map encourages the domain adaptation module to capture class-specific, rather than class-agnostic, knowledge for distribution alignment. DADASeg-Net shows superior performance on two challenging medical image segmentation tasks.
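To make the dual-attention idea in the abstract concrete, here is a minimal, hypothetical PyTorch-style sketch of re-weighting an adversarial alignment loss with a spatial attention map and a class attention map. This is not the authors' implementation: the function name, the discriminator interface, and the specific choices of prediction entropy (spatial) and predicted-class confidence (class) as attention maps are assumptions made purely for illustration.

```python
# Minimal sketch (NOT the paper's code): dual attention-weighted adversarial alignment.
# Assumptions: the segmenter outputs per-pixel class logits for the target image, and
# a patch discriminator scores feature maps per spatial location (1 = source-like).
import torch
import torch.nn.functional as F

def dual_attention_adv_loss(feat_tgt, logits_tgt, discriminator):
    """Hypothetical helper combining spatial and class attention with an adversarial loss."""
    d_tgt = discriminator(feat_tgt)                      # (B, 1, H, W) logits

    # Spatial attention: prediction entropy highlights regions that are hard to align.
    prob_tgt = F.softmax(logits_tgt, dim=1)              # (B, C, H, W)
    entropy = -(prob_tgt * torch.log(prob_tgt + 1e-8)).sum(dim=1, keepdim=True)
    spatial_att = entropy / entropy.amax(dim=(2, 3), keepdim=True).clamp(min=1e-8)

    # Class attention: confidence of the predicted class keeps the alignment
    # class-specific rather than class-agnostic.
    class_att, _ = prob_tgt.max(dim=1, keepdim=True)     # (B, 1, H, W)

    # Adversarial loss for the segmenter: make target features look source-like,
    # with each pixel's contribution re-weighted by the two attention maps.
    adv = F.binary_cross_entropy_with_logits(
        d_tgt, torch.ones_like(d_tgt), reduction="none")
    return (spatial_att * class_att * adv).mean()
```

In this sketch the two maps act as multiplicative weights on a per-pixel adversarial loss, so hard-to-align and confidently classified regions dominate the alignment signal; the actual mechanism in the paper may differ.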