Computer science
Segmentation
Classifier (UML)
Artificial intelligence
Source code
Domain adaptation
Domain (mathematical analysis)
Feature (linguistics)
Adaptation (eye)
Pattern recognition (psychology)
Pixel
Image segmentation
Transfer learning
Labeled data
Computer vision
Programming language
Mathematical analysis
Philosophy
Physics
Optics
Linguistics
Mathematics
Authors
Qinji Yu,Nan Xi,Junsong Yuan,Ziyu Zhou,Kang Dang,Xiaowei Ding
Identifier
DOI:10.1007/978-3-031-43990-2_1
Abstract
Unsupervised domain adaptation (UDA) has increasingly gained interest for its capacity to transfer knowledge learned from a labeled source domain to an unlabeled target domain. However, typical UDA methods require concurrent access to both source and target domain data, which largely limits their applicability in medical scenarios where source data is often unavailable due to privacy concerns. To tackle this source-data-absent setting, we present a novel two-stage source-free domain adaptation (SFDA) framework for medical image segmentation, where only a well-trained source segmentation model and unlabeled target data are available during domain adaptation. Specifically, in the prototype-anchored feature alignment stage, we first utilize the weights of the pre-trained pixel-wise classifier as source prototypes, which preserve the information of the source features. Then, we introduce a bi-directional transport to align the target features with the class prototypes by minimizing the expected transport cost. On top of that, a contrastive learning stage is further devised to exploit pixels with unreliable predictions, yielding a more compact target feature distribution. Extensive experiments on a cross-modality medical segmentation task demonstrate the superiority of our method in large domain discrepancy settings compared with state-of-the-art SFDA approaches and even some UDA methods. Code is available at: https://github.com/CSCYQJ/MICCAI23-ProtoContra-SFDA .
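The sketch below illustrates the general idea of the first stage described in the abstract: taking the weights of a pre-trained pixel-wise (1×1 conv) classifier as class prototypes and aligning target pixel features to them with a bi-directional, transport-style cost. This is a minimal illustration under assumed interfaces (the feature/prototype shapes, the cosine cost, the softmax temperature, and the layer name `source_model.classifier` are all assumptions), not the authors' exact formulation from the paper or repository.

```python
import torch
import torch.nn.functional as F

def prototype_alignment_loss(feats: torch.Tensor,
                             prototypes: torch.Tensor,
                             temperature: float = 0.1) -> torch.Tensor:
    """Hypothetical prototype-anchored alignment loss.

    feats:      (N, D) target pixel features (N pixels sampled from target images)
    prototypes: (C, D) class prototypes, e.g. the weights of a pre-trained
                1x1-conv pixel-wise classifier (one row per class)
    Returns a bi-directional transport-style loss: a cosine cost weighted by
    soft assignments in both directions (pixels -> prototypes and
    prototypes -> pixels).
    """
    feats = F.normalize(feats, dim=1)        # (N, D)
    protos = F.normalize(prototypes, dim=1)  # (C, D)

    sim = feats @ protos.t()                 # (N, C) cosine similarity
    cost = 1.0 - sim                         # (N, C) transport cost

    # pixel -> prototype: each pixel spreads its mass over the classes
    p_f2p = F.softmax(sim / temperature, dim=1)   # rows sum to 1
    loss_f2p = (p_f2p * cost).sum(dim=1).mean()

    # prototype -> pixel: each class spreads its mass over the pixels
    p_p2f = F.softmax(sim / temperature, dim=0)   # columns sum to 1
    loss_p2f = (p_p2f * cost).sum(dim=0).mean()

    return loss_f2p + loss_p2f


# Example usage (names are placeholders for a source-free adaptation loop):
# protos = source_model.classifier.weight.detach().flatten(1)   # (C, D) from a 1x1 conv
# feats  = target_features.permute(0, 2, 3, 1).reshape(-1, D)   # (N, D) pixel features
# loss   = prototype_alignment_loss(feats, protos)
```

The second stage mentioned in the abstract (contrastive learning over pixels with unreliable predictions) is not sketched here; conceptually, low-confidence pixels would serve as additional negatives to tighten the target feature distribution, as detailed in the paper and repository.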