Computer science
Segmentation
Artificial intelligence
Consistency (knowledge bases)
Domain (mathematical analysis)
Domain adaptation
Annotation
Process (computing)
Image segmentation
Pattern recognition (psychology)
Computer vision
Machine learning
Mathematics
Classifier (UML)
Operating system
Mathematical analysis
Authors
Chen Yang,Xiaoqing Guo,Meilu Zhu,Bulat Ibragimov,Yixuan Yuan
Source
Journal: IEEE Journal of Biomedical and Health Informatics
[Institute of Electrical and Electronics Engineers]
Date: 2021-05-04
Volume/Issue: 25 (10): 3886-3897
Citations: 25
Identifier
DOI: 10.1109/jbhi.2021.3077271
Abstract
Accurate segmentation of polyps from colonoscopy images provides useful information for the diagnosis and treatment of colorectal cancer. Although deep learning methods have advanced automatic polyp segmentation, their performance often degrades when applied to new data acquired from different scanners or sequences (target domain). As manual annotation is tedious and labor-intensive for a new target domain, leveraging knowledge learned from the labeled source domain to improve performance in the unlabeled target domain is in high demand. In this work, we propose a mutual-prototype adaptation network to eliminate domain shifts in multi-center and multi-device colonoscopy images. We first devise a mutual-prototype alignment (MPA) module with a prototype relation function to refine features through self-domain and cross-domain information in a coarse-to-fine process. Two auxiliary modules, progressive self-training (PST) and disentangled reconstruction (DR), are then proposed to improve segmentation performance. The PST module selects reliable pseudo labels through a novel uncertainty-guided self-training loss to obtain accurate prototypes in the target domain. The DR module reconstructs the original images by jointly utilizing prediction results and private prototypes to maintain semantic consistency and provide complementary supervision. We extensively evaluate the polyp segmentation performance of the proposed model on three conventional colonoscopy datasets: CVC-DB, Kvasir-SEG, and ETIS-Larib. The comprehensive experimental results demonstrate that the proposed model outperforms state-of-the-art methods.
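The abstract's PST module selects reliable pseudo labels for target-domain images using an uncertainty-guided criterion. The paper's exact loss is not given here, so the sketch below only illustrates the general idea under assumed details: per-pixel binary entropy as the uncertainty estimate and a hypothetical confidence threshold `tau` that masks out unreliable pixels before they enter a self-training loss.

```python
import numpy as np

def select_pseudo_labels(probs, tau=0.7):
    """Illustrative uncertainty-guided pseudo-label selection.

    probs: (N, H, W) array of predicted foreground probabilities
           on unlabeled target-domain images.
    Returns hard pseudo labels and a boolean mask that keeps only
    low-uncertainty pixels for the self-training loss.
    """
    eps = 1e-8
    # Binary entropy as a simple per-pixel uncertainty estimate
    # (an assumption; the paper may use a different measure).
    entropy = -(probs * np.log(probs + eps)
                + (1.0 - probs) * np.log(1.0 - probs + eps))
    pseudo = (probs > 0.5).astype(np.uint8)
    # Keep pixels whose confidence (max class probability) reaches tau;
    # the remaining pixels are ignored when computing the loss.
    confident = np.maximum(probs, 1.0 - probs) >= tau
    return pseudo, confident

# Example: only clearly foreground/background pixels survive the mask.
probs = np.array([[[0.95, 0.50],
                   [0.10, 0.60]]])
pseudo, mask = select_pseudo_labels(probs, tau=0.7)
```

In this toy example the pixels with probabilities 0.95 and 0.10 pass the threshold, while the ambiguous 0.50 and 0.60 pixels are masked out, mirroring how filtering uncertain predictions yields cleaner target-domain prototypes.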