A novel one-to-multiple unsupervised domain adaptation framework for abdominal organ segmentation

Computer science, Segmentation, Artificial intelligence, Pattern recognition (psychology), Consistency (knowledge bases), Domain (mathematical analysis), Similarity (geometry), Image (mathematics), Mathematics, Mathematical analysis
Authors
Xiaowei Xu, Yinan Chen, Jianghao Wu, Jiangshan Lu, Yuxiang Ye, Yechong Huang, Xin Dou, Kang Li, Guotai Wang, Shaoting Zhang, Wei Gong
Source
Journal: Medical Image Analysis [Elsevier BV]
Volume/Issue: 88: 102873-102873 | Citations: 11
Identifiers
DOI: 10.1016/j.media.2023.102873
Abstract

Abdominal multi-organ segmentation in multi-sequence magnetic resonance images (MRI) is of great significance in many clinical scenarios, e.g., MRI-oriented pre-operative treatment planning. Labeling multiple organs on a single MR sequence is a time-consuming and labor-intensive task, let alone manual labeling on multiple MR sequences. Training a model on one sequence and generalizing it to other domains is one way to reduce the annotation burden, but the domain gap often leads to poor generalization performance of such methods. Image translation-based unsupervised domain adaptation (UDA) is a common way to address this domain gap issue. However, existing methods pay less attention to preserving anatomical consistency and are limited to one-to-one domain adaptation, leading to low efficiency when adapting a model to multiple target domains. This work proposes a unified framework called OMUDA for one-to-multiple unsupervised domain-adaptive segmentation, in which disentanglement between content and style is used to efficiently translate a source-domain image into multiple target domains. Moreover, generator refactoring and a style constraint are employed in OMUDA to better maintain cross-modality structural consistency and reduce domain aliasing. The average Dice Similarity Coefficients (DSCs) of OMUDA for multiple sequences and organs on the in-house test set, the AMOS22 dataset and the CHAOS dataset are 85.51%, 82.66% and 91.38%, respectively, which are slightly lower than those of CycleGAN (85.66% and 83.40%) on the first two datasets and slightly higher than CycleGAN (91.36%) on the last dataset. However, compared with CycleGAN, OMUDA reduces floating-point operations by about 87% in the training phase and about 30% in the inference stage. The quantitative results in both segmentation performance and training efficiency demonstrate the usability of OMUDA in practical scenarios, such as the initial phase of product development.
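
The abstract describes two technical ingredients that a short sketch can make concrete: content-style disentanglement for translating one source image into several target domains, and the Dice Similarity Coefficient (DSC) used for evaluation. The Python/PyTorch sketch below is an illustration only, not the authors' OMUDA implementation: the module names, layer sizes, style-embedding dimension and number of target domains are all assumptions chosen to show the one-to-multiple idea (one content encoding pass, several style-conditioned decodings), together with the standard DSC formula.

# Illustrative sketch only: a minimal content/style-disentangled translator and the
# Dice Similarity Coefficient (DSC). Module names, layer sizes and the number of
# target domains are hypothetical and do not reproduce the OMUDA architecture.
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Extracts a domain-invariant content feature map from a source MR slice."""
    def __init__(self, in_ch=1, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.net(x)

class StyleConditionedDecoder(nn.Module):
    """Decodes content features back to image space, conditioned on a learned
    per-target-domain style embedding (one embedding per MR sequence)."""
    def __init__(self, feat_ch=64, out_ch=1, num_domains=3, style_dim=8):
        super().__init__()
        self.styles = nn.Embedding(num_domains, style_dim)
        self.fuse = nn.Conv2d(feat_ch + style_dim, feat_ch, 3, padding=1)
        self.out = nn.Conv2d(feat_ch, out_ch, 3, padding=1)
    def forward(self, content, domain_idx):
        s = self.styles(domain_idx)                          # (B, style_dim)
        s = s[:, :, None, None].expand(-1, -1, *content.shape[2:])
        h = torch.relu(self.fuse(torch.cat([content, s], dim=1)))
        return torch.tanh(self.out(h))

def dice_score(pred, target, eps=1e-6):
    """Dice Similarity Coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred, target = pred.float().flatten(), target.float().flatten()
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

# One source image -> multiple target-domain translations from a single content pass.
enc, dec = ContentEncoder(), StyleConditionedDecoder(num_domains=3)
x = torch.randn(1, 1, 64, 64)                 # a toy single-channel MR slice
content = enc(x)
translations = [dec(content, torch.tensor([d])) for d in range(3)]
print([t.shape for t in translations])

Because the content encoder is shared and only the lightweight style conditioning changes per target domain, this style of design encodes each source image once rather than running a separate full generator per domain, which is consistent with the abstract's reported savings in floating-point operations relative to per-pair CycleGAN translation.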