A novel one-to-multiple unsupervised domain adaptation framework for abdominal organ segmentation

Keywords: Computer science, Segmentation, Artificial intelligence, Pattern recognition (psychology), Consistency (knowledge bases), Domain (mathematical analysis), Similarity (geometry), Image (mathematics), Mathematics, Mathematical analysis
Authors
Xiaowei Xu,Yinan Chen,Jianghao Wu,Jiangshan Lu,Yuxiang Ye,Yechong Huang,Xin Dou,Kang Li,Guotai Wang,Shaoting Zhang,Wei Gong
Source
Journal: Medical Image Analysis [Elsevier]
Volume 88, Article 102873; cited by 11
Identifier
DOI:10.1016/j.media.2023.102873
Abstract

Abdominal multi-organ segmentation in multi-sequence magnetic resonance images (MRI) is of great significance in many clinical scenarios, e.g., MRI-oriented pre-operative treatment planning. Labeling multiple organs on a single MR sequence is a time-consuming and labor-intensive task, let alone manual labeling on multiple MR sequences. Training a model on one sequence and generalizing it to other domains is one way to reduce the annotation burden, but the domain gap often leads to poor generalization performance. Image translation-based unsupervised domain adaptation (UDA) is a common way to address this domain gap. However, existing methods pay little attention to preserving anatomical consistency and are limited to one-to-one domain adaptation, making it inefficient to adapt a model to multiple target domains. This work proposes a unified framework called OMUDA for one-to-multiple unsupervised domain-adaptive segmentation, in which disentanglement between content and style is used to efficiently translate a source-domain image into multiple target domains. Moreover, generator refactoring and a style constraint are introduced in OMUDA to better maintain cross-modality structural consistency and reduce domain aliasing. The average Dice Similarity Coefficients (DSCs) of OMUDA for multiple sequences and organs on the in-house test set, the AMOS22 dataset and the CHAOS dataset are 85.51%, 82.66% and 91.38%, respectively, which are slightly lower than those of CycleGAN (85.66% and 83.40%) on the first two datasets and slightly higher than CycleGAN (91.36%) on the last dataset. Compared with CycleGAN, however, OMUDA reduces floating-point operations by about 87% in the training phase and about 30% in the inference phase. The quantitative results in both segmentation performance and training efficiency demonstrate the usability of OMUDA in practical scenarios such as the initial phase of product development.
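The one-to-multiple translation described above hinges on disentangling anatomical content from sequence-specific style, so that a single generator can render one source image in the appearance of several target sequences. Below is a minimal, hypothetical PyTorch sketch of that idea; the class names (ContentEncoder, StyleDecoder), the AdaIN-style conditioning, and the learned per-domain style codes are illustrative assumptions, not the actual OMUDA architecture.

# Minimal sketch of one-to-multiple image translation via content-style
# disentanglement. All names and design choices here are assumptions made
# for illustration; they are not taken from the paper.
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Extracts a domain-invariant (anatomical) content map."""
    def __init__(self, in_ch=1, feat=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.InstanceNorm2d(feat), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.InstanceNorm2d(feat), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class StyleDecoder(nn.Module):
    """Re-renders content with a target-domain style code (AdaIN-like scale/shift)."""
    def __init__(self, feat=32, style_dim=8, out_ch=1):
        super().__init__()
        self.affine = nn.Linear(style_dim, feat * 2)  # per-channel scale and shift
        self.out = nn.Conv2d(feat, out_ch, 3, padding=1)
    def forward(self, content, style):
        gamma, beta = self.affine(style).chunk(2, dim=1)
        h = content * (1 + gamma[:, :, None, None]) + beta[:, :, None, None]
        return torch.tanh(self.out(torch.relu(h)))

# One encoder and one decoder are shared across all targets; each target MR
# sequence only contributes a small learned style code, so adding a new target
# domain does not add a new generator (the source of the efficiency gain).
num_targets = 3
enc, dec = ContentEncoder(), StyleDecoder()
style_codes = nn.Parameter(torch.randn(num_targets, 8))  # learned per-domain codes

src = torch.randn(2, 1, 64, 64)                 # a batch of source-domain slices
content = enc(src)                              # anatomy, shared by all translations
fakes = [dec(content, style_codes[k].expand(src.size(0), -1))
         for k in range(num_targets)]           # one translated batch per target domain
print([f.shape for f in fakes])

In this sketch the content map is computed once and reused for every target, which is why the cost of translating to additional domains grows only with the lightweight decoding step rather than with a full per-pair generator as in CycleGAN.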