
Cross-Modal Learning via Adversarial Loss and Covariate Shift for Enhanced Liver Segmentation

Authors
Savaş Özkan,M. Alper Selver,Bora Baydar,Ali Emre Kavur,Cemre Candemir,Gözde Bozdağı Akar
Source
Journal: IEEE Transactions on Emerging Topics in Computational Intelligence (Institute of Electrical and Electronics Engineers)
Volume/Issue: 8 (4), pp. 2723-2735 · Cited by: 1
Identifier
DOI: 10.1109/tetci.2024.3369868
Abstract

Despite the widespread use of deep learning methods for semantic segmentation of single imaging modalities, their ability to exploit multi-domain data remains limited. Yet decision-making in radiology is often guided by data from multiple sources, such as the pre-operative evaluation of donors for living donor liver transplantation. In such cases, the cross-modality performance of deep models becomes critical. Unfortunately, the domain dependency of existing techniques limits their clinical acceptability, confining their performance to individual domains. This issue can be formulated as a multi-source domain adaptation problem, an emerging field driven by the diverse pattern characteristics of cross-modality data. This paper presents a novel method that learns robust representations from unpaired cross-modal (CT-MR) data by encapsulating distinct and shared patterns from multiple modalities. In our solution, the covariate shift property is maintained through structural modifications to the architecture. In addition, an adversarial loss is adopted to boost the representation capacity, yielding sparse and rich representations. A further advantage of our model is that no information about the modalities is needed at either the training or inference phase. Tests on unpaired CT and MR liver data from the cross-modality task of the CHAOS grand challenge demonstrate that our approach achieves state-of-the-art results by a large margin, in both individual metrics and overall scores.
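To make the adversarial idea in the abstract concrete, here is a minimal hypothetical sketch (not the authors' implementation): a modality discriminator D receives encoder features and predicts the source modality (CT vs. MR), while the encoder is trained against flipped labels so that D cannot tell the modalities apart, pushing the shared representation toward modality invariance. All function names here are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator_loss(logits, is_mr):
    """Binary cross-entropy for the modality discriminator.

    logits : raw scores produced by D for a batch of feature vectors
    is_mr  : 1 where the features came from MR, 0 where they came from CT
    """
    p = sigmoid(logits)
    eps = 1e-12  # numerical floor to avoid log(0)
    return -np.mean(is_mr * np.log(p + eps) + (1 - is_mr) * np.log(1 - p + eps))

def encoder_adversarial_loss(logits, is_mr):
    """Adversarial term for the encoder: the same BCE with flipped labels,
    so minimizing it means fooling the discriminator."""
    return discriminator_loss(logits, 1 - is_mr)

# With uninformative logits (all zeros) the discriminator is at chance,
# so both losses equal log(2) regardless of the labels.
logits = np.zeros(4)
labels = np.array([0, 1, 0, 1])
print(round(discriminator_loss(logits, labels), 4))        # → 0.6931
print(round(encoder_adversarial_loss(logits, labels), 4))  # → 0.6931
```

In practice the two losses are minimized alternately (or via a gradient-reversal layer), which is one standard way such an adversarial term is trained; the paper's specific architecture and schedule are not reproduced here.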