Covariate
Adversarial
Modality
Segmentation
Computer science
Artificial intelligence
Machine learning
Chemistry
Polymer chemistry
Authors
Savaş Özkan,M. Alper Selver,Bora Baydar,Ali Emre Kavur,Cemre Candemir,Gözde Bozdağı Akar
Source
Journal: IEEE Transactions on Emerging Topics in Computational Intelligence
[Institute of Electrical and Electronics Engineers]
Date: 2024-03-08
Volume/Issue: 8 (4): 2723-2735
Cited by: 1
Identifier
DOI: 10.1109/tetci.2024.3369868
Abstract
Despite the widespread use of deep learning methods for semantic segmentation from single imaging modalities, their performance in exploiting multi-domain data still needs to improve. Yet the decision-making process in radiology is often guided by data from multiple sources, as in the pre-operative evaluation of living donor liver transplantation donors. In such cases, the cross-modality performance of deep models becomes more important. Unfortunately, the domain dependency of existing techniques limits their clinical acceptability, primarily confining their performance to individual domains. This issue is further formulated as a multi-source domain adaptation problem, an emerging field driven mainly by the diverse pattern characteristics exhibited by cross-modality data. This paper presents a novel method that learns robust representations from unpaired cross-modal (CT-MR) data by encapsulating distinct and shared patterns from multiple modalities. In our solution, the covariate shift property is maintained through structural modifications to the architecture. In addition, an adversarial loss is adopted to boost the representation capacity, yielding sparse and rich representations. A further advantage of our model is that no information about the modalities is needed at either the training or inference phase. Tests on unpaired CT and MR liver data from the cross-modality task of the CHAOS grand challenge demonstrate that our approach achieves state-of-the-art results by a large margin in both individual metrics and overall scores.
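To make the adversarial component described in the abstract concrete, the following is a minimal, hypothetical sketch of one common way an adversarial loss can regularize latent representations, in the spirit of adversarial autoencoders: a critic pushes encoder codes toward a heavy-tailed (Laplace) prior, which encourages sparse codes without consuming any modality labels, consistent with the abstract's claim that no modality information is used. The encoder, critic, and prior choice here are illustrative assumptions, not the authors' published architecture.

```python
# Illustrative sketch only: an adversarial regularizer on latent codes
# (adversarial-autoencoder style). All module names and the Laplace prior
# are assumptions; this is NOT the paper's exact method.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Toy shared encoder mapping a 1-channel slice to a latent code."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim),
        )
    def forward(self, x):
        return self.net(x)

class Critic(nn.Module):
    """Discriminates prior samples from encoder codes."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, z):
        return self.net(z)

enc, critic = Encoder(), Critic()
opt_e = torch.optim.Adam(enc.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

x = torch.randn(8, 1, 64, 64)                  # mixed, unlabeled CT/MR batch
prior = torch.distributions.Laplace(0.0, 1.0)  # heavy-tailed prior -> sparse codes

# 1) Critic step: separate prior samples (label 1) from encoder codes (label 0).
z = enc(x).detach()
z_prior = prior.sample(z.shape)
loss_c = bce(critic(z_prior), torch.ones(len(z), 1)) + \
         bce(critic(z), torch.zeros(len(z), 1))
opt_c.zero_grad(); loss_c.backward(); opt_c.step()

# 2) Encoder step: fool the critic, pulling the code distribution to the prior.
loss_e = bce(critic(enc(x)), torch.ones(len(x), 1))
opt_e.zero_grad(); loss_e.backward(); opt_e.step()
```

In such a setup the segmentation loss would be trained alongside this adversarial term; the heavy-tailed prior is one plausible route to the "sparse and rich representations" the abstract mentions, but the paper itself should be consulted for the actual mechanism.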