Keywords: Image Translation, Diffusion, Computer Science, Artificial Intelligence, Computer Vision
Authors
Yu-Wen Chen, Nicholas Konz, Hanxue Gu, Haoyu Dong, Yaqian Chen, Li Lin, Sang Hoon Lee, Maciej A. Mazurowski
Source
Journal: Cornell University - arXiv
Date: 2024-03-15
Identifier
DOI: 10.48550/arxiv.2403.10786
Abstract
Accurately translating medical images across different modalities (e.g., CT to MRI) has numerous downstream clinical and machine learning applications. While several methods have been proposed to achieve this, they often prioritize perceptual quality with respect to output domain features over preserving anatomical fidelity. However, maintaining anatomy during translation is essential for many tasks, e.g., when leveraging masks from the input domain to develop a segmentation model with images translated to the output domain. To address these challenges, we propose ContourDiff, a novel framework that leverages domain-invariant anatomical contour representations of images. These representations are simple to extract from images, yet form precise spatial constraints on their anatomical content. We introduce a diffusion model that converts contour representations of images from arbitrary input domains into images in the output domain of interest. By applying the contour as a constraint at every diffusion sampling step, we ensure the preservation of anatomical content. We evaluate our method by training a segmentation model on images translated from CT to MRI with their original CT masks and testing its performance on real MRIs. Our method outperforms other unpaired image translation methods by a significant margin, and does so without requiring access to any input-domain information during training.