Keywords
Computer science
Image-to-image translation
Artificial intelligence
Inference
Adversarial networks
Image (mathematics)
Medical imaging
Computer vision
Pattern recognition
Diffusion MRI
Image fidelity
Magnetic resonance imaging
Radiology
Medicine
Authors
Muzaffer Özbey, Onat Dalmaz, Salman U. H. Dar, Hasan A. Bedel, Şaban Özturk, Alper Güngör, Tolga Çukur
Source
Journal: IEEE Transactions on Medical Imaging
[Institute of Electrical and Electronics Engineers]
Date: 2023-06-28
Volume/Issue: 42 (12): 3524-3539
Citations: 140
Identifier
DOI: 10.1109/tmi.2023.3290149
Abstract
Imputation of missing images via source-to-target modality translation can improve diversity in medical imaging protocols. A pervasive approach for synthesizing target images involves one-shot mapping through generative adversarial networks (GAN). Yet, GAN models that implicitly characterize the image distribution can suffer from limited sample fidelity. Here, we propose a novel method based on adversarial diffusion modeling, SynDiff, for improved performance in medical image translation. To capture a direct correlate of the image distribution, SynDiff leverages a conditional diffusion process that progressively maps noise and source images onto the target image. For fast and accurate image sampling during inference, large diffusion steps are taken with adversarial projections in the reverse diffusion direction. To enable training on unpaired datasets, a cycle-consistent architecture is devised with coupled diffusive and non-diffusive modules that bilaterally translate between two modalities. Extensive assessments are reported on the utility of SynDiff against competing GAN and diffusion models in multi-contrast MRI and MRI-CT translation. Our demonstrations indicate that SynDiff offers quantitatively and qualitatively superior performance against competing baselines.
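The abstract describes conditional diffusion sampling with large reverse steps: noise is progressively mapped onto the target image, conditioned on the source modality, and each large reverse jump is produced by an adversarially trained network. The sketch below illustrates only that sampling loop in minimal NumPy; the names (`forward_diffuse`, `reverse_step`, `sample`) and the stand-in `generator` callable are hypothetical, and the adversarial training itself is not shown.

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Standard DDPM-style forward process: noise a clean image x0 up to step t."""
    alpha_bar = np.prod(1.0 - betas[: t + 1])
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

def reverse_step(x_t, source, t, generator):
    """One large reverse step: a conditional network (adversarially trained in
    the full model; here just a callable) maps the noisy sample x_t and the
    source-modality image to a less-noisy estimate several steps earlier."""
    return generator(x_t, source, t)

def sample(source, T, k, generator, rng):
    """Draw a target-modality image by iterating large reverse steps of size k,
    starting from pure noise and conditioning on the source image throughout."""
    x = rng.standard_normal(source.shape)
    for t in range(T - 1, -1, -k):
        x = reverse_step(x, source, t, generator)
    return x
```

With step size `k` much larger than 1, the loop runs only `T/k` network evaluations, which is the speed advantage the abstract attributes to taking large adversarially projected diffusion steps at inference time.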