Computer science
Artificial intelligence
Image synthesis
Noise (video)
Consistency (knowledge bases)
Transformation (genetics)
Nonlinear system
Pixel
Image (mathematics)
Pattern recognition (psychology)
Contrast (vision)
Adversarial system
Computer vision
Physics
Gene
Chemistry
Quantum mechanics
Biochemistry
Authors
Salman Ul Hassan Dar,Mahmut Yurt,Levent Karacan,Aykut Erdem,Erkut Erdem,Tolga Çukur
Source
Journal: IEEE Transactions on Medical Imaging
[Institute of Electrical and Electronics Engineers]
Date: 2019-02-26
Volume/Issue: 38 (10): 2375-2388
Citations: 425
Identifier
DOI: 10.1109/tmi.2019.2901750
Abstract
Acquiring images of the same anatomy with multiple different contrasts increases the diversity of diagnostic information available in an MR exam. Yet, scan-time limitations may prohibit the acquisition of certain contrasts, and some contrasts may be corrupted by noise and artifacts. In such cases, the ability to synthesize unacquired or corrupted contrasts can improve diagnostic utility. For multi-contrast synthesis, current methods learn a nonlinear intensity transformation between the source and target images, either via nonlinear regression or deterministic neural networks. These methods can, in turn, suffer from loss of structural details in the synthesized images. In this paper, we propose a new approach for multi-contrast MRI synthesis based on conditional generative adversarial networks. The proposed approach preserves intermediate-to-high frequency details via an adversarial loss, and it offers enhanced synthesis performance via pixel-wise and perceptual losses for registered multi-contrast images and a cycle-consistency loss for unregistered images. Information from neighboring cross-sections is utilized to further improve synthesis quality. Demonstrations on T1- and T2-weighted images from healthy subjects and patients clearly indicate the superior performance of the proposed approach compared to previous state-of-the-art methods. Our synthesis approach can help improve the quality and versatility of multi-contrast MRI exams without the need for prolonged or repeated examinations.
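The sketch below illustrates, in PyTorch, the kind of composite training objective the abstract describes: an adversarial term combined with pixel-wise and perceptual losses for spatially registered source/target contrasts, and a cycle-consistency term for unregistered contrasts. It is a minimal illustration only; the generator/discriminator architectures, the loss weights, and the choice of a VGG16 feature layer for the perceptual term are assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a cGAN-style multi-contrast synthesis objective.
# Assumptions (not from the paper): BCE adversarial loss, L1 pixel loss,
# VGG16-based perceptual loss, and the lambda weights shown below.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights


class PerceptualLoss(nn.Module):
    """L1 distance between frozen VGG16 feature maps (assumed perceptual metric)."""
    def __init__(self):
        super().__init__()
        feats = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
        for p in feats.parameters():
            p.requires_grad = False
        self.feats = feats

    def forward(self, x, y):
        # Single-channel MR slices are replicated to 3 channels for VGG16.
        return F.l1_loss(self.feats(x.repeat(1, 3, 1, 1)),
                         self.feats(y.repeat(1, 3, 1, 1)))


def registered_g_loss(G, D, src, tgt, perc, lam_pix=100.0, lam_perc=10.0):
    """Generator loss for registered pairs: adversarial + pixel-wise + perceptual."""
    fake = G(src)
    pred = D(torch.cat([src, fake], dim=1))        # conditional discriminator
    adv = F.binary_cross_entropy_with_logits(pred, torch.ones_like(pred))
    pix = F.l1_loss(fake, tgt)                     # pixel-wise fidelity
    return adv + lam_pix * pix + lam_perc * perc(fake, tgt)


def unregistered_g_loss(G_st, G_ts, D_t, D_s, src, tgt, lam_cyc=10.0):
    """Generator loss for unregistered pairs: adversarial + cycle-consistency."""
    fake_t, fake_s = G_st(src), G_ts(tgt)
    pred_t, pred_s = D_t(fake_t), D_s(fake_s)
    adv = (F.binary_cross_entropy_with_logits(pred_t, torch.ones_like(pred_t)) +
           F.binary_cross_entropy_with_logits(pred_s, torch.ones_like(pred_s)))
    cyc = F.l1_loss(G_ts(fake_t), src) + F.l1_loss(G_st(fake_s), tgt)
    return adv + lam_cyc * cyc
```

In this reading, the pixel-wise and perceptual terms require spatial correspondence between source and target and so apply only to registered image pairs, whereas the cycle-consistency term lets unregistered pairs supervise each other without pixel-level alignment.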