Generative adversarial network
Pattern recognition (psychology)
Adversarial system
Deep learning
Voxel
Segmentation
Human Connectome Project
Computer vision
Artificial neural network
Iterative reconstruction
Discriminator
Neuroimaging
Convolutional neural network
Medical imaging
Human brain
Authors
Shengye Hu, Baiying Lei, Shuqiang Wang, Yong Wang, Zhiguang Feng, Yanyan Shen
Source
Journal: IEEE Transactions on Medical Imaging
Publisher: Institute of Electrical and Electronics Engineers
Date: 2021-08-24
Volume/Issue: 1-1
Cited by: 6
Identifiers
DOI: 10.1109/tmi.2021.3107013
Abstract
Fusing multi-modality medical images, such as magnetic resonance (MR) imaging and positron emission tomography (PET), can provide various anatomical and functional information about the human body. However, PET data is not always available, owing to high cost, radiation hazards, and other limitations. This paper proposes a 3D end-to-end synthesis network called Bidirectional Mapping Generative Adversarial Networks (BMGAN), in which image contexts and latent vectors are effectively used for brain MR-to-PET synthesis. Specifically, a bidirectional mapping mechanism is designed to embed the semantic information of PET images into a high-dimensional latent space. Moreover, a 3D Dense-UNet generator architecture and hybrid loss functions are constructed to improve the visual quality of the cross-modality synthetic images. Notably, the proposed method can synthesize perceptually realistic PET images while preserving the diverse brain structures of different subjects. Experimental results demonstrate that the proposed method outperforms other competitive methods in terms of quantitative measures, qualitative comparisons, and downstream classification metrics.
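To make the abstract's description of the hybrid objective more concrete, the PyTorch-style sketch below combines an adversarial term, a voxel-wise reconstruction term, and a latent-consistency term standing in for the bidirectional mapping. The networks G, D, and E, the least-squares GAN form, and the loss weights are illustrative assumptions, not the authors' published formulation, which the abstract does not specify.

```python
# A minimal sketch of a hybrid generator objective for MR-to-PET synthesis
# with a bidirectional-mapping (latent-consistency) term.
# G (generator), D (discriminator), E (encoder), and the lambda_* weights
# are illustrative assumptions; the paper's exact losses and 3D Dense-UNet
# architecture are not given in the abstract.
import torch
import torch.nn.functional as F


def generator_loss(G, D, E, mr, pet,
                   lambda_adv=1.0, lambda_rec=100.0, lambda_lat=10.0):
    """Combine adversarial, voxel-wise reconstruction, and latent terms."""
    fake_pet = G(mr)                                  # synthetic PET volume

    # Adversarial term (least-squares GAN form, an assumption here):
    # push the discriminator's score on synthetic PET toward "real".
    pred_fake = D(fake_pet)
    adv = F.mse_loss(pred_fake, torch.ones_like(pred_fake))

    # Voxel-wise L1 reconstruction keeps subject-specific brain structure.
    rec = F.l1_loss(fake_pet, pet)

    # Bidirectional mapping: encode real and synthetic PET back into the
    # latent space and encourage the embeddings to agree.
    lat = F.l1_loss(E(fake_pet), E(pet))

    return lambda_adv * adv + lambda_rec * rec + lambda_lat * lat
```

In a full training loop, this objective would be minimized alternately with a standard discriminator loss over real and synthetic PET volumes.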