Computer science
Modality (human–computer interaction)
Artificial intelligence
Normalization (sociology)
Pattern recognition (psychology)
Feature (linguistics)
Encoder
Affine transformation
Image (mathematics)
Consistency (knowledge base)
Mathematics
Operating system
Anthropology
Philosophy
Sociology
Linguistics
Pure mathematics
Authors
Bo Zhan,Luping Zhou,Zhiang Li,Xi Wu,Yi‐Fei Pu,Jiliu Zhou,Yan Wang,Dinggang Shen
Identifier
DOI:10.1016/j.knosys.2022.109362
Abstract
Magnetic resonance imaging (MRI) can generate various tissue contrasts by using different pulse sequences and parameters. However, obtaining multiple contrast images for the same patient is often time-consuming and costly. In this paper, we propose a novel generative adversarial network based on decoupled dual feature representations (D2FE-GAN) for cross-modality MRI synthesis. Inspired by previous work on image style transfer, we argue that MRI images can be viewed as a compound of underlying information shared across modalities (e.g., semantic information) and representative information that varies with the style of each modality (e.g., edges, contrasts). Unlike existing GAN-based methods that attend to either body consistency or style refinement, the proposed D2FE-GAN considers both aspects for better synthesis. Specifically, our method decouples the underlying information and the representative information from the source and target modalities, respectively, through two dissimilar encoders. Because the target modality is unavailable in the testing phase, we first employ a Residual Network to generate an intermediate modality as a pseudo target modality. The two kinds of decoupled information are then integrated through a decoder. Here, we introduce an Adaptive Instance Normalization layer in which the affine parameters are replaced by the mean and standard deviation of the representative information, completing the fusion of feature-space information. Experimental results on the BRATS2015 and IXI datasets show that the proposed method outperforms state-of-the-art image synthesis approaches in both qualitative and quantitative measures.
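The fusion step described above follows the standard Adaptive Instance Normalization (AdaIN) formulation: the content ("underlying") features are normalized per channel, and the affine parameters are replaced by the per-channel mean and standard deviation of the style ("representative") features. A minimal NumPy sketch of that operation (not the paper's exact implementation; function and variable names are illustrative):

```python
import numpy as np

def adaptive_instance_norm(content, style, eps=1e-5):
    """AdaIN fusion sketch: normalize content features per channel,
    then re-scale and re-shift them with the style features' statistics.

    content, style: arrays of shape (C, H, W) — feature maps from the
    two encoders (hypothetical shapes, assumed for illustration).
    """
    # Per-channel statistics over the spatial dimensions.
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)

    # Instance-normalize the content, then apply the style's
    # mean/std as the affine (scale/shift) parameters.
    normalized = (content - c_mean) / (c_std + eps)
    return normalized * s_std + s_mean
```

After this step each output channel carries the style features' mean and standard deviation while preserving the content features' spatial layout, which is what lets the decoder combine the two decoupled representations.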