Computer science
Context (archaeology)
Feature (linguistics)
Artificial intelligence
Modality (human-computer interaction)
Feature learning
Encoder
Segmentation
Deep learning
Feature vector
Pattern
Process (computing)
Pattern recognition (psychology)
Machine learning
Paleontology
Social science
Philosophy
Linguistics
Sociology
Biology
Operating system
Authors
Pu Huang, Dengwang Li, Zhicheng Jiao, Dongming Wei, Bing Cao, Zhanhao Mo, Qian Wang, Han Zhang, Dinggang Shen
Identifier
DOI:10.1016/j.media.2022.102472
Abstract
Multi-modal structural Magnetic Resonance Imaging (MRI) provides complementary information and is widely used for diagnosis and treatment planning of gliomas. While machine learning is commonly adopted to process and analyze MRI, most existing tools require complete sets of multi-modality images, which are costly and sometimes impossible to acquire in real clinical scenarios. In this work, we address the challenge of synthesizing multi-modality glioma MRI from incomplete sets of MRI modalities. We propose a 3D Common-feature learning-based Context-aware Generative Adversarial Network (CoCa-GAN) for this purpose. Our CoCa-GAN adopts an encoder-decoder architecture in which the encoder maps the input modalities into a common feature space, from which (1) the missing target modality(-ies) can be synthesized by the decoder, and (2) jointly conducted segmentation of the gliomas helps the synthesis task focus better on the tumor regions. The synthesis and segmentation tasks share the same common feature space, and multi-task learning boosts the performance of both. For the encoder that derives the common feature space, we propose and validate two models: (1) early-fusion CoCa-GAN (eCoCa-GAN) and (2) intermediate-fusion CoCa-GAN (iCoCa-GAN). The experimental results demonstrate that the proposed iCoCa-GAN outperforms other state-of-the-art methods in synthesizing missing image modalities. Moreover, our method handles arbitrary combinations of input/output image modalities, which makes it feasible to process brain tumor MRI data in real clinical circumstances.
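The contrast between early fusion (concatenate all modalities before one shared encoder) and intermediate fusion (encode each modality separately, then fuse the per-modality features into the common space) can be sketched in a toy NumPy example. This is a hypothetical, heavily simplified illustration, not the authors' implementation: linear maps with ReLU stand in for the 3D convolutional encoders, element-wise max stands in for the fusion operator, and the modality names, dimensions, and weight matrices are all invented for demonstration. It does show the key property from the abstract: intermediate fusion accepts any subset of input modalities, while both the synthesis and segmentation heads read from the same common feature space.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Toy 'encoder': a linear map followed by ReLU (stands in for a 3D CNN)."""
    return np.maximum(W @ x, 0.0)

# Hypothetical sizes: 4 MRI modalities, each flattened to D voxels,
# mapped into a common feature space of dimension F.
D, F = 16, 8
names = ["T1", "T1c", "T2", "FLAIR"]
modalities = {m: rng.standard_normal(D) for m in names}

# --- Early fusion: concatenate all inputs, then one shared encoder.
# This requires every modality to be present at the input.
W_early = rng.standard_normal((F, 4 * D))
x_cat = np.concatenate([modalities[m] for m in names])
z_early = encode(x_cat, W_early)

# --- Intermediate fusion: one encoder per modality, then fuse
# (here: element-wise max) over whichever modalities are available.
W_per = {m: rng.standard_normal((F, D)) for m in names}

def common_features(available):
    feats = [encode(modalities[m], W_per[m]) for m in available]
    return np.max(np.stack(feats), axis=0)

z_full = common_features(names)               # all four modalities
z_partial = common_features(["T1", "FLAIR"])  # two modalities missing

# Both tasks share the same common feature space:
W_syn = rng.standard_normal((D, F))  # decoder head for a missing modality
W_seg = rng.standard_normal((D, F))  # voxel-wise tumor segmentation logits
synthesized = W_syn @ z_partial
seg_logits = W_seg @ z_partial
```

The element-wise max makes `common_features` invariant to which and how many modalities are supplied, which is the mechanism that lets an intermediate-fusion design handle arbitrary input combinations; in the paper's full model this role is played by learned fusion within the network rather than a fixed max.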