Segmentation
Fusion
Joint (building)
Artificial intelligence
Task (project management)
Computer science
Medicine
Biomedical engineering
Engineering
Structural engineering
Systems engineering
Linguistics
Philosophy
Authors
Yiwen Zhang, Liming Zhong, Hai Shu, Zhenhui Dai, Kaiyi Zheng, Zefeiyun Chen, Qianjin Feng, Xuetao Wang, Wei Yang
Source
Journal: IEEE Transactions on Artificial Intelligence
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2022-06-30
Volume/Issue: 4 (5): 1246-1257
Citations: 6
Identifiers
DOI: 10.1109/TAI.2022.3187388
Abstract
The synthesis of computed tomography (CT) images from magnetic resonance (MR) images and the segmentation of the target and organs-at-risk (OARs) are two important tasks in MR-only radiotherapy treatment planning (RTP). Several methods have been proposed that use paired MR and CT images for MR-CT synthesis or for target and OAR segmentation. However, these methods usually treat synthesis and segmentation as two separate tasks and ignore the registration errors that inevitably remain in paired images after standard registration. In this article, we propose a cross-task feedback fusion generative adversarial network (CTFF-GAN) for joint MR-CT synthesis and segmentation of the target and OARs, so that each task enhances the other. Specifically, we propose a cross-task feedback fusion (CTFF) module that feeds semantic information from the segmentation task back to the synthesis task to correct anatomical structures in the synthetic CT images. In addition, we use the CT images synthesized from MR images for multimodal segmentation, which eliminates the registration errors. Moreover, we develop a multitask discriminator that encourages the generator to devote more attention to organ boundaries. Experiments on our nasopharyngeal carcinoma dataset show that CTFF-GAN achieves strong performance, with an MAE of 70.69 ± 10.50 HU, SSIM of 0.755 ± 0.03, and PSNR of 27.44 ± 1.20 dB for synthetic CT, and a mean Dice of 0.783 ± 0.075 for target and OAR segmentation. CTFF-GAN outperforms state-of-the-art methods in both the synthesis and segmentation tasks.
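The abstract reports MAE (in HU), SSIM, and PSNR (in dB) for the synthetic CT, and the Dice coefficient for segmentation. As a point of reference, the sketch below gives the standard definitions of MAE, PSNR, and Dice in NumPy; it is not the authors' evaluation code, and the assumed HU data range (4095) and array shapes are illustrative only. SSIM is typically computed with skimage.metrics.structural_similarity rather than reimplemented.

```python
# Minimal sketch of the standard image-quality and segmentation metrics
# mentioned in the abstract (an illustration, not the paper's evaluation code).
import numpy as np

def mae_hu(ct_pred: np.ndarray, ct_true: np.ndarray) -> float:
    """Mean absolute error between synthetic and real CT, in Hounsfield units."""
    return float(np.mean(np.abs(ct_pred - ct_true)))

def psnr_db(ct_pred: np.ndarray, ct_true: np.ndarray, data_range: float = 4095.0) -> float:
    """Peak signal-to-noise ratio in dB; data_range is an assumed HU span."""
    mse = np.mean((ct_pred - ct_true) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def dice(seg_pred: np.ndarray, seg_true: np.ndarray, eps: float = 1e-6) -> float:
    """Dice coefficient between two binary masks (one target or OAR at a time)."""
    p, t = seg_pred.astype(bool), seg_true.astype(bool)
    inter = np.logical_and(p, t).sum()
    return float((2.0 * inter + eps) / (p.sum() + t.sum() + eps))

if __name__ == "__main__":
    # Synthetic example volumes with illustrative shapes and noise levels.
    rng = np.random.default_rng(0)
    ct_true = rng.uniform(-1000.0, 3000.0, size=(64, 64, 64))
    ct_pred = ct_true + rng.normal(0.0, 50.0, size=ct_true.shape)
    mask_true = rng.random((64, 64, 64)) > 0.5
    mask_pred = rng.random((64, 64, 64)) > 0.5
    print(mae_hu(ct_pred, ct_true), psnr_db(ct_pred, ct_true), dice(mask_pred, mask_true))
```

In practice these metrics are averaged over test patients, and Dice is computed per structure (target and each OAR) before taking the mean, which is how a single summary value such as 0.783 ± 0.075 would be obtained.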