Computer science
Decoding methods
Artificial intelligence
Segmentation
Generator (circuit theory)
Transformation (genetics)
Contrast (vision)
Image (mathematics)
Dual (grammatical number)
Encoding (memory)
Computer vision
Deep learning
Pattern recognition (psychology)
Algorithm
Art
Power (physics)
Biochemistry
Physics
Chemistry
Literature
Quantum mechanics
Gene
Authors
Yulin Yang,Qingqing Chen,Yinhao Li,Fang Wang,Xian‐Hua Han,Yutaro Iwamoto,Jing Liu,Lanfen Lin,Hongjie Hu,Yen‐Wei Chen
Source
Journal: IEEE Journal of Biomedical and Health Informatics
[Institute of Electrical and Electronics Engineers]
Date: 2024-05-20
Volume/Issue: 28 (8): 4737-4750
Identifier
DOI: 10.1109/jbhi.2024.3403199
Abstract
Although contrast-enhanced computed tomography (CE-CT) images significantly improve the accuracy of diagnosing focal liver lesions (FLLs), the administration of contrast agents imposes a considerable physical burden on patients. Using generative models to synthesize CE-CT images from non-contrast CT images offers a promising alternative. However, existing image synthesis models tend to overlook critical regions, which inevitably reduces their effectiveness in downstream tasks. To overcome this challenge, we propose a CE-CT image synthesis model called the Segmentation Guided Crossing Dual Decoding Generative Adversarial Network (SGCDD-GAN). Specifically, the SGCDD-GAN is built around a crossing dual-decoding generator comprising an attention decoder and an improved transformation decoder. The attention decoder is designed to highlight critical regions within the abdominal cavity, while the improved transformation decoder is responsible for synthesizing the CE-CT images. The two decoders are interconnected through a crossing technique so that each enhances the other's capability. Furthermore, we employ a multi-task learning strategy to guide the generator to focus more on the lesion area. To evaluate the proposed SGCDD-GAN, we test it on an in-house CE-CT dataset. In both CE-CT image synthesis tasks, namely synthesizing arterial (ART) phase images and portal venous (PV) phase images, the proposed SGCDD-GAN achieves superior performance across the entire image and the liver region in terms of SSIM, PSNR, MSE, and PCC scores. Furthermore, CE-CT images synthesized by our SGCDD-GAN achieve accuracy rates of 82.68%, 94.11%, and 94.11% in a deep learning-based FLL classification task, and are further supported by a pilot assessment conducted by two radiologists.
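The abstract does not include implementation details, but the crossing dual-decoding idea can be illustrated with a minimal PyTorch sketch. In the hypothetical reconstruction below, a shared encoder feeds two parallel decoders, and at each decoding stage the upsampled features of one path are routed into the other ("crossed"), so the attention decoder's critical-region map and the transformation decoder's synthesized image inform each other. The class name `CrossingDualDecoderGenerator`, all layer sizes, and the exact crossing rule are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 conv + ReLU layers, a standard encoder/decoder unit."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class CrossingDualDecoderGenerator(nn.Module):
    """Hypothetical sketch: a shared encoder, an attention decoder
    (critical-region map) and a transformation decoder (CE-CT synthesis)
    that exchange intermediate features at every decoding stage."""

    def __init__(self, base=32):
        super().__init__()
        self.enc1 = conv_block(1, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        # Stage 1 upsampling, one transposed conv per decoder path.
        self.up_a1 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.up_t1 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec_a1 = conv_block(base * 4, base * 2)
        self.dec_t1 = conv_block(base * 4, base * 2)
        # Stage 2 upsampling.
        self.up_a2 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.up_t2 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec_a2 = conv_block(base * 2, base)
        self.dec_t2 = conv_block(base * 2, base)
        self.att_head = nn.Conv2d(base, 1, 1)  # critical-region map
        self.img_head = nn.Conv2d(base, 1, 1)  # synthesized CE-CT image

    def forward(self, x):
        e1 = self.enc1(x)                   # (B, base,   H,   W)
        e2 = self.enc2(self.pool(e1))       # (B, 2base, H/2, W/2)
        b = self.bottleneck(self.pool(e2))  # (B, 4base, H/4, W/4)

        # Crossing, stage 1: each decoder concatenates the OTHER path's
        # upsampled features with the shared skip connection.
        a_up, t_up = self.up_a1(b), self.up_t1(b)
        a = self.dec_a1(torch.cat([t_up, e2], dim=1))
        t = self.dec_t1(torch.cat([a_up, e2], dim=1))

        # Crossing, stage 2: the same exchange at full resolution.
        a_up, t_up = self.up_a2(a), self.up_t2(t)
        a = self.dec_a2(torch.cat([t_up, e1], dim=1))
        t = self.dec_t2(torch.cat([a_up, e1], dim=1))

        attn = torch.sigmoid(self.att_head(a))         # map in [0, 1]
        fake_ce = torch.tanh(self.img_head(t * attn))  # attention-weighted synthesis
        return fake_ce, attn


if __name__ == "__main__":
    g = CrossingDualDecoderGenerator()
    nc_ct = torch.randn(2, 1, 128, 128)  # batch of non-contrast CT slices
    fake_ce, attn = g(nc_ct)
    print(fake_ce.shape, attn.shape)     # both torch.Size([2, 1, 128, 128])
```

Under the multi-task strategy described in the abstract, the attention head would additionally be supervised with a segmentation loss on the lesion region, steering the generator toward the critical areas; the discriminator and adversarial loss of the GAN are omitted here for brevity.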
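Similarly, the four image-quality metrics reported above (SSIM, PSNR, MSE, PCC) can be computed with standard tooling; a sketch using scikit-image and NumPy follows. Normalizing by the real image's intensity range via `data_range` is an assumption, since the abstract does not state how CT intensities were scaled, and the liver-region scores would additionally require restricting the computation to a liver mask.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate_synthesis(real: np.ndarray, fake: np.ndarray) -> dict:
    """Full-image SSIM / PSNR / MSE / PCC between a real CE-CT slice
    and its synthesized counterpart (both 2-D float arrays)."""
    real = real.astype(np.float64)
    fake = fake.astype(np.float64)
    # Assumed normalization: metrics computed over the real image's range.
    data_range = float(real.max() - real.min())
    return {
        "SSIM": structural_similarity(real, fake, data_range=data_range),
        "PSNR": peak_signal_noise_ratio(real, fake, data_range=data_range),
        "MSE": float(np.mean((real - fake) ** 2)),
        "PCC": float(np.corrcoef(real.ravel(), fake.ravel())[0, 1]),
    }


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    real = rng.random((256, 256))
    fake = real + 0.05 * rng.standard_normal(real.shape)  # noisy stand-in
    print(evaluate_synthesis(real, fake))
```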