Computer science
Artificial intelligence
Discriminator
Computer vision
Iterative reconstruction
Encoder
Positron emission tomography
Pattern recognition (psychology)
Feature (linguistics)
Image quality
Modality (human-computer interaction)
Voxel
Nuclear medicine
Image (mathematics)
Medicine
Telecommunications
Linguistics
Philosophy
Detector
Operating system
Authors
Yan Wang,Yanmei Luo,Chen Zu,Bo Zhan,Zhengyang Jiao,Xi Wu,Jiliu Zhou,Dinggang Shen,Luping Zhou
Identifier
DOI:10.1016/j.media.2023.102983
Abstract
Positron emission tomography (PET) scans can reveal abnormal metabolic activities of cells and provide valuable information for clinical diagnosis. Generally, standard-dose PET (SPET) images contain more diagnostic information than low-dose PET (LPET) images, but higher-dose scans also entail higher potential radiation risks. To reduce the radiation risk while acquiring high-quality PET images, in this paper we propose a 3D multi-modality edge-aware Transformer-GAN for high-quality SPET reconstruction from the corresponding LPET images and T1 acquisitions from magnetic resonance imaging (T1-MRI). Specifically, to fully exploit the metabolic distributions in LPET and the anatomical structural information in T1-MRI, we first use two separate CNN-based encoders to extract local spatial features from the two modalities, respectively, and design a multimodal feature integration module to effectively integrate the two kinds of features, accounting for the diverse contributions of features at different locations. Then, since CNNs describe local spatial information well but struggle to model long-range dependencies in images, we further apply a Transformer-based encoder to extract global semantic information from the input images and use a CNN decoder to transform the encoded features into SPET images. Finally, a patch-based discriminator is applied to ensure the similarity of the patch-wise data distribution between the reconstructed and real images. Considering the importance of edge information in anatomical structures for clinical disease diagnosis, in addition to the voxel-level estimation error and the adversarial loss, we also introduce an edge-aware loss to retain more edge detail in the reconstructed SPET images. Experiments on a phantom dataset and a clinical dataset validate that our proposed method can effectively reconstruct high-quality SPET images and outperforms current state-of-the-art methods in both qualitative and quantitative evaluations.
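To make the generator pipeline the abstract describes more concrete, below is a minimal, hypothetical PyTorch sketch: two 3D CNN encoders (LPET and T1-MRI), a simple gated fusion as a stand-in for the paper's multimodal feature integration module, a Transformer encoder over flattened feature tokens, and a CNN decoder that produces the estimated SPET volume. All layer sizes, the fusion rule, and the hyper-parameters are illustrative assumptions, not the authors' actual configuration.

```python
# Hypothetical sketch of the LPET + T1-MRI -> SPET generator described in the abstract.
import torch
import torch.nn as nn


class ConvEncoder3D(nn.Module):
    """Small 3D CNN encoder extracting local spatial features from one modality."""
    def __init__(self, in_ch=1, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, feat_ch // 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(feat_ch // 2, feat_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)


class FusionModule(nn.Module):
    """Stand-in for the multimodal feature integration module: a learned per-voxel
    gate weights the contribution of each modality's features."""
    def __init__(self, feat_ch=64):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv3d(2 * feat_ch, feat_ch, 1), nn.Sigmoid())

    def forward(self, f_pet, f_mri):
        g = self.gate(torch.cat([f_pet, f_mri], dim=1))
        return g * f_pet + (1 - g) * f_mri


class TransformerGANGenerator(nn.Module):
    """CNN encoders -> fusion -> Transformer encoder over tokens -> CNN decoder."""
    def __init__(self, feat_ch=64, depth=4, heads=4):
        super().__init__()
        self.enc_pet = ConvEncoder3D(1, feat_ch)
        self.enc_mri = ConvEncoder3D(1, feat_ch)
        self.fuse = FusionModule(feat_ch)
        layer = nn.TransformerEncoderLayer(d_model=feat_ch, nhead=heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(feat_ch, feat_ch // 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(feat_ch // 2, 1, 4, stride=2, padding=1),
        )

    def forward(self, lpet, t1_mri):
        f = self.fuse(self.enc_pet(lpet), self.enc_mri(t1_mri))   # (B, C, D, H, W)
        b, c, d, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)                      # (B, D*H*W, C)
        tokens = self.transformer(tokens)                          # model long-range dependencies
        f = tokens.transpose(1, 2).reshape(b, c, d, h, w)
        return self.dec(f)                                         # estimated SPET volume


if __name__ == "__main__":
    g = TransformerGANGenerator()
    lpet = torch.randn(1, 1, 32, 32, 32)   # toy 3D patches, not real PET dimensions
    t1 = torch.randn(1, 1, 32, 32, 32)
    print(g(lpet, t1).shape)               # torch.Size([1, 1, 32, 32, 32])
```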
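The patch-based discriminator mentioned in the abstract can be illustrated with a compact PatchGAN-style 3D network that emits one real/fake score per local patch rather than a single score for the whole volume. The channel counts and layer depths below are assumptions, not the paper's configuration.

```python
# Hypothetical PatchGAN-style 3D discriminator: one logit per receptive-field patch.
import torch
import torch.nn as nn


class PatchDiscriminator3D(nn.Module):
    def __init__(self, in_ch=1, base_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, base_ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(base_ch, base_ch * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(base_ch * 2, 1, 3, stride=1, padding=1),  # per-patch logits
        )

    def forward(self, x):
        return self.net(x)


# Usage: PatchDiscriminator3D()(spet_volume) returns a grid of patch-wise logits;
# the adversarial loss is averaged over this score map.
```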
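Finally, the edge-aware loss can be sketched as a penalty on the difference between the spatial gradients (edges) of the reconstructed and real SPET volumes. The paper's exact edge operator and loss weights are not reproduced here; simple finite differences and placeholder trade-off weights are assumptions for illustration.

```python
# Hedged sketch of an edge-aware loss over 3D volumes shaped (B, C, D, H, W).
import torch


def edge_aware_loss(pred, target):
    """L1 distance between per-axis finite-difference gradients of two volumes."""
    loss = 0.0
    for dim in (2, 3, 4):  # depth, height, width axes
        loss = loss + torch.mean(torch.abs(torch.diff(pred, dim=dim) - torch.diff(target, dim=dim)))
    return loss / 3.0


# The generator objective would then combine voxel-wise error, the adversarial term,
# and the edge term with trade-off weights (lambda values here are placeholders):
# total = l1_loss + lambda_adv * adv_loss + lambda_edge * edge_aware_loss(pred, target)
```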