Image denoising
Noise reduction
Computer science
Artificial intelligence
Computer vision
Modality
Image (mathematics)
Pattern recognition
Authors
Jiaqi Cui,Yan Wang,Luping Zhou,Yuchen Fei,Jiliu Zhou,Dinggang Shen
Source
Journal: IEEE Transactions on Circuits and Systems for Video Technology
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Volume/Issue: 1-1
Citations: 2
Identifier
DOI: 10.1109/TCSVT.2024.3398686
Abstract
To obtain high-quality positron emission tomography (PET) images while minimizing radiation hazards, various methods have been developed to acquire standard-dose PET (SPET) images from low-dose PET (LPET) images. Recent efforts mainly focus on improving the denoising quality by utilizing multi-modal inputs. However, these methods exhibit certain limitations. First, they neglect the varied significance of each modality in denoising. Second, they rely on inflexible voxel-based representations, failing to explicitly preserve intricate structures and contexts in images. To alleviate these problems, we propose a 3D Point-based Multi-modal Context Clusters GAN, namely PMC²-GAN, for obtaining high-quality SPET images from LPET and magnetic resonance imaging (MRI) images. Specifically, we transform the 3D image into unorganized points to flexibly and precisely express its complex structure. Moreover, a self-context clusters (Self-CC) block is devised to explore fine-grained contextual relationships of the image from the perspective of points. Additionally, considering the diverse importance of different modalities, we introduce a cross-context clusters (Cross-CC) block, which prioritizes PET as the primary modality while regarding MRI as the auxiliary one, to effectively integrate the knowledge from the two modalities. Overall, built on the smart integration of Self- and Cross-CC blocks, our PMC²-GAN follows the GAN architecture. Extensive experiments validate the superiority of our method.
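The abstract describes three concrete mechanisms: flattening a 3D volume into an unorganized point set, self-context clustering over those points (Self-CC), and cross-context clustering that treats PET as primary and MRI as auxiliary (Cross-CC). The PyTorch sketch below illustrates one plausible reading of these steps, loosely following the general context-clusters paradigm (similarity-based soft assignment of points to centers, per-cluster aggregation, and dispatch back to points). It is not the authors' implementation: the function names, the chunk-mean center proposal, and all shapes and hyperparameters are hypothetical.

```python
# A minimal sketch (not the paper's code) of point-based context clustering
# for a 3D volume. All names and hyperparameters here are hypothetical.
import torch
import torch.nn.functional as F

def volume_to_points(volume):
    """Flatten a 3D volume (C, D, H, W) into an unorganized point set.

    Each voxel becomes one point whose feature is its channel vector
    concatenated with its normalized (z, y, x) coordinates, so spatial
    structure is kept explicitly rather than through a fixed grid.
    """
    C, D, H, W = volume.shape
    zz, yy, xx = torch.meshgrid(
        torch.linspace(0, 1, D), torch.linspace(0, 1, H),
        torch.linspace(0, 1, W), indexing="ij",
    )
    coords = torch.stack([zz, yy, xx], dim=0)        # (3, D, H, W)
    points = torch.cat([volume, coords], dim=0)      # (C+3, D, H, W)
    return points.reshape(C + 3, -1).T               # (N, C+3), N = D*H*W

def context_cluster(points, num_centers=64, beta=1.0):
    """One Self-CC-like step over a point set (N, F).

    Centers are feature means of evenly chunked points (a toy stand-in
    for a learned center proposal; assumes N divisible by num_centers).
    Points are softly assigned to centers by cosine similarity, cluster
    context is aggregated, then dispatched back to every point.
    """
    N, Fdim = points.shape
    centers = points.reshape(num_centers, N // num_centers, Fdim).mean(dim=1)
    sim = F.normalize(points, dim=-1) @ F.normalize(centers, dim=-1).T  # (N, K)
    assign = torch.softmax(beta * sim, dim=-1)       # soft cluster assignment
    context = assign.T @ points / (assign.sum(dim=0, keepdim=True).T + 1e-6)
    return points + assign @ context                 # dispatch context back

def cross_context_cluster(pet_points, mri_points, num_centers=64, beta=1.0):
    """A Cross-CC-like step: PET is primary, MRI is auxiliary.

    Centers are proposed from the primary PET points, but the aggregated
    context is gathered from the auxiliary MRI points, so MRI knowledge
    is injected into the PET representation (a hedged reading; the
    paper's exact formulation may differ).
    """
    N, Fdim = pet_points.shape
    centers = pet_points.reshape(num_centers, N // num_centers, Fdim).mean(dim=1)
    sim = F.normalize(mri_points, dim=-1) @ F.normalize(centers, dim=-1).T
    assign = torch.softmax(beta * sim, dim=-1)       # MRI-to-center assignment
    context = assign.T @ mri_points / (assign.sum(dim=0, keepdim=True).T + 1e-6)
    # Dispatch MRI-derived context to PET points via PET-to-center similarity.
    pet_sim = torch.softmax(
        beta * (F.normalize(pet_points, dim=-1) @ F.normalize(centers, dim=-1).T),
        dim=-1,
    )
    return pet_points + pet_sim @ context

if __name__ == "__main__":
    lpet = torch.randn(1, 16, 16, 16)                # toy single-channel LPET patch
    mri = torch.randn(1, 16, 16, 16)                 # toy aligned MRI patch
    pts = volume_to_points(lpet)                     # (4096, 4)
    refined = context_cluster(pts)                   # Self-CC-like update
    fused = cross_context_cluster(refined, volume_to_points(mri))
    print(pts.shape, refined.shape, fused.shape)
```

Augmenting each voxel's feature with its normalized coordinates is what lets the point set stay "unorganized" without losing spatial structure, which matches the abstract's contrast with inflexible voxel-based representations.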