Texture (cosmology)
Artificial intelligence
Computer vision
Computer science
Diffusion
Computed tomography
Mathematics
Pattern recognition (psychology)
Medicine
Radiology
Image (mathematics)
Physics
Thermodynamics
Authors
Youjian Zhang, Li Li, Wei Wang, Xinquan Yang, Haotian Zhou, Jiahui He, Yaoqin Xie, Yuming Jiang, Wei Sun, Xinyuan Zhang, G. S. Zhou, Zhicheng Zhang
Identifier
DOI:10.1016/j.media.2024.103362
Abstract
Cone beam computed tomography (CBCT) serves as a vital imaging modality in diverse clinical applications, but is constrained by inherent limitations such as reduced image quality and increased noise. In contrast, computed tomography (CT) offers superior resolution and tissue contrast. Bridging the gap between these modalities through CBCT-to-CT synthesis becomes imperative. Deep learning techniques have enhanced this synthesis, yet challenges with generative adversarial networks persist. Denoising Diffusion Probabilistic Models have emerged as a promising alternative in image synthesis. In this study, we propose a novel texture-preserving diffusion model for CBCT-to-CT synthesis that incorporates adaptive high-frequency optimization and a dual-mode feature fusion module. Our method aims to enhance high-frequency details, effectively fuse cross-modality features, and preserve fine image structures. Extensive validation demonstrates superior performance over existing methods, showcasing better generalization. The proposed model offers a transformative pathway to augment diagnostic accuracy and refine treatment planning across various clinical settings. This work represents a pivotal step toward non-invasive, safer, and high-quality CBCT-to-CT synthesis, advancing personalized diagnostic imaging practices.
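The abstract builds on Denoising Diffusion Probabilistic Models (DDPMs) conditioned on CBCT input. As a rough illustration of that underlying mechanism only, the sketch below shows a minimal conditional DDPM training step in PyTorch: noise a CT slice under a standard forward process, then train a network to predict that noise given the noisy CT and its paired CBCT. Every name here (`TinyDenoiser`, the schedule, conditioning by channel concatenation) is a hypothetical stand-in; the paper's actual contributions, such as adaptive high-frequency optimization and the dual-mode feature fusion module, are not reproduced here.

```python
# Minimal, illustrative sketch of conditional DDPM training for
# CBCT-to-CT synthesis. All names are hypothetical stand-ins and this
# is NOT the authors' implementation.
import torch
import torch.nn as nn

T = 1000                                      # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)         # standard linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)     # cumulative product \bar{alpha}_t

class TinyDenoiser(nn.Module):
    """Stand-in denoiser: predicts the noise added to a CT slice,
    conditioned on the paired CBCT via channel concatenation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.SiLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.SiLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x_t, cbct, t):
        # A real model would also embed the timestep t; omitted for brevity.
        return self.net(torch.cat([x_t, cbct], dim=1))

def training_step(model, ct, cbct):
    """One DDPM training step: sample a timestep, noise the CT under
    q(x_t | x_0), predict the noise from (x_t, CBCT), regress with MSE."""
    b = ct.shape[0]
    t = torch.randint(0, T, (b,))
    eps = torch.randn_like(ct)
    a_bar = alpha_bars[t].view(b, 1, 1, 1)
    x_t = a_bar.sqrt() * ct + (1 - a_bar).sqrt() * eps
    eps_hat = model(x_t, cbct, t)
    return nn.functional.mse_loss(eps_hat, eps)

# Usage with random tensors standing in for a paired CBCT/CT slice batch:
model = TinyDenoiser()
ct = torch.randn(4, 1, 64, 64)    # target CT slices
cbct = torch.randn(4, 1, 64, 64)  # conditioning CBCT slices
loss = training_step(model, ct, cbct)
loss.backward()
```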