Image denoising
Noise reduction
Probabilistic logic
Computer science
Artificial intelligence
Computer vision
Pattern recognition (psychology)
Authors
Junbo Peng,Richard L. J. Qiu,Jacob Wynne,Chih‐Wei Chang,Shaoyan Pan,Tonghe Wang,Justin Roper,Tian Liu,Pretesh Patel,David S. Yu,Xiaofeng Yang
Source
Journal: Cornell University - arXiv
Date: 2023-01-01
Identifier
DOI: 10.48550/arxiv.2303.02649
Abstract
Background: Daily or weekly cone-beam computed tomography (CBCT) scans are commonly used for accurate patient positioning during image-guided radiotherapy (IGRT), which also makes them an ideal option for adaptive radiotherapy (ART) replanning. However, severe artifacts and inaccurate Hounsfield unit (HU) values prevent their use for quantitative applications such as organ segmentation and dose calculation. To enable the clinical practice of online ART, it is crucial to obtain CBCT scans with a quality comparable to that of a CT scan.

Purpose: This work aims to develop a conditional diffusion model that performs image translation from the CBCT to the CT domain to improve CBCT image quality.

Methods: The proposed method is a conditional denoising diffusion probabilistic model (DDPM) that uses a time-embedded U-Net architecture with residual and attention blocks to gradually transform standard Gaussian noise into the target CT distribution conditioned on the CBCT. The model was trained on deformed planning CT (dpCT) and CBCT image pairs, and its feasibility was verified in a brain patient study and a head-and-neck (H&N) patient study. Performance was evaluated using mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross-correlation (NCC) metrics on the generated synthetic CT (sCT) samples. The proposed method was also compared to four other diffusion model-based sCT generation methods.

Conclusions: The proposed conditional DDPM can generate sCT from CBCT with accurate HU numbers and reduced artifacts, enabling accurate CBCT-based organ segmentation and dose calculation for online ART.
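To illustrate the conditional sampling idea described in the Methods, the sketch below shows standard DDPM ancestral sampling where the noise-prediction network sees the CBCT concatenated with the noisy sample at every reverse step. This is a minimal, hypothetical reconstruction from the abstract only: the `ConditionalEpsNet` placeholder, the linear beta schedule, the number of steps `T`, and the function names are assumptions; the paper's actual network is a time-embedded U-Net with residual and attention blocks, which is not reproduced here.

```python
import torch

# Hypothetical stand-in for the paper's time-embedded U-Net with residual
# and attention blocks; any module with this call signature would work.
class ConditionalEpsNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # 2 input channels: noisy CT sample x_t concatenated with the CBCT condition
        self.net = torch.nn.Conv2d(2, 1, kernel_size=3, padding=1)

    def forward(self, x_t, cbct, t):
        # A real model would embed the timestep t; this placeholder ignores it.
        return self.net(torch.cat([x_t, cbct], dim=1))

@torch.no_grad()
def sample_sct(model, cbct, T=1000):
    """Reverse diffusion: start from Gaussian noise and denoise step by step,
    conditioning every step on the CBCT, to obtain a synthetic CT (sCT)."""
    betas = torch.linspace(1e-4, 0.02, T)        # assumed linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn_like(cbct)                   # x_T ~ N(0, I)
    for t in reversed(range(T)):
        t_batch = torch.full((cbct.shape[0],), t)
        eps = model(x, cbct, t_batch)            # predicted noise, conditioned on CBCT
        coef = (1 - alphas[t]) / torch.sqrt(1 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x

# Usage: one 256x256 CBCT slice, batch size 1 (random data for illustration).
model = ConditionalEpsNet()
cbct_slice = torch.randn(1, 1, 256, 256)
sct = sample_sct(model, cbct_slice, T=50)
```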