Hounsfield scale
Artificial intelligence
Image quality
Image-guided radiotherapy
Computer science
Cone-beam CT
Noise reduction
Medical imaging
Image registration
Segmentation
Diffusion MRI
Nuclear medicine
Computer vision
Medicine
Image (mathematics)
Computed tomography
Radiology
Magnetic resonance imaging
Authors
Junbo Peng,Richard L. J. Qiu,Jacob Wynne,Chih‐Wei Chang,Shaoyan Pan,Tonghe Wang,Justin Roper,Tian Liu,Pretesh Patel,David S. Yu,Xiaofeng Yang
Abstract
Background: Daily or weekly cone-beam computed tomography (CBCT) scans are commonly used for accurate patient positioning during the image-guided radiotherapy (IGRT) process, making CBCT an ideal option for adaptive radiotherapy (ART) replanning. However, severe artifacts and inaccurate Hounsfield unit (HU) values prevent its use for quantitative applications such as organ segmentation and dose calculation. To enable the clinical practice of online ART, it is crucial to obtain CBCT scans with a quality comparable to that of a CT scan.

Purpose: This work aims to develop a conditional diffusion model that performs image translation from the CBCT to the CT distribution in order to improve CBCT image quality.

Methods: The proposed method is a conditional denoising diffusion probabilistic model (DDPM) that uses a time-embedded U-Net architecture with residual and attention blocks to gradually transform a white Gaussian noise sample into the target CT distribution, conditioned on the CBCT. The model was trained on deformed planning CT (dpCT) and CBCT image pairs, and its feasibility was verified in a brain patient study and a head-and-neck (H&N) patient study. The performance of the proposed algorithm was evaluated on the generated synthetic CT (sCT) samples using mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross-correlation (NCC) metrics. The proposed method was also compared with four other diffusion model-based sCT generation methods.

Results: In the brain patient study, the MAE, PSNR, and NCC of the generated sCT were 25.99 HU, 30.49 dB, and 0.99, respectively, compared with 40.63 HU, 27.87 dB, and 0.98 for the CBCT images. In the H&N patient study, the corresponding metrics were 32.56 HU, 27.65 dB, and 0.98 for the sCT, versus 38.99 HU, 27.00 dB, and 0.98 for the CBCT. Compared with the other four diffusion models and a cycle-consistent generative adversarial network (CycleGAN), the proposed method showed superior results in both visual quality and quantitative analysis.

Conclusions: The proposed conditional DDPM can generate sCT from CBCT with accurate HU values and reduced artifacts, enabling accurate CBCT-based organ segmentation and dose calculation for online ART.
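The Methods section describes a conditional DDPM in which a time-embedded U-Net predicts the noise added to the deformed planning CT while being conditioned on the paired CBCT. The sketch below illustrates that idea in PyTorch under common DDPM assumptions: a standard linear noise schedule, channel-wise concatenation of the CBCT as the condition, 2D image batches, and a hypothetical `eps_model` standing in for the paper's time-embedded U-Net with residual and attention blocks. None of these hyperparameters or implementation details are taken from the paper itself.

```python
# Minimal conditional DDPM sketch (assumptions noted above), not the authors' code.
import math
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)            # standard linear noise schedule (assumed)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative product \bar{alpha}_t

def ddpm_loss(eps_model, ct, cbct):
    """One conditional DDPM training step: predict the noise added to the
    deformed planning CT (x_0), conditioned on the paired CBCT."""
    b = ct.shape[0]
    t = torch.randint(0, T, (b,), device=ct.device)
    eps = torch.randn_like(ct)
    a_bar = alphas_bar.to(ct.device)[t].view(b, 1, 1, 1)
    x_t = a_bar.sqrt() * ct + (1.0 - a_bar).sqrt() * eps   # forward process q(x_t | x_0)
    # Condition by concatenating the CBCT with the noisy CT along the channel axis.
    eps_pred = eps_model(torch.cat([x_t, cbct], dim=1), t)
    return F.mse_loss(eps_pred, eps)

@torch.no_grad()
def sample_sct(eps_model, cbct):
    """Reverse diffusion: start from white Gaussian noise and iteratively
    denoise, conditioned on the CBCT, to draw a synthetic CT sample."""
    x = torch.randn_like(cbct)
    for t in reversed(range(T)):
        t_b = torch.full((cbct.shape[0],), t, device=cbct.device, dtype=torch.long)
        eps_pred = eps_model(torch.cat([x, cbct], dim=1), t_b)
        beta_t = float(betas[t])
        a_bar_t = float(alphas_bar[t])
        # DDPM posterior mean with the simple sigma_t^2 = beta_t variance choice.
        mean = (x - beta_t / math.sqrt(1.0 - a_bar_t) * eps_pred) / math.sqrt(1.0 - beta_t)
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + math.sqrt(beta_t) * noise
    return x
```

Concatenating the CBCT with the noisy image at every step is one common way to condition a DDPM on an input image; the paper's exact conditioning mechanism and noise schedule may differ.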
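For reference, the reported image-quality metrics (MAE in HU, PSNR in dB, and NCC) can be computed between an sCT and the dpCT reference as in the following NumPy sketch. The masking, windowing, and data-range conventions used in the study are not specified here and are assumptions.

```python
# Illustrative metric implementations; array inputs are HU-valued NumPy arrays.
import numpy as np

def mae_hu(sct, ref):
    """Mean absolute error in Hounsfield units between sCT and reference CT."""
    return float(np.mean(np.abs(sct - ref)))

def psnr_db(sct, ref, data_range=None):
    """Peak signal-to-noise ratio in dB; data_range defaults to the dynamic
    range of the reference image (an assumed convention)."""
    if data_range is None:
        data_range = float(ref.max() - ref.min())
    mse = np.mean((sct - ref) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ncc(sct, ref):
    """Normalized cross-correlation between the two zero-mean images."""
    a = sct - sct.mean()
    b = ref - ref.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```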