Authors
Jiawei Sun, Nannan Cao, Hui Bi, Liugang Gao, Kai Xie, Tao Lin, Jianfeng Sui, Xinye Ni
Identifier
DOI:10.1016/j.compbiomed.2024.108868
Abstract
In non-coplanar radiotherapy, DR is commonly used for image guidance, which requires fusing the intraoperative DR with the preoperative CT. However, this fusion task performs poorly because of misalignment and the dimensional difference between DR and CT. Reconstructing CT from DR can alleviate this challenge. We therefore propose a unified generation and registration framework, named DiffRecon, for intraoperative CT reconstruction from DR using a diffusion model. Specifically, we use a generation model to synthesize intraoperative CTs, eliminating the dimensional difference, and a registration model to align the synthetic CTs, improving reconstruction. To ensure clinical usability, CT is estimated not only from DR; the preoperative CT is also introduced as a prior. We design a dual encoder that learns, in parallel, prior knowledge and the spatial deformation among pre- and intra-operative CT pairs and DR for deformable 2D/3D feature conversion. To calibrate the cross-modal fusion, we insert cross-attention modules that enhance the 2D/3D feature interaction between the dual encoders. DiffRecon has been evaluated with both image-quality metrics and dosimetric indicators. Image synthesis quality is high, with an RMSE of 0.02±0.01, a PSNR of 44.92±3.26, and an SSIM of 0.994±0.003. The mean gamma passing rates between rCT and sCT for the 1%/1 mm, 2%/2 mm, and 3%/3 mm acceptance criteria are 95.2%, 99.4%, and 99.9%, respectively. The proposed DiffRecon can accurately reconstruct CT from a single DR projection with excellent image generation quality and dosimetric accuracy. These results demonstrate that the method can be applied in non-coplanar adaptive radiotherapy workflows.
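To make the cross-attention idea concrete, below is a minimal PyTorch sketch of how tokens from a 3D feature volume (e.g., from a CT-prior encoder branch) could attend to tokens from a 2D feature map (e.g., from a DR encoder branch). All class names, tensor shapes, and hyperparameters here are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Hypothetical cross-attention block: 3D CT-branch features (queries)
    attend to 2D DR-branch features (keys/values). Names and dimensions
    are assumptions for illustration, not taken from DiffRecon."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, feat_3d: torch.Tensor, feat_2d: torch.Tensor) -> torch.Tensor:
        # feat_3d: (B, C, D, H, W) from the 3D (CT-prior) encoder branch
        # feat_2d: (B, C, H2, W2) from the 2D (DR) encoder branch
        b, c, d, h, w = feat_3d.shape
        q = feat_3d.flatten(2).transpose(1, 2)    # (B, D*H*W, C) query tokens
        kv = feat_2d.flatten(2).transpose(1, 2)   # (B, H2*W2, C) key/value tokens
        out, _ = self.attn(self.norm_q(q), self.norm_kv(kv), self.norm_kv(kv))
        out = q + out                             # residual connection
        return out.transpose(1, 2).reshape(b, c, d, h, w)

# Toy usage with small tensors
block = CrossModalAttention(dim=32, num_heads=4)
ct_feat = torch.randn(1, 32, 8, 8, 8)    # 3D features from preoperative CT
dr_feat = torch.randn(1, 32, 16, 16)     # 2D features from intraoperative DR
fused = block(ct_feat, dr_feat)
print(fused.shape)  # torch.Size([1, 32, 8, 8, 8])
```

Flattening both feature grids into token sequences is one common way to let every 3D location weigh information from every 2D projection location, which matches the abstract's description of enhancing 2D/3D feature interaction between the two encoders.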