Outline
Computer science
Generative adversarial network
Artificial intelligence
Residual
Computed tomography
Ground truth
Iodinated contrast media
Contrast (vision)
Radiology
Deep learning
Computer vision
Medicine
Algorithm
Computer graphics (images)
Authors
Huiqiao Xie, Yang Lei, Tonghe Wang, Pretesh Patel, Walter J. Curran, Tian Liu, Xiangyang Tang, Xiaofeng Yang
Source
Journal: Medical Imaging 2018: Physics of Medical Imaging
Date: 2021-02-13
Volume & issue: 141-141
Cited by: 7
Abstract
Contrast-enhanced computed tomography (CECT) is commonly used in clinical radiotherapy practice for enhanced delineation of tumors and organs at risk (OARs), since it provides additional visualization of soft-tissue and vessel anatomy. However, the additional CECT scan leads to increased radiation dose, prolonged scan time, risk of contrast-induced nephropathy (CIN), the potential need for image registration to the non-contrast simulation CT, and elevated cost. Hypothesizing that the non-contrast simulation CT contains sufficient features to differentiate blood from other soft tissues, in this study we propose a novel deep learning-based method for generating CECT images from non-contrast CT. The method exploits a cycle-consistent generative adversarial network (CycleGAN) framework to learn a mapping from non-contrast CT to CECT. A residual U-Net was employed as the generator of the CycleGAN to force the model to learn the specific difference between the non-contrast CT and CECT. The proposed algorithm was evaluated on 20 sets of abdominal patient data using five-fold cross-validation. Each patient was scanned at the same position with non-contrast simulation CT and CECT. The CECT images were treated as the training target during training and as ground truth during testing; the non-contrast simulation CT served as the input. Preliminary results from visual and quantitative inspection suggest that the proposed method can effectively generate CECT images from non-contrast CT. This method could improve anatomy definition and contouring in radiotherapy without the additional clinical effort of CECT scanning.
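The cycle-consistency idea at the core of the CycleGAN framework described above can be sketched schematically. In the paper the two generators are residual U-Nets trained adversarially; in this minimal illustration they are replaced by hypothetical toy functions (`G` for "add contrast enhancement", `F` for "remove it"), so only the structure of the cycle loss is shown, not the authors' actual model.

```python
import numpy as np


def cycle_consistency_loss(G, F, ct, cect):
    """Mean absolute error of both reconstruction cycles.

    G: maps non-contrast CT -> synthetic CECT
    F: maps CECT -> synthetic non-contrast CT
    The loss penalizes failure to recover the original image after a
    round trip in either direction (CT -> CECT -> CT, CECT -> CT -> CECT).
    """
    forward_cycle = np.abs(F(G(ct)) - ct).mean()
    backward_cycle = np.abs(G(F(cect)) - cect).mean()
    return forward_cycle + backward_cycle


# Toy stand-in generators (NOT the paper's residual U-Nets): a fixed
# intensity offset pretends to add/remove iodine contrast enhancement.
G = lambda x: x + 100.0
F = lambda x: x - 100.0

rng = np.random.default_rng(0)
ct = rng.random((4, 64, 64)) * 1000.0  # hypothetical HU-like volume
cect = G(ct)

# Perfect inverses -> near-zero cycle loss; a mismatched pair -> large loss.
good_loss = cycle_consistency_loss(G, F, ct, cect)
bad_loss = cycle_consistency_loss(G, lambda x: x - 50.0, ct, cect)
```

A residual generator, as used in the paper, would fit naturally here: predicting only the CT-to-CECT difference means `G` starts near the identity map, which keeps the cycle loss small from the outset of training.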