Medicine
Narrow-band imaging
Nasopharyngeal carcinoma
Radiology
Deep learning
Endoscopy
Authors
Jianwei Xu, Jun Wang, Xianzhang Bian, Ji-Qing Zhu, Cheng-Wei Tie, Xiaoqing Liu, Zhiyong Zhou, Xiaoguang Ni, Dahong Qian
Abstract
OBJECTIVES/HYPOTHESIS: To develop a deep-learning-based automatic diagnosis system for distinguishing nasopharyngeal carcinoma (NPC) from noncancer (inflammation and hyperplasia), using both white light imaging (WLI) and narrow-band imaging (NBI) nasopharyngoscopy images.
STUDY DESIGN: Retrospective study.
METHODS: A total of 4,783 nasopharyngoscopy images (2,898 WLI and 1,885 NBI) from 671 patients were collected, and a novel deep convolutional neural network (DCNN) framework, named the Siamese deep convolutional neural network (S-DCNN), was developed to utilize WLI and NBI images simultaneously and thereby improve classification performance. To verify the effectiveness of combining these two imaging modalities for prediction, the proposed S-DCNN was compared with two baseline models: DCNN-1 (WLI images only) and DCNN-2 (NBI images only).
RESULTS: In threefold cross-validation, the overall accuracy and area under the curve (AUC) of the three DCNNs were 94.9% (95% confidence interval [CI] 93.3%-96.5%) and 0.986 (95% CI 0.982-0.992), 87.0% (95% CI 84.2%-89.7%) and 0.930 (95% CI 0.906-0.961), and 92.8% (95% CI 90.4%-95.3%) and 0.971 (95% CI 0.953-0.992), respectively. The accuracy of the S-DCNN was significantly higher than that of DCNN-1 (P < .001) and DCNN-2 (P = .008).
CONCLUSION: Using deep-learning technology to automatically diagnose NPC under nasopharyngoscopy can provide a valuable reference for NPC screening. Superior performance can be obtained by simultaneously utilizing the multimodal features of the NBI and WLI images of the same patient.
LEVEL OF EVIDENCE: 3. Laryngoscope, 2021.
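The abstract does not detail the S-DCNN architecture, so the following is only a minimal sketch of the general idea it describes: a two-branch Siamese CNN whose shared backbone extracts features from a paired WLI and NBI image, with the two feature vectors fused for binary NPC vs. non-cancer classification. The backbone choice (ResNet-18), concatenation-based fusion, input size, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a two-branch Siamese CNN fusing WLI and NBI features.
# All architectural details below are assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision import models


class SiameseDCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Shared-weight backbone applied to both modalities (Siamese design).
        backbone = models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()          # strip the original classification head
        self.backbone = backbone
        # Classifier over the concatenated WLI + NBI feature vectors.
        self.classifier = nn.Sequential(
            nn.Linear(feat_dim * 2, 256),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, wli: torch.Tensor, nbi: torch.Tensor) -> torch.Tensor:
        f_wli = self.backbone(wli)           # features from the WLI image
        f_nbi = self.backbone(nbi)           # features from the paired NBI image
        fused = torch.cat([f_wli, f_nbi], dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    model = SiameseDCNN()
    wli = torch.randn(4, 3, 224, 224)        # batch of WLI images
    nbi = torch.randn(4, 3, 224, 224)        # corresponding NBI images
    logits = model(wli, nbi)
    print(logits.shape)                      # torch.Size([4, 2])
```

Sharing backbone weights across the two branches keeps the parameter count close to that of a single-modality DCNN while still letting the classifier exploit complementary WLI and NBI cues, which is consistent with the multimodal advantage reported in the abstract.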