Artificial intelligence
Deep learning
Computer science
Modality (human-computer interaction)
Image fusion
Cervical cancer
Medical imaging
Positron emission tomography
Pattern recognition (psychology)
Image registration
Computer vision
Cancer
Image (mathematics)
Radiology
Medicine
Internal medicine
Authors
Yue Ming,Xiying Dong,Jihuai Zhao,Zefu Chen,Hao Wang,Nan Wu
Source
Journal: Methods
[Elsevier]
Date: 2022-05-20
Volume: 205, Pages: 46-52
Citations: 25
Identifier
DOI:10.1016/j.ymeth.2022.05.004
Abstract
Cervical cancer is the fourth most common cancer in women, and its precise detection plays a critical role in disease treatment and prognosis prediction. Fluorodeoxyglucose positron emission tomography combined with computed tomography (FDG-PET/CT, or PET/CT) has an established role, with superior sensitivity and specificity, in most cancer imaging applications. However, a typical FDG-PET/CT analysis involves the time-consuming interpretation of hundreds of images, and this intensive screening workload places a heavy burden on clinicians. We propose a computer-aided, deep learning-based framework that detects cervical cancer from multimodal medical images to increase the efficiency of clinical diagnosis. The framework has three components: image registration, multimodal image fusion, and lesion object detection. Unlike traditional approaches, it fuses the multimodal medical images with an adaptive image fusion method. We discuss the performance of deep learning on each modality and conduct extensive experiments comparing different image fusion methods, paired with several state-of-the-art (SOTA) deep learning-based object-detection models, across image modalities. Compared with PET, the single modality with the highest recognition accuracy, our method improves recognition accuracy across multiple object detection models by an average of 6.06%; compared with the best results of other multimodal fusion methods, it improves by an average of 8.9%.
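The abstract outlines a three-stage pipeline: register the PET and CT volumes, fuse the aligned slices into a single image, and run an object detector on the fused result. The exact adaptive fusion rule is not given in the abstract; the sketch below is a minimal, hypothetical illustration of the fusion step only, assuming already-registered 2D slices and a simple saliency-style weighting (the names `normalize` and `fuse_pet_ct` and the PET-uptake weight map are illustrative assumptions, not the authors' method). The fused slices would then be fed to any standard detection model, as the paper does with several SOTA detectors.

```python
# Illustrative sketch of a PET/CT fusion step (not the paper's actual method).
# Assumes the PET and CT slices are already registered to the same grid.
import numpy as np

def normalize(img: np.ndarray) -> np.ndarray:
    """Scale an image slice to [0, 1] (epsilon guards constant slices)."""
    img = img.astype(np.float32)
    return (img - img.min()) / (img.max() - img.min() + 1e-8)

def fuse_pet_ct(pet: np.ndarray, ct: np.ndarray) -> np.ndarray:
    """Fuse registered PET and CT slices into one image.

    Hypothetical rule: the per-pixel weight favors PET where its
    normalized uptake is high (likely lesion regions) and CT elsewhere
    (anatomical context).
    """
    pet_n, ct_n = normalize(pet), normalize(ct)
    w = pet_n  # weight map: higher PET uptake -> larger PET contribution
    return w * pet_n + (1.0 - w) * ct_n

if __name__ == "__main__":
    # Toy example with random arrays standing in for registered PET/CT slices.
    rng = np.random.default_rng(0)
    pet_slice = rng.random((128, 128))
    ct_slice = rng.random((128, 128))
    fused = fuse_pet_ct(pet_slice, ct_slice)
    print(fused.shape, float(fused.min()), float(fused.max()))
```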