Magnetic resonance imaging
Segmentation
Computer science
Artificial intelligence
Encoder
Apparent diffusion coefficient
Hausdorff distance
Similarity (geometry)
Sørensen–Dice coefficient
Pattern recognition (psychology)
Image segmentation
Radiology
Image (mathematics)
Medicine
Operating system
Authors
Shan Jin, Hongming Xu, Yue Dong, Xinyu Hao, Fengying Qin, Qi Xu, Yong Zhu, Fengyu Cong
Abstract
Automatic cervical cancer segmentation in multimodal magnetic resonance imaging (MRI) is essential because tumor location and delineation can support patients' diagnosis and treatment planning. To meet this clinical demand, we present an encoder-decoder deep learning architecture that employs an EfficientNet encoder in the UNet++ architecture (E-UNet++). EfficientNet effectively encodes multiscale image features, while the nested decoders with skip connections aggregate these features from low level to high level, which helps in detecting fine-grained details. A cohort of 228 cervical cancer patients with multimodal MRI sequences, including T2-weighted imaging, diffusion-weighted imaging, apparent diffusion coefficient imaging, contrast-enhanced T1-weighted imaging, and dynamic contrast-enhanced (DCE) imaging, is explored. Evaluations are performed on either single or multimodal MRI with standard quantitative segmentation metrics: Dice similarity coefficient (DSC), intersection over union (IoU), and 95% Hausdorff distance (HD). Our results show that the E-UNet++ model achieves DSC values of 0.681–0.786, IoU values of 0.558–0.678, and 95% HD values of 3.779–7.411 pixels on the individual sequences. It further provides DSC values of 0.644 and 0.687 on the three DCE subsequences and on all MRI sequences together, respectively. Our model outperforms the comparative models, which shows its potential as an artificial intelligence tool for cervical cancer segmentation in multimodal MRI.
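The abstract does not give implementation details, but an EfficientNet-encoder UNet++ of this kind can be assembled with the open-source segmentation_models_pytorch library. The sketch below is illustrative only: the encoder variant (efficientnet-b4), single-channel input, and 256×256 slice size are assumptions, not the authors' reported configuration.

```python
# Minimal sketch of an EfficientNet-encoder UNet++ ("E-UNet++"-style) model.
# The encoder variant, input channels, and slice size are illustrative
# assumptions, not the configuration published in the paper.
import torch
import segmentation_models_pytorch as smp

model = smp.UnetPlusPlus(
    encoder_name="efficientnet-b4",  # EfficientNet backbone encodes multiscale features
    encoder_weights="imagenet",      # ImageNet pretraining as a generic starting point
    in_channels=1,                   # one channel for a single MRI sequence (assumption)
    classes=1,                       # binary tumor-vs-background mask
)

x = torch.randn(2, 1, 256, 256)      # batch of 2 single-channel 256x256 slices
with torch.no_grad():
    logits = model(x)                # (2, 1, 256, 256) raw scores
mask = torch.sigmoid(logits) > 0.5   # thresholded binary segmentation
print(mask.shape)
```

For multimodal input, in_channels could instead be set to the number of stacked MRI sequences; whether the authors fuse sequences at the input or elsewhere is not stated in the abstract.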
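The three reported metrics have standard definitions for binary masks. Below is a minimal sketch assuming pixel-level evaluation on non-empty masks, with 95% HD computed from pooled symmetric surface distances via Euclidean distance transforms; some implementations instead take the maximum of the two directed 95th percentiles, and the paper's exact protocol is not specified.

```python
# Sketch of the three reported metrics (DSC, IoU, 95% HD) for binary masks,
# following the standard definitions; distances are in pixels.
import numpy as np
from scipy import ndimage

def dice_iou(pred: np.ndarray, gt: np.ndarray):
    """Dice similarity coefficient and intersection over union.

    Assumes at least one mask is non-empty.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dsc = 2.0 * inter / (pred.sum() + gt.sum())
    iou = inter / np.logical_or(pred, gt).sum()
    return dsc, iou

def hd95(pred: np.ndarray, gt: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between mask surfaces.

    Assumes both masks are non-empty.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    # Surface pixels: mask minus its erosion.
    pred_surf = pred ^ ndimage.binary_erosion(pred)
    gt_surf = gt ^ ndimage.binary_erosion(gt)
    # Distance from every pixel to the nearest surface pixel of the other mask
    # (distance_transform_edt measures distance to the nearest zero pixel).
    dt_gt = ndimage.distance_transform_edt(~gt_surf)
    dt_pred = ndimage.distance_transform_edt(~pred_surf)
    # Pool both directed surface-distance sets and take the 95th percentile.
    dists = np.concatenate([dt_gt[pred_surf], dt_pred[gt_surf]])
    return float(np.percentile(dists, 95))
```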