Segmentation
Artificial intelligence
Computer science
Prostate
Magnetic resonance imaging
Dataset
Pattern recognition (psychology)
Medicine
Radiology
Cancer
Internal medicine
Authors
Renato Cuocolo, Albert Comelli, Alessandro Stefano, Viviana Benfante, Navdeep Dahiya, Arnaldo Stanzione, Anna Castaldo, Davide Raffaele De Lucia, Anthony Yezzi, Massimo Imbriaco
Abstract
Background: Prostate volume, as determined by magnetic resonance imaging (MRI), is a useful biomarker for distinguishing between benign and malignant pathology and can be used either alone or combined with other parameters such as prostate-specific antigen.

Purpose: This study compared different deep learning methods for whole-gland and zonal prostate segmentation.

Study Type: Retrospective.

Population: A total of 204 patients (train/test = 99/105) from the PROSTATEx public dataset.

Field Strength/Sequence: 3 T, TSE T2-weighted.

Assessment: Four operators performed manual segmentation of the whole gland, the central zone + anterior stroma + transition zone (TZ), and the peripheral zone (PZ). U-Net, efficient neural network (ENet), and efficient residual factorized ConvNet (ERFNet) were trained and tuned on the training data through 5-fold cross-validation to segment the whole gland and the TZ separately, while automated PZ masks were obtained by subtracting the TZ mask from the whole-gland mask.

Statistical Tests: Networks were evaluated on the test set using several accuracy metrics, including the Dice similarity coefficient (DSC). Model DSC was compared in both the training and test sets using analysis of variance (ANOVA) and post hoc tests. Parameter count, disk size, and training and inference times characterized network computational complexity and were also used to assess differences in model performance. P < 0.05 indicated statistical significance.

Results: The best DSC (P < 0.05) in the test set was achieved by ENet: 91% ± 4% for the whole gland, 87% ± 5% for the TZ, and 71% ± 8% for the PZ. U-Net and ERFNet obtained, respectively, 88% ± 6% and 87% ± 6% for the whole gland, 86% ± 7% and 84% ± 7% for the TZ, and 70% ± 8% and 65% ± 8% for the PZ. Training and inference times were lowest for ENet.

Data Conclusion: Deep learning networks can accurately segment the prostate on T2-weighted images.

Evidence Level: 4. Technical Efficacy: Stage 2.