Medicine
Breast imaging
Breast MRI
McNemar's test
Receiver operating characteristic
Radiology
Breast cancer
Deep learning
Artificial intelligence
Nuclear medicine
Medical physics
Mammography
Computer science
Cancer
Internal medicine
Statistics
Mathematics
Authors
Sarah Eskreis‐Winkler,Elizabeth Sutton,Donna D’Alessio,Katherine Gallagher,Nicole B. Saphier,Joseph N. Stember,Danny F. Martinez,Elizabeth A. Morris,Katja Pinker
Abstract
Background: Background parenchymal enhancement (BPE) is assessed on breast MRI reports as mandated by the Breast Imaging Reporting and Data System (BI-RADS) but is prone to inter- and intrareader variation. Semiautomated and fully automated BPE assessment tools have been developed, but none has surpassed radiologist BPE designations.
Purpose: To develop a deep learning model for automated BPE classification and to compare its performance with current standard-of-care radiology report BPE designations.
Study Type: Retrospective.
Population: Consecutive high-risk patients (i.e., >20% lifetime risk of breast cancer) who underwent contrast-enhanced screening breast MRI from October 2013 to January 2019. The study included 5224 breast MRIs, divided into 3998 training, 444 validation, and 782 testing exams. On radiology reports, 1286 exams were categorized as high BPE (i.e., marked or moderate) and 3938 as low BPE (i.e., mild or minimal).
Field Strength/Sequence: A 1.5 T or 3 T system; one precontrast and three postcontrast phases of fat-saturated T1-weighted dynamic contrast-enhanced imaging.
Assessment: Breast MRIs were used to develop two deep learning models (Slab artificial intelligence [AI] and maximum intensity projection [MIP] AI) for BPE categorization using radiology report BPE labels. Models were tested on a held-out test set using radiology report BPE and three-reader averaged consensus as the reference standards.
Statistical Tests: Model performance was assessed using receiver operating characteristic curve analysis. Associations between high BPE and BI-RADS assessments were evaluated using McNemar's chi-square test (α* = 0.025).
Results: The Slab AI model significantly outperformed the MIP AI model across the full test set (area under the curve of 0.84 vs. 0.79) using the radiology report reference standard. Using the three-reader consensus BPE labels as the reference standard, our AI model significantly outperformed radiology report BPE labels. Finally, the AI model was significantly more likely than the radiologist to assign "high BPE" to suspicious breast MRIs and significantly less likely than the radiologist to assign "high BPE" to negative breast MRIs.
Data Conclusion: Fully automated BPE assessments for breast MRIs could be more accurate than BPE assessments from radiology reports.
Level of Evidence: 4
Technical Efficacy: Stage 3
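The statistical tests named above (ROC curve analysis for the continuous model output and McNemar's chi-square test for paired binary ratings) can be illustrated with a minimal Python sketch. This is not the authors' code; the synthetic labels, variable names, and the 0.5 operating point below are illustrative assumptions used only to show how such an evaluation is typically wired up.

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)

# Synthetic stand-ins (assumption, not study data): 1 = "high BPE", 0 = "low BPE".
reference = rng.integers(0, 2, size=782)                      # e.g., 3-reader consensus labels
model_scores = np.clip(reference * 0.6 + rng.normal(0.3, 0.25, 782), 0, 1)
radiologist = (reference + (rng.random(782) < 0.2)) % 2        # noisy report-style labels

# ROC analysis on the continuous model output.
auc = roc_auc_score(reference, model_scores)
fpr, tpr, thresholds = roc_curve(reference, model_scores)
model_calls = (model_scores >= 0.5).astype(int)                # assumed operating point

# McNemar's test compares two paired raters (model vs. radiologist) through the
# 2x2 table of agreement/disagreement with the reference standard.
model_correct = model_calls == reference
rad_correct = radiologist == reference
table = np.array([
    [np.sum(model_correct & rad_correct),  np.sum(model_correct & ~rad_correct)],
    [np.sum(~model_correct & rad_correct), np.sum(~model_correct & ~rad_correct)],
])
result = mcnemar(table, exact=False, correction=True)

print(f"AUC = {auc:.3f}")
print(f"McNemar chi-square = {result.statistic:.2f}, p = {result.pvalue:.4f}")

In a study like this one, the p-value would be compared against the stated alpha of 0.025 to decide whether the model and the radiologist report differ significantly in their "high BPE" assignments.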