Medicine
Lymph node
H&E staining
Receiver operating characteristic
Radiology
Breast cancer
Deep learning
Algorithm
Lymph
Cancer
Test set
Pathology
Artificial intelligence
Internal medicine
Machine learning
Staining
Computer science
Authors
Babak Ehteshami Bejnordi,Mitko Veta,Paul Johannes van Diest,Bram van Ginneken,Nico Karssemeijer,Geert Litjens,Jeroen van der Laak,Meyke Hermsen,Quirine F. Manson,Maschenka Balkenhol,Oscar Geessink,Nikolas Stathonikos,Marcory CRF van Dijk,Peter Bult,Francisco Beça,Andrew H. Beck,D. Wang,Aditya Khosla,Rishab Gargeya,Humayun Irshad
Source
Journal: JAMA
[American Medical Association]
Date: 2017-12-12
Volume/Issue: 318 (22): 2199-2210
Citations: 2798
Identifiers
DOI: 10.1001/jama.2017.14585
Abstract
Importance
Application of deep learning algorithms to whole-slide pathology images can potentially improve diagnostic accuracy and efficiency.
Objective
To assess the performance of automated deep learning algorithms at detecting metastases in hematoxylin and eosin–stained tissue sections of lymph nodes of women with breast cancer, and to compare it with pathologists' diagnoses in a diagnostic setting.
Design, Setting, and Participants
Researcher challenge competition (CAMELYON16) to develop automated solutions for detecting lymph node metastases (November 2015-November 2016). A training data set of whole-slide images from 2 centers in the Netherlands, with (n = 110) and without (n = 160) nodal metastases verified by immunohistochemical staining, was provided to challenge participants to build algorithms. Algorithm performance was evaluated on an independent test set of 129 whole-slide images (49 with and 80 without metastases). The corresponding glass slides of the same test set were also evaluated by a panel of 11 pathologists with time constraint (WTC) from the Netherlands, who ascertained the likelihood of nodal metastases for each slide in a flexible 2-hour session simulating routine pathology workflow, and by 1 pathologist without time constraint (WOTC).
Exposures
Deep learning algorithms submitted as part of a challenge competition, or pathologist interpretation.
Main Outcomes and Measures
The presence of specific metastatic foci, and the absence vs presence of lymph node metastasis in a slide or image, assessed using receiver operating characteristic (ROC) curve analysis. The 11 pathologists participating in the simulation exercise rated their diagnostic confidence as definitely normal, probably normal, equivocal, probably tumor, or definitely tumor.
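As a concrete illustration of this slide-level readout, the sketch below shows how such 5-level ordinal confidence ratings can be turned into an ROC curve and AUC with scikit-learn. The mapping, labels, and ratings are illustrative placeholders, not data from the study.

```python
# Minimal sketch: ROC curve and AUC from 5-level ordinal confidence ratings.
# All values below are illustrative placeholders, not values from the study.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Map categorical ratings to an ordinal score (higher = more suspicious).
RATING_TO_SCORE = {
    "definitely normal": 0,
    "probably normal": 1,
    "equivocal": 2,
    "probably tumor": 3,
    "definitely tumor": 4,
}

# Hypothetical per-slide ground truth (1 = metastasis present) and ratings.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
ratings = ["definitely normal", "probably normal", "definitely tumor",
           "probably tumor", "equivocal", "definitely tumor",
           "definitely normal", "equivocal"]
y_score = np.array([RATING_TO_SCORE[r] for r in ratings])

fpr, tpr, thresholds = roc_curve(y_true, y_score)  # operating points
auc = roc_auc_score(y_true, y_score)               # area under the curve
print(f"AUC = {auc:.3f}")
```

Because ROC analysis depends only on the ranking of the scores, an ordinal confidence scale like this one can be compared directly with an algorithm's continuous probability outputs on the same AUC axis.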
Results
The area under the receiver operating characteristic curve (AUC) for the algorithms ranged from 0.556 to 0.994. The top-performing algorithm achieved a lesion-level true-positive fraction comparable with that of the pathologist WOTC (72.4% [95% CI, 64.3%-80.4%]) at a mean of 0.0125 false positives per normal whole-slide image. For the whole-slide image classification task, the best algorithm (AUC, 0.994 [95% CI, 0.983-0.999]) performed significantly better than the pathologists WTC in a diagnostic simulation (mean AUC, 0.810 [range, 0.738-0.884]; P < .001). The top 5 algorithms had a mean AUC comparable with that of the pathologist interpreting the slides in the absence of time constraints (mean AUC, 0.960 [range, 0.923-0.994] for the top 5 algorithms vs 0.966 [95% CI, 0.927-0.998] for the pathologist WOTC).
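For the lesion-level figure quoted above, the simplified sketch below shows how a true-positive fraction at a target mean number of false positives per normal slide can be read off scored candidate detections. It assumes candidate-to-lesion matching has already been done (the hypothetical hit_lesion_id field) and does not reproduce the actual CAMELYON16 FROC evaluation protocol.

```python
# Simplified sketch of a lesion-level operating point: the true-positive
# fraction achieved while keeping the mean number of false positives per
# normal whole-slide image at or below a target (e.g., 0.0125 as quoted
# in the abstract). The real CAMELYON16 evaluation matches candidates to
# annotated lesion regions; that matching is assumed done here.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Candidate:
    slide_id: str
    score: float                  # algorithm confidence for this candidate
    hit_lesion_id: Optional[str]  # matched lesion, or None for a false positive

def tpf_at_fp_rate(candidates, total_lesions, normal_slide_ids,
                   target_fp_per_normal_slide):
    """Highest lesion-level true-positive fraction whose false-positive
    count on normal slides stays at or below the target rate."""
    best = 0.0
    for t in sorted({c.score for c in candidates}, reverse=True):
        kept = [c for c in candidates if c.score >= t]
        detected = {c.hit_lesion_id for c in kept if c.hit_lesion_id is not None}
        fps = sum(1 for c in kept
                  if c.hit_lesion_id is None and c.slide_id in normal_slide_ids)
        if fps / len(normal_slide_ids) <= target_fp_per_normal_slide:
            best = max(best, len(detected) / total_lesions)
    return best
```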
Conclusions and Relevance
In the setting of a challenge competition, some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic routine pathology workflow; algorithm performance was comparable with an expert pathologist interpreting whole-slide images without time constraints. Whether this approach has clinical utility will require evaluation in a clinical setting.