Keywords: melanoma; machine learning; computer science; artificial neural network; dermatology; pattern recognition (psychology); identification
DOI: 10.1093/annonc/mdy519
Abstract
In a recently published article in the Annals of Oncology [1. Haenssle H, Fink C, Schneiderbauer R, et al. Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists. Ann Oncol. 2018;29:1836-1842], Haenssle et al. compare the performance of a deep learning model with that of 58 dermatologists. The article was of high general quality, yet two aspects of the methodology require clarification.

First, the authors underestimate human performance by using a metric that they call the receiver operating characteristic (ROC) area. This is not the same metric as the ROC area under the curve (AUC), against which they compare it. The ROC-AUC is the calculated area under the ROC curve, whereas the ROC area is the average of sensitivity and specificity at a given operating point. Comparing two different metrics as if they were the same is inappropriate. As readers, we cannot calculate the ROC-AUC for the dermatologist group from the data provided, but we can calculate the ROC area for the model at the specified operating points. These values are presented in Table 1, which shows no difference between the model and the dermatologists in these experiments.

Table 1. The performance of the CNN and dermatologists on the task

                             Sensitivity   Specificity   AUC   ROC area
CNN (0.5 threshold)          95            63.8          86    79(a)
Dermatologists (level I)     86.6          71.3          –     79

(a) ROC area for the model (not presented in the article).
AUC, area under the curve; ROC, receiver operating characteristic curve.

Second, the authors also present sensitivity and specificity results at the level of human sensitivity, but the mechanism for selecting this operating point is not stated; it likely occurred post-experiment.
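The metric mismatch can be made concrete with a short sketch. The scores, labels, and function names below are ours, purely for illustration and not drawn from the study's data: the "ROC area" is the mean of sensitivity and specificity at one chosen threshold, while the ROC-AUC summarises the entire curve and does not move when the operating point moves.

```python
# Toy illustration (made-up scores and labels, not the study's data) of
# why a point metric and a whole-curve metric are not interchangeable.

def sens_spec(scores, labels, threshold):
    """Sensitivity and specificity at a single operating point."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

def roc_area(scores, labels, threshold):
    """'ROC area' in the sense criticised here: the mean of sensitivity
    and specificity at one threshold -- a point metric."""
    sens, spec = sens_spec(scores, labels, threshold)
    return (sens + spec) / 2

def roc_auc(scores, labels):
    """ROC-AUC: the probability that a random positive case outscores a
    random negative case (ties count half) -- a whole-curve metric that
    is independent of any operating point."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.35, 0.2]   # model outputs (illustrative)
labels = [1,   1,   0,   1,   0,    0]     # 1 = melanoma, 0 = benign

print(round(roc_auc(scores, labels), 3))         # 0.889
print(round(roc_area(scores, labels, 0.5), 3))   # 0.667
print(round(roc_area(scores, labels, 0.75), 3))  # 0.833
```

Note that moving the threshold from 0.5 to 0.75 changes the "ROC area" from 0.667 to 0.833 while the ROC-AUC stays at 0.889, which also illustrates how sensitive single-operating-point figures are to where the threshold is placed.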
We see evidence for this in the section 'Diagnostic accuracy of CNN versus dermatologists', where several operating points are chosen for the AI system that appear to exactly match the level of human sensitivity. If this decision was made using the training data, the sensitivity on the test data would almost certainly differ slightly from the human level. We note that in Figure 2A of Haenssle et al., the ROC curve is very steep in both directions in the region of interest, so a very small change in operating point could lead to a very large reduction in either specificity or sensitivity (into the 70s for both metrics). This suggests that the model's performance may be significantly overestimated. We expect the model of Haenssle et al. performs very well, but the methods applied overestimate the performance of the model and underestimate the performance of the human experts. The methodologies used require clarification and may raise questions about the validity of the results and the conclusions of the article.

None declared.