Interpretability
Computer science
Artificial intelligence
Deep learning
Machine learning
Pattern recognition (psychology)
Hash function
Contextual image classification
Artificial neural network
k-nearest neighbors algorithm
Task (project management)
Image (mathematics)
Data mining
Computer security
Economics
Management
Authors
Tingying Peng, Melanie Boxberg, Wilko Weichert, Nassir Navab, Carsten Marr
Abstract
Deep neural networks have achieved tremendous success in image recognition, classification and object detection. However, deep learning is often criticised for its lack of transparency and general inability to rationalize its predictions. The issue of poor model interpretability becomes critical in medical applications, as a model that is not understood and trusted by physicians is unlikely to be used in daily clinical practice. In this work, we develop a novel multi-task deep learning framework for simultaneous histopathology image classification and retrieval, leveraging the classic concept of k-nearest neighbors to improve model interpretability. For a test image, we retrieve the most similar images from our training database. These retrieved nearest neighbours can be used to classify the test image with a confidence score, and provide a human-interpretable explanation of the classification. Our framework can be built on top of any existing classification network (and therefore benefit from pretrained models) by adding (i) a triplet loss function with a novel triplet sampling strategy to compare distances between samples and (ii) a Cauchy hashing loss function to accelerate neighbour searching. We evaluate our method on colorectal cancer histology slides and show that the confidence estimates are strongly correlated with model performance. The explanations provided by nearest neighbors are intuitive and useful for expert evaluation, giving insight into possible model failures, and can support clinical decision making by comparing archived images and patient records with the actual case.
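The abstract names three ingredients: a triplet loss that shapes the embedding space, a Cauchy-based hashing loss that makes near-binary codes comparable by Hamming distance, and kNN retrieval whose neighbour labels yield both a prediction and a confidence score. A minimal NumPy sketch of these pieces is below; the function names, the margin/`gamma` defaults, and the confidence definition (fraction of retrieved neighbours agreeing with the majority label) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Standard triplet loss: pull the anchor toward the positive embedding
    # and push it at least `margin` further from the negative one.
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(d_pos - d_neg + margin, 0.0)

def cauchy_similarity(h1, h2, gamma=1.0):
    # Cauchy kernel on the (approximate) Hamming distance between codes
    # in {-1, +1}^d: close codes -> similarity near 1, distant -> near 0.
    d = np.sum((h1 - h2) ** 2) / 4.0
    return gamma / (gamma + d)

def knn_classify_with_confidence(query_code, db_codes, db_labels, k=5):
    # Retrieve the k nearest neighbours in Hamming space and return
    # (majority label, confidence = fraction of neighbours with that label).
    dists = np.sum((db_codes - query_code) ** 2, axis=1) / 4.0
    nn = np.argsort(dists)[:k]
    labels, counts = np.unique(db_labels[nn], return_counts=True)
    best = np.argmax(counts)
    return labels[best], counts[best] / k
```

In this sketch a unanimous neighbourhood gives confidence 1.0, while a split neighbourhood lowers it, which matches the abstract's claim that confidence estimates track model performance: ambiguous test images retrieve mixed neighbours.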