Interpretability
Computer science
Artificial intelligence
Discriminative
Contextual image classification
Machine learning
Pattern recognition (psychology)
Image (mathematics)
Deep learning
Classifier (UML)
Data mining
Authors
Zhenliang Li, Liming Yuan, Haixia Xu, Rui Cheng, Xianbin Wen
Identifier
DOI:10.1109/bibm49941.2020.9313518
Abstract
Existing multi-instance learning (MIL) methods for medical image classification typically segment an image (bag) into small patches (instances) and learn a classifier to predict the label of an unknown bag. Most of such methods assume that instances within a bag are independently and identically distributed. However, instances in the same bag often interact with each other. In this paper, we propose an Induced Self-Attention based deep MIL method that uses the self-attention mechanism to learn the global structure information within a bag. To alleviate the computational complexity of the naive implementation of self-attention, we introduce an inducing-point based scheme into the self-attention block. We show empirically that the proposed method is superior to other deep MIL methods in terms of performance and interpretability on three medical image data sets. We also employ a synthetic MIL data set to provide an in-depth analysis of the effectiveness of our method. The experimental results reveal that the induced self-attention mechanism can learn highly discriminative and distinct features for target and non-target instances within a bag, and thus fits more general MIL problems.
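For concreteness, the following is a minimal sketch (not the authors' implementation) of how an inducing-point self-attention block could be wired into a deep MIL pipeline in PyTorch. The class names, layer sizes, number of inducing points, and the mean-pooling bag head are illustrative assumptions; the key idea shown is that a small set of learnable inducing points reduces the quadratic cost of self-attention over the instances of a bag to linear in the bag size.

```python
import torch
import torch.nn as nn


class InducedSelfAttention(nn.Module):
    """Sketch of an inducing-point self-attention block for MIL (assumed design).

    Instead of full O(n^2) self-attention over the n instances of a bag,
    m learnable inducing points first attend to the instances, and the
    instances then attend to the resulting summary, giving O(n*m) cost.
    """

    def __init__(self, dim=128, num_inducing=16, num_heads=4):
        super().__init__()
        self.inducing = nn.Parameter(torch.randn(1, num_inducing, dim))
        self.attn1 = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn2 = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                      # x: (batch, n_instances, dim)
        b = x.size(0)
        ind = self.inducing.expand(b, -1, -1)  # (batch, m, dim)
        # Step 1: inducing points summarise the bag (queries = inducing points).
        h, _ = self.attn1(ind, x, x)           # (batch, m, dim)
        # Step 2: instances read the summary back (queries = instances).
        out, _ = self.attn2(x, h, h)           # (batch, n_instances, dim)
        return out


class MILClassifier(nn.Module):
    """Toy bag classifier: instance encoder -> induced self-attention -> mean pool."""

    def __init__(self, in_dim=512, dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, dim), nn.ReLU())
        self.isa = InducedSelfAttention(dim)
        self.head = nn.Linear(dim, 1)

    def forward(self, bag):                    # bag: (batch, n_instances, in_dim)
        z = self.isa(self.encoder(bag))
        return torch.sigmoid(self.head(z.mean(dim=1)))  # bag-level probability


if __name__ == "__main__":
    bag = torch.randn(2, 50, 512)              # 2 bags of 50 patch features each
    print(MILClassifier()(bag).shape)          # torch.Size([2, 1])
```

In this sketch the mean-pooling head is only a placeholder; an attention-based pooling over the refined instance features could be substituted without changing the induced self-attention block itself.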