Interpretability
Computer science
Artificial intelligence
Convolutional neural network
Deep learning
Representation (politics)
Pattern recognition (psychology)
Pixel
Image (mathematics)
Scale (ratio)
Deep neural network
Machine learning
Physics
Law
Politics
Quantum mechanics
Political science
Authors
Hangchen Xiang,Junyi Shen,Qingguo Yan,Meilian Xu,Xiaoshuang Shi,Xiaofeng Zhu
Identifier
DOI:10.1016/j.media.2023.102890
Abstract
Recently, convolutional neural networks (CNNs) directly using whole slide images (WSIs) for tumor diagnosis and analysis have attracted considerable attention, because they only utilize the slide-level label for model training without any additional annotations. However, it is still a challenging task to directly handle gigapixel WSIs, due to the billions of pixels and intra-slide variations in each WSI. To overcome this problem, in this paper, we propose a novel end-to-end interpretable deep MIL framework for WSI analysis, by using a two-branch deep neural network and a multi-scale representation attention mechanism to directly extract features from all patches of each WSI. Specifically, we first divide each WSI into bag-, patch- and cell-level images, and then assign the slide-level label to its corresponding bag-level images, so that WSI classification becomes a MIL problem. Additionally, we design a novel multi-scale representation attention mechanism, and embed it into a two-branch deep network to simultaneously mine the bag with a correct label, the significant patches and their cell-level information. Extensive experiments demonstrate the superior performance of the proposed framework over recent state-of-the-art methods, in terms of classification accuracy and model interpretability. All source code is released at: https://github.com/xhangchen/MRAN/.
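For illustration only, the sketch below shows a generic attention-based MIL pooling step in PyTorch: patch features from one WSI are treated as a bag, scored by a learned gate, and aggregated into a slide-level representation whose attention weights indicate the significant patches. The module name and parameters (AttentionMILPooling, feat_dim, hidden_dim) are hypothetical and are not taken from the authors' MRAN code; see the linked repository for the actual two-branch, multi-scale implementation.

# Hypothetical attention-based MIL pooling sketch (not the authors' MRAN code).
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Aggregate patch-level features of one WSI (a bag) into a bag-level feature."""

    def __init__(self, feat_dim: int = 512, hidden_dim: int = 128) -> None:
        super().__init__()
        # Small gating network that assigns a relevance score to each patch.
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, patch_feats: torch.Tensor):
        # patch_feats: (N, feat_dim) embeddings of all N patches from one slide.
        scores = self.attention(patch_feats)           # (N, 1) raw patch scores
        weights = torch.softmax(scores, dim=0)         # normalize over patches in the bag
        bag_feat = (weights * patch_feats).sum(dim=0)  # (feat_dim,) slide-level feature
        return bag_feat, weights.squeeze(-1)           # weights expose which patches mattered

if __name__ == "__main__":
    pooling = AttentionMILPooling(feat_dim=512)
    patches = torch.randn(1000, 512)                   # e.g. 1000 patch embeddings from one WSI
    bag_feat, patch_weights = pooling(patches)
    print(bag_feat.shape, patch_weights.shape)         # torch.Size([512]) torch.Size([1000])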