Interpretability
Discriminative model
Artificial intelligence
Computer science
Semantics (computer science)
Normalization (sociology)
Representation (politics)
Convolutional neural network
Set (abstract data type)
Pattern recognition (psychology)
Dropout (neural networks)
Machine learning
Natural language processing
Political science
Sociology
Programming language
Law
Politics
Anthropology
Authors
David Bau,Bolei Zhou,Aditya Khosla,Aude Oliva,Antonio Torralba
Source
Venue: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Date: 2017-07-01
Pages: 3319-3327
Citations: 1169
Identifier
DOI: 10.1109/cvpr.2017.354
Abstract
We propose a general framework called Network Dissection for quantifying the interpretability of latent representations of CNNs by evaluating the alignment between individual hidden units and a set of semantic concepts. Given any CNN model, the proposed method draws on a data set of concepts to score the semantics of hidden units at each intermediate convolutional layer. The units with semantics are labeled across a broad range of visual concepts including objects, parts, scenes, textures, materials, and colors. We use the proposed method to test the hypothesis that interpretability is an axis-independent property of the representation space, then we apply the method to compare the latent representations of various networks when trained to solve different classification problems. We further analyze the effect of training iterations, compare networks trained with different initializations, and measure the effect of dropout and batch normalization on the interpretability of deep visual representations. We demonstrate that the proposed method can shed light on characteristics of CNN models and training methods that go beyond measurements of their discriminative power.
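The core operation described in the abstract, scoring each hidden unit by how well its thresholded activation maps align with a dataset of labeled concept masks, can be sketched as follows. This is a minimal illustration under assumed inputs (precomputed unit activations already upsampled to the resolution of binary concept masks); the function names and the specific quantile and IoU cutoff values are illustrative defaults, not the authors' released implementation.

```python
# Minimal sketch of unit-to-concept alignment scoring in the spirit of
# Network Dissection. Inputs are assumed to be precomputed: the unit's
# activation maps over a probe dataset and binary segmentation masks for
# each concept, all at the same spatial resolution.
import numpy as np

def unit_concept_iou(activations, concept_masks, quantile=0.995):
    """Return an IoU score per concept for a single hidden unit.

    activations:   (N, H, W) array of the unit's feature maps.
    concept_masks: dict mapping concept name -> (N, H, W) boolean masks.
    quantile:      activation quantile used to binarize the unit's
                   response (top 0.5% here, an illustrative choice).
    """
    # Binarize the unit's response with a single dataset-wide threshold.
    threshold = np.quantile(activations, quantile)
    unit_mask = activations > threshold

    scores = {}
    for name, masks in concept_masks.items():
        intersection = np.logical_and(unit_mask, masks).sum()
        union = np.logical_or(unit_mask, masks).sum()
        scores[name] = intersection / union if union > 0 else 0.0
    return scores

def label_unit(scores, iou_cutoff=0.04):
    """Label the unit with its best-matching concept, if any clears the cutoff."""
    best = max(scores, key=scores.get)
    return best if scores[best] > iou_cutoff else None
```

Counting how many units across a layer receive a concept label in this way gives a per-layer interpretability measure, which is the quantity the abstract compares across architectures, training iterations, and regularizers such as dropout and batch normalization.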