Interpretability
Computer science
Machine learning
Artificial intelligence
Set (abstract data type)
Task (project management)
Suite
Sequence (biology)
Initialization
Deep learning
Artificial neural network
Biology
History
Genetics
Economics
Archaeology
Management
Programming language
Authors
Brandon Michael Carter, Max Bileschi, Jamie Smith, Theo Sanderson, Drew Bryant, David Belanger, Lucy J. Colwell
Identifiers
DOI: 10.1089/cmb.2019.0339
Abstract
In many application domains, neural networks are highly accurate and have been deployed at large scale. However, users often do not have good tools for understanding how these models arrive at their predictions. This has hindered adoption in fields such as the life and medical sciences, where researchers require that models base their decisions on underlying biological phenomena rather than peculiarities of the dataset. We propose a set of methods for critiquing deep learning models and demonstrate their application for protein family classification, a task for which high-accuracy models have considerable potential impact. Our methods extend the Sufficient Input Subsets (SIS) technique, which we use to identify subsets of features in each protein sequence that are alone sufficient for classification. Our suite of tools analyzes these subsets to shed light on the decision-making criteria employed by models trained on this task. These tools show that while deep models may perform classification for biologically relevant reasons, their behavior varies considerably across the choice of network architecture and parameter initialization. While the techniques that we develop are specific to the protein sequence classification task, the approach taken generalizes to a broad set of scientific contexts in which model interpretability is essential.
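The abstract describes extending the Sufficient Input Subsets (SIS) technique to find positions in a protein sequence that are alone sufficient for the model's classification. As a rough illustration of that idea (not the authors' exact procedure), the sketch below greedily masks sequence positions and then rebuilds a minimal subset that keeps the model's confidence above a threshold; the function names and the masking scheme are assumptions for this example.

```python
# Simplified sketch of the SIS idea: find a small set of sequence positions
# that, with everything else masked, still keeps the classifier's confidence
# above a threshold. Hypothetical interface; not the published implementation.
from typing import Callable, List, Set


def find_sufficient_subset(
    confidence: Callable[[Set[int]], float],  # class probability given the unmasked positions
    length: int,                              # number of positions in the sequence
    threshold: float,                         # confidence the subset must preserve
) -> List[int]:
    """Greedy backward elimination followed by forward reconstruction."""
    # Backward pass: repeatedly mask the position whose removal hurts
    # confidence the least, recording the removal order.
    unmasked = set(range(length))
    removal_order: List[int] = []
    while unmasked:
        least_needed = max(unmasked, key=lambda i: confidence(unmasked - {i}))
        unmasked.remove(least_needed)
        removal_order.append(least_needed)

    # Forward pass: add positions back, starting with those removed last
    # (the most influential), until the threshold is met.
    subset: Set[int] = set()
    for pos in reversed(removal_order):
        subset.add(pos)
        if confidence(subset) >= threshold:
            break
    return sorted(subset)


if __name__ == "__main__":
    # Toy "model": confidence grows with how many of positions {2, 5, 7}
    # remain unmasked, mimicking a classifier that keys on a short motif.
    motif = {2, 5, 7}
    toy_confidence = lambda kept: len(kept & motif) / len(motif)
    print(find_sufficient_subset(toy_confidence, length=10, threshold=1.0))
    # -> [2, 5, 7]
```

In this toy run the recovered subset coincides with the motif the model relies on, which is the kind of biologically interpretable evidence the paper's tools are meant to surface.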