Interpretability
Machine learning
Benchmark
A priori and a posteriori
Computer science
Artificial intelligence
Stability (learning theory)
Regularization
Deep neural networks
Key (lock)
Artificial neural network
Epistemology
Geodesy
Computer security
Philosophy
Geography
Authors
David Alvarez-Melis, Tommi Jaakkola
Source
Journal: Cornell University - arXiv
Date: 2018-06-20
Citations: 222
Abstract
Most recent work on interpretability of complex machine learning models has focused on estimating $\textit{a posteriori}$ explanations for previously trained models around specific predictions. $\textit{Self-explaining}$ models where interpretability plays a key role already during learning have received much less attention. We propose three desiderata for explanations in general -- explicitness, faithfulness, and stability -- and show that existing methods do not satisfy them. In response, we design self-explaining models in stages, progressively generalizing linear classifiers to complex yet architecturally explicit models. Faithfulness and stability are enforced via regularization specifically tailored to such models. Experimental results across various benchmark datasets show that our framework offers a promising direction for reconciling model complexity and interpretability.
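The abstract's "progressively generalizing linear classifiers" can be illustrated with a minimal numeric sketch: a prediction of the form f(x) = θ(x)·h(x), where h(x) produces interpretable basis concepts and θ(x) produces input-dependent relevance scores. The random linear encoders below are placeholder assumptions for illustration, not the paper's trained models or its regularizers.

```python
import numpy as np

rng = np.random.default_rng(0)
W_h = rng.normal(size=(3, 5))   # placeholder concept encoder h: R^5 -> R^3
W_t = rng.normal(size=(3, 5))   # placeholder relevance network theta: R^5 -> R^3

def h(x):
    # Interpretable basis concepts (here just a tanh of a linear map).
    return np.tanh(W_h @ x)

def theta(x):
    # Input-dependent relevance scores; a constant theta would recover
    # an ordinary linear model.
    return W_t @ x

def f(x):
    # The explanation (theta, h) is the model's own computation,
    # rather than a post-hoc estimate around a prediction.
    return float(theta(x) @ h(x))

x = rng.normal(size=5)
print(f(x))
```

Faithfulness in this form is architectural (the scores θ(x) are exactly the weights used to predict); stability would additionally require θ to vary slowly with x, which the paper enforces through regularization.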