Computer Science
Natural Language Processing
Natural Language
Artificial Intelligence
Authors
Siwen Luo, Hamish Ivison, Soyeon Caren Han, Josiah Poon
Abstract
As the use of deep learning techniques has grown across various fields over the past decade, complaints about the opaqueness of black-box models have also risen, resulting in a growing focus on transparency in deep learning models. This work investigates various methods to improve the interpretability of deep neural networks for Natural Language Processing (NLP) tasks, including machine translation and sentiment analysis. We begin with a comprehensive discussion of the term interpretability and its various aspects. The methods collected and summarised in this survey concern local interpretation only and are divided into three categories: (1) interpreting the model's predictions through related input features; (2) interpreting through natural language explanation; (3) probing the hidden states of models and word representations.
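To make category (1) concrete, below is a minimal, illustrative sketch of one common input-feature attribution technique, gradient × input saliency, applied to a toy sentiment classifier. This is not code from the survey: the model, vocabulary, and pooling choice are all invented for illustration, and real attribution methods vary in how they aggregate and normalise scores.

```python
# Illustrative sketch (assumed setup, not the survey's code): gradient x input
# saliency over token embeddings for a toy PyTorch sentiment classifier.
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab = {"the": 0, "movie": 1, "was": 2, "great": 3, "terrible": 4}

class ToySentimentModel(nn.Module):
    def __init__(self, vocab_size=5, dim=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.classifier = nn.Linear(dim, 2)  # two classes: negative / positive

    def forward(self, embedded):
        # Mean-pool the token embeddings, then classify.
        return self.classifier(embedded.mean(dim=1))

model = ToySentimentModel()
sentence = ["the", "movie", "was", "great"]
tokens = torch.tensor([[vocab[w] for w in sentence]])

# Embed the tokens and track gradients with respect to the embeddings.
embedded = model.embed(tokens).detach().requires_grad_(True)
logits = model(embedded)
pred = logits[0].argmax().item()

# Backpropagate the predicted class's logit down to the input embeddings.
logits[0, pred].backward()

# Gradient x input, summed over the embedding dimension, gives a per-token score.
saliency = (embedded.grad * embedded).sum(dim=-1).squeeze(0)
for word, score in zip(sentence, saliency.tolist()):
    print(f"{word:>8s}: {score:+.4f}")
```

Tokens with larger absolute scores are the input features that most influenced the prediction, which is exactly the kind of local, per-example explanation the first category describes.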