Concepts
Interpretability, Computer science, Mechanism (biology), Artificial intelligence, Deep learning, Representation (politics), Focus (optics), Field (mathematics), Deep neural network, Machine learning, Cognitive science, Feature (linguistics), Psychology, Philosophy, Physics, Optics, Epistemology, Law, Pure mathematics, Politics, Linguistics, Mathematics, Political science
Authors
Zhaoyang Niu,Guoqiang Zhong,Hui Yu
Identifier
DOI:10.1016/j.neucom.2021.03.091
Abstract
Attention has arguably become one of the most important concepts in the deep learning field. It is inspired by human biological systems, which tend to focus on distinctive parts when processing large amounts of information. With the development of deep neural networks, the attention mechanism has been widely used in diverse application domains. This paper aims to give an overview of the state-of-the-art attention models proposed in recent years. Toward a better general understanding of attention mechanisms, we define a unified model that is suitable for most attention structures. Each step of the attention mechanism implemented in the model is described in detail. Furthermore, we classify existing attention models according to four criteria: the softness of attention, forms of input features, input representation, and output representation. In addition, we summarize network architectures used in conjunction with the attention mechanism and describe some typical applications of the attention mechanism. Finally, we discuss the interpretability that attention brings to deep learning and present its potential future trends.
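As a concrete illustration of the generic attention computation the abstract refers to (score the inputs, normalize the scores, and form a weighted sum), below is a minimal NumPy sketch of soft dot-product attention. The function name, shapes, and toy data are illustrative assumptions, not the paper's own unified model or implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - np.max(x, axis=axis, keepdims=True)
    e = np.exp(x)
    return e / np.sum(e, axis=axis, keepdims=True)

def dot_product_attention(query, keys, values):
    """Generic soft attention: score -> normalize -> weighted sum.

    query:  (d,)    current state attending over the inputs
    keys:   (n, d)  one key per input feature
    values: (n, dv) one value per input feature
    Returns the context vector (dv,) and the attention weights (n,).
    """
    scores = keys @ query        # alignment scores e_i = k_i . q
    weights = softmax(scores)    # attention distribution over the n inputs
    context = weights @ values   # weighted sum of the values
    return context, weights

# Toy usage (hypothetical data): attend over 4 input features of dimension 3.
rng = np.random.default_rng(0)
q = rng.normal(size=3)
K = rng.normal(size=(4, 3))
V = rng.normal(size=(4, 5))
ctx, w = dot_product_attention(q, K, V)
print(w, w.sum())  # soft attention weights sum to 1
```

Variants surveyed in the paper (e.g., hard vs. soft attention, different scoring functions, self-attention) change how the scores are computed or how the weights are used, but they follow this same score-normalize-aggregate pattern.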