Deep Reinforcement Learning with Double Q-learning

Keywords: Reinforcement learning, Computer science, Artificial intelligence, Action (physics), Function (biology), Artificial neural network, Harm, Deep learning, Adaptation (eye), Domain (mathematical analysis), Algorithm, Mathematics, Machine learning, Psychology, Physics, Mathematical analysis, Biology, Neuroscience, Evolutionary biology, Social psychology, Quantum mechanics
Authors
Hado van Hasselt, Arthur Guez, David Silver
Source
Journal: Proceedings of the ... AAAI Conference on Artificial Intelligence
[Association for the Advancement of Artificial Intelligence (AAAI)]
Date: 2016-03-02
Volume/Issue: 30 (1)
Citations: 2077
Identifier
DOI: 10.1609/aaai.v30i1.10295
Abstract
The popular Q-learning algorithm is known to overestimate action values under certain conditions. It was not previously known whether, in practice, such overestimations are common, whether they harm performance, and whether they can generally be prevented. In this paper, we answer all these questions affirmatively. In particular, we first show that the recent DQN algorithm, which combines Q-learning with a deep neural network, suffers from substantial overestimations in some games in the Atari 2600 domain. We then show that the idea behind the Double Q-learning algorithm, which was introduced in a tabular setting, can be generalized to work with large-scale function approximation. We propose a specific adaptation to the DQN algorithm and show that the resulting algorithm not only reduces the observed overestimations, as hypothesized, but that this also leads to much better performance on several games.
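The decoupling the abstract describes can be illustrated with a minimal NumPy sketch (not the authors' code; the array names, shapes, and random values below are purely illustrative). Standard DQN uses one network to both select and evaluate the next action via a single max, while Double DQN selects the greedy action with the online network and evaluates it with the target network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical batch of 32 transitions with 4 actions per next state.
q_online = rng.normal(size=(32, 4))   # online network's Q(s', a) estimates
q_target = rng.normal(size=(32, 4))   # target network's Q(s', a) estimates
rewards = rng.normal(size=32)
gamma = 0.99

# Standard DQN target: the max operator both selects and evaluates the
# next action with the same network, which biases estimates upward when
# the Q-values are noisy.
y_dqn = rewards + gamma * q_target.max(axis=1)

# Double DQN target: the online network selects the greedy action and the
# target network evaluates it, decoupling selection from evaluation.
a_star = q_online.argmax(axis=1)
y_double = rewards + gamma * q_target[np.arange(len(a_star)), a_star]

# With independent noise, y_dqn is biased upward on average.
print(y_dqn.mean(), y_double.mean())
```

Note that in the paper's formulation this requires no extra parameters: the second value function used for evaluation is the periodically updated target network that DQN already maintains.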