Bootstrapping (statistics)
Reinforcement learning
Computer science
Benchmark (surveying)
Pessimism
Extrapolation
Generalization
Artificial intelligence
Overfitting
Bellman equation
Function (mathematics)
Value (mathematics)
Machine learning
Mathematical optimization
Mathematics
Econometrics
Statistics
Artificial neural network
Mathematical analysis
Philosophy
Geodesy
Epistemology
Evolutionary biology
Biology
Geography
Authors
Chenjia Bai, Lingxiao Wang, Zhuoran Yang, Zhihong Deng, Animesh Garg, Peng Liu, Zhaoran Wang
Source
Journal: Cornell University - arXiv
Date: 2022-01-01
Cited by: 17
Identifier
DOI: 10.48550/arxiv.2202.11566
Abstract
Offline Reinforcement Learning (RL) aims to learn policies from previously collected datasets without exploring the environment. Directly applying off-policy algorithms to offline RL usually fails due to the extrapolation error caused by out-of-distribution (OOD) actions. Previous methods tackle this problem by penalizing the Q-values of OOD actions or constraining the trained policy to be close to the behavior policy. Nevertheless, such methods typically prevent the generalization of value functions beyond the offline data and also lack a precise characterization of OOD data. In this paper, we propose Pessimistic Bootstrapping for offline RL (PBRL), a purely uncertainty-driven offline algorithm without explicit policy constraints. Specifically, PBRL conducts uncertainty quantification via the disagreement of bootstrapped Q-functions, and performs pessimistic updates by penalizing the value function based on the estimated uncertainty. To tackle the extrapolation error, we further propose a novel OOD sampling method. We show that such OOD sampling and pessimistic bootstrapping yield a provable uncertainty quantifier in linear MDPs, thus providing the theoretical underpinning for PBRL. Extensive experiments on the D4RL benchmark show that PBRL achieves better performance than state-of-the-art algorithms.
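The core mechanism described in the abstract, using the disagreement of bootstrapped Q-functions as an uncertainty penalty on the Bellman target, can be sketched as follows. This is a minimal illustration assuming a PyTorch Q-ensemble, the ensemble standard deviation as the uncertainty measure, and a hypothetical penalty coefficient beta; it is not the paper's exact update, which additionally applies an uncertainty-based penalty to OOD actions sampled around the learned policy.

```python
# Sketch: uncertainty-penalized (pessimistic) Q-targets from a bootstrapped
# Q-ensemble. Network sizes, ensemble size k, and beta are illustrative
# assumptions, not values from the paper.
import torch
import torch.nn as nn


class QEnsemble(nn.Module):
    """k independent Q-networks (bootstrapped heads)."""

    def __init__(self, obs_dim, act_dim, k=10, hidden=256):
        super().__init__()
        self.heads = nn.ModuleList([
            nn.Sequential(
                nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )
            for _ in range(k)
        ])

    def forward(self, obs, act):
        x = torch.cat([obs, act], dim=-1)
        # Stack head outputs: shape (k, batch, 1).
        return torch.stack([head(x) for head in self.heads], dim=0)


def pessimistic_target(q_target, obs_next, act_next, reward, done,
                       gamma=0.99, beta=1.0):
    """Bellman target penalized by the disagreement across bootstrapped heads."""
    with torch.no_grad():
        q_next = q_target(obs_next, act_next)      # (k, batch, 1)
        uncertainty = q_next.std(dim=0)            # ensemble disagreement
        q_pess = q_next.mean(dim=0) - beta * uncertainty
        return reward + gamma * (1.0 - done) * q_pess
```

In training, each head would regress toward such a penalized target on dataset transitions, while the paper's OOD sampling step constructs similar uncertainty-penalized pseudo-targets for actions outside the dataset support.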