Nonstationary Reinforcement Learning: The Blessing of (More) Optimism

Keywords: Regret, Reinforcement learning, Markov decision process, Computer science, Leverage (statistics), Temporal difference learning, Upper and lower bounds, Inventory control, Context (archaeology), Mathematical optimization, Machine learning, Artificial intelligence, Markov process, Mathematics, Operations research, Statistics, Mathematical analysis, Paleontology, Biology
Authors
Wang Chi Cheung,David Simchi‐Levi,Ruihao Zhu
Source
Journal: Management Science [Institute for Operations Research and the Management Sciences]
Volume/Issue: 69 (10): 5722-5739 · Cited by: 24
Identifier
DOI: 10.1287/mnsc.2023.4704
Abstract

Motivated by operations research applications, such as inventory control and real-time bidding, we consider undiscounted reinforcement learning in Markov decision processes under model uncertainty and temporal drifts. In this setting, both the latent reward and state transition distributions are allowed to evolve over time, as long as their respective total variations, quantified by suitable metrics, do not exceed certain variation budgets. We first develop the sliding window upper confidence bound for reinforcement learning with confidence-widening (SWUCRL2-CW) algorithm and establish its dynamic regret bound when the variation budgets are known. In addition, we propose the bandit-over-reinforcement learning algorithm to adaptively tune the SWUCRL2-CW algorithm, achieving the same dynamic regret bound in a parameter-free manner (i.e., without knowing the variation budgets). Finally, we conduct numerical experiments to show that our proposed algorithms achieve superior empirical performance compared with existing algorithms. Notably, under nonstationarity, historical data samples may falsely indicate that state transition rarely happens. This presents a significant challenge when one tries to apply the conventional optimism in the face of uncertainty principle to achieve a low dynamic regret bound. We overcome this challenge by proposing a novel confidence-widening technique that incorporates additional optimism into our learning algorithms. To extend our theoretical findings, we demonstrate, in the context of single-item inventory control with lost sales, fixed cost, and zero lead time, how one can leverage special structures on the state transition distributions to achieve an improved dynamic regret bound in time-varying demand environments. This paper was accepted by J. George Shanthikumar, data science.
Funding: The authors acknowledge support from the Massachusetts Institute of Technology (MIT) Data Science Laboratory and the MIT–IBM partnership in artificial intelligence. W. C. Cheung acknowledges support from the Singapore Ministry of Education [Tier 2 Grant MOE-T2EP20121-0012]. Supplemental Material: The data files and online appendix are available at https://doi.org/10.1287/mnsc.2023.4704.
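The two ideas at the heart of the abstract, estimating the model from a sliding window of recent samples only and widening the resulting confidence radius with extra optimism, can be sketched as follows. This is a minimal illustrative sketch, not the paper's SWUCRL2-CW algorithm: the class and function names, the Hoeffding-style radius, and the widening constant `eta` are assumptions chosen for illustration, and the paper's exact bonus terms and constants differ.

```python
import math
from collections import deque

def widened_confidence_radius(n_visits, num_states, window, delta=0.05, eta=0.1):
    """Confidence radius for a transition estimate built from at most `window`
    recent samples, plus a confidence-widening term `eta` that injects the
    extra optimism needed under nonstationarity (form is illustrative)."""
    n = max(n_visits, 1)
    hoeffding = math.sqrt(2 * num_states * math.log(2 * window / delta) / n)
    return hoeffding + eta  # widening enlarges the set beyond the usual radius

class SlidingWindowCounts:
    """Keep transition counts over only the most recent `window` samples,
    so estimates track a drifting environment instead of averaging over it."""
    def __init__(self, window):
        self.window = window
        self.samples = deque()
        self.counts = {}

    def record(self, s, a, s_next):
        self.samples.append((s, a, s_next))
        self.counts[(s, a, s_next)] = self.counts.get((s, a, s_next), 0) + 1
        if len(self.samples) > self.window:  # evict the oldest sample
            old = self.samples.popleft()
            self.counts[old] -= 1

    def estimate(self, s, a, num_states):
        """Empirical transition distribution for (s, a) and its sample count."""
        n = sum(self.counts.get((s, a, t), 0) for t in range(num_states))
        if n == 0:
            return [1.0 / num_states] * num_states, n  # uninformative fallback
        return [self.counts.get((s, a, t), 0) / n for t in range(num_states)], n
```

An optimistic planner would then search over all transition distributions within the widened radius of the sliding-window estimate; the widening term guards against the failure mode the abstract highlights, where stale samples falsely suggest that certain transitions rarely happen.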