Regret
Reinforcement learning
Markov decision process
Computer science
Leverage (statistics)
Temporal difference learning
Upper and lower bounds
Inventory control
Context (archaeology)
Mathematical optimization
Machine learning
Artificial intelligence
Markov process
Mathematics
Operations research
Statistics
Mathematical analysis
Paleontology
Biology
Authors
Wang Chi Cheung, David Simchi-Levi, Ruihao Zhu
Source
Journal: Management Science
Publisher: Institute for Operations Research and the Management Sciences
Date: 2023-02-22
Volume/Issue: 69 (10): 5722-5739
Cited by: 24
Identifier
DOI: 10.1287/mnsc.2023.4704
Abstract
Motivated by operations research applications, such as inventory control and real-time bidding, we consider undiscounted reinforcement learning in Markov decision processes under model uncertainty and temporal drifts. In this setting, both the latent reward and state transition distributions are allowed to evolve over time, as long as their respective total variations, quantified by suitable metrics, do not exceed certain variation budgets. We first develop the sliding window upper confidence bound for reinforcement learning with confidence widening (SWUCRL2-CW) algorithm and establish its dynamic regret bound when the variation budgets are known. In addition, we propose the bandit-over-reinforcement-learning algorithm to adaptively tune the SWUCRL2-CW algorithm and achieve the same dynamic regret bound in a parameter-free manner (i.e., without knowing the variation budgets). Finally, we conduct numerical experiments to show that our proposed algorithms achieve superior empirical performance compared with existing algorithms. Notably, under nonstationarity, historical data samples may falsely indicate that state transitions rarely happen. This presents a significant challenge when one tries to apply the conventional optimism-in-the-face-of-uncertainty principle to achieve a low dynamic regret bound. We overcome this challenge by proposing a novel confidence-widening technique that incorporates additional optimism into our learning algorithms. To extend our theoretical findings, we demonstrate, in the context of single-item inventory control with lost sales, fixed cost, and zero lead time, how one can leverage special structure of the state transition distributions to achieve an improved dynamic regret bound in time-varying demand environments.

This paper was accepted by J. George Shanthikumar, data science.

Funding: The authors acknowledge support from the Massachusetts Institute of Technology (MIT) Data Science Laboratory and the MIT-IBM partnership in artificial intelligence. W. C. Cheung acknowledges support from the Singapore Ministry of Education [Tier 2 Grant MOE-T2EP20121-0012].

Supplemental Material: The data files and online appendix are available at https://doi.org/10.1287/mnsc.2023.4704.
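The two algorithmic ingredients the abstract describes, sliding-window estimation and confidence widening, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: the window size W, the confidence level delta, the Hoeffding-style radii, and the widening parameter eta are illustrative assumptions written in generic UCRL2-style notation.

```python
import numpy as np

def sliding_window_estimates(history, W, n_states, n_actions):
    """Estimate rewards and transitions from only the most recent W steps.

    `history` is a list of (s, a, r, s_next) tuples; discarding older samples
    is what lets the estimates track a drifting environment.
    """
    recent = history[-W:]
    counts = np.zeros((n_states, n_actions))
    r_sum = np.zeros((n_states, n_actions))
    p_counts = np.zeros((n_states, n_actions, n_states))
    for s, a, r, s_next in recent:
        counts[s, a] += 1
        r_sum[s, a] += r
        p_counts[s, a, s_next] += 1
    n_plus = np.maximum(counts, 1)          # avoid division by zero
    r_hat = r_sum / n_plus                  # empirical mean rewards
    p_hat = p_counts / n_plus[:, :, None]   # empirical transition distributions
    return r_hat, p_hat, counts

def confidence_radii(counts, delta=0.05, eta=0.1):
    """Hoeffding-style confidence radii (an illustrative choice, not the
    paper's exact constants). The extra term `eta` is the confidence widening:
    it enlarges the transition confidence set so that optimistic planning is
    not misled by stale samples collected before the dynamics drifted."""
    n_plus = np.maximum(counts, 1)
    rad_r = np.sqrt(2.0 * np.log(1.0 / delta) / n_plus)
    rad_p = np.sqrt(2.0 * np.log(1.0 / delta) / n_plus) + eta
    return rad_r, rad_p
```

In this sketch the widening term keeps the transition confidence set from collapsing onto a stale empirical estimate under nonstationarity; per the abstract, the bandit-over-reinforcement-learning layer then tunes the SWUCRL2-CW parameters adaptively, which is why the variation budgets need not be known in advance.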