Markov decision process
Mathematical optimization
Subgradient method
Computer science
Heuristic
Scheduling (production processes)
State space
Economic dispatch
Markov process
Mathematics
Power (physics)
Quantum mechanics
Electric power system
Statistics
Physics
Authors
Yi Chen, Jing Dong, Zhaoran Wang
Source
Journal: Cornell University - arXiv
Date: 2021-01-01
Cited by: 11
Identifier
DOI: 10.48550/arxiv.2101.10895
Abstract
In many operations management problems, we need to make decisions sequentially to minimize the cost while satisfying certain constraints. One modeling approach to study such problems is the constrained Markov decision process (CMDP). When solving the CMDP to derive good operational policies, there are two key challenges: one is the prohibitively large state space and action space; the other is the hard-to-compute transition kernel. In this work, we develop a sampling-based primal-dual algorithm to solve CMDPs. Our approach alternately applies regularized policy iteration to improve the policy and subgradient ascent to maintain the constraints. Under mild regularity conditions, we show that the algorithm converges at rate $O(\log(T)/\sqrt{T})$, where $T$ is the number of iterations. When the CMDP has a weakly coupled structure, our approach can substantially reduce the dimension of the problem through an embedded decomposition. We apply the algorithm to two important applications with weakly coupled structures: multi-product inventory management and multi-class queue scheduling, and show that it generates controls that outperform state-of-the-art heuristics.
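The primal-dual scheme the abstract describes — alternating entropy-regularized policy improvement with projected subgradient ascent on the Lagrange multiplier — can be illustrated on a toy tabular CMDP. This is a minimal sketch, not the paper's implementation: the problem data (`P`, `c`, `d`, `budget`), the step sizes, and the exact evaluation step are all illustrative assumptions; the paper's algorithm is sampling-based and targets problems far too large for exact evaluation.

```python
import numpy as np

# Toy tabular CMDP: minimize expected discounted cost c subject to a
# budget constraint on expected discounted cost d. All numbers are
# illustrative placeholders.
n_s, n_a, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))  # transition kernel P[s, a, s']
c = rng.uniform(0, 1, size=(n_s, n_a))            # objective cost c(s, a)
d = rng.uniform(0, 1, size=(n_s, n_a))            # constraint cost d(s, a)
budget = 4.0                                      # constraint level (assumed)

def policy_eval(cost, pi):
    """Exact evaluation of stochastic policy pi for a stage-cost function."""
    c_pi = (pi * cost).sum(axis=1)                # expected stage cost per state
    P_pi = np.einsum('sa,sap->sp', pi, P)         # Markov chain induced by pi
    return np.linalg.solve(np.eye(n_s) - gamma * P_pi, c_pi)

def soft_improve(cost, v, tau=0.1):
    """Entropy-regularized improvement: softmin over Q-values."""
    q = cost + gamma * np.einsum('sap,p->sa', P, v)
    logits = -q / tau
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    pi = np.exp(logits)
    return pi / pi.sum(axis=1, keepdims=True)

lam = 0.0                                         # dual variable (multiplier)
pi = np.full((n_s, n_a), 1.0 / n_a)               # start from the uniform policy
for t in range(1, 201):
    # Primal step: improve the policy for the Lagrangian cost c + lam * d.
    v = policy_eval(c + lam * d, pi)
    pi = soft_improve(c + lam * d, v)
    # Dual step: projected subgradient ascent on the constraint violation.
    viol = policy_eval(d, pi).mean() - budget
    lam = max(0.0, lam + (1.0 / np.sqrt(t)) * viol)

print("final multiplier:", lam)
print("constraint value:", policy_eval(d, pi).mean())
```

The diminishing $1/\sqrt{t}$ dual step size mirrors the $O(\log(T)/\sqrt{T})$ rate regime; the weakly coupled decomposition from the paper (splitting the primal step across products or job classes) is not shown here.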