Topics
Computer science
Mathematical optimization
Markov decision process
Flexibility (engineering)
Time horizon
Linear programming
Partially observable Markov decision process
Observable
Dynamic programming
Linear approximation
Markov chain
Markov process
Algorithm
Markov model
Mathematics
Nonlinear system
Quantum mechanics
Statistics
Machine learning
Physics
Authors
Robert K. Helmeczi, Can Kavaklioğlu, Mücahit Çevik
Identifier
DOI: 10.1007/s10489-023-04603-7
Abstract
Constrained partially observable Markov decision processes (CPOMDPs) have been used to model various real-world phenomena. However, they are notoriously difficult to solve to optimality, and there exist only a few approximation methods for obtaining high-quality solutions. In this study, grid-based approximations are used in combination with linear programming (LP) models to generate approximate policies for CPOMDPs. A detailed numerical study is conducted with six CPOMDP problem instances considering both their finite and infinite horizon formulations. The quality of approximation algorithms for solving unconstrained POMDP problems is established through a comparative analysis with exact solution methods. Then, the performance of the LP-based CPOMDP solution approaches for varying budget levels is evaluated. Finally, the flexibility of LP-based approaches is demonstrated by applying deterministic policy constraints, and a detailed investigation into their impact on rewards and CPU run time is provided. For most finite horizon problems, deterministic policy constraints are found to have little impact on expected reward, but they significantly increase CPU run time. For infinite horizon problems, the reverse is observed: deterministic policies tend to yield lower expected total rewards than their stochastic counterparts, but the impact of deterministic constraints on CPU run time is negligible in this case. Overall, these results demonstrate that LP models can effectively generate approximate policies for both finite and infinite horizon problems while providing the flexibility to incorporate various additional constraints into the underlying model.
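As a concrete illustration (not taken from the paper), the sketch below shows the occupancy-measure LP for a small discounted constrained MDP: once a fixed-resolution belief grid reduces a CPOMDP to a finite-state model, a constrained MDP of this form is the core subproblem that an LP solver can handle. All model data here (transitions P, rewards R, costs C, budget, initial distribution mu0) are made-up assumptions, and scipy.optimize.linprog is used only as a generic LP solver.

```python
import numpy as np
from scipy.optimize import linprog

# Minimal sketch: occupancy-measure LP for a discounted constrained MDP.
# All numbers below (transitions, rewards, costs, budget) are illustrative,
# standing in for the finite model induced by a belief grid.

S, A = 3, 2                        # grid/belief states and actions (hypothetical)
gamma = 0.95                       # discount factor
mu0 = np.array([1.0, 0.0, 0.0])    # initial state distribution

# P[a, s, s2] = transition probability from s to s2 under action a.
P = np.array([
    [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],  # action 0
    [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]],  # action 1
])
R = np.array([[1.0, 2.0], [0.5, 3.0], [0.0, 4.0]])  # rewards R[s, a]
C = np.array([[0.0, 1.0], [0.0, 1.5], [0.0, 2.0]])  # costs   C[s, a]
budget = 5.0                                        # expected-cost budget

n = S * A  # variables: x[s, a] = discounted occupancy measure

# Objective: maximize expected discounted reward, i.e. minimize -R . x.
c = -R.flatten()

# Flow conservation for each state s2:
#   sum_a x[s2, a] - gamma * sum_{s, a} P[a, s, s2] * x[s, a] = mu0[s2]
A_eq = np.zeros((S, n))
for s2 in range(S):
    for s in range(S):
        for a in range(A):
            A_eq[s2, s * A + a] -= gamma * P[a, s, s2]
    for a in range(A):
        A_eq[s2, s2 * A + a] += 1.0
b_eq = mu0

# Budget constraint: expected discounted cost must not exceed the budget.
A_ub = C.flatten()[None, :]
b_ub = np.array([budget])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * n)
x = res.x.reshape(S, A)

# The (possibly stochastic) policy is the normalized occupancy measure.
pi = x / x.sum(axis=1, keepdims=True)
print("expected discounted reward:", -res.fun)
print("policy (rows = states, cols = action probabilities):\n", pi)
```

Note that the policy recovered this way may be stochastic. Enforcing deterministic policies, as studied in the paper, would require binary indicator variables that turn this LP into a mixed-integer program, which is consistent with the reported increase in CPU run time for the finite horizon instances.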