Partially Observable Markov Decision Process
Concepts: Mathematics; Mathematical optimization; Markov decision process; Bounded function; Observable; Heuristic; Ambiguity; Probability distribution; Moment (physics); Joint probability distribution; Piecewise; Markov process; Computer science; Statistics; Physics; Mathematical analysis; Classical mechanics; Quantum mechanics; Programming language
Authors
Hideaki Nakao, Ruiwei Jiang, Siqian Shen
Source
Journal: SIAM Journal on Optimization (Society for Industrial and Applied Mathematics)
Date: 2021-01-01
Volume/Issue: 31(1): 461-488
Citations: 7
Abstract
We consider a distributionally robust partially observable Markov decision process (DR-POMDP), where the distribution of the transition-observation probabilities is unknown at the beginning of each decision period, but their realizations can be inferred using side information at the end of each period after an action is taken. We build an ambiguity set of the joint distribution using bounded moments via conic constraints and seek an optimal policy that maximizes the worst-case (minimum) reward over all distributions in the set. We show that the value function of the DR-POMDP is piecewise linear convex with respect to the belief state and propose a heuristic search value iteration method for obtaining lower and upper bounds on the value function. We conduct numerical studies and demonstrate the computational performance of our approach on test instances of a dynamic epidemic control problem. Our results show that the DR-POMDP produces more robust policies under misspecified distributions of transition-observation probabilities than a standard POMDP, while yielding less costly solutions than a robust POMDP. The DR-POMDP policies are also insensitive to varying parameters in the ambiguity set and to noise added to the true transition-observation probability values obtained at the end of each decision period.
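The ingredients named in the abstract (Bayesian belief updates, a piecewise linear convex value function represented by alpha-vectors, and a worst-case evaluation over an ambiguity set of transition-observation models) can be illustrated with a minimal sketch. The snippet below is not the paper's algorithm: it replaces the moment-based conic ambiguity set with a small finite set of candidate models and uses trivial one-step alpha-vectors; all sizes, names, and parameters (N_STATES, gamma, the number of candidates, etc.) are hypothetical.

```python
import numpy as np

# Hypothetical toy sizes; the paper's model is more general.
N_STATES, N_ACTIONS, N_OBS = 3, 2, 2
rng = np.random.default_rng(0)

def random_stochastic(shape):
    """Random conditional distributions, normalized over the last axis."""
    m = rng.random(shape)
    return m / m.sum(axis=-1, keepdims=True)

# Candidate transition-observation models: a crude finite stand-in for the
# moment-based (conic) ambiguity set described in the abstract.
candidates = []
for _ in range(5):
    T = random_stochastic((N_ACTIONS, N_STATES, N_STATES))  # P(s'|s,a)
    Z = random_stochastic((N_ACTIONS, N_STATES, N_OBS))     # P(o|s',a)
    candidates.append((T, Z))

R = rng.random((N_ACTIONS, N_STATES))  # immediate reward r(s,a), made up here

def belief_update(b, a, o, T, Z):
    """Bayesian belief update after taking action a and observing o."""
    b_next = Z[a, :, o] * (b @ T[a])   # unnormalized P(s'|b,a,o)
    return b_next / b_next.sum()

def value(b, alphas):
    """Piecewise linear convex value: max over alpha-vectors of <alpha, b>."""
    return max(float(alpha @ b) for alpha in alphas)

def worst_case_q(b, a, alphas, gamma=0.95):
    """Worst-case (minimum over candidate models) one-step lookahead value."""
    vals = []
    for T, Z in candidates:
        v = float(b @ R[a])
        for o in range(N_OBS):
            p_o = float(b @ T[a] @ Z[a, :, o])  # P(o|b,a) under this model
            if p_o > 1e-12:
                v += gamma * p_o * value(belief_update(b, a, o, T, Z), alphas)
        vals.append(v)
    return min(vals)  # robust (worst-case) evaluation

b0 = np.full(N_STATES, 1.0 / N_STATES)            # uniform initial belief
alphas = [R[a].copy() for a in range(N_ACTIONS)]  # trivial lower-bound alphas
best_action = max(range(N_ACTIONS), key=lambda a: worst_case_q(b0, a, alphas))
print("robust greedy action:", best_action)
```

In the paper, the inner minimization is taken over the conic moment-based ambiguity set rather than a finite list, and the heuristic search value iteration maintains both lower and upper bounds on this piecewise linear convex value function.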