Partially Observable Markov Decision Process
Mobile robot
Probabilistic logic
Motion planning
Computer science
Path (computing)
Artificial intelligence
Robot
Human–computer interaction
Machine learning
Markov chain
Markov model
Computer network
Authors
S. D. Deshpande,R Harikrishnan,Rahee Walambe
Identifier
DOI: 10.1016/j.cogr.2024.06.001
Abstract
Path planning in a collaborative mobile robot system has been a research topic for many years. Uncertainty in robot states, actions, and environmental conditions makes finding the optimum navigation path highly challenging for the robot. To achieve robust behavior for mobile robots in the presence of static and dynamic obstacles, it is pertinent that the robot employ a path-finding mechanism based on a probabilistic perception of the uncertainty in the various parameters governing its movement. The Partially Observable Markov Decision Process (POMDP) is used by many researchers as a proven methodology for handling uncertainty. The POMDP framework requires manually setting up the state transition matrix, the observation matrix, and the reward values. This paper describes an approach for creating the POMDP model and demonstrates its working by simulating it on two mobile robots set on a collision course. Selected test cases are run on the two robots in three categories: MDP (POMDP with a belief state spread of 1), POMDP with the belief state distributed over ten observations, and POMDP with the belief state distributed over two observations. Uncertainty in the sensor data is simulated at varying levels of up to 10%. The results are compared and analyzed. It is demonstrated that when the observation probability spread is increased from 2 to 10, the collision rate falls from 34% to 22%, indicating that the system's robustness increases by 12% with only a marginal increase of 3.4% in computational complexity.
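The core POMDP machinery the abstract refers to — a transition matrix, an observation matrix, and a belief state updated after each observation — can be sketched as a standard Bayesian belief update. The 3-state toy model below (and all its numbers) is a hypothetical illustration, not the paper's actual robot model:

```python
import numpy as np

# Hypothetical toy model with 3 robot states for a single action.
# T[s, s'] = P(next state s' | current state s, action)
T = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
# O[s', o] = P(observation o | next state s'); two possible observations
O = np.array([[0.9, 0.1],
              [0.5, 0.5],
              [0.1, 0.9]])

def belief_update(b, T, O, obs):
    """One POMDP belief update: predict with T, correct with O, normalize."""
    predicted = T.T @ b            # prior over next state after the action
    updated = O[:, obs] * predicted  # weight by likelihood of the observation
    return updated / updated.sum()   # renormalize to a probability distribution

b0 = np.full(3, 1/3)               # uniform initial belief (maximum uncertainty)
b1 = belief_update(b0, T, O, obs=0)
print(b1)                          # belief concentrates on states likely to emit obs 0
```

An MDP corresponds to the degenerate case where the belief collapses onto a single state (a "belief state spread of 1"); spreading the belief over more observations, as in the paper's 2- vs 10-observation comparison, keeps more hypotheses alive at the cost of extra computation.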