Keywords
Reinforcement learning, Computer science, Motion planning, Artificial intelligence, Path (computing), Robot, Algorithm, Robot learning, Machine learning, Mathematical optimization, Mobile robot, Mathematics, Programming language
Authors
Chee Sheng Tan, Rosmiwati Mohd‐Mokhtar, Mohd Rizal Arshad
Identifiers
DOI: 10.1016/j.eswa.2024.123539
Abstract
Researchers have recently been exploring the considerable potential of Q-Star, yet comprehensive resources on the topic remain scarce. The Q-table, a simple lookup table over states and actions, is often treated as a mere bookkeeping structure, which overlooks the knowledge that can be extracted from it through visualization. The Q-learning algorithm uses this table to update values and identify the action with the highest expected reward in each state. Rather than relying solely on complex reward functions, exploiting the knowledge already present in the Q-table is highly beneficial: incorporating this information into the algorithmic framework reduces the need to design intricate reward functions. This paper proposes an expected-mean gamma-incremental Q approach to address the slow convergence of an uninformed-search reinforcement learning (RL) algorithm and the issue of path optimality in path planning problems. The gamma-incremental RL method adjusts the weight of future value according to the level of exploration, so the robot receives preference feedback, favouring either near-term or long-term reward, based on how frequently a state has been visited. Meanwhile, the expected-mean technique uses information about the robot's turning actions to update the Q-target. By consistently incorporating insights from the Q-table, the algorithm gradually improves its use of the available information, leading to more efficient decision-making. Experimental results indicate that the proposed algorithm accelerates convergence, outperforming baseline Q-learning by up to a factor of two. It addresses robot path planning by prioritizing promising solutions, yielding near-optimal paths with higher total rewards and improved learning stability.
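The following is a minimal, hypothetical sketch of the gamma-incremental idea described in the abstract: a tabular Q-learning update in which the discount factor grows with the visit count of the current state, so rarely visited states emphasize near-term reward and frequently visited states emphasize long-term reward. The schedule gamma_incremental, its parameters (gamma_min, gamma_max, scale), and the toy state/action sizes are illustrative assumptions, not the formula used in the paper.

import numpy as np

# Illustrative sketch (not the paper's exact method): tabular Q-learning
# with a visit-dependent discount factor.

def gamma_incremental(visits, gamma_min=0.5, gamma_max=0.99, scale=20.0):
    """Assumed schedule: discount factor rises with the state's visit count,
    shifting preference from near-term to long-term reward."""
    return gamma_min + (gamma_max - gamma_min) * (1.0 - np.exp(-visits / scale))

def q_update(Q, visit_count, s, a, r, s_next, alpha=0.1):
    """One Q-learning step whose Q-target uses the adaptive gamma."""
    visit_count[s] += 1
    gamma = gamma_incremental(visit_count[s])
    td_target = r + gamma * np.max(Q[s_next])   # Q-target with adaptive gamma
    Q[s, a] += alpha * (td_target - Q[s, a])    # standard TD update
    return Q

# Minimal usage on a toy 5x5 grid flattened to 25 states with 4 actions
n_states, n_actions = 25, 4
Q = np.zeros((n_states, n_actions))
visit_count = np.zeros(n_states, dtype=int)
Q = q_update(Q, visit_count, s=0, a=1, r=-1.0, s_next=5)

The expected-mean technique mentioned in the abstract would modify the Q-target using information about the robot's turning actions; it is omitted here because its exact formulation is not given in the abstract.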