Computer science
Reinforcement learning
Mobile robot
Initialization
Convergence (economics)
Motion planning
A priori and a posteriori
Robot
Robot learning
Path (computing)
Artificial intelligence
Popularity
Machine learning
Mathematical optimization
Mathematics
Philosophy
Economics
Psychology
Epistemology
Programming language
Social psychology
Economic growth
Authors
Ee Soong Low, Pauline Ong, Kah Chun Cheah
Identifier
DOI:10.1016/j.robot.2019.02.013
Abstract
Q-learning, a type of reinforcement learning, has recently gained increasing popularity in autonomous mobile robot path planning due to its ability to learn without requiring an a priori model of the environment. Despite this advantage, Q-learning converges slowly to the optimal solution. To address this limitation, the concept of partially guided Q-learning is introduced, wherein the flower pollination algorithm (FPA) is used to improve the initialization of Q-learning. Experimental evaluation of the proposed improved Q-learning in challenging environments with different obstacle layouts shows that convergence can be accelerated when the Q-values are initialized appropriately using the FPA. Additionally, the effectiveness of the proposed algorithm is validated in a real-world experiment using a three-wheeled mobile robot.
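The abstract does not spell out how the FPA seeds the Q-table, so the following is only a minimal sketch of the general idea: tabular Q-learning on a toy grid world where the initial Q-values come from a goal-directed heuristic (here, negative Manhattan distance, standing in for the FPA's output) rather than zeros. All names, the grid layout, and the heuristic are illustrative assumptions, not the authors' implementation.

```python
import random

ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, a, n, goal):
    """Move within an n-by-n grid (walls clamp); +100 on reaching the goal, -1 per step."""
    r, c = state
    dr, dc = ACTIONS[a]
    nxt = (min(max(r + dr, 0), n - 1), min(max(c + dc, 0), n - 1))
    reward = 100.0 if nxt == goal else -1.0
    return nxt, reward, nxt == goal

def heuristic_q(n, goal):
    """Stand-in for an FPA-produced initialization: seed each Q(s, a) with the
    negative Manhattan distance of the successor cell to the goal."""
    q = {}
    for r in range(n):
        for c in range(n):
            for a, (dr, dc) in enumerate(ACTIONS):
                nr = min(max(r + dr, 0), n - 1)
                nc = min(max(c + dc, 0), n - 1)
                q[((r, c), a)] = -(abs(nr - goal[0]) + abs(nc - goal[1]))
    return q

def train(q, n=5, goal=(4, 4), episodes=200, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    """Standard epsilon-greedy tabular Q-learning, starting from the given Q-table."""
    rng = random.Random(seed)
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(200):
            if rng.random() < eps:
                a = rng.randrange(4)
            else:
                a = max(range(4), key=lambda b: q[(s, b)])
            nxt, r, done = step(s, a, n, goal)
            best_next = max(q[(nxt, b)] for b in range(4))
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            if done:
                break
            s = nxt
    return q

def greedy_path_len(q, n=5, goal=(4, 4)):
    """Follow the greedy policy from the start; returns 100 if the goal is never reached."""
    s, steps = (0, 0), 0
    while s != goal and steps < 100:
        a = max(range(4), key=lambda b: q[(s, b)])
        s, _, _ = step(s, a, n, goal)
        steps += 1
    return steps
```

The point of seeding is visible even before training: the heuristic Q-table already induces a goal-directed greedy policy, so learning starts from a useful policy instead of a random one, which is the mechanism by which an appropriate initialization can accelerate convergence.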