Keywords
Reinforcement learning
Computer science
Mobile robot
Motion planning
Artificial intelligence
Robot
Automated planning and scheduling
Path (computing)
Robot kinematics
Human–computer interaction
Distributed computing
Computer network
Authors
Zekun Bai, Hui Pang, Zhaonian He, Bin Zhao, Tong Wang
Source
Journal: IEEE Internet of Things Journal
Publisher: Institute of Electrical and Electronics Engineers
Date: 2024-03-19
Volume/Issue: 11(12): 22153-22166
Citations: 3
Identifiers
DOI: 10.1109/jiot.2024.3379361
Abstract
In real-world situations, path planning for an autonomous mobile robot (AMR) in unknown environments often suffers from unavoidable problems such as strong dependence on environment information, long inference time, and weak anti-disturbance ability. To address these issues, this paper proposes an improved deep reinforcement learning based path planning algorithm to find an optimized path for a class of AMRs. First, AMR path planning is formulated as a Markov decision process, and the Double Deep Q-Network (DDQN) is utilized to obtain optimal adaptive solutions for AMR path planning. Second, a comprehensive reward function integrated with a heuristic function is designed to navigate the AMR into the target area. Afterwards, an optimized deep neural network with an adaptive ε-greedy action selection policy is designed to handle the trade-off between exploration and exploitation, thereby improving the global searching capability and convergence performance of AMR path planning. Moreover, Bézier curve theory is utilized to smooth the planned path. Finally, comparative simulations are carried out to validate the proposed path planning algorithm. The results show that, compared with the DQN, A*, RRT, and APF algorithms, the improved DDQN (IDDQN) algorithm can produce safer and shorter global paths in comprehensive unknown environments. Meanwhile, the IDDQN algorithm shows strong adaptability to random disturbances in unknown environments.
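The abstract describes the method only at a high level. As a rough, non-authoritative sketch of the main ingredients it mentions, the Python fragment below shows a Double-DQN target computation, an adaptive ε-greedy action selection rule, and a heuristic-shaped reward. The network architecture, the exponential ε schedule, the reward constants, and all helper names (`QNet`, `adaptive_epsilon`, `select_action`, `shaped_reward`, `ddqn_targets`, `train_step`) are illustrative assumptions, not the authors' implementation.

```python
import random

import numpy as np
import torch
import torch.nn as nn


class QNet(nn.Module):
    """Small fully connected Q-network mapping a state vector to one Q-value per action."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def adaptive_epsilon(step: int, eps_start: float = 1.0, eps_end: float = 0.05,
                     decay: float = 2000.0) -> float:
    """One plausible 'adaptive' exploration schedule: ε decays exponentially with training steps."""
    return eps_end + (eps_start - eps_end) * float(np.exp(-step / decay))


def select_action(q_net: QNet, state: np.ndarray, step: int, n_actions: int) -> int:
    """ε-greedy action selection using the adaptive schedule above."""
    if random.random() < adaptive_epsilon(step):
        return random.randrange(n_actions)           # explore
    with torch.no_grad():
        q = q_net(torch.as_tensor(state, dtype=torch.float32))
        return int(q.argmax().item())                # exploit


def shaped_reward(pos: np.ndarray, goal: np.ndarray, collided: bool) -> float:
    """Heuristic-shaped reward: collision penalty, goal bonus, and a small
    distance-to-goal penalty that pulls the robot toward the target area."""
    if collided:
        return -10.0
    dist = float(np.linalg.norm(np.asarray(pos) - np.asarray(goal)))
    if dist < 0.5:
        return 10.0
    return -0.1 * dist


def ddqn_targets(online: QNet, target: QNet, batch, gamma: float = 0.99) -> torch.Tensor:
    """Double-DQN targets: the online net selects the next action,
    the target net evaluates it, which reduces Q-value overestimation."""
    _, _, r, s2, done = batch
    with torch.no_grad():
        next_a = online(s2).argmax(dim=1, keepdim=True)    # action selection
        next_q = target(s2).gather(1, next_a).squeeze(1)   # action evaluation
        return r + gamma * (1.0 - done) * next_q


def train_step(online: QNet, target: QNet, optimizer: torch.optim.Optimizer,
               batch, gamma: float = 0.99) -> float:
    """One gradient step on a sampled replay batch (s, a, r, s2, done)."""
    s, a, _, _, _ = batch
    q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    y = ddqn_targets(online, target, batch, gamma)
    loss = nn.functional.smooth_l1_loss(q, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss.item())
```

Splitting action selection (online network) from action evaluation (target network) is what distinguishes Double DQN from vanilla DQN and is the mechanism the paper builds on.

The abstract also mentions smoothing the planned path with Bézier curve theory. The helper below evaluates an n-th order Bézier curve that treats the planned waypoints as control points, which is one common way to realize that step; the function name and sampling resolution are assumptions.

```python
from math import comb

import numpy as np


def bezier_smooth(waypoints, samples: int = 100) -> np.ndarray:
    """Evaluate an n-th order Bézier curve, treating the planned waypoints as
    control points, to obtain a smooth path of `samples` points."""
    pts = np.asarray(waypoints, dtype=float)           # shape (n+1, 2)
    n = len(pts) - 1
    t = np.linspace(0.0, 1.0, samples)[:, None]        # shape (samples, 1)
    curve = np.zeros((samples, pts.shape[1]))
    for i, p in enumerate(pts):
        bernstein = comb(n, i) * t**i * (1.0 - t)**(n - i)   # Bernstein basis B_{i,n}(t)
        curve += bernstein * p
    return curve


# Example: smooth a short 2-D waypoint sequence produced by the planner.
smooth_path = bezier_smooth([(0, 0), (1, 2), (3, 2), (4, 4)])
```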