Reinforcement learning
Computer science
Path (computing)
Motion planning
Artificial intelligence
Reinforcement
Machine learning
Robot
Engineering
Structural engineering
Programming language
Authors
Shengli Du, Zexing Zhu, Xuefang Wang, Honggui Han, Junfei Qiao
Identifier
DOI:10.1016/j.neucom.2024.128085
Abstract
Local path planning and obstacle avoidance in complex environments are two challenging problems in the research of intelligent robots. In this study, we develop a novel approach grounded in deep distributional reinforcement learning to address these challenges. In this methodology, agents instantiated by deep neural networks perceive real-time local environmental information through sensor data, handling the inherent stochasticity of complex environments while performing local path planning. End-to-end training is carried out with distributional reinforcement learning algorithms and reward functions informed by heuristic knowledge. Optimal actions for path planning are selected from the learned return value distributions. Simulation results show that the proposed distributional algorithm achieves a success rate of 98% in a random environment and 94% in a dynamic environment, demonstrating better generalization and flexibility than the non-distributional baseline.
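To make the action-selection step concrete, the following is a minimal sketch (not the authors' code) of how a distributional agent can choose a path-planning action from return value distributions, assuming a C51-style categorical parameterization over a fixed support of return atoms; the observation size, action count, and network shape are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: greedy action selection from categorical return distributions.
import torch
import torch.nn as nn


class CategoricalQNetwork(nn.Module):
    """Maps a local sensor observation to a categorical return distribution per action."""

    def __init__(self, obs_dim: int, n_actions: int, n_atoms: int = 51,
                 v_min: float = -10.0, v_max: float = 10.0):
        super().__init__()
        self.n_actions = n_actions
        self.n_atoms = n_atoms
        # Fixed support of return atoms z_1 ... z_N (assumed range).
        self.register_buffer("atoms", torch.linspace(v_min, v_max, n_atoms))
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions * n_atoms),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Probabilities over return atoms for every action: (batch, actions, atoms).
        logits = self.net(obs).view(-1, self.n_actions, self.n_atoms)
        return torch.softmax(logits, dim=-1)


def greedy_action(net: CategoricalQNetwork, obs: torch.Tensor) -> int:
    """Pick the action whose return distribution has the highest expected value."""
    with torch.no_grad():
        probs = net(obs.unsqueeze(0))                # (1, actions, atoms)
        q_values = (probs * net.atoms).sum(dim=-1)   # expectation over the atom support
        return int(q_values.argmax(dim=-1).item())


# Usage: a fake laser-scan observation with 24 range readings and 5 discrete
# steering actions (both numbers are illustrative assumptions).
net = CategoricalQNetwork(obs_dim=24, n_actions=5)
action = greedy_action(net, torch.rand(24))
print("chosen action:", action)
```

Because the full return distribution is available, such an agent could also rank actions by risk-sensitive statistics (e.g., a lower quantile) instead of the mean, which is one commonly cited motivation for distributional methods in stochastic environments.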