Keywords
Computer science
Motion planning
Path (computing)
Artificial intelligence
Computer vision
Real-time computing
Robotics
Computer network
Authors
Dianyi Zhou, Xi Du, Shiyi Liu, Qingyu Su, Hongyang Guo
Abstract
In decision problems, single-step decision making refers to making a decision at each time step based only on the current state, without considering long-term states or future effects. This approach suits scenarios with immediate feedback and operational impact, but struggles in complex environments with long-term dependencies. We explore the advantages and disadvantages of single-step decision making and how this strategy can be used to optimize the decision process in practice. The proposed RPP-LSTM algorithm integrates the memory capabilities of recurrent neural networks (RNNs) into a deep reinforcement learning framework. Unlike traditional Deep Q-Network (DQN) setups, where feedforward neural networks are typically used for the Q-value network, RPP-LSTM employs an LSTM network as the Q-value network. This integration allows the Q-network to retain memory of previous environmental states and actions, thereby addressing the myopic decision making prevalent in traditional methods. By leveraging the LSTM's ability to capture and utilize temporal dependencies, the RPP-LSTM algorithm enhances the UAV's path-planning capability by considering a broader context of environmental changes and past decisions. This approach is particularly beneficial in dynamic environments, where a decision based solely on current state information may not be optimal. The LSTM-equipped Q-value network can effectively learn and adapt to varying environmental conditions, leading to improved performance in path-planning tasks. Furthermore, the algorithm incorporates a stratified punishment-and-reward mechanism designed to improve the rationality of UAV path planning. This reward function encourages the UAV to make decisions that not only achieve immediate goals but also contribute to long-term planning objectives, ensuring strategic adaptability in complex scenarios. Simulation results demonstrate the superiority of the RPP-LSTM algorithm over traditional approaches relying on feedforward neural networks (FNNs).
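The core idea — a Q-value network whose output depends on the history of observations, not just the current one — can be illustrated with a toy recurrent network. The sketch below is a simplification for illustration only: it uses a plain recurrent cell rather than full LSTM gates, and all layer sizes and weight initializations are invented, not taken from the paper.

```python
import math, random

random.seed(0)

def mat_vec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

class RecurrentQNet:
    """Toy recurrent Q-value network: the hidden state h carries a summary
    of past observations, so Q-values depend on history rather than on the
    current state alone (unlike a feedforward DQN head). A real RPP-LSTM
    agent would use LSTM gates; this plain recurrence is a simplification."""
    def __init__(self, n_obs, n_hidden, n_actions):
        rand = lambda r, c: [[random.uniform(-0.5, 0.5) for _ in range(c)]
                             for _ in range(r)]
        self.W_in = rand(n_hidden, n_obs)       # input -> hidden
        self.W_rec = rand(n_hidden, n_hidden)   # hidden -> hidden (memory)
        self.W_out = rand(n_actions, n_hidden)  # hidden -> Q-values
        self.h = [0.0] * n_hidden

    def step(self, obs):
        pre = [a + b for a, b in zip(mat_vec(self.W_in, obs),
                                     mat_vec(self.W_rec, self.h))]
        self.h = [math.tanh(p) for p in pre]    # updated memory state
        return mat_vec(self.W_out, self.h)      # one Q-value per action

net = RecurrentQNet(n_obs=3, n_hidden=8, n_actions=4)
q1 = net.step([0.1, 0.2, 0.3])
q2 = net.step([0.1, 0.2, 0.3])  # same observation, but different history
greedy_action = max(range(4), key=lambda a: q2[a])
```

Note that the second call returns different Q-values for the identical observation, because the hidden state now encodes the earlier step — this is exactly the property that lets a recurrent Q-network escape purely myopic, single-step decisions.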
It exhibits enhanced adaptability to complex environments and achieves superior performance in terms of both robustness and accuracy in real-time UAV path planning scenarios. This integration of LSTM with deep reinforcement learning represents a significant advancement towards more intelligent and effective autonomous UAV operations in dynamic and challenging environments.
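The abstract does not give the exact form of the stratified punishment-and-reward mechanism. As a rough illustration of the "stratified" idea — terminal layers for goal and collision dominating a dense shaping layer — the sketch below uses a hypothetical reward function; every name, weight, and threshold here is invented for illustration, not taken from the paper.

```python
import math

def stratified_reward(pos, goal, prev_dist, collided, reached_radius=1.0,
                      w_progress=1.0, r_goal=100.0, r_crash=-100.0,
                      step_cost=-0.1):
    """Hypothetical stratified reward: returns (reward, episode_done).
    Terminal layers (collision, goal) dominate; otherwise a dense progress
    term rewards closing the distance to the goal, and a small per-step
    cost discourages aimless wandering."""
    dist = math.dist(pos, goal)
    if collided:                    # layer 1: hard failure
        return r_crash, True
    if dist <= reached_radius:      # layer 2: goal reached
        return r_goal, True
    # layer 3: dense shaping for intermediate steps
    return w_progress * (prev_dist - dist) + step_cost, False

# Example: the UAV at (3, 4) closed the distance to the origin from 6 to 5.
r, done = stratified_reward(pos=(3, 4), goal=(0, 0),
                            prev_dist=6.0, collided=False)
```

Layering the terminal bonuses above the shaping term is one common way to make immediate progress count while keeping long-horizon outcomes (reaching the goal, avoiding obstacles) the dominant signal, which matches the abstract's stated intent.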