Published in: 2019 IEEE 6th International Conference on Engineering Technologies and Applied Sciences (ICETAS)
Date: 2019-12-01
Cited by: 6
Identifier
DOI:10.1109/icetas48360.2019.9117478
Abstract
Intelligent aerial vehicles have become useful assistants in civilian missions. These innovative systems have evolved and consistently improved in capacity and performance, so effective path planning is necessary to accomplish such missions. This paper explores deep reinforcement learning with Gaussian noise injection for path planning of an unmanned aerial vehicle (UAV). An effective UAV path plan, reaching the target destination point in the lowest duration, was obtained using a Double Dueling Deep Q Network (D3QN), which combines double Q-learning with the dueling architecture. The D3QN approximates the Q-values of actions in order to select the best action, while a Gaussian noise (GN) layer is added after the convolutional neural network and the dense layer of the dueling network. The Gaussian noise layer applies additive zero-centered Gaussian noise with a standard deviation of 1.0. The D3QN with Gaussian noise learns more stably than the D3QN baseline, achieving the shortest duration while avoiding the obstacles in the environment.
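As a rough illustration of how such a network might be assembled (the paper does not include its code), the sketch below builds a dueling Q-network with Keras GaussianNoise layers inserted after the convolutional stack and after the shared dense layer, matching the abstract's description of additive zero-centered noise with standard deviation 1.0. The input shape, filter counts, layer widths, and number of actions are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical sketch of a dueling Q-network with Gaussian noise injection,
# in the spirit of the D3QN described in the abstract. Input shape, filter
# counts, layer widths, and the action count are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_dueling_q_network(input_shape=(84, 84, 4), num_actions=4, noise_std=1.0):
    state_in = layers.Input(shape=input_shape)

    # Convolutional feature extractor.
    x = layers.Conv2D(32, 8, strides=4, activation="relu")(state_in)
    x = layers.Conv2D(64, 4, strides=2, activation="relu")(x)
    x = layers.Conv2D(64, 3, strides=1, activation="relu")(x)
    # Additive zero-centered Gaussian noise after the convolutional stack
    # (Keras' GaussianNoise layer is only active during training).
    x = layers.GaussianNoise(noise_std)(x)
    x = layers.Flatten()(x)

    # Shared dense layer, followed by another Gaussian noise layer.
    x = layers.Dense(512, activation="relu")(x)
    x = layers.GaussianNoise(noise_std)(x)

    # Dueling streams: state value V(s) and action advantages A(s, a).
    value = layers.Dense(1)(x)
    advantage = layers.Dense(num_actions)(x)

    # Combine into Q-values: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    q_values = layers.Lambda(
        lambda va: va[0] + va[1] - tf.reduce_mean(va[1], axis=1, keepdims=True)
    )([value, advantage])

    return Model(inputs=state_in, outputs=q_values)

model = build_dueling_q_network()
model.summary()
```

In a D3QN training loop, two copies of such a network (online and target) would be combined with the double Q-learning target, where the online network selects the greedy next action and the target network evaluates it; the noise layers act as a regularizer on the learned features during training.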