Computer science
Hyperparameter
Reinforcement learning
Function (biology)
Controller (irrigation)
Gradient descent
Fitness function
Genetic algorithm
Monte Carlo method
Mathematical optimization
Artificial intelligence
Algorithm
Machine learning
Artificial neural network
Mathematics
Evolutionary biology
Biology
Statistics
Agronomy
Authors
Larasmoyo Nugroho,Rika Andiarti,Rini Akmeliawati,Ali Türker Kutay,Diva Kartika Larasati,Sastra Kusuma Wijaya
Identifier
DOI:10.1016/j.engappai.2022.105798
Abstract
One major capability of a Deep Reinforcement Learning (DRL) agent controlling a vehicle in an environment without any prior knowledge is decision-making based on a well-designed reward shaping function. This function is an important but little-studied factor that can significantly alter the training reward score and the resulting performance. To maximize the control efficacy of a DRL algorithm, an optimized reward shaping function and a solid hyperparameter combination are essential. To achieve optimal control during the powered descent guidance (PDG) landing phase of a reusable launch vehicle, this paper uses the Deep Deterministic Policy Gradient (DDPG) algorithm to discover the best shape of the reward shaping function (RSF). Although DDPG is well suited to complex environments and continuous action spaces, its state and action performance can still be improved. A reference DDPG agent with the original reward shaping function and a PID controller were compared side by side with a GA-DDPG agent using a GA-optimized RSF. Aided by the potential-based GA (PbGA) searched RSF, the best GA-DDPG individual maximizes overall rewards and minimizes state errors, maintaining the highest fitness score among all individuals after extensive cross-validation and retesting in Monte-Carlo experiments.
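The abstract describes a potential-based shaping term whose gains are searched by a genetic algorithm and scored by a fitness function. The sketch below is only an illustration of that general scheme, not the authors' code: the four-element lander error state, the gain chromosome, and the helper names (potential, shaped_reward, fitness) are all assumptions.

import numpy as np

GAMMA = 0.99  # discount factor assumed for the shaping term

def potential(state, weights):
    # Phi(s): weighted penalty on the magnitudes of the lander's state errors.
    # `state` is assumed to be [altitude_error, lateral_error,
    # vertical_speed, lateral_speed]; `weights` is the GA-searched chromosome.
    errors = np.abs(np.asarray(state, dtype=float))
    return -float(np.dot(weights, errors))

def shaped_reward(base_reward, state, next_state, weights):
    # Potential-based shaping: r' = r + gamma * Phi(s') - Phi(s).
    return base_reward + GAMMA * potential(next_state, weights) - potential(state, weights)

def fitness(weights, transitions):
    # GA fitness of one individual: mean shaped reward over recorded
    # (base_reward, state, next_state) transitions from DDPG rollouts.
    returns = [shaped_reward(r, s, s2, weights) for r, s, s2 in transitions]
    return float(np.mean(returns))

Because the shaping term is potential-based, it leaves the optimal policy of the underlying task unchanged while the genetic algorithm tunes how strongly each state error is penalized during training.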