Keywords
Reinforcement learning
Fixed-wing
PID controller
Control theory
Controller
Airspeed
Autopilot
Computer science
Flight envelope
Linear-quadratic regulator
Envelope
Control engineering
Artificial intelligence
Engineering
Control
Aerodynamics
Wing
Aerospace engineering
Biology
Temperature control
Radar
Telecommunications
Agronomy
Authors
Eivind Bøhn, Erlend M. Coates, Signe Moe, Tor Arne Johansen
Identifier
DOI: 10.1109/icuas.2019.8798254
Abstract
Contemporary autopilot systems for unmanned aerial vehicles (UAVs) are far more limited in their flight envelope as compared to experienced human pilots, thereby restricting the conditions UAVs can operate in and the types of missions they can accomplish autonomously. This paper proposes a deep reinforcement learning (DRL) controller to handle the nonlinear attitude control problem, enabling extended flight envelopes for fixed-wing UAVs. A proof-of-concept controller using the proximal policy optimization (PPO) algorithm is developed, and is shown to be capable of stabilizing a fixed-wing UAV from a large set of initial conditions to reference roll, pitch and airspeed values. The training process is outlined and key factors for its progression rate are considered, with the most important factor found to be limiting the number of variables in the observation vector, and including values for several previous time steps for these variables. The trained reinforcement learning (RL) controller is compared to a proportional-integral-derivative (PID) controller, and is found to converge in more cases than the PID controller, with comparable performance. Furthermore, the RL controller is shown to generalize well to unseen disturbances in the form of wind and turbulence, even in severe disturbance conditions.
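The abstract's key training finding is about observation design: keep the observation vector small, but include values from several previous time steps for each retained variable. The Python sketch below illustrates that stacking scheme under stated assumptions; the choice of state variables, the dimensions, the window length, and all names are illustrative, not the authors' implementation.

```python
from collections import deque

import numpy as np

# Hypothetical sizes: a small per-step state (e.g. roll/pitch errors,
# airspeed error, body angular rates) and a short window of past steps.
STATE_DIM = 6
HISTORY = 5


class StackedObservation:
    """Keeps the last HISTORY state vectors and flattens them into the
    observation fed to the policy network."""

    def __init__(self, state_dim: int = STATE_DIM, history: int = HISTORY):
        self.state_dim = state_dim
        self.window = deque(maxlen=history)

    def reset(self, state: np.ndarray) -> np.ndarray:
        # Pre-fill the window so the first observation already has the
        # full stacked shape.
        self.window.clear()
        for _ in range(self.window.maxlen):
            self.window.append(state)
        return self.observation()

    def step(self, state: np.ndarray) -> np.ndarray:
        self.window.append(state)
        return self.observation()

    def observation(self) -> np.ndarray:
        # Flattened shape: (history * state_dim,), oldest step first.
        return np.concatenate(list(self.window))


# Usage with the state of a (hypothetical) attitude-control environment.
stacker = StackedObservation()
obs = stacker.reset(np.zeros(STATE_DIM))
assert obs.shape == (HISTORY * STATE_DIM,)
```

Stacking past states gives a memoryless feed-forward policy a crude view of derivatives and trends, which is presumably why the abstract reports it as the most important factor for training progression.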