Reinforcement learning
Rudder
Controller (irrigation)
Artificial neural network
Elevator
Computer science
Control engineering
Underwater
Model predictive control
Control (management)
Path (computing)
Artificial intelligence
Engineering
Biology
Programming language
Agronomy
Oceanography
Structural engineering
Marine engineering
Geology
Authors
Dongfang Ma,Xi Chen,Weihao Ma,Huarong Zheng,Fengzhong Qu
Identifiers
DOI: 10.1109/TIV.2023.3282681
Abstract
Autonomous underwater vehicles (AUVs) have become important tools in ocean exploration and have drawn considerable attention. Precise control of AUVs is a prerequisite for effectively executing underwater tasks. However, classical control methods such as model predictive control (MPC) rely heavily on a dynamics model of the controlled system, which is difficult to obtain for AUVs. To address this issue, a new reinforcement learning (RL) framework for AUV path-following control is proposed in this paper. Specifically, we propose a novel actor-model-critic (AMC) architecture that integrates a neural network model with the traditional actor-critic architecture. The neural network model is designed to learn the state transition function, capturing the spatio-temporal change patterns of the AUV as well as the surrounding environment. Based on the AMC architecture, an RL-based controller agent named ModelPPO is constructed to control the AUV. With the required sailing speed maintained by a traditional proportional-integral (PI) controller, ModelPPO controls the rudder and elevator fins so that the AUV follows the desired path. Finally, a simulation platform is built to evaluate the performance of the proposed method, which is compared with MPC and other RL-based methods. The results demonstrate that the proposed method achieves better performance than the other methods, which demonstrates the great potential of advanced artificial intelligence methods for solving traditional motion control problems for intelligent vehicles.
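The abstract mentions that the required sailing speed is held by a traditional PI controller while the learned policy handles the rudder and elevator fins. The sketch below illustrates only that PI speed-tracking loop on a toy first-order surge-speed model; the gains, actuator limits, and speed dynamics are illustrative assumptions, not values from the paper.

```python
class PIController:
    """Minimal proportional-integral controller with output clamping."""

    def __init__(self, kp, ki, dt, u_min=-1.0, u_max=1.0):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        u = self.kp * error + self.ki * self.integral
        # Clamp the thrust command to assumed actuator limits.
        return max(self.u_min, min(self.u_max, u))


def simulate(controller, target_speed, steps=500, dt=0.05):
    """Run the loop on an assumed first-order surge-speed model:
    d(speed)/dt = 2*thrust - 0.5*speed  (toy dynamics, not the AUV model)."""
    speed = 0.0
    for _ in range(steps):
        thrust = controller.step(target_speed, speed)
        speed += dt * (2.0 * thrust - 0.5 * speed)
    return speed


pi = PIController(kp=1.5, ki=0.8, dt=0.05)
final_speed = simulate(pi, target_speed=2.0)
```

With these (assumed) gains the closed loop is overdamped and the simulated speed settles at the 2.0 m/s setpoint; in the paper's setup the policy's fin commands would be computed alongside this speed loop at each step.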