Keywords
PID controller, Control theory, Reinforcement learning, Nonlinear system, Computer science, Controller, Control engineering, Adaptive control, Control, Artificial intelligence, Temperature control, Engineering
Authors
T. Shuprajhaa, Shivakanth Sujit, K. Srinivasan
Identifier
DOI: 10.1016/j.asoc.2022.109450
Abstract
Control of unstable processes is challenging owing to their dynamic nature, output multiplicities, and stability issues. This work develops a generic, data-driven adaptive PID controller (RL-PID) based on a modified Proximal Policy Optimization (m-PPO) reinforcement learning algorithm for the control of open-loop unstable processes. The RL agent, acting as a supervisor, explores and identifies optimal gains for the PID controller to ensure the desired servo and regulatory performance. Adaptive modifications, namely the inclusion of action repeat, a modified reward function, and an early stopping criterion, are incorporated into the m-PPO algorithm to handle the unbounded output of unstable processes. The effect of the m-PPO algorithm is demonstrated in terms of the reward earned by the RL agent. The servo and regulatory performance of the proposed RL-PID controller is compared with that of a classical PID controller, a Deep Deterministic Policy Gradient based PID controller, and an Advantage Actor Critic based PID controller on various linear, nonlinear, and multivariable unstable systems, including an unstable jacketed CSTR process and an Unmanned Aerial Vehicle, in a simulation environment. The proposed controller is also validated on a real-time level control process station, a laboratory-scale experimental test rig. The proposed RL-PID performs better than the other controllers in both qualitative and quantitative metrics. A striking feature of this control scheme is that it eliminates the need for process modeling and for prior knowledge of process dynamics and controller tuning. The proposed controller is a data-driven, generic approach that can be applied directly to any industrial process.

Highlights
• A model-free, data-driven controller is proposed for unstable systems.
• A reinforcement learning based Proportional Integral Derivative (PID) controller is proposed.
• Modified Proximal Policy Optimization is employed for optimal tuning of the controller.
• Early stopping, action repeat, and a modified reward are used in the optimization process.
• Validation is done with linear and complex nonlinear unstable systems.
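The abstract describes a loop in which the RL agent's action is the triple of PID gains, the gains are held fixed for several control steps (action repeat), the reward penalizes tracking error, and an episode is stopped early once the unstable output diverges. The sketch below illustrates that loop only in outline: the first-order unstable plant, its parameters, the reward shape, the divergence bound, and the policy callable standing in for the m-PPO actor are all assumptions made to keep the example self-contained, not the authors' implementation.

# Minimal, illustrative sketch (not the paper's code) of an RL-tuned PID loop
# on an assumed open-loop unstable plant  dy/dt = a*y + b*u  with a > 0.

import numpy as np


class PID:
    """Discrete PID controller whose gains can be re-tuned online."""

    def __init__(self, kp, ki, kd, dt):
        self.set_gains(kp, ki, kd)
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def set_gains(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def run_episode(policy, setpoint=1.0, dt=0.05, horizon=400,
                action_repeat=10, diverge_bound=10.0):
    """One episode: the policy proposes PID gains, the PID drives the plant.

    `policy(observation) -> (kp, ki, kd)` stands in for the m-PPO actor;
    here it is just a callable, not an actual PPO implementation.
    """
    a, b, y = 0.5, 1.0, 0.0          # assumed unstable plant parameters
    pid = PID(1.0, 0.0, 0.0, dt)
    total_reward = 0.0

    for k in range(horizon):
        error = setpoint - y
        # Re-tune the gains only every `action_repeat` steps (action repeat).
        if k % action_repeat == 0:
            kp, ki, kd = policy(np.array([y, error]))
            pid.set_gains(kp, ki, kd)
        u = pid.step(error)
        y += dt * (a * y + b * u)     # Euler step of the unstable plant

        # Reward shaped to penalize tracking error (one plausible choice).
        total_reward += -error ** 2
        # Early stopping: abandon the episode once the output diverges.
        if abs(y) > diverge_bound:
            total_reward -= 100.0     # extra penalty for instability
            break
    return total_reward


if __name__ == "__main__":
    # Placeholder "policy" returning fixed gains; a trained PPO actor network
    # would replace this in the scheme the abstract describes.
    fixed_gain_policy = lambda obs: (4.0, 1.0, 0.2)
    print("episode reward:", run_episode(fixed_gain_policy))

In a full training setup, run_episode would be wrapped as an environment and the policy updated from the collected rewards; the point of the sketch is only how action repeat, the shaped reward, and early stopping fit around a re-tunable PID.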