Keywords: Computer science; Dynamical systems theory; Controller (irrigation); Cruise control; Control (management); Limit (mathematics); Dynamical system (definition); Control theory (sociology); Control engineering; Artificial intelligence; Engineering; Mathematics; Mathematical analysis; Physics; Quantum mechanics; Agronomy; Biology
Authors
Marc-Antoine Beaudoin, Benoît Boulet
Source
Journal: IEEE Transactions on Intelligent Vehicles
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Volume/Issue: 8 (1): 868-877
Citations: 2
Identifier
DOI: 10.1109/tiv.2022.3148212
Abstract
Approaches to keeping a dynamical system within state constraints typically rely on a model-based safety condition to limit the control signals. In the face of significant modeling uncertainty, the system can suffer substantial performance penalties because the safety condition becomes overly conservative. Machine learning can be employed to reduce the uncertainty around the system dynamics and allow for higher performance. In this article, we propose the safe uncertainty-learning principle and argue that the learning must be properly structured to preserve safety guarantees. For instance, robust safety conditions are necessary, and they must be initialized with conservative uncertainty bounds prior to learning. Also, the uncertainty bounds should only be tightened if the collected data sufficiently capture the future system behavior. To support the principle, two example problems are solved with control barrier functions: a lane-change controller for an autonomous vehicle, and an adaptive cruise controller. This work offers a way to evaluate whether machine learning preserves safety guarantees during the control of uncertain dynamical systems, and it highlights challenging aspects of learning for control.
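To make the abstract's idea concrete, here is a minimal sketch of a robust control-barrier-function safety filter for an adaptive-cruise-control-style problem. The model, barrier function, and all parameter values are illustrative assumptions, not the paper's actual formulation: the ego vehicle keeps a gap d to the lead vehicle, with dynamics d_dot = v_lead - v and v_dot = u + w, where |w| <= w_bar is a bounded model uncertainty. The barrier h(d, v) = d - tau*v encodes a minimum time headway, and the robust CBF condition h_dot >= -alpha*h must hold for the worst-case disturbance, which caps the acceleration command.

```python
# Hypothetical robust CBF safety filter (illustrative, not the paper's design).
# Assumed model: d_dot = v_lead - v,  v_dot = u + w,  |w| <= w_bar.
# Barrier: h(d, v) = d - tau * v  (keep at least tau seconds of headway).
# Robust safety condition: h_dot >= -alpha * h for the worst-case w,
# i.e.  v_lead - v - tau * (u + w_bar) >= -alpha * h,
# which rearranges into an upper bound on the acceleration u.

def cbf_safe_accel(d, v, v_lead, u_des, tau=1.8, alpha=1.0, w_bar=0.5):
    """Clip the desired acceleration u_des so the robust CBF condition holds."""
    h = d - tau * v
    # The worst-case disturbance w = +w_bar works against safety (it inflates
    # the ego speed), so the conservative bound subtracts tau * w_bar.
    u_max = (v_lead - v - tau * w_bar + alpha * h) / tau
    return min(u_des, u_max)

# With a comfortable gap, the desired acceleration passes through unchanged;
# with a short gap and a slower lead vehicle, the filter commands braking.
print(cbf_safe_accel(50.0, 20.0, 20.0, 2.0))   # safe: u_des is returned
print(cbf_safe_accel(30.0, 20.0, 15.0, 2.0))   # unsafe: negative (braking)
```

The sketch also shows the abstract's learning principle: the filter starts with a conservative uncertainty bound `w_bar`, and tightening it (e.g., after learning reduces the dynamics uncertainty) raises `u_max` and makes the controller less conservative, while tightening prematurely would invalidate the worst-case guarantee.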