Keywords
Reinforcement learning
Computer science
Adversarial system
Leverage (statistics)
Robustness (evolution)
Adversary
Artificial intelligence
Machine learning
Task (project management)
Computer security
Engineering
Systems engineering
Gene
Biochemistry
Chemistry
Authors
Lucas Schott,Hatem Hajri,Sylvain Lamprier
Identifier
DOI: 10.1109/ijcnn55064.2022.9892901
Abstract
To improve the robustness of deep reinforcement learning agents, a line of recent work focuses on producing disturbances of the dynamics of the environment. Existing approaches in the literature generate such disturbances through environment adversarial reinforcement learning: the problem is set as a two-player game between a protagonist agent, which learns to perform a task in an environment, and an adversary agent, which learns to disturb the dynamics of that environment so as to make the protagonist fail. Alternatively, we propose to build on gradient-based adversarial attacks, usually used for classification tasks, which we apply to the critic network of the protagonist to identify efficient disturbances of the environment dynamics. Rather than training an adversary agent, which usually proves complex and unstable, we leverage the knowledge embodied in the protagonist's critic network to dynamically increase the complexity of the task at each step of the learning process. We show that our method, while being faster and lighter, yields significantly better improvements in policy robustness than existing methods in the literature.
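To make the core idea concrete, here is a minimal, hypothetical sketch of a gradient-based (FGSM-style) attack driven by the protagonist's critic, assuming a PyTorch critic network that maps a state tensor to a value estimate V(s). The function name, the epsilon step size, and the choice of a plain fast-gradient-sign step are illustrative assumptions, not the authors' exact procedure.

import torch

def critic_guided_disturbance(critic, state, epsilon=0.05):
    # Hypothetical sketch: nudge the state in the direction that most
    # decreases the critic's value estimate V(s), i.e. the direction the
    # protagonist's own critic considers most harmful to the protagonist.
    state = state.clone().detach().requires_grad_(True)
    value = critic(state).sum()   # scalar value estimate (summed over batch)
    value.backward()              # gradient dV/ds via autograd
    disturbance = -epsilon * state.grad.sign()  # FGSM-style signed step
    return (state + disturbance).detach()

In this sketch the perturbed state stands in for a disturbance of the environment dynamics; no adversary policy is trained, which is what makes such an approach faster and lighter than two-player adversarial training.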