Concepts
Reinforcement learning, Computer science, Generalization, Artificial intelligence, Inference, Representation (politics), Suite, Coding (set theory), Feature learning, Feature (linguistics), Visual learning, Machine learning, Deep learning, Domain (mathematical analysis), Programming language, Mathematical analysis, Law, Set (abstract data type), Archaeology, Philosophy, Developmental psychology, Psychology, History, Politics, Linguistics, Mathematics, Political science
Authors
Hyesong Choi, Hunsang Lee, Seong‐Jae Jeong, Dongbo Min
Identifier
DOI: 10.1109/iccv51070.2023.00031
Abstract
The generalization capability of vision-based deep reinforcement learning (RL) is indispensable for coping with the dynamic environment changes that arise in visual observations. The high-dimensional space of the visual input, however, poses challenges in adapting an agent to unseen environments. In this work, we propose Environment Agnostic Reinforcement learning (EAR), a compact framework for domain generalization in visual deep RL. Environment-agnostic features (EAFs) are extracted by leveraging three novel objectives based on feature factorization, reconstruction, and episode-aware state shifting, so that policy learning is accomplished only with vital features. EAR is a simple single-stage method with low model complexity and fast inference time, ensuring high reproducibility while attaining state-of-the-art performance on the DeepMind Control Suite and DrawerWorld benchmarks. Code is available at: https://github.com/doihye/EAR.
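The abstract names three auxiliary objectives (feature factorization, reconstruction, and episode-aware state shifting) that steer the encoder toward environment-agnostic features used for policy learning. The PyTorch sketch below is only a rough illustration of how such terms could be combined into a single auxiliary loss; the module names, the split-in-half factorization, and the exact form of every loss term are assumptions made for this example, not the authors' implementation (see https://github.com/doihye/EAR for the actual code).

# Illustrative sketch only: combines factorization-, reconstruction-, and
# shifting-style terms into one auxiliary loss for a visual RL encoder.
# Every module and loss form here is an assumption, not the EAR implementation.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy CNN encoder whose output is split into agnostic/specific halves."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fc = nn.Linear(32, 2 * feat_dim)

    def forward(self, obs):
        z = self.fc(self.conv(obs))
        # Crude stand-in for feature factorization: split into two halves.
        z_agnostic, z_specific = z.chunk(2, dim=-1)
        return z_agnostic, z_specific

class Decoder(nn.Module):
    """Toy decoder reconstructing the observation from both feature parts."""
    def __init__(self, feat_dim=64, obs_shape=(3, 84, 84)):
        super().__init__()
        self.obs_shape = obs_shape
        self.fc = nn.Linear(2 * feat_dim, math.prod(obs_shape))

    def forward(self, z_agnostic, z_specific):
        flat = self.fc(torch.cat([z_agnostic, z_specific], dim=-1))
        return flat.view(-1, *self.obs_shape)

def auxiliary_loss(encoder, decoder, obs, shifted_obs):
    """obs: current observations; shifted_obs: observations taken from another
    step of the same episode (a stand-in for episode-aware state shifting)."""
    z_a, z_s = encoder(obs)
    z_a_shift, _ = encoder(shifted_obs)

    recon = decoder(z_a, z_s)
    loss_recon = F.mse_loss(recon, obs)        # reconstruction term
    loss_factor = (z_a * z_s).mean().abs()     # push the two parts apart (proxy)
    loss_shift = F.mse_loss(z_a, z_a_shift)    # keep the agnostic part stable
    return loss_recon + loss_factor + loss_shift

if __name__ == "__main__":
    enc, dec = Encoder(), Decoder()
    obs = torch.rand(8, 3, 84, 84)             # batch of fake 84x84 RGB frames
    shifted = torch.rand(8, 3, 84, 84)
    loss = auxiliary_loss(enc, dec, obs, shifted)
    loss.backward()
    print(float(loss))

In the actual method the factorization and shifting objectives are defined differently; the sketch only shows the general plumbing of summing auxiliary losses that shape the encoder alongside RL policy learning.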