Reinforcement learning
Zero (linguistics)
Reinforcement
Shot
Computer science
Artificial intelligence
Psychology
Social psychology
Materials science
Philosophy
Linguistics
Metallurgy
Authors
Kevin Frans,Seohong Park,Pieter Abbeel,Sergey Levine
Source
Journal: Cornell University - arXiv
Date: 2024-02-26
Identifier
DOI: 10.48550/arxiv.2402.17135
Abstract
Can we pre-train a generalist agent from a large amount of unlabeled offline trajectories such that it can be immediately adapted to any new downstream tasks in a zero-shot manner? In this work, we present a functional reward encoding (FRE) as a general, scalable solution to this zero-shot RL problem. Our main idea is to learn functional representations of any arbitrary tasks by encoding their state-reward samples using a transformer-based variational auto-encoder. This functional encoding not only enables the pre-training of an agent from a wide diversity of general unsupervised reward functions, but also provides a way to solve any new downstream tasks in a zero-shot manner, given a small number of reward-annotated samples. We empirically show that FRE agents trained on diverse random unsupervised reward functions can generalize to solve novel tasks in a range of simulated robotic benchmarks, often outperforming previous zero-shot RL and offline RL methods. Code for this project is provided at: https://github.com/kvfrans/fre
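The abstract describes encoding a task from its state-reward samples into a latent vector via a transformer-based variational auto-encoder, then conditioning the agent on that latent. The toy sketch below illustrates that encode/decode idea only; all names and weights are hypothetical, and mean pooling stands in for the transformer encoder (the real implementation is in the linked repository):

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, LATENT_DIM, HIDDEN = 4, 8, 16

# Random weights stand in for trained parameters (illustration only).
W_in = rng.normal(size=(STATE_DIM + 1, HIDDEN)) * 0.1
W_mu = rng.normal(size=(HIDDEN, LATENT_DIM)) * 0.1
W_logvar = rng.normal(size=(HIDDEN, LATENT_DIM)) * 0.1
W_dec = rng.normal(size=(LATENT_DIM + STATE_DIM, 1)) * 0.1

def encode(states, rewards):
    """Map a set of (state, reward) samples to a latent task code.

    Mean pooling over the sample set is a permutation-invariant
    stand-in for the paper's transformer encoder."""
    x = np.concatenate([states, rewards[:, None]], axis=1)   # (N, STATE_DIM+1)
    h = np.tanh(x @ W_in).mean(axis=0)                       # pooled features
    mu, logvar = h @ W_mu, h @ W_logvar
    # Reparameterization trick, as in a standard VAE.
    return mu + np.exp(0.5 * logvar) * rng.normal(size=LATENT_DIM)

def decode(z, state):
    """Predict the reward of `state` under the task encoded by `z`."""
    return float(np.concatenate([z, state]) @ W_dec)

# A toy downstream task: reward is negative distance to the origin.
# A few reward-annotated samples suffice to produce a task encoding.
states = rng.normal(size=(32, STATE_DIM))
rewards = -np.linalg.norm(states, axis=1)
z = encode(states, rewards)          # zero-shot task representation
pred = decode(z, states[0])          # reward estimate for a query state
```

In the paper's setting, a policy conditioned on `z` would then act to maximize the encoded reward; here the decoder merely shows that the latent carries task information.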