Reinforcement learning
Sample (material)
Rebar
Computer science
Artificial intelligence
Psychology
Social psychology
Chemistry
Chromatography
Authors
Liangliang Chen, Yutian Lei, Shiyu Jin, Ying Zhang, Liangjun Zhang
Source
Journal: IEEE Robotics and Automation Letters
Date: 2024-07-01
Volume/Issue: 9 (7): 6075-6082
Identifier
DOI: 10.1109/LRA.2024.3400189
Abstract
Reinforcement learning (RL) has demonstrated its capability in solving various tasks but is notorious for its low sample efficiency. In this paper, we propose RLingua, a framework that leverages the internal knowledge of large language models (LLMs) to reduce the sample complexity of RL in robotic manipulation. To this end, we first present a method for extracting the prior knowledge of LLMs by prompt engineering, so that a preliminary rule-based robot controller for a specific task can be generated in a user-friendly manner. Despite being imperfect, the LLM-generated robot controller is used to produce action samples during rollouts with a decaying probability, thereby improving RL's sample efficiency. We employ TD3, a widely used RL baseline method, and modify the actor loss to regularize the policy learning towards the LLM-generated controller. RLingua also provides a novel method of improving the imperfect LLM-generated robot controllers by RL. We demonstrate that RLingua can significantly reduce the sample complexity of TD3 in four robot tasks of panda_gym and achieve high success rates in 12 sparsely rewarded robot tasks in RLBench, where standard TD3 fails. Additionally, we validate RLingua's effectiveness in real-world robot experiments through Sim2Real, demonstrating that the learned policies are effectively transferable to real robot tasks. For videos, please visit https://rlingua.github.io.
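The two mechanisms named in the abstract (sampling actions from the LLM-generated controller with a decaying probability, and regularizing the TD3 actor loss towards that controller) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the exponential decay schedule, and the L2 regularization form are all assumptions introduced here for clarity.

```python
import math
import random

def select_action(state, step, policy_action, llm_controller_action,
                  p0=1.0, decay_rate=1e-4):
    """With probability p(step) = p0 * exp(-decay_rate * step), use the
    (imperfect) LLM-generated rule-based controller's action instead of
    the RL policy's action. The exact decay schedule is an assumption.
    """
    p = p0 * math.exp(-decay_rate * step)
    if random.random() < p:
        return llm_controller_action(state)  # action from LLM prior knowledge
    return policy_action(state)              # action from the learned policy

def regularized_actor_loss(q_value, policy_a, llm_a, lam=0.1):
    """Standard TD3 actor loss (-Q) plus a hypothetical L2 term that pulls
    the policy's action towards the LLM controller's action; `lam` trades
    off the two objectives.
    """
    mse = sum((p - l) ** 2 for p, l in zip(policy_a, llm_a)) / len(policy_a)
    return -q_value + lam * mse
```

Early in training the controller supplies most rollout actions, injecting the LLM's prior knowledge into the replay buffer; as the decay drives p toward zero, the learned policy takes over and can surpass the imperfect controller.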