
Relative Behavioral Attributes: Filling the Gap between Symbolic Goal Specification and Reward Learning from Human Preferences

Computer science, Task (project management), Preference, Behavioral pattern, Human-computer interaction, Artificial intelligence, Ambiguity, Generality, Clip, Psychology, Software engineering, Economics, Microeconomics, Management, Programming languages, Psychotherapist
Authors
Lin Guan, Karthik Valmeekam, Subbarao Kambhampati
Source
Journal: Cornell University - arXiv
Identifier
DOI: 10.48550/arxiv.2210.15906
Abstract

Generating complex behaviors that satisfy the preferences of non-expert users is a crucial requirement for AI agents. Interactive reward learning from trajectory comparisons (a.k.a. RLHF) is one way to allow non-expert users to convey complex objectives by expressing preferences over short clips of agent behaviors. Even though this parametric method can encode complex tacit knowledge present in the underlying tasks, it implicitly assumes that the human is unable to provide richer feedback than binary preference labels, leading to intolerably high feedback complexity and poor user experience. While providing a detailed symbolic closed-form specification of the objectives might be tempting, it is not always feasible even for an expert user. However, in most cases, humans are aware of how the agent should change its behavior along meaningful axes to fulfill their underlying purpose, even if they are not able to fully specify task objectives symbolically. Using this as motivation, we introduce the notion of Relative Behavioral Attributes, which allows users to tweak agent behavior through symbolic concepts (e.g., increasing the softness or speed of an agent's movement). We propose two practical methods that can learn to model any kind of behavioral attribute from ordered behavior clips. We demonstrate the effectiveness of our methods on four tasks with nine different behavioral attributes, showing that once the attributes are learned, end users can produce desirable agent behaviors relatively effortlessly, by providing only around ten instances of feedback. This is over an order of magnitude fewer than what popular learning-from-human-preferences baselines require. The supplementary video and source code are available at: https://guansuns.github.io/pages/rba.
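To make the core idea of "learning attribute models from ordered behavior clips" concrete, below is a minimal sketch of one common way to realize it: training a scalar attribute scorer with a pairwise margin ranking loss, so that a clip exhibiting more of the attribute (e.g., a softer movement) receives a higher score. Everything here is an illustrative assumption rather than the authors' actual implementation: the names `AttributeScorer` and `train_attribute`, the per-state MLP averaged over time, the margin value, and the use of `MarginRankingLoss` are all hypothetical stand-ins for whichever of the paper's two methods one has in mind.

```python
# A minimal sketch of learning one behavioral attribute from ordered
# clip pairs. Assumption: each clip is a (T, obs_dim) tensor of states.
import torch
import torch.nn as nn

class AttributeScorer(nn.Module):
    """Maps a behavior clip (T x obs_dim states) to a scalar attribute score."""
    def __init__(self, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # Score each state, then average over time for a clip-level score.
        return self.net(clip).mean(dim=-2).squeeze(-1)

def train_attribute(scorer, pairs, epochs=100, lr=1e-3, margin=0.1):
    """pairs: list of (clip_lo, clip_hi) where clip_hi shows MORE of the
    attribute (e.g., faster or softer movement) than clip_lo."""
    opt = torch.optim.Adam(scorer.parameters(), lr=lr)
    loss_fn = nn.MarginRankingLoss(margin=margin)
    for _ in range(epochs):
        for clip_lo, clip_hi in pairs:
            s_lo = scorer(clip_lo.unsqueeze(0))
            s_hi = scorer(clip_hi.unsqueeze(0))
            # Target +1: s_hi should exceed s_lo by at least the margin.
            loss = loss_fn(s_hi, s_lo, torch.ones_like(s_hi))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return scorer

# Example usage (hypothetical random data):
# scorer = AttributeScorer(obs_dim=8)
# pairs = [(torch.randn(20, 8), torch.randn(20, 8))]  # (less, more) clips
# train_attribute(scorer, pairs)
```

Once such a scorer is trained, a user's relative request ("make the movement softer") can be translated into a target shift on the scorer's output, which is the general idea the abstract gestures at; how that shift is turned into agent behavior differs between the paper's two proposed methods.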
