Relative Behavioral Attributes: Filling the Gap between Symbolic Goal Specification and Reward Learning from Human Preferences

Keywords: Computer Science, Task (Project Management), Preference, Behavior Pattern, Human-Computer Interaction, Artificial Intelligence, Ambiguity, Generality, Clip, Psychology, Software Engineering, Economics, Microeconomics, Management, Programming Languages
Authors
Lin Guan, Karthik Valmeekam, Subbarao Kambhampati
Source
Journal: Cornell University - arXiv
Identifier
DOI: 10.48550/arxiv.2210.15906
Abstract

Generating complex behaviors that satisfy the preferences of non-expert users is a crucial requirement for AI agents. Interactive reward learning from trajectory comparisons (a.k.a. RLHF) is one way to allow non-expert users to convey complex objectives by expressing preferences over short clips of agent behaviors. Even though this parametric method can encode complex tacit knowledge present in the underlying tasks, it implicitly assumes that the human is unable to provide richer feedback than binary preference labels, leading to intolerably high feedback complexity and poor user experience. While providing a detailed symbolic closed-form specification of the objectives might be tempting, it is not always feasible even for an expert user. However, in most cases, humans are aware of how the agent should change its behavior along meaningful axes to fulfill their underlying purpose, even if they are not able to fully specify task objectives symbolically. Using this as motivation, we introduce the notion of Relative Behavioral Attributes, which allows the users to tweak the agent behavior through symbolic concepts (e.g., increasing the softness or speed of agents' movement). We propose two practical methods that can learn to model any kind of behavioral attributes from ordered behavior clips. We demonstrate the effectiveness of our methods on four tasks with nine different behavioral attributes, showing that once the attributes are learned, end users can produce desirable agent behaviors relatively effortlessly, by providing feedback just around ten times. This is over an order of magnitude less than that required by the popular learning-from-human-preferences baselines. The supplementary video and source code are available at: https://guansuns.github.io/pages/rba.
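The core idea above — learning a behavioral attribute from ordered behavior clips — can be illustrated with a minimal ranking-model sketch. This is not the paper's actual implementation: the features, data, and training loop below are toy assumptions, showing only the general pattern of fitting a scalar attribute score so that clips the user ranked higher (e.g., "softer" or "faster") receive higher scores, via a Bradley-Terry-style logistic ranking loss.

```python
# Hypothetical sketch: fit a linear attribute score f(clip) = w @ features(clip)
# from ordered clip pairs (i, j), where clip j exhibits MORE of the attribute.
# All names, shapes, and data here are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: each "clip" is a 3-d feature vector; the ground-truth attribute
# is a hidden linear function of the features.
true_w = np.array([1.0, -2.0, 0.5])
clips = rng.normal(size=(200, 3))
true_scores = clips @ true_w

# Ordered pairs (i, j): the annotator says clip j has a higher attribute value.
pairs = []
for _ in range(1000):
    i, j = rng.integers(0, len(clips), size=2)
    if true_scores[i] < true_scores[j]:
        pairs.append((i, j))

# Maximize the Bradley-Terry log-likelihood with plain gradient ascent.
w = np.zeros(3)
lr = 0.1
for _ in range(500):
    grad = np.zeros(3)
    for i, j in pairs:
        d = clips[j] - clips[i]
        p = 1.0 / (1.0 + np.exp(-(w @ d)))  # P(model ranks j above i)
        grad += (1.0 - p) * d               # gradient of log P
    w += lr * grad / len(pairs)

# Fraction of ordered pairs the learned score ranks correctly.
agreement = np.mean([(w @ (clips[j] - clips[i])) > 0 for i, j in pairs])
print(f"pairwise agreement: {agreement:.2f}")
```

Once such a score is learned, "increase the softness" becomes a concrete optimization target: search for behaviors whose attribute score exceeds the current one, which is what lets end users steer the agent with only a handful of comparisons.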
