Leverage (statistics)
Computer science
Preference
Cold start (automotive)
Natural language processing
Artificial intelligence
Dialog box
Task (project management)
Recommender system
Diversity (cybernetics)
Preference learning
Language model
Machine learning
World Wide Web
Management
Microeconomics
Economics
Aerospace engineering
Engineering
Authors
Scott Sanner,Krisztian Balog,Filip Radlinski,Ben Wedin,Lucas Dixon
Identifier
DOI:10.1145/3604915.3608845
Abstract
Traditional recommender systems leverage users’ item preference history to recommend novel content that users may like. However, modern dialog interfaces that allow users to express language-based preferences offer a fundamentally different modality for preference input. Inspired by recent successes of prompting paradigms for large language models (LLMs), we study their use for making recommendations from both item-based and language-based preferences in comparison to state-of-the-art item-based collaborative filtering (CF) methods. To support this investigation, we collect a new dataset consisting of both item-based and language-based preferences elicited from users along with their ratings on a variety of (biased) recommended items and (unbiased) random items. Among numerous experimental results, we find that LLMs provide competitive recommendation performance for pure language-based preferences (no item preferences) in the near cold-start case in comparison to item-based CF methods, despite having no supervised training for this specific task (zero-shot) or only a few labels (few-shot). This is particularly promising as language-based preference representations are more explainable and scrutable than item-based or vector-based representations.
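The zero-shot setup the abstract describes — prompting an LLM with a user's language-based preference to rank candidate items, with no task-specific training — can be sketched roughly as follows. The prompt template, item list, and function name here are illustrative assumptions, not the paper's actual protocol or dataset.

```python
# Hypothetical sketch of zero-shot recommendation from a
# language-based preference. The template is an assumption;
# the paper does not publish this exact prompt.

def build_zero_shot_prompt(preference: str, candidates: list[str], k: int = 3) -> str:
    """Format a natural-language preference statement and a set of
    candidate items into a single ranking prompt for an LLM."""
    items = "\n".join(f"- {c}" for c in candidates)
    return (
        f'A user describes their taste as: "{preference}"\n'
        f"From the following items, list the {k} they are most likely "
        f"to enjoy, most likely first:\n{items}\n"
        "Ranked recommendations:"
    )

prompt = build_zero_shot_prompt(
    "I love slow-burn sci-fi with strong world-building",
    ["Dune", "The Notebook", "Blade Runner 2049", "Mamma Mia!"],
)
print(prompt)
```

The completion returned by the LLM would then be parsed into a ranked item list; a few-shot variant would simply prepend a handful of labeled preference/recommendation pairs to the same prompt. A key advantage noted in the abstract is that the preference representation here is a scrutable sentence rather than an opaque embedding.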