Recommender systems
Computer science
Context (archaeology)
User satisfaction
Information retrieval
User interface
Metric (data warehouse)
Human–computer interaction
World Wide Web
Data mining
Paleontology
Biology
Operating systems
Authors
Conor Hayes,Pádraig Cunningham
Abstract
Several techniques are currently used to evaluate recommender systems. These techniques involve off-line analysis using evaluation methods from machine learning and information retrieval. We argue that while off-line analysis is useful, user satisfaction with a recommendation strategy can only be measured in an on-line context. We propose a new evaluation framework involving a paired test of two recommender systems that compete simultaneously to give the best recommendations to the same user at the same time. The user interface and the interaction model for each system are the same. The framework enables the specification of an API so that different recommendation strategies may take part in such a competition. The API defines issues such as access to data, the interaction model and the means of gathering positive feedback from the user. In this way it is possible to obtain a relative measure of user satisfaction with the two systems.
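The paired on-line test described in the abstract can be sketched in code. The following is a minimal, hypothetical illustration, not the authors' actual framework: two strategies implement a shared recommendation API, both produce lists for the same user through one interface, and positive feedback (e.g. a click) is credited to whichever strategy sourced the item, yielding a relative satisfaction count. All class and method names (`Recommender`, `PairedTrial`, `record_click`, etc.) are assumptions made for this sketch.

```python
import random
from abc import ABC, abstractmethod
from collections import Counter

class Recommender(ABC):
    """Hypothetical common API that each competing strategy must implement."""
    @abstractmethod
    def recommend(self, user_id: str, n: int) -> list:
        ...

class PopularityRecommender(Recommender):
    """Toy strategy: always recommends the globally most popular items."""
    def __init__(self, item_popularity: dict):
        self.ranked = sorted(item_popularity, key=item_popularity.get, reverse=True)
    def recommend(self, user_id, n):
        return self.ranked[:n]

class RandomRecommender(Recommender):
    """Toy baseline strategy: recommends items uniformly at random."""
    def __init__(self, items, seed=0):
        self.items = list(items)
        self.rng = random.Random(seed)
    def recommend(self, user_id, n):
        return self.rng.sample(self.items, n)

class PairedTrial:
    """Serve both strategies' lists to the same user through one interface
    and count which strategy sourced each item the user responds to."""
    def __init__(self, strategy_a: Recommender, strategy_b: Recommender):
        self.strategies = {"A": strategy_a, "B": strategy_b}
        self.wins = Counter()
    def serve(self, user_id, n=3):
        # Both strategies recommend for the same user at the same time;
        # each item is tagged with the strategy that first proposed it.
        slate = {}
        for label, strategy in self.strategies.items():
            for item in strategy.recommend(user_id, n):
                slate.setdefault(item, label)
        return slate
    def record_click(self, slate, item):
        # Positive feedback credits the strategy that sourced the item.
        self.wins[slate[item]] += 1
```

In a real deployment the slate would be rendered through the shared user interface, so the user cannot tell which strategy produced which item; the `wins` counter then gives the relative on-line measure of user satisfaction the abstract argues for.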