Human–computer interaction
Field (mathematics)
Robot
Artificial intelligence
Computer science
Interpretation (philosophy)
Sample (material)
Psychology
Robotics
Data science
Cognitive psychology
Chromatography
Chemistry
Programming language
Pure mathematics
Mathematics
Authors
John Innes,Ben W. Morrison
Identifier
DOI:10.1007/s12369-020-00671-8
Abstract
The rapid development of artificial intelligence brings with it the increasing likelihood of ubiquitous interaction between humans and robots. A significant contribution to studying human–robot interactions (HRI) comes from experimental studies, whereby humans and robots interact in controlled conditions and researchers observe and measure the reactions of humans (and robots). The use of experiments to understand human interactions has long been a central source of information in the field of experimental social psychology. These studies have yielded numerous major insights into the causes and outcomes of interaction. The methodology of experiments, however, including the demands made upon human participants to behave in predictable ways and the impact of experimenters’ expectancies upon results, has been a focus of much critical analysis. We examined a sample of 100 high impact HRI studies for evidence of potentially contaminating experimental artefacts and/or authors’ awareness of such factors. In our conclusions we highlight several methodological issues that appeared frequently in our sample, which may impede generalisations from laboratory experiments to real-world settings. Ultimately, we suggest that researchers may need to reformulate the methodologies used to study the unique features of HRI, and offer a number of recommendations for researchers designing HRI experiments.