Robots
Punishment (psychology)
Psychology
Moderation
Social psychology
Teamwork
Prosocial behavior
Human-robot interaction
Selfishness
Computer science
Artificial intelligence
Political science
Law
Source
Journal: Human Factors [SAGE]
Date: 2022-10-11
Volume/Issue: 66 (4): 1103-1117
Citations: 3
Identifier
DOI: 10.1177/00187208221133272
Abstract
Objective: Based on social exchange theory, this study investigates the effects of robots' fairness and social status on humans' reward-punishment behaviors and trust in human-robot interaction.
Background: In human-robot teamwork, robots may show fair behavior, dedication (altruistic unfair behavior), or selfishness (self-interested unfair behavior), but few studies have examined the effects of these behaviors on teamwork.
Method: This study adopted a 3 × 3 experimental design, crossing the robot's fairness (the independent variable: self-interested unfair, fair, and altruistic unfair behaviors) with the robot's social status (the moderator variable: superior, peer, and subordinate). Each participant completed the experimental task together with a robot through a computer.
Results: Across the robot's social statuses, the more altruistic the robot's behavior, the more humans rewarded it, the less they punished it, and the more they trusted it. A higher robot social status weakened the influence of its fairness on humans' punishment behaviors. Human-robot trust increased humans' reward behaviors and decreased their punishment behaviors, and humans' reward-punishment behaviors in turn increased repaired human-robot trust.
Conclusion: Robots' fairness has a significant impact on humans' reward-punishment behaviors and trust. Robots' social status moderates the effect of their fairness on humans' punishment behaviors. Humans' reward-punishment behaviors and trust interact with each other.
Application: The study helps to clarify the interaction mechanisms of human-robot teams and can inform the management and cooperation of such teams by appropriately adjusting robots' fairness and social status.
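For concreteness, the sketch below enumerates the nine cells of the 3 × 3 design described in the Method. The condition labels are taken from the abstract; the enumeration itself is only an illustrative assumption, not the authors' experimental materials or code.

```python
from itertools import product

# Illustrative sketch of the 3 x 3 factorial design from the Method:
# robot fairness (independent variable) x robot social status (moderator).
fairness = ["self-interested unfair", "fair", "altruistic unfair"]
status = ["superior", "peer", "subordinate"]

# Each of the nine cells corresponds to one experimental condition.
for i, (f, s) in enumerate(product(fairness, status), start=1):
    print(f"Condition {i}: fairness = {f}, status = {s}")
```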