Psychology
Feeling
Relevance (law)
Task (project management)
Social psychology
Applied psychology
Engineering
Political science
Law
Systems engineering
Authors
Hannah Fahnenstich, Tobias Rieger, Eileen Roesler
Identifiers
DOI: 10.1016/j.chb.2023.108107
Abstract
The growing number of safety-critical technologized workplaces leads to enhanced support of complex human decision-making by artificial intelligence (AI), increasing the relevance of risk in the joint decision process. This online study examined participants' trust attitude and behavior during a visual estimation task when supported by either a human or an AI decision support agent, with risk levels manipulated through different scenarios. Contrary to recent literature, no main effects were found in participants' trust attitude or trust behavior between support agent conditions or risk levels. However, participants using AI support exhibited increased trust behavior under higher risk, while participants with human support agents did not display behavioral differences. Possible explanations include reliance on self-confidence rather than trust and an increased feeling of responsibility. Furthermore, participants reported the human support agent to be more responsible for possible negative outcomes of the joint task than the AI support agent. Risk did not influence perceived responsibility. However, the study's findings concerning trust behavior underscore the crucial importance of investigating the impact of risk in workplaces, shedding light on the under-researched effect of risk on trust attitude and behavior in AI-supported human decision-making.