Keywords
Qualitative comparative analysis, Originality, Knowledge management, Transparency (behaviour), Empathy, Cognition, Structural equation modelling, Computer science, Context (archaeology), Psychology, Social psychology, Creativity, Paleontology, Computer security, Machine learning, Neuroscience, Biology
Source
Journal: The Electronic Library
[Emerald (MCB UP)]
Date: 2024-12-10
Identifier
DOI: 10.1108/el-08-2024-0244
Abstract
Purpose – The purpose of this study is to examine the effect of trust on user adoption of artificial intelligence-generated content (AIGC), based on the stimulus–organism–response (SOR) framework.

Design/methodology/approach – The authors conducted an online survey in China, a highly competitive AI market, and obtained 504 valid responses. Both structural equation modelling (SEM) and fuzzy-set qualitative comparative analysis (fsQCA) were used for data analysis.

Findings – The results indicated that perceived intelligence, perceived transparency and knowledge hallucination influence cognitive trust in the platform, whereas perceived empathy influences affective trust in the platform. Both cognitive trust and affective trust in the platform lead to trust in AIGC. Algorithm bias negatively moderates the effect of cognitive trust in the platform on trust in AIGC. The fsQCA identified three configurations leading to adoption intention.

Research limitations/implications – The main limitation is that more factors, such as culture, need to be included to examine their possible effects on trust. The implication is that generative AI platforms need to improve intelligence, transparency and empathy, and mitigate knowledge hallucination, in order to engender users' trust in AIGC and facilitate its adoption.

Originality/value – Existing research has mainly used technology adoption theories, such as the unified theory of acceptance and use of technology (UTAUT), to examine AIGC user behaviour and has seldom examined user trust development in the AIGC context. This research fills that gap by disclosing the mechanism underlying AIGC user trust formation.
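The fsQCA configurations reported in the abstract are evaluated with standard set-theoretic consistency and coverage measures. The sketch below (a minimal illustration with hypothetical fuzzy-set membership scores, not the study's data) shows how those two measures are conventionally computed for one configuration against an outcome such as adoption intention:

```python
# Fuzzy-set consistency and coverage, as conventionally defined in fsQCA.
# All membership scores below are hypothetical, for illustration only.

def consistency(config, outcome):
    # Degree to which the configuration is a subset of the outcome:
    # sum(min(x_i, y_i)) / sum(x_i)
    return sum(min(x, y) for x, y in zip(config, outcome)) / sum(config)

def coverage(config, outcome):
    # Degree to which the configuration accounts for the outcome:
    # sum(min(x_i, y_i)) / sum(y_i)
    return sum(min(x, y) for x, y in zip(config, outcome)) / sum(outcome)

# Hypothetical membership scores for five respondents in a configuration
# (e.g. high cognitive trust AND high affective trust) and in the outcome
# (adoption intention).
config = [0.8, 0.6, 0.9, 0.3, 0.7]
outcome = [0.9, 0.7, 0.95, 0.5, 0.8]

print(round(consistency(config, outcome), 3))  # → 1.0
print(round(coverage(config, outcome), 3))     # → 0.857
```

A configuration is typically retained as sufficient for the outcome only when its consistency exceeds a threshold (often around 0.8); coverage then indicates its empirical relevance.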