Expectancy theory
Misinformation
Knowledge management
Empirical evidence
Perception
Computer science
Psychology
Social psychology
Epistemology
Computer security
Philosophy
Neuroscience
Author
Mark Anthony Camilleri
Identifier
DOI:10.1016/j.techfore.2024.123247
Abstract
Few studies have explored the use of artificial intelligence-enabled (AI-enabled) large language models (LLMs). This research addresses this knowledge gap by investigating perceptions and behavioral intentions to utilize AI dialogue systems like Chat Generative Pre-Trained Transformer (ChatGPT). A survey questionnaire comprising measures drawn from key information technology adoption models was used to capture quantitative data from a sample of 654 respondents. A partial least squares (PLS) approach was used to assess the constructs' reliabilities and validities, and to identify the relative strength and significance of the causal paths in the proposed research model. The findings from SmartPLS4 reveal highly significant effects, particularly between source trustworthiness and performance expectancy of AI chatbots, as well as between perceived interactivity and intentions to use this technology, among others. In conclusion, this contribution puts forward a robust information technology acceptance framework that clearly evidences the factors that entice online users to habitually engage with text-generating AI chatbot technologies. It implies that although these may be considered useful interactive systems for content creators, there is scope to continue improving the quality of their responses (in terms of accuracy and timeliness) to reduce misinformation, social biases, hallucinations and adversarial prompts.
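The reliability and validity assessment mentioned in the abstract is usually reported through composite reliability (CR) and average variance extracted (AVE), both computed from the outer loadings of each reflective construct. The sketch below is a minimal illustration of those two standard formulas only; the construct name and loading values are hypothetical and are not taken from this study.

```python
# Minimal sketch of the composite reliability (CR) and average variance
# extracted (AVE) checks commonly reported in PLS-SEM studies.
# The loading values below are hypothetical, NOT results from the paper.

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    sum_l = sum(loadings)
    error_var = sum(1 - l ** 2 for l in loadings)
    return sum_l ** 2 / (sum_l ** 2 + error_var)

def average_variance_extracted(loadings):
    """AVE = mean of squared loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical outer loadings for a "performance expectancy" construct.
pe_loadings = [0.82, 0.79, 0.88, 0.75]

print(f"CR  = {composite_reliability(pe_loadings):.3f}")       # common threshold: >= 0.70
print(f"AVE = {average_variance_extracted(pe_loadings):.3f}")  # common threshold: >= 0.50
```

Tools such as SmartPLS report these statistics automatically; the sketch only makes explicit what the thresholds refer to.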