Tourism
Influencer marketing
Misinformation
Marketing
Generative grammar
Government (linguistics)
Psychology
Generative model
Empirical research
Computer science
Advertising
Sociology
Business
Artificial intelligence
Political science
Epistemology
Linguistics
Relationship marketing
Philosophy
Marketing management
Law
Computer security
Authors
Jeff Christensen, Jared M. Hansen, Paul Wilson
Identifier
DOI:10.1080/13683500.2023.2300032
Abstract
ChatGPT, which launched only a year ago, is the fastest-growing website in the world today. When generative AI software such as ChatGPT generates ideas for people, it often generates false ones. This occurrence has been called 'AI Hallucination', and the resulting false text output can range from extremely believable to complete gibberish. This source of potential misinformation has significant implications for the travel and tourism industry. Using survey responses from 900 consumers, this empirical study contributes to theorizing about, and examination of, how consumers' awareness of the potential for AI Hallucination combines with existing concepts from the Technology Acceptance Model (TAM) and the Theory of Planned Behaviour (TPB) in the decision to use generative AI platforms such as ChatGPT for tourism planning. This research also examines whether consumers are actually able to discern AI Hallucination and why they choose AI technologies over other tourism information sources, such as aggregated peer-review websites like TripAdvisor, government tourism websites, or social media influencers. The results indicate that many consumers chose error-filled AI tourism itineraries over other options because they trusted the AI to be more impartial and customized than the other sources.