Narrative
Content (measure theory)
Generative grammar
Media content
Dynamics (music)
Generative model
Psychology
Health communication
Computer science
Internet privacy
World Wide Web
Multimedia
Communication
Artificial intelligence
Mathematics
Art
Literature
Mathematical analysis
Pedagogy
Authors
Seema Shukla, Babita Pandey, Devendra Kumar Pandey, Brijendra Pratap Mishra, Aditya Khamparia
Identifier
DOI:10.1002/9781394280735.ch21
Abstract
Large language models, generative adversarial networks (GANs), and variational autoencoders (VAEs) are the foundational technologies behind interfaces such as Chat Generative Pre-Trained Transformer (a text generator) and DALL-E 2 (a text-to-image generator), and they are poised to revolutionize how users access and understand health information. The rapid uptake of and investment in these technologies suggest they will be transformative, yet their implications for health communication remain unclear. In this viewpoint, we present a research study that measures individual trust using a previously established trust scale and examines the impact of displaying disclaimers on trust in content generated by artificial intelligence (AI). Data analysis using SmartPLS indicates that all three components of trust have a positive impact on individual trust, and semi-structured interviews further reinforce these findings. This study sheds light on the adoption of new information technologies, focusing on how generative AI tools such as large language models, GANs, and VAEs may alter the production and consumption of health information. We explore how these technologies may influence the content people encounter, the blending of marketing and misinformation with evidence, and the factors that shape trust.