Misinformation
Transparency (behavior)
Transformative learning
Generative grammar
Health communication
Internet privacy
Public relations
Optics (focus)
Health information
Information and communication technology
Computer science
Knowledge management
Psychology
Health care
Political science
Computer security
World Wide Web
Artificial intelligence
Law
Physics
Optics
Education
Authors
Adam G. Dunn,Ivy Shih,Julie Ayre,Heiko Spallek
Identifier
DOI:10.1080/17538068.2023.2277489
Abstract
Large language models are fundamental technologies used in interfaces like ChatGPT and are poised to change the way people access and make sense of health information. The speed of uptake and investment suggests that these will be transformative technologies, but it is not yet clear what the implications might be for health communications. In this viewpoint, we draw on research about the adoption of new information technologies to focus on the ways that generative artificial intelligence (AI) tools like large language models might change how health information is produced, what health information people see, how marketing and misinformation might be mixed with evidence, and what people trust. We conclude that transparency and explainability in this space must be carefully considered to avoid unanticipated consequences.