Keywords
Misinformation
Psychology
Institutionalization
Cognitive psychology
Cognitive science
Linguistics
Political science
Psychiatry
Law
Philosophy
Authors
Maryanne Garry, Way Ming Chan, Jeffrey L. Foster, Linda A. Henkel
Identifiers
DOI: 10.1016/j.tics.2024.08.007
Abstract
Large language models (LLMs), such as ChatGPT, flood the Internet with true and false information, crafted and delivered with techniques that psychological science suggests will encourage people to think that information is true. What's more, as people feed this misinformation back into the Internet, emerging LLMs will adopt it and feed it back into other models. Such a scenario means we could lose access to information that helps us tell what is real from unreal - to do 'reality monitoring.' If that happens, misinformation will be the new foundation we use to plan, to make decisions, and to vote. We will lose trust in our institutions and each other.