Keywords: Guard (computer science), Computer science, Democracy, Representation (politics), Artificial intelligence, Legislature, Field (mathematics), Natural language processing, Language model, Machine learning, Political science, Law, Mathematics, Programming language, Politics, Pure mathematics
Authors
Sarah Kreps, Douglas L. Kriner
Identifier
DOI: 10.1177/14614448231160526
Abstract
Advances in machine learning have led to the creation of natural language models that can mimic human writing style and substance. Here we investigate the challenge that machine-generated content, such as that produced by the model GPT-3, presents to democratic representation by assessing the extent to which machine-generated content can pass as constituent sentiment. We conduct a field experiment in which we send both handwritten and machine-generated letters (a total of 32,398 emails) to 7132 state legislators. We compare legislative response rates for the human versus machine-generated constituency letters to gauge whether language models can approximate inauthentic constituency voices at scale. Legislators were only slightly less likely to respond to artificial intelligence (AI)-generated content than to human-written emails; the 2% difference in response rate was statistically significant but substantively small. Qualitative evidence sheds light on the potential perils that this technology presents for democratic representation, but also suggests potential techniques that legislators might employ to guard against misuses of language models.
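The abstract's claim that a 2% response-rate gap is "statistically significant but substantively small" reflects a standard comparison of two proportions over a large sample. As a minimal sketch, the following two-proportion z-test uses illustrative placeholder counts (roughly 16,000 letters per arm and rates two points apart), not the study's actual data:

```python
import math

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Return (z, two-sided p-value) for H0: the two response rates are equal."""
    p_a = success_a / n_a
    p_b = success_b / n_b
    # pooled proportion under the null hypothesis
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 20% vs 18% response rates across ~16,000 letters per arm.
z, p = two_proportion_z_test(3200, 16000, 2880, 16000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With samples this large, even a two-point gap yields a highly significant test statistic, which is consistent with the abstract's point that statistical significance need not imply a substantively large effect.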