Topics
Gatekeeping
Incivility
Social media
Audit
Accountability
Subject (documents)
Politics
Sample (material)
Computer science
Internet privacy
Psychology
Advertising
Political science
World Wide Web
Business
Mathematics
Chemistry
Accounting
Law
Chromatography
Operations research
Authors
Kokil Jaidka, Subhayan Mukerjee, Yphtach Lelkes
Abstract
Algorithms play a critical role in steering online attention on social media, and many have alleged that they can perpetuate bias. This study audited shadowbanning, where a user or their content is temporarily hidden on Twitter. We repeatedly tested whether a stratified random sample of American Twitter accounts (n ≈ 25,000) had been subject to various forms of shadowbans, and then identified the user and tweet characteristics that predict a shadowban. In general, shadowbans are rare. We found that accounts with bot-like behavior were more likely to face shadowbans, while verified accounts were less likely to be shadowbanned. Replies by accounts that posted offensive tweets and tweets about politics (from both the left and the right) were more likely to be downtiered. The findings have implications for algorithmic accountability and the design of future audit studies of social media platforms.
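The study's second step, predicting which accounts get shadowbanned from their characteristics, is at heart a binary classification problem. Below is a minimal, hypothetical sketch in Python of that kind of analysis; the feature names (bot_score, verified, offensive_share, political_share) and the simulated data are illustrative assumptions, not the paper's actual variables, coefficients, or method.

```python
# Hypothetical sketch: regressing shadowban status on account features.
# All feature names and data below are simulated for illustration only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 25_000  # roughly the size of the study's stratified sample

# Simulated audit data: one row per account, one binary shadowban label.
accounts = pd.DataFrame({
    "bot_score": rng.uniform(0, 1, n),        # e.g., a Botometer-style score
    "verified": rng.integers(0, 2, n),        # 1 if the account is verified
    "offensive_share": rng.uniform(0, 1, n),  # share of tweets flagged offensive
    "political_share": rng.uniform(0, 1, n),  # share of tweets about politics
})

# Simulate the abstract's direction of effects (coefficients are made up):
# bot-like, offensive, and political accounts more likely to be shadowbanned,
# verified accounts less likely.
logits = (2.0 * accounts["bot_score"]
          - 1.5 * accounts["verified"]
          + 1.0 * accounts["offensive_share"]
          + 0.8 * accounts["political_share"]
          - 4.0)
accounts["shadowbanned"] = rng.uniform(size=n) < 1 / (1 + np.exp(-logits))

X = accounts.drop(columns="shadowbanned")
y = accounts["shadowbanned"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(dict(zip(X.columns, model.coef_[0])))  # sign shows each feature's direction
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

The sign of each fitted coefficient indicates whether a feature raises or lowers the estimated odds of a shadowban, which is the kind of evidence the abstract summarizes (bot-like behavior up, verification down).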