Keywords: Control (management), Third party, Work (physics), Psychology, Computer science, Business, Internet privacy, Public relations, Artificial intelligence, Political science, Engineering, Mechanical engineering
Identifier: DOI 10.1177/00018392211010118
Abstract
Existing research has shown that people experience third-party evaluations as a form of control because they try to align their behavior with evaluations’ criteria to secure more favorable resources, recognition, and opportunities from external audiences. Much of this research has focused on evaluations with transparent criteria, but increasingly, algorithmic evaluation systems are not transparent. Drawing on over three years of interviews, archival data, and observations as a registered user on a labor platform, I studied how freelance workers contend with an opaque third-party evaluation algorithm—and with what consequences. My findings show the platform implemented an opaque evaluation algorithm to meaningfully differentiate between freelancers’ rating scores. Freelancers experienced this evaluation as a form of control but could not align their actions with its criteria because they could not clearly identify those criteria. I found freelancers had divergent responses to this situation: some experimented with ways to improve their rating scores, and others constrained their activity on the platform. Their reactivity differed based not only on their general success on the platform—whether they were high or low performers—but also on how much they depended on the platform for work and whether they experienced setbacks in the form of decreased evaluation scores. These workers experienced what I call an “invisible cage”: a form of control in which the criteria for success and changes to those criteria are unpredictable. For gig workers who rely on labor platforms, this form of control increasingly determines their access to clients and projects while undermining their ability to understand and respond to factors that determine their success.