Transparency (behavior)
Sketch
Accountability
Sociotechnical system
Constructive
Typology
Ideal (ethics)
Computer science
Epistemology
Sociology
Data science
Engineering ethics
Computer security
Political science
Knowledge management
Law
Algorithm
Philosophy
Engineering
Operating system
Process (computing)
Anthropology
Authors
Mike Ananny,Kate Crawford
Identifier
DOI:10.1177/1461444816676645
Abstract
Models for understanding and holding systems accountable have long rested upon ideals and logics of transparency. Being able to see a system is sometimes equated with being able to know how it works and govern it—a pattern that recurs in recent work about transparency and computational systems. But can "black boxes" ever be opened, and if so, would that ever be sufficient? In this article, we critically interrogate the ideal of transparency, trace some of its roots in scientific and sociotechnical epistemological cultures, and present 10 limitations to its application. We specifically focus on the inadequacy of transparency for understanding and governing algorithmic systems and sketch an alternative typology of algorithmic accountability grounded in constructive engagements with the limitations of transparency ideals.