Underpinning
Accountability
Transparency (behavior)
Corporate governance
Key (lock)
Rendering (computer graphics)
Context (archaeology)
Sociology
Function (biology)
Computer science
Epistemology
Public relations
Political science
Economics
Law
Artificial intelligence
Management
Computer security
Engineering
Biology
Philosophy
Civil engineering
Paleontology
Evolutionary biology
Identifier
DOI:10.1177/02673231211028376
Abstract
The algorithms underpinning many everyday communication processes are now complex enough that rendering them explainable has become a key governance objective. This article examines the question of 'who should be required to explain what, to whom, in platform environments'. By working with algorithm designers and using design methods to extrapolate existing capacities to explain algorithmic functioning, the article discusses the power relationships underpinning explanation of algorithmic function. Reviewing how key concepts of transparency and accountability connect with explainability, the paper argues that reliance on explainability as a governance mechanism can generate a dangerous paradox which legitimates increased reliance on programmable infrastructure, as expert stakeholders are reassured by their ability to perform or receive explanations, while displacing responsibility for understandings of social context and definitions of public interest.