Keywords
Sensemaking
Computer science
Transparency (behavior)
Heuristics
Reliability
Normativity
Context (archaeology)
Human-computer interaction
Knowledge management
Computer security
Operating system
Biology
Law
Political science
Epistemology
Philosophy
Paleontology
Authors
Dong‐Hee Shin, Joon Soo Lim, Norita Ahmad, Mohammed Ibahrine
Source
Journal: AI & Society (Springer Nature)
Date: 2022-07-03
Volume/Issue: 39 (2): 477-490
Citations: 58
Identifier
DOI: 10.1007/s00146-022-01525-9
Abstract
A number of artificial intelligence (AI) systems have been proposed to assist users in identifying issues of algorithmic fairness and transparency. These systems draw on diverse bias-detection methods, including exploratory cues, interpretability tools, and algorithm disclosure. This study examines the design of such AI systems by probing how users make sense of fairness and transparency, concepts that are abstract in nature and lack established criteria for evaluation. Focusing on individual perceptions of fairness and transparency, the study examines the role of normative values on over-the-top (OTT) platforms by empirically testing their effects on sensemaking processes. A mixed-methods design combining qualitative and quantitative approaches was used to uncover user heuristics and to test the effects of these normative values on user acceptance. Collectively, a composite concept of transparent fairness emerged from users' sensemaking processes, playing a formative role through its underlying relations to perceived quality and credibility. From a sensemaking perspective, the study discusses the implications of transparent fairness for algorithmic media platforms, clarifying how and what should be done to make algorithmic media more trustworthy and reliable. Based on the findings, a theoretical model is developed that defines transparent fairness as an essential algorithmic attribute in the context of OTT platforms.