Transparency (behavior)
Computer science
Artificial intelligence
Affect (linguistics)
Human intelligence
Machine learning
Reliability (semiconductor)
Cognitive psychology
Psychology
Computer security
Communication
Quantum mechanics
Physics
Power (physics)
Identifier
DOI:10.1287/isre.2019.0493
Abstract
Emotion artificial intelligence (AI) is shown to vary systematically in its ability to accurately identify emotions, and this variation creates potential biases. In this paper, we conduct an experiment involving three commercially available emotion AI systems and a group of human labelers tasked with identifying emotions from two image data sets. The study focuses on the alignment between facial expressions and the emotion labels assigned by both the AI and humans. Importantly, human labelers are given the AI's scores and informed about its algorithmic fairness measures. This paper presents several key findings. First, the labelers' scores are affected by the emotion AI scores, consistent with the anchoring effect. Second, information transparency about the AI's fairness does not uniformly affect human labeling across different emotions; it can even increase inconsistencies in human labeling. Third, significant inconsistencies in scoring among the different emotion AI models cast doubt on their reliability. Overall, the study highlights the limitations of individual decision making and of information transparency about algorithmic fairness measures as remedies for algorithmic unfairness. These findings underscore the complexity of integrating emotion AI into practice and emphasize the need for careful policies governing its use.
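The inter-model inconsistency the abstract reports can be quantified in several ways; one simple approach is the mean pairwise absolute difference between the scores different models assign to the same images. The sketch below illustrates that idea. The model names and score values are purely hypothetical placeholders, not data from the paper, and this metric is an assumption about how such disagreement might be measured, not the authors' method.

```python
# Hypothetical sketch: one way to quantify score inconsistency among
# emotion AI models. Model names and scores are invented for illustration.
from itertools import combinations

def mean_pairwise_gap(scores_by_model):
    """Average absolute score difference over all model pairs,
    averaged across images (lower = more consistent)."""
    gaps = []
    for a, b in combinations(scores_by_model, 2):
        per_image = [abs(x - y)
                     for x, y in zip(scores_by_model[a], scores_by_model[b])]
        gaps.append(sum(per_image) / len(per_image))
    return sum(gaps) / len(gaps)

# Illustrative "happiness" scores (0-1) from three models on four images.
scores = {
    "model_a": [0.90, 0.10, 0.60, 0.30],
    "model_b": [0.70, 0.40, 0.20, 0.50],
    "model_c": [0.95, 0.05, 0.80, 0.10],
}
print(round(mean_pairwise_gap(scores), 3))  # → 0.267
```

A gap of 0.267 on a 0-1 scale would indicate substantial disagreement; per-emotion breakdowns of such a measure would show whether the inconsistency the study observes concentrates in particular emotions.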