Gender bias
Classification
Visibility
Computer science
Field (mathematics)
Context (archaeology)
Inclusion (mineral)
Transparency (behavior)
Data science
Unintended consequences
Artificial intelligence
Psychology
Social psychology
Computer security
Political science
Paleontology
Physics
Mathematics
Law
Pure mathematics
Optics
Biology
Authors
Grazia Cecere, Clara Jean, Fabrice Le Guel, Matthieu Manant
Identifier
DOI:10.1016/j.techfore.2023.123204
Abstract
Artificial intelligence (AI) is a general-purpose technology used in many sectors. However, automated decision-making powered by AI algorithms can lead to unintended outcomes, especially in the context of online platforms. The lack of transparency surrounding AI algorithms and their categorization methods makes practical insights into the effective management of the risks associated with their use crucially important. We address these issues through two field tests aimed at mitigating biases in online science, technology, engineering, and mathematics (STEM) education-related ads targeting teenagers. We conducted online ad campaigns involving gender-unspecific, women-specific, and gender-neutral ads targeted at young social network users. Our findings show that including a gender-oriented message in the ad tends to alleviate algorithmic gender bias but also reduces overall ad visibility. Our research also shows that text length has a significant impact on ad visibility, and that gender-oriented messages influence how the ad is displayed across genders.
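To make the experimental logic described in the abstract concrete, the sketch below shows one way such campaign data could be analyzed: regressing impressions on the ad variant, viewer gender, their interaction, and text length. This is a minimal illustration only; the synthetic data, variable names, and Poisson specification are assumptions for exposition and are not the authors' actual estimation strategy.

```python
# Illustrative sketch: testing whether ad variant and viewer gender are
# associated with impression counts, in the spirit of the field tests
# described in the abstract. Data are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600

# Hypothetical campaign-level observations: ad variant, viewer gender,
# ad text length (characters), and impressions delivered by the platform.
df = pd.DataFrame({
    "variant": rng.choice(
        ["gender_unspecific", "women_specific", "gender_neutral"], n),
    "gender": rng.choice(["female", "male"], n),
    "text_length": rng.integers(40, 200, n),
})
df["impressions"] = rng.poisson(lam=50, size=n)  # placeholder outcome

# Poisson regression with a variant x gender interaction: the interaction
# terms capture whether a given ad variant is shown differentially by gender,
# while text_length captures the effect of message length on visibility.
model = smf.poisson(
    "impressions ~ C(variant) * C(gender) + text_length", data=df).fit()
print(model.summary())
```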