Computer science
Artificial intelligence
Taxonomy (biology)
Data science
Commercialization
Applications of artificial intelligence
Work (physics)
Machine learning
Political science
Plant
Mechanical engineering
Biology
Engineering
Law
Authors
Ninareh Mehrabi,Fred Morstatter,Nripsuta Ani Saxena,Kristina Lerman,Aram Galstyan
Source
Journal: ACM Computing Surveys
[Association for Computing Machinery]
Date: 2021-07-13
Volume/Issue: 54 (6): 1-35
Citations: 1988
Abstract
With the widespread use of artificial intelligence (AI) systems and applications in our everyday lives, accounting for fairness has gained significant importance in the design and engineering of such systems. AI systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that these decisions do not reflect discriminatory behavior toward certain groups or populations. More recently, some work in traditional machine learning and deep learning has begun to address such challenges in different subdomains. With the commercialization of these systems, researchers are becoming more aware of the biases that these applications can contain and are attempting to address them. In this survey, we investigated different real-world applications that have shown biases in various ways, and we listed different sources of biases that can affect AI applications. We then created a taxonomy of the fairness definitions that machine learning researchers have proposed to avoid the existing bias in AI systems. In addition, we examined different domains and subdomains in AI, showing what researchers have observed with regard to unfair outcomes in state-of-the-art methods and the ways they have tried to address them. There are still many future directions and solutions that can be pursued to mitigate the problem of bias in AI systems. We hope that this survey will motivate researchers to tackle these issues in the near future by building on existing work in their respective fields.
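The abstract refers to a taxonomy of fairness definitions for machine learning. As an illustration only (not code from the survey), one widely used group-fairness notion is demographic (statistical) parity, which compares the positive-prediction rates across protected groups; the function name, variable names, and sample data below are assumptions for this sketch.

```python
# Illustrative sketch of one group-fairness definition: demographic parity.
# A classifier satisfies demographic parity when its positive-prediction rate
# is (approximately) equal across protected groups.

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred: iterable of 0/1 predictions
    group:  iterable of 0/1 protected-attribute values, same length as y_pred
    """
    rate = {}
    for g in (0, 1):
        # Predictions for members of group g
        preds = [p for p, a in zip(y_pred, group) if a == g]
        rate[g] = sum(preds) / len(preds)
    return abs(rate[0] - rate[1])

# Hypothetical example: a classifier that selects group 1 far more often.
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))  # prints 0.5
```

A value of 0 indicates equal selection rates; larger values indicate a larger disparity between groups. The survey's taxonomy covers many other definitions (e.g., equalized odds, individual fairness) that this single metric does not capture.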