Keywords: Social welfare function; Social planner; Disadvantaged group; Equity (law); Computer science; Social welfare; Welfare; Function (biology); Social equality; Artificial intelligence; Public economics; Economics; Machine learning; Algorithm; Microeconomics; Political science; Economic growth; Jurisprudence; Evolutionary biology; Biology; Market economy
Identifier
DOI: 10.1145/3219166.3219236
Abstract
Social scientists have long been interested in discrimination and other inherent social inequities, and as such have developed models to evaluate policies through the dual lenses of efficiency and equity. More recently, computer scientists have illustrated how algorithms in many domains inherit and (sometimes inadvertently) bake in these same human biases and inequities. In this talk, I attempt to bring these two strands together: I embed concerns about algorithmic bias within a broader welfare economics framework. Instead of taking the data as given, the framework begins with a model of the underlying social phenomena and their accompanying inequities. It then posits a social welfare function, where the social planner cares about both efficiency and equity. In particular, she places greater weight on equity than individual algorithm designers (firms or citizens) do. Intrinsic to this approach is that the social planner's preferences imply the desired properties of the algorithm: the fairness of a given algorithm is not a primitive; instead, it is derived from the welfare of the outcomes it engenders. Several pieces of conventional wisdom do not hold true in this framework. For example, "blinding the algorithm" to variables such as race generally reduces welfare, even for the disadvantaged group. At the other extreme, I characterize situations where apparently fair algorithms can drastically increase inequities. Overall, I argue that it would be beneficial to model fairness and algorithmic bias more holistically, including both a generative model of the underlying social phenomena and a description of a global welfare function.
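To make the abstract's framework concrete, one minimal way to write down such an equity-weighted social welfare function is sketched below. The notation (a decision rule $d$, individual utilities $u_i$, groups $A$ and $B$, and the equity weight $\lambda$) is an illustrative assumption, not the talk's own formalism:

% Illustrative sketch only: the additive efficiency-minus-inequity form
% and all symbols are assumptions, not the talk's exact model.
\[
W(d) \;=\; \underbrace{\sum_{i} u_i(d)}_{\text{efficiency}}
\;-\; \lambda\,\underbrace{\bigl|\,\bar{u}_A(d) - \bar{u}_B(d)\,\bigr|}_{\text{inequity between groups } A,\,B},
\qquad \lambda_{\text{planner}} > \lambda_{\text{designer}} \ge 0,
\]

where $d$ is a decision rule (an algorithm), $u_i(d)$ is individual $i$'s utility under it, and $\bar{u}_A(d)$, $\bar{u}_B(d)$ are group-average utilities. Under such a reading, fairness is derived rather than primitive: the planner simply ranks algorithms by $W(d)$. "Blinding" then corresponds to restricting $d$ to rules that ignore group membership, a constraint on the feasible set that can only weakly lower the attainable maximum of $W$, consistent with the abstract's claim that blinding generally reduces welfare.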