Transparency (behavior)
Diversity (cybernetics)
Computer science
Accountability
Business model
Trustworthiness
Government (linguistics)
Artificial intelligence
Risk analysis (engineering)
Business
Computer security
Marketing
Political science
Linguistics
Philosophy
Law
Authors
Dmitry Zhdanov, Sudip Bhattacharjee, Mikhail A. Bragin
Identifier
DOI: 10.1016/j.dss.2021.113715
Abstract
We present a formal approach to build and evaluate AI systems that include principles of Fairness, Accountability and Transparency (FAT), which are extremely important in various domains where AI models are used, yet their utilization in business settings is scant. We develop and instantiate a FAT-based framework with a privacy-constrained dataset and build a model to demonstrate the balance among these three dimensions. These principles are gaining prominence with higher awareness of privacy and fairness in business and society. Our results indicate that FAT can coexist in a well-designed system. Our contribution lies in presenting and evaluating a functional, FAT-based machine learning model in an affinity prediction scenario. Contrary to common belief, we show that explainable AI/ML systems need not have a major negative impact on predictive performance. Our approach is applicable in a variety of fields such as insurance, health diagnostics, government funds allocation and other business settings. Our work has broad policy implications as well, by making AI and AI-based decisions more ethical, less controversial, and hence, trustworthy. Our work contributes to emerging AI policy perspectives worldwide.