Transparency (behavior)
Black box
Computer science
Artificial intelligence
Automation
Philosophy of technology
Obfuscation
Computer security
Data science
Philosophy of science
Engineering
Epistemology
Mechanical engineering
Philosophy
Author
Warren J. von Eschenbach
Identifier
DOI: 10.1007/s13347-021-00477-0
Abstract
With automation of routine decisions coupled with more intricate and complex information architecture operating this automation, concerns are increasing about the trustworthiness of these systems. These concerns are exacerbated by a class of artificial intelligence (AI) that uses deep learning (DL), an algorithmic system of deep neural networks, which on the whole remain opaque or hidden from human comprehension. This situation is commonly referred to as the black box problem in AI. Without understanding how AI reaches its conclusions, it is an open question to what extent we can trust these systems. The question of trust becomes more urgent as we delegate more and more decision-making to and increasingly rely on AI to safeguard significant human goods, such as security, healthcare, and safety. Models that “open the black box” by making the non-linear and complex decision process understandable by human observers are promising solutions to the black box problem in AI but are limited, at least in their current state, in their ability to make these processes less opaque to most observers. A philosophical analysis of trust will show why transparency is a necessary condition for trust and eventually for judging AI to be trustworthy. A more fruitful route for establishing trust in AI is to acknowledge that AI is situated within a socio-technical system that mediates trust, and by increasing the trustworthiness of these systems, we thereby increase trust in AI.