Trustworthiness
Perspective (graphical)
Computer science
Point (geometry)
Heuristic
Epistemology
Psychology
Artificial intelligence
Internet privacy
Geometry
Mathematics
Philosophy
Authors
Peter R. Lewis, Stephen Marsh
Identifier
DOI:10.1016/j.cogsys.2021.11.001
Abstract
The trustworthiness (or otherwise) of AI has been much in discussion of late, not least because of the recent publication of the EU Guidelines for Trustworthy AI. Discussions range from how we might make people trust AI to AI being not possible to trust, with many points in between. In this article, we question whether or not these discussions somewhat miss the point, which is that people are going ahead and basically doing their own thing anyway, and that we should probably help them. Acknowledging that trust is a heuristic that is widely used by humans in a range of situations, we lean on the literature concerning how humans make trust decisions to arrive at a general model of how people might consider trust in AI (and other artefacts) for specific purposes in a human world. We then use a series of thought experiments and observations of trust and trustworthiness to illustrate the use of the model in taking a functionalist perspective on trust decisions, including with machines. Our hope is that this forms a useful basis upon which to develop intelligent systems in a way that considers how and when people may trust them, and in doing so empowers people to make better trust decisions about AI.