Keywords
Interpretability
Computer science
Trustworthiness
Key (lock)
Certification
Diversity (cybernetics)
Reliability (semiconductor)
Feature engineering
Artificial intelligence
Index (typesetting)
Deep learning
Computer security
Power (physics)
Physics
Quantum mechanics
World Wide Web
Political science
Law
Authors
Xuan Li, Peijun Ye, Bai Li, Zhongmin Liu, Longbing Cao, Fei-Yue Wang
Source
Journal: IEEE Intelligent Systems
[Institute of Electrical and Electronics Engineers]
Date: 2022-07-01
Volume/Issue: 37 (4): 18-26
Citations: 121
Identifier
DOI: 10.1109/mis.2022.3197950
Abstract
The rapid development of artificial intelligence (AI) has produced a variety of state-of-the-art models and methods that rely on network architectures and feature engineering. However, some AI approaches achieve highly accurate results only at the expense of interpretability and reliability. These problems can easily lead to poor user experiences, lower levels of trust, and systematic or even catastrophic risks. This article introduces the theoretical framework of scenarios engineering for building trustworthy AI techniques. We propose six key dimensions, including intelligence and index, calibration and certification, and verification and validation, to achieve more robust and trustworthy AI, and discuss open issues, future research directions, and applications along this line.
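The abstract names the six dimensions only as three pairs: intelligence and index (I&I), calibration and certification (C&C), and verification and validation (V&V). It does not say how they are operationalized, so the sketch below is purely illustrative and not from the paper: a minimal Python checklist, with all identifiers hypothetical, that records a pass/fail status per dimension and flags a system as trustworthy only when all six pass.

```python
from dataclasses import dataclass, field

# Illustrative sketch only. The paper's abstract names three paired
# dimensions of scenarios engineering: Intelligence & Index (I&I),
# Calibration & Certification (C&C), Verification & Validation (V&V).
# How they are assessed is not specified there; this checklist is a
# hypothetical way one might track them for a candidate AI system.

DIMENSIONS = (
    "intelligence", "index",          # I&I
    "calibration", "certification",   # C&C
    "verification", "validation",     # V&V
)

@dataclass
class ScenarioAssessment:
    """Pass/fail status of each of the six dimensions for one system."""
    system: str
    status: dict = field(default_factory=lambda: {d: False for d in DIMENSIONS})

    def mark(self, dimension: str, passed: bool) -> None:
        # Reject anything outside the six named dimensions.
        if dimension not in self.status:
            raise KeyError(f"unknown dimension: {dimension!r}")
        self.status[dimension] = passed

    def trustworthy(self) -> bool:
        # Flag a system as trustworthy only if all six dimensions pass.
        return all(self.status.values())

if __name__ == "__main__":
    assessment = ScenarioAssessment("demo-model")
    for d in DIMENSIONS:
        assessment.mark(d, True)
    print(assessment.trustworthy())  # True
```

The all-or-nothing gate in trustworthy() is one possible design choice; a weighted or graded scoring over the six dimensions would be an equally plausible reading of the framework.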