Computer Science
Artificial Intelligence
Bridging (networking)
Uncertainty Quantification
Machine Learning
Representation (politics)
Belief Revision
Evidential Reasoning
Deep Learning
Management Science
Business Decision Mapping
Decision Support System
Computer Network
Politics
Political Science
Law
Economics
Authors
Zhen Guo, Zelin Wan, Qisheng Zhang, Xujiang Zhao, Qi Zhang, Lance Kaplan, Audun Jøsang, Dong Hyun Jeong, Feng Chen, Jin-Hee Cho
Identifier
DOI: 10.1016/j.inffus.2023.101987
Abstract
An in-depth understanding of uncertainty is the first step toward making effective decisions under uncertainty. Machine/deep learning (ML/DL) has been widely leveraged to solve complex problems involving high-dimensional data. However, reasoning about and quantifying different types of uncertainty for effective decision-making has been far less explored in ML/DL than in other Artificial Intelligence (AI) domains. In particular, belief/evidence theories have been studied in knowledge representation and reasoning (KRR) since the 1960s as a way to reason about and measure uncertainty and thereby improve decision-making. Our literature review shows that only a few studies have leveraged this mature body of uncertainty research from belief/evidence theories in ML/DL to tackle complex problems under different types of uncertainty. This survey discusses the major belief theories, their core ideas about the causes and types of uncertainty and how to quantify them, and their applicability in ML/DL. In particular, we examine three main approaches that bring belief theories into Deep Neural Networks (DNNs), namely Evidential DNNs, Fuzzy DNNs, and Rough DNNs, in terms of the uncertainty causes and types they address, their quantification methods, and their applicability across problem domains. Based on this extensive survey, we discuss insights, lessons learned, and limitations of the current state of the art in bridging belief theories and ML/DL, along with future research directions, with the aim of helping researchers initiate research on uncertainty and decision-making.
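For orientation, and as a sketch rather than this paper's own method: Evidential DNNs of the kind the abstract refers to typically treat a network's non-negative outputs as evidence for a Dirichlet distribution over class probabilities and read belief masses and a vacuity-style uncertainty off it, following subjective logic. A minimal NumPy sketch, assuming the common choice of prior weight W = K (the number of classes); the function name and example evidence values are illustrative:

import numpy as np

def evidential_opinion(evidence, W=None):
    """Map non-negative per-class evidence to a subjective-logic
    multinomial opinion: belief masses, uncertainty, and expected
    class probabilities, via Dirichlet parameters alpha_k = e_k + W/K."""
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.size
    W = K if W is None else W        # non-informative prior weight (assumed W = K)
    alpha = evidence + W / K         # Dirichlet parameters
    S = alpha.sum()                  # Dirichlet strength = sum(evidence) + W
    belief = evidence / S            # per-class belief mass
    uncertainty = W / S              # vacuity: large when total evidence is scarce
    prob = alpha / S                 # expected class probabilities (Dirichlet mean)
    return belief, uncertainty, prob

# Weak, uniform evidence over 3 classes -> high vacuity (u = 0.5)
print(evidential_opinion([1.0, 1.0, 1.0]))

# Strong evidence for class 0 -> low vacuity (u ~= 0.067)
print(evidential_opinion([40.0, 1.0, 1.0]))

Belief masses and the uncertainty sum to one, so this single vacuity scalar is one quantity such models can threshold to flag predictions backed by too little evidence.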