Artificial intelligence
Pattern recognition (psychology)
Artificial neural network
Set (abstract data type)
Computer science
Image (mathematics)
Perspective (graphical)
Representation (politics)
Feature (linguistics)
Machine learning
Deep belief network
Deep learning
Contextual image classification
Process (computing)
Linguistics
Philosophy
Politics
Political science
Law
Programming language
Operating system
Authors
Zuowei Zhang, Zhunga Liu, Liangbo Ning, Arnaud Martin, Jiexuan Xiong
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems
[Institute of Electrical and Electronics Engineers]
Date: 2023-11-10
Pages: 1-14
Citations: 6
Identifier
DOI: 10.1109/tnnls.2023.3329712
Abstract
Quantification and reduction of uncertainty in deep-learning techniques have received much attention, but far less attention has been paid to characterizing the imprecision that such uncertainty causes. In some tasks, an imprecise result is preferable when one is unwilling or unable to bear the cost of an error. For this purpose, we investigate the representation of imprecision in deep learning (RIDL) based on the theory of belief functions (TBF). First, the labels of some training images are reconstructed using the learning mechanism of neural networks to characterize the imprecision in the training set. In this process, a label assignment rule is proposed to reassign one or more labels to each training image. When an image is assigned multiple labels, it may lie, from the feature perspective, in an overlapping region of different categories, or its original label may be wrong. Second, images with multiple labels are rechecked: the imprecision (multiple labels) caused by original labeling errors is corrected, while the imprecision caused by insufficient knowledge is retained. Images with multiple labels are called imprecise ones, and they are considered to belong to meta-categories, that is, unions of specific categories. Third, the deep network model is retrained on the reconstructed training set, and the test images are then classified. Finally, test images that cannot be distinguished among specific categories are assigned to meta-categories to characterize the imprecision in the results. Experiments with several well-known networks show that RIDL improves accuracy (AC) and reasonably represents imprecision in both the training and testing sets.
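To make the meta-category idea concrete, here is a minimal sketch of how an ambiguous prediction could be mapped to a union of categories. It is an illustration only: the paper's actual rule is derived from the theory of belief functions, which is not reproduced here, and the function `assign_labels` with its probability-gap threshold `gap` is a hypothetical stand-in.

```python
# A minimal sketch of the meta-category idea from the abstract.
# NOTE: the paper's real rule is built on the theory of belief functions
# (TBF); the probability-gap heuristic below is a hypothetical stand-in.
import numpy as np

def assign_labels(probs: np.ndarray, gap: float = 0.15) -> frozenset:
    """Return a singleton category, or a meta-category (a union of
    categories) when the classifier cannot clearly separate them.

    probs: 1-D array of class probabilities for one image.
    gap:   hypothetical threshold on the probability difference below
           which two classes are considered indistinguishable.
    """
    order = np.argsort(probs)[::-1]  # classes sorted by descending confidence
    labels = {int(order[0])}         # start with the top class
    # Absorb every class whose probability is within `gap` of the leader;
    # the result is an imprecise (multi-label) assignment.
    for k in order[1:]:
        if probs[order[0]] - probs[k] <= gap:
            labels.add(int(k))
        else:
            break
    return frozenset(labels)         # singleton or meta-category

# Example: an image near the boundary of classes 0 and 2.
print(assign_labels(np.array([0.45, 0.10, 0.40, 0.05])))  # frozenset({0, 2})
```

In the pipeline the abstract describes, this kind of imprecise assignment would appear in two places: when reconstructing training labels (before rechecking) and when labeling test images that the specific categories cannot distinguish.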