Generalization
Ambiguity
Artificial neural network
Computer science
Generalization error
Ensemble learning
Cross-validation
Artificial intelligence
Machine learning
Scheme (mathematics)
Mathematics
Mathematical analysis
Programming language
Authors
Anders Krogh, Jesper Vedelsby
Abstract
Learning of continuous-valued functions using neural network ensembles (committees) can give improved accuracy, reliable estimation of the generalization error, and active learning. The ambiguity is defined as the variation of the outputs of ensemble members averaged over unlabeled data, so it quantifies the disagreement among the networks. It is discussed how to use the ambiguity in combination with cross-validation to give a reliable estimate of the ensemble generalization error, and how this type of ensemble cross-validation can sometimes improve performance. It is shown how to estimate the optimal weights of the ensemble members using unlabeled data. By a generalization of query by committee, it is finally shown how the ambiguity can be used to select new training data to be labeled in an active learning scheme.
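The abstract's central quantity is the ambiguity decomposition: for a weighted ensemble, the squared error of the ensemble output equals the weighted average of the members' squared errors minus the ambiguity (the weighted variance of member outputs around the ensemble output). The following is a minimal sketch of that identity for a single input; the member predictions, weights, and target value are made up for illustration and are not taken from the paper.

```python
def ensemble_decomposition(preds, weights, target):
    """For one input: preds[i] is member i's output; weights sum to 1.

    Returns (ensemble error, weighted member error, ambiguity),
    which satisfy: ensemble error = member error - ambiguity.
    """
    # Weighted ensemble output
    f_bar = sum(w * p for w, p in zip(weights, preds))
    # Weighted average of the members' squared errors
    e_bar = sum(w * (p - target) ** 2 for w, p in zip(weights, preds))
    # Ambiguity: weighted variance of member outputs around the ensemble output
    a_bar = sum(w * (p - f_bar) ** 2 for w, p in zip(weights, preds))
    # Squared error of the ensemble itself
    e_ens = (f_bar - target) ** 2
    return e_ens, e_bar, a_bar

# Hypothetical three-member ensemble on one input with target 1.0
preds = [1.2, 0.8, 1.1]
weights = [0.5, 0.3, 0.2]
e_ens, e_bar, a_bar = ensemble_decomposition(preds, weights, 1.0)
assert abs(e_ens - (e_bar - a_bar)) < 1e-12  # decomposition holds exactly
```

Because the ambiguity term depends only on the member outputs and not on the target, it can be estimated on unlabeled data, which is what makes the error estimation and active-learning uses described in the abstract possible.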