Benchmark (surveying)
Axiom
Computer science
Artificial intelligence
Function (biology)
Gradient descent
Regression
Machine learning
Artificial neural network
Quality (concept)
Algorithm
Mathematics
Mathematical optimization
Pattern recognition (psychology)
Applied mathematics
Statistics
Epistemology
Biology
Philosophy
Evolutionary biology
Geodesy
Geography
Geometry
Authors
Tim Pearce, Mohamed Zaki, Alexandra Brintrup, Andy Neely
Source
Journal: Cornell University - arXiv
Date: 2018-01-01
Citations: 128
Identifier
DOI: 10.48550/arxiv.1802.07167
Abstract
This paper considers the generation of prediction intervals (PIs) by neural networks for quantifying uncertainty in regression tasks. It is axiomatic that high-quality PIs should be as narrow as possible, whilst capturing a specified portion of data. We derive a loss function directly from this axiom that requires no distributional assumption. We show how its form derives from a likelihood principle, that it can be used with gradient descent, and that model uncertainty is accounted for in ensembled form. Benchmark experiments show the method outperforms current state-of-the-art uncertainty quantification methods, reducing average PI width by over 10%.
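The abstract describes a loss that trades off interval width against coverage of a target fraction of the data. A minimal NumPy sketch of such a quality-driven loss is below: it penalizes the mean width of intervals that capture their point, plus a squared penalty when empirical coverage (PICP) falls below the target. The function name `qd_loss` and the penalty weight `lam` are illustrative assumptions, not the paper's exact formulation; note also that for gradient descent the paper's approach requires a smooth (e.g. sigmoid-based) relaxation of the hard capture indicator used here.

```python
import numpy as np

def qd_loss(y, y_lower, y_upper, coverage=0.95, lam=15.0):
    """Sketch of a quality-driven prediction-interval loss.

    Combines mean width of captured intervals with a penalty for
    under-coverage. `lam` (penalty weight) is a hypothetical setting;
    a trainable version would replace the hard indicator `captured`
    with a smooth sigmoid approximation so gradients can flow.
    """
    n = len(y)
    # k_i = 1 if point i lies inside its interval, else 0
    captured = ((y >= y_lower) & (y <= y_upper)).astype(float)
    picp = captured.mean()  # empirical coverage probability
    # mean width over captured points only (guard against divide-by-zero)
    mpiw_capt = np.sum((y_upper - y_lower) * captured) / max(captured.sum(), 1.0)
    alpha = 1.0 - coverage
    # squared hinge penalty, active only when coverage falls short of target
    penalty = lam * n / (alpha * (1.0 - alpha)) * max(0.0, coverage - picp) ** 2
    return mpiw_capt + penalty
```

When all points are captured the penalty term vanishes and the loss reduces to the mean interval width, so minimizing it pushes intervals to be as narrow as possible subject to the coverage constraint.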