Bayesian optimization
Computer science
Probabilistic logic
Global optimization
Artificial intelligence
Scalability
Bayesian probability
Optimization problem
Machine learning
Reinforcement learning
Mathematical optimization
Algorithm
Mathematics
Database
Authors
David Eriksson, Michael Pearce, Jacob R. Gardner, Ryan Turner, Matthias Poloczek
Source
Journal: Cornell University - arXiv
Date: 2019-01-01
Citations: 110
Identifiers
DOI: 10.48550/arxiv.1910.01739
Abstract
Bayesian optimization has recently emerged as a popular method for the sample-efficient optimization of expensive black-box functions. However, the application to high-dimensional problems with several thousand observations remains challenging, and on difficult problems Bayesian optimization is often not competitive with other paradigms. In this paper we take the view that this is due to the implicit homogeneity of the global probabilistic models and an overemphasized exploration that results from global acquisition. This motivates the design of a local probabilistic approach for global optimization of large-scale high-dimensional problems. We propose the $\texttt{TuRBO}$ algorithm that fits a collection of local models and performs a principled global allocation of samples across these models via an implicit bandit approach. A comprehensive evaluation demonstrates that $\texttt{TuRBO}$ outperforms state-of-the-art methods from machine learning and operations research on problems spanning reinforcement learning, robotics, and the natural sciences.
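The abstract describes TuRBO only at a high level: a collection of local probabilistic models, each confined to its own region of the search space, with an implicit bandit allocating evaluations across them. The sketch below is a simplified, hypothetical illustration of that idea, not the authors' implementation. It uses scikit-learn Gaussian processes as the local models, axis-aligned hypercube trust regions centered at each region's incumbent, and Thompson sampling as the bandit step; the toy objective, region count, and expansion/shrinkage thresholds are all assumptions, and the paper's restart logic and length-scale-shaped regions are omitted.

```python
# Simplified TuRBO-style loop (illustrative sketch, not the reference code).
# Assumptions: toy objective on [0, 1]^dim, hypercube trust regions,
# scikit-learn GPs as local models, Thompson sampling across regions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
dim, n_regions, n_init, n_cand, budget = 5, 3, 10, 200, 60

def objective(x):                       # assumed toy black-box function
    return -np.sum((x - 0.5) ** 2)      # maximum at x = 0.5

class TrustRegion:
    def __init__(self):
        self.X = rng.uniform(0, 1, (n_init, dim))
        self.y = np.array([objective(x) for x in self.X])
        self.length = 0.4               # side length of the local region
        self.succ = self.fail = 0

    def propose(self):
        """Fit a local GP and Thompson-sample candidates inside the region."""
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(self.X, self.y)
        center = self.X[np.argmax(self.y)]
        lb = np.clip(center - self.length / 2, 0, 1)
        ub = np.clip(center + self.length / 2, 0, 1)
        cand = rng.uniform(lb, ub, (n_cand, dim))
        seed = rng.integers(10**6)
        sample = gp.sample_y(cand, n_samples=1, random_state=seed).ravel()
        best = np.argmax(sample)
        return cand[best], sample[best]

    def update(self, x, y):
        """Record the new observation and expand/shrink the trust region."""
        improved = y > self.y.max()
        self.X = np.vstack([self.X, x])
        self.y = np.append(self.y, y)
        self.succ, self.fail = (self.succ + 1, 0) if improved else (0, self.fail + 1)
        if self.succ >= 3:
            self.length, self.succ = min(2 * self.length, 1.0), 0
        if self.fail >= 3:
            self.length, self.fail = max(self.length / 2, 1e-2), 0

regions = [TrustRegion() for _ in range(n_regions)]
for _ in range(budget):
    # Implicit bandit: each region posts its best Thompson sample and the
    # region with the highest sample receives the next function evaluation.
    proposals = [tr.propose() for tr in regions]
    i = int(np.argmax([value for _, value in proposals]))
    x_next, _ = proposals[i]
    regions[i].update(x_next, objective(x_next))

print("best value found:", max(tr.y.max() for tr in regions))
```

The bandit step is "implicit" in the sense that no separate index is maintained: each trust region simply reports the best value drawn from its local posterior, and whichever region reports the highest draw wins the next evaluation, so sampling effort naturally concentrates on the most promising local models.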