Hyperparameter
Acceleration
Computer science
Artificial intelligence
Physics
Classical mechanics
Authors
Jia Mian Tan, Haoran Liao, Wei Liu, Changjun Fan, Jincai Huang, Bai Li, Junchi Yan
Source
Journal: Mathematical Biosciences and Engineering
[American Institute of Mathematical Sciences]
Date: 2024-01-01
Volume/Issue: 21 (6): 6289-6335
Abstract
Hyperparameter optimization (HPO) has evolved into a well-established research topic over the decades. With the success and wide application of deep learning, HPO has garnered increased attention, particularly within the realm of machine learning model training and inference. The primary objective is to mitigate the challenges of manual hyperparameter tuning, which can be ad hoc and reliant on human expertise, and which consequently hinders reproducibility while inflating deployment costs. Recognizing the growing significance of HPO, this paper surveyed classical HPO methods, approaches for accelerating the optimization process, HPO in an online setting (dynamic algorithm configuration, DAC), and HPO when there is more than one objective to optimize (multi-objective HPO). Acceleration strategies were categorized into multi-fidelity, bandit-based, and early-stopping methods; DAC algorithms encompassed gradient-based, population-based, and reinforcement learning-based methods; multi-objective HPO can be approached via scalarization, metaheuristics, and model-based algorithms tailored for multi-objective situations. A tabulated overview of popular frameworks and tools for HPO was provided, catering to the interests of practitioners.
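As a concrete illustration of the bandit-based acceleration family the abstract mentions, below is a minimal sketch of successive halving, the building block behind methods such as Hyperband. This is a toy under stated assumptions, not code from the surveyed paper: the candidate sampler `sample_config`, the objective `train_and_eval`, and its budget semantics are hypothetical placeholders.

```python
import random

def successive_halving(sample_config, train_and_eval,
                       n_configs=27, min_budget=1, eta=3):
    """Evaluate many configurations on a small budget, keep the best
    1/eta fraction, and repeat with eta times the budget until one
    configuration survives. Lower score = better (e.g., val. loss)."""
    configs = [sample_config() for _ in range(n_configs)]
    budget = min_budget
    while len(configs) > 1:
        # Score every surviving configuration at the current budget.
        scores = [train_and_eval(cfg, budget) for cfg in configs]
        ranked = sorted(zip(scores, range(len(configs))))
        keep = max(1, len(configs) // eta)
        configs = [configs[i] for _, i in ranked[:keep]]
        budget *= eta
    return configs[0]

# Hypothetical usage: tune one learning-rate hyperparameter on a toy
# objective whose evaluation noise shrinks as the budget grows.
if __name__ == "__main__":
    sample = lambda: {"lr": 10 ** random.uniform(-4, 0)}
    def toy_eval(cfg, budget):
        noise = random.gauss(0, 1.0 / budget)  # cheaper = noisier
        return (cfg["lr"] - 1e-2) ** 2 + noise  # minimized near lr=1e-2
    print("best config:", successive_halving(sample, toy_eval))
```

The design choice here is the bandit trade-off: many configurations get a cheap, noisy look, and only the survivors earn larger training budgets, which is what makes such schemes cheaper than evaluating every candidate at full fidelity.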