Computer science
Pareto principle
Architecture
Inference
Multi-objective optimization
Process (computing)
Artificial neural network
Machine learning
Artificial intelligence
Distributed computing
Mathematical optimization
Art
Mathematics
Visual arts
Operating system
Authors
Yong Guo, Yaofo Chen, Yin Zheng, Qi Chen, Peilin Zhao, Junzhou Huang, Jian Chen, Mingkui Tan
Identifier
DOI:10.1109/cvprw59228.2023.00219
Abstract
Designing feasible and effective architectures under diverse computational budgets, incurred by different applications/devices, is essential for deploying deep models in real-world applications. To achieve this goal, existing methods often perform an independent architecture search process for each target budget, which is inefficient and unnecessary. More critically, these independent search processes cannot share their learned knowledge (i.e., the distribution of good architectures) with each other and thus often yield limited search results. To address these issues, we propose a Pareto-aware Neural Architecture Generator (PNAG) which only needs to be trained once and dynamically produces the Pareto optimal architecture for any given budget via inference. To train our PNAG, we learn the whole Pareto frontier by jointly finding multiple Pareto optimal architectures under diverse budgets. Such a joint search algorithm not only greatly reduces the overall search cost but also improves the search results. Extensive experiments on three hardware platforms (i.e., mobile device, CPU, and GPU) show the superiority of our method over existing methods.
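To make the core idea concrete, below is a minimal sketch, in PyTorch, of a budget-conditioned architecture generator that maps a target budget to per-layer operation choices in a single forward pass. This is not the authors' implementation: the class name BudgetConditionedGenerator and the hyperparameters num_layers, num_ops, and hidden are illustrative assumptions, and the real PNAG is trained to match the Pareto frontier jointly across budgets rather than with the untrained network shown here.

```python
# A minimal sketch (not the authors' code) of a budget-conditioned
# architecture generator. All names and sizes are hypothetical.
import torch
import torch.nn as nn


class BudgetConditionedGenerator(nn.Module):
    """Maps a scalar computational budget to per-layer operation logits."""

    def __init__(self, num_layers: int = 20, num_ops: int = 7, hidden: int = 64):
        super().__init__()
        self.num_layers = num_layers
        self.num_ops = num_ops
        # Encode the budget (e.g., a normalized latency target) into a hidden
        # vector, then predict one operation distribution per layer.
        self.net = nn.Sequential(
            nn.Linear(1, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_layers * num_ops),
        )

    def forward(self, budget: torch.Tensor) -> torch.Tensor:
        # budget: shape (batch, 1); returns logits of shape (batch, layers, ops).
        return self.net(budget).view(-1, self.num_layers, self.num_ops)

    @torch.no_grad()
    def sample_architecture(self, budget: float) -> list:
        # At inference time, a single forward pass yields one architecture
        # (an operation index per layer) for the requested budget.
        logits = self.forward(torch.tensor([[budget]]))
        return logits.argmax(dim=-1).squeeze(0).tolist()


if __name__ == "__main__":
    gen = BudgetConditionedGenerator()
    # Different budgets map to architectures without rerunning any search.
    print(gen.sample_architecture(0.3))
    print(gen.sample_architecture(0.8))
```

In this sketch, changing the budget only changes the generator's input, which illustrates why a single trained generator can replace one independent search per target budget.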