Keywords
Computer science, Metric (unit), Network architecture, Proxy (statistics), Artificial neural network, Computation, Evolutionary algorithm, Architecture, Performance metric, Set (abstract data type), Machine learning, Data mining, Artificial intelligence, Algorithm, Art, Operations management, Management, Economics, Visual arts, Programming language, Computer security
Authors
Ngoc Hoang Luong, Quan Phan, An Vo, Tan Ngoc Pham, Dzung Tri Bui
Identifier
DOI:10.1016/j.ins.2023.119856
Abstract
Multi-Objective Evolutionary Neural Architecture Search (MOENAS) methods employ evolutionary algorithms to approximate a set of architectures representing optimal trade-offs between network performance and complexity. Directly estimating network performance via error rates or losses incurs long runtimes due to the computationally expensive network training procedure. Instead, low-cost metrics that require no network training have been proposed as proxies for network performance. However, these metrics might exhibit inconsistent correlations with network performance across different search spaces. The influences of training-based and training-free metrics on the effectiveness and efficiency of MOENAS remain under-explored. We introduce the Enhanced Training-Free MOENAS (E-TF-MOENAS), which employs the widely-used NSGA-II as the search algorithm and optimizes multiple training-free performance metrics as separate objectives. Experiments on NAS-Bench-101 and NAS-Bench-201 show that E-TF-MOENAS outperforms training-free methods that use a single training-free performance metric and obtains results comparable to those of training-based methods at approximately 30 times lower computation cost. E-TF-MOENAS obtains architectures in NAS-Bench-201 with state-of-the-art mean accuracies of 94.37%, 73.50%, and 46.62% for CIFAR-10, CIFAR-100, and ImageNet16-120, respectively, in under 3 GPU hours. It is beneficial to utilize multiple training-free proxy metrics simultaneously, and E-TF-MOENAS provides a convenient framework for building such an efficient NAS approach. The source code can be found at https://github.com/ELO-Lab/E-TF-MOENAS.
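To make the core mechanism concrete, below is a minimal sketch of the idea the abstract describes: treating several training-free proxy metrics as separate objectives and keeping only the Pareto-optimal (non-dominated) candidates, which is the survival criterion underlying NSGA-II. The architecture encoding and the randomly sampled proxy scores here are hypothetical placeholders, not the authors' implementation; the actual method evaluates real training-free metrics on untrained networks and runs a full NSGA-II loop (see the repository above).

```python
import random

def dominates(f1, f2):
    """True if objective vector f1 Pareto-dominates f2 (all objectives maximized)."""
    return all(a >= b for a, b in zip(f1, f2)) and any(a > b for a, b in zip(f1, f2))

def non_dominated_front(population):
    """Return the (arch, objectives) pairs not dominated by any other member."""
    front = []
    for i, (_, fi) in enumerate(population):
        if not any(dominates(fj, fi) for j, (_, fj) in enumerate(population) if j != i):
            front.append(population[i])
    return front

if __name__ == "__main__":
    random.seed(0)
    # Hypothetical population: 20 architectures, each encoded as a tuple of
    # operation ids and scored by two training-free proxies (both maximized).
    # In the real method these scores come from metrics computed on an
    # untrained network, not from random sampling.
    population = []
    for _ in range(20):
        arch = tuple(random.randrange(5) for _ in range(6))  # toy encoding
        objectives = (random.random(), random.random())      # proxy_1, proxy_2
        population.append((arch, objectives))
    for arch, objs in non_dominated_front(population):
        print(arch, [round(o, 3) for o in objs])
```

The design point illustrated here is that no weighting or aggregation of the proxy metrics is needed: Pareto dominance lets candidates that excel on different proxies survive side by side, which is why optimizing the metrics as separate objectives can be more robust than relying on any single training-free proxy.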