Computer science
Architecture
Scalability
Artificial intelligence
Machine learning
Boosting (machine learning)
Convolutional neural network
Artificial neural network
Distributed computing
Theoretical computer science
Computer engineering
Art
Database
Visual arts
Authors
Guangrun Wang, Changlin Li, Liuchun Yuan, Jiefeng Peng, Xiaoyu Xian, Xiaodan Liang, Xiaojun Chang, Liang Lin
Identifier
DOI: 10.1109/tpami.2023.3335261
Abstract
Neural Architecture Search (NAS), which aims to have machines design neural architectures automatically, has been considered a key step toward automatic machine learning. One notable NAS branch is weight-sharing NAS, which significantly improves search efficiency and allows NAS algorithms to run on ordinary computers. Despite high expectations, this category of methods suffers from low search effectiveness. By employing a generalization boundedness tool, we demonstrate that the devil behind this drawback is untrustworthy architecture rating caused by the oversized search space of possible architectures. To address this problem, we modularize a large search space into blocks with small search spaces and develop a family of models with the distilling neural architecture (DNA) techniques. These proposed models, namely the DNA family, are capable of resolving multiple dilemmas of weight-sharing NAS, such as scalability, efficiency, and multi-modal compatibility. Our proposed DNA models can rate all architecture candidates, as opposed to previous works that can only access a sub-search space using heuristic algorithms. Moreover, under a given computational complexity constraint, our method can seek architectures with different depths and widths. Extensive experimental evaluations show that our models achieve state-of-the-art top-1 accuracies of 78.9% and 83.6% on ImageNet for a mobile convolutional network and a small vision transformer, respectively. Additionally, we provide in-depth empirical analysis and insights into neural architecture rating.
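The block-wise rating idea in the abstract lends itself to a compact illustration. Below is a minimal Python/PyTorch sketch, not the paper's implementation: it assumes a pretrained teacher already split into sequential blocks, candidate operations already distilled to imitate their teacher blocks (the training loop is omitted), and cached teacher feature maps as block inputs. All names (rate_block_candidate, search_space_per_block, etc.) are hypothetical.

import torch
import torch.nn as nn

# Sketch of block-wise architecture rating via distillation: the search
# space is modularized into blocks, each candidate operation is scored by
# how well it imitates the corresponding teacher block, and an
# architecture's rating is the sum of its per-block imitation errors.

def rate_block_candidate(candidate: nn.Module,
                         teacher_block: nn.Module,
                         teacher_input: torch.Tensor) -> float:
    """Score one candidate operation for one block (lower is better).

    Both modules receive the *teacher's* input feature map, so blocks are
    decoupled and can be rated independently of one another.
    """
    with torch.no_grad():
        return nn.functional.mse_loss(candidate(teacher_input),
                                      teacher_block(teacher_input)).item()

def rate_all_blocks(search_space_per_block, teacher_blocks, teacher_inputs):
    """Return per-block scores for every candidate operation.

    Because block ratings are additive, scoring B blocks with K candidates
    each costs only B*K evaluations yet implicitly rates all K**B
    architectures in the product space, which is how the whole search
    space can be covered instead of a heuristically sampled subset.
    """
    block_scores = []
    for candidates, teacher_block, x in zip(search_space_per_block,
                                            teacher_blocks, teacher_inputs):
        block_scores.append({name: rate_block_candidate(op, teacher_block, x)
                             for name, op in candidates.items()})
    return block_scores

def best_architecture(block_scores):
    # With decoupled blocks, the globally best-rated architecture is just
    # the per-block argmin of the imitation errors.
    return [min(scores, key=scores.get) for scores in block_scores]

In the constrained setting the abstract mentions, the final per-block argmin would instead be a traversal that keeps only depth/width combinations meeting the computational budget; the additive per-block scores make such a traversal cheap.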