Keywords
Generality, Pruning, Computer science, Artificial neural network, Architecture, Categorical variable, Artificial intelligence, Process (computing), Machine learning, Psychology, Art, Agronomy, Visual arts, Psychotherapist, Biology, Operating system
Authors
Xiawu Zheng,Chenyi Yang,Shaokun Zhang,Yan Wang,Baochang Zhang,Yongjian Wu,Yunsheng Wu,Ling Shao,Rongrong Ji
Identifier
DOI: 10.1007/s11263-023-01753-6
Abstract
Neural Architecture Search (NAS) has demonstrated state-of-the-art performance on various computer vision tasks. Despite this superior performance, the efficiency and generality of existing methods remain limited due to their high computational complexity and low generality. In this paper, we propose an efficient and unified NAS framework, termed DDPNAS, based on dynamic distribution pruning, which admits a theoretical bound on accuracy and efficiency. In particular, we first sample architectures from a joint categorical distribution. The search space is then dynamically pruned, and its distribution is updated every few epochs. With the proposed efficient network generation method, we directly obtain optimal neural architectures under given constraints, which is practical for on-device models across diverse search spaces and constraints. The architectures searched by our method achieve remarkable top-1 accuracies of 97.56% on CIFAR-10 and 77.2% on ImageNet (mobile settings), respectively, with the fastest search process, i.e., only 1.8 GPU hours on a Tesla V100. Code for searching and network generation is available at: https://openi.pcl.ac.cn/PCL_AutoML/XNAS.
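The search loop described in the abstract — sampling architectures from a joint categorical distribution, updating that distribution from the sampled architectures' performance, and pruning the search space every few epochs — can be sketched as follows. This is an illustrative toy, not the authors' DDPNAS implementation: the `evaluate` callback, the additive reward update, and the one-op-per-layer pruning schedule are all simplifying assumptions.

```python
import random

def ddp_sketch(ops, num_layers=3, epochs=9, prune_every=3, evaluate=None):
    """Toy sketch of search by dynamic distribution pruning (hypothetical,
    not the DDPNAS code). Each layer keeps a categorical distribution over
    candidate operations; architectures are sampled jointly, distributions
    are updated from sampled scores, and the lowest-probability op per
    layer is pruned every `prune_every` epochs."""
    # One categorical distribution (op -> probability) per layer.
    dists = [{op: 1.0 / len(ops) for op in ops} for _ in range(num_layers)]
    for epoch in range(1, epochs + 1):
        # Sample one architecture from the joint categorical distribution.
        arch = [random.choices(list(d), weights=list(d.values()))[0]
                for d in dists]
        # Proxy reward; a real search would train/evaluate the network here.
        score = evaluate(arch) if evaluate else random.random()
        # Reward the sampled ops, then renormalize each layer's distribution.
        for d, op in zip(dists, arch):
            d[op] += score
            total = sum(d.values())
            for k in d:
                d[k] /= total
        # Every few epochs, dynamically prune the search space.
        if epoch % prune_every == 0:
            for d in dists:
                if len(d) > 1:
                    d.pop(min(d, key=d.get))
                    total = sum(d.values())
                    for k in d:
                        d[k] /= total
    # The highest-probability op per layer is the searched architecture.
    return [max(d, key=d.get) for d in dists]
```

With 4 candidate ops and pruning at epochs 3, 6, and 9, each layer's distribution collapses to a single surviving operation, mirroring how pruning progressively narrows the space toward the final architecture.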