Computer science
Diversity (cybernetics)
Convergence (economics)
Construct (Python library)
Artificial neural network
Training (meteorology)
Artificial intelligence
Deep learning
Representation (politics)
Point (geometry)
Evolutionary algorithm
Computer network
Physics
Law
Economics
Meteorology
Geometry
Politics
Economic growth
Mathematics
Political science
Authors
Dayu Tan,Wei Zhong,Xin Peng,Qiang Wang,Vladimir Mahalec
Source
Journal: IEEE Transactions on Cognitive and Developmental Systems
[Institute of Electrical and Electronics Engineers]
Date: 2022-03-01
Volume/Issue: 14 (1): 102-115
Citations: 3
Identifiers
DOI: 10.1109/tcds.2020.3017100
Abstract
Deep neural networks have been scaled up to thousands of layers with the intent of improving their accuracy. Unfortunately, beyond a certain point, doubling the number of layers yields only minor improvements while the training difficulties increase substantially. In this article, we present an approach for constructing high-accuracy deep evolutionary networks and training them by activating and freezing dense networks (AFNets). The activating-and-freezing strategy reduces the test classification error and the training time required for deeper dense networks. We activate the layers that are being trained and construct a freezing box to freeze the idle and pretrained network layers in order to minimize memory consumption. Training is relatively slow in the early stage because many layers are activated; as the epochs progress, training becomes faster and faster since fewer and fewer layers remain active. Our method improves convergence to optimal performance within a limited number of epochs. Comprehensive experiments on a variety of data sets show that the proposed model achieves better performance than other state-of-the-art network models.
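To make the activate-and-freeze idea concrete, here is a minimal PyTorch sketch of a training loop in which earlier blocks are progressively frozen so that fewer layers stay active as epochs increase. The `TinyDenseNet` model, the `set_block_state` helper, and the freeze-one-block-every-two-epochs schedule are hypothetical illustrations of the general technique, not the authors' exact AFNet architecture, freezing box, or schedule.

```python
# Sketch of an "activate and freeze" loop: frozen blocks stop receiving
# gradients, so each training step touches fewer and fewer parameters.
# All specific names and the schedule are illustrative assumptions.
import torch
import torch.nn as nn

class TinyDenseNet(nn.Module):
    """Toy stand-in for a deep dense network: a stack of fully connected blocks."""
    def __init__(self, width=64, depth=4, num_classes=10):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(width, width), nn.ReLU()) for _ in range(depth)
        )
        self.head = nn.Linear(width, num_classes)

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        return self.head(x)

def set_block_state(block, active):
    """Activate (train) or freeze a block: frozen parameters get no gradients."""
    for p in block.parameters():
        p.requires_grad = active

model = TinyDenseNet()
criterion = nn.CrossEntropyLoss()

# Synthetic batch so the loop is runnable end to end.
x = torch.randn(32, 64)
y = torch.randint(0, 10, (32,))

for epoch in range(8):
    # Hypothetical schedule: freeze one more of the earlier blocks every two
    # epochs, so later epochs update fewer layers and run cheaper.
    num_frozen = min(epoch // 2, len(model.blocks) - 1)
    for i, block in enumerate(model.blocks):
        set_block_state(block, active=(i >= num_frozen))

    # Rebuild the optimizer over the currently active parameters only.
    optimizer = torch.optim.SGD(
        (p for p in model.parameters() if p.requires_grad), lr=0.01
    )

    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: frozen blocks = {num_frozen}, loss = {loss.item():.4f}")
```

The key design point this sketch shows is that freezing is done by flipping `requires_grad` and rebuilding the optimizer over the remaining active parameters, which is one common way to cut per-step gradient and optimizer-state cost as more of the network is locked in place.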