Authors
Gao Huang, Zhuang Liu, Geoff Pleiss, Laurens van der Maaten, Kilian Q. Weinberger
Identifier
DOI: 10.1109/TPAMI.2019.2918284
Abstract
Recent work has shown that convolutional networks can be substantially deeper, more accurate, and more efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with $L$ layers have $L$ connections—one between each layer and its subsequent layer—our network has $\frac{L(L+1)}{2}$ direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, encourage feature reuse, and substantially improve parameter efficiency. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, while requiring fewer parameters and less computation to achieve high performance.
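The connectivity pattern the abstract describes can be sketched in a few lines: every layer takes the concatenation of all preceding feature-maps as input, so with growth rate $k$ the channel count grows by $k$ per layer. The following is a minimal NumPy sketch under stated assumptions — the random linear map plus ReLU is a hypothetical stand-in for the paper's BN-ReLU-Conv composite function, not the actual DenseNet layer.

```python
import numpy as np

def dense_block(x, num_layers, growth_rate, rng):
    """Sketch of DenseNet-style dense connectivity.

    Each layer receives the concatenation of the feature-maps of ALL
    preceding layers; a random linear map + ReLU stands in for the
    paper's BN-ReLU-Conv composite (hypothetical simplification).
    """
    features = [x]  # feature-maps of the input and all preceding layers
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=-1)        # reuse every earlier output
        w = rng.standard_normal((inp.shape[-1], growth_rate))
        features.append(np.maximum(inp @ w, 0.0))      # new layer adds k channels
    return np.concatenate(features, axis=-1)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16))                       # batch of 4, 16 input channels
y = dense_block(x, num_layers=3, growth_rate=12, rng=rng)
print(y.shape)                                         # (4, 16 + 3*12) = (4, 52)
```

Because layer $\ell$ is wired to all $\ell$ earlier feature-maps, a block with $L$ layers has $\frac{L(L+1)}{2}$ direct connections rather than $L$, which is exactly the count stated in the abstract.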