Computer science
Discriminative model
Convolutional neural network
Convolution (computer science)
Feature (linguistics)
Pattern recognition (psychology)
Channel (broadcasting)
Representation (politics)
Fuse (electrical)
Artificial intelligence
Convolutional code
Segmentation
Transformation (genetics)
Process (computing)
Calibration
Algorithm
Artificial neural network
Decoding methods
Mathematics
Law
Chemistry
Statistics
Philosophy
Engineering
Electrical engineering
Operating system
Gene
Politics
Biochemistry
Linguistics
Computer network
Political science
Authors
Jiangjiang Liu, Qibin Hou, Ming-Ming Cheng, Changhu Wang, Jiashi Feng
Identifier
DOI:10.1109/cvpr42600.2020.01011
Abstract
Recent advances in CNNs have mostly been devoted to designing more complex architectures to enhance their representation learning capacity. In this paper, we consider how to improve the basic convolutional feature transformation process of CNNs without tuning the model architectures. To this end, we present novel self-calibrated convolutions that explicitly expand the field-of-view of each convolutional layer through internal communication and hence enrich the output features. In particular, unlike standard convolutions, which fuse spatial and channel-wise information using small kernels (e.g., 3×3), self-calibrated convolutions adaptively build long-range spatial and inter-channel dependencies around each spatial location through a novel self-calibration operation. They thus help CNNs generate more discriminative representations by explicitly incorporating richer information. Our self-calibrated convolution design is simple and generic, and can easily be applied to augment standard convolutional layers without introducing extra parameters or complexity. Extensive experiments demonstrate that when self-calibrated convolutions are applied to different backbones, our networks significantly improve upon the baseline models in a variety of vision tasks, including image recognition, object detection, instance segmentation, and keypoint detection, without any change to the network architectures. We hope this work provides a promising direction for future research on designing novel convolutional feature transformations that improve convolutional networks. Code is available on the project page.
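The abstract describes the self-calibration operation only at a high level (internal communication, long-range spatial and inter-channel dependencies built around each spatial location). The following is a minimal PyTorch-style sketch of one reading of that idea; the module name `SelfCalibratedConv2d`, the channel splitting, the kernel sizes, and the pooling ratio `r` are illustrative assumptions and not the authors' reference implementation, which should be taken from the released code on the project page.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfCalibratedConv2d(nn.Module):
    """Sketch of a self-calibrated convolution (hypothetical layout).

    Half of the channels pass through a plain 3x3 convolution; the other
    half is modulated by a gate computed from a down-sampled view of the
    input, which enlarges the effective field-of-view of the layer.
    """

    def __init__(self, channels: int, r: int = 4):
        super().__init__()
        half = channels // 2
        self.conv_plain = nn.Conv2d(half, half, 3, padding=1, bias=False)
        self.conv_context = nn.Conv2d(half, half, 3, padding=1, bias=False)
        self.conv_value = nn.Conv2d(half, half, 3, padding=1, bias=False)
        self.conv_out = nn.Conv2d(half, half, 3, padding=1, bias=False)
        self.r = r  # down-sampling ratio for the context branch (assumed)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = torch.chunk(x, 2, dim=1)

        # Plain half: an ordinary small-kernel convolution.
        y1 = self.conv_plain(x1)

        # Calibrated half: collect context at 1/r resolution, upsample it,
        # and use it as a per-location sigmoid gate on the convolved features.
        ctx = F.avg_pool2d(x2, kernel_size=self.r, stride=self.r)
        ctx = self.conv_context(ctx)
        ctx = F.interpolate(ctx, size=x2.shape[-2:], mode="bilinear",
                            align_corners=False)
        gate = torch.sigmoid(x2 + ctx)
        y2 = self.conv_out(self.conv_value(x2) * gate)

        return torch.cat([y1, y2], dim=1)
```

In this sketch, the four half-channel 3×3 convolutions together cost roughly as many parameters as a single full-channel 3×3 convolution, consistent with the abstract's claim that standard layers can be augmented without extra parameters; for example, `SelfCalibratedConv2d(64)` could stand in for a 64-channel 3×3 convolution inside a residual block.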