Computer science
Pooling
Artificial intelligence
Convolutional neural network
Convolution (computer science)
Transformation (genetics)
Segmentation
Coding (set theory)
Deep learning
Computer vision
Pattern recognition (psychology)
Object detection
Geometric transformation
Artificial neural network
Image (mathematics)
Gene
Biochemistry
Set (abstract data type)
Chemistry
Programming language
Authors
Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, Yichen Wei
Abstract
Convolutional neural networks (CNNs) are inherently limited to model geometric transformations due to the fixed geometric structures in their building modules. In this work, we introduce two new modules to enhance the transformation modeling capability of CNNs, namely, deformable convolution and deformable RoI pooling. Both are based on the idea of augmenting the spatial sampling locations in the modules with additional offsets and learning the offsets from the target tasks, without additional supervision. The new modules can readily replace their plain counterparts in existing CNNs and can be easily trained end-to-end by standard back-propagation, giving rise to deformable convolutional networks. Extensive experiments validate the performance of our approach. For the first time, we show that learning dense spatial transformation in deep CNNs is effective for sophisticated vision tasks such as object detection and semantic segmentation. The code is released at https://github.com/msracver/Deformable-ConvNets.
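The core idea of the abstract is that each sampling location of a convolution kernel is shifted by a learned 2D offset, with fractional offsets handled by bilinear interpolation. The following is a minimal single-channel NumPy sketch of that sampling scheme, not the authors' released implementation; here the offsets are passed in directly, whereas in the paper they are produced by a parallel convolutional layer and trained end-to-end. The function names `bilinear_sample` and `deformable_conv2d` are illustrative, not from the released code.

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Bilinearly sample img (H, W) at fractional coords (y, x);
    out-of-bounds neighbors contribute zero, as in the paper."""
    H, W = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    val = 0.0
    for yy, wy in ((y0, 1.0 - (y - y0)), (y0 + 1, y - y0)):
        for xx, wx in ((x0, 1.0 - (x - x0)), (x0 + 1, x - x0)):
            if 0 <= yy < H and 0 <= xx < W:
                val += wy * wx * img[yy, xx]
    return val

def deformable_conv2d(img, kernel, offsets):
    """Single-channel deformable convolution (stride 1, no padding).

    img:     (H, W) input feature map
    kernel:  (k, k) weights
    offsets: (H_out, W_out, k*k, 2) per-location (dy, dx) for each of
             the k*k kernel sampling points
    """
    H, W = img.shape
    k = kernel.shape[0]
    H_out, W_out = H - k + 1, W - k + 1
    out = np.zeros((H_out, W_out))
    grid = [(a, b) for a in range(k) for b in range(k)]  # regular grid R
    for i in range(H_out):
        for j in range(W_out):
            acc = 0.0
            for n, (di, dj) in enumerate(grid):
                dy, dx = offsets[i, j, n]
                # regular grid position plus learned offset
                acc += kernel[di, dj] * bilinear_sample(img, i + di + dy, j + dj + dx)
            out[i, j] = acc
    return out
```

With all offsets zero this reduces exactly to a plain convolution, which is why the paper's modules can drop into existing CNNs as replacements for their plain counterparts.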