Keywords
Image warping, Convolutional neural network, Transformer, Artificial intelligence, Computer science, Differentiable function, Pattern recognition (psychology), Algorithm, Mathematics, Voltage, Engineering, Electrical engineering, Mathematical analysis
Authors
Max Jaderberg, Karen Simonyan, Andrew Zisserman, Koray Kavukcuoglu
Source
Journal: Cornell University - arXiv
Date: 2015-06-05
Citations: 46
Abstract
Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations.
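To make the abstract's description concrete, below is a minimal sketch of an affine spatial transformer module, written in PyTorch (an assumption; the paper does not prescribe a framework, and the layer sizes of the localisation network here are illustrative, not the authors' architecture). It shows the three parts the abstract implies: a localisation network that predicts transformation parameters from the feature map itself, a grid generator, and a differentiable bilinear sampler, so the whole module can be dropped into an existing convolutional network and trained end-to-end without extra supervision.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformer(nn.Module):
    """Minimal affine spatial transformer (illustrative sketch).

    A small localisation network regresses a 2x3 affine matrix from the
    input feature map; the map is then resampled with a differentiable
    bilinear sampler, so gradients flow through the whole module.
    """

    def __init__(self, in_channels):
        super().__init__()
        # Localisation network: regresses 6 affine parameters from the input.
        # The specific layer sizes here are arbitrary choices for the sketch.
        self.loc = nn.Sequential(
            nn.Conv2d(in_channels, 8, kernel_size=7), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, 32), nn.ReLU(),
            nn.Linear(32, 6),
        )
        # Initialise the final layer to output the identity transform,
        # so training starts from an unwarped feature map.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)                   # per-sample affine matrix
        grid = F.affine_grid(theta, x.size(), align_corners=False)   # grid generator
        return F.grid_sample(x, grid, align_corners=False)   # bilinear sampler

# Usage: insert the module before (or between) convolutional layers.
x = torch.randn(8, 3, 32, 32)
warped = SpatialTransformer(3)(x)   # same shape as x, spatially transformed
```

Because both `affine_grid` and `grid_sample` are differentiable, the localisation network receives gradients from the downstream task loss alone, which is how the module can learn invariance to translation, scale, rotation and more general warps without any change to the optimisation procedure.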