MNIST database
Convolutional neural network
Artificial intelligence
Pattern recognition (psychology)
Computer science
Gabor filter
Benchmark (surveying)
Invariant (physics)
Wavelet
Feature (linguistics)
Orientation (vector space)
Contextual image classification
Deep learning
Computer vision
Feature extraction
Image (mathematics)
Mathematics
Geometry
Philosophy
Discrete wavelet transform
Linguistics
Mathematical physics
Wavelet transform
Geography
Geodesy
Authors
Ye Yuan, Lina Wang, Guoqiang Zhong, Wei Gao, Wencong Jiao, Junyu Dong, Biao Shen, Dongdong Xia, Wei Xiang
Identifier
DOI: 10.1016/j.patcog.2021.108495
Abstract
Despite the great breakthroughs that deep convolutional neural networks (DCNNs) have achieved in image representation learning in recent years, they lack the ability to extract invariant information from images. On the other hand, several traditional feature extractors, such as Gabor filters, are widely used to learn invariant information from images. In this paper, we propose a new class of DCNNs named adaptive Gabor convolutional networks (AGCNs). In the AGCNs, the convolutional kernels are adaptively multiplied by Gabor filters to construct Gabor convolutional filters (GCFs), while the parameters of the Gabor functions (i.e., scale and orientation) are learned alongside those of the convolutional kernels. In addition, the GCFs are regenerated after the Gabor filters and convolutional kernels are updated. We evaluate the performance of the proposed AGCNs on image classification using five benchmark image datasets, i.e., MNIST and its rotated version, SVHN, CIFAR-10, CINIC-10, and DogsVSCats. Experimental results show that the AGCNs are robust to spatial transformations and achieve higher accuracy than the DCNNs and other state-of-the-art deep networks. Moreover, the GCFs can be easily embedded into any classical DCNN model (e.g., ResNet) and require fewer parameters than the corresponding DCNNs.
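To make the mechanism described in the abstract concrete (convolutional kernels modulated elementwise by Gabor filters whose scale and orientation are learned jointly, with the GCFs regenerated after each parameter update), a minimal PyTorch sketch is given below. It is an illustration under stated assumptions, not the authors' implementation: the class name GaborModulatedConv2d, the real-valued Gabor parameterization, the per-output-channel (sigma, theta) pairs, and the initialization are all choices made here for clarity.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaborModulatedConv2d(nn.Module):
    # Hypothetical sketch of a Gabor convolutional filter (GCF) layer:
    # ordinary conv kernels are multiplied elementwise by Gabor filters whose
    # scale (sigma) and orientation (theta) are learnable parameters.
    def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.stride, self.padding = stride, padding
        # Ordinary convolutional kernels, learned as usual.
        self.weight = nn.Parameter(
            0.1 * torch.randn(out_channels, in_channels, kernel_size, kernel_size))
        # One learnable (scale, orientation) pair per output channel -- an assumption;
        # the paper may parameterize the Gabor functions differently.
        self.sigma = nn.Parameter(torch.full((out_channels,), kernel_size / 2.0))
        self.theta = nn.Parameter(math.pi * torch.rand(out_channels))
        # Fixed coordinate grid over the kernel window.
        half = (kernel_size - 1) / 2.0
        coords = torch.linspace(-half, half, kernel_size)
        yy, xx = torch.meshgrid(coords, coords, indexing="ij")
        self.register_buffer("xx", xx)
        self.register_buffer("yy", yy)

    def gabor_bank(self):
        # Real part of a Gabor function evaluated on the kernel grid, one filter
        # per output channel; shape (out_channels, 1, k, k) for broadcasting.
        sigma = self.sigma.view(-1, 1, 1).clamp(min=1e-3)
        theta = self.theta.view(-1, 1, 1)
        x_rot = self.xx * torch.cos(theta) + self.yy * torch.sin(theta)
        y_rot = -self.xx * torch.sin(theta) + self.yy * torch.cos(theta)
        envelope = torch.exp(-(x_rot ** 2 + y_rot ** 2) / (2.0 * sigma ** 2))
        carrier = torch.cos(2.0 * math.pi * x_rot / sigma)
        return (envelope * carrier).unsqueeze(1)

    def forward(self, x):
        # The GCFs are regenerated from the current parameters on every forward
        # pass, so gradients flow to both the kernels and the Gabor parameters.
        gcf = self.weight * self.gabor_bank()  # broadcasts over input channels
        return F.conv2d(x, gcf, stride=self.stride, padding=self.padding)

Under these assumptions the layer takes the same inputs as nn.Conv2d, e.g. GaborModulatedConv2d(3, 16, kernel_size=3) applied to a batch of RGB images, which mirrors the abstract's claim that GCFs can be embedded into classical DCNN models such as ResNet.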