Computer science
Convolutional neural network
Artificial intelligence
Pattern recognition (psychology)
Classifier (UML)
Support vector machine
Cognitive neuroscience of visual object recognition
Contextual image classification
Memory footprint
Feature extraction
Feature (linguistics)
Representation (politics)
Image (mathematics)
Politics
Law
Philosophy
Operating system
Linguistics
Political science
Authors
Ali Sharif Razavian,Hossein Azizpour,Josephine Sullivan,Stefan Carlsson
Source
Journal: Cornell University - arXiv
Date: 2014-01-01
Cited by: 5
Identifier
DOI:10.48550/arxiv.1403.6382
Abstract
Recent results indicate that the generic descriptors extracted from convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the OverFeat network, which was trained to perform object classification on ILSVRC13. We use features extracted from the OverFeat network as a generic image representation to tackle a diverse range of recognition tasks: object image classification, scene recognition, fine-grained recognition, attribute detection, and image retrieval, applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the OverFeat network was trained to solve. Astonishingly, we report consistently superior results compared to highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval, it consistently outperforms low-memory-footprint methods except on the sculptures dataset. The results are achieved using a linear SVM classifier (or $L_2$ distance in the case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques, e.g. jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks.
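The pipeline the abstract describes is simple: take a 4096-dimensional activation from a layer of a pretrained network and train a linear SVM on top of it (or compare features with L2 distance for retrieval). Below is a minimal sketch of that idea in Python, assuming PyTorch/torchvision and scikit-learn are available. The released OverFeat model is not bundled with torchvision, so AlexNet's 4096-d fc7 layer is used here purely as a stand-in, and the image paths and labels are hypothetical placeholders.

```python
# Sketch: off-the-shelf CNN features + linear SVM (stand-in for the paper's
# OverFeat features; AlexNet's fc7 also yields a 4096-d representation).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.svm import LinearSVC

# Pretrained backbone, frozen and used only as a feature extractor.
backbone = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
backbone.eval()

# Keep the fully connected layers up to the 4096-d fc7 activation
# (drop the final 1000-way ImageNet classifier).
feature_head = torch.nn.Sequential(*list(backbone.classifier.children())[:-1])

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_feature(path: str) -> torch.Tensor:
    """Return the 4096-d off-the-shelf CNN feature for one image."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        conv = backbone.features(x)
        conv = backbone.avgpool(conv)
        feat = feature_head(torch.flatten(conv, 1))
    return feat.squeeze(0)

# Hypothetical training data; replace with a real labelled dataset.
train_paths, train_labels = ["img0.jpg", "img1.jpg"], [0, 1]

X = torch.stack([extract_feature(p) for p in train_paths]).numpy()
clf = LinearSVC(C=1.0)   # linear SVM on the fixed CNN features
clf.fit(X, train_labels)
```

For the retrieval setting mentioned in the abstract, the same `extract_feature` output could instead be ranked by L2 distance to a query feature; augmentation such as jittering would simply add extra cropped/flipped copies of each image before feature extraction.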