Computer science
Relation (database)
Artificial intelligence
Embedding
Feature (linguistics)
Shot (projectile)
Dependency (UML)
Set (abstract data type)
Pattern recognition (psychology)
Feature learning
Machine learning
Data mining
Philosophy
Organic chemistry
Chemistry
Programming language
Linguistics
Authors
Binyuan Hui, Pengfei Zhu, Qinghua Hu, Qilong Wang
Source
Venue: International Conference on Multimedia and Expo
Date: 2019-07-01
Cited by: 30
Identifier
DOI: 10.1109/icmew.2019.00041
Abstract
The success of deep learning is largely attributable to massive data with accurate labels. However, for few-shot learning, and especially zero-shot learning, deep models cannot be well trained because few labeled samples are available. Inspired by the human visual system, attention models have been widely used in action recognition, instance segmentation, and other vision tasks by introducing spatial, temporal, or channel-wise weights. In this paper, we propose a self-attention relation network (SARN) for few-shot learning. SARN consists of three modules, i.e., an embedding module, an attention module, and a relation module. The embedding module extracts feature maps, while the attention module is introduced to enhance the learned features. Finally, the extracted features of the query sample and the support set are fed into the relation module for comparison, and the relation score is output for classification. Compared with the existing relation network for few-shot learning, SARN can discover non-local information and capture long-range dependencies. SARN can be easily extended to zero-shot learning by replacing the support set with semantic vectors. Experiments on benchmarks (Omniglot, miniImageNet, AwA, and CUB) show that our proposed SARN outperforms state-of-the-art algorithms on both few-shot and zero-shot tasks.
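The abstract describes a three-stage pipeline: an embedding network produces feature maps, a self-attention (non-local) block refines them, and a relation head scores each query against the aggregated support features. The sketch below illustrates that structure in PyTorch; the Conv-4 style embedding, the query/key/value channel reduction, the prototype averaging over the support set, and all layer sizes are assumptions made to keep the example runnable, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvBlock(nn.Module):
    """3x3 conv -> batch norm -> ReLU -> 2x2 max pool (assumed embedding building block)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )

    def forward(self, x):
        return self.net(x)


class SelfAttention(nn.Module):
    """Non-local self-attention over spatial positions: every location attends
    to every other location, which is what allows long-range dependencies."""
    def __init__(self, ch):
        super().__init__()
        self.query = nn.Conv2d(ch, ch // 8, 1)
        self.key = nn.Conv2d(ch, ch // 8, 1)
        self.value = nn.Conv2d(ch, ch, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)      # (b, hw, c/8)
        k = self.key(x).flatten(2)                        # (b, c/8, hw)
        attn = F.softmax(torch.bmm(q, k), dim=-1)         # (b, hw, hw) pairwise affinities
        v = self.value(x).flatten(2)                       # (b, c, hw)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                        # residual refinement of the features


class SARN(nn.Module):
    """Illustrative embedding -> attention -> relation pipeline (sizes are assumptions)."""
    def __init__(self, ch=64):
        super().__init__()
        self.embed = nn.Sequential(
            ConvBlock(3, ch), ConvBlock(ch, ch), ConvBlock(ch, ch), ConvBlock(ch, ch)
        )
        self.attend = SelfAttention(ch)
        # relation head: two conv blocks, then an MLP producing a score in [0, 1]
        self.rel_conv = nn.Sequential(ConvBlock(2 * ch, ch), ConvBlock(ch, ch))
        self.rel_fc = nn.Sequential(
            nn.Linear(ch, 8), nn.ReLU(inplace=True), nn.Linear(8, 1), nn.Sigmoid()
        )

    def forward(self, support, query):
        # support: (n_way, k_shot, 3, H, W); query: (n_query, 3, H, W)
        n_way, k_shot = support.shape[:2]
        s = self.attend(self.embed(support.flatten(0, 1)))      # (n_way*k_shot, ch, h, w)
        s = s.view(n_way, k_shot, *s.shape[1:]).mean(dim=1)     # per-class support prototypes
        q = self.attend(self.embed(query))                      # (n_query, ch, h, w)
        # pair every query with every class prototype along the channel axis
        s_exp = s.unsqueeze(0).expand(q.size(0), -1, -1, -1, -1)
        q_exp = q.unsqueeze(1).expand(-1, n_way, -1, -1, -1)
        pairs = torch.cat([s_exp, q_exp], dim=2).flatten(0, 1)  # (n_query*n_way, 2ch, h, w)
        scores = self.rel_fc(self.rel_conv(pairs).flatten(1))   # relation scores in [0, 1]
        return scores.view(q.size(0), n_way)                    # one score per (query, class)
```

For a 5-way 1-shot episode with 84x84 RGB inputs, `support` would have shape `(5, 1, 3, 84, 84)` and `query` shape `(n_query, 3, 84, 84)`; the forward pass returns one relation score per (query, class) pair, and classification picks the class with the highest score. The zero-shot extension mentioned in the abstract would replace the support embedding branch with semantic vectors projected into the same feature space.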