Topics: Computer science, Margin (machine learning), Relation (database), Artificial intelligence, Projectile, Feature (linguistics), Task (project management), Machine learning, One-shot, Focus (optics), Field (mathematics), Relation extraction, Pattern recognition (psychology), Data mining, Mathematics, Engineering, Philosophy, Economics, Organic chemistry, Chemistry, Management, Physics, Pure mathematics, Optics, Mechanical engineering, Linguistics
Authors
Zijun Li,Zhengping Hu,Weiwei Luo,Xiao Chuan Hu
Identifier
DOI:10.1016/j.patcog.2022.109024
Abstract
Few-shot learning is an essential and challenging field in machine learning, since the agent must learn novel concepts from only a few examples. Recent methods tackle few-shot tasks by learning a comparison or relation between query and support samples, but they have neither exceeded human performance nor made full use of the relations present in few-shot tasks. Humans can recognize multiple variants of an object located anywhere in an image and compare relations among learned instances. Inspired by this human learning mechanism, we explore the definition of relations in relation networks and propose self-attention relation modules to improve both features and learning ability. First, we introduce vision self-attention to generate and purify features in few-shot learning. Comparing different patches leads the backbone to infer relations between local features, which forces feature extraction to focus on finer details. Second, we propose task-specific feature augmentation modules that infer relations and weight the contributions of different components in few-shot tasks. The proposed SaberNet is conceptually simple and empirically powerful. Its performance surpasses the baseline by a large margin, pushing 5-way 1-shot accuracy on CUB to 89.75% (a 12.73% absolute improvement), on Cars to 76.71% (12.99% absolute), and on Flowers to 84.33% (7.67% absolute).
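To make the abstract's pipeline concrete, the sketch below illustrates the general idea of combining self-attention over patch features with a relation (similarity) comparison between a query and class prototypes, as in relation-network-style few-shot classification. This is a minimal NumPy illustration under stated assumptions, not the authors' SaberNet implementation: the function names (`self_attention`, `relation_score`, `classify`), the use of cosine similarity as the relation score, and mean-pooling of patches are all illustrative choices.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(patches):
    """Single-head self-attention over patch features (N patches x D dims).

    Each patch is re-expressed as an attention-weighted mix of all patches,
    so a local feature is refined by its relations to the rest of the image
    (the "comparison of different patches" idea in the abstract).
    """
    d = patches.shape[1]
    scores = patches @ patches.T / np.sqrt(d)   # pairwise patch relations
    return softmax(scores, axis=-1) @ patches   # relation-weighted features

def relation_score(query, prototype):
    """Relation between a query embedding and a class prototype.

    Illustrative choice: cosine similarity (SaberNet's actual relation
    module is learned, not fixed).
    """
    q = query / np.linalg.norm(query)
    p = prototype / np.linalg.norm(prototype)
    return float(q @ p)

def classify(query_patches, support_sets):
    """N-way 1-shot style classification sketch.

    Attend over patches, mean-pool into one embedding per image, then
    pick the class whose support prototype has the highest relation score.
    """
    q = self_attention(query_patches).mean(axis=0)
    protos = [self_attention(s).mean(axis=0) for s in support_sets]
    scores = [relation_score(q, p) for p in protos]
    return int(np.argmax(scores)), scores
```

Usage: with three random 4-patch, 8-dimensional support images and a query identical to the second support image, `classify` returns class index 1 with a relation score of 1.0 for that class, since identical patch sets produce identical pooled embeddings.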
Strongly Powered by AbleSci AI