Authors
Zhongyu Li, Bowen Yin, Shanghua Gao, Yongxiang Liu, Li Liu, Ming-Ming Cheng
Source
Journal: Cornell University - arXiv
Date: 2023-01-01
Identifier
DOI: 10.48550/arXiv.2310.05108
Abstract
Incorporating heterogeneous representations from different architectures has facilitated various vision tasks, e.g., some hybrid networks combine transformers and convolutions. However, complementarity between such heterogeneous architectures has not been well exploited in self-supervised learning. Thus, we propose Heterogeneous Self-Supervised Learning (HSSL), which enforces a base model to learn from an auxiliary head whose architecture is heterogeneous from the base model. In this process, HSSL endows the base model with new characteristics through representation learning, without structural changes. To comprehensively understand HSSL, we conduct experiments on various heterogeneous pairs containing a base model and an auxiliary head. We discover that the representation quality of the base model improves as the architecture discrepancy grows. This observation motivates us to propose a search strategy that quickly determines the most suitable auxiliary head for a specific base model to learn from, as well as several simple but effective methods to enlarge the model discrepancy. HSSL is compatible with various self-supervised methods, achieving superior performance on various downstream tasks, including image classification, semantic segmentation, instance segmentation, and object detection. Our source code will be made publicly available.
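The core idea in the abstract — a base model supervised through a heterogeneous auxiliary head with a view-alignment objective — can be illustrated with a minimal sketch. This is not the authors' implementation: the toy "base encoder" (a linear map with ReLU, standing in for a convolutional backbone), the tiny-MLP "auxiliary head", and the negative-cosine alignment loss are all illustrative assumptions chosen to mirror common self-supervised setups.

```python
import numpy as np

rng = np.random.default_rng(0)

def base_encoder(x, w):
    # Stand-in for the base model (e.g., a conv backbone): linear map + ReLU.
    return np.maximum(x @ w, 0.0)

def aux_head(h, w1, w2):
    # Stand-in for a heterogeneous auxiliary head (here a tiny MLP).
    return np.tanh(h @ w1) @ w2

def alignment_loss(za, zb):
    # Negative mean cosine similarity between two views, a common SSL objective.
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    return -np.mean(np.sum(za * zb, axis=1))

# Two "augmented views" of the same batch (noise stands in for augmentation).
x = rng.normal(size=(8, 16))
view_a = x + 0.1 * rng.normal(size=x.shape)
view_b = x + 0.1 * rng.normal(size=x.shape)

w = 0.1 * rng.normal(size=(16, 32))    # base-model weights
w1 = 0.1 * rng.normal(size=(32, 32))   # auxiliary-head weights
w2 = 0.1 * rng.normal(size=(32, 8))

# Base features pass through the heterogeneous head; the loss computed on the
# head's output is what supervises (backpropagates into) the base model.
loss = alignment_loss(aux_head(base_encoder(view_a, w), w1, w2),
                      aux_head(base_encoder(view_b, w), w1, w2))
print(float(loss))
```

In the paper's framing, gradients from this head-side loss flow back into the base model, so the base model acquires characteristics of the heterogeneous architecture without any structural change to itself; the head can be discarded after pretraining.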