Computer science
Transformer
Feature extraction
Net (polyhedron)
Artificial intelligence
Pattern recognition (psychology)
Mathematics
Engineering
Voltage
Electrical engineering
Geometry
Authors
Chunlei Meng, Jiacheng Yang, Wei Lin, Bowen Liu, Hongda Zhang, Chun Ouyang, Zhongxue Gan
Source
Journal: Cornell University - arXiv
Date: 2024-10-15
Identifier
DOI: 10.48550/arxiv.2410.11428
Abstract
Convolutional neural networks (CNNs) and vision transformers (ViTs) have become essential in computer vision for local and global feature extraction, respectively. However, existing methods that aggregate these architectures are often inefficient. To address this, the CNN-Transformer Aggregation Network (CTA-Net) was developed. CTA-Net combines CNNs and ViTs, with transformers capturing long-range dependencies and CNNs extracting localized features, enabling efficient processing of both fine-grained local detail and broader contextual information. CTA-Net introduces the Light Weight Multi-Scale Feature Fusion Multi-Head Self-Attention (LMF-MHSA) module for effective multi-scale feature integration with fewer parameters. Additionally, the Reverse Reconstruction CNN-Variants (RRCV) module enhances the embedding of CNNs within the transformer architecture. Extensive experiments on small-scale datasets (fewer than 100,000 samples) show that CTA-Net achieves superior performance (top-1 accuracy of 86.76%), fewer parameters (20.32M), and greater efficiency (2.83B FLOPs), making it a highly efficient and lightweight solution for visual tasks on such datasets.
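The core idea of aggregating the two branches — attention for long-range dependencies, convolution for localized features — can be illustrated with a toy sketch. This is not the paper's LMF-MHSA or RRCV module (whose details are not given here); it is a generic, assumed-simplified example in NumPy: single-head self-attention as the global branch, a per-channel moving average standing in for a convolutional local branch, with the two outputs summed.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # Global branch: single-head self-attention with queries = keys = values = x.
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)   # (L, L) pairwise token similarities
    return softmax(scores) @ x      # every token mixes with all others

def local_conv(x, k=3):
    # Local branch: per-channel moving average over a window of k tokens,
    # a crude stand-in for a CNN's localized receptive field (zero padding).
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    return np.stack([xp[i:i + k].mean(axis=0) for i in range(x.shape[0])])

def aggregate(x):
    # Sum the global (attention) and local (convolution) feature maps.
    return self_attention(x) + local_conv(x)

tokens = np.random.default_rng(0).normal(size=(8, 16))  # 8 tokens, dim 16
out = aggregate(tokens)
print(out.shape)  # (8, 16): same sequence shape, mixed locally and globally
```

A real hybrid would interleave such blocks with normalization, residual connections, and learned projections; the sketch only shows how the two receptive-field regimes can be fused token-wise.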