Computer science
Block (permutation group theory)
Artificial intelligence
Convolutional neural network
Pyramid (geometry)
Convolution (computer science)
Pattern recognition (psychology)
Object detection
Segmentation
Margin (machine learning)
Bottleneck
Embedding
Computer vision
Artificial neural network
Machine learning
Embedded system
Physics
Geometry
Mathematics
Optics
Authors
Hu Zhang, Keke Zu, Jian Lü, Yuru Zou, Deyu Meng
Identifier
DOI: 10.1007/978-3-031-26313-2_33
Abstract
Recently, it has been demonstrated that the performance of a deep convolutional neural network can be effectively improved by embedding an attention module into it. In this work, a novel lightweight and effective attention method named the Pyramid Squeeze Attention (PSA) module is proposed. By replacing the 3×3 convolution with the PSA module in the bottleneck blocks of ResNet, a novel representational block named Efficient Pyramid Squeeze Attention (EPSA) is obtained. The EPSA block can be easily added as a plug-and-play component to a well-established backbone network, yielding significant improvements in model performance. Hence, a simple and efficient backbone architecture named EPSANet is developed in this work by stacking these ResNet-style EPSA blocks. Correspondingly, the proposed EPSANet offers a stronger multi-scale representation ability for various computer vision tasks, including but not limited to image classification, object detection, and instance segmentation. Without bells and whistles, the proposed EPSANet outperforms most state-of-the-art channel attention methods. Compared to SENet-50, the Top-1 accuracy on the ImageNet dataset is improved by 1.93%, and with Mask R-CNN on the MS-COCO dataset, gains of +2.7 box AP for object detection and +1.7 mask AP for instance segmentation are obtained.
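The abstract describes the key structural idea (a multi-branch "pyramid squeeze" attention module standing in for the 3×3 convolution of a ResNet bottleneck) without giving an implementation. Below is a minimal PyTorch sketch of how a PSA-style module and an EPSA-style bottleneck could be wired together, based only on the description above. The branch kernel sizes (3, 5, 7, 9), the group counts, the SE-style squeeze, the softmax across branches, and the names `SEWeight`, `PSAModule`, and `EPSABottleneck` are illustrative assumptions, not the authors' exact configuration.

```python
# A hedged sketch of a PSA-style module and EPSA-style bottleneck.
# All architectural details below are assumptions for illustration only.
import torch
import torch.nn as nn


class SEWeight(nn.Module):
    """SE-style channel squeeze: global pooling + bottleneck MLP + sigmoid."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.fc(self.pool(x))  # (B, C, 1, 1) attention weights


class PSAModule(nn.Module):
    """Split channels into S branches, convolve each at a different scale,
    squeeze each branch, and softmax-normalize attention across branches."""
    def __init__(self, channels, kernel_sizes=(3, 5, 7, 9), groups=(1, 4, 8, 16)):
        super().__init__()
        assert channels % len(kernel_sizes) == 0
        self.splits = len(kernel_sizes)
        branch_ch = channels // self.splits  # must be divisible by each group count
        self.convs = nn.ModuleList(
            nn.Conv2d(branch_ch, branch_ch, k, padding=k // 2, groups=g, bias=False)
            for k, g in zip(kernel_sizes, groups)
        )
        self.se = nn.ModuleList(SEWeight(branch_ch) for _ in kernel_sizes)

    def forward(self, x):
        b, c, h, w = x.shape
        chunks = torch.chunk(x, self.splits, dim=1)
        # Multi-scale features: (B, S, C/S, H, W)
        feats = torch.stack([conv(t) for conv, t in zip(self.convs, chunks)], dim=1)
        # Per-branch channel attention: (B, S, C/S, 1, 1)
        attn = torch.stack([se(f) for se, f in zip(self.se, feats.unbind(1))], dim=1)
        attn = torch.softmax(attn, dim=1)  # branches compete across scales
        return (feats * attn).reshape(b, c, h, w)


class EPSABottleneck(nn.Module):
    """ResNet-style bottleneck with the 3x3 convolution replaced by PSA."""
    expansion = 4

    def __init__(self, in_ch, mid_ch):
        super().__init__()
        out_ch = mid_ch * self.expansion
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            PSAModule(mid_ch),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        self.shortcut = (nn.Conv2d(in_ch, out_ch, 1, bias=False)
                         if in_ch != out_ch else nn.Identity())

    def forward(self, x):
        return torch.relu(self.body(x) + self.shortcut(x))


if __name__ == "__main__":
    block = EPSABottleneck(in_ch=256, mid_ch=64)
    y = block(torch.randn(2, 256, 56, 56))
    print(y.shape)  # torch.Size([2, 256, 56, 56])
```

Under these assumptions, the module stays plug-and-play: `PSAModule` preserves both channel count and spatial size, so stacking `EPSABottleneck` blocks in place of standard ResNet bottlenecks yields an EPSANet-style backbone without changing the surrounding network.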