Computer Science
Boosting (machine learning)
Artificial Intelligence
Attention Network
Machine Learning
Transformer
Regularization
Pattern Recognition
Authors
Hui Lin,Zhiheng Ma,Rongrong Ji,Yaowei Wang,Xiaopeng Hong
Identifier
DOI:10.1109/cvpr52688.2022.01901
Abstract
This paper focuses on the challenging crowd counting task. Because large-scale variations often exist within crowd images, neither the fixed-size convolution kernels of CNNs nor the fixed-size attention of recent vision transformers can handle such variations well. To address this problem, we propose a Multifaceted Attention Network (MAN) to improve transformer models in local spatial relation encoding. MAN incorporates global attention from the vanilla transformer, learnable local attention, and instance attention into a counting model. First, a local Learnable Region Attention (LRA) is proposed to dynamically assign an exclusive attention region to each feature location. Second, we design a Local Attention Regularization to supervise the training of LRA by minimizing the deviation among the attention of different feature locations. Finally, we provide an Instance Attention mechanism that dynamically focuses on the most important instances during training. Extensive experiments on four challenging crowd counting datasets, namely ShanghaiTech, UCF-QNRF, JHU++, and NWPU, validate the proposed method. Code: https://github.com/LoraLinH/Boosting-Crowd-Counting-via-Multifaceted-Attention.
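The abstract names the mechanisms but not their implementation; the authors' actual code is at the GitHub link above. As a rough illustration of the central idea of combining global self-attention with a learnable, location-dependent local attention, here is a minimal PyTorch sketch. Everything in it (the class name MultifacetedAttentionSketch, the candidate window_sizes, the region_logits head, and the soft window-size selection standing in for LRA) is an assumption made for illustration, not the paper's actual design:

```python
# Illustrative sketch only: global attention fused with a learnable local
# attention whose spatial extent is predicted per feature location.
# This is NOT the authors' implementation; see the official repository.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultifacetedAttentionSketch(nn.Module):
    """Hypothetical module: global self-attention + learnable local attention."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        # Global branch: standard (vanilla) multi-head self-attention.
        self.global_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Local branch: each location predicts logits over candidate window
        # sizes; a crude stand-in for the paper's Learnable Region Attention.
        self.window_sizes = (3, 5, 7)  # assumed candidate local regions
        self.region_logits = nn.Linear(dim, len(self.window_sizes))
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, x: torch.Tensor, h: int, w: int) -> torch.Tensor:
        # x: (B, N, C) token sequence with N == h * w
        g, _ = self.global_attn(x, x, x)  # global attention output, (B, N, C)

        # Soft, per-location selection over window sizes, so each feature
        # location effectively learns its own attention region.
        weights = F.softmax(self.region_logits(x), dim=-1)  # (B, N, K)

        b, n, c = x.shape
        feat = x.transpose(1, 2).reshape(b, c, h, w)  # back to a 2D map
        local = 0.0
        for k, ws in enumerate(self.window_sizes):
            # Average pooling approximates attending within a ws x ws region.
            pooled = F.avg_pool2d(feat, ws, stride=1, padding=ws // 2)
            pooled = pooled.reshape(b, c, n).transpose(1, 2)  # (B, N, C)
            local = local + weights[..., k:k + 1] * pooled

        # Fuse the global and local views of each token.
        return self.fuse(torch.cat([g, local], dim=-1))
```

For example, with dim=256 and an 8x8 feature map, MultifacetedAttentionSketch(256)(torch.randn(2, 64, 256), 8, 8) returns a (2, 64, 256) tensor. The paper's Local Attention Regularization and Instance Attention are additional training-time components not shown in this sketch.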