Topics
Computer Science, Attention Network, Object Detection, Artificial Intelligence, Pattern Recognition, Data Mining
Authors
Dahang Wan, Rongsheng Lu, Siyuan Shen, Ting Xu, Xianli Lang, Zhijie Ren
Identifier
DOI:10.1016/j.engappai.2023.106442
Abstract
The attention mechanism, one of the most widely used components in computer vision, helps neural networks emphasize significant elements and suppress irrelevant ones. However, most channel attention mechanisms encode only channel feature information and ignore spatial feature information, which limits model representation and object detection performance, while spatial attention modules are often complex and computationally expensive. To strike a balance between performance and complexity, this paper proposes a lightweight Mixed Local Channel Attention (MLCA) module that improves the performance of object detection networks by simultaneously incorporating both channel and spatial information, as well as local and global information, to strengthen the expressive power of the network. On this basis, the MobileNet-Attention-YOLO (MAY) algorithm is presented for comparing the performance of various attention modules. On the PASCAL VOC and SIMD datasets, MLCA achieves a better balance among model representation efficacy, performance, and complexity than alternative attention techniques. Compared with the Squeeze-and-Excitation (SE) attention mechanism on the PASCAL VOC dataset and the Coordinate Attention (CA) method on the SIMD dataset, mAP is improved by 1.0% and 1.5%, respectively.
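The abstract describes MLCA as gating feature maps with both a global channel descriptor and local, spatially coarse descriptors. The following is a minimal NumPy sketch of that general idea, not the paper's actual implementation: the grid size `k`, the equal-weight mixing of the two branches, and the absence of any learned layers are all illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mlca_sketch(x, k=2):
    """Illustrative mixed local/global channel gating.

    x: feature map of shape (C, H, W); H and W must be divisible by k.
    k: size of the local pooling grid (k x k patches) -- an assumed parameter.
    """
    C, H, W = x.shape
    # Global branch: one descriptor per channel (global average pooling).
    g = x.mean(axis=(1, 2))                                        # (C,)
    # Local branch: average pooling over a k x k spatial grid, so each
    # channel keeps coarse spatial information.
    local = x.reshape(C, k, H // k, k, W // k).mean(axis=(2, 4))   # (C, k, k)
    # Mix local and global descriptors; the fixed 0.5/0.5 weighting is an
    # illustrative stand-in, not the module described in the paper.
    mixed = 0.5 * (local + g[:, None, None])                       # (C, k, k)
    # Upsample the gates back to the feature-map size and apply sigmoid gating.
    gates = sigmoid(np.repeat(np.repeat(mixed, H // k, axis=1), W // k, axis=2))
    return x * gates

x = np.random.rand(8, 4, 4).astype(np.float32)
y = mlca_sketch(x, k=2)
```

Because the sigmoid gates lie in (0, 1), the output keeps the input's shape while attenuating each position according to its mixed local/global channel descriptor.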