
Research on knowledge distillation algorithm based on Yolov5 attention mechanism

Keywords: computer science; feature; distillation; artificial intelligence; machine learning; representation; algorithm; pattern recognition
Authors
Shengjie Cheng, Peiyong Zhou, Yu Liu, Hongji Ma, Alimjan Aysa, Kurban Ubul
Source
Journal: Expert Systems With Applications [Elsevier BV]
Volume 240, Article 122553. Cited by: 6
Identifier
DOI: 10.1016/j.eswa.2023.122553
Abstract

State-of-the-art CNN-based detection models are largely undeployable on mobile devices with limited computing power because they carry too many redundant parameters and demand excessive computation; knowledge distillation, a practical model-compression approach, can alleviate this limitation. Past feature-based knowledge distillation algorithms concentrated on transferring hand-picked local features and therefore lost much of the global information in images. To address these shortcomings of traditional feature distillation, we first improve GAMAttention to learn a global feature representation of the image; the improved attention mechanism minimizes the information loss incurred when processing features. Second, instead of manually defining which features should be transferred, we propose a more interpretable approach in which the student network learns to emulate the high-response feature regions predicted by the teacher network. This makes the model more end-to-end, and the feature transfer lets the student mimic the teacher in generating semantically strong feature maps, improving the detection performance of the small model. To avoid absorbing too much noise when learning background features, the two parts of feature distillation are assigned different weights. Finally, logit distillation is performed on the prediction heads of the student and teacher networks. In our experiments, we chose Yolov5 as the base network structure for the teacher-student pair. Improving Yolov5s with attention and knowledge distillation ultimately yields a 1.3% performance gain on VOC and a 1.8% gain on KITTI.
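To make the two distillation objectives concrete, below is a minimal PyTorch sketch of how an attention-guided feature distillation loss and a logit distillation loss might be combined. Everything here is an illustrative assumption rather than the authors' released implementation: the class name FeatureDistillationLoss, the foreground/background weights alpha and beta, the mean-threshold masking rule, and the temperature are placeholders, and the GAMAttention module and YOLOv5 plumbing are omitted.

# A minimal sketch, not the authors' code: weights the teacher's
# high-response (foreground-like) regions more heavily than the rest,
# and adds logit distillation on the prediction heads. All names,
# weights, and the thresholding rule are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureDistillationLoss(nn.Module):
    """Student mimics the teacher's high-response feature regions.

    The teacher's per-location response (channel-wise mean of absolute
    activations) is thresholded at its spatial mean to split the map
    into high- and low-response regions, weighted by `alpha` and `beta`
    respectively (beta < alpha suppresses background noise, echoing the
    abstract's separate weights for the two parts of feature distillation).
    """

    def __init__(self, alpha: float = 1.0, beta: float = 0.25):
        super().__init__()
        self.alpha = alpha
        self.beta = beta

    def forward(self, f_student: torch.Tensor, f_teacher: torch.Tensor) -> torch.Tensor:
        # f_student, f_teacher: (B, C, H, W); shapes assumed to match
        # (a 1x1 conv adapter would align channel counts in practice).
        response = f_teacher.abs().mean(dim=1, keepdim=True)   # (B, 1, H, W)
        thresh = response.mean(dim=(2, 3), keepdim=True)
        fg_mask = (response > thresh).float()                  # high-response regions
        bg_mask = 1.0 - fg_mask
        err = ((f_student - f_teacher.detach()) ** 2).mean(dim=1, keepdim=True)
        fg = (fg_mask * err).sum() / fg_mask.sum().clamp(min=1.0)
        bg = (bg_mask * err).sum() / bg_mask.sum().clamp(min=1.0)
        return self.alpha * fg + self.beta * bg


def logit_distillation_loss(p_student: torch.Tensor,
                            p_teacher: torch.Tensor,
                            temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened head predictions."""
    t = temperature
    log_p_s = F.log_softmax(p_student / t, dim=-1)
    p_t = F.softmax(p_teacher.detach() / t, dim=-1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (t * t)


if __name__ == "__main__":
    feat_loss = FeatureDistillationLoss(alpha=1.0, beta=0.25)
    f_s = torch.randn(2, 256, 20, 20)   # student neck feature map (dummy)
    f_t = torch.randn(2, 256, 20, 20)   # teacher neck feature map (dummy)
    cls_s = torch.randn(2, 400, 20)     # student class logits per anchor (dummy)
    cls_t = torch.randn(2, 400, 20)     # teacher class logits per anchor (dummy)
    total = feat_loss(f_s, f_t) + logit_distillation_loss(cls_s, cls_t)
    print(f"combined distillation loss: {total.item():.4f}")

The temperature-softened KL term follows common (Hinton-style) distillation practice; the paper's exact masking and weighting scheme may differ from this sketch.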