Deep Affinity Network for Multiple Object Tracking

Keywords: Computer Science, Artificial Intelligence, Video Tracking, Computer Vision, Deep Learning, Object Detection, Pattern Recognition, Tracking, Data Association, Benchmarking
Authors
Shijie Sun, Naveed Akhtar, Huansheng Song, Ajmal Mian, Mubarak Shah
Source
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence [IEEE Computer Society]
Volume/Issue: 1-1 · Cited by: 143
Identifier
DOI: 10.1109/TPAMI.2019.2929520
Abstract

Multiple Object Tracking (MOT) plays an important role in solving many fundamental problems in video analysis and computer vision. Most MOT methods employ two steps: Object Detection and Data Association. The first step detects objects of interest in every frame of a video, and the second establishes correspondence between the detected objects in different frames to obtain their tracks. Object detection has made tremendous progress in the last few years due to deep learning. However, data association for tracking still relies on hand-crafted constraints such as appearance, motion, spatial proximity, grouping, etc., to compute affinities between the objects in different frames. In this paper, we harness the power of deep learning for data association in tracking by jointly modeling object appearances and their affinities between different frames in an end-to-end fashion. The proposed Deep Affinity Network (DAN) learns compact, yet comprehensive features of pre-detected objects at several levels of abstraction, and performs exhaustive pairing permutations of those features in any two frames to infer object affinities. DAN also accounts for multiple objects appearing and disappearing between video frames. We exploit the resulting efficient affinity computations to associate objects in the current frame deep into the previous frames for reliable on-line tracking. Our technique is evaluated on the popular multiple object tracking challenges MOT15, MOT17, and UA-DETRAC. Comprehensive benchmarking under twelve evaluation metrics demonstrates that our approach is among the best-performing techniques on the leaderboard for these challenges. The open-source implementation of our work is available at https://github.com/shijieS/SST.git.
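The central idea in the abstract is an affinity matrix built from exhaustive pairings of object features from two frames, with extra slots for objects that appear or disappear between them. The sketch below illustrates that idea in plain NumPy under simplifying assumptions; it is not the authors' implementation (see the SST repository for that), and the small MLP weights and the helper names score_pairs and association_probs are illustrative placeholders for the affinity estimator that DAN learns end-to-end.

# Minimal NumPy sketch (assumed, not the paper's code): score every
# (previous, current) feature pair, append a "no match" row/column so
# appearing/disappearing objects can be handled, then normalize.
import numpy as np

rng = np.random.default_rng(0)

def score_pairs(F_prev, F_curr, W1, b1, W2, b2):
    """Score all (previous, current) feature pairs with a tiny MLP.

    F_prev: (N, d) object features from an earlier frame.
    F_curr: (M, d) object features from the current frame.
    Returns an (N, M) matrix of raw affinity scores. The MLP weights stand
    in for DAN's learned affinity estimator (placeholder, not the real one).
    """
    N, M = F_prev.shape[0], F_curr.shape[0]
    # Exhaustive pairing: concatenate each earlier-frame feature with each
    # current-frame feature, giving a (N, M, 2d) tensor of feature pairs.
    pairs = np.concatenate(
        [np.repeat(F_prev[:, None, :], M, axis=1),
         np.repeat(F_curr[None, :, :], N, axis=0)],
        axis=-1)
    hidden = np.maximum(pairs @ W1 + b1, 0.0)     # ReLU layer
    return (hidden @ W2 + b2).squeeze(-1)          # (N, M) raw scores

def association_probs(scores, no_match_score=0.0):
    """Append a 'no match' column/row and normalize.

    Row-wise softmax gives, for each previous object, a distribution over
    current objects plus 'disappeared'; column-wise softmax gives, for each
    current object, a distribution over previous objects plus 'appeared'.
    """
    N, M = scores.shape
    fwd = np.concatenate([scores, np.full((N, 1), no_match_score)], axis=1)
    bwd = np.concatenate([scores, np.full((1, M), no_match_score)], axis=0)
    row_sm = np.exp(fwd - fwd.max(axis=1, keepdims=True))
    row_sm /= row_sm.sum(axis=1, keepdims=True)    # (N, M+1)
    col_sm = np.exp(bwd - bwd.max(axis=0, keepdims=True))
    col_sm /= col_sm.sum(axis=0, keepdims=True)    # (N+1, M)
    return row_sm, col_sm

if __name__ == "__main__":
    d = 16
    F_prev, F_curr = rng.normal(size=(3, d)), rng.normal(size=(4, d))
    # Random placeholder weights; in DAN these are learned end-to-end.
    W1, b1 = rng.normal(size=(2 * d, 32)), np.zeros(32)
    W2, b2 = rng.normal(size=(32, 1)), np.zeros(1)
    scores = score_pairs(F_prev, F_curr, W1, b1, W2, b2)
    row_probs, col_probs = association_probs(scores)
    # Greedy read-out: each earlier object keeps its most likely match
    # (index M means "disappeared").
    print(row_probs.argmax(axis=1))

Running the script prints, for each object in the earlier frame, the index of its most likely match in the current frame, with the last index denoting "no match". The paper's on-line tracker goes further than this single-pair sketch by accumulating such affinities over several past frames before assignment, which is what the abstract means by associating objects "deep into the previous frames".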