Keywords: convolutional neural network; computer science; artificial intelligence; object detection; computer vision; pattern recognition (psychology); cellular neural network; object (grammar); artificial neural network
Authors
Kai Kang, Hongsheng Li, Junjie Yan, Xingyu Zeng, Bin Yang, Tong Xiao, Cong Zhang, Zhe Wang, Ruohui Wang, Wei Wang, Wanli Ouyang
Source
Journal: IEEE Transactions on Circuits and Systems for Video Technology
[Institute of Electrical and Electronics Engineers]
Date: 2017-08-07
Volume/Issue: 28 (10): 2896-2907
Citations: 512
Identifier
DOI: 10.1109/tcsvt.2017.2736553
Abstract
The state-of-the-art performance for object detection has been significantly improved over the past two years. Besides the introduction of powerful deep neural networks such as GoogleNet and VGG, novel object detection frameworks such as R-CNN and its successors, Fast R-CNN and Faster R-CNN, play an essential role in improving the state of the art. Despite their effectiveness on still images, those frameworks are not specifically designed for object detection from videos, and the temporal and contextual information in videos is not fully investigated and utilized. In this work, we propose a deep learning framework that incorporates temporal and contextual information from tubelets obtained in videos, which dramatically improves the baseline performance of existing still-image detection frameworks when they are applied to videos. It is called T-CNN, i.e., tubelets with convolutional neural networks. The proposed framework won the recently introduced object-detection-from-video (VID) task with provided data in the ImageNet Large-Scale Visual Recognition Challenge 2015 (ILSVRC2015).
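To illustrate the kind of temporal context the abstract refers to, the sketch below shows one generic way to exploit a tubelet (a track of per-frame detections): smoothing the confidence scores along the track so that a detector that flickers on individual frames yields more stable video detections. This is a minimal hypothetical example, not the authors' exact T-CNN re-scoring algorithm; the function name and the moving-average choice are assumptions for illustration.

```python
def smooth_tubelet_scores(scores, window=3):
    """Moving-average smoothing of per-frame detection confidences
    along one tubelet. A still-image detector scores each frame
    independently; averaging over a small temporal window propagates
    context from neighboring frames and suppresses flicker.
    NOTE: illustrative only, not the published T-CNN procedure."""
    half = window // 2
    smoothed = []
    for i in range(len(scores)):
        lo = max(0, i - half)              # clamp window at tubelet start
        hi = min(len(scores), i + half + 1)  # clamp window at tubelet end
        smoothed.append(sum(scores[lo:hi]) / (hi - lo))
    return smoothed

# A flickering per-frame detector: frames 2 and 5 nearly miss the object.
raw = [0.9, 0.2, 0.85, 0.8, 0.1]
print(smooth_tubelet_scores(raw))
```

After smoothing, the low-scoring frames inherit confidence from their temporal neighbors, which is the basic intuition behind propagating detection evidence along tubelets.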