Computer science
Artificial intelligence
Tracking (education)
Text detection
Text recognition
Frame (networking)
Feature extraction
Pattern recognition (psychology)
Computer vision
Information retrieval
Image (mathematics)
Psychology
Pedagogy
Telecommunications
Authors
Shu Tian,Xu-Cheng Yin,Ya Su,Hongwei Hao
Identifier
DOI:10.1109/tpami.2017.2692763
Abstract
Video text extraction plays an important role in multimedia understanding and retrieval. Most previous research has been conducted within individual frames. A few recent methods pay attention to text tracking across multiple frames, but they do not effectively mine the relations among text detection, tracking, and recognition. In this paper, we propose a generic Bayesian framework for Tracking-based Text Detection And Recognition (TDAR) of embedded captions in web videos, which is composed of three major components: text tracking, tracking-based text detection, and tracking-based text recognition. In this unified framework, text tracking is first performed by tracking-by-detection. Tracking trajectories are then revised and refined with detection or recognition results. Finally, text detection or recognition is improved with multi-frame integration. Moreover, a challenging video text (embedded caption) database (USTB-VidTEXT) is constructed and made publicly available. A variety of experiments on this dataset verify that the proposed approach substantially improves the performance of text detection and recognition from web videos.
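The abstract names two ideas that can be illustrated concretely: linking per-frame text detections into trajectories (tracking-by-detection) and improving the final transcript via multi-frame integration. The sketch below is a toy illustration of those two ideas only, not the paper's Bayesian TDAR framework; all function names, the IoU threshold, and the use of majority voting for integration are assumptions for the sake of the example.

```python
# Toy sketch of two ideas from the abstract (NOT the paper's actual method):
# (1) tracking-by-detection via greedy IoU linking of per-frame boxes,
# (2) multi-frame integration via majority voting over noisy OCR outputs.
from collections import Counter

def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def link_detections(frames, iou_thr=0.5):
    """Greedy tracking-by-detection: attach each detection to the
    trajectory whose last box overlaps it most (IoU above threshold);
    otherwise start a new trajectory."""
    trajectories = []  # each trajectory: list of (box, text) tuples
    for dets in frames:
        for box, text in dets:
            best, best_iou = None, iou_thr
            for traj in trajectories:
                score = iou(traj[-1][0], box)
                if score > best_iou:
                    best, best_iou = traj, score
            if best is not None:
                best.append((box, text))
            else:
                trajectories.append([(box, text)])
    return trajectories

def integrate_text(traj):
    """Multi-frame integration: majority vote over per-frame transcripts."""
    return Counter(text for _, text in traj).most_common(1)[0][0]

# A caption recognized in three frames, once with an OCR error ("0" for "O"):
frames = [
    [((10, 10, 100, 30), "HELLO")],
    [((12, 11, 101, 31), "HELL0")],
    [((11, 10, 100, 30), "HELLO")],
]
trajs = link_detections(frames)
print(integrate_text(trajs[0]))  # prints "HELLO"
```

Voting across a trajectory lets stable frames outvote occasional per-frame recognition errors, which is the intuition behind the multi-frame integration step described in the abstract.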