Keywords
Localization; Computer Science; Minimum Bounding Box; Annotation; Text Spotting; Artificial Intelligence; Sequence; Transformer; Point (geometry); Natural Language Processing; Encoding; Pattern Recognition; Image; Geometry
Authors
Dezhi Peng,Xinyu Wang,Yuliang Liu,Jiaxin Zhang,Mingxin Huang,Songxuan Lai,Jing Li,Shenggao Zhu,Dahua Lin,Chunhua Shen,Xiang Bai,Lianwen Jin
Identifier
DOI: 10.1145/3503161.3547942
Abstract
Existing scene text spotting (i.e., end-to-end text detection and recognition) methods rely on costly bounding-box annotations (e.g., text-line-, word-, or character-level bounding boxes). For the first time, we demonstrate that scene text spotting models can be trained with an extremely low-cost annotation: a single point per text instance. We propose an end-to-end method that casts scene text spotting as a sequence prediction task. Given an input image, we formulate the desired detection and recognition results as a sequence of discrete tokens and use an auto-regressive Transformer to predict that sequence. The proposed method is simple yet effective, achieving state-of-the-art results on widely used benchmarks. Most significantly, we show that performance is not very sensitive to the position of the point annotation, meaning that points are far easier to annotate, or even to generate automatically, than bounding boxes, which require precise positions. We believe this pioneering attempt indicates a significant opportunity for scene text spotting applications at a much larger scale than previously possible. The code is available at https://github.com/shannanyinxiang/SPTS.
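The abstract describes serializing each text instance as discrete tokens (a single point plus its transcription) for an auto-regressive Transformer to predict. The following is a minimal sketch of that kind of sequence construction; the bin count, vocabulary layout, special tokens, and the absence of fixed-length padding are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of SPTS-style token-sequence construction: each text instance is
# reduced to one point plus its transcription and serialized as discrete
# tokens. All numeric choices below are assumptions for illustration.

NUM_BINS = 1000                      # coordinates quantized into 1000 bins (assumed)
CHARS = "abcdefghijklmnopqrstuvwxyz0123456789"
CHAR_OFFSET = NUM_BINS               # character tokens placed after coordinate tokens
EOS = CHAR_OFFSET + len(CHARS)       # end-of-sequence token

def quantize(v, size):
    """Map a pixel coordinate in [0, size) to a discrete bin in [0, NUM_BINS)."""
    return min(NUM_BINS - 1, int(v / size * NUM_BINS))

def build_sequence(instances, img_w, img_h):
    """Serialize [(x, y, text), ...] into one flat token sequence ending in EOS."""
    seq = []
    for x, y, text in instances:
        seq += [quantize(x, img_w), quantize(y, img_h)]   # the single-point annotation
        seq += [CHAR_OFFSET + CHARS.index(c) for c in text.lower()]
    seq.append(EOS)
    return seq

tokens = build_sequence([(320, 240, "exit")], img_w=640, img_h=480)
print(tokens)  # → [500, 500, 1004, 1023, 1008, 1019, 1036]
```

At inference, the Transformer would emit such tokens one at a time until EOS, and the inverse mapping recovers each point and transcription.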