Keywords
Computer science
Discriminator
Normalization
End-to-end principle
Artificial intelligence
Deletion
Segmentation
Pattern recognition
Computer vision
Programming language
Telecommunications
Detector
Authors
Chongyu Liu, Yuliang Liu, Lianwen Jin, Shuaitao Zhang, Canjie Luo, Yongpan Wang
Source
Journal: IEEE Transactions on Image Processing
[Institute of Electrical and Electronics Engineers]
Date: 2020-01-01
Volume/pages: 29: 8760-8775
Citations: 38
Identifier
DOI: 10.1109/tip.2020.3018859
Abstract
Scene text removal has attracted increasing research interest owing to its valuable applications in privacy protection, camera-based virtual reality translation, and image editing. However, existing approaches fall short in real applications, mainly because they were evaluated on synthetic or unrepresentative datasets. To fill this gap and facilitate this research direction, this article proposes a real-world dataset called SCUT-EnsText that consists of 3,562 diverse images selected from public scene text reading benchmarks; each image is scrupulously annotated to provide visually plausible erasure targets. With SCUT-EnsText, we design a novel GAN-based model termed EraseNet that can automatically remove text from natural images. The model is a two-stage network consisting of a coarse-erasure sub-network and a refinement sub-network. The refinement sub-network improves the feature representation and refines the coarse outputs to enhance removal performance. Additionally, EraseNet contains a segmentation head for text perception and a local-global SN-Patch-GAN with spectral normalization (SN) on both the generator and discriminator, which maintains training stability and the congruity of the erased regions. Extensive experiments are conducted on both the previous public dataset and the brand-new SCUT-EnsText. Our EraseNet significantly outperforms existing state-of-the-art methods on all metrics, with remarkably higher-quality results. The dataset and code will be made available at https://github.com/HCIILAB/SCUT-EnsText .
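The abstract describes a coarse-to-refine generator with an auxiliary segmentation head, adversarially trained against a spectrally normalized patch discriminator. The following is a minimal PyTorch sketch of that overall wiring, not the authors' actual EraseNet: all layer sizes, channel counts, and module names here are illustrative assumptions, and the real model (see the linked repository) is substantially deeper and uses a local-global discriminator.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm


class CoarseNet(nn.Module):
    """Stage 1: coarse text erasure plus a text-mask segmentation head."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc = nn.Sequential(  # shared encoder (64x64 -> 16x16 here)
            nn.Conv2d(3, ch, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.ReLU(inplace=True),
        )
        self.dec = nn.Sequential(  # decodes a coarse erased image
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 3, 4, 2, 1), nn.Tanh(),
        )
        self.mask_head = nn.Sequential(  # per-pixel text probability
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 1, 4, 2, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        f = self.enc(x)
        return self.dec(f), self.mask_head(f)


class RefineNet(nn.Module):
    """Stage 2: refines the coarse output conditioned on the input image."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, ch, 3, 1, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, 1, 1), nn.Tanh(),
        )

    def forward(self, x, coarse):
        return self.net(torch.cat([x, coarse], dim=1))


class PatchDiscriminator(nn.Module):
    """Patch-level critic with spectral normalization on every conv,
    in the spirit of SN-PatchGAN."""
    def __init__(self, in_ch=3, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            spectral_norm(nn.Conv2d(in_ch, ch, 4, 2, 1)),
            nn.LeakyReLU(0.2, inplace=True),
            spectral_norm(nn.Conv2d(ch, ch * 2, 4, 2, 1)),
            nn.LeakyReLU(0.2, inplace=True),
            spectral_norm(nn.Conv2d(ch * 2, 1, 4, 1, 1)),  # patch score map
        )

    def forward(self, x):
        return self.net(x)
```

In use, the input image passes through `CoarseNet` (yielding a coarse erasure and a predicted text mask), `RefineNet` polishes the coarse result, and `PatchDiscriminator` scores local patches of the refined output during adversarial training.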