Computer science
Benchmark (surveying)
Discriminant
Artificial intelligence
Segmentation
Parsing
Block (permutation group theory)
Pixel
Image segmentation
Set (abstract data type)
Pattern recognition (psychology)
Semantics (computer science)
Test rig
Object detection
Computer vision
Mathematics
Geometry
Programming language
Geography
Geodesy
Authors
Zilong Huang, Xinggang Wang, Yunchao Wei, Lichao Huang, Humphrey Shi, Wenyu Liu, Thomas S. Huang
Identifier
DOI: 10.1109/tpami.2020.3007032
Abstract
Contextual information is vital in visual understanding problems such as semantic segmentation and object detection. We propose a criss-cross network (CCNet) for obtaining full-image contextual information in a very effective and efficient way. Concretely, for each pixel, a novel criss-cross attention module harvests the contextual information of all the pixels on its criss-cross path. Through a further recurrent operation, each pixel can finally capture full-image dependencies. In addition, a category consistent loss is proposed to enforce the criss-cross attention module to produce more discriminative features. Overall, CCNet has the following merits: 1) GPU memory friendliness: compared with the non-local block, the proposed recurrent criss-cross attention module requires 11× less GPU memory. 2) High computational efficiency: the recurrent criss-cross attention reduces the FLOPs of the non-local block by about 85 percent. 3) State-of-the-art performance: we conduct extensive experiments on the semantic segmentation benchmarks Cityscapes and ADE20K, the human parsing benchmark LIP, the instance segmentation benchmark COCO, and the video segmentation benchmark CamVid. In particular, CCNet achieves mIoU scores of 81.9, 45.76, and 55.47 percent on the Cityscapes test set, the ADE20K validation set, and the LIP validation set, respectively, which are new state-of-the-art results. The source code is available at https://github.com/speedinghzl/CCNet.
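To make the mechanism in the abstract concrete, below is a minimal PyTorch sketch of criss-cross attention: each pixel attends only to the pixels in its own row and column (a joint softmax over h + w candidates), and applying the module twice with shared weights lets information propagate across the whole image. The einsum-based formulation and all names here are illustrative assumptions, not the authors' reference code; see the linked repository for the official implementation.

```python
# Illustrative sketch of criss-cross attention, assuming PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrissCrossAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, 1)
        self.key = nn.Conv2d(channels, channels // reduction, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.query(x), self.key(x), self.value(x)

        # Affinities along each pixel's column (h keys) and row (w keys).
        e_col = torch.einsum('bchw,bcjw->bhjw', q, k)  # j indexes key rows
        e_row = torch.einsum('bchw,bchj->bhwj', q, k)  # j indexes key columns

        # The pixel itself lies on both paths; mask it out of the column
        # term so it is not counted twice in the softmax.
        eye = torch.eye(h, dtype=torch.bool, device=x.device).view(1, h, h, 1)
        e_col = e_col.masked_fill(eye, float('-inf'))

        # Joint softmax over the h + w candidates on the criss-cross path.
        attn = F.softmax(
            torch.cat([e_col.permute(0, 1, 3, 2), e_row], dim=-1), dim=-1)
        a_col, a_row = attn[..., :h], attn[..., h:]

        # Aggregate values along the column and row, then add the residual.
        out = (torch.einsum('bhwj,bcjw->bchw', a_col, v)
               + torch.einsum('bhwj,bchj->bchw', a_row, v))
        return self.gamma * out + x

# Recurrence R = 2 with shared weights: after the first pass a pixel sees its
# row and column; after the second, it has received information from every
# position in the image, which is the "full-image dependencies" claim above.
cca = CrissCrossAttention(64)
feat = torch.randn(2, 64, 24, 24)
for _ in range(2):
    feat = cca(feat)
print(feat.shape)  # torch.Size([2, 64, 24, 24])
```

Because each attention map is only h + w - 1 wide instead of h × w, the memory and FLOP footprint grows far more slowly with resolution than a non-local block, which is the source of the savings quoted in the abstract.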