Keywords
Segmentation
Computer science
Boundary (topology)
Context (archaeology)
Artificial intelligence
Consistency (knowledge bases)
Object (grammar)
Pixel
Focus (optics)
Computer vision
Pattern recognition (psychology)
Mathematics
Paleontology
Mathematical analysis
Physics
Optics
Biology
Authors
Xiaoyang Xiao, Yuqian Zhao, Fan Zhang, Biao Luo, Lingli Yu, Baifan Chen, Chunhua Yang
Identifier
DOI: 10.1016/j.neunet.2022.10.034
Abstract
Semantic segmentation is a critical component of the street-scene understanding task in the autonomous driving field. Existing methods either focus on constructing an object's inner consistency by aggregating global or multi-scale context information, or simply combine semantic features with boundary features to refine object details. Despite their impressive results, most of them neglect the long-range dependencies between object interiors and boundaries. To this end, we present a Boundary Aware Network (BASeg) for semantic segmentation that exploits boundary information as a significant cue to guide context aggregation. Specifically, a Boundary Refined Module (BRM) is proposed in BASeg to refine coarse low-level boundary features from a Canny detector with high-level multi-scale semantic features from the backbone. Building on this, a Context Aggregation Module (CAM) is further proposed to capture long-range dependencies between boundary regions and object interior pixels, achieving mutual gains and enhancing intra-class consistency. Moreover, our method can be plugged into other CNN backbones for higher performance at a minor computational cost, and obtains 45.72%, 81.2%, and 77.3% mIoU on the ADE20K, Cityscapes, and CamVid datasets, respectively. Extensive experiments demonstrate the effectiveness of our method compared with state-of-the-art ResNet101-based segmentation methods. Our code is available at https://github.com/Lature-Yang/BASeg.
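The abstract describes a two-stage design: boundary refinement (BRM) followed by boundary-guided context aggregation (CAM). Below is a minimal, hypothetical PyTorch sketch of how such modules might be wired together. The class names `BRMSketch` and `CAMSketch`, the channel sizes, the fusion layout, and the attention formulation are illustrative assumptions only, not the authors' implementation; refer to the linked repository for the official code.

```python
# Hypothetical sketch only: module names follow the abstract (BRM, CAM), but
# every internal detail (channels, fusion, attention form) is an assumption,
# not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BRMSketch(nn.Module):
    """Boundary Refined Module (sketch): fuse a coarse Canny edge map with
    high-level semantic features to predict a refined boundary map."""

    def __init__(self, sem_channels: int, mid_channels: int = 64):
        super().__init__()
        self.reduce = nn.Conv2d(sem_channels, mid_channels, kernel_size=1)
        self.fuse = nn.Sequential(
            nn.Conv2d(mid_channels + 1, mid_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(mid_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, 1, kernel_size=1),
        )

    def forward(self, sem_feat, canny_edge):
        # Upsample semantic features to the edge-map resolution, then fuse.
        sem = F.interpolate(self.reduce(sem_feat), size=canny_edge.shape[-2:],
                            mode="bilinear", align_corners=False)
        return torch.sigmoid(self.fuse(torch.cat([sem, canny_edge], dim=1)))


class CAMSketch(nn.Module):
    """Context Aggregation Module (sketch): every pixel queries
    boundary-weighted features, so interior pixels can aggregate
    long-range context from boundary regions."""

    def __init__(self, channels: int):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.k = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.v = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, feat, boundary):
        b, c, h, w = feat.shape
        boundary = F.interpolate(boundary, size=(h, w), mode="bilinear",
                                 align_corners=False)
        q = self.q(feat).flatten(2).transpose(1, 2)              # B x HW x C'
        k = self.k(feat * boundary).flatten(2)                   # B x C' x HW
        v = self.v(feat * boundary).flatten(2).transpose(1, 2)   # B x HW x C
        attn = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)
        ctx = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return feat + self.gamma * ctx                           # residual fusion


# Toy usage with made-up shapes (the coarse edge map would normally come from
# cv2.Canny on the input image, normalized to [0, 1]).
feat = torch.randn(1, 256, 32, 64)   # backbone features
edge = torch.rand(1, 1, 128, 256)    # coarse edge map at input resolution
boundary = BRMSketch(256)(feat, edge)
out = CAMSketch(256)(feat, boundary)
print(boundary.shape, out.shape)     # (1, 1, 128, 256) and (1, 256, 32, 64)
```

The sketch keeps the abstract's division of labor: BRM turns a cheap low-level edge map into a refined boundary prior, and CAM uses that prior to bias attention toward boundary regions so interior pixels share long-range context with them, which is one plausible way to realize the described mutual gain.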