Keywords: Computer science; Segmentation; Artificial intelligence; Embedding; Pixel; Parameterized complexity; Encoder; Exploitation; Computer vision; Image segmentation; Machine learning; Pattern recognition (psychology); Algorithm; Computer security; Operating system
DOI: 10.1109/icrcv59470.2023.10329050
Abstract
Semantic segmentation, one of the fundamental topics in computer vision, aims to identify the class of each pixel in an image and has a wide range of applications in many fields. Traditional semantic segmentation models for dense prediction can be reduced to learning a single prototype of weight/query vectors per class, which ignores the rich intra-class diversity; being fully parameterized, they also overlook the representational power of the prototype and do not fully exploit the model's segmentation capability. Hierarchical Transformers have attracted considerable interest in the vision domain due to their strong performance and ease of integration. These models usually employ local attention mechanisms, which effectively reduce the quadratic complexity of self-attention but lose the ability to capture long-range dependencies and the global receptive field. In this study, we propose DiPFormer, which introduces dilated neighborhood attention in the encoder: an extension of neighborhood attention that captures more global dependencies and exponentially expands the receptive field without increasing the computational cost. In the decoder, each class is treated as a set of prototypes that directly shape the pixel embedding space, and prediction is performed by optimizing pixel-to-prototype distances. Evaluation on the publicly available Cityscapes dataset shows that the method achieves 83.89% mIoU, an improvement of 1.59 percentage points over SegFormer, demonstrating that it is an effective improvement over the baseline model.
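Two of the mechanisms described in the abstract lend themselves to short illustrations. First, dilated neighborhood attention: each pixel attends only to a k × k grid of neighbors sampled with a dilation factor, so the receptive field grows with the dilation at no extra cost. The sketch below is a minimal single-head PyTorch version; it omits the learned query/key/value projections, multi-head splitting, and proper boundary handling (zero-padded positions still receive attention weight here), so it illustrates the attention pattern rather than reproducing the paper's encoder.

```python
import torch
import torch.nn.functional as F

def dilated_neighborhood_attention(x, kernel=7, dilation=2):
    """Single-head dilated neighborhood attention over a (B, D, H, W) feature map.

    Each pixel attends to a kernel x kernel grid of neighbors spaced
    `dilation` pixels apart, so the receptive field spans
    dilation * (kernel - 1) + 1 pixels per side at the same cost as
    un-dilated neighborhood attention.
    """
    B, D, H, W = x.shape
    pad = dilation * (kernel // 2)
    # Gather the dilated neighborhood of every pixel: (B, D * k * k, H * W).
    neigh = F.unfold(x, kernel_size=kernel, dilation=dilation, padding=pad)
    neigh = neigh.view(B, D, kernel * kernel, H * W)
    q = x.view(B, D, 1, H * W)
    # Scaled dot-product attention restricted to the neighborhood.
    attn = (q * neigh).sum(dim=1, keepdim=True) / D ** 0.5  # (B, 1, k*k, H*W)
    attn = attn.softmax(dim=2)
    out = (attn * neigh).sum(dim=2)  # (B, D, H*W)
    return out.view(B, D, H, W)
```

Second, the prototype-based decoder: with K prototypes per class living in the same space as the pixel embeddings, a pixel's class score can be taken as its similarity to the nearest prototype of that class. The following is a sketch under assumed design choices (cosine similarity as the distance measure, max-over-prototypes aggregation, K = 10); the function name and shapes are illustrative, not the paper's exact formulation.

```python
def prototype_logits(pixel_emb, prototypes):
    """pixel_emb: (B, D, H, W); prototypes: (C, K, D) -> logits (B, C, H, W).

    Scores each pixel against every class by its cosine similarity to the
    nearest of that class's K prototypes; training would then push pixels
    toward the prototypes of their ground-truth class.
    """
    feats = F.normalize(pixel_emb, dim=1)
    protos = F.normalize(prototypes, dim=-1)
    sim = torch.einsum('bdhw,ckd->bckhw', feats, protos)  # pixel-prototype similarity
    return sim.max(dim=2).values  # nearest prototype per class

# Illustrative usage with 19 Cityscapes classes and 10 prototypes per class.
emb = torch.randn(2, 64, 128, 128)
protos = torch.randn(19, 10, 64)
pred = prototype_logits(emb, protos).argmax(dim=1)  # (2, 128, 128) class map
```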