BiSeNet V2: Bilateral Network with Guided Aggregation for Real-Time Semantic Segmentation

Authors
Changqian Yu, Changxin Gao, Jingbo Wang, Gang Yu, Chunhua Shen, Nong Sang
Source
Journal: International Journal of Computer Vision [Springer Science+Business Media]
Volume/Issue: 129(11): 3051-3068 | Cited by: 1047
Identifier
DOI:10.1007/s11263-021-01515-2
Abstract

Low-level details and high-level semantics are both essential to the semantic segmentation task. However, to speed up model inference, current approaches almost always sacrifice the low-level details, leading to a considerable decrease in accuracy. We propose to treat these spatial details and categorical semantics separately to achieve high accuracy and high efficiency for real-time semantic segmentation. For this purpose, we propose an efficient and effective architecture with a good trade-off between speed and accuracy, termed Bilateral Segmentation Network (BiSeNet V2). This architecture involves the following: (i) a detail branch, with wide channels and shallow layers, to capture low-level details and generate a high-resolution feature representation; (ii) a semantics branch, with narrow channels and deep layers, to obtain high-level semantic context. Due to the reduction in channel capacity and the use of a fast-downsampling strategy, the semantics branch is lightweight and can be implemented by any efficient model. We design a guided aggregation layer to enhance mutual connections and fuse both types of feature representation. Moreover, a booster training strategy is designed to improve segmentation performance without any extra inference cost. Extensive quantitative and qualitative evaluations demonstrate that the proposed architecture performs favorably against several state-of-the-art real-time semantic segmentation approaches. Specifically, for a 2048×1024 input, we achieve 72.6% Mean IoU on the Cityscapes test set at 156 FPS on one NVIDIA GeForce GTX 1080 Ti card, which is significantly faster than existing methods while achieving better segmentation accuracy. The code and trained models are available online at https://git.io/BiSeNet.
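The abstract describes the architecture only at a high level. Below is a minimal PyTorch sketch of the two-branch design and the guided aggregation step. All channel widths, strides, layer counts, class names, and the sigmoid-gated fusion are simplifying assumptions made for illustration, not the exact blocks of the paper (the authors' reference implementation is at https://git.io/BiSeNet); the booster training heads, which affect training only, are omitted.

```python
# Minimal illustrative sketch of the bilateral design described in the abstract.
# Channel widths, strides, layer counts, and the sigmoid-gated fusion below are
# simplifying assumptions, not the exact blocks of BiSeNet V2.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_bn_relu(in_ch, out_ch, k=3, s=1):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, k, s, k // 2, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class DetailBranch(nn.Module):
    """Wide channels, shallow layers: keeps a detail-rich 1/8-resolution map."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            conv_bn_relu(3, 64, s=2), conv_bn_relu(64, 64),
            conv_bn_relu(64, 64, s=2), conv_bn_relu(64, 64),
            conv_bn_relu(64, 128, s=2), conv_bn_relu(128, 128),
        )

    def forward(self, x):
        return self.layers(x)  # stride 8, 128 channels


class SemanticBranch(nn.Module):
    """Narrow channels, deeper stack with fast downsampling for context."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            conv_bn_relu(3, 16, s=4),                        # fast downsampling
            conv_bn_relu(16, 32, s=2), conv_bn_relu(32, 32),
            conv_bn_relu(32, 64, s=2), conv_bn_relu(64, 64),
            conv_bn_relu(64, 128, s=2), conv_bn_relu(128, 128),
        )

    def forward(self, x):
        return self.layers(x)  # stride 32, 128 channels


class GuidedAggregation(nn.Module):
    """Fuse the branches: each one gates the other via sigmoid attention
    (a simplified stand-in for the paper's bilateral guided aggregation)."""
    def __init__(self, ch=128):
        super().__init__()
        self.detail_gate = nn.Sequential(
            nn.Conv2d(ch, ch, 3, 1, 1, bias=False), nn.BatchNorm2d(ch))
        self.semantic_gate = nn.Sequential(
            nn.Conv2d(ch, ch, 3, 1, 1, bias=False), nn.BatchNorm2d(ch))
        self.out = conv_bn_relu(ch, ch)

    def forward(self, detail, semantic):
        semantic_up = F.interpolate(semantic, size=detail.shape[2:],
                                    mode="bilinear", align_corners=False)
        fused = detail * torch.sigmoid(self.semantic_gate(semantic_up)) \
              + semantic_up * torch.sigmoid(self.detail_gate(detail))
        return self.out(fused)


class BiSeNetV2Sketch(nn.Module):
    def __init__(self, num_classes=19):
        super().__init__()
        self.detail = DetailBranch()
        self.semantic = SemanticBranch()
        self.bga = GuidedAggregation(128)
        self.head = nn.Conv2d(128, num_classes, 1)

    def forward(self, x):
        fused = self.bga(self.detail(x), self.semantic(x))
        logits = self.head(fused)
        # Upsample the stride-8 logits back to the input resolution.
        return F.interpolate(logits, size=x.shape[2:],
                             mode="bilinear", align_corners=False)


if __name__ == "__main__":
    model = BiSeNetV2Sketch(num_classes=19).eval()  # 19 = Cityscapes classes
    with torch.no_grad():
        out = model(torch.randn(1, 3, 512, 1024))
    print(out.shape)  # torch.Size([1, 19, 512, 1024])
```

The design choice the sketch tries to convey: the detail branch keeps resolution and width but stays shallow, the semantics branch downsamples aggressively with few channels so its depth costs little, and the aggregation layer lets the two complementary representations guide each other before a single lightweight prediction head.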