Authors
Jun Fu, Jing Liu, Haijie Tian, Yong Li, Yongjun Bao, Zhiwei Fang, Hanqing Lu
Identifier
DOI:10.1109/cvpr.2019.00326
Abstract
In this paper, we address the scene segmentation task by capturing rich contextual dependencies based on the self-attention mechanism. Unlike previous works that capture context by multi-scale feature fusion, we propose a Dual Attention Network (DANet) to adaptively integrate local features with their global dependencies. Specifically, we append two types of attention modules on top of a traditional dilated FCN, which model the semantic interdependencies in the spatial and channel dimensions respectively. The position attention module selectively aggregates the features at each position by a weighted sum of the features at all positions; similar features are thereby related to each other regardless of their distance. Meanwhile, the channel attention module selectively emphasizes interdependent channel maps by integrating associated features among all channel maps. We sum the outputs of the two attention modules to further improve the feature representation, which contributes to more precise segmentation results. We achieve new state-of-the-art segmentation performance on three challenging scene segmentation datasets: Cityscapes, PASCAL Context, and COCO Stuff. In particular, a Mean IoU score of 81.5% on the Cityscapes test set is achieved without using coarse data.
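The core of the two modules described above can be sketched with plain NumPy. This is a minimal illustration, not the paper's implementation: the learned 1x1-convolution query/key/value projections and the learnable scaled residual connections of DANet are omitted here (identity projections are assumed), so only the attention aggregation itself is shown.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def position_attention(x):
    # x: feature map of shape (C, H, W).
    # Each spatial position is updated by a weighted sum of the
    # features at all positions (weights from pairwise similarity).
    C, H, W = x.shape
    feats = x.reshape(C, H * W)          # (C, N), N = H*W
    energy = feats.T @ feats             # (N, N) position-to-position similarity
    attn = softmax(energy, axis=-1)      # each row: attention over all positions
    out = feats @ attn.T                 # weighted sum over all positions
    return out.reshape(C, H, W)

def channel_attention(x):
    # x: feature map of shape (C, H, W).
    # Each channel map is re-expressed as a weighted combination of
    # all channel maps, emphasizing interdependent channels.
    C, H, W = x.shape
    feats = x.reshape(C, H * W)          # (C, N)
    energy = feats @ feats.T             # (C, C) channel-to-channel similarity
    attn = softmax(energy, axis=-1)
    out = attn @ feats                   # aggregate associated channel maps
    return out.reshape(C, H, W)

# The two module outputs are summed, as in the abstract.
x = np.random.rand(4, 3, 3).astype(np.float32)
fused = position_attention(x) + channel_attention(x)
print(fused.shape)
```

Note that the position attention matrix is N x N (N = H*W), so its cost grows quadratically with spatial resolution, while the channel attention matrix is only C x C; this asymmetry is why the two modules are kept separate rather than merged into one attention map.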