Salient
Transformer
Computer science
Artificial intelligence
Engineering
Electrical engineering
Voltage
Authors
Sucheng Ren, Nanxuan Zhao, Qiang Wen, Guoqiang Han, Shengfeng He
Source
Journal: IEEE Transactions on Emerging Topics in Computational Intelligence
[Institute of Electrical and Electronics Engineers]
Date: 2024-04-02
Volume/Issue: 8 (4): 2870-2879
Citations: 6
Identifier
DOI: 10.1109/tetci.2024.3380442
Abstract
The fully convolutional network (FCN) has dominated salient object detection for a long period. However, the locality of CNNs requires the model to be deep enough to obtain a global receptive field, and such a deep model often loses local details. In this paper, we introduce a new attention-based encoder, the vision transformer, into salient object detection to keep the representations global from shallow to deep layers. With a global view even in very shallow layers, the transformer encoder preserves more local representations, which helps recover spatial details in the final saliency maps. Besides, since each layer captures a global view of its previous layer, adjacent layers implicitly maximize the representation differences and minimize redundant features, so every output feature of the transformer layers contributes uniquely to the final prediction. To decode the transformer features, we propose a simple yet effective deeply-transformed decoder. The decoder densely decodes and upsamples the transformer features, generating the final saliency map with less noise injection. Experimental results demonstrate that our method significantly outperforms other FCN-based and transformer-based methods on five benchmarks, with an average improvement of 12.17% in terms of Mean Absolute Error (MAE).
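
As a rough illustration of the architecture the abstract describes, the PyTorch sketch below builds a small ViT-style encoder, keeps every layer's globally-attended features, and densely fuses and upsamples them into a one-channel saliency map; it also computes the MAE metric used in the evaluation. All module names, dimensions, and the fusion scheme (TransformerSaliencyNet, patch size 16, embedding width 384, a single fuse-and-upsample head) are illustrative assumptions, not the authors' released implementation; in particular, the deeply-transformed decoder is simplified here to one fusion step.

    # Minimal sketch, assuming a ViT-style encoder whose per-layer features are
    # densely decoded into a saliency map. Not the paper's released code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class TransformerSaliencyNet(nn.Module):
        def __init__(self, img_size=224, patch=16, dim=384, depth=4, heads=6):
            super().__init__()
            self.patch = patch
            self.grid = img_size // patch                       # tokens per side
            self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
            self.pos = nn.Parameter(torch.zeros(1, self.grid * self.grid, dim))
            # Keep every layer's output so the decoder sees shallow *and* deep
            # globally-attended features, as the abstract describes.
            self.layers = nn.ModuleList([
                nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
                for _ in range(depth)
            ])
            # Dense decoding (simplified): fuse all layer features, then predict
            # a one-channel saliency logit map.
            self.fuse = nn.Conv2d(dim * depth, dim, 1)
            self.head = nn.Sequential(
                nn.Conv2d(dim, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 1, 1),
            )

        def forward(self, x):
            b = x.shape[0]
            tokens = self.embed(x).flatten(2).transpose(1, 2) + self.pos  # (B, N, C)
            feats = []
            for layer in self.layers:
                tokens = layer(tokens)                  # global attention at every depth
                feats.append(tokens.transpose(1, 2).reshape(b, -1, self.grid, self.grid))
            fused = self.fuse(torch.cat(feats, dim=1))
            logits = self.head(fused)
            # Upsample back to the input resolution for the final saliency map.
            return F.interpolate(logits, scale_factor=self.patch,
                                 mode="bilinear", align_corners=False)


    def mae(pred_logits, gt):
        """Mean Absolute Error between a predicted saliency map and ground truth in [0, 1]."""
        return (pred_logits.sigmoid() - gt).abs().mean().item()


    if __name__ == "__main__":
        net = TransformerSaliencyNet()
        img = torch.randn(1, 3, 224, 224)
        gt = torch.rand(1, 1, 224, 224)
        sal = net(img)                                  # (1, 1, 224, 224)
        print(sal.shape, mae(sal, gt))

The design point this sketch tries to mirror is that every encoder stage is already global, so the decoder can draw on shallow features for spatial detail instead of relying only on the deepest layer.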