Computer science
Artificial intelligence
Perception
Benchmark (surveying)
Ground truth
Coding (set theory)
Generative grammar
Function (biology)
Adversarial system
Pattern recognition (psychology)
Image (mathematics)
Exploit
Grayscale
Generative adversarial network
Machine learning
Set (abstract data type)
Psychology
Geodesy
Computer security
Evolutionary biology
Biology
Neuroscience
Programming language
Geography
Authors
Xiaoxu Cai,Sheng Wang,Jianwen Lou,Muwei Jian,Junyu Dong,Rung-Ching Chen,Brett Stevens,Hui Yu
Identifier
DOI:10.1016/j.ins.2023.119625
Abstract
In this work, we introduce a novel approach for saliency detection through the utilization of a generative adversarial network guided by perceptual loss. Achieving effective saliency detection through deep learning entails intricate challenges influenced by a multitude of factors, with the choice of loss function playing a pivotal role. Previous studies usually formulate loss functions based on pixel-level distances between predicted and ground-truth saliency maps. However, these formulations do not explicitly exploit the perceptual attributes of objects, such as their shapes and textures, which serve as critical indicators of saliency. To tackle this deficiency, we propose an innovative loss function that capitalizes on perceptual features derived from the saliency map. Our approach has been rigorously evaluated on six benchmark datasets, demonstrating competitive performance compared with state-of-the-art methods in terms of both Mean Absolute Error (MAE) and F-measure. Remarkably, our experiments reveal consistent outcomes when assessing the perceptual loss using either grayscale saliency maps or saliency-masked colour images. This observation underscores the significance of shape information in shaping the perceptual saliency cues. The code is available at https://github.com/XiaoxuCai/PerGAN.
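The abstract contrasts pixel-level losses with a perceptual loss computed on features of the saliency map itself. The snippet below is a minimal, hypothetical sketch of that general idea (it is not the authors' PerGAN code; see their repository for the actual implementation): a frozen ImageNet-pretrained VGG16 extracts features from the predicted and ground-truth grayscale saliency maps, and the loss is the distance between those feature activations rather than between raw pixels. The layer cut-off, the L1 feature distance, and the weighting term `lambda_per` are all illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights


class PerceptualSaliencyLoss(nn.Module):
    """Feature-space distance between predicted and ground-truth saliency maps.

    A generic sketch of a VGG-based perceptual loss, not the paper's exact design.
    """

    def __init__(self, layer_idx: int = 16):
        super().__init__()
        # Frozen ImageNet-pretrained VGG16 used purely as a feature extractor.
        features = vgg16(weights=VGG16_Weights.DEFAULT).features[:layer_idx]
        for p in features.parameters():
            p.requires_grad = False
        self.features = features.eval()
        self.dist = nn.L1Loss()

    def forward(self, pred_map: torch.Tensor, gt_map: torch.Tensor) -> torch.Tensor:
        # Saliency maps are single-channel (N, 1, H, W); replicate to 3 channels
        # so they can be fed to the RGB-trained VGG backbone.
        pred_rgb = pred_map.repeat(1, 3, 1, 1)
        gt_rgb = gt_map.repeat(1, 3, 1, 1)
        # Compare deep feature activations instead of per-pixel values, so that
        # shape/texture discrepancies dominate the gradient signal.
        return self.dist(self.features(pred_rgb), self.features(gt_rgb))


# Illustrative usage inside a GAN generator update, where pixel_loss and adv_loss
# stand for a standard pixel-level term and the adversarial term, and lambda_per
# is an assumed weighting hyperparameter:
#     total_loss = pixel_loss + lambda_per * perceptual_loss(pred, gt) + adv_loss
```

The same module could instead be fed saliency-masked colour images (the input image multiplied by the saliency map), which is the second variant the abstract reports as giving consistent results with the grayscale-map variant.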