Keywords
Convolutional neural network, Semantics (computer science), Artificial intelligence, Computer science, Correlation, Contrast (vision), Salience, Context (archaeology), Object (grammar), Pattern recognition (psychology), Artificial neural network, Deep learning, Association (psychology), Object detection, Natural language processing, Mathematics, Programming language, Paleontology, Philosophy, Epistemology, Biology, Geometry
Authors
Yi Liu, Ling Zhou, Gengshen Wu, Shoukun Xu, Jungong Han
Source
Journal: IEEE Transactions on Intelligent Transportation Systems
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2023-01-01
Pages: 1-12
Citations: 14
Identifier
DOI: 10.1109/tits.2023.3342811
Abstract
Contrast and part-whole relations induced by deep neural networks such as Convolutional Neural Networks (CNNs) and Capsule Networks (CapsNets) are known as two types of semantic cues for deep salient object detection. However, few works pay attention to their complementary properties in the context of saliency prediction. In this paper, we probe into this issue and propose a Type-Correlation Guidance Network (TCGNet) for salient object detection. Specifically, a Multi-Type Cue Correlation (MTCC) module covering CNNs and CapsNets is designed to extract the contrast and part-whole relational semantics, respectively. Using MTCC, two correlation matrices containing complementary information are computed from these two types of semantics. In return, these correlation matrices are used to guide the learning of the above semantics so as to generate better saliency cues. Besides, a Type Interaction Attention (TIA) is developed to let the semantics from CNNs and CapsNets interact for the purpose of saliency prediction. Experiments and analysis on five benchmarks show the superiority of the proposed approach. Code has been released at https://github.com/liuyi1989/TCGNet.
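The abstract describes guiding each feature type (CNN contrast semantics, CapsNet part-whole semantics) with a correlation matrix computed against the other type. The authors' released code at the GitHub link above is the authoritative implementation; the PyTorch sketch below is only a minimal, hypothetical illustration of such cross-type correlation guidance. The class and layer names (MultiTypeCueCorrelation, proj_cnn, proj_caps, fuse) and the specific fusion scheme are assumptions, not the paper's actual modules.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTypeCueCorrelation(nn.Module):
    """Hypothetical sketch of cross-type correlation guidance.

    Projects CNN (contrast) and CapsNet (part-whole) feature maps into a
    shared channel space, computes two spatial correlation matrices (one
    per direction), and uses each matrix to re-weight the other branch's
    features before fusing them into a single saliency cue.
    """

    def __init__(self, cnn_channels, caps_channels, shared_channels=64):
        super().__init__()
        self.proj_cnn = nn.Conv2d(cnn_channels, shared_channels, kernel_size=1)
        self.proj_caps = nn.Conv2d(caps_channels, shared_channels, kernel_size=1)
        self.fuse = nn.Conv2d(2 * shared_channels, shared_channels,
                              kernel_size=3, padding=1)

    def forward(self, f_cnn, f_caps):
        # Project both feature types into a common embedding space.
        c = self.proj_cnn(f_cnn)       # (B, C, H, W) contrast semantics
        p = self.proj_caps(f_caps)     # (B, C, H, W) part-whole semantics

        b, ch, h, w = c.shape
        c_flat = c.view(b, ch, h * w)  # (B, C, HW)
        p_flat = p.view(b, ch, h * w)  # (B, C, HW)

        # Two spatial correlation matrices between the semantic types.
        corr_cp = torch.bmm(c_flat.transpose(1, 2), p_flat)  # (B, HW, HW)
        corr_pc = corr_cp.transpose(1, 2)                     # (B, HW, HW)

        # Guide each branch with the correlation derived from the other.
        c_guided = torch.bmm(p_flat, F.softmax(corr_pc, dim=-1)).view(b, ch, h, w)
        p_guided = torch.bmm(c_flat, F.softmax(corr_cp, dim=-1)).view(b, ch, h, w)

        # Fuse the mutually guided features into one saliency cue map.
        return self.fuse(torch.cat([c + c_guided, p + p_guided], dim=1))
```

As a usage sketch, one could instantiate mtcc = MultiTypeCueCorrelation(256, 128) and call cue = mtcc(cnn_feat, caps_feat), then predict a saliency map from cue with a 1x1 convolution; the actual TCGNet pipeline, including the Type Interaction Attention (TIA), should be taken from the authors' repository.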