RGB color model
Artificial intelligence
Computer science
Salience
Context (archaeology)
Feature (linguistics)
Pattern recognition (psychology)
Object detection
Computer vision
Spatial contextual awareness
Aggregate (composite)
Boundary (topology)
Mathematics
Geography
Mathematical analysis
Philosophy
Composite material
Archaeology
Materials science
Linguistics
Authors
Junwei Wu, Wujie Zhou, Ting Luo, Lu Yu, Jingsheng Lei
Identifier
DOI:10.1016/j.sigpro.2020.107766
Abstract
Red–green–blue and depth (RGB-D) saliency detection has recently attracted much research attention; however, the effective use of depth information remains challenging. This paper proposes a method that leverages depth information in clear shapes to detect the boundary of salient objects. As context plays an important role in saliency detection, the method incorporates a proposed end-to-end multiscale multilevel context and multimodal fusion network (MCMFNet) to aggregate multiscale multilevel context feature maps for accurate saliency detection from objects of varying sizes. Finally, a coarse-to-fine approach is applied to an attention module retrieving multilevel and multimodal feature maps to produce the final saliency map. A comprehensive loss function is also incorporated in MCMFNet to optimize the network parameters. Extensive experiments demonstrate the effectiveness of the proposed method and its substantial improvement over state-of-the-art methods for RGB-D salient object detection on four representative datasets.
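The attention-gated fusion of RGB and depth feature maps described in the abstract can be illustrated with a minimal sketch. This is an assumption-laden toy example, not the authors' MCMFNet implementation: the function `attention_fuse`, its projection matrices, and the channel-attention gating scheme are hypothetical stand-ins for whatever learned fusion module the paper actually uses.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_fuse(rgb_feat, depth_feat, w_rgb, w_depth):
    """Illustrative channel-attention fusion of two modality feature maps.

    rgb_feat, depth_feat: (C, H, W) feature maps from the two streams.
    w_rgb, w_depth: (C, C) projection matrices (random here; learned in a
    real network). Returns a fused (C, H, W) map.
    """
    # Global average pooling -> one descriptor per channel, shape (C,)
    g_rgb = rgb_feat.mean(axis=(1, 2))
    g_depth = depth_feat.mean(axis=(1, 2))
    # Channel-attention gate in [0, 1], driven by both modalities
    gate = sigmoid(w_rgb @ g_rgb + w_depth @ g_depth)        # (C,)
    # Broadcast over the spatial dims: per-channel convex combination
    gate = gate[:, None, None]
    return gate * rgb_feat + (1.0 - gate) * depth_feat

rng = np.random.default_rng(0)
C, H, W = 8, 16, 16
fused = attention_fuse(rng.normal(size=(C, H, W)),
                       rng.normal(size=(C, H, W)),
                       rng.normal(size=(C, C)),
                       rng.normal(size=(C, C)))
print(fused.shape)  # (8, 16, 16)
```

Because the gate lies in [0, 1], each fused value is a per-channel convex combination of the RGB and depth responses, which is one simple way a network can let depth dominate where it carries clearer shape cues.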