Authors
Dehua Zheng, Xiaochen Zheng, Laurence T. Yang, Yuan Gao, Chenlu Zhu, Yiheng Ruan
Identifier
DOI:10.1109/wacv56688.2023.00617
Abstract
Recent research on camouflaged object detection (COD) aims to segment highly concealed objects hidden in complex surroundings. Tiny, fuzzy camouflaged objects exhibit visually indistinguishable properties. However, current single-view COD detectors are sensitive to background distractors; the blurred boundaries and variable shapes of camouflaged objects are therefore difficult to fully capture with a single-view detector. To overcome these obstacles, we propose a behavior-inspired framework, called Multi-view Feature Fusion Network (MFFN), which mimics the human behavior of finding indistinct objects in images, i.e., observing them from multiple angles, distances, and perspectives. Specifically, the key idea is to generate multiple ways of observation (multi-view) by data augmentation and apply them as inputs. MFFN captures critical boundary and semantic information by comparing and fusing the extracted multi-view features. In addition, MFFN exploits the dependence and interaction between views and channels. Specifically, our method leverages the complementary information between different views through a two-stage attention module called Co-attention of Multi-view (CAMV), and we design a local-overall module called Channel Fusion Unit (CFU) to explore the channel-wise contextual clues of diverse feature maps in an iterative manner. Experimental results show that our method performs favorably against existing state-of-the-art methods when trained on the same data. The code will be available at https://github.com/dwardzheng/MFFN_COD.
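The multi-view pipeline described in the abstract — augment the input into several views, extract features per view, then fuse them with attention across views — can be illustrated with a minimal sketch. This is a toy illustration with assumed names (`make_views`, `extract_feature`, `fuse_views`) and a histogram "feature extractor" standing in for a CNN backbone; it is not the paper's actual CAMV or CFU implementation.

```python
import numpy as np

def make_views(img):
    """Generate multiple 'views' via simple augmentations, a hypothetical
    stand-in for the paper's multi-view data augmentation
    (different angles / distances / perspectives)."""
    flipped = img[:, ::-1]                                 # horizontal flip ("angle")
    h, w = img.shape
    zoomed = img[h // 4: 3 * h // 4, w // 4: 3 * w // 4]   # center crop ("distance")
    return [img, flipped, zoomed]

def extract_feature(view, dim=8):
    """Toy feature extractor: intensity histogram (placeholder for a
    shared CNN backbone applied to each view)."""
    hist, _ = np.histogram(view, bins=dim, range=(0.0, 1.0), density=True)
    return hist

def fuse_views(features):
    """Attention-style fusion across views: weight each view's feature by
    its similarity to the consensus (mean) feature, then take a softmax-
    weighted sum. A loose sketch of cross-view attention, not the actual
    two-stage CAMV module."""
    F = np.stack(features)              # (n_views, dim)
    consensus = F.mean(axis=0)
    scores = F @ consensus              # similarity of each view to consensus
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()            # softmax over views
    return weights @ F                  # fused feature, shape (dim,)

rng = np.random.default_rng(0)
img = rng.random((32, 32))
fused = fuse_views([extract_feature(v) for v in make_views(img)])
print(fused.shape)  # (8,)
```

Views that agree with the consensus get larger weights, so complementary but consistent evidence is emphasized while an outlier view (e.g., one dominated by background distractors) is down-weighted — the intuition the abstract attributes to comparing and fusing multi-view features.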