Computer science
Subnetwork
Object detection
Artificial intelligence
Computer vision
Object (grammar)
Feature (linguistics)
Visibility
Backbone network
Underwater
Pattern recognition (psychology)
Physics
Philosophy
Geology
Optics
Oceanography
Linguistics
Computer network
Authors
Na Cheng,Hongye Xie,Xuanbing Zhu,Hongyu Wang
Identifier
DOI:10.1016/j.engappai.2023.105905
Abstract
Marine object detection has received increasing attention due to its enormous application potential in marine engineering, Remotely Operated Vehicles, and Autonomous Underwater Vehicles. Generic object detection has made substantial progress with the prevalent trend of deep learning in the past few years. However, marine object detection in natural scenes remains an unsolved problem: the challenges stem from low visibility, small object size, severe occlusion, and dense distribution. In this article, we address the marine object detection problem by presenting a joint attention-guided dual-subnet network (JADSNet) that jointly learns the image enhancement and object detection tasks with end-to-end training. JADSNet attains significant performance gains by combining two subnetworks: an image enhancement subnet and a marine object detection subnet. The marine object detection subnet is an extended feature pyramid network with a dual attention-guided module and a multi-term loss function; it takes RetinaNet as a backbone and is responsible for classifying and locating objects. The image enhancement subnet shares feature extraction layers with the marine object detection subnet and applies a feature enhancement module. A multi-term loss function is introduced to reduce false detections and missed detections caused by the mutual occlusion of marine objects. We build a new Marine Object Detection (MOD) dataset that contains more than 25,000 train-val and 3,000 test underwater images. Experimental findings demonstrate that JADSNet achieves notable performance, reaching 74.41% mAP on the MOD dataset. We also verify that JADSNet can be applied to object detection in foggy weather, achieving 49.54% mAP on the foggy dataset.
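The abstract only sketches the dual-subnet idea: a shared feature extractor whose outputs feed both an image enhancement head and a detection head, trained jointly end to end. The snippet below is a minimal PyTorch-style sketch of that layout under stated assumptions; the module names, channel widths, toy backbone (standing in for RetinaNet/FPN), and head shapes are illustrative placeholders, and it omits the dual attention-guided module and the multi-term loss described in the paper.

```python
# Minimal sketch (assumption, not the authors' implementation) of a dual-subnet
# model: one shared backbone, one enhancement subnet, one detection subnet.
import torch
import torch.nn as nn


class SharedBackbone(nn.Module):
    """Toy convolutional backbone standing in for the RetinaNet/FPN backbone."""
    def __init__(self, out_channels=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, out_channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.features(x)


class EnhancementSubnet(nn.Module):
    """Decodes the shared features back into an enhanced RGB image."""
    def __init__(self, in_channels=64):
        super().__init__()
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(in_channels, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, feats):
        return self.decode(feats)


class DetectionSubnet(nn.Module):
    """Predicts per-location class scores and box offsets from the shared features."""
    def __init__(self, in_channels=64, num_classes=4, num_anchors=9):
        super().__init__()
        self.cls_head = nn.Conv2d(in_channels, num_anchors * num_classes, 3, padding=1)
        self.box_head = nn.Conv2d(in_channels, num_anchors * 4, 3, padding=1)

    def forward(self, feats):
        return self.cls_head(feats), self.box_head(feats)


class DualSubnetModel(nn.Module):
    """Joint model: features are computed once and shared by both subnets."""
    def __init__(self):
        super().__init__()
        self.backbone = SharedBackbone()
        self.enhance = EnhancementSubnet()
        self.detect = DetectionSubnet()

    def forward(self, x):
        feats = self.backbone(x)
        return self.enhance(feats), self.detect(feats)


if __name__ == "__main__":
    model = DualSubnetModel()
    image = torch.rand(1, 3, 256, 256)  # dummy underwater image
    enhanced, (cls_logits, box_regs) = model(image)
    print(enhanced.shape, cls_logits.shape, box_regs.shape)
```

In such a layout the enhancement loss and the detection loss would both backpropagate into the shared backbone, which is the mechanism that lets the two tasks be learned jointly rather than in separate pipelines.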