Sonar
Computer science
Artificial intelligence
Set (abstract data type)
Computer vision
Pattern recognition (psychology)
Programming language
Authors
Wenpei Jiao, Jianlei Zhang, Chunyan Zhang
Identifier
DOI:10.1016/j.eswa.2024.123495
Abstract
Current sonar image recognition methods excel in closed-set and balanced scenarios, but real underwater data often follow an open-set, long-tailed distribution, leading to misclassifications, especially among tail classes. Although open-set long-tail recognition (OLTR) has received attention for natural images in recent years, it has lacked systematic study for sonar images. To address this gap, we present the first comprehensive study and analysis of open-set long-tail recognition in sonar images (Sonar-OLTR). In this paper, we establish a Sonar-OLTR benchmark by introducing the Nankai Sonar Image Dataset (NKSID), a new collection of 2617 real-world forward-looking sonar images. We investigate the challenges that long-tail distributions pose for existing open-set recognition (OSR) evaluation metrics on sonar images and propose two improved evaluation metrics. Using this benchmark, we conduct a thorough examination of state-of-the-art OSR, long-tail recognition, OLTR, and out-of-distribution detection algorithms. Additionally, we propose a straightforward yet effective integrated Sonar-OLTR approach as a new baseline. This method introduces a Push the right Logit Up and the wrong logit Down (PLUD) loss to increase feature-space margins between known and unknown classes, as well as between head and tail classes within the known classes. Extensive experimental evaluation on the benchmark demonstrates the performance and speed advantages of PLUD, providing insights for future Sonar-OLTR research. The code and dataset are publicly available at https://github.com/Jorwnpay/Sonar-OLTR.
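The abstract names the PLUD loss only by its guiding idea: raise the correct-class logit while suppressing the competing wrong logit to widen feature-space margins. The paper's exact formulation is not given here, so the sketch below is a hypothetical hinge-style illustration of that idea, not the authors' actual loss; the function name `plud_style_loss` and the `margin` parameter are assumptions for illustration.

```python
def plud_style_loss(logits, labels, margin=1.0):
    """Illustrative margin loss in the spirit of PLUD (hypothetical form,
    not the paper's definition): for each sample, push the correct-class
    logit up and the hardest wrong logit down until they are separated
    by at least `margin`.

    logits: list of per-class score lists, one row per sample
    labels: list of correct class indices, one per sample
    """
    total = 0.0
    for row, y in zip(logits, labels):
        right = row[y]                                      # correct-class logit
        wrong = max(v for j, v in enumerate(row) if j != y) # hardest wrong logit
        # hinge term: zero once `right` exceeds `wrong` by the margin
        total += max(0.0, margin - (right - wrong))
    return total / len(logits)

# First sample is well separated (zero penalty); in the second the wrong
# logit 2.3 is within the margin of the correct logit 2.5, so it is penalized.
print(plud_style_loss([[4.0, 1.0, 0.5], [0.2, 2.3, 2.5]], [0, 2]))
```

Minimizing such a term enlarges the gap between the correct and the closest competing logit, which is one common way margin-based losses separate head, tail, and unknown classes in the feature space.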