Hydrophone
Computer science
Striation
Sonar
Robustness (evolution)
Interference (communication)
Underwater
Data compression
Sensor fusion
Artificial intelligence
Frequency domain
Underwater acoustic communication
Acoustics
Pattern recognition (psychology)
Computer vision
Geology
Telecommunications
Channel (broadcasting)
Physics
Paleontology
Oceanography
Gene
Chemistry
Biochemistry
Authors
Xingyue Zhou, Yonghong Yan, Kunde Yang
Source
Journal: IEEE Sensors Journal
[Institute of Electrical and Electronics Engineers]
Date: 2021-11-01
Volume/Issue: 21 (21): 24349-24358
Citations: 3
Identifier
DOI: 10.1109/jsen.2021.3112164
Abstract
Applying passive sonar to classify underwater acoustic targets at different depths is a challenging task. Although a self-contained hydrophone array can keep most of its units operating normally in varied environments, precise time synchronization between hydrophones is hard to achieve, which makes data fusion across hydrophones difficult. For a vertical sonar array composed of self-contained units, a deep learning-based data compression and multihydrophone fusion (DCMF) model is proposed to quickly extract acoustic propagation interference features, which are used for underwater acoustic target classification. Unlike frequency-range domain striation features acquired by long-term accumulation, this paper exploits the depth difference between multiple hydrophones to obtain joint frequency-depth domain striation features in a short time. The proposed DCMF performs efficient feature compression and fusion via parallel stacked sparse autoencoders and a multi-input fusion network. The experimental results show that the compressed features are highly robust, have a low mean square error relative to the simulation results, and require shorter signal lengths, which improves the classification efficiency and real-time performance of DCMF. On the experimental dataset, DCMF is compared with several state-of-the-art multiscale fusion models, and the experiments indicate that DCMF achieves the best performance with the lowest computational complexity.
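The abstract's pipeline (per-hydrophone feature compression by parallel sparse autoencoders, then fusion of the compressed codes for classification) can be sketched minimally as follows. This is not the authors' implementation: the feature dimensions, number of hydrophones, code size, and two-class output are all assumed for illustration, and the weights are random rather than trained with the sparsity penalty a real stacked sparse autoencoder would use.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_encoder(n_in, n_code):
    # Random (untrained) encoder weights; a real stacked sparse
    # autoencoder learns these layer by layer with a sparsity penalty
    # on the hidden activations.
    return rng.standard_normal((n_in, n_code)) * 0.1

def encode(x, W):
    # Compress one hydrophone's striation feature vector.
    return np.tanh(x @ W)

# Assumed shapes: 4 self-contained hydrophones, each producing a
# 256-bin frequency-domain striation feature, compressed to 32 dims.
n_hydro, n_freq, n_code = 4, 256, 32
encoders = [make_encoder(n_freq, n_code) for _ in range(n_hydro)]

features = rng.standard_normal((n_hydro, n_freq))  # one snapshot per unit
codes = np.stack([encode(features[i], encoders[i]) for i in range(n_hydro)])

# Multi-input fusion: concatenate the per-hydrophone codes and map them
# to class scores (hypothetical two depth categories), then softmax.
W_fuse = rng.standard_normal((n_hydro * n_code, 2)) * 0.1
logits = codes.reshape(-1) @ W_fuse
probs = np.exp(logits) / np.exp(logits).sum()
print(codes.shape, probs)
```

The key structural point the sketch mirrors is that each hydrophone has its own encoder, so no cross-unit time synchronization is needed before compression; fusion happens only on the compact codes.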