Journal: IEEE Geoscience and Remote Sensing Letters [Institute of Electrical and Electronics Engineers]  Date: 2022-01-01  Volume: 19, pp. 1-5  Citations: 36
Identifiers
DOI:10.1109/lgrs.2022.3201396
Abstract
Underwater acoustic target recognition (UATR) is usually difficult due to the complex, multi-path underwater environment. Deep learning-based UATR methods have proven their effectiveness and have outperformed traditional methods by using powerful convolutional neural networks (CNNs) to extract discriminative features from acoustic spectrograms. However, CNNs often fail to capture the global information contained in a spectrogram because of their small kernels, and thus encounter a performance bottleneck. To this end, we propose the UATR-Transformer, based on a convolution-free architecture referred to as the Transformer, which can perceive both global and local information in acoustic spectrograms and thus improve accuracy. Experiments on two real-world datasets demonstrate that our proposed model achieves results comparable to state-of-the-art CNNs and can therefore be applied to certain UATR scenarios.
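The abstract does not include implementation details of the UATR-Transformer. As a rough, hedged illustration of the general idea it describes (a convolution-free, patch-based Transformer classifier operating on acoustic spectrograms, where self-attention mixes information across all patches globally), the PyTorch sketch below shows a minimal ViT-style model. The class name `SpectrogramTransformer` and all dimensions and hyperparameters are assumptions for illustration only, not the authors' architecture.

```python
import torch
import torch.nn as nn

class SpectrogramTransformer(nn.Module):
    """Illustrative ViT-style classifier over spectrogram patches (not the paper's model)."""
    def __init__(self, n_classes, patch=16, dim=192, depth=4, heads=3,
                 freq_bins=128, time_steps=128):
        super().__init__()
        # Patch embedding: a strided conv slices the spectrogram into
        # non-overlapping patch x patch tiles and projects each tile to `dim`.
        self.patch_embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        n_patches = (freq_bins // patch) * (time_steps // patch)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):  # x: (batch, 1, freq_bins, time_steps)
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)   # (B, N, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        tokens = self.encoder(tokens)        # self-attention attends across all patches
        return self.head(tokens[:, 0])       # classify from the [CLS] token

# Usage example: a batch of 8 single-channel 128x128 log-spectrograms.
model = SpectrogramTransformer(n_classes=5)
logits = model(torch.randn(8, 1, 128, 128))
print(logits.shape)  # torch.Size([8, 5])
```

In contrast to a CNN, whose receptive field grows only gradually with depth, each self-attention layer here relates every spectrogram patch to every other patch, which is the global-context property the abstract attributes to Transformer-based models.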