Computer science
Regularization
A priori and a posteriori
Artificial intelligence
Deep learning
Inference
Machine learning
Pattern recognition
Algorithm
Epistemology
Philosophy
Authors
Qinghe Zheng,Xinyu Tian,Zhiguo Yu,Hongjun Wang,Abdussalam Elhanashi,Sergio Saponara
Identifiers
DOI:10.1016/j.engappai.2023.106082
Abstract
Automatic modulation classification (AMC) is an essential and indispensable topic in the development of cognitive radios. It is the cornerstone of adaptive modulation and demodulation, enabling a radio to perceive and understand its surrounding environment and make corresponding decisions. In this paper, we propose a priori regularization method for deep learning (DL-PR) that guides loss optimization during model training. The regularization factor, designed as a combination of an inter-class confrontation factor and global and dimensional divergences, helps increase the inter-class distance and reduce the intra-class distance of samples. While preserving the original information of the received signals as much as possible, it makes full use of the prior knowledge available in the signal transmission process and ultimately helps deep learning models generalize well over signals with various signal-to-noise ratios (SNRs). To the best of our knowledge, this is the first attempt to regularize deep learning models based on the SNR distribution of samples to improve AMC accuracy. Moreover, it can be shown that the priori regularization can be interpreted as an implicit data augmentation and model ensemble method. Comparisons with a series of state-of-the-art AMC methods and different regularization techniques on the public dataset RadioML 2016.10a demonstrate the superiority of DL-PR across multiple deep learning models, including a CNN with an accuracy of 62.6% and an inference time of 0.82 ms per signal, an LSTM with 61.8% and 0.87 ms, and a hybrid CNN–LSTM with 64.2% and 0.94 ms. In practical applications, DL-PR can also be easily applied in complex environments owing to its robustness to hyper-parameters and SNR estimation.
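The exact DL-PR regularization factor is defined in the full paper; the abstract only describes its intent (larger inter-class distance, smaller intra-class distance, and an SNR-based prior weighting). The snippet below is a minimal illustrative sketch of that general idea, not the paper's formulation: `dlpr_style_loss`, `snr_weight`, and `lambda_reg` are hypothetical names, and the intra/inter distance terms stand in for the inter-class confrontation factor and divergence terms described above.

```python
# Illustrative sketch only: a prior-weighted loss that penalizes intra-class
# spread and rewards inter-class separation, in the spirit of the DL-PR idea
# described in the abstract. It does NOT reproduce the paper's regularization
# factor; all names below are hypothetical.
import torch
import torch.nn.functional as F

def snr_weight(snr_db: torch.Tensor) -> torch.Tensor:
    # Hypothetical SNR prior: weight low-SNR samples slightly more so the
    # model is pushed to generalize across the whole SNR range.
    return torch.sigmoid(-snr_db / 10.0) + 0.5

def dlpr_style_loss(features: torch.Tensor,   # (B, D) penultimate-layer features
                    logits: torch.Tensor,     # (B, C) classifier outputs
                    labels: torch.Tensor,     # (B,)   modulation class indices
                    snr_db: torch.Tensor,     # (B,)   per-sample SNR
                    lambda_reg: float = 0.1) -> torch.Tensor:
    # SNR-weighted cross-entropy term.
    ce = F.cross_entropy(logits, labels, reduction="none")
    ce = (snr_weight(snr_db) * ce).mean()

    # Class means of the feature embeddings present in this batch.
    classes = labels.unique()
    means = torch.stack([features[labels == c].mean(dim=0) for c in classes])

    # Intra-class term: average distance of each sample to its class mean.
    intra = torch.stack(
        [(features[labels == c] - m).norm(dim=1).mean()
         for c, m in zip(classes, means)]).mean()

    # Inter-class term: average pairwise distance between class means.
    if len(classes) > 1:
        inter = torch.pdist(means).mean()
    else:
        inter = features.new_tensor(0.0)

    # Encourage compact classes (small intra) and separated classes (large inter).
    return ce + lambda_reg * (intra - inter)
```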