Keywords
Computer science
Carrier frequency offset
Estimator
Quantization (signal processing)
Frequency offset
Algorithm
Mean squared error
Deep learning
Electronic engineering
Artificial intelligence
Orthogonal frequency-division multiplexing
Telecommunications
Mathematics
Statistics
Channel (communications)
Engineering
Authors
Ryan M. Dreifuerst, Robert W. Heath, Mandar N. Kulkarni, Jianzhong Charlie Zhang
Identifier
DOI: 10.1109/spawc48557.2020.9154214
Abstract
Low-resolution architectures are a power-efficient solution for high-bandwidth communication at millimeter wave and terahertz frequencies. In such systems, carrier synchronization is important yet has not received much attention. In this paper, we develop and analyze deep learning architectures for estimating the carrier frequency of a complex sinusoid in noise from 1-bit samples of the in-phase and quadrature components. Carrier frequency offset estimation from a sinusoid is used in GSM and is a first step toward developing a more comprehensive solution for other kinds of signals. We train four different deep learning architectures, each on eight datasets that represent possible training considerations. Specifically, we consider how training with various signal-to-noise ratios (SNRs), quantization, and sequence lengths affects estimation error. Further, we analyze each architecture in terms of scalability for MIMO receivers. In simulations, we compare execution time and mean squared error (MSE) against classic signal processing techniques. We demonstrate that training with quantized data drawn from signals with SNRs between 0 and 10 dB tends to improve deep learning estimator performance across the entire SNR range of interest. We conclude that convolutional models have the best performance while also requiring shorter execution time than FFT methods. Our approach accurately estimates carrier frequencies from 1-bit quantized data with fewer pilots and at lower SNRs than traditional signal processing methods.
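To make the setup concrete, the following is a minimal sketch (not the authors' code) of the signal model and the classic FFT-peak baseline the abstract compares against: a complex sinusoid in additive white Gaussian noise, 1-bit quantization of the in-phase and quadrature samples, and a zero-padded FFT peak search. The function names, the normalized frequency, the SNR, and the sequence length are all illustrative assumptions.

```python
# Sketch of the problem setting: 1-bit quantized noisy sinusoid,
# estimated with an FFT-peak baseline (a classic method, assumed here
# as a stand-in for the paper's signal processing comparison).
import numpy as np

def one_bit_quantize(x):
    """1-bit quantization: keep only the signs of the I and Q components."""
    return np.sign(x.real) + 1j * np.sign(x.imag)

def make_signal(f_norm, n, snr_db, rng):
    """Complex sinusoid at normalized frequency f_norm in AWGN at snr_db."""
    t = np.arange(n)
    s = np.exp(2j * np.pi * f_norm * t)          # unit-power sinusoid
    noise_power = 10.0 ** (-snr_db / 10.0)
    noise = np.sqrt(noise_power / 2.0) * (
        rng.standard_normal(n) + 1j * rng.standard_normal(n)
    )
    return s + noise

def fft_estimate(y, oversample=8):
    """Frequency estimate from the FFT peak, zero-padded for a finer grid."""
    n_fft = oversample * len(y)
    spectrum = np.abs(np.fft.fft(y, n_fft))
    f = np.argmax(spectrum) / n_fft
    return f - 1.0 if f > 0.5 else f             # map to [-0.5, 0.5)

rng = np.random.default_rng(0)
f_true = 0.0123                                  # hypothetical normalized CFO
y = one_bit_quantize(make_signal(f_true, n=256, snr_db=5.0, rng=rng))
f_hat = fft_estimate(y)
print(f"true {f_true:.4f}, estimated {f_hat:.4f}, "
      f"squared error {(f_true - f_hat) ** 2:.2e}")
```

The 1-bit quantizer is the crux of the problem: amplitude information is discarded and only zero crossings remain, which is why classic estimators degrade at low SNR and short pilot lengths. The deep learning estimators studied in the paper replace the FFT search with learned regressors over the quantized I/Q sequence.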