Computer science
Artificial intelligence
Deep learning
Convolutional neural network
Feature learning
Machine learning
Supervised learning
Artificial neural network
Authors
Dongxin Liu, Tianshi Wang, Shengzhong (Frank) Liu, Ruijie Wang, Shuochao Yao, Tarek Abdelzaher
Source
Venue: International Conference on Computer Communications and Networks
Date: 2021-07-01
Citations: 6
Identifier
DOI:10.1109/icccn52240.2021.9522151
Abstract
This paper presents a contrastive self-supervised representation learning framework that is new in being designed specifically for deep learning from frequency-domain data. Contrastive self-supervised representation learning trains neural networks using mostly unlabeled data. It is motivated by the need to reduce the labeling burden of deep learning. In this paper, we are specifically interested in applying this approach to physical sensing scenarios, such as those arising in Internet-of-Things (IoT) applications. Deep neural networks have been widely utilized in IoT applications, but the performance of such models largely depends on the availability of large labeled datasets, which in turn entails significant training costs. Motivated by the success of contrastive self-supervised representation learning at substantially reducing the need for labeled data (mostly in the areas of computer vision and natural language processing), there is growing interest in customizing the contrastive learning framework to IoT applications. Most existing work in that space approaches the problem from a time-domain perspective. However, IoT applications often measure physical phenomena whose underlying processes (such as acceleration, vibration, or wireless signal propagation) are fundamentally a function of signal frequencies and thus have sparser and more compact representations in the frequency domain. Recently, this observation motivated the development of Short-Time Fourier Neural Networks (STFNets), which learn directly in the frequency domain and were shown to offer large performance gains over Convolutional Neural Networks (CNNs) when designing supervised learning models for IoT tasks. Hence, in this paper, we introduce an STFNet-based Contrastive Self-supervised representation Learning framework (STF-CSL). STF-CSL takes both time-domain and frequency-domain features into consideration. We build the encoder using STFNet as the fundamental building block.
We also apply both time-domain data augmentation and frequency-domain data augmentation during the self-supervised training process. We evaluate the resulting performance of STF-CSL on various human activity recognition tasks. The evaluation results demonstrate that STF-CSL significantly outperforms time-domain-based self-supervised approaches, thereby substantially enhancing our ability to train deep neural networks from unlabeled data in IoT contexts.
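To make the training recipe concrete, the sketch below illustrates the general pattern the abstract describes: generate two augmented views of each sensor signal (one time-domain augmentation, one frequency-domain augmentation applied to the FFT of the signal) and pull their embeddings together with a standard NT-Xent contrastive loss. This is a minimal NumPy illustration of the generic contrastive setup, not the authors' STF-CSL implementation; the augmentation choices (Gaussian jitter, random frequency-band masking) and all function names here are assumptions for illustration.

```python
import numpy as np

def time_aug(x, rng, sigma=0.05):
    """Time-domain augmentation: additive Gaussian jitter (one common choice)."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def freq_aug(x, rng, mask_frac=0.1):
    """Frequency-domain augmentation: zero out a random band of FFT bins."""
    X = np.fft.rfft(x, axis=-1)
    n_bins = X.shape[-1]
    width = max(1, int(mask_frac * n_bins))
    start = rng.integers(0, n_bins - width + 1)
    X[..., start:start + width] = 0.0
    return np.fft.irfft(X, n=x.shape[-1], axis=-1)

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss over a batch of paired view embeddings.

    z1[i] and z2[i] are embeddings of two augmented views of sample i;
    each view's positive is its counterpart, all other views are negatives.
    """
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    n = z1.shape[0]
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return -(sim[np.arange(2 * n), pos] - logsumexp).mean()

# One self-supervised step (encoder omitted; here the raw signal stands in
# for the embedding purely to show the data flow):
rng = np.random.default_rng(0)
batch = rng.normal(size=(8, 64))                      # 8 signals, 64 samples each
view_t = time_aug(batch, rng)                         # time-domain view
view_f = freq_aug(batch, rng)                         # frequency-domain view
loss = nt_xent(view_t, view_f)                        # minimized during training
```

In the paper's framework the two views would pass through a shared STFNet-based encoder before the loss; the frequency-domain masking above is only one plausible stand-in for the frequency-domain augmentations the abstract mentions.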