Computer science
Convolution (computer science)
Adjacency list
Graph
Spatial analysis
Artificial intelligence
Feature (linguistics)
Fuse (electrical)
Data mining
Pattern recognition (psychology)
Temporal database
Deep learning
Algorithm
Theoretical computer science
Artificial neural network
Mathematics
Linguistics
Philosophy
Statistics
Electrical engineering
Engineering
Authors
Yan Wang, Qianqian Ren, Jinbao Li
Identifier
DOI: 10.1016/j.eswa.2023.119959
Abstract
Exploiting deep spatial–temporal features for traffic prediction has become increasingly widespread. Accurate traffic prediction remains challenging due to complex spatial dependencies and time-varying temporal dependencies, especially for long-term prediction tasks. Existing studies usually employ pre-defined spatial graphs or learned fixed adjacency graphs and design models to capture spatial and temporal features. However, a pre-defined or fixed graph cannot accurately model the complex hidden structure. In this paper, a novel deep learning framework called Spatial–Temporal Multi-Feature Fusion Network (STMFFN) is proposed to address these challenges. Specifically, a multi-scale attention module with temporal convolution is designed to capture temporal dependencies at different scales. Then, a gated graph convolution module is proposed, which constructs adaptive adjacency matrices and integrates graph convolution and graph aggregation modules to capture spatial dependencies over different ranges. Moreover, a multi-feature fusion layer is presented to fuse the extracted spatial and temporal dependencies by obtaining attention vectors over temporal and spatial features. Experimental results on real-world datasets show a consistent improvement of 6%–9% over state-of-the-art baselines.
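To make the idea of an adaptive adjacency matrix inside a gated graph convolution more concrete, the following is a minimal PyTorch sketch. It is not the authors' published implementation; it assumes a node-embedding-based adjacency construction (in the style of Graph WaveNet) and a simple sigmoid–tanh gate, and all class names (AdaptiveAdjacency, GatedGraphConv), dimensions, and the toy graph size are illustrative assumptions only.

# Hypothetical sketch of an adaptive adjacency matrix feeding a gated graph
# convolution, in the spirit of the module described in the abstract.
# The paper does not publish this exact formulation; node-embedding-based
# adjacency learning is assumed here purely for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveAdjacency(nn.Module):
    """Learns a dense adjacency matrix from two sets of node embeddings."""

    def __init__(self, num_nodes: int, embed_dim: int = 10):
        super().__init__()
        # Source and target node embeddings are learned end to end.
        self.src_emb = nn.Parameter(torch.randn(num_nodes, embed_dim))
        self.dst_emb = nn.Parameter(torch.randn(num_nodes, embed_dim))

    def forward(self) -> torch.Tensor:
        # Pairwise similarity between node embeddings, normalized row-wise
        # so each row sums to one (a soft, directed adjacency matrix).
        logits = F.relu(self.src_emb @ self.dst_emb.t())
        return F.softmax(logits, dim=1)


class GatedGraphConv(nn.Module):
    """One gated graph convolution step over the learned adjacency."""

    def __init__(self, num_nodes: int, in_dim: int, out_dim: int):
        super().__init__()
        self.adj = AdaptiveAdjacency(num_nodes)
        self.linear = nn.Linear(in_dim, out_dim)
        self.gate = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_nodes, in_dim) node features at one time step.
        a = self.adj()                            # (num_nodes, num_nodes)
        agg = torch.einsum("nm,bmd->bnd", a, x)   # aggregate neighbor features
        # The gate controls how much aggregated information passes through.
        return torch.sigmoid(self.gate(agg)) * torch.tanh(self.linear(agg))


if __name__ == "__main__":
    # Toy usage: a graph of 207 nodes with 2 input features per node.
    x = torch.randn(8, 207, 2)
    layer = GatedGraphConv(num_nodes=207, in_dim=2, out_dim=32)
    print(layer(x).shape)  # torch.Size([8, 207, 32])

Because the adjacency matrix is a learned parameter rather than a pre-defined graph, gradients from the prediction loss can reshape the connectivity, which is the property the abstract argues fixed graphs lack.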