Computer science
Convolutional neural network
Artificial intelligence
Task (project management)
Event (particle physics)
Spatial analysis
Dual (grammatical number)
Data mining
Machine learning
Geography
Remote sensing
Art
Management
Economics
Literature
Physics
Quantum mechanics
Authors
Guangyin Jin,Chenxi Liu,Zhexu Xi,Hengyu Sha,Yanyun Liu,Jincai Huang
Identifier
DOI:10.1016/j.ins.2021.12.085
Abstract
Spatial–temporal event prediction is a particular case of multivariate time series forecasting in which the complex, entangled dynamics of space and time must be modelled jointly. It is an essential component of future smart-city construction, with wide applications in urban traffic management, disaster monitoring and mobility analysis. In recent years, video-like spatial–temporal modelling has been the most common approach in deep learning models. However, video-like modelling cannot capture latent region-wise correlations beyond geographic spatial distance. To overcome this limitation, we propose a novel neural network framework, Adaptive Dual-View WaveNet (ADVW-Net), for urban spatial–temporal event prediction. By integrating the spatial representations from a Convolutional Neural Network (CNN) with those from an adaptive Graph Convolutional Network (GCN), our proposed model captures not only geographic correlations but also latent region-wise dependencies in the input data. In addition, the effective WaveNet architecture is transferred to region-wise spatial–temporal prediction scenarios to learn long-range temporal dependencies. Experimental results on three urban datasets demonstrate the superior performance of our proposed model.
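The abstract describes a dual-view spatial encoder (a grid CNN for geographic locality plus an adaptive GCN whose adjacency is learned from node embeddings) feeding a WaveNet-style stack of dilated causal convolutions for long-range temporal dependencies. The sketch below illustrates one way such a pipeline could be wired up; it assumes PyTorch, regions arranged on an H × W grid, and inputs shaped [batch, time, num_nodes, features]. The class names (AdaptiveGCN, DualViewBlock, TemporalWaveNet, ADVWNetSketch), the additive fusion, and all layer sizes are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a dual-view (CNN + adaptive GCN) spatial encoder followed by a
# WaveNet-style temporal module. Hypothetical structure, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveGCN(nn.Module):
    """Graph convolution over an adjacency learned from node embeddings."""
    def __init__(self, num_nodes, in_dim, out_dim, emb_dim=10):
        super().__init__()
        self.e1 = nn.Parameter(torch.randn(num_nodes, emb_dim))
        self.e2 = nn.Parameter(torch.randn(num_nodes, emb_dim))
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x):                                   # x: [B, N, C]
        adj = F.softmax(F.relu(self.e1 @ self.e2.t()), dim=-1)  # latent region graph
        return self.lin(torch.einsum('nm,bmc->bnc', adj, x))


class DualViewBlock(nn.Module):
    """Fuse a grid CNN view (geographic locality) with an adaptive GCN view."""
    def __init__(self, num_nodes, grid_hw, in_dim, out_dim):
        super().__init__()
        self.h, self.w = grid_hw                            # assumes num_nodes == h * w
        self.cnn = nn.Conv2d(in_dim, out_dim, kernel_size=3, padding=1)
        self.gcn = AdaptiveGCN(num_nodes, in_dim, out_dim)

    def forward(self, x):                                   # x: [B, N, C]
        b, n, c = x.shape
        grid = x.transpose(1, 2).reshape(b, c, self.h, self.w)
        cnn_view = self.cnn(grid).reshape(b, -1, n).transpose(1, 2)
        return F.relu(cnn_view + self.gcn(x))               # simple additive fusion

class TemporalWaveNet(nn.Module):
    """WaveNet-style stack of dilated causal 1-D convolutions with gating."""
    def __init__(self, channels, layers=4):
        super().__init__()
        self.filters = nn.ModuleList()
        self.gates = nn.ModuleList()
        for i in range(layers):
            d = 2 ** i                                      # exponentially growing dilation
            self.filters.append(nn.Conv1d(channels, channels, 2, dilation=d))
            self.gates.append(nn.Conv1d(channels, channels, 2, dilation=d))

    def forward(self, x):                                   # x: [B*N, C, T]
        for f, g in zip(self.filters, self.gates):
            h = F.pad(x, (f.dilation[0], 0))                # left-pad to keep causality
            x = x + torch.tanh(f(h)) * torch.sigmoid(g(h))  # gated residual update
        return x[..., -1]                                   # last step summarises the history


class ADVWNetSketch(nn.Module):
    """Dual-view spatial encoding per step, then temporal WaveNet, then a prediction head."""
    def __init__(self, num_nodes, grid_hw, in_dim, hidden=32):
        super().__init__()
        self.spatial = DualViewBlock(num_nodes, grid_hw, in_dim, hidden)
        self.temporal = TemporalWaveNet(hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                                   # x: [B, T, N, C]
        b, t, n, c = x.shape
        s = torch.stack([self.spatial(x[:, i]) for i in range(t)], dim=1)  # [B, T, N, H]
        s = s.permute(0, 2, 3, 1).reshape(b * n, -1, t)                    # [B*N, H, T]
        return self.head(self.temporal(s)).reshape(b, n)                   # next-step event intensity


if __name__ == "__main__":
    model = ADVWNetSketch(num_nodes=64, grid_hw=(8, 8), in_dim=2)
    print(model(torch.randn(4, 12, 64, 2)).shape)           # torch.Size([4, 64])
```

In this sketch the two spatial views share one feature space and are fused by addition; a gated or attention-based fusion would also fit the dual-view idea, and the grid layout is only needed for the CNN branch.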