Link (geometry)
Computer science
Artificial intelligence
Natural language processing
Computer network
Authors
Pengfei Jiao, Xinxun Zhang, Zehao Liu, Long Zhang, Huaming Wu, Mengzhou Gao, Tianpeng Li, Jian Wu
Identifier
DOI: 10.1016/j.ins.2024.120499
Abstract
In dynamic networks, temporal link prediction aims to predict the appearance and disappearance of links in future snapshots based on the network structure observed so far, and it plays a crucial role in network analysis and in predicting the behavior of dynamic systems. However, most existing studies focus only on the supervised temporal link prediction problem, i.e., they take part of the links in future snapshots as supervision. Methods that can solve the unsupervised temporal link prediction problem are mainly based on matrix decomposition and lack the capability to automatically extract nonlinear spatial and temporal features from dynamic networks. The most challenging part of this problem is to extract, in an unsupervised way, the inherent evolution patterns hidden in dynamic networks. Inspired by the applications and achievements of contrastive learning in network representation learning, we propose a novel deep Contrastive framework for unsupervised Temporal Link Prediction (CTLP). Our framework is built on a deep encoder-decoder architecture that automatically captures nonlinear structural and temporal features and predicts the links of subsequent snapshots of dynamic networks in an unsupervised manner. In addition, CTLP can handle the multi-step temporal link prediction problem through attenuation modeling across snapshots. Extensive experiments on temporal link prediction show that our CTLP framework significantly outperforms state-of-the-art unsupervised methods, and even outperforms supervised methods in some cases.
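The sketch below is not the authors' CTLP implementation, which is not given in the abstract; it is a minimal Python/PyTorch illustration of the general idea the abstract describes: encode a sequence of network snapshots with a nonlinear encoder, score future links with a decoder, and train without link supervision using a contrastive (InfoNCE-style) loss between two perturbed views of the same history. All module names, dimensions, the edge-dropout augmentation, and the GRU-over-snapshots design are assumptions for illustration only; the paper's attenuation modeling for multi-step prediction is omitted.

# Illustrative sketch only (not the authors' CTLP code): an encoder-decoder over
# adjacency snapshots trained with an InfoNCE-style contrastive loss between two
# randomly perturbed views of the same snapshot history.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SnapshotEncoder(nn.Module):
    """Project each snapshot's adjacency rows, then aggregate over time with a GRU."""

    def __init__(self, num_nodes, hidden_dim=64):
        super().__init__()
        self.node_proj = nn.Linear(num_nodes, hidden_dim)  # structural features per node
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, snapshots):                 # snapshots: (T, N, N)
        h = torch.relu(self.node_proj(snapshots))  # (T, N, H)
        h = h.permute(1, 0, 2)                     # (N, T, H): one sequence per node
        _, last = self.gru(h)                      # final hidden state per node
        return last.squeeze(0)                     # (N, H) temporal node embeddings


def decode_links(z):
    """Inner-product decoder: link probability for every node pair."""
    return torch.sigmoid(z @ z.t())


def drop_edges(snapshots, p=0.2):
    """Augmentation (assumed): randomly drop a fraction of entries in every snapshot."""
    mask = (torch.rand_like(snapshots) > p).float()
    return snapshots * mask


def info_nce(z1, z2, tau=0.5):
    """Contrast two views: each node's embedding should match itself across views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                     # (N, N) similarity matrix
    labels = torch.arange(z1.size(0))              # positives lie on the diagonal
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    torch.manual_seed(0)
    T, N = 5, 30                                   # toy history: 5 snapshots, 30 nodes
    snapshots = (torch.rand(T, N, N) < 0.1).float()
    snapshots = ((snapshots + snapshots.transpose(1, 2)) > 0).float()  # symmetrize

    encoder = SnapshotEncoder(num_nodes=N)
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

    for step in range(100):                        # unsupervised training: no future links used
        z1 = encoder(drop_edges(snapshots))        # view 1
        z2 = encoder(drop_edges(snapshots))        # view 2
        loss = info_nce(z1, z2)
        opt.zero_grad()
        loss.backward()
        opt.step()

    scores = decode_links(encoder(snapshots))      # (N, N) predicted link probabilities
    print(scores.shape, float(scores.mean()))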