Cascade
Information cascade
Computer science
Transformer (machine learning)
Artificial intelligence
Pattern recognition (psychology)
Machine learning
Data mining
Voltage
Mathematics
Statistics
Chemistry
Physics
Chromatography
Quantum mechanics
Authors
Xigang Sun,Jingya Zhou,Liu Ling,Zhen Wu
Identifier
DOI:10.1016/j.ins.2023.119531
Abstract
Predicting information diffusion cascades is an essential task in social networks. We focus on predicting the size of an information cascade. The relationships inside a cascade are diverse, including global and relative spatio-temporal relationships as well as interpersonal influence relationships. These complex relationships between nodes play a crucial role in cascade prediction, but they have not been thoroughly investigated. The Transformer's global receptive field can help capture the relationship between any two nodes; however, applying the Transformer directly to a cascade is insufficient without considering the cascade's temporal and structural characteristics. In this paper, we propose the first cascade Transformer, called CasTformer, specifically designed for cascade size prediction. CasTformer applies a global spatio-temporal positional encoding and relative relationship bias matrices to the self-attention mechanism to capture diverse cascade relationships. Moreover, self-knowledge distillation is employed to obtain a better cascade representation and enhance prediction performance. We validate our model on four datasets with nearly a million cascade samples, and it completes training within 3 hours. Experimental results show that it outperforms state-of-the-art methods by an average of 11.9%, 6.1%, and 9.6% on MSLE, MAPE, and R², respectively.
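The abstract describes adding relative relationship bias matrices to the self-attention mechanism. The paper itself does not give code here, so the following is only a minimal toy sketch of the general idea: scaled dot-product attention whose pre-softmax scores are shifted by an additive pairwise bias matrix `B` (the function name `biased_attention` and the toy inputs are hypothetical, not from the paper).

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def biased_attention(Q, K, V, B):
    """Scaled dot-product attention with an additive pairwise bias.

    Q, K, V: lists of d-dimensional vectors, one per cascade node.
    B[i][j]: bias added to the attention score of node i toward node j
    (in CasTformer such biases would encode relative relationships;
    this toy version just adds whatever matrix it is given).
    """
    d = len(Q[0])
    out = []
    for i, q in enumerate(Q):
        # Pre-softmax scores: q·k / sqrt(d) plus the relative bias term.
        scores = [
            sum(qc * kc for qc, kc in zip(q, K[j])) / math.sqrt(d) + B[i][j]
            for j in range(len(K))
        ]
        w = softmax(scores)
        # Weighted sum of value vectors.
        out.append([sum(w[j] * V[j][t] for j in range(len(V))) for t in range(d)])
    return out
```

With a zero bias matrix and identical keys, every value vector receives equal weight; a strongly positive `B[i][j]` steers node `i`'s attention toward node `j`, which is how a relative-relationship bias can reshape attention without changing the queries or keys.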