Computer science
Locality
Embedding
Inference
Generalization
Artificial intelligence
Vertex (graph theory)
Theoretical computer science
Representation (politics)
Machine learning
Mathematics
Graph
Philosophy
Mathematical analysis
Law
Politics
Linguistics
Political science
Authors
Wenchao Yu, Cheng Zheng, Wei Cheng, Charu C. Aggarwal, Dongjin Song, Bo Zong, Haifeng Chen, Wei Wang
Identifier
DOI: 10.1145/3219819.3220000
Abstract
The problem of network representation learning, also known as network embedding, arises in many machine learning tasks under the assumption that a small number of factors of variation in the vertex representations can capture the "semantics" of the original network structure. Most existing network embedding models, with shallow or deep architectures, learn vertex representations from sampled vertex sequences such that the low-dimensional embeddings preserve locality and/or global reconstruction capability. The resulting representations, however, generalize poorly because the sequences sampled from the input network are intrinsically sparse. An ideal approach would therefore generate vertex representations by learning a probability density function over the sampled sequences. In many cases, however, such a distribution on a low-dimensional manifold has no analytic form. In this study, we propose to learn network representations with adversarially regularized autoencoders (NetRA). NetRA learns smoothly regularized vertex representations that capture the network structure well by jointly imposing locality-preserving and global reconstruction constraints. The joint inference is encapsulated in a generative adversarial training process, which circumvents the need for an explicit prior distribution and thus yields better generalization. We demonstrate empirically how well key properties of the network structure are captured, and show the effectiveness of NetRA on a variety of tasks, including network reconstruction, link prediction, and multi-label classification.
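The training recipe the abstract describes can be made concrete with a short sketch: an autoencoder over sampled vertex sequences with a skip-gram-style locality term, whose latent codes are matched against a learned generator by a WGAN-style critic rather than an explicit prior. The PyTorch sketch below is a minimal, hypothetical illustration, not the authors' implementation: the dimensions, the negative-sampling locality loss, the MLP generator and critic, and the weight-clipping schedule are all illustrative assumptions, and the encoder's own adversarial update is omitted for brevity.

```python
# Minimal sketch of NetRA-style training (illustrative, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

V, D, H = 1000, 64, 64                      # vertices, embedding dim, code dim (assumed)

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(V, D)
        self.rnn = nn.LSTM(D, H, batch_first=True)
    def forward(self, seq):                 # seq: (B, T) vertex ids from random walks
        _, (h, _) = self.rnn(self.emb(seq))
        return h[-1]                        # (B, H) latent code per sequence

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(H, H, batch_first=True)
        self.out = nn.Linear(H, V)
    def forward(self, z, T):
        y, _ = self.rnn(z.unsqueeze(1).repeat(1, T, 1))  # feed code at every step
        return self.out(y)                  # (B, T, V) reconstruction logits

enc, dec = Encoder(), Decoder()
gen = nn.Sequential(nn.Linear(H, H), nn.ReLU(), nn.Linear(H, H))    # noise -> fake code
critic = nn.Sequential(nn.Linear(H, H), nn.ReLU(), nn.Linear(H, 1))

opt_ae = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(critic.parameters(), lr=1e-4)

def train_step(seq):
    B, T = seq.shape
    # 1) Autoencoder: global reconstruction + locality-preserving (negative sampling).
    z = enc(seq)
    rec = F.cross_entropy(dec(z, T).reshape(B * T, V), seq.reshape(-1))
    e = enc.emb(seq)                        # adjacent vertices in a walk attract
    pos = F.logsigmoid((e[:, :-1] * e[:, 1:]).sum(-1)).mean()
    negv = enc.emb(torch.randint(0, V, (B, T - 1)))
    neg = F.logsigmoid(-(e[:, :-1] * negv).sum(-1)).mean()
    ae_loss = rec - 0.1 * (pos + neg)       # 0.1 is an assumed trade-off weight
    opt_ae.zero_grad(); ae_loss.backward(); opt_ae.step()
    # 2) Critic: separate encoder codes (real) from generated codes (fake), WGAN-style.
    noise = torch.randn(B, H)
    d_loss = critic(gen(noise).detach()).mean() - critic(enc(seq).detach()).mean()
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    for p in critic.parameters():           # weight clipping as in WGAN
        p.data.clamp_(-0.01, 0.01)
    # 3) Generator: push generated codes toward the encoder's code distribution,
    #    so no explicit analytic prior over the latent space is ever needed.
    g_loss = -critic(gen(noise)).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return ae_loss.item(), d_loss.item(), g_loss.item()

if __name__ == "__main__":
    walks = torch.randint(0, V, (32, 10))   # stand-in for sampled vertex sequences
    print(train_step(walks))
```

After training, the rows of enc.emb.weight (or the codes enc(seq)) would serve as the low-dimensional vertex representations for downstream tasks such as network reconstruction, link prediction, or multi-label classification.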