Computer science
Modality
Graph
Knowledge graph
Artificial intelligence
Theoretical computer science
Chemistry
Polymer chemistry
Authors
Fei Wang,Xianzhang Zhu,Xin Cheng,Yongjun Zhang,Yansheng Li
Identifiers
DOI:10.1016/j.eswa.2023.121278
Abstract
In the era of remote sensing (RS) big data, recommending RS images that meet users’ individual needs remains an urgent technology for reducing the time cost of acquiring RS images. However, existing methods have two main problems: (1) they rely on the users’ queries and thus lack initiative and cannot tap the users’ potential interests, and (2) they restrict the users’ preferences to temporal and/or spatial information while ignoring other attributes and are not compatible with visual information. To fully explore the features of RS images and thereby achieve accurate active recommendation, in this paper we propose a new Multi-modal Knowledge graph-aware Deep Graph Attention Network (MMKDGAT) built upon graph convolutional networks. Specifically, we first construct a multi-modal knowledge graph (MMKG) for RS images to integrate their various attributes as well as visual information, and then conduct deep relational attention-based information aggregation to enrich the node representations with multi-modal information and higher-order collaborative signals. Extensive experiments on two simulated RS image recommendation datasets demonstrate that MMKDGAT achieves noticeable improvements over several state-of-the-art methods in terms of active recommendation accuracy and cold-start recommendation.
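The abstract describes relational attention-based aggregation over a multi-modal knowledge graph. Below is a minimal sketch, not the authors’ released code, of what one such aggregation layer could look like in PyTorch; it assumes node embeddings that already fuse attribute and visual features, plus per-edge relation embeddings, and the class name RelationalAttentionLayer is a hypothetical name for illustration only.

```python
# Hedged sketch of one relation-aware attention aggregation layer.
# Assumption: node embeddings x already fuse attribute + visual features;
# rel_emb holds an embedding for each edge's relation type.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RelationalAttentionLayer(nn.Module):
    """Aggregates neighbor messages weighted by relation-aware attention."""

    def __init__(self, dim: int):
        super().__init__()
        self.w_msg = nn.Linear(dim, dim, bias=False)   # transforms neighbor messages
        self.w_self = nn.Linear(dim, dim, bias=False)  # transforms the node itself
        self.attn = nn.Linear(3 * dim, 1)              # scores (target, relation, source)

    def forward(self, x, edge_index, rel_emb):
        # x:          (num_nodes, dim) node embeddings
        # edge_index: (2, num_edges) source -> target pairs
        # rel_emb:    (num_edges, dim) relation embedding of each edge
        src, dst = edge_index
        # Relation-aware attention score per edge.
        score = self.attn(torch.cat([x[dst], rel_emb, x[src]], dim=-1)).squeeze(-1)
        score = F.leaky_relu(score)
        score = score - score.max()                    # numerical stability
        exp_score = score.exp()
        # Normalize scores over each target node's incoming edges (softmax).
        denom = torch.zeros(x.size(0), device=x.device).index_add_(0, dst, exp_score)
        alpha = exp_score / (denom[dst] + 1e-16)
        # Attention-weighted sum of transformed source messages into each target node.
        msg = self.w_msg(x[src]) * alpha.unsqueeze(-1)
        out = torch.zeros_like(x).index_add_(0, dst, msg)
        return F.relu(self.w_self(x) + out)


if __name__ == "__main__":
    torch.manual_seed(0)
    num_nodes, num_edges, dim = 6, 10, 16
    x = torch.randn(num_nodes, dim)                    # fused attribute + visual features (toy data)
    edge_index = torch.randint(0, num_nodes, (2, num_edges))
    rel_emb = torch.randn(num_edges, dim)
    layer = RelationalAttentionLayer(dim)
    print(layer(x, edge_index, rel_emb).shape)         # torch.Size([6, 16])
```

Stacking several such layers is one way to propagate the higher-order collaborative signals the abstract refers to; the exact scoring function and fusion of visual features in the paper may differ.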