Metadata
Computer science
Graph
Cosine similarity
Crowdsourcing
Information retrieval
Artificial intelligence
Machine learning
World Wide Web
Theoretical computer science
Pattern recognition (psychology)
Authors
Zi-Hui Cai,Hongwei Ding,Mohan Xu,Xiaohui Cui
Identifier
DOI:10.1016/j.asoc.2024.111313
Abstract
Crowdfunding creates opportunities for creative people to raise funds so their ideas can be brought to life. However, a failed campaign imposes real losses on project starters. Crowdfunding success prediction lets them learn the probability of fundraising success as early as possible, so they can cut their losses or revise the project content to raise that probability. Crowdfunding success prediction is a challenging classification task because the descriptive data are diverse while the supervisory information is relatively scarce. Much prior work relies on post-launch factors, but pre-launch prediction is more valuable for project creators and crowdfunding platforms. Although some efforts at pre-launch prediction have been made, most of them analyze only one or more of metadata, text, and images; multimodal studies combining text and video are lacking. Motivated by this, we propose a multimodal dynamic graph convolutional network for crowdfunding success prediction based on text and video data. Specifically, sentences and frames are treated as nodes, and edges are constructed from positional relationships and cosine similarity, which recasts the text and video as a multimodal graph and lets multimodal features interact in graph form. In addition, dynamic graph convolution is used to fuse node features, where "dynamic" means the graph structure changes during convolution. We also build two multimodal crowdfunding datasets containing metadata, text, images, video, and project status, and use them to validate the effectiveness of crowdfunding prediction models. Experiments on these datasets show that our model outperforms existing state-of-the-art baselines and that graph-based interaction and fusion can effectively integrate features from multiple modalities.
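The graph construction and dynamic convolution described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the node embeddings, the similarity threshold, and the mean-aggregation update with `tanh` are all assumptions made for the example.

```python
import numpy as np

def build_multimodal_adjacency(sent_emb, frame_emb, sim_threshold=0.5):
    """Sentences and frames become nodes of one multimodal graph.
    Location edges link adjacent sentences and adjacent frames;
    similarity edges link any node pair whose cosine similarity
    exceeds a threshold (threshold value is an assumption)."""
    nodes = np.concatenate([sent_emb, frame_emb], axis=0)  # (S + F, d)
    n = nodes.shape[0]
    adj = np.eye(n)                            # self-loops
    s = sent_emb.shape[0]
    for i in range(s - 1):                     # adjacent sentences
        adj[i, i + 1] = adj[i + 1, i] = 1.0
    for j in range(s, n - 1):                  # adjacent frames
        adj[j, j + 1] = adj[j + 1, j] = 1.0
    unit = nodes / (np.linalg.norm(nodes, axis=1, keepdims=True) + 1e-8)
    adj[unit @ unit.T > sim_threshold] = 1.0   # cosine-similarity edges
    return nodes, adj

def dynamic_graph_convolution(x, num_layers=2, sim_threshold=0.5):
    """'Dynamic' here means the adjacency is rebuilt from the updated
    node features at every layer, so the graph structure changes during
    convolution. Mean aggregation + tanh is an illustrative choice."""
    for _ in range(num_layers):
        unit = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
        adj = ((unit @ unit.T) > sim_threshold).astype(float)
        np.fill_diagonal(adj, 1.0)             # keep self-loops
        x = np.tanh(adj @ x / adj.sum(axis=1, keepdims=True))
    return x
```

In this sketch the cross-modal interaction comes entirely from the similarity edges: a frame and a sentence exchange information during convolution only if their embeddings are close enough in cosine similarity.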