Computer Science
Benchmarking
Multimodal Learning
Task
Artificial Intelligence
Fusion
Machine Learning
Representation
Information Fusion
Representation Learning
Authors
Han Liu, Yinwei Wei, Fan Liu, Wenjie Wang, Liqiang Nie, Tat-Seng Chua
Source
Journal: ACM Transactions on Information Systems
Date: 2023-08-30
Volume/Issue: 42 (2): 1-26
Citations: 5
Abstract
Multimodal information (e.g., visual, acoustic, and textual) has been widely used to enhance representation learning for micro-video recommendation. For integrating multimodal information into a joint representation of micro-video, multimodal fusion plays a vital role in the existing micro-video recommendation approaches. However, the static multimodal fusion used in previous studies is insufficient to model the various relationships among multimodal information of different micro-videos. In this article, we develop a novel meta-learning-based multimodal fusion framework called Meta Multimodal Fusion (MetaMMF), which dynamically assigns parameters to the multimodal fusion function for each micro-video during its representation learning. Specifically, MetaMMF regards the multimodal fusion of each micro-video as an independent task. Based on the meta information extracted from the multimodal features of the input task, MetaMMF parameterizes a neural network as the item-specific fusion function via a meta learner. We perform extensive experiments on three benchmark datasets, demonstrating significant improvements over several state-of-the-art multimodal recommendation models, such as MMGCN, LATTICE, and InvRL. Furthermore, we lighten our model by adopting canonical polyadic decomposition to improve the training efficiency, and validate its effectiveness through experimental results. Code is available at https://github.com/hanliu95/MetaMMF.
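To make the dynamic-fusion idea in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' implementation: a meta learner reads per-item meta information (here simply an encoding of the concatenated modality features) and generates the weights and bias of an item-specific fusion layer. The class name, dimensions, single-layer fusion function, and activation are illustrative assumptions; the paper's lightweight variant based on canonical polyadic decomposition of the generated weights is omitted here for brevity.

```python
import torch
import torch.nn as nn


class MetaFusion(nn.Module):
    """Hypothetical sketch of meta-learned, item-specific multimodal fusion."""

    def __init__(self, vis_dim: int, aco_dim: int, txt_dim: int,
                 out_dim: int, meta_dim: int = 64):
        super().__init__()
        in_dim = vis_dim + aco_dim + txt_dim
        # Meta learner: encodes meta information from the multimodal features and
        # generates the parameters (weight matrix + bias) of a per-item linear fusion layer.
        self.meta_encoder = nn.Sequential(nn.Linear(in_dim, meta_dim), nn.ReLU())
        self.weight_gen = nn.Linear(meta_dim, in_dim * out_dim)
        self.bias_gen = nn.Linear(meta_dim, out_dim)
        self.in_dim, self.out_dim = in_dim, out_dim

    def forward(self, vis: torch.Tensor, aco: torch.Tensor, txt: torch.Tensor) -> torch.Tensor:
        # Concatenate modality features of each micro-video: (batch, in_dim).
        x = torch.cat([vis, aco, txt], dim=-1)
        meta = self.meta_encoder(x)                                      # per-item meta information
        W = self.weight_gen(meta).view(-1, self.out_dim, self.in_dim)    # item-specific weights
        b = self.bias_gen(meta)                                          # item-specific bias
        # Apply the dynamically parameterized fusion function to get the joint representation.
        fused = torch.bmm(W, x.unsqueeze(-1)).squeeze(-1) + b
        return torch.tanh(fused)


if __name__ == "__main__":
    # Toy usage with assumed feature dimensions.
    model = MetaFusion(vis_dim=128, aco_dim=128, txt_dim=100, out_dim=64)
    vis, aco, txt = torch.randn(4, 128), torch.randn(4, 128), torch.randn(4, 100)
    print(model(vis, aco, txt).shape)  # torch.Size([4, 64])
```

In this sketch the fusion weights are a function of the item itself, so different micro-videos are fused with different parameters, in contrast to a static fusion layer shared across all items.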