Computer Science
Task (project management)
Fluency
Context (archaeology)
Artificial Intelligence
Adaptation (eye)
Relevance (law)
Table (database)
Sequence (biology)
Text Generation
Speech Recognition
Natural Language Processing
Data Mining
Linguistics
Engineering
Paleontology
Philosophy
Physics
Genetics
Systems Engineering
Optics
Law
Political Science
Biology
Authors
Muhammad Akbar,Said Al Faraby,Ade Romadhony,Adiwijaya Adiwijaya
Identifier
DOI:10.1109/i2ct57861.2023.10126285
Abstract
Question Generation (QG) is the task of generating questions from an input context. It can be approached in several ways, from conventional rule-based systems to recently emerging sequence-to-sequence models. A limitation of most QG systems is that they accept only text input. Multimodal QG, by contrast, covers several input types: text, images, tables, video, or even acoustics. In this paper, we present a method for the Multimodal Question Generation task that attaches a Multimodal Adaptation Gate (MAG) to a BERT-based model. The results show that the proposed method successfully performs Multimodal Question Generation: the generated questions achieve 16.05 BLEU-4 and 28.27 ROUGE-L, and human evaluation of the generated questions yields 55% fluency and 53% relevance.
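The Multimodal Adaptation Gate mentioned in the abstract shifts a BERT token embedding by a gated displacement computed from the non-text modality. The following is a minimal NumPy sketch of that forward pass, not the authors' implementation: the dimensions, weight matrices (`W_g`, `W_v`), and the `beta` hyperparameter are illustrative assumptions, and the real module uses learned parameters and a LayerNorm that are omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
d_text, d_img = 8, 5  # hypothetical hidden sizes, not the paper's

# Hypothetical "learned" parameters, randomly initialized for the sketch.
W_g = rng.normal(size=(d_text + d_img, d_text))  # gating weights over [h; v]
W_v = rng.normal(size=(d_img, d_text))           # projection of the image modality
beta = 0.5                                       # displacement-scaling hyperparameter

def relu(x):
    return np.maximum(x, 0.0)

def mag(h, v, eps=1e-6):
    """Shift text embedding h by a gated displacement derived from modality v."""
    g = relu(np.concatenate([h, v]) @ W_g)  # gate conditioned on both modalities
    H = g * (v @ W_v)                       # gated non-text displacement vector
    # Cap the shift so the displacement never dominates the text embedding.
    alpha = min(np.linalg.norm(h) / (np.linalg.norm(H) + eps) * beta, 1.0)
    return h + alpha * H                    # shifted embedding (LayerNorm omitted)

h = rng.normal(size=d_text)  # a token embedding from the BERT encoder
v = rng.normal(size=d_img)   # an aligned image feature for the same token
h_shifted = mag(h, v)
```

One property worth noting: when the non-text feature is a zero vector, the displacement `H` is zero and the token embedding passes through unchanged, so the gate degrades gracefully to text-only behavior.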