Automatic summarization
Computer science
Decoding methods
Encoder
Gating
Task (project management)
Artificial intelligence
Key (lock)
Natural language processing
Machine learning
Algorithm
Biology
Operating system
Physiology
Economics
Computer security
Management
Authors
Bingfei Zhao, Hongying Zan, Chengzhi Niu, Kunli Zhang
Identifier
DOI: 10.1109/ialp61005.2023.10337097
Abstract
Abstractive summarization is a key technique for automatic text summarization. However, existing generative models typically rely on beam search decoding, whose large search space often leads to suboptimal summaries. To address this, we propose a multi-task learning framework for text re-ordering. Specifically, a multi-task model re-orders the candidate summaries according to different evaluation metrics so as to select the optimal candidate. Furthermore, we replace the gating network in mixture-of-experts models with a smooth gating control method to alleviate the problem of non-smooth parameters. To strengthen semantic feature extraction, we incorporate TextCNN into the base encoder. Experiments on the Chinese long-text summarization datasets CLTS and CLTS+ show significant improvements of our method over the best models reported in prior work, demonstrating the efficacy of the multi-task re-ordering framework and smooth gating control. Ablation studies analyze the impact of different decoding methods and beam sizes, as well as the contribution of the individual re-ordering methods integrated into the framework.
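The abstract mentions two architectural ideas without detail: replacing the hard gating network of a mixture-of-experts layer with a smooth gating control, and adding TextCNN on top of the basic encoder to extract local semantic features. The following PyTorch snippet is a minimal sketch of those two ideas, not the authors' implementation; all module names, dimensions, kernel sizes, and the temperature value are illustrative assumptions.

```python
# Minimal sketch (assumed details, not the paper's code): a softmax-gated MoE
# layer where every expert contributes with a smooth weight, plus a TextCNN-style
# convolutional block applied to encoder states to capture local n-gram semantics.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmoothGatedMoE(nn.Module):
    """Mixture of experts with a smooth (temperature-scaled softmax) gate."""

    def __init__(self, d_model: int, n_experts: int, temperature: float = 2.0):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_experts))
        self.gate = nn.Linear(d_model, n_experts)
        self.temperature = temperature  # higher temperature -> smoother expert weights

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        weights = F.softmax(self.gate(x) / self.temperature, dim=-1)    # (B, L, E)
        expert_out = torch.stack([e(x) for e in self.experts], dim=-2)  # (B, L, E, D)
        # Weighted sum over all experts; no hard top-k selection, so the
        # gate output changes smoothly with the input.
        return (weights.unsqueeze(-1) * expert_out).sum(dim=-2)


class TextCNNBlock(nn.Module):
    """Convolutions over encoder states to extract local n-gram features."""

    def __init__(self, d_model: int, kernel_sizes=(2, 3, 4), channels: int = 128):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(d_model, channels, k, padding=k // 2) for k in kernel_sizes
        )
        self.proj = nn.Linear(channels * len(kernel_sizes), d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); convolve along the sequence dimension.
        h = x.transpose(1, 2)                                       # (B, D, L)
        feats = [F.relu(conv(h))[..., : x.size(1)] for conv in self.convs]
        merged = torch.cat(feats, dim=1).transpose(1, 2)            # (B, L, C * n_kernels)
        return x + self.proj(merged)                                # residual on encoder states


if __name__ == "__main__":
    hidden = torch.randn(2, 16, 256)              # stand-in for basic encoder output
    hidden = TextCNNBlock(256)(hidden)            # enrich with local n-gram features
    fused = SmoothGatedMoE(256, n_experts=4)(hidden)
    print(fused.shape)                            # torch.Size([2, 16, 256])
```

In this sketch, the smooth gate simply keeps all experts active with softmax weights rather than routing to a top-k subset; how the paper actually parameterizes its smooth gating control is not specified in the abstract.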