Automatic summarization
Computer science
Natural language processing
Artificial intelligence
Feature (linguistics)
Representation
Sequence
Feature learning
Encoding
Linguistics
Set (abstract data type)
Programming language
Authors
Shusheng Xu, Xingxing Zhang, Yi Wu, Furu Wei
Source
Journal: Proceedings of the ... AAAI Conference on Artificial Intelligence
[Association for the Advancement of Artificial Intelligence (AAAI)]
Date: 2022-06-28
Volume/Issue: 36 (10): 11556-11565
Citations: 55
Identifier
DOI: 10.1609/aaai.v36i10.21409
Abstract
Contrastive learning models have achieved great success in unsupervised visual representation learning: they maximize the similarities between feature representations of different views of the same image while minimizing the similarities between feature representations of views of different images. In text summarization, the output summary is a shorter form of the input document, and the two share similar meanings. In this paper, we propose a contrastive learning model for supervised abstractive text summarization, in which we view a document, its gold summary, and its model-generated summaries as different views of the same mean representation and maximize the similarities between them during training. We improve over a strong sequence-to-sequence text generation model (i.e., BART) on three different summarization datasets. Human evaluation also shows that our model achieves better faithfulness ratings than its counterpart trained without contrastive objectives. We release our code at https://github.com/xssstory/SeqCo.
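The training signal sketched in the abstract, pulling together the representations of a document and its summaries, can be illustrated with a minimal toy example. This is not the authors' SeqCo implementation (which operates on BART encoder-decoder states and is trained end-to-end); the names `mean_pool` and `seq_similarity_loss` are hypothetical, and random arrays stand in for learned token representations.

```python
import numpy as np

def mean_pool(token_vecs):
    """Average per-token vectors (seq_len, hidden) into one sequence vector."""
    return token_vecs.mean(axis=0)

def cosine_sim(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def seq_similarity_loss(a_vecs, b_vecs):
    # 1 - cosine similarity between pooled sequence representations:
    # minimizing this pulls the two "views" (e.g. a document and its
    # summary) toward the same point in representation space.
    return 1.0 - cosine_sim(mean_pool(a_vecs), mean_pool(b_vecs))

rng = np.random.default_rng(0)
doc = rng.normal(size=(12, 8))                    # toy "document" token states
summ = doc[:4] + 0.01 * rng.normal(size=(4, 8))   # a nearby "summary" view
loss = seq_similarity_loss(doc, summ)             # lies in [0, 2]
```

In an actual summarization model, the pooled vectors would come from the network itself, so gradients of this loss update the encoder and decoder rather than fixed arrays.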