Computer Science
Sentence
Encoder
Transformer
Machine Translation
Encoding
Source Text
Representation
Artificial Intelligence
Authors
Jaehun Shin, WonKee Lee, Byung-Hyun Go, Baikjin Jung, Youngkil Kim, Jong-Hyeok Lee
Source
Journal: ACM Transactions on Asian and Low-Resource Language Information Processing
Date: 2021-08-12
Volume/Issue: 20(6): 1-17
Citations: 3
Abstract
Automatic post-editing (APE) is the study of correcting translation errors in the output of an unknown machine translation (MT) system and has been considered a method of improving translation quality without modifying conventional MT systems. Recently, several Transformer variants that take both the MT output and its corresponding source sentence as inputs have been proposed for APE, and models that introduce an additional attention layer into the encoder to jointly encode the MT output with its source sentence ranked highly in the WMT19 APE shared task. We examine the effectiveness of this joint-encoding strategy in a controlled environment and compare four types of decoder multi-source attention strategies introduced in previous APE models. The experimental results indicate that the joint-encoding strategy is effective and that taking the final encoded representation of the source sentence is more appropriate than taking that representation from within the same encoder stack. Furthermore, among the multi-source attention strategies combined with joint encoding, the strategy that applies attention to the concatenated input representation and the strategy that adds up the individual attentions to each input both improve the quality of APE results over using joint encoding alone.
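The two decoder-side combination strategies the abstract singles out can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (the paper uses full Transformer models with learned projections and multiple heads); it only shows, for a single unprojected attention head, the difference between attending over the concatenation of the two encoded inputs and summing separate attentions to each input. All array shapes and names here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention: single head, no learned projections.
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    return softmax(scores) @ v

rng = np.random.default_rng(0)
d = 8
dec = rng.standard_normal((3, d))  # decoder-side queries (3 target positions)
src = rng.standard_normal((5, d))  # encoded source sentence (5 tokens)
mt = rng.standard_normal((4, d))   # encoded MT output (4 tokens)

# Strategy A: attend over the concatenated representation of both inputs.
joint = np.concatenate([src, mt], axis=0)
out_concat = attention(dec, joint, joint)

# Strategy B: attend to each input separately and add the results.
out_sum = attention(dec, src, src) + attention(dec, mt, mt)

print(out_concat.shape, out_sum.shape)  # both (3, 8)
```

Note the design difference: strategy A lets source and MT tokens compete within one softmax, while strategy B normalizes attention over each input independently before summing, so each input is guaranteed a contribution.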