Computer science
Speech translation
BLEU
Speech recognition
Machine translation
Language model
Pipeline (software)
Transformer
Artificial intelligence
Natural language processing
Spoken language
Translation (biology)
Programming language
Biochemistry
Quantum mechanics
Gene
Messenger RNA
Physics
Voltage
Chemistry
Authors
Hari Krishna Vydana,Martin Karafiát,Kateřina Žmolíková,Lukáš Burget,Honza Černocký
Identifier
DOI: 10.1109/icassp39728.2021.9414159
Abstract
End-to-end and cascade (ASR-MT) spoken language translation (SLT) systems are reaching comparable performance; however, a large degradation is observed when translating ASR hypotheses compared to using oracle input text. In this work, that degradation is reduced by creating an end-to-end differentiable pipeline between the ASR and MT systems: the SLT system is trained with the ASR objective as an auxiliary loss, and the two networks are connected through their neural hidden representations. This training has an end-to-end differentiable path with respect to the final objective function and uses the ASR objective for better optimization. The proposed architecture improves the BLEU score from 41.21 to 44.69, and ensembling it with independently trained ASR and MT systems further improves the BLEU score from 44.69 to 46.9. All experiments are reported on an English-Portuguese speech translation task using the How2 corpus. The final BLEU score is on par with the best speech translation system on the How2 dataset, without any additional training data or language model and with fewer parameters.
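The abstract describes coupling the ASR and MT networks through continuous hidden representations and training with the ASR objective as an auxiliary loss, so that the whole pipeline is differentiable with respect to the translation objective. Below is a minimal PyTorch-style sketch of that general idea only; it is not the authors' architecture. All module names and sizes (JointSLT, joint_loss, asr_weight, the LSTM acoustic encoder) and the choice of CTC as the auxiliary ASR loss are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointSLT(nn.Module):
    """ASR and MT networks coupled through continuous hidden states (sketch)."""

    def __init__(self, n_mels=80, src_vocab=500, tgt_vocab=500, d_model=256):
        super().__init__()
        # ASR branch: acoustic encoder + CTC-style projection to the source vocabulary.
        self.asr_encoder = nn.LSTM(n_mels, d_model, num_layers=2,
                                   batch_first=True, bidirectional=True)
        self.asr_proj = nn.Linear(2 * d_model, src_vocab + 1)  # +1 for the CTC blank
        # Bridge: the MT branch reads the ASR hidden states rather than a discrete
        # transcript, so gradients from the translation loss reach the ASR branch.
        self.bridge = nn.Linear(2 * d_model, d_model)
        self.mt_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2)
        self.mt_decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2)
        self.tgt_embed = nn.Embedding(tgt_vocab, d_model)
        self.tgt_proj = nn.Linear(d_model, tgt_vocab)

    def forward(self, feats, tgt_in):
        h, _ = self.asr_encoder(feats)                  # (B, T, 2*d_model)
        asr_logits = self.asr_proj(h)                   # used only by the auxiliary loss
        memory = self.mt_encoder(self.bridge(h))        # continuous "transcript"
        dec = self.mt_decoder(self.tgt_embed(tgt_in), memory)
        return asr_logits, self.tgt_proj(dec)

def joint_loss(asr_logits, mt_logits, src_tokens, src_lens, tgt_out, asr_weight=0.3):
    # Main objective: translation cross-entropy. Auxiliary objective: CTC on the
    # ASR logits. Both are differentiable w.r.t. every parameter in the pipeline.
    B, T, _ = asr_logits.shape
    log_probs = asr_logits.log_softmax(-1).transpose(0, 1)   # (T, B, C) for CTC
    input_lens = torch.full((B,), T, dtype=torch.long)
    ctc = nn.CTCLoss(blank=asr_logits.size(-1) - 1, zero_infinity=True)
    asr_loss = ctc(log_probs, src_tokens, input_lens, src_lens)
    mt_loss = F.cross_entropy(mt_logits.reshape(-1, mt_logits.size(-1)),
                              tgt_out.reshape(-1))
    return mt_loss + asr_weight * asr_loss

# Toy forward/backward pass with random tensors, just to show the gradient path.
model = JointSLT()
feats = torch.randn(2, 120, 80)             # (batch, frames, mel bins)
src = torch.randint(0, 500, (2, 20))        # source-language token ids
src_lens = torch.tensor([20, 18])
tgt_in = torch.randint(0, 500, (2, 25))     # target-language ids (decoder input)
tgt_out = torch.randint(0, 500, (2, 25))    # target-language ids (shifted labels)
asr_logits, mt_logits = model(feats, tgt_in)
loss = joint_loss(asr_logits, mt_logits, src, src_lens, tgt_out)
loss.backward()                             # gradients flow back into the ASR encoder
```

The point the sketch tries to capture is the design choice in the abstract: by passing continuous hidden states to the MT network instead of a discretized best ASR hypothesis, the translation loss can back-propagate through the bridge into the speech encoder, while the auxiliary ASR loss keeps those hidden states acoustically grounded.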