Acta Scientiarum Naturalium Universitatis Pekinensis ›› 2020, Vol. 56 ›› Issue (1): 31-38.DOI: 10.13209/j.0479-8023.2019.093


Analysis of Bi-directional Reranking Model for Uyghur-Chinese Neural Machine Translation

ZHANG Xinlu1,2,3, LI Xiao1,2,3,†, YANG Yating1,2,3, WANG Lei1,2,3, DONG Rui1,2,3   

  1. Xinjiang Technical Institute of Physics & Chemistry, Chinese Academy of Sciences, Urumqi 830011; 2. University of Chinese Academy of Sciences, Beijing 100049; 3. Xinjiang Laboratory of Minority Speech and Language Information Processing, Urumqi 830011
  • Received:2019-06-02 Revised:2019-09-27 Online:2020-01-20 Published:2020-01-20
  • Contact: LI Xiao, E-mail: xiaoli(at)ms.xjb.ac.cn

  • Funding: Supported by the Open Project of the Key Laboratory of Xinjiang Uygur Autonomous Region (2018D04018), the National Natural Science Foundation of China (U1703133), the Youth Innovation Promotion Association of the Chinese Academy of Sciences (2017472), and the High-Level Talent Introduction Project of Xinjiang Uygur Autonomous Region (Y839031201)

Abstract:

On a low-resource corpus such as Uyghur-Chinese, the training of a neural machine translation model easily falls into a local optimum, so the translation produced by a single model may not be globally optimal. To address this problem, an ensemble strategy is used to effectively integrate the probability distributions predicted by multiple models, treating the translation models as a whole. At the same time, a reranking method based on cross entropy combines translation models with opposite decoding directions, and the candidate translation with the highest combined score is selected as the output. Experiments on the CWMT2015 Uyghur-Chinese parallel corpus show that the proposed method improves over a single Transformer model by 4.82 BLEU.
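The two ideas in the abstract, probability-level ensembling of multiple models and reranking candidates with models that decode in opposite directions, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the candidate strings, the per-model log-probabilities, and the interpolation weight `alpha` are all invented for the example.

```python
import math

# Hypothetical candidate translations, each scored (log-probability) by an
# ensemble of left-to-right (L2R) models and an ensemble of right-to-left
# (R2L) models. All values below are illustrative, not experimental data.
candidates = ["translation A", "translation B", "translation C"]
l2r_scores = [[-2.1, -2.3], [-1.8, -2.0], [-2.5, -2.4]]  # one entry per L2R model
r2l_scores = [[-2.4, -2.2], [-2.0, -1.9], [-2.6, -2.7]]  # one entry per R2L model

def ensemble(logprobs):
    """Average the models' probabilities; return the log of the mean."""
    return math.log(sum(math.exp(lp) for lp in logprobs) / len(logprobs))

def rerank(cands, l2r, r2l, alpha=0.5):
    """Interpolate the ensembled scores of both decoding directions
    and return the candidate with the highest combined score."""
    combined = [alpha * ensemble(f) + (1 - alpha) * ensemble(b)
                for f, b in zip(l2r, r2l)]
    best = max(range(len(cands)), key=lambda i: combined[i])
    return cands[best], combined[best]

best, score = rerank(candidates, l2r_scores, r2l_scores)
print(best)  # the candidate favored by both decoding directions
```

Here "translation B" wins because both the forward and backward ensembles assign it the highest probability; a candidate that only one direction favors is penalized by the interpolation.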

Key words: neural machine translation, ensemble learning, bi-directional reranking, Uyghur
