Acta Scientiarum Naturalium Universitatis Pekinensis, 2021, Vol. 57, Issue (1): 45-52. DOI: 10.13209/j.0479-8023.2020.086


Syntax-based Code Generation Model with Selective Local Attention and Pre-order Information LSTM Decoder

LIANG Wanying1,2, ZHU Jia1,2,†, WU Zhijie1,2, YAN Zhiwen1,2, TANG Yong1,2, HUANG Jin1,2, YU Weihao1,2

  1. School of Computer, South China Normal University, Guangzhou 510631  2. Guangzhou Key Laboratory of Big Data and Intelligent Education, Guangzhou 510631
  • Received: 2020-06-08  Revised: 2020-08-08  Online: 2021-01-20  Published: 2021-01-20
  • Contact: ZHU Jia, E-mail: jzhu(at)m.scnu.edu.cn

  • Supported by: National Natural Science Foundation of China (61877020, U1811263, 61772211), Key-Area Research and Development Program of Guangdong Province (2018B010109002), Science and Technology Program of Guangzhou (201904010393), and Guangzhou Key Laboratory of Big Data and Intelligent Education (201905010009)

Abstract:

This paper proposes a syntax-based code generation model with selective local attention and a pre-order information decoder built on a long short-term memory (LSTM) neural network. The model aims to strengthen the relevance between words by changing the calculation scope of the context vector and by fusing more pre-order information into the decoding process. Code generation experiments on two datasets, Hearthstone and Django, confirm the effectiveness of the model: compared with state-of-the-art models, it not only achieves excellent accuracy and bilingual evaluation understudy (BLEU) scores but also minimizes computational effort.
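
To make the attention mechanism concrete, the sketch below shows one way a selective local attention layer could restrict the calculation scope of the context vector to a window of encoder states, in the spirit of the abstract. It is a minimal illustrative PyTorch implementation, not the paper's exact formulation: the class name SelectiveLocalAttention, the window half-width, and the Gaussian re-weighting (borrowed from Luong-style local attention) are assumptions.

```python
# Illustrative sketch only: local attention whose context vector is computed
# from a selected window of encoder states rather than the full sequence.
# Window size and Gaussian re-weighting are assumptions, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelectiveLocalAttention(nn.Module):
    def __init__(self, hidden_size, window_size=5):
        super().__init__()
        self.window_size = window_size                 # half-width D of the local window
        self.score = nn.Linear(hidden_size * 2, 1)     # additive attention score
        self.position = nn.Linear(hidden_size, 1)      # predicts the window centre

    def forward(self, decoder_state, encoder_states):
        # decoder_state: (batch, hidden); encoder_states: (batch, src_len, hidden)
        batch, src_len, hidden = encoder_states.size()
        # Predict an aligned source position p_t in [0, src_len - 1].
        p_t = (src_len - 1) * torch.sigmoid(self.position(decoder_state)).squeeze(-1)
        positions = torch.arange(src_len, device=encoder_states.device).float()
        # Keep only encoder states inside the window [p_t - D, p_t + D].
        in_window = (positions.unsqueeze(0) - p_t.unsqueeze(1)).abs() <= self.window_size
        # Score every (decoder state, encoder state) pair, then mask out-of-window states.
        expanded = decoder_state.unsqueeze(1).expand(-1, src_len, -1)
        scores = self.score(torch.cat([expanded, encoder_states], dim=-1)).squeeze(-1)
        scores = scores.masked_fill(~in_window, float('-inf'))
        # Gaussian re-weighting centred at p_t sharpens focus inside the window.
        gauss = torch.exp(-((positions.unsqueeze(0) - p_t.unsqueeze(1)) ** 2)
                          / (2 * (self.window_size / 2) ** 2))
        weights = F.softmax(scores, dim=-1) * gauss
        weights = weights / weights.sum(dim=-1, keepdim=True).clamp(min=1e-8)
        # Context vector computed only from the selected local scope.
        return torch.bmm(weights.unsqueeze(1), encoder_states).squeeze(1)
```

In such a setup, the decoder would call this layer at every time step to obtain a context vector before predicting the next grammar rule or token, so the attention scope stays local rather than spanning the whole input description.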

Key words: code generation, abstract syntax tree, pre-order information LSTM, selective local attention
