Acta Scientiarum Naturalium Universitatis Pekinensis ›› 2020, Vol. 56 ›› Issue (1): 9-15. DOI: 10.13209/j.0479-8023.2019.103


  • Funding: Basic Research Projects of the Shenzhen Science and Technology Program (JCYJ20180306124612893, JCYJ20170818160208570, JCYJ20170307160458368)

Discourse-Level Text Generation Method Based on Topical Constraint

HUANG Yan1,2, SUN Haili1, XU Ke1,3, YU Xiaoyang1, WANG Tongyang1,†, ZHANG Xinfang1, LU Songfeng1,2   

  1. School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan 430074
  2. Shenzhen Huazhong University of Science and Technology Research Institute, Shenzhen 518063
  3. School of Computer Science, South-Central University for Nationalities, Wuhan 430074
  • Received: 2019-05-22  Revised: 2019-09-23  Online: 2020-01-20  Published: 2020-01-20
  • Contact: WANG Tongyang, E-mail: platanus(at)hust.edu.cn


Abstract:

To address the problem that automatically generated text lacks a central topic, this paper proposes a discourse-level text generation method based on topical constraint. Given a short topic description from the user, the method extracts several topic words from it, then extends and clusters them to form a topic plan for the article; the keywords in each cluster constrain the generation of the corresponding paragraph. The model improves the attention-based recurrent neural network text generation model in three aspects: topic distribution, attention scoring function, and topic coverage generation. In experiments on three real datasets, the proposed method is compared with the Char-RNN, SC-LSTM, and MTA-LSTM benchmark models, and the three improvements are verified independently. Experimental results show that the proposed model outperforms the benchmarks on both human evaluation and the BLEU metric, and that the generated text fits the topic more closely.
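The abstract names attention scoring and topic coverage among the three improvements. A minimal sketch of how coverage-weighted attention over topic-word embeddings could steer generation toward not-yet-covered topics is given below. This is not the authors' implementation: the function names, tensor shapes, additive (tanh) scoring form, and the 0.1 coverage decay rate are all illustrative assumptions.

```python
# Hedged sketch: coverage-constrained attention over topic-word embeddings,
# in the spirit of the improvements the abstract describes. All names and
# hyperparameters here are illustrative assumptions, not the paper's code.
import numpy as np

def topic_attention(hidden, topics, coverage, Wa, Ua, va):
    """One decoding step of coverage-weighted additive attention.

    hidden   : (d,)   current decoder hidden state
    topics   : (K, d) embeddings of the K topic words for this paragraph
    coverage : (K,)   remaining "budget" of each topic, starts at 1.0
    Wa, Ua   : (d, d) projection matrices; va : (d,) scoring vector
    Returns the topic context vector and the updated coverage.
    """
    # Additive (Bahdanau-style) scores, one per topic word.
    scores = np.tanh(hidden @ Wa + topics @ Ua) @ va          # (K,)
    # Topics already covered should attract proportionally less attention.
    weights = coverage * np.exp(scores - scores.max())        # (K,)
    weights = weights / weights.sum()
    context = weights @ topics                                # (d,)
    # Spend coverage in proportion to the attention each topic received.
    coverage = np.clip(coverage - 0.1 * weights, 0.0, 1.0)
    return context, coverage

rng = np.random.default_rng(0)
d, K = 8, 3
h = rng.normal(size=d)
T = rng.normal(size=(K, d))
Wa, Ua, va = rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=d)
cov = np.ones(K)
for _ in range(5):                        # a few decoding steps
    ctx, cov = topic_attention(h, T, cov, Wa, Ua, va)
print(cov)  # coverage shrinks fastest for the most-attended topics
```

In this sketch the coverage vector plays the role of a soft constraint: once a topic word has absorbed enough attention mass, its weight in later steps drops, nudging the decoder toward the paragraph's remaining topic words.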

Key words: automatic text generation, topical constraint, recurrent neural network (RNN), long short-term memory (LSTM), attention mechanism