Acta Scientiarum Naturalium Universitatis Pekinensis ›› 2018, Vol. 54 ›› Issue (2): 286-292.DOI: 10.13209/j.0479-8023.2017.155


LSTM Based Question Answering for Large Scale Knowledge Base

ZHOU Botong, SUN Chengjie, LIN Lei, LIU Bingquan   

  1. School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001
  • Received:2017-06-04 Revised:2017-09-05 Online:2018-03-20 Published:2018-03-20
  • Contact: SUN Chengjie, E-mail: cjsun(at)


  • Supported by: the National High Technology Research and Development Program of China (2015AA015405) and the National Natural Science Foundation of China (61572151, 61602131)


To address the characteristics of large scale knowledge base question answering (KBQA), a question answering system is built on a large scale Chinese knowledge base. The system consists of three main steps: named entity recognition in the question, mapping from the question to a property in the KB, and answer selection. An alias dictionary combined with an LSTM language model is used to recognize the named entity contained in the question, and a bidirectional LSTM combined with two different attention mechanisms is used for question-property mapping. Finally, the results of the first two steps are exploited for entity disambiguation and answer selection. The average F1 value of the proposed system on the dataset of the NLPCC-ICCPOL 2016 KBQA task is 0.8106, which is close to the best reported result.
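The question-property mapping step can be illustrated with a minimal sketch: attention weights over the question's encoder hidden states are conditioned on a candidate property embedding, and candidates are ranked by similarity to the attended question representation. This is a hypothetical NumPy illustration; the function names, dimensions, and the use of dot-product attention with cosine ranking are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(question_states, property_vec):
    """Dot-product attention over question hidden states, conditioned on a
    property embedding: weights = softmax(H @ p), output = weighted sum."""
    scores = question_states @ property_vec      # (T,) one score per time step
    weights = softmax(scores)                    # attention distribution
    return weights @ question_states             # (d,) question representation

def rank_properties(question_states, property_vecs):
    """Score each candidate KB property by cosine similarity between the
    attended question representation and the property embedding."""
    sims = []
    for p in property_vecs:
        q = attend(question_states, p)
        sims.append(float(q @ p / (np.linalg.norm(q) * np.linalg.norm(p) + 1e-8)))
    return int(np.argmax(sims)), sims

# Toy example: 4 time steps of (hypothetical) BiLSTM outputs with dim 3,
# and two candidate property embeddings.
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))
props = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
best, sims = rank_properties(H, props)
print(best, sims)
```

In the full system described above, the hidden states would come from a bidirectional LSTM over the question with the entity mention masked out, and the top-ranked property would be combined with the recognized entity to retrieve the answer from the KB.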

Key words: knowledge base, question answering, named entity recognition, attention mechanism

