Acta Scientiarum Naturalium Universitatis Pekinensis ›› 2019, Vol. 55 ›› Issue (1): 84-90. DOI: 10.13209/j.0479-8023.2018.059


Similar Legal Case Retrieval Based on an Improved Siamese Network Structure

LI Lanjun, ZHOU Junsheng, GU Yanhui, QU Weiguang

  1. School of Computer Science and Technology, Nanjing Normal University, Nanjing 210023
  • Received: 2018-04-15  Revised: 2018-08-13  Online: 2019-01-20  Published: 2019-01-20
  • Corresponding author: ZHOU Junsheng, E-mail: zhoujs(at)njnu.edu.cn
  • Supported by:
    National Natural Science Foundation of China (61472191, 61772278, 41571382), Open Fund of the Fujian Provincial Key Laboratory of Information Processing and Intelligent Control (MJUKF201705), Philosophy and Social Science Research Project of Jiangsu Higher Education Institutions (2016SJB740004), and Major Natural Science Research Project of Jiangsu Higher Education Institutions (15KJA420001)

Similar Legal Case Retrieval Based on Improved Siamese Network

LI Lanjun, ZHOU Junsheng, GU Yanhui, QU Weiguang

  1. School of Computer Science and Technology, Nanjing Normal University, Nanjing 210023
  • Received:2018-04-15 Revised:2018-08-13 Online:2019-01-20 Published:2019-01-20
  • Contact: ZHOU Junsheng, E-mail: zhoujs(at)njnu.edu.cn

Abstract:

Most existing document similarity calculation methods based on Siamese network structures take the entire document as the model's input sequence, which easily leads to data sparsity. To address this, a hierarchical attention mechanism is used to improve the document representation in the Siamese network structure. Since the Siamese network model based on the hierarchical attention mechanism may overlook important sentences in a document at input time, a two-step document similarity calculation method that introduces document content compression is further proposed. Experiments on a purpose-built annotated dataset of legal case document similarity show that the proposed method clearly outperforms the Siamese network model based on Long Short-Term Memory (LSTM).

Key words: document similarity calculation, siamese network, attention mechanism, document content compression

Abstract:

Most existing document similarity calculation methods based on Siamese networks treat the entire document as the model's input sequence, which easily leads to data sparsity. A hierarchical attention mechanism is therefore used to improve the document representation in the Siamese network. Since the Siamese network model based on the hierarchical attention mechanism may ignore important sentences in the document at input time, a two-step document similarity calculation method that introduces document content compression is further proposed. Experiments on an annotated dataset of legal case document similarity developed by the authors show that the proposed method is clearly superior to the Siamese network model based on Long Short-Term Memory (LSTM).

Key words: document similarity calculation, siamese network, attention mechanism, document content compression
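The paper's actual model is a learned Siamese network with hierarchical attention encoders; as a rough illustrative sketch of the underlying idea only, the toy code below shows the two key ingredients the abstract names: a hierarchical document representation (words are pooled into sentence vectors, sentence vectors into a document vector) and the Siamese setup (the same encoder maps both documents, and a distance function scores the pair). Plain averaging stands in for the learned attention/LSTM encoders, and the embeddings and documents are made-up toy data, not from the paper.

```python
import math

# Toy word embeddings (made-up; a real system would use trained
# embeddings and learned LSTM/attention encoders).
EMB = {
    "contract": [1.0, 0.0, 0.0],
    "breach":   [0.8, 0.2, 0.0],
    "damages":  [0.6, 0.4, 0.0],
    "theft":    [0.0, 1.0, 0.0],
    "arrest":   [0.0, 0.8, 0.2],
}
DIM = 3

def sentence_vector(sentence):
    """Word level: pool word embeddings into one sentence vector."""
    vecs = [EMB.get(w, [0.0] * DIM) for w in sentence.split()]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(DIM)]

def document_vector(sentences):
    """Sentence level: pool sentence vectors into one document vector.
    This two-level pooling is the 'hierarchical' part of the idea."""
    svecs = [sentence_vector(s) for s in sentences]
    return [sum(v[i] for v in svecs) / len(svecs) for i in range(DIM)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def similarity(doc_a, doc_b):
    """Siamese setup: the SAME encoder maps both documents, then a
    distance function (cosine here) scores their similarity."""
    return cosine(document_vector(doc_a), document_vector(doc_b))

# Two contract-related toy cases and one criminal-law toy case.
doc1 = ["contract breach", "damages contract"]
doc2 = ["breach damages", "contract breach"]
doc3 = ["theft arrest", "arrest theft"]

print(similarity(doc1, doc2))  # high: same topic
print(similarity(doc1, doc3))  # low: different topic
```

In the paper's model, the averaging steps are replaced by attention-weighted encoders trained so that similar legal cases map to nearby vectors, and the two-step variant first compresses each document's content before encoding it.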