Acta Scientiarum Naturalium Universitatis Pekinensis ›› 2019, Vol. 55 ›› Issue (1): 113-119. DOI: 10.13209/j.0479-8023.2018.065

N3LDG: A Lightweight Neural Network Library for Natural Language Processing

WANG Qiansheng, YU Nan, ZHANG Meishan, HAN Zijia, FU Guohong

  1. School of Computer Science and Technology, Heilongjiang University, Harbin 150080
  • Received: 2018-04-21  Revised: 2018-08-15  Online: 2019-01-20  Published: 2019-01-20
  • Corresponding author: FU Guohong, E-mail: ghfu(at)hotmail.com
  • Supported by the National Natural Science Foundation of China (61672211, 61602160) and the Natural Science Foundation of Heilongjiang Province (F2016036)

Abstract:

The authors propose N3LDG, a lightweight neural network library for natural language processing. N3LDG supports constructing computation graphs dynamically and batching their execution automatically. Experiments show that N3LDG constructs and executes computation graphs efficiently when training CNN, Bi-LSTM, and Tree-LSTM models. When training these models on CPU, N3LDG is faster than PyTorch; when training CNN and Tree-LSTM on GPU, N3LDG is also faster than PyTorch.

Key words: deep learning library, NLP, lightweight, CUDA
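
The abstract describes two core mechanisms: building the computation graph dynamically as each input is processed, and automatically grouping nodes of the same operation type so they execute as one batch. The C++ sketch below illustrates that general idea only, under assumed names; the classes and functions shown here (Graph, Node, addNode, execute, computeBatch) are hypothetical and are not N3LDG's actual API.

// Minimal sketch of dynamic graph construction with automatic batching.
// All names are illustrative, not N3LDG's real interface.
#include <iostream>
#include <map>
#include <memory>
#include <string>
#include <vector>

// A node records its operation type and its inputs; the graph is built
// dynamically while the forward pass of a single example is declared.
struct Node {
    std::string op;             // e.g. "lookup", "linear", "tanh"
    std::vector<Node*> inputs;  // edges of the computation graph
    bool executed = false;
};

// Pretend-execution of one batch: a real library would launch a single
// CPU or CUDA kernel over all nodes in the batch instead of one per node.
void computeBatch(const std::vector<Node*>& batch) {
    std::cout << "executing " << batch.size() << " '" << batch.front()->op
              << "' nodes as one batch\n";
    for (Node* n : batch) n->executed = true;
}

struct Graph {
    std::vector<std::unique_ptr<Node>> nodes;

    // Dynamic construction: nodes are appended as the model code runs.
    Node* addNode(std::string op, std::vector<Node*> inputs = {}) {
        auto n = std::make_unique<Node>();
        n->op = std::move(op);
        n->inputs = std::move(inputs);
        nodes.push_back(std::move(n));
        return nodes.back().get();
    }

    // Automatic batching: execute the graph in waves, grouping all ready
    // nodes that share an operation type into a single batched call.
    void execute() {
        auto remaining = nodes.size();
        while (remaining > 0) {
            std::map<std::string, std::vector<Node*>> ready;
            for (auto& n : nodes) {
                if (n->executed) continue;
                bool ok = true;
                for (Node* in : n->inputs)
                    if (!in->executed) { ok = false; break; }
                if (ok) ready[n->op].push_back(n.get());
            }
            for (auto& kv : ready) {
                computeBatch(kv.second);
                remaining -= kv.second.size();
            }
        }
    }
};

int main() {
    Graph g;
    // Build a tiny graph per input, as a dynamic-graph library would.
    std::vector<Node*> emb;
    for (int i = 0; i < 4; ++i) emb.push_back(g.addNode("lookup"));
    Node* h1 = g.addNode("linear", {emb[0], emb[1]});
    Node* h2 = g.addNode("linear", {emb[2], emb[3]});
    g.addNode("tanh", {h1, h2});
    g.execute();  // lookups, then linears, then tanh, each as one batch
    return 0;
}

Grouping same-typed nodes matters most on GPU, where one kernel launch over many nodes amortizes launch overhead, which is presumably why CUDA appears among the keywords; the paper's reported speedups over PyTorch are claims of the abstract, not of this sketch.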