Acta Scientiarum Naturalium Universitatis Pekinensis ›› 2021, Vol. 57 ›› Issue (4): 605-613. DOI: 10.13209/j.0479-8023.2021.052


A Dynamic Graph Convolutional Network Based on Spatial-Temporal Modeling

LI Jing1, LIU Yu2, ZOU Lei2,†

  1. Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871
  2. Wangxuan Institute of Computer Technology, Peking University, Beijing 100871
  • Received: 2020-05-25  Revised: 2020-06-07  Online: 2021-07-20  Published: 2021-07-20
  • Contact: ZOU Lei, E-mail: zoulei(at)pku.edu.cn
  • Funding: Supported by the National Natural Science Foundation of China (61932001)

Abstract:

To learn embeddings that remain informative for dynamic graphs whose nodes and edges change over time, a dynamic graph convolutional network (DyGCN) is proposed that models representation learning on dynamic graphs as the aggregation of spatial and temporal information. The model combines structural information extracted by the spatial convolutions of a graph convolutional network (GCN) with historical information extracted along the time axis by the causal convolutions of a temporal convolutional network (TCN). In addition, a self-adapting update mechanism in the spatial convolution layer allows the model parameters to adapt as the graph structure evolves. Edge classification experiments for financial fraud detection on datasets from the financial domain show that DyGCN substantially outperforms state-of-the-art methods.

Key words: dynamic graphs, graph convolutional network (GCN), graph representation learning, spatial-temporal convolutions
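The two building blocks named in the abstract, a GCN-style spatial convolution applied to each graph snapshot and a TCN-style causal convolution applied along the time axis, can be sketched as follows. This is a minimal NumPy illustration of the general techniques, not the authors' DyGCN implementation; all function names, shapes, and the choice of symmetric normalization are assumptions.

```python
import numpy as np

def spatial_conv(A, X, W):
    """GCN-style spatial convolution on one graph snapshot.

    A: (n, n) adjacency matrix, X: (n, f_in) node features,
    W: (f_in, f_out) learnable weights. Self-loops are added and the
    adjacency is symmetrically normalized before aggregation.
    """
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)                        # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))       # D^{-1/2}
    H = D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W  # aggregate neighbors
    return np.maximum(H, 0.0)                    # ReLU activation

def causal_temporal_conv(H_seq, kernel):
    """TCN-style causal convolution over a sequence of embeddings.

    H_seq: list of T equally-shaped arrays (one per time step).
    kernel: list of K scalar taps. The output at time t depends only
    on inputs at times <= t (implicit left zero-padding).
    """
    K, T = len(kernel), len(H_seq)
    out = []
    for t in range(T):
        acc = np.zeros_like(H_seq[0])
        for k in range(K):
            if t - k >= 0:                       # never look into the future
                acc += kernel[k] * H_seq[t - k]
        out.append(acc)
    return out
```

In this sketch, each snapshot is first embedded spatially and the resulting sequence of embeddings is then convolved causally, so the representation at time t mixes graph structure with history up to t only, which is the property the abstract's spatial-temporal modeling relies on.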