北京大学学报自然科学版 ›› 2024, Vol. 60 ›› Issue (5): 786-798.DOI: 10.13209/j.0479-8023.2024.066


Design of a Hardware Encoding and Frame Generation Acceleration Unit for PAICORE2.0

DING Yawei1, CAO Jian1,†, LI Qibin1, FENG Shuo1, YANG Chentao1, WANG Yuan2, ZHANG Xing2,3,†
  

  1. School of Software & Microelectronics, Peking University, Beijing 102600; 2. School of Integrated Circuits, Peking University, Beijing 100871; 3. Key Laboratory of Integrated Microsystems Science, Engineering and Applications, Peking University Shenzhen Graduate School, Shenzhen 518055
  • Received: 2023-10-18; Revised: 2024-03-01; Online: 2024-09-20; Published: 2024-09-11
  • Corresponding authors: CAO Jian, E-mail: caojian(at)ss.pku.edu.cn; ZHANG Xing, E-mail: zhx(at)pku.edu.cn
  • Funding:
    Supported by the Shenzhen Science and Technology Innovation Commission (KQTD20200820113105004)

Design of Acceleration Unit of Encoding and Frame Generation for PAICORE2.0

DING Yawei1, CAO Jian1,†, LI Qibin1, FENG Shuo1, YANG Chentao1, WANG Yuan2, ZHANG Xing2,3,†
  

  1. School of Software & Microelectronics, Peking University, Beijing 102600; 2. School of Integrated Circuits, Peking University, Beijing 100871; 3. Key Lab of Integrated Microsystems, Peking University Shenzhen Graduate School, Shenzhen 518055
  • Received:2023-10-18 Revised:2024-03-01 Online:2024-09-20 Published:2024-09-11
  • Contact: CAO Jian, E-mail: caojian(at)ss.pku.edu.cn, ZHANG Xing, E-mail: zhx(at)pku.edu.cn

Abstract:

To address the slow software encoding and frame generation in the brain-inspired edge system built around Peking University's spiking neural network chip PAICORE2.0, a hardware acceleration method is proposed. By adding a hardware acceleration unit, the software encoding and frame generation process, which is executed serially on the processing system (PS) side of the Xilinx ZYNQ, is moved into the data path on the programmable logic (PL) side for pipelined parallel execution. The hardware acceleration unit mainly comprises highly parallel convolution units, parameterizable spiking neurons, and width-balanced data buffers. Experimental results show that the method eliminates the time overhead of software encoding and frame generation while adding almost no transmission delay to the data path. In a CIFAR-10 image classification example, compared with software encoding and frame generation, the hardware encoding and frame generation module adds only 9.3% more LUTs, 3.7% more BRAM, 2.6% more FFs, 0.9% more LUTRAM, 14.9% more DSPs, and 14.6% more power consumption, while achieving approximately an 8.72-fold improvement in inference speed.

Key words:

Abstract:

An edge computing system was built around the spiking neural network chip PAICORE2.0 of Peking University, in conjunction with Xilinx ZYNQ. However, the software encoding and frame generation processes on the processing system (PS) side are slow and limit the performance of the system. Therefore, a hardware acceleration method is proposed. The software encoding and frame generation processes, which are executed serially on the PS side, are moved to the data path on the programmable logic (PL) side for pipelined parallel execution. The hardware acceleration unit mainly consists of highly parallel convolution units, parameterizable spiking neurons, width-balanced data buffers, and other modules. The results show that the method removes the time overhead of software encoding and frame generation without increasing the data path transmission delay. In the example of CIFAR-10 image classification, compared with software encoding and frame generation, the hardware encoding and frame generation module results in only a marginal increase in resource utilization: 9.3% more look-up tables (LUTs), 3.7% more block RAMs (BRAMs), 2.6% more flip-flops (FFs), 0.9% more LUTRAM, and 14.9% more digital signal processing (DSP) slices, as well as a 14.6% increase in power consumption. However, it achieves approximately an 8.72-fold improvement in inference speed.
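The abstract describes encoding inputs into spike frames by feeding convolution outputs through parameterizable spiking neurons. As an illustration only (the paper's unit is implemented in FPGA logic on the PL side; the function and parameter names below are assumptions, not the chip's actual configuration), a minimal software sketch of per-channel spike encoding with a leaky integrate-and-fire neuron might look like:

```python
# Illustrative sketch only -- the paper's encoder runs in hardware (PL side).
# Parameter names (threshold, leak, reset) are hypothetical, not PAICORE2.0's.

def lif_encode(currents, timesteps, threshold=1.0, leak=0.9, reset=0.0):
    """Encode an analog input vector into `timesteps` binary spike frames,
    one leaky integrate-and-fire neuron per input channel."""
    v = [0.0] * len(currents)          # membrane potential per channel
    frames = []
    for _ in range(timesteps):
        spikes = []
        for i, c in enumerate(currents):
            v[i] = v[i] * leak + c     # leaky integration of the input current
            if v[i] >= threshold:      # fire when the potential crosses threshold
                spikes.append(1)
                v[i] = reset           # hard reset after a spike
            else:
                spikes.append(0)
        frames.append(spikes)
    return frames
```

Larger inputs cross the threshold more often and thus produce denser spike trains, which is the rate-coding property such an encoder relies on; in the hardware unit each frame would be packed to the chip's frame format and streamed through the PL data path instead of returned as a Python list.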

Key words: