Acta Scientiarum Naturalium Universitatis Pekinensis, 2022, Vol. 58, Issue (5): 801-807. DOI: 10.13209/j.0479-8023.2022.081


Layer Pruning via Fusible Residual Convolutional Block for Deep Neural Networks

XU Pengtao, CAO Jian, SUN Wenyu, LI Pu, WANG Yuan, ZHANG Xing   

  1. School of Software and Microelectronics, Peking University, Beijing 102600
  • Received: 2021-09-30; Revised: 2021-12-04; Online: 2022-09-20; Published: 2022-09-20
  • Contact: CAO Jian, E-mail: caojian@ss.pku.edu.cn, ZHANG Xing, E-mail: zhx@pku.edu.cn

  • Supported by the National Key Research and Development Program of China (2018YFE0203801)

Abstract:

To address the long inference time and poor accuracy of compressed models produced by current mainstream pruning methods, an easy-to-use and high-performing layer pruning method is proposed. The original convolutional layers in the model are transformed into fusible residual convolutional blocks, and layer pruning is then realized through sparsity training, yielding a layer pruning method that is easy to apply in engineering practice and combines short inference time with good pruning quality. Experimental results show that the proposed method achieves a very high compression rate with small accuracy loss on both image classification and object detection tasks, outperforming advanced filter (convolutional kernel) pruning methods.
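To make the idea above concrete, the following is a minimal PyTorch sketch of one common way such a fusible residual convolutional block can be realized, in the spirit of structural re-parameterization methods such as RepVGG. The class name FusibleResBlock, the scalar gate, and the L1 penalty are illustrative assumptions, not the authors' implementation: an identity shortcut wraps a gated convolution; sparsity training drives gates toward zero, a zero-gate block reduces to the identity and can be deleted (layer pruning), and surviving blocks fuse back into a single plain convolution with no extra inference cost.

```python
import torch
import torch.nn as nn

class FusibleResBlock(nn.Module):
    """Hypothetical fusible residual convolutional block (illustrative).

    Computes x + gate * conv(x). Sparsity training pushes the gate toward
    zero; if it reaches zero the block is the identity and the layer can
    be removed. Otherwise fuse() folds the shortcut and gate into one
    plain convolution, so inference pays nothing for the extra branch.
    """

    def __init__(self, channels, kernel_size=3):
        super().__init__()
        # Stride 1 with "same" padding keeps the identity shortcut valid.
        self.conv = nn.Conv2d(channels, channels, kernel_size,
                              padding=kernel_size // 2, bias=True)
        # Scalar gate targeted by the sparsity penalty during training.
        self.gate = nn.Parameter(torch.ones(1))

    def forward(self, x):
        return x + self.gate * self.conv(x)

    def fuse(self):
        """Fold the gate and identity shortcut into one plain conv."""
        fused = nn.Conv2d(self.conv.in_channels, self.conv.out_channels,
                          self.conv.kernel_size,
                          padding=self.conv.padding, bias=True)
        with torch.no_grad():
            # y = x + g * (W * x + b) = (I + g * W) * x + g * b
            ident = torch.zeros_like(self.conv.weight)
            nn.init.dirac_(ident)  # Dirac kernel: the identity convolution
            fused.weight.copy_(ident + self.gate * self.conv.weight)
            fused.bias.copy_(self.gate * self.conv.bias)
        return fused


def gate_sparsity_penalty(model, lam=1e-4):
    """L1 term on the gates, added to the task loss during training."""
    return lam * sum(m.gate.abs().sum() for m in model.modules()
                     if isinstance(m, FusibleResBlock))
```

In use, gate_sparsity_penalty(model) would be added to the task loss during training; afterwards, blocks whose gates are (near) zero are dropped outright, while the remaining blocks are replaced by their fused plain convolutions for deployment.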

Key words: convolutional neural network, layer pruning, fusible residual convolutional block, sparse training, image classification
