Acta Scientiarum Naturalium Universitatis Pekinensis ›› 2022, Vol. 58 ›› Issue (5): 808-812. DOI: 10.13209/j.0479-8023.2022.082


Post-Training Quantization Preprocessing Method for Convolutional Neural Networks via Outlier Removal

XU Pengtao, CAO Jian, CHEN Weiqian, LIU Shengrong, WANG Yuan, ZHANG Xing   

  1. School of Software and Microelectronics, Peking University, Beijing 102600
  • Received: 2021-10-26  Revised: 2022-02-21  Online: 2022-09-20  Published: 2022-09-20
  • Contact: CAO Jian, E-mail: caojian@ss.pku.edu.cn, ZHANG Xing, E-mail: zhx@pku.edu.cn

  • Funding: Supported by the Joint Funds of the National Natural Science Foundation of China (U20A20204)

Abstract:

To improve the performance of post-training quantization models, a quantization preprocessing method based on outlier removal is proposed. The method is simple and easy to use: outliers in the weights and activation values are removed through simple operations such as sorting and comparison, so that the model loses only a small amount of information during quantization and the accuracy of the quantized model improves. Experimental results show that applying this preprocessing before different quantization methods yields significant performance gains.
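
The abstract states only that weight and activation outliers are removed by sorting and comparison before quantization; no implementation details are given here. The sketch below is a minimal illustration of that idea, assuming a simple clip-the-extremes rule on a weight tensor followed by plain 8-bit uniform quantization. The names remove_outliers and quantize_uint8 and the parameter outlier_ratio are hypothetical, not taken from the paper.

```python
# Minimal sketch, assuming a clip-the-extremes rule; this is not the
# authors' released implementation. Outliers in a weight (or activation)
# tensor are located by sorting and comparison, then clipped away before
# a plain 8-bit uniform quantization.
import numpy as np


def remove_outliers(x: np.ndarray, outlier_ratio: float = 0.001) -> np.ndarray:
    """Clip the k most extreme values on each side, where k is a small
    fraction (outlier_ratio, a hypothetical parameter) of all elements."""
    flat = np.sort(x.ravel())                    # sorting step
    k = max(1, int(flat.size * outlier_ratio))
    lo, hi = flat[k], flat[-k - 1]               # comparison thresholds
    return np.clip(x, lo, hi)


def quantize_uint8(x: np.ndarray):
    """Asymmetric uniform quantization to 8 bits (for illustration only)."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    q = np.round((x - lo) / scale).astype(np.uint8)
    return q, scale, lo


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(64, 3, 3, 3))           # a toy conv weight tensor
    w[0, 0, 0, 0] = 25.0                         # inject one large outlier
    _, s_raw, _ = quantize_uint8(w)
    _, s_pre, _ = quantize_uint8(remove_outliers(w))
    print(f"quantization step without preprocessing: {s_raw:.4f}")
    print(f"quantization step with outlier removal:  {s_pre:.4f}")
```

Discarding a handful of extreme values narrows the quantization range, so the bulk of the values are represented with a smaller step size; this is the effect the abstract describes, trading a small loss of information for higher accuracy of the quantized model.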

Key words:  convolutional neural network, post-training quantization, preprocessing, outlier removal, image classification
