• Academic Paper •

### Convolutional Neural Network Compression Based on Adaptive Layer Entropy

1. Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University, Wuxi, Jiangsu 214122, China
• Received: 2020-12-01  Revised: 2021-01-29  Online: 2022-10-25
• Corresponding author:
• CHEN Ying
• About the authors:
• WEI Yu-xuan: male, born October 1997 in Xuzhou, Jiangsu; graduate student at Jiangnan University; research interests include image processing and model compression.
CHEN Ying (corresponding author): female, born December 1976 in Lishui, Zhejiang; professor and doctoral supervisor at Jiangnan University; research interests include image processing, information fusion, and pattern recognition.
• Funding:
• National Natural Science Foundation of China (61573168)

WEI Yu-xuan, CHEN Ying (corresponding author)

• Published: 2022-10-11

Abstract:

Network pruning has proven to be an effective approach to compressing convolutional neural networks (CNNs). However, most existing CNN compression methods iteratively prune the "least important" filters, which not only destroys the information integrity of the network structure but also incurs significant computational cost from the iterative procedure. To address these problems, a convolutional neural network compression method based on adaptive layer entropy (ALE) is proposed, which considers the network structure globally. First, an end-to-end pruning strategy is designed in which the filter retention rate of each convolutional layer is determined directly from the entropy correlation between layers. This strategy treats each convolutional layer as a whole, reducing both the information loss and the computational cost of iterative pruning. Then, an adaptive joint grafting method that considers both the convolutional and batch normalization (BN) layers is presented for retraining the pruned network so that it learns more information; the layer entropies used for compression are also used for grafting. Experiments on different benchmarks and three popular networks demonstrate the efficiency and superiority of the proposed ALE over other methods. On CIFAR-10, ALE achieves FLOPs pruned rates of 36.2%, 52.4%, and 55.2% for ResNet-56, ResNet-110, and MobileNetV2, while improving accuracy by 1.00%, 1.42%, and 1.29%, respectively.
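The pipeline the abstract describes — per-layer entropies mapped directly to filter retention rates in one shot, then entropy-weighted grafting during retraining — might be sketched roughly as follows. This is a minimal illustration only: the function names (`layer_entropy`, `retention_rates`, `entropy_graft`), the histogram-based entropy estimator, the proportional budget allocation, and the sigmoid gating are all assumptions for exposition, not the paper's actual formulas.

```python
import numpy as np

def layer_entropy(activations, n_bins=32):
    """Shannon entropy (bits) of a layer's activation histogram.
    The binning scheme here is an assumption; the paper's estimator may differ."""
    hist, _ = np.histogram(activations, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def retention_rates(entropies, global_keep=0.6):
    """Turn per-layer entropies into per-layer filter retention rates without
    iterative pruning: higher-entropy layers keep proportionally more filters,
    scaled so the average rate matches a global budget (illustrative rule)."""
    e = np.asarray(entropies, dtype=float)
    rates = e / e.sum() * (global_keep * e.size)
    return np.clip(rates, 0.1, 1.0)

def entropy_graft(w_a, w_b, h_a, h_b, gain=10.0):
    """Entropy-gated grafting of two weight tensors: the higher-entropy source
    dominates the convex combination (the gating function is an assumption)."""
    alpha = 1.0 / (1.0 + np.exp(-gain * (h_a - h_b)))
    return alpha * w_a + (1.0 - alpha) * w_b

# Toy run on three synthetic "layers" of activations.
rng = np.random.default_rng(0)
acts = [rng.normal(0.0, s, 10_000) for s in (0.5, 1.0, 2.0)]
ents = [layer_entropy(a) for a in acts]
rates = retention_rates(ents)
for i, (h, r) in enumerate(zip(ents, rates)):
    print(f"layer {i}: entropy={h:.2f} bits, retain {r:.0%} of filters")
```

The one-shot mapping is the key contrast with iterative filter-by-filter pruning: every layer's budget is fixed up front from the entropy profile, and grafting then reuses the same entropies to decide how much each weight source contributes during retraining.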