Acta Electronica Sinica, 2023, Vol. 51, Issue (1): 180-191. DOI: 10.12263/DZXB.20211671
YUAN Hai-ying, CHENG Jun-peng, ZENG Zhi-yong, WU Yan-rui
Received: 2021-12-18
Revised: 2022-03-07
Online: 2023-01-25
Published: 2023-02-23
Abstract:
To address the difficulty of deploying deep convolutional neural networks on resource-constrained edge devices, this paper proposes Mobile_BLNet, an efficient and compact lightweight convolutional neural network that strikes a good balance among model size, computational cost, and performance. The network introduces depthwise separable convolution and the inverted residual structure, and shrinks the model while saving substantial computation by allocating the workload sensibly across branches of different scales. A channel-pruning operation then compresses the model: convolution channels that contribute little are pruned according to a ratio-to-total-sum criterion, which improves classification accuracy at the same compression rate. Finally, the network is reconstructed according to the pruning results, further reducing the required computation. Experimental results show that Mobile_BLNet is compact and performs well: it reaches 91.2%/71.5% classification accuracy on CIFAR-10/CIFAR-100 with 0.1 M/0.3 M parameters and 9.6 M/12.7 M FLOPs, and 82.8%/70.9% accuracy on Food101/ImageNet with 1.0 M/2.1 M parameters and 203.0 M/249.6 M FLOPs, meeting the needs of energy-efficient edge deployment of lightweight convolutional neural networks.
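The savings the abstract attributes to depthwise separable convolution follow from a simple parameter count: a standard k×k convolution couples every input channel to every output channel, while the separable version factors this into a per-channel spatial convolution plus a 1×1 pointwise mix. A minimal sketch (the layer sizes below are illustrative, not Mobile_BLNet's actual configuration):

```python
def conv_params(k, c_in, c_out):
    """Weight count of a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def dw_separable_params(k, c_in, c_out):
    """Depthwise k x k convolution + pointwise 1 x 1 convolution (bias ignored)."""
    return k * k * c_in + c_in * c_out

# Example: a 3x3 layer with 128 input and 128 output channels.
std = conv_params(3, 128, 128)          # 147456 weights
sep = dw_separable_params(3, 128, 128)  # 17536 weights
print(sep / std)                        # roughly 1/k^2 + 1/c_out, here ~0.119
```

The ratio 1/k² + 1/c_out explains why the saving approaches ~9× for 3×3 kernels once the channel count is large.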
Hai-ying YUAN, Jun-peng CHENG, Zhi-yong ZENG, et al. Mobile_BLNet: Optimization Design of Lightweight Convolutional Neural Network Based on Big-Little Net[J]. Acta Electronica Sinica, 2023, 51(1): 180-191.
Table 1 Complexity comparison of CNN models

Model category | Model | Params | FLOPs
---|---|---|---
Conventional CNN | VGG16 | 138.4 M | 15.5 G
Conventional CNN | ResNet50 | 25.6 M | 3.8 G
Conventional CNN | DenseNet121 | 8.0 M | 2.9 G
Lightweight CNN | MobileNet V2 | 3.4 M | 300 M
Lightweight CNN | ShuffleNet V2 (2×) | 7.4 M | 591 M
Lightweight CNN | EfficientNet b0 | 5.3 M | 399.3 M
Proposed | Mobile_BLNet | 2.1 M | 249.6 M
Table 2 Mobile_BLNet architecture

Stage | Operation (big branch) | Operation (little branch)
---|---|---
Conv 1 | 3×3 conv, stride 1, output 32×32×32 |
Module 1 | B_bottleneck1 (t=6, n=3), output 16×16×32 | L_bottleneck1, output 16×16×(32×α)
 | concat + pointwise conv, output channels = 64 |
Module 2 | B_bottleneck2 (t=4, n=4), output 8×8×64 | L_bottleneck2, output 8×8×(64×α)
 | concat + pointwise conv, output channels = 128 |
Module 3 | B_bottleneck3 (t=2, n=1), output 4×4×128 | L_bottleneck3, output 4×4×(128×α)
 | concat + pointwise conv, output channels = … |
Conv 2 | pointwise conv, output channels = 512 |
Pooling | 4×4 average pooling |
FC | number of classes |
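The expansion factors t in Table 2 dominate the cost of each bottleneck. As a rough reading aid, the parameter count of one inverted-residual block (1×1 expansion, 3×3 depthwise, 1×1 projection — the standard MobileNetV2-style layout; the paper's exact block may differ, and batch-norm and bias terms are ignored) can be sketched as:

```python
def inverted_residual_params(c_in, c_out, t, k=3):
    """Approximate weight count of one inverted-residual bottleneck."""
    c_mid = t * c_in
    expand = c_in * c_mid       # 1x1 expansion convolution
    depthwise = k * k * c_mid   # k x k depthwise convolution
    project = c_mid * c_out     # 1x1 projection convolution
    return expand + depthwise + project

# B_bottleneck1-style block: 32 -> 32 channels with t = 6
print(inverted_residual_params(32, 32, 6))  # 14016
# Lowering t to 2 (as in B_bottleneck3) shrinks the block roughly threefold.
print(inverted_residual_params(32, 32, 2))  # 4672
```

Because the cost is nearly linear in t, assigning large t only where the feature map is small is one way such a design keeps total FLOPs low.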
Table 3 Accuracy of Mobile_BLNet on CIFAR-10

Accuracy | 82.5% | 91.7% | 92.4% | 92.7%
---|---|---|---|---
Table 4 Results of different networks on CIFAR-10 and CIFAR-100

Network | Params/M (CIFAR-10/100) | FLOPs/M (CIFAR-10/100) | Accuracy/% (CIFAR-10/100)
---|---|---|---
SqueezeNet | 0.7/0.8 | 54.1/54.9 | 90.5/69.4
Xception | 20.8/21.0 | 1135.7/1135.9 | 94.8/75.0
MobileNet V1 | 3.2/3.3 | 47.2/47.3 | 89.6/66.0
MobileNet V2 | 2.3/2.4 | 94.6/94.7 | 91.8/68.1
MobileNet V3-Small | 1.4/1.5 | 18.9/19.1 | 91.6/68.8
MobileNet V3-Large | 4.3/4.4 | 69.5/69.6 | 93.4/74.3
ShuffleNet V1 (g=3) | 0.9/0.9 | 42.2/42.3 | 89.9/70.1
ShuffleNet V2 (1×) | 1.3/1.4 | 47.4/47.5 | 90.9/69.5
ShuffleNet V2 (2×) | 5.3/5.5 | 186.1/186.5 | 92.0/71.8
GhostNet (0.5×) | 1.3/1.4 | 13.2/13.4 | 91.4/69.9
GhostNet (1×) | 3.9/4.0 | 44.9/45.0 | 92.6/72.8
Mobile_BLNet | 0.5/0.6 | 32.9/33.0 | 92.7/72.9
Table 5 Results of different networks on Food101 and ImageNet

Network | Params/M (Food101/ImageNet) | FLOPs/M (Food101/ImageNet) | Accuracy/% (Food101/ImageNet)
---|---|---|---
SqueezeNet | 0.8/1.2 | 751.8/830.1 | 77.9/57.5
Xception | 21.0/22.9 | 8418.7/8420.5 | 85.7/79.0
MobileNet V1 | 3.3/4.2 | 574/575 | 82.5/70.6
MobileNet V2 | 2.4/3.4 | 299/300 | 83.1/72.0
MobileNet V3-Small | 1.5/2.5 | 55/56 | 78.2/67.4
MobileNet V3-Large | 4.4/5.4 | 218/219 | 83.3/75.2
ShuffleNet V1 (g=3) | 1.0/1.8 | 139/140 | 77.3/67.4
ShuffleNet V2 (1×) | 1.4/2.3 | 145/146 | 79.7/69.4
ShuffleNet V2 (2×) | 5.6/7.4 | 590/591 | 83.2/74.9
CondenseNet (G=C=8) | 2.0/2.9 | 273/274 | 82.7/71.0
GhostNet (0.5×) | 1.4/2.6 | 40/42 | 75.0/66.2
GhostNet (1×) | 4.0/5.2 | 140/141 | 82.4/73.9
EfficientNet b0 | 4.1/5.3 | 398.2/399.3 | 84.0/77.1
Mobile_BLNet | 1.5/2.4 | 373.0/373.9 | 83.8/71.6
Table 6 Results under different pruning thresholds

Pruning rate | Params after pruning/M (CIFAR-10/100) | FLOPs after pruning/M (CIFAR-10/100) | Accuracy after pruning/% (CIFAR-10/100)
---|---|---|---
0.1 | 0.49/0.53 | 30.63/30.69 | 91.6/69.7
0.2 | 0.44/0.48 | 27.73/27.80 | 91.6/67.0
0.3 | 0.40/0.43 | 24.85/24.92 | 91.0/64.5
0.4 | 0.35/0.38 | 21.96/22.03 | 90.8/61.5
0.5 | 0.31/0.33 | 18.93/19.00 | 89.5/54.1
0.6 | 0.27/0.28 | 16.08/16.16 | 87.2/36.5
0.7 | 0.22/0.23 | 13.19/13.27 | 79.7/15.0
0.8 | 0.18/0.19 | 10.31/10.40 | 41.5/4.7
0.9 | 0.13/0.14 | 7.41/7.51 | 16.6/1.8
Ratio to total sum | 0.23/0.37 | 11.92/15.75 | 91.5/71.4
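The last row of Table 6 prunes channels by their share of the layer's total importance rather than by a fixed global rate, which is why it retains accuracy that the fixed-rate rows lose. A minimal sketch of the idea, assuming per-channel importance scores are already available (the scores and threshold below are hypothetical; the paper's exact scoring statistic is not reproduced here):

```python
def prune_by_ratio_to_sum(scores, threshold):
    """Keep channel i iff scores[i] / sum(scores) >= threshold."""
    total = sum(scores)
    return [i for i, s in enumerate(scores) if s / total >= threshold]

# Hypothetical importance scores for a 5-channel layer.
scores = [0.8, 0.05, 0.6, 0.02, 0.3]
kept = prune_by_ratio_to_sum(scores, 0.05)
print(kept)  # channels 1 and 3 fall below 5% of the total and are pruned
```

Because the criterion is relative to each layer's own score distribution, layers whose channels are uniformly important lose few channels, while layers with many near-zero channels are pruned aggressively.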
Table 7 Mobile_BLNet before and after pruning

Dataset | Params change/M | Params pruning rate/% | FLOPs change/M | FLOPs pruning rate/% | Accuracy change/% | Fine-tuned accuracy/%
---|---|---|---|---|---|---
CIFAR-10 | 0.5 → 0.2 | 60.0 | 32.9 → 11.9 | 63.8 | 91.7 → 91.5 | 91.7
CIFAR-100 | 0.6 → 0.4 | 33.3 | 33.0 → 15.1 | 54.2 | 71.6 → 71.4 | 71.5
Food101 | 1.5 → 1.2 | 20.0 | 373.0 → 219.8 | 41.1 | 82.8 → 81.9 | 82.8
ImageNet | 2.4 → 2.1 | 12.5 | 373.9 → 257.5 | 31.1 | 70.8 → 69.7 | 70.7
Table 8 Channel-structure changes in Mobile_BLNet (input channels, before → after)

Layer | CIFAR-10 | CIFAR-100 | Food101 | ImageNet
---|---|---|---|---
B_bottleneck1 | 32 → 32 | 32 → 32 | 32 → 32 | 32 → 32
L_bottleneck1 | 32 → 32 | 32 → 32 | 32 → 25 | 32 → 32
B_bottleneck2 | 64 → 57 | 64 → 57 | 64 → 68 | 64 → 57
L_bottleneck2 | 64 → 22 | 64 → 34 | 64 → 68 | 64 → 57
B_bottleneck3 | 128 → 71 | 128 → 109 | 128 → 122 | 128 → 136
L_bottleneck3 | 128 → 28 | 128 → 65 | 128 → 73 | 128 → 108
B_bottleneck4 | — | — | 256 → 292 | 256 → 292
L_bottleneck4 | — | — | 256 → 116 | 256 → 233
Table 9 Mobile_BLNet before and after reconstruction on each dataset

Dataset | Stage | Params/M (before → after pruning) | FLOPs/M (before → after pruning) | Accuracy/% (before → after pruning) | Fine-tuned accuracy/%
---|---|---|---|---|---
CIFAR-10 | before reconstruction | 0.5 → 0.2 | 32.9 → 11.9 | 91.7 → 91.5 | 91.7
CIFAR-10 | after reconstruction | 0.3 → 0.1 | 25.1 → 9.6 | 91.2 → 91.0 | 91.2
CIFAR-100 | before reconstruction | 0.6 → 0.4 | 33.0 → 15.1 | 71.6 → 71.4 | 71.5
CIFAR-100 | after reconstruction | 0.4 → 0.3 | 27.4 → 12.7 | 71.5 → 71.3 | 71.5
Food101 | before reconstruction | 1.5 → 1.2 | 373.0 → 219.8 | 82.8 → 81.9 | 82.8
Food101 | after reconstruction | 1.2 → 1.0 | 350.3 → 203.0 | 82.8 → 81.8 | 82.8
ImageNet | before reconstruction | 2.4 → 2.1 | 373.9 → 257.5 | 70.8 → 69.7 | 70.8
ImageNet | after reconstruction | 2.4 → 2.1 | 355.8 → 249.6 | 71.0 → 69.9 | 70.9
[1] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[C]//Proceedings of the 25th International Conference on Neural Information Processing Systems. Lake Tahoe, Nevada: ACM, 2012: 1097-1105.
[2] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[EB/OL]. (2014-09-04)[2021-12].
[3] SZEGEDY C, LIU W, JIA Y Q, et al. Going deeper with convolutions[C]//2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston: IEEE, 2015: 1-9.
[4] HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016: 770-778.
[5] HUANG H H, ZHOU P, LI Y, et al. A lightweight attention-based CNN model for efficient gait recognition with wearable IMU sensors[J]. Sensors (Basel), 2021, 21(8): 2866.
[6] SHUVO S B, ALI S N, SWAPNIL S I, et al. A lightweight CNN model for detecting respiratory diseases from lung auscultation sounds using EMD-CWT-based hybrid scalogram[J]. IEEE Journal of Biomedical and Health Informatics, 2021, 25(7): 2595-2603.
[7] IANDOLA F N, HAN S, MOSKEWICZ M W, et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size[EB/OL]. (2016-02-24)[2021-12].
[8] HOWARD A G, ZHU M L, CHEN B, et al. MobileNets: Efficient convolutional neural networks for mobile vision applications[EB/OL]. (2017-04-17)[2021-12].
[9] SANDLER M, HOWARD A, ZHU M L, et al. MobileNetV2: Inverted residuals and linear bottlenecks[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 4510-4520.
[10] ZHANG X Y, ZHOU X Y, LIN M X, et al. ShuffleNet: An extremely efficient convolutional neural network for mobile devices[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 6848-6856.
[11] MA N N, ZHANG X Y, ZHENG H T, et al. ShuffleNet V2: Practical guidelines for efficient CNN architecture design[C]//European Conference on Computer Vision. Munich: Springer, 2018: 122-138.
[12] TAN M X, LE Q V. EfficientNet: Rethinking model scaling for convolutional neural networks[C]//Proceedings of the 36th International Conference on Machine Learning. Long Beach: PMLR, 2019, 97: 6105-6114.
[13] CHEN C F, FAN Q F, MALLINAR N, et al. Big-Little Net: An efficient multi-scale feature representation for visual and speech recognition[EB/OL]. (2018-07-10)[2021-12].
[14] HUANG G, LIU Z, VAN DER MAATEN L, et al. Densely connected convolutional networks[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu: IEEE, 2017: 2261-2269.
[15] DENTON E, ZAREMBA W, BRUNA J, et al. Exploiting linear structure within convolutional networks for efficient evaluation[C]//Proceedings of the 27th International Conference on Neural Information Processing Systems. New York: ACM, 2014: 1269-1277.
[16] DONG X Y, HUANG J S, YANG Y, et al. More is less: A more complicated network with less inference complexity[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu: IEEE, 2017: 1895-1903.
[17] LI H, KADAV A, DURDANOVIC I, et al. Pruning filters for efficient ConvNets[EB/OL]. (2016-08-31)[2021-12].
[18] TANG Y H, WANG Y H, XU Y X, et al. Manifold regularized dynamic network pruning[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville: IEEE, 2021: 5016-5026.
[19] HAN S, POOL J, TRAN J, et al. Learning both weights and connections for efficient neural networks[EB/OL]. (2015-06-08)[2021-12].
[20] WEN W, WU C P, WANG Y D, et al. Learning structured sparsity in deep neural networks[C]//Proceedings of the 30th International Conference on Neural Information Processing Systems. New York: ACM, 2016: 2082-2090.
[21] LIU Z, LI J G, SHEN Z Q, et al. Learning efficient convolutional networks through network slimming[C]//2017 IEEE International Conference on Computer Vision (ICCV). Venice: IEEE, 2017: 2755-2763.
[22] PARK E, YOO S, VAJDA P. Value-aware quantization for training and inference of neural networks[C]//European Conference on Computer Vision. Munich: Springer, 2018: 608-624.
[23] YAMAMOTO K. Learnable companding quantization for accurate low-bit neural networks[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville: IEEE, 2021: 5027-5036.
[24] ZHANG D Q, YANG J L, YE D, et al. LQ-Nets: Learned quantization for highly accurate and compact deep neural networks[C]//European Conference on Computer Vision. Munich: Springer, 2018: 373-390.
[25] CHEN W L, WILSON J T, TYREE S, et al. Compressing neural networks with the hashing trick[C]//Proceedings of the 32nd International Conference on International Conference on Machine Learning. New York: ACM, 2015: 2285-2294.
[26] HINTON G, VINYALS O, DEAN J. Distilling the knowledge in a neural network[EB/OL]. (2015-03-09)[2021-12].
[27] ZHU J G, TANG S X, CHEN D P, et al. Complementary relation contrastive distillation[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville: IEEE, 2021: 9256-9265.
[28] GOLUBEVA A, NEYSHABUR B, GUR-ARI G. Are wider nets better given the same number of parameters?[EB/OL]. (2020-10-27)[2021-12].
[29] YUAN H Y, CHENG J P. Design method of a lightweight convolutional neural network for image classification on mobile devices: CN 202110462584.4[P]. 2021-07-02. (in Chinese)
[30] CHOLLET F. Xception: Deep learning with depthwise separable convolutions[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu: IEEE, 2017: 1800-1807.
[31] HOWARD A, SANDLER M, CHEN B, et al. Searching for MobileNetV3[C]//2019 IEEE/CVF International Conference on Computer Vision (ICCV). Seoul: IEEE, 2019: 1314-1324.
[32] HAN K, WANG Y H, TIAN Q, et al. GhostNet: More features from cheap operations[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle: IEEE, 2020: 1577-1586.
[33] HUANG G, LIU S C, VAN DER MAATEN L, et al. CondenseNet: An efficient DenseNet using learned group convolutions[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 2752-2761.