LI Jiang-yun, ZHAO Yi-kai, XUE Zhuo-er, CAI Zheng, LI Qing. A survey of model compression for deep neural networks[J]. Chinese Journal of Engineering, 2019, 41(10): 1229-1239. DOI: 10.13374/j.issn2095-9389.2019.03.27.002

A survey of model compression for deep neural networks

    Abstract: In recent years, deep neural networks (DNNs) have attracted increasing attention because of their excellent performance in computer vision and natural language processing. The success of deep learning stems from models having more layers and more parameters, which gives them stronger nonlinear fitting ability. Furthermore, the continuous updating of hardware equipment makes it possible to train deep learning models quickly. The development of deep learning is also driven by the greater amounts of available annotated and unannotated data: large-scale data provide models with greater learning space and stronger generalization ability. Although the performance of deep neural networks is impressive, they are difficult to deploy on embedded or mobile devices with limited hardware because of their large number of parameters and high storage and computing costs. Recent studies have found that deep models based on convolutional neural networks exhibit parameter redundancy, containing parameters that are irrelevant to the final model results, which provides theoretical support for the compression of deep network models. Therefore, determining ways to reduce model size while retaining model accuracy has become a hot research issue. Model compression refers to reducing a trained model through some operation to obtain a lightweight network with equivalent performance. After model compression, the network has fewer parameters and usually requires less computation, which greatly reduces computational and storage costs and enables deployment under restricted hardware conditions.
In this paper, the achievements and progress made in recent years by domestic and foreign scholars on model compression are classified and summarized, and their advantages and disadvantages are evaluated, covering network pruning, parameter sharing, quantization, network decomposition, and network distillation. Existing problems and future directions for model compression are then discussed.
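Of the compression families listed above, network pruning is the simplest to illustrate concretely. The sketch below shows unstructured magnitude-based pruning on a weight matrix, assuming the common heuristic that small-magnitude weights contribute least to the output; the function name and sparsity target are illustrative, not taken from the paper.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)          # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold     # keep only weights above the threshold
    return weights * mask

# Toy example: prune a random 256x256 layer to ~90% sparsity
w = np.random.randn(256, 256)
pruned = magnitude_prune(w, sparsity=0.9)
print(f"zero fraction: {np.mean(pruned == 0):.2f}")
```

In practice, as the surveyed methods note, pruning is applied iteratively and interleaved with fine-tuning so the remaining weights can recover the accuracy lost at each round.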
