     2019.
[17] Tan M, Chen B, Pang R, et al. MnasNet: Platform-aware neural architecture search for mobile[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, CA, USA, 2019: 2815–2823.
[18] Hu J, Shen L, Albanie S, et al. Squeeze-and-excitation networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 42(8): 2011–2023.
[19] Yang Hongbing, Chi Yongxin, Wang Jinguang. Knowledge distillation method for remote sensing satellite image classification based on pruning network[J]. Application Research of Computers, 2021, 38(8): 2469–2473. (in Chinese)
[20] Jiang Xiaoyong, Li Zhongyi, Huang Langyue, et al. Review of neural network pruning techniques[J]. Journal of Applied Sciences, 2022, 40(5): 838–849. (in Chinese)
[21] Huang Zhenhua, Yang Shunzhi, Lin Wei, et al. Knowledge distillation: A survey[J]. Chinese Journal of Computers, 2022, 45(3): 624–653. (in Chinese)
[22] Hinton G, Vinyals O, Dean J. Distilling the knowledge in a neural network[J]. Computer Science, 2015, 14(7): 38–39.
[23] Zhang L, Song J, Gao A, et al. Be your own teacher: Improve the performance of convolutional neural networks via self distillation[C]// 2019 IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, KR, 2019: 3712–3721.
[24] Lu L, Guo M, Renals S. Knowledge distillation for small-footprint highway networks[C]// Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing. New Orleans, USA, 2017: 4820–4824.
[25] Aflalo Y, Noy A, Lin M, et al. Knapsack pruning with inner distillation[J]. arXiv: 2002.08258, 2020.
[26] Zhu J, Zhao Y, Pei J. Progressive kernel pruning based on the information mapping sparse index for CNN compression[J]. IEEE Access, 2021, 9: 10974–10987.
[27] Li T, Li J, Liu Z, et al. Few sample knowledge distillation for efficient network compression[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA, 2020: 14639–14647.
[28] Kahl S, Stöter F R, Goëau H, et al. Overview of BirdCLEF 2019: Large-scale bird recognition in soundscapes[C]// Conference and Labs of the Evaluation Forum, 2019.
[29] Molchanov P, Tyree S, Karras T, et al. Pruning convolutional neural networks for resource efficient transfer learning[J]. arXiv: 1611.06440, 2017.
[30] Lasseck M. Bird species identification in soundscapes[C]// Conference and Labs of the Evaluation Forum, 2019.
[31] Iandola F N, Han S, Moskewicz M W, et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size[J]. arXiv: 1602.07360, 2016.
[32] Han K, Wang Y, Zhang Q, et al. Model Rubik's cube: Twisting resolution, depth and width for TinyNets[J]. arXiv: 2010.14819, 2020.
[33] Howard A G, Zhu M, Chen B, et al. MobileNets: Efficient convolutional neural networks for mobile vision applications[J]. arXiv: 1704.04861, 2017.
[34] Sandler M, Howard A G, Zhu M, et al. MobileNetV2: Inverted residuals and linear bottlenecks[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USA, 2018: 4510–4520.
[35] Ma N N, Zhang X Y, Zheng H T, et al. ShuffleNet V2: Practical guidelines for efficient CNN architecture design[J]. arXiv: 1807.11164, 2018.