     International Conference on Machine Learning, 1993: 41–48.
 [7] Hinton G, Vinyals O, Dean J. Distilling the knowledge in a neural network[J]. Computer Science, 2015, arXiv: 1503.02531v1.
 [8] Søgaard A, Goldberg Y. Deep multi-task learning with low level tasks supervised at lower layers[C]//Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 2016: 231–235.
 [9] Zhao R, Pandit V, Qian K, et al. Deep sequential image features on acoustic scene classification[C]//Workshop on Detection and Classification of Acoustic Scenes and Events, 2017.
[10] Chen H, Liu Z, Liu Z, et al. Integrating the data augmentation scheme with various classifiers for acoustic scene modeling[J]. arXiv Preprint, arXiv: 1907.06639.
[11] Ioffe S, Szegedy C. Batch normalization: accelerating deep network training by reducing internal covariate shift[J]. arXiv Preprint, arXiv: 1502.03167, 2015.
[12] Nair V, Hinton G E. Rectified linear units improve restricted Boltzmann machines[C]//Proceedings of the 27th International Conference on Machine Learning, 2010: 807–814.
[13] Piczak K J. ESC: dataset for environmental sound classification[C]//Proceedings of the 23rd Annual ACM Conference on Multimedia, 2015: 1015–1018.
[14] Salamon J, Jacoby C, Bello J. A dataset and taxonomy for urban sound research[C]//Proceedings of the 22nd ACM International Conference on Multimedia, 2014: 1041–1044.
[15] Boddapati V, Petef A, Rasmusson J, et al. Classifying environmental sounds using image recognition networks[J]. Procedia Computer Science, 2017: 2048–2056.
[16] Tokozume Y, Harada T. Learning environmental sounds with end-to-end convolutional neural network[C]//IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017: 2721–2725.
[17] Piczak K J. Environmental sound classification with convolutional neural networks[C]//IEEE 25th International Workshop on Machine Learning for Signal Processing (MLSP), 2015: 1–6.
[18] Tokozume Y, Ushiku Y, Harada T. Learning from between-class examples for deep sound recognition[J]. arXiv Preprint, arXiv: 1711.10282.
[19] Zhang Z, Xu S, Cao S, et al. Deep convolutional neural network with mixup for environmental sound classification[C]//Pattern Recognition and Computer Vision (PRCV), 2018: 356–367.
[20] Agrawal D, Sailor H, Soni M, et al. Novel TEO-based Gammatone features for environmental sound classification[C]//2017 25th European Signal Processing Conference (EUSIPCO), 2017: 1809–1813.
[21] Abdoli S, Cardinal P, Koerich A. End-to-end environmental sound classification using a 1D convolutional neural network[J]. Expert Systems with Applications, 2019: 252–263.