《应用声学》 (Applied Acoustics), 2020, No. 2, p. 64
[5] Cao W, Xu J, Liu Z. Speaker-independent speech emotion recognition based on random forest feature selection algorithm[C]//2017 36th Chinese Control Conference (CCC), Dalian, 2017: 10995–10998.
[6] 刘博, 范钰超, 徐明星. 基于特征级决策级双层融合的语音情感识别 [C]//中国中文信息学会语音信息专业委员会. 第十三届全国人机语音通讯学术会议 (NCMMSC2015) 论文集, 2015: 6.
    Liu Bo, Fan Yuchao, Xu Mingxing. Speech emotion recognition based on two-level fusion at the feature and decision levels[C]//Proceedings of the 13th National Conference on Man-Machine Speech Communication (NCMMSC2015), 2015: 6.
[7] 张文克. 融合 LPCC 和 MFCC 特征参数的语音识别技术的研究 [D]. 长沙: 湘潭大学, 2016.
    Zhang Wenke. Research on speech recognition technology fusing LPCC and MFCC feature parameters[D]. Changsha: Xiangtan University, 2016.
[8] 王忠民, 刘戈, 宋辉. 基于多核学习特征融合的语音情感识别方法 [J]. 计算机工程, 2019, 45(8): 248–254.
    Wang Zhongmin, Liu Ge, Song Hui. Feature fusion based on multiple kernel learning for speech emotion recognition[J]. Computer Engineering, 2019, 45(8): 248–254.
[9] Liu G, He W, Jin B. Feature fusion of speech emotion recognition based on deep learning[C]//2018 International Conference on Network Infrastructure and Digital Content (IC-NIDC), Guiyang, 2018: 193–197.
[10] 宋静, 张雪英, 孙颖, 等. 基于 PAD 情绪模型的情感语音识别 [J]. 微电子学与计算机, 2016, 33(9): 128–131.
    Song Jing, Zhang Xueying, Sun Ying, et al. Emotional speech recognition based on PAD emotion model[J]. Microelectronics & Computer, 2016, 33(9): 128–131.
[11] Benesty J, Sondhi M M, Huang Y. Springer handbook of speech processing[M]. Berlin: Springer-Verlag, 2008.
[12] Sandipan C, Anindya R, Sourav M. Capturing complementary information via reversed filter bank and parallel implementation with MFCC for improved text-independent speaker identification[C]//Proceedings of the 2007 International Conference on Computing: Theory and Applications. Piscataway: IEEE, 2007: 463–467.
[13] 鲜晓东, 樊宇星. 基于 Fisher 比的梅尔倒谱系数混合特征提取方法 [J]. 计算机应用, 2014, 34(2): 558–561, 579.
    Xian Xiaodong, Fan Yuxing. Parameter extraction method for Mel frequency cepstral coefficients based on Fisher criterion[J]. Journal of Computer Applications, 2014, 34(2): 558–561, 579.
[14] 魏艳, 张雪英. 噪声条件下的语音特征 PLP 参数的提取 [J]. 太原理工大学学报, 2009, 40(3): 222–224.
    Wei Yan, Zhang Xueying. A PLP speech feature extraction method in noisy environment[J]. Journal of Taiyuan University of Technology, 2009, 40(3): 222–224.
[15] Haque S, Togneri R, Zaknich A. Perceptual features for automatic speech recognition in noisy environments[J]. Speech Communication, 2008, 51(1): 15–25.
[16] 魏艳. 改进 RASTA-PLP 语音特征参数提取算法研究 [D]. 太原: 太原理工大学, 2009.
    Wei Yan. Research on an improved RASTA-PLP speech feature extraction algorithm[D]. Taiyuan: Taiyuan University of Technology, 2009.
[17] Hermansky H, Morgan N, Bayya A, et al. RASTA-PLP speech analysis technique[C]//IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). IEEE, 1992.
[18] 韩文静, 李海峰, 阮华斌, 等. 语音情感识别研究进展综述 [J]. 软件学报, 2014, 25(1): 37–50.
    Han Wenjing, Li Haifeng, Ruan Huabin, et al. Review on speech emotion recognition[J]. Journal of Software, 2014, 25(1): 37–50.
[19] Gobl C, Chasaide A N. The role of voice quality in communicating emotion, mood and attitude[J]. Speech Communication, 2003, 40(1/2): 189–212.
[20] 高慧, 苏广川, 陈善广. 不同情绪状态下汉语语音的声学特征分析 [J]. 航天医学与医学工程, 2005(5): 350–354.
    Gao Hui, Su Guangchuan, Chen Shanguang. Acoustic features analysis of mandarin speech under various emotional status[J]. Space Medicine & Medical Engineering, 2005(5): 350–354.
[21] 赵力. 语音信号处理 [M]. 北京: 机械工业出版社, 2016: 11–14.
    Zhao Li. Speech signal processing[M]. Beijing: China Machine Press, 2016: 11–14.
[22] Adigwe A, Tits N, El Haddad K, et al. The emotional voices database: towards controlling the emotion dimension in voice generation systems[J]. arXiv: 1806.09514, 2018.
[23] Burkhardt F, Paeschke A, Rolfes M, et al. A database of German emotional speech[C]//Proceedings of INTERSPEECH 2005. Lisbon: ISCA, 2005: 1517–1520.