ZHANG Qing-qing, LIU Yong, PAN Jie-lin, YAN Yong-hong. Continuous speech recognition by convolutional neural networks[J]. Chinese Journal of Engineering, 2015, 37(9): 1212-1217. DOI: 10.13374/j.issn2095-9389.2015.09.015


Continuous speech recognition by convolutional neural networks

  • Abstract: In speech recognition, convolutional neural networks (CNNs) can greatly compress model size while maintaining performance, compared with the now widely used deep neural networks (DNNs). This paper analyzes in depth how different structures of the convolution and pooling layers in a CNN affect recognition performance, and compares CNNs with the widely used DNN model. Experimental results on the standard TIMIT speech corpus and on a large-vocabulary, speaker-independent telephone conversational speech corpus show that, compared with the traditional DNN model, the CNN markedly reduces model size while achieving better recognition performance and stronger generalization ability.


    Abstract: Convolutional neural networks (CNNs), which have succeeded in achieving translation invariance in many image processing tasks, were investigated for continuous speech recognition. Compared with deep neural networks (DNNs), which have proven successful in many speech recognition tasks, CNNs can significantly reduce model size while achieving even better recognition accuracy. Experiments on the standard TIMIT speech corpus and a conversational speech corpus show that CNNs outperform DNNs in both accuracy and generalization ability.
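The model-size reduction the abstract attributes to CNNs comes from weight sharing: a convolutional filter reuses one small set of weights at every position of the input, whereas a fully connected DNN layer has a separate weight for every input-output pair. The sketch below illustrates this with parameter counts; the layer sizes (a 40-dimensional filterbank over an 11-frame context, 1024 hidden units, 128 filters of width 8) are illustrative assumptions, not the configuration used in the paper.

```python
def dnn_params(n_in: int, n_out: int) -> int:
    """Fully connected layer: one weight per input-output pair, plus biases."""
    return n_in * n_out + n_out

def cnn_params(kernel_size: int, n_filters: int, n_channels: int = 1) -> int:
    """Convolutional layer: each filter's weights are shared across all
    positions of the input, so the count is independent of the input size."""
    return kernel_size * n_filters * n_channels + n_filters

# Illustrative sizes: 40-dim filterbank features over an 11-frame context,
# flattened to 440 inputs for the DNN layer.
full = dnn_params(440, 1024)   # 451,584 parameters
conv = cnn_params(8, 128)      # 1,152 parameters
print(full, conv)
```

Because the convolutional layer's parameter count does not grow with the input dimension, stacking such layers (with pooling in between) keeps the acoustic model far smaller than a DNN of comparable depth.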


