Acta Electronica Sinica ›› 2021, Vol. 49 ›› Issue (11): 2177-2185. DOI: 10.12263/DZXB.20201187

• Research Article •

  • About the authors: LÜ Kun-ru (male, born 1993 in Anyang, Henan) is a master's student at the College of Software, Jilin University. Research interest: machine learning. E-mail: 892539843@qq.com
    SHI Xiao-hu (corresponding author; male, born 1974 in Yutian, Hebei) is a professor at the College of Computer Science and Technology, Jilin University. Research interest: machine learning. E-mail: shixh@jlu.edu.cn
  • Funding:
    National Natural Science Foundation of China (61972174); Jilin Provincial Budgetary Capital Construction Fund (2021C044-1); International Science and Technology Cooperation Program of Guangdong Province (2020A0505100018); Natural Science Foundation of Jilin Province (20200201163JC)

An End-to-End Chinese Speech Recognition Algorithm Integrating Language Model

Kun-ru LÜ1, Chun-guo WU1,2,3, Yan-chun LIANG1,2,3, Yu-ping YUAN1, Zhi-min REN1, You ZHOU1,2, Xiao-hu SHI1,2,3   

  1. College of Computer Science and Technology, Jilin University, Changchun, Jilin 130012, China
    2. Ministry of Education Key Laboratory of Symbol Computation and Knowledge Engineering, Jilin University, Changchun, Jilin 130012, China
    3. School of Computer Science, Zhuhai College of Science and Technology, Zhuhai, Guangdong 519041, China
  • Received: 2020-10-23; Revised: 2021-07-20; Online: 2021-11-25; Published: 2021-11-25


Abstract:

To address the poor robustness of speech recognition models on Chinese speech and their lack of language-modeling ability, which prevents them from effectively distinguishing homophones and near-homophones, an end-to-end Chinese speech recognition algorithm integrating a language model is proposed. First, an acoustic model mapping speech to Pinyin is established based on a deep fully convolutional neural network (DFCNN) and connectionist temporal classification (CTC). Then, a language model mapping Pinyin to Chinese characters is constructed using the Transformer encoder. Finally, a speech frame decomposition model is designed to connect the output of the acoustic model to the input of the language model, overcoming the difficulty that the gradient of the loss function cannot propagate from the language model back to the acoustic model, and enabling joint end-to-end training of the acoustic and language models. The proposed method is validated on real data sets. Experimental results show that introducing the language model reduces the algorithm's word error rate (WER) by 21%, and that the end-to-end joint training plays a key role, accounting for a 43% effect on performance. Compared with five mainstream algorithms, the proposed method achieves a markedly lower error rate, reducing the WER by 28% relative to the best-performing baseline, DeepSpeech2.
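As background for the CTC component above: a CTC-trained acoustic model emits one label (or a blank) per speech frame, and decoding collapses consecutive repeats and removes blanks to obtain the token sequence. A minimal sketch of greedy CTC collapsing, assuming integer frame labels with 0 as the blank (the identifiers below are illustrative, not the paper's code):

```python
def ctc_greedy_collapse(frame_labels, blank=0):
    """Collapse a frame-level CTC label sequence:
    merge consecutive repeated labels, then drop blanks."""
    out = []
    prev = None
    for lab in frame_labels:
        if lab != prev and lab != blank:
            out.append(lab)
        prev = lab
    return out

# Frames [7, 7, 0, 7, 0, 3, 3] collapse to [7, 7, 3]:
# the blank between the two 7s keeps them as distinct tokens.
print(ctc_greedy_collapse([7, 7, 0, 7, 0, 3, 3]))  # [7, 7, 3]
```

Note that the blank symbol is what allows genuinely repeated tokens (e.g. the two 7s above) to survive the collapse.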

Key words: speech recognition, CTC, language model, acoustic model, speech frame decomposition
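The speech frame decomposition step exists precisely because a hard argmax from acoustic posteriors to discrete Pinyin tokens would block gradients from the language model. One standard way to keep such a link differentiable (a sketch under that assumption, not the paper's exact construction) is to feed the language model the posterior-weighted average of Pinyin embeddings rather than a one-hot pick:

```python
def soft_pinyin_embedding(posteriors, embeddings):
    """For each frame, return the expected Pinyin embedding under the
    acoustic model's posterior distribution. Because this is a weighted
    sum, gradients can flow from the language model back through the
    posteriors into the acoustic model."""
    out = []
    for frame_probs in posteriors:          # one distribution per frame
        dim = len(embeddings[0])
        vec = [0.0] * dim
        for p, emb in zip(frame_probs, embeddings):
            for d in range(dim):
                vec[d] += p * emb[d]
        out.append(vec)
    return out

# Two Pinyin tokens with 2-d embeddings; one frame split 70/30 between them.
emb = [[1.0, 0.0], [0.0, 1.0]]
print(soft_pinyin_embedding([[0.7, 0.3]], emb))  # [[0.7, 0.3]]
```

In a real system the embeddings and posteriors would be framework tensors (e.g. PyTorch) so that autograd handles the backward pass; the pure-Python version above only illustrates the weighted-sum structure that makes joint training possible.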
