Acta Electronica Sinica ›› 2021, Vol. 49 ›› Issue (1): 140-148. DOI: 10.12263/DZXB.20190238

• Research Article •

Lip Motion and Voice Consistency Recognition Based on Audio-Visual Matching of Vowel Pronunciation Events and Position Delay Analysis

ZHU Zheng-yu1,2, LIAO Li-ping1, YANG Chun-ling2, WANG Yong1, CAI Jun1, QIU Hua-yu1

  1. School of Cyberspace Security, Guangdong Polytechnic Normal University, Guangzhou, Guangdong 510665, China;
    2. School of Electronic and Information Engineering, South China University of Technology, Guangzhou, Guangdong 510641, China
  • Received: 2019-03-03  Revised: 2020-05-10  Online: 2021-01-25  Published: 2021-01-25
    • About the authors:
    • ZHU Zheng-yu was born in Guangzhou, Guangdong, in 1984. He is a postdoctoral researcher and lecturer whose research focuses on audio-visual multimodal signal processing. E-mail: zhuzhengyu0701@163.com. LIAO Li-ping was born in Xiamen, Fujian, in 1981. She is a professor, master's supervisor, and council member of the Guangdong Systems Engineering Society; her research covers software-defined networking, intelligent routing, and big data processing and applications. E-mail: liping1110@hotmail.com
    • Supported by:
    • National Natural Science Foundation of China (No.61672173); Youth Innovative Talents of Colleges and Universities in Guangdong Province (No.2018KQNCX140)

Abstract: Conventional lip-voice consistency judgment methods analyze the whole sentence (or segment) without screening the content to be analyzed, which leads to an oversized dictionary and high computational complexity, and leaves the result vulnerable to weakly correlated segments such as silence. Taking vowels (finals), whose pronunciation causes pronounced lip-shape changes, as representative pronunciation events, and drawing on statistics of the distribution range of the initial audio-lip delay, this paper proposes a consistency judgment method based on audio-visual matching of vowel pronunciation events and position-delay analysis. First, the proposed audio-visual vowel segmentation method screens the vowel segments from the dictionary-learning data. The learned vowel dictionary is then used to evaluate the audio-lip matching degree of each vowel event, and the delay distribution at each vowel position is scored statistically. Finally, consistency is decided by a scoring mechanism that fuses the vowel-event audio-lip matching score with the position-delay analysis score. Experimental results show that the proposed algorithm outperforms several comparison algorithms in recognition performance and requires less computation than the traditional dictionary-based method.

Key words: coherence analysis, initial/final segmentation, dictionary learning

CLC Number:
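
To make the scoring mechanism summarized in the abstract more concrete, the Python sketch below shows one possible way to fuse a dictionary-based audio-lip matching score with a position-delay score. It is only an illustrative sketch: the function names, the least-squares coding over the vowel dictionary, the Gaussian-style delay weighting, and the fusion weight alpha are assumptions made here, not details taken from the paper.

import numpy as np

def vowel_matching_score(av_feature, dictionary):
    # Code the joint audio-lip feature of one vowel event over the learned
    # vowel dictionary; a small reconstruction residual indicates good
    # audio-lip agreement. A sparse coder (e.g. OMP) would be used in
    # practice; plain least squares keeps this sketch dependency-free.
    codes, *_ = np.linalg.lstsq(dictionary, av_feature, rcond=None)
    residual = np.linalg.norm(av_feature - dictionary @ codes)
    return 1.0 / (1.0 + residual)

def position_delay_score(delays, mean_delay, std_delay):
    # Score how well the audio-lip onset delays observed at the vowel
    # positions fit the statistical delay range (Gaussian-style weighting,
    # an assumption of this sketch).
    z = (np.asarray(delays, dtype=float) - mean_delay) / std_delay
    return float(np.mean(np.exp(-0.5 * z ** 2)))

def fused_consistency_score(av_features, dictionary, delays,
                            mean_delay, std_delay, alpha=0.6):
    # Linear fusion of the average per-vowel matching score and the
    # position-delay score; alpha is an illustrative weight.
    match = float(np.mean([vowel_matching_score(f, dictionary)
                           for f in av_features]))
    delay = position_delay_score(delays, mean_delay, std_delay)
    return alpha * match + (1.0 - alpha) * delay

# Example with stand-in data: 20-dimensional joint features, an 8-atom
# dictionary, and onset delays (in frames) for five detected vowels.
rng = np.random.default_rng(0)
dictionary = rng.standard_normal((20, 8))
events = [rng.standard_normal(20) for _ in range(5)]
score = fused_consistency_score(events, dictionary,
                                delays=[2, 3, 2, 4, 3],
                                mean_delay=2.5, std_delay=1.0)
print(f"fused consistency score: {score:.3f}")
# A threshold learned on development data would turn this score into a
# consistent / inconsistent decision.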