Acta Electronica Sinica ›› 2020, Vol. 48 ›› Issue (3): 503-509. DOI: 10.3969/j.issn.0372-2112.2020.03.012

• Research Papers •

LSCN: Concerning Long and Short Sequence Together for Action Recognition

YANG Ke, WANG Jing-yu, QI Qi, SUN Hai-feng, WANG Jing, LIAO Jian-xin   

  1. State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China
  • Received: 2019-03-13  Revised: 2019-05-28  Online: 2020-03-25  Published: 2020-03-25
    • Corresponding author: WANG Jing-yu
    • About the author: YANG Ke (male) was born in September 1995 in Dandong, Liaoning Province. He is a master's student at the State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications. His research focuses on video action recognition. E-mail: 18811582632@163.com
    • Supported by: National Natural Science Foundation of China (No.61771068, No.61671079); Beijing Natural Science Foundation (No.4182041)

Abstract: Compared with image analysis, how to model temporal information is a central challenge in action recognition. Most previous methods, such as 3D CNNs (convolutional neural networks) and two-stream CNNs, use only features containing global temporal information as the video representation, ignoring the importance of local temporal features. To address this problem, we propose long and short sequence concerned networks (LSCN), built on a temporal interaction perception module that fuses different kinds of temporal information. LSCN exploits the interactions of temporal features from different convolution layers to strengthen the representation of action instances of different temporal lengths, accommodating the temporal-information needs of both long and short actions. Experimental results show that LSCN based on 3D ResNext101 generalizes to two public datasets (UCF101 and HMDB51): compared with the base network, accuracy improves by 0.4% and 2.9%, respectively.

Key words: action recognition, temporal feature, feature fusion, human-computer interaction, video analysis, deep learning
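
To make the idea in the abstract concrete, the following is a minimal sketch of how a module that fuses temporal features from different convolution layers of a 3D backbone could be built. This is an illustration only, not the authors' implementation: the module name TemporalInteractionFusion, the choice of fused layers (res3/res5 of a 3D ResNeXt-style backbone), the tensor shapes, and the gated-residual fusion rule are all assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalInteractionFusion(nn.Module):
    """Hypothetical fusion of a shallow feature map (fine temporal
    resolution, local motion) with a deep one (coarse temporal
    resolution, global context) from a 3D CNN backbone."""

    def __init__(self, shallow_channels, deep_channels, fused_channels):
        super().__init__()
        # 1x1x1 convolutions project both feature maps to a shared width.
        self.proj_shallow = nn.Conv3d(shallow_channels, fused_channels, kernel_size=1)
        self.proj_deep = nn.Conv3d(deep_channels, fused_channels, kernel_size=1)

    def forward(self, shallow, deep):
        # shallow: (N, C_s, T_s, H_s, W_s); deep: (N, C_d, T_d, H_d, W_d)
        s = self.proj_shallow(shallow)
        d = self.proj_deep(deep)
        # Upsample the deep features to the shallow temporal/spatial size
        # so short actions are not washed out by coarse temporal pooling.
        d = F.interpolate(d, size=s.shape[2:], mode='trilinear', align_corners=False)
        # "Interaction": gate local features with global context, and keep
        # residual paths so neither time scale dominates the representation.
        fused = s * torch.sigmoid(d) + s + d
        # Global average pooling over (T, H, W) gives a clip-level descriptor.
        return fused.mean(dim=(2, 3, 4))

# Usage example with made-up activation shapes for res3 (fine) and res5 (coarse):
if __name__ == "__main__":
    fusion = TemporalInteractionFusion(shallow_channels=512, deep_channels=2048, fused_channels=256)
    res3 = torch.randn(2, 512, 16, 28, 28)   # local temporal detail
    res5 = torch.randn(2, 2048, 4, 7, 7)     # global temporal context
    print(fusion(res3, res5).shape)          # torch.Size([2, 256])

The gated-residual form is one plausible way to realize "interaction" between temporal scales; the paper's actual module may combine the layer features differently.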
