电子学报 (Acta Electronica Sinica) ›› 2020, Vol. 48 ›› Issue (6): 1052-1061. DOI: 10.3969/j.issn.0372-2112.2020.06.002

• Research Article •

• About the first author: SANG Hai-feng (male, born 1978 in Shenyang, Liaoning) received his Ph.D. degree and is a professor at Shenyang University of Technology. His main research interests are visual inspection technology and image processing. E-mail: sanghaif@163.com

Recurrent Region Attention and Video Frame Attention Based Video Action Recognition Network Design

SANG Hai-feng1, ZHAO Zi-yu1, HE Da-kuo2   

  1. School of Information Science & Engineering, Shenyang University of Technology, Shenyang, Liaoning 110870, China;
    2. College of Information Science & Engineering, Northeastern University, Shenyang, Liaoning 110819, China
  • Received: 2019-06-03   Revised: 2019-11-22   Online: 2020-06-25   Published: 2020-06-25
    • Corresponding author:
    • ZHAO Zi-yu
    • Supported by:
    • National Natural Science Foundation of China (No.61773105, No.61374147); Natural Science Foundation of Liaoning Province (No.20170540675); Research Project of Education Department of Liaoning Province (No.LQGD2017023)


Abstract: In video frames, complex background environments, lighting conditions, and other visual information unrelated to the action introduce considerable redundancy and noise into the spatial features of actions, which affects the accuracy of action recognition to some extent. To address this, this paper proposes a recurrent region attention cell that captures the action-related regional visual information in spatial features; building on the sequential nature of video, a recurrent region attention (RRA) model is then proposed. Secondly, this paper proposes a video frame attention (VFA) model that highlights the more important frames within the whole action video sequence, so as to reduce the interference caused by the similar temporal context shared across video sequences of different action classes. Finally, this paper presents an end-to-end trainable network model: the Recurrent Region Attention and Video Frame Attention based video action recognition Network (RFANet). Experiments on two video action recognition benchmarks, the UCF101 and HMDB51 datasets, show that the proposed end-to-end RFANet reliably identifies the category of the action in a video. Inspired by the two-stream architecture, we also construct a two-modality RFANet. Under the same training conditions, the two-modality RFANet achieves the best performance on both datasets.
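The abstract describes two attention mechanisms: a region attention that weights the spatial locations of a frame's feature map (conditioned on a recurrent hidden state) and a frame attention that weights the frames of a whole sequence. The following is a minimal NumPy sketch of generic additive attention of this kind; the function names, weight shapes, and conditioning scheme are illustrative assumptions, not the paper's exact RRA/VFA formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def region_attention(feat_map, hidden, Wf, Wh, v):
    """Additive spatial attention over the N = H*W regions of one frame's
    CNN feature map, conditioned on a recurrent hidden state.
    feat_map: (N, D) region features; hidden: (H,) recurrent state.
    Wf: (D, K), Wh: (H, K), v: (K,) are learned parameters (hypothetical shapes).
    Returns the attended (D,) feature and the (N,) attention weights."""
    energy = np.tanh(feat_map @ Wf + hidden @ Wh)  # (N, K)
    alpha = softmax(energy @ v, axis=0)            # (N,) weights, sum to 1
    return alpha @ feat_map, alpha                 # weighted sum over regions

def frame_attention(frame_feats, w):
    """Temporal attention that scores each of the T per-frame features,
    emphasizing the more informative frames of the sequence.
    frame_feats: (T, D); w: (D,) learned scoring vector (hypothetical)."""
    beta = softmax(frame_feats @ w, axis=0)        # (T,) weights, sum to 1
    return beta @ frame_feats, beta                # video-level (D,) feature
```

In a full pipeline of this style, region attention would run inside the recurrent loop (one attended feature per frame feeding the RNN), and frame attention would then pool the resulting per-frame states into a single video-level representation for classification.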

Key words: action recognition, recurrent region attention, video frame attention, recurrent neural network

CLC number: