Acta Electronica Sinica ›› 2020, Vol. 48 ›› Issue (8): 1457-1464. DOI: 10.3969/j.issn.0372-2112.2020.08.001


A Dynamic Vision Sensor Sample Set Modeling Method Based on Frame Images

LU Xing-peng1,4, WANG Ming-yu1,2, CAO Yang3, ZHAO Rui-lian4, ZHOU Wu3, LI Zhao-lin1, WEI Shao-jun2

  1. Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China;
    2. Institute of Microelectronics, Tsinghua University, Beijing 100084, China;
    3. Beijing Aerospace Chenxin Science and Technology Ltd., Beijing 102308, China;
    4. School of Information Science and Technology, Beijing University of Chemical Technology, Beijing 100029, China
  • Received: 2019-04-17  Revised: 2019-12-02  Online: 2020-08-25  Published: 2020-08-25
  • Corresponding author: WANG Ming-yu
  • About the authors: LU Xing-peng (male, from Wuwei, Anhui) is a master's student at the School of Information Science and Technology, Beijing University of Chemical Technology, and was an intern in the Department of Computer Science and Technology, Tsinghua University, from April 2018 to January 2020; his research interests include image processing, pattern recognition, and deep learning. E-mail: luxingpeng@foxmail.com. CAO Yang (female, from Yingkou, Liaoning) works in the Strategic Development Research Office of Beijing Aerospace Chenxin Science and Technology Ltd., focusing on intelligent technologies for command information systems. ZHOU Wu (male, from Shanghai) is deputy general manager of Beijing Aerospace Chenxin Science and Technology Ltd., deputy head of the Big Data and Artificial Intelligence Expert Group of China Aerospace Science and Industry Corporation, and an expert in command information systems and equipment intelligence technology. LI Zhao-lin (male, from Daqing, Heilongjiang) is a professor in the Department of Computer Science and Technology, Tsinghua University, deputy director of the Mobile Computing Research Center at Tsinghua University, an expert in electronic component technology and in the basic research area of the 13th Five-Year Plan for science and technology innovation; his research interests include computer architecture, multi-core/many-core processors, artificial intelligence chips, and embedded systems.
  • Funding: China Postdoctoral Science Foundation (No.2019M650030); Tsinghua University Initiative Scientific Research Program (No.2018Z05JDX005)



Abstract: The dynamic vision sensor (DVS) uses an event-driven principle to rapidly extract moving objects, offering low latency, low memory usage, and high dynamic range. Current research shows that DVS-based neural networks have a clear speed advantage in fields such as object detection. However, the sample sets required to train such networks are mainly produced by DVS cameras, and efficient sample-set generation methods are lacking, which limits the application and development of these networks. Based on the operating principle of the DVS, this paper presents a frame-image-based DVS modeling and sample-set modeling method. Each pixel unit operates independently; after dynamic differencing and logical judgment, it outputs triggered address-event (AE) data, which are encoded and normalized to generate the sample sets needed for neural network training. Experimental results on modeling the MNIST and CIFAR-10 sample sets show that the proposed method closely matches real DVS cameras, and that the generated sample sets require significantly less storage than frame-image storage. The sample sets generated by the proposed method have also been verified by training and testing a typical convolutional neural network.

Key words: dynamic vision sensor, sample set modeling, event-driven, memory optimization
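The per-pixel event-generation step summarized in the abstract (independent pixel units, dynamic differencing against a stored reference, triggered address-event output) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function name `frames_to_events`, the logarithmic contrast threshold, and the `(t, x, y, polarity)` event layout are choices made here for clarity.

```python
import numpy as np

def frames_to_events(frames, threshold=0.1, eps=1e-6):
    """Convert a sequence of grayscale frames (floats in [0, 1]) into
    address-event (AE) tuples (t, x, y, polarity): each pixel fires
    independently when its log-intensity change since its last event
    exceeds the threshold, as in a DVS pixel circuit."""
    ref = np.log(frames[0] + eps)           # per-pixel reference level
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        cur = np.log(frame + eps)
        diff = cur - ref
        on = diff > threshold               # brightness increased -> ON event
        off = diff < -threshold             # brightness decreased -> OFF event
        for y, x in zip(*np.nonzero(on)):
            events.append((t, int(x), int(y), +1))
        for y, x in zip(*np.nonzero(off)):
            events.append((t, int(x), int(y), -1))
        fired = on | off
        ref[fired] = cur[fired]             # event-driven: reset only fired pixels
    return events
```

Static pixels produce no output at all, which is why an AE stream for sparse motion can occupy far less storage than the equivalent stack of frames.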
