Artificial neural network
Recurrent neural network
Pattern recognition (psychology)
Feature extraction
Deep learning
Feature (linguistics)
Machine learning
Representation (politics)
Identification (biology)
Task (project management)
Event (particle physics)
Authors
Yanxiang Wang, Xian Zhang, Yiran Shen, Bowen Du, Guangrong Zhao, Lizhen Cui, Hongkai Wen
Identifier
DOI:10.1109/tpami.2021.3054886
Abstract
Dynamic vision sensors (event cameras) have recently been introduced to solve a number of vision tasks such as object recognition, activity recognition, and tracking. Compared with traditional RGB sensors, event cameras have unique advantages such as ultra-low resource consumption, high temporal resolution, and a much larger dynamic range. However, these cameras only produce noisy, asynchronous events of intensity changes, i.e., event streams rather than frames, to which conventional computer vision algorithms cannot be directly applied. We hold that the key challenge in improving the performance of event cameras on vision tasks is finding appropriate representations of the event streams, so that cutting-edge learning approaches can be applied to fully uncover the spatial-temporal information they contain. In this paper, we focus on the event-based human gait identification task and investigate possible representations of the event streams when deep neural networks are applied as the classifier. We propose new event-based gait recognition approaches based on two different representations of the event stream, i.e., graph and image-like representations, and use a Graph Convolutional Network (GCN) and Convolutional Neural Networks (CNN), respectively, to recognize gait from the event streams.
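To make the "image-like representation" concrete, here is a minimal sketch of one common way to turn an asynchronous event stream into a frame a CNN can consume: counting positive- and negative-polarity events per pixel into a two-channel grid. The function name, event layout `(x, y, t, p)`, and toy data are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate an event stream into a 2-channel image-like frame.

    `events` is an (N, 4) array of rows (x, y, t, p), where p in {0, 1}
    is the polarity of the intensity change. Negative events are counted
    in channel 0, positive events in channel 1. (Illustrative layout.)
    """
    frame = np.zeros((2, height, width), dtype=np.float32)
    for x, y, _, p in events:
        frame[int(p), int(y), int(x)] += 1.0
    return frame

# Toy event stream: (x, y, timestamp, polarity)
events = np.array([
    [3, 2, 0.001, 1],   # two positive events at pixel (3, 2)
    [3, 2, 0.002, 1],
    [5, 4, 0.003, 0],   # one negative event at pixel (5, 4)
])
frame = events_to_frame(events, height=8, width=8)
```

The graph representation would instead treat each event (or a downsampled subset) as a node with spatio-temporal coordinates and connect nearby events with edges, which is what makes a GCN applicable.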