Keywords
Computer science, Artificial intelligence, Optical flow, Fusion, Computer vision, Supervised learning, Sensor fusion, Deep learning, Feature extraction, Machine learning, Artificial neural network
Authors
Cong Li,Dianxi Shi,Ruihao Li,Huachi Xu
Identifier
DOI:10.1109/ijcnn52387.2021.9534106
Abstract
Compared with conventional cameras, dynamic vision sensors (namely, event cameras) are especially suitable for high-speed and high-dynamic applications due to their advantages (high dynamic range, low latency, etc.). However, they still struggle in low-speed and low-texture scenes. Aiming at the problem of optical flow estimation, we propose a novel unsupervised learning method that takes both event data and gray image frames as input. Two different fusion mechanisms are presented and discussed in our work: one is the Direct Fusion Network (DFEV-FlowNet), and the other is the Local Squeeze Extraction Network (LSENet). DFEV-FlowNet directly fuses synthesized event frames and gray image frames, while LSENet adopts an adaptive local squeeze-extraction weighting mechanism. To achieve self-supervised learning, photometric constraints between consecutive frames drive the network training. We use the public event dataset MVSEC to evaluate the proposed optical flow estimation method qualitatively and quantitatively. The results show that our method achieves better estimation accuracy.
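The abstract's self-supervision signal can be illustrated with a minimal sketch: the predicted flow warps the next gray frame back toward the current one, and the pixel-wise difference serves as the training loss. This is not the authors' code; the function name and shapes are hypothetical, and it assumes a standard bilinear-warping photometric loss as commonly used in unsupervised optical flow.

```python
# Minimal sketch of a photometric consistency loss (illustrative, not the
# paper's implementation). img1, img2 are consecutive gray frames; flow is
# the predicted optical flow from img1 to img2.
import torch
import torch.nn.functional as F

def photometric_loss(img1, img2, flow):
    """img1, img2: (B, 1, H, W) gray frames; flow: (B, 2, H, W) flow img1 -> img2."""
    b, _, h, w = img1.shape
    # Base sampling grid in pixel coordinates, shape (1, 2, H, W)
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0).to(img1.device)
    # Shift the grid by the predicted flow, then normalize to [-1, 1]
    # as required by grid_sample
    coords = grid + flow
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    sample_grid = torch.stack((coords_x, coords_y), dim=-1)  # (B, H, W, 2)
    # Bilinearly warp the next frame back to the current frame's coordinates
    warped = F.grid_sample(img2, sample_grid, align_corners=True)
    # L1 photometric difference drives the unsupervised training signal
    return (warped - img1).abs().mean()
```

With a perfect flow prediction the warped next frame matches the current frame and the loss approaches zero; in practice such a loss is usually paired with a smoothness regularizer on the flow field.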