Keywords
Computer science
Odometry
Artificial intelligence
Computer vision
Visual odometry
Fuse (electrical)
Point cloud
Radar
Inertial measurement unit
Deep learning
Pose
Convolutional neural network
Metric (unit)
Mobile robot
Robot
Engineering
Telecommunications
Operations management
Electrical engineering
Authors
Chris Xiaoxuan Lu, Muhamad Risqi U. Saputra, Peijun Zhao, Yasin Almalıoğlu, Pedro P. B. de Gusmão, Changhao Chen, Ke Sun, Niki Trigoni, Andrew Markham
Source
Journal: Cornell University - arXiv
Date: 2020-01-01
Citations: 18
Identifiers
DOI: 10.48550/arxiv.2006.02266
Abstract
Robust and accurate trajectory estimation of mobile agents such as people and robots is a key requirement for providing spatial awareness for emerging capabilities such as augmented reality or autonomous interaction. Although the field is currently dominated by optical techniques, e.g., visual-inertial odometry, these suffer from challenges with scene illumination and featureless surfaces. As an alternative, we propose milliEgo, a novel deep-learning approach to robust egomotion estimation that exploits the capabilities of low-cost mmWave radar. Although mmWave radar has a fundamental advantage over monocular cameras in being metric, i.e., providing absolute scale or depth, current single-chip solutions have limited and sparse imaging resolution, making existing point-cloud registration techniques brittle. First, we propose a new architecture optimized for solving this challenging pose-transformation problem. Second, to robustly fuse mmWave pose estimates with additional sensors, e.g., inertial or visual sensors, we introduce a mixed attention approach to deep fusion. Through extensive experiments, we demonstrate that the proposed system achieves 1.3% 3D error drift and generalizes well to unseen environments. We also show that the neural architecture can be made highly efficient and suitable for real-time embedded applications.
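The abstract's "mixed attention approach to deep fusion" can be illustrated with a minimal sketch: per-modality gating weights are computed from the joint feature of two sensor streams and used to reweight each stream before regressing a relative pose. This is only an assumed reading of the idea, not the authors' implementation; the module name, feature dimensions, and the simple linear-plus-sigmoid gating are all hypothetical.

```python
# Hypothetical sketch of cross-modal mixed-attention fusion for egomotion,
# assuming flattened per-frame features from a mmWave radar encoder and an
# IMU encoder. Names and dimensions are illustrative, not from the paper.
import torch
import torch.nn as nn

class MixedAttentionFusion(nn.Module):
    def __init__(self, radar_dim: int, imu_dim: int, pose_dim: int = 6):
        super().__init__()
        fused_dim = radar_dim + imu_dim
        # One gating branch per modality, each conditioned on both modalities,
        # so an unreliable stream can be down-weighted frame by frame.
        self.radar_gate = nn.Sequential(nn.Linear(fused_dim, radar_dim), nn.Sigmoid())
        self.imu_gate = nn.Sequential(nn.Linear(fused_dim, imu_dim), nn.Sigmoid())
        # Regress a 6-DoF relative pose (3 translation + 3 rotation parameters).
        self.pose_head = nn.Sequential(
            nn.Linear(fused_dim, 128), nn.ReLU(), nn.Linear(128, pose_dim)
        )

    def forward(self, radar_feat: torch.Tensor, imu_feat: torch.Tensor) -> torch.Tensor:
        joint = torch.cat([radar_feat, imu_feat], dim=-1)
        # Reweight each stream by an attention mask derived from the joint feature.
        radar_weighted = radar_feat * self.radar_gate(joint)
        imu_weighted = imu_feat * self.imu_gate(joint)
        return self.pose_head(torch.cat([radar_weighted, imu_weighted], dim=-1))

# Usage with a batch of 8 frames (feature sizes are assumptions):
pose = MixedAttentionFusion(256, 128)(torch.randn(8, 256), torch.randn(8, 128))
print(pose.shape)  # torch.Size([8, 6])
```

The design choice sketched here, gating each modality on the concatenated evidence of both, is one common way to make a fusion network robust to a degraded sensor, which matches the abstract's motivation of fusing sparse mmWave pose estimates with inertial or visual cues.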