Artificial intelligence
Odometry
Inertial measurement unit
Computer science
Computer vision
Convolutional neural network
Feature (linguistics)
Robotics
Deep learning
Position (finance)
Sensor fusion
Inertial navigation system
Mean squared error
Visual odometry
Inertial frame of reference
Robot
Mobile robot
Mathematics
Physics
Economics
Finance
Philosophy
Statistics
Quantum mechanics
Linguistics
Authors
Muhammet Fatih Aslan, Akif Durdu, Abdullah Yusefi, Alper Yılmaz
Identifier
DOI:10.1016/j.neunet.2022.09.001
Abstract
Sensor fusion is used to solve the localization problem in autonomous mobile robotics applications by integrating complementary data acquired from various sensors. In this study, we adopt Visual-Inertial Odometry (VIO), a low-cost sensor fusion method that integrates inertial data with images using a Deep Learning (DL) framework to predict the position of an Unmanned Aerial System (UAS). The developed system has three steps. The first step extracts features from images acquired by the platform camera and uses a Convolutional Neural Network (CNN) to project them to a visual feature manifold. Next, temporal features are extracted from the platform's Inertial Measurement Unit (IMU) data using a Bidirectional Long Short-Term Memory (BiLSTM) network and are projected to an inertial feature manifold. The final step estimates the UAS position by fusing the visual and inertial feature manifolds via a BiLSTM-based architecture. The proposed approach is tested with the public EuRoC (European Robotics Challenge) dataset and with simulation data generated within the Robot Operating System (ROS). The results on the EuRoC dataset show that the proposed approach achieves position estimates comparable to previous popular VIO methods. In addition, in the experiment with the simulation dataset, the UAS position is successfully estimated with a Root Mean Square Error (RMSE) of 0.167. The obtained results show that the proposed deep architecture is useful for UAS position estimation.
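The abstract describes a three-stage pipeline: a CNN that maps camera frames to a visual feature manifold, a BiLSTM that maps IMU sequences to an inertial feature manifold, and a BiLSTM-based fusion stage that regresses the UAS position. The following is a minimal PyTorch sketch of that pipeline, not the authors' implementation: the backbone layers, feature dimensions, class names (VisualEncoder, InertialEncoder, FusionVIO), and the assumption that image and IMU samples are synchronized one-to-one per time step are illustrative choices.

```python
# Minimal sketch of a CNN + BiLSTM visual-inertial odometry network (assumed layout).
import torch
import torch.nn as nn

class VisualEncoder(nn.Module):
    """Step 1: project each camera frame onto a visual feature manifold with a CNN."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, feat_dim)

    def forward(self, imgs):                              # imgs: (batch, seq, 3, H, W)
        b, t = imgs.shape[:2]
        x = self.cnn(imgs.flatten(0, 1)).flatten(1)       # (batch*seq, 128)
        return self.fc(x).view(b, t, -1)                  # (batch, seq, feat_dim)

class InertialEncoder(nn.Module):
    """Step 2: encode IMU sequences (3-axis accel + 3-axis gyro) with a BiLSTM."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.bilstm = nn.LSTM(6, feat_dim // 2, batch_first=True, bidirectional=True)

    def forward(self, imu):                               # imu: (batch, seq, 6)
        out, _ = self.bilstm(imu)
        return out                                        # (batch, seq, feat_dim)

class FusionVIO(nn.Module):
    """Step 3: fuse both feature manifolds with a BiLSTM and regress 3D position."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.visual = VisualEncoder(feat_dim)
        self.inertial = InertialEncoder(feat_dim)
        self.fusion = nn.LSTM(2 * feat_dim, 128, batch_first=True, bidirectional=True)
        self.head = nn.Linear(256, 3)                     # (x, y, z) position

    def forward(self, imgs, imu):
        fused = torch.cat([self.visual(imgs), self.inertial(imu)], dim=-1)
        out, _ = self.fusion(fused)
        return self.head(out)                             # (batch, seq, 3)

# Example: a batch of 2 sequences with 10 synchronized image/IMU steps each.
model = FusionVIO()
positions = model(torch.randn(2, 10, 3, 64, 64), torch.randn(2, 10, 6))
print(positions.shape)                                    # torch.Size([2, 10, 3])
```

Training such a model against ground-truth trajectories (e.g., from the EuRoC motion-capture data) would typically minimize a mean squared error between predicted and true positions, which is consistent with the RMSE metric reported in the abstract.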