Computer science
Artificial intelligence
Calibration
LiDAR
Computer vision
Point cloud
Preprocessor
Frame (networking)
Transformation (genetics)
Artificial neural network
Data preprocessing
Camera resectioning
Pattern recognition (psychology)
Remote sensing
Mathematics
Telecommunications
Gene
Statistics
Geology
Biochemistry
Chemistry
Authors
An Nguyen, Myungsik Yoo
Source
Journal: IEEE Access [Institute of Electrical and Electronics Engineers]
Date: 2022-01-01
Volume: 10, pp. 121261-121271
Citations: 4
Identifier
DOI:10.1109/access.2022.3222797
Abstract
With the rapid growth of self-driving vehicles, automobiles demand diverse data from multiple sensors to perceive the surrounding environment. Calibration between multiple sensors is a necessary preprocessing step for using their data effectively. In particular, the LiDAR-camera pair, whose 2D and 3D information complement each other, has been widely used in autonomous vehicles. Most traditional calibration methods require specific calibration targets set up under complicated environmental conditions, which requires expensive manual work. In this study, we propose a deep neural network that requires neither specific targets nor an offline setup to find the six degrees of freedom (6 DoF) transformation between the LiDAR and the camera. Unlike previous deep learning CNN-based methods, which use raw 3D point clouds and 2D images frame by frame, the proposed CalibBD utilizes a Bi-LSTM on sequential data to extract temporal features between consecutive frames. It not only predicts the calibration parameters by minimizing both transformation and depth losses but also uses a temporal loss to refine those parameters. The proposed model achieves stable performance under various deviations of the mis-calibration parameters and achieves higher accuracy than the state-of-the-art CNN-based method on the KITTI dataset.
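The paper itself provides no code; the following is a minimal sketch of the idea described in the abstract, assuming per-frame fused LiDAR-camera features have already been extracted by a CNN backbone. The class name CalibBiLSTM, the feature dimensions, the rotation parameterization (three Euler angles plus translation), and the loss weighting are illustrative assumptions, and the paper's depth loss (re-projection of LiDAR points into the image) is omitted for brevity.

```python
# Hypothetical sketch: a Bi-LSTM regression head that maps a sequence of
# per-frame fused LiDAR-camera features to 6-DoF calibration corrections.
# The feature extractor, loss weights, and rotation parameterization are
# assumptions for illustration, not taken from the paper.
import torch
import torch.nn as nn


class CalibBiLSTM(nn.Module):  # hypothetical name
    def __init__(self, feature_dim=512, hidden_dim=256, num_layers=2):
        super().__init__()
        # Bi-LSTM consumes the temporal sequence of per-frame features.
        self.bilstm = nn.LSTM(feature_dim, hidden_dim, num_layers,
                              batch_first=True, bidirectional=True)
        # Regression head: 3 rotation values (e.g. Euler angles) + 3 translation per frame.
        self.head = nn.Linear(2 * hidden_dim, 6)

    def forward(self, seq_features):
        # seq_features: (batch, num_frames, feature_dim)
        temporal, _ = self.bilstm(seq_features)
        return self.head(temporal)  # (batch, num_frames, 6)


def calibration_loss(pred, target, w_temporal=0.1):
    """Transformation loss plus a temporal-consistency term (assumed form)."""
    trans_loss = nn.functional.smooth_l1_loss(pred, target)
    # Penalize jitter between consecutive-frame predictions; the true extrinsics
    # are (nearly) constant over a short sequence.
    temporal_loss = (pred[:, 1:] - pred[:, :-1]).abs().mean()
    return trans_loss + w_temporal * temporal_loss


if __name__ == "__main__":
    model = CalibBiLSTM()
    feats = torch.randn(4, 8, 512)        # 4 sequences of 8 frames each
    gt = torch.randn(4, 8, 6) * 0.05      # small ground-truth de-calibration
    loss = calibration_loss(model(feats), gt)
    loss.backward()
    print(loss.item())
```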