Unmanned Ground Vehicle
Occupancy Grid Mapping
Artificial Intelligence
Computer Science
Robot
Sensor Fusion
Computer Vision
Global Map
Probabilistic Logic
Simultaneous Localization and Mapping
Robotics
Grid
Lidar
Mobile Robot
Geography
Remote Sensing
Geodesy
Authors
Jianqiang Li, Yanyan Cheng, Jin Zhou, Jie Chen, Zun Liu, Shuqing Hu, Victor C. M. Leung
Source
Journal: IEEE Transactions on Green Communications and Networking
[Institute of Electrical and Electronics Engineers]
Date: 2021-08-24
Volume/Issue: 6 (1): 69-78
Citations: 12
Identifier
DOI: 10.1109/tgcn.2021.3107291
Abstract
With the development of science and technology, robots have been widely used in smart cities. Traversability mapping of the perceived environment is a prerequisite for robots to perform tasks. To reduce the energy consumed by traversability mapping for an unmanned ground vehicle (UGV), we fuse wide-coverage aerial images with a small number of ground images to provide vision for the UGV. Current map fusion methods are usually constrained by homogeneous robotic system models and a lack of diverse sensors, so they do not work well in heterogeneous collaborative robotic systems consisting of aerial and ground robots. In this paper, we use a heterogeneous robot system, including a UGV and unmanned aerial vehicles (UAV), to build an occupancy grid map that can be used for navigation. To fuse sensor data of different types, we propose a Collaborative Map Fusion algorithm based on Multi-task Gaussian Process Classification (MTGPC) for heterogeneous robotic systems. In addition, a probabilistic model is exploited in traversability mapping, so active perception can be used to build the map efficiently. Our system is tested in real scenes and achieves an accuracy of more than 70%. Map fusion using active perception outperforms map fusion using a random strategy in both speed and accuracy. To our knowledge, this is the first work to build an occupancy grid map from sparse data points sampled from aerial images and a ground lidar map.
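As a rough illustration of the idea described in the abstract (probabilistic traversability mapping over a grid plus uncertainty-driven active perception), the following minimal Python sketch uses scikit-learn's single-task GaussianProcessClassifier as a stand-in for the paper's MTGPC, with synthetic placeholder coordinates and labels rather than real aerial-image or lidar data. It is an assumption-laden sketch, not the authors' implementation.

```python
# Hypothetical sketch: a single-task GP classifier approximating the role of the
# paper's MTGPC-based collaborative map fusion. All data here is synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Sparse training samples: (x, y) grid coordinates labeled traversable (1) or not (0).
# In the paper these labels would be sampled from aerial images and a ground lidar map;
# here a toy rule generates them.
X_train = rng.uniform(0.0, 10.0, size=(60, 2))
y_train = (X_train[:, 0] + X_train[:, 1] > 10.0).astype(int)

# GP classifier over spatial coordinates; the RBF kernel encodes spatial smoothness.
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=2.0), random_state=0)
gpc.fit(X_train, y_train)

# Query every cell of a 20x20 occupancy grid for its traversability probability.
xs, ys = np.meshgrid(np.linspace(0, 10, 20), np.linspace(0, 10, 20))
cells = np.column_stack([xs.ravel(), ys.ravel()])
p_traversable = gpc.predict_proba(cells)[:, 1]

# Active perception heuristic: sense next where the prediction is most uncertain
# (highest Bernoulli entropy), rather than picking cells at random.
entropy = -(p_traversable * np.log(p_traversable + 1e-9)
            + (1.0 - p_traversable) * np.log(1.0 - p_traversable + 1e-9))
next_cell = cells[np.argmax(entropy)]
print("most uncertain cell to sense next:", next_cell)
```

The entropy-based selection at the end is one simple way to realize the "active perception beats random strategy" comparison mentioned in the abstract: cells whose predicted traversability is closest to 0.5 are measured first.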