Autonomous Navigation using Lidar Sensor in ROS and GAZEBO
LiDAR
Computer Science
Remote Sensing
Computer Vision
Geology
Authors
K. Subhashini, G. Rathiksha, B. Keerthana, P. M. Amirthavarshini
Identifier
DOI:10.1109/icwite59797.2024.10502689
Abstract
The field of autonomous navigation has advanced significantly. Autonomous vehicles depend on their perception systems to obtain critical data about their immediate environment. This article describes the development of an autonomous Segway model that uses a LiDAR sensor. The Segway operates only on smooth indoor surfaces and follows a designated path while dodging obstacles. The LiDAR-based autonomous navigation system encompasses the development of algorithms for obstacle detection, path planning, and localization. In response to the imperative requirements of safety and precision, LiDAR has been integrated as a supplementary element alongside camera- and radar-based perception systems. Moreover, contemporary practice involves planning most processes through simulation, enabling the anticipation and resolution of future production issues. Therefore, the entire simulation is carried out within the Gazebo simulation environment, and all data communication is managed through ROS (Robot Operating System), which provides a robust platform for implementation on the Segway. ROS offers specific configuration options and functionality for a mobile Segway operating in autonomous navigation mode. The workflow comprises phases for building a map, localizing within it, and navigating it. The simulation control parameters, as well as the Segway model itself, have been optimized and fine-tuned through experiments.
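The abstract mentions obstacle detection from LiDAR data feeding a motion decision. The paper's actual algorithm is not reproduced here, so the sketch below only illustrates a common minimal pattern for this step: split the laser scan into right/front/left sectors, drive straight while the front sector is clear, and otherwise rotate toward the side with more clearance. The function name `plan_motion` and all thresholds are illustrative assumptions; in a real ROS node this logic would run inside a `sensor_msgs/LaserScan` subscriber callback and its output would be published as a `geometry_msgs/Twist`.

```python
# Hypothetical sketch of a LiDAR obstacle-avoidance decision step for an
# indoor Segway; not the paper's published algorithm. ROS wiring (rospy
# node, /scan subscriber, /cmd_vel publisher) is deliberately omitted so
# the logic stays self-contained.

def plan_motion(ranges, safe_dist=0.6, cruise_speed=0.3, turn_speed=0.8):
    """Return a (linear, angular) velocity pair from a list of LiDAR ranges.

    ranges: distances in metres, ordered from the robot's right (-90 deg)
    to its left (+90 deg); the middle third of the scan is treated as
    'ahead'. All thresholds are illustrative, not from the paper.
    """
    n = len(ranges)
    third = n // 3
    right, front, left = ranges[:third], ranges[third:2 * third], ranges[2 * third:]

    if min(front) > safe_dist:
        return cruise_speed, 0.0   # path ahead is clear: go straight
    # Front sector blocked: rotate in place toward the clearer side.
    if max(left, default=0.0) >= max(right, default=0.0):
        return 0.0, turn_speed     # positive angular velocity = turn left
    return 0.0, -turn_speed        # turn right
```

In a rospy callback, the returned pair would simply be copied into `Twist.linear.x` and `Twist.angular.z` before publishing; the sector-based rule is a stand-in for the paper's planner, which the abstract does not detail.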