This paper presents an investigation of various ROS-based visual SLAM methods and analyzes their feasibility for a mobile robot application in a homogeneous indoor environment. We compare trajectories obtained by processing data from different sensors (a conventional camera, a LIDAR, a ZED stereo camera, and a Kinect depth sensor) during an experiment with a moving UGV prototype. These trajectories were computed by monocular ORB-SLAM, monocular DPPTAM, stereo ZedFu (based on ZED camera data), and RTAB-Map (based on MS Kinect 2.0 depth sensor data), and were verified against LIDAR-based Hector SLAM and tape-measure ground truth.