Collaborative Visual SLAM#

Collaborative visual SLAM is compiled natively for both Intel® Core™ and Intel® Atom® processor-based systems. These tutorials use an Intel® Core™ processor-based system. If you are running an Intel® Atom® processor-based system, you must make the changes detailed in Collaborative Visual SLAM on Intel® Atom® Processor-Based Systems for collaborative visual SLAM to work.

  • Collaborative Visual SLAM with Two Robots: uses as input two ROS 2 bags that simulate two robots exploring the same area

    • The ROS 2 tool rviz2 is used to visualize the two robots, the server, and how the server merges the two local maps of the robots into one common map.

    • The output includes the estimated pose of the camera and visualization of the internal map.

    • All input and output are in standard ROS 2 formats.

  • Collaborative Visual SLAM with FastMapping Enabled: uses as an input a ROS 2 bag that simulates a robot exploring an area

    • Collaborative visual SLAM has the FastMapping algorithm integrated.

    • For more information on FastMapping, see How it Works.

    • The ROS 2 tool rviz2 is used to visualize the robot exploring the area and how FastMapping creates the 2D and 3D maps.

  • Collaborative visual SLAM also supports mapping and can operate in localization mode.

  • Collaborative Visual SLAM with Multi-Camera Feature: uses as an input a ROS 2 bag that simulates a robot with two Intel® RealSense™ cameras exploring an area.

    • Collaborative visual SLAM enables tracker frame-level pose fusion using a Kalman Filter (part of the loosely coupled solution for the multi-camera feature).

    • The ROS 2 tool rviz2 is used to visualize the estimated poses of the different cameras.

  • Collaborative Visual SLAM with 2D Lidar Enabled: uses as an input a ROS 2 bag that simulates a robot exploring an area

    • Collaborative visual SLAM enables 2D Lidar-based frame-to-frame tracking for RGB-D input.

    • The ROS 2 tool rviz2 is used to visualize the trajectory of the robot when 2D Lidar is used.

  • Collaborative Visual SLAM with Region-wise Remapping Feature: uses as input a ROS 2 bag that simulates a robot updating a pre-constructed keyframe/landmark map and 3D octree map, based on a manually specified region, in remapping mode.

    • The ROS 2 tool rviz2 is used to visualize the region-wise remapping process, including loading and updating the pre-constructed keyframe/landmark and 3D octree maps.

  • Collaborative Visual SLAM with GPU Offloading

    • Offloading to the GPU only works on systems with 11th or 12th Generation Intel® Core™ processors with Intel® Iris® Xe Integrated Graphics or Intel® UHD Graphics (see the check after this list).

  • Collaborative Visual SLAM on Intel® Atom® Processor-Based Systems

    • Describes how to run collaborative visual SLAM on an Intel® Atom® processor-based system. Full performance of collaborative visual SLAM on an Intel® Atom® processor cannot be assured; for better performance, Intel® highly recommends a system based on an Intel® Core™ processor.
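
Before trying GPU offloading, you can check whether your system has a supported processor and integrated GPU. A minimal check, assuming a Linux host with the standard lscpu and lspci tools:

    lscpu | grep "Model name"
    # Look for an 11th or 12th Generation Intel® Core™ processor.
    lspci | grep -iE "vga|display"
    # Look for Intel® Iris® Xe Graphics or Intel® UHD Graphics.
    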

Collaborative Visual SLAM with Two Robots#

Prerequisites:

  • The main input is a camera, either monocular, stereo, or RGB-D.

  • IMU and odometry data are supported as auxiliary inputs.

  1. Check if your installation has the amr-collab-slam Docker* image.

    docker images |grep amr-collab-slam
    #if you have it installed, the result is:
    amr-collab-slam
    

    Note

    If the image is not installed, continuing with these steps triggers a build that takes longer than an hour (sometimes, a lot longer depending on the system resources and internet connection).

  2. If the image is not installed, Intel® recommends re-installing the EI for AMR Robot Kit with the Get Started Guide for Robots.

  3. Check that EI for AMR environment is set:

    echo $AMR_TUTORIALS
    # should output the path to EI for AMR tutorials
    /home/user/edge_insights_for_amr/Edge_Insights_for_Autonomous_Mobile_Robots_2023.1/AMR_containers/01_docker_sdk_env/docker_compose/05_tutorials
    

    If nothing is output, refer to Get Started Guide for Robots Step 5 for information on how to configure the environment.
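
    As a temporary workaround for the current shell session only, you can export the variable manually, assuming the default installation path shown above (adjust the user name and release version to match your system):

    export AMR_TUTORIALS=/home/user/edge_insights_for_amr/Edge_Insights_for_Autonomous_Mobile_Robots_2023.1/AMR_containers/01_docker_sdk_env/docker_compose/05_tutorials
    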

  4. Run the collaborative visual SLAM algorithm using two bags simulating two robots going through the same area:

    docker compose -f $AMR_TUTORIALS/cslam.tutorial.yml up
    

    Expected result: On the server rviz2, both trackers are seen.

    • Red indicates the path robot 1 is taking right now.

    • Blue indicates the path robot 2 took.

    • Green indicates the points known to the server.

    ../_images/collab_slam.gif
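
    To confirm that both trackers and the server are running, you can inspect the container from a separate terminal. A minimal sketch that reuses the docker exec pattern shown later in this document (the container name may differ on your system):

    docker ps | grep collab-slam
    # Note the container ID, then open a shell inside it.
    docker exec -it <container_id> bash
    source ros_entrypoint.sh
    ros2 node list    # the tracker nodes and the server node should be listed
    ros2 topic list
    exit
    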

Collaborative Visual SLAM with FastMapping Enabled#

  1. Check that EI for AMR environment is set:

    echo $AMR_TUTORIALS
    # should output the path to EI for AMR tutorials
    /home/user/edge_insights_for_amr/Edge_Insights_for_Autonomous_Mobile_Robots_2023.1/AMR_containers/01_docker_sdk_env/docker_compose/05_tutorials
    

    If nothing is output, refer to Get Started Guide for Robots Step 5 for information on how to configure the environment.

  2. Run the collaborative visual SLAM algorithm with FastMapping enabled:

    docker compose -f $AMR_TUTORIALS/collab-slam-fastmapping.tutorial.yml up
    

    Expected result: On the opened rviz2, you see the visual SLAM keypoints, the 3D map, and the 2D map.

  3. You can disable the /univloc_tracker_0/local_map topic, the /univloc_tracker_0/fused_map topic, or both in rviz2 (a topic check sketch follows the visible tests below).

    Visible Test: Showing keypoints, the 3D map, and the 2D map

    Expected Result:

    ../_images/c-slam-fm-full.png

    Visible Test: Showing the 3D map

    Expected Result:

    ../_images/c-slam-fm-3D.png

    Visible Test: Showing the 2D map

    Expected Result:

    ../_images/c-slam-fm-2D.png

    Visible Test: Showing keypoints and the 2D map

    Expected Result:

    ../_images/c-slam-fm-keypoints.png
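
    To confirm that these map topics are being published before toggling them in rviz2, you can check their publishing rates from inside the container. A minimal sketch, assuming the topic names from step 3 (the container name may differ on your system):

    docker ps | grep collab-slam
    docker exec -it <container_id> bash
    source ros_entrypoint.sh
    ros2 topic hz /univloc_tracker_0/local_map   # should report a steady publishing rate
    ros2 topic hz /univloc_tracker_0/fused_map   # should report a steady publishing rate
    exit
    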

Collaborative Visual SLAM with Multi-Camera Feature#

Note: The following tutorial illustrates the part of the multi-camera feature in collaborative visual SLAM that uses a Kalman Filter to fuse SLAM poses from different trackers in a loosely coupled manner; each individual camera is treated as a separate tracker (ROS 2 node). The other parts of the multi-camera feature are not yet ready and will be integrated later.

  1. Check that EI for AMR environment is set:

    echo $AMR_TUTORIALS
    # should output the path to EI for AMR tutorials
    /home/user/edge_insights_for_amr/Edge_Insights_for_Autonomous_Mobile_Robots_2023.1/AMR_containers/01_docker_sdk_env/docker_compose/05_tutorials
    

    If nothing is output, refer to Get Started Guide for Robots Step 5 for information on how to configure the environment.

  2. Run the collaborative visual SLAM algorithm with tracker frame-level pose fusion using Kalman Filter:

    docker compose -f $AMR_TUTORIALS/collab-slam-2-cameras-robot-localization.yml up
    

    Expected result: On the opened rviz2 windows, you see the pose trajectory output for each camera.

  3. Use the Python script in a separate terminal to visualize the three trajectories obtained from the ROS 2 topics /univloc_tracker_0/kf_pose, /univloc_tracker_2/kf_pose, and /odometry/filtered.

    cd $CONTAINER_BASE_PATH/01_docker_sdk_env/artifacts/01_amr/amr_generic/config/cslam_multi_camera
    python3 cslam-multi-camera-traj-compare.py
    

    Expected result: In the window opened by the script, three trajectories are shown. An example image follows.

    • Blue indicates the trajectory generated by the front camera.

    • Gray indicates the trajectory generated by the rear camera.

    • Red indicates the fused trajectory generated by the Kalman Filter.

    The trajectory from the Kalman Filter should be the fused result of the other two trajectories, indicating that multi-camera pose fusion is working properly.

    ../_images/compare_trajectories.png
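
    If the window stays empty, you can first verify that the three topics consumed by the script are being published. A minimal sketch, reusing the docker exec pattern from the other tutorials (the container name may differ on your system):

    docker ps | grep collab-slam
    docker exec -it <container_id> bash
    source ros_entrypoint.sh
    ros2 topic hz /univloc_tracker_0/kf_pose
    ros2 topic hz /univloc_tracker_2/kf_pose
    ros2 topic hz /odometry/filtered
    exit
    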

Collaborative Visual SLAM with 2D Lidar Enabled#

  1. Check that EI for AMR environment is set:

    echo $AMR_TUTORIALS
    # should output the path to EI for AMR tutorials
    /home/user/edge_insights_for_amr/Edge_Insights_for_Autonomous_Mobile_Robots_2023.1/AMR_containers/01_docker_sdk_env/docker_compose/05_tutorials
    

    If nothing is output, refer to Get Started Guide for Robots Step 5 for information on how to configure the environment.

  2. Run the collaborative visual SLAM algorithm with auxiliary Lidar data input:

    docker compose -f $AMR_TUTORIALS/collab-slam-2d-lidar.tutorial.yml up
    
  3. Use docker exec in a separate terminal to debug and inspect the output ROS 2 topics. You can check whether a certain topic has been published and view its messages.

    docker ps
    # This command will list the running docker containers. Find the container id for c-slam.
    
    docker exec -it <container_id_for_c-slam> bash
    
    # Inside the container, you can run the following commands.
    source ros_entrypoint.sh
    ros2 node list
    ros2 topic list
    ros2 topic echo /univloc_tracker_0/lidar_states
    exit
    

    Expected result: The values of pose_failure_count and feature_failure_count should not remain at 0 (their default values); they should increase over time. On the opened rviz2 window, you see the pose trajectory when 2D Lidar data is used.

    header:
      stamp:
        sec: 1
        nanosec: 683876706
      frame_id: ''
    feature_failure_count: 30
    pose_failure_count: 1
    
    ../_images/use_lidar.png
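
    You can also verify that the 2D Lidar input itself is present. A minimal sketch, run inside the container as in the commands above; the /scan topic name is an assumption, so use whichever LaserScan topic the list shows:

    ros2 topic list -t | grep -i LaserScan   # find the 2D Lidar topic and its message type
    ros2 topic hz /scan                      # replace /scan with the topic found above
    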

Collaborative Visual SLAM with Region-wise Remapping Feature#

  1. Check that EI for AMR environment is set:

    echo $AMR_TUTORIALS
    # should output the path to EI for AMR tutorials
    /home/user/edge_insights_for_amr/Edge_Insights_for_Autonomous_Mobile_Robots_2023.1/AMR_containers/01_docker_sdk_env/docker_compose/05_tutorials
    

    If nothing is output, refer to Get Started Guide for Robots Step 5 for information on how to configure the environment.

  2. Run the collaborative visual SLAM algorithm in mapping mode to construct the keyframe/landmark and 3D octree map:

    docker compose -f $AMR_TUTORIALS/collab-slam-remapping-mapping.tutorial.yml up
    

    Expected result: On the opened server rviz2, you see the keyframe/landmark map constructed in mapping mode.

    ../_images/constructed_keyframes_and_landmarks_map.png

    On the opened tracker rviz2, you see the 3D octree map constructed in mapping mode.

    ../_images/constructed_octree_map.png
  3. To stop the robot from mapping and save the keyframe/landmark and 3D octree maps, do the following:

    • Type Ctrl-c in the terminal where the collab-slam-remapping-mapping.tutorial.yml was run.

    • Remove the stopped containers:

    docker compose -f $AMR_TUTORIALS/collab-slam-remapping-mapping.tutorial.yml down
    
  4. Run the collaborative visual SLAM algorithm in remapping mode to load and update the pre-constructed keyframe/landmark and 3D octree maps:

    docker compose -f $AMR_TUTORIALS/collab-slam-remapping-remapping.tutorial.yml up
    

    Expected result: On the opened server rviz2, you see the pre-constructed keyframe/landmark map (built in mapping mode) loaded. Within the remapping region, the corresponding part of the map is deleted.

    ../_images/loaded_keyframes_and_landmarks_map.png

    On the opened tracker rviz2, you initially see the loaded 3D octree map.

    ../_images/loaded_octree_map.png

    On the opened tracker rviz2, after the bag finishes playing, you see that the 3D octree map inside the remapping region has been updated.

    ../_images/updated_map_after_remapping.png
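
    To know when bag playback has finished and to follow the remapping progress, you can tail the compose logs in a separate terminal (a minimal sketch; the exact log messages depend on the release):

    docker compose -f $AMR_TUTORIALS/collab-slam-remapping-remapping.tutorial.yml logs -f
    # Watch for messages about loading the pre-constructed maps and for the end of bag playback.
    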
  5. To stop the robot from remapping, do the following:

    • Type Ctrl-c in the terminal where the collab-slam-remapping-remapping.tutorial.yml was run.

    • Remove the stopped containers:

    docker compose -f $AMR_TUTORIALS/collab-slam-remapping-remapping.tutorial.yml down
    

Collaborative Visual SLAM with GPU Offloading#

  1. Check if your installation has the amr-collab-slam-gpu Docker* image.

    docker images |grep amr-collab-slam-gpu
    #if you have it installed, the result is:
    amr-collab-slam-gpu
    

    Note

    If the image is not installed, continuing with these steps triggers a build that takes longer than an hour (sometimes, a lot longer depending on the system resources and internet connection).

  2. If the image is not installed, Intel® recommends installing the Robot Complete Kit with the Get Started Guide for Robots.

  3. Check that EI for AMR environment is set:

    echo $AMR_TUTORIALS
    # should output the path to EI for AMR tutorials
    /home/user/edge_insights_for_amr/Edge_Insights_for_Autonomous_Mobile_Robots_2023.1/AMR_containers/01_docker_sdk_env/docker_compose/05_tutorials
    

    If nothing is output, refer to Get Started Guide for Robots Step 5 for information on how to configure the environment.

  4. Run the collaborative visual SLAM algorithm with GPU offloading:

    docker compose -f $AMR_TUTORIALS/collab-slam-gpu.tutorial.yml up
    

    Expected result: On the opened rviz2, you see the visual SLAM keypoints, the 3D map, and the 2D map

    ../_images/c-slam-fm-full.png
  5. In a different terminal, check how much of the GPU is being used with intel_gpu_top.

    sudo apt-get install intel-gpu-tools
    sudo intel_gpu_top
    
    ../_images/kudan_slam_gpu_top.png
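
    If intel_gpu_top shows no activity, first confirm that the GPU render node is visible on the host (a minimal check; the exact node name, such as renderD128, can vary):

    ls -l /dev/dri
    # Expect a card* device and a render node such as renderD128, owned by the render group.
    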
  6. To close this execution, close the rviz2 window, and press Ctrl-c in the terminal.

  7. Clean up the Docker* images:

    docker compose -f $AMR_TUTORIALS/collab-slam-gpu.tutorial.yml down --remove-orphans
    

Collaborative Visual SLAM on Intel® Atom® Processor-Based Systems#

  1. Open the collaborative visual SLAM yml file for editing (depending on which tutorial you want to run, replace <collab-slam-tutorial> with cslam, collab-slam-fastmapping, collab-slam-gpu, etc.):

    cd $CONTAINER_BASE_PATH
    gedit 01_docker_sdk_env/docker_compose/05_tutorials/<collab-slam-tutorial>.tutorial.yml
    
  2. Replace this line:

    source /home/eiforamr/workspace/ros_entrypoint.sh
    

    With these lines:

    unset CMAKE_PREFIX_PATH
    unset AMENT_PREFIX_PATH
    unset LD_LIBRARY_PATH
    unset COLCON_PREFIX_PATH
    unset PYTHONPATH
    source /home/eiforamr/workspace/CollabSLAM/prebuilt_collab_slam_atom/setup.bash
    

Troubleshooting#

  • If the tracker (univloc_tracker_ros) fails to start, giving this error:

    amr-collab-slam | [ERROR] [univloc_tracker_ros-2]: process has died [pid 140, exit code -4, cmd '/home/eiforamr/workspace/CollabSLAM/prebuilt_collab_slam_core/univloc_tracker/lib/univloc_tracker/univloc_tracker_ros --ros-args -r __node:=univloc_tracker_0 -r __ns:=/ --params-file /tmp/launch_params_zfr70odz -r /tf:=tf -r /tf_static:=tf_static -r /univloc_tracker_0/map:=map'].
    

    See Collaborative Visual SLAM on Intel® Atom® Processor-Based Systems.

  • The odometry feature use_odom:=true does not work with these bags.

    The ROS 2 bags used in this example do not have the necessary topics recorded for the odometry feature of collaborative visual SLAM.

    If the use_odom:=true parameter is set, the collab-slam reports errors.
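
    To confirm that a bag lacks odometry data, you can inspect its recorded topics with ros2 bag info. A minimal sketch, assuming the ros2 CLI is available in your shell (for example, inside the collab-slam container) and that the bags live under the 06_bags folder mentioned in the next item; <bag_folder> is a placeholder for the bag you want to check:

    cd $CONTAINER_BASE_PATH/01_docker_sdk_env/docker_compose/06_bags
    ros2 bag info <bag_folder>
    # The topic list in the output should contain no odometry topic, which is why use_odom:=true fails.
    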

  • The bags fail to play.

    The collab-slam Docker* container is started with the local user and needs access to the ROS 2 bags folder.

    Make sure that your local user has read and write access to this path: <path to edge_insights_for_amr>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers/01_docker_sdk_env/docker_compose/06_bags

    The best way to do this is to make your user the owner of the folder. If the EI for AMR bundle was installed with sudo, chown the folder to your local user.
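
    A minimal sketch of the ownership change, assuming your user has a group of the same name (the default on Ubuntu) and using the path placeholder from above:

    sudo chown -R "$USER":"$USER" <path to edge_insights_for_amr>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers/01_docker_sdk_env/docker_compose/06_bags
    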

  • If the following error is encountered:

    amr-collab-slam-gpu | [univloc_tracker_ros-2] /workspace/src/gpu/l0_rt_helpers.h:56: L0 error 78000001
    

    The render group might have a different ID than 109, which is the value used in the yml files for these examples.

    To find what ID the render group has on your system:

    getent group render
    cat /etc/group |grep render
    

    If the result is not render:x:109, change the yml file:

    gedit 01_docker_sdk_env/docker_compose/05_tutorials/collab-slam-gpu.tutorial.yml
    # Change the value at line 26 from 109 to the number you got above.
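    
    # To extract just the numeric group ID for the yml edit, a minimal one-liner
    # using the same getent command as above (prints only the ID to use in place of 109):
    getent group render | cut -d: -f3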
    
  • For general robot issues, go to: Troubleshooting for Robot Tutorials.