EI for AMR 2023.1.0 Release Notes#
Revision History#
| Date | Software Version | Description |
|------|------------------|-------------|
| June 2023 | 2023.1.0 | Updated several features and libraries; added ADL-P support |
| January 20, 2023 | 2022.3.1 | Updated the Kudan Visual SLAM license file inside the amr-kudan-slam Docker* image |
| December 16, 2022 | 2022.3.0 | Document-only update: added ADL-P with Ubuntu* 22.04 support for evaluation |
| September 26, 2022 | 2022.3.0 | Update release |
New Features#
Added support for 12th Generation Intel® Core™ processors
Added support for multi-camera, region-wise remapping and 2D Lidar in Collab-SLAM
Added new modules, improvements, and bug fixes to PCL Optimization libraries
Added stereo camera support for GPU ORB (Oriented FAST and Rotated BRIEF)
Added Intel® RealSense™ depth image input support in ADBSCAN
Follow-me application based on ADBSCAN
Added support for Ackermann drive in the ITS planner
Enabled Lidar SLAM based on SLAM toolbox
Updates to Fleet Management
Updated OpenVINO™ to 2022.3 LTS
Kudan SLAM with updated core algorithm, new tutorials, and extended license validity
Platform Configuration#
Robot: Ubuntu* 22.04 Intel® IoT, ROS* 2 Foxy, kernel v5.15+
Server: Ubuntu* 20.04 LTS
Collab-SLAM Updates#
Added support for tracker frame-level pose fusion using a Kalman Filter (part of the loosely coupled solution for the multi-camera feature)
Added support for the region-wise remapping feature, which updates the pre-constructed keyframe/landmark map and octree map based on manual region input from the user in remapping mode
Added support for 2D Lidar-based frame-to-frame tracking for RGBD input
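The frame-level pose fusion above combines a pose predicted from odometry with a per-frame visual pose estimate. Below is a minimal one-dimensional Kalman filter sketch of that predict/update cycle; it is an illustration only, not Collab-SLAM code, and the actual fusion operates on full 6-DoF poses:

```python
# Minimal 1D Kalman filter sketch of frame-level pose fusion.
# Illustration only; Collab-SLAM fuses full 6-DoF poses.

def kalman_predict(x, p, u, q):
    """Propagate state x (variance p) by odometry increment u with process noise q."""
    return x + u, p + q

def kalman_update(x, p, z, r):
    """Fuse state x (variance p) with a visual pose measurement z (variance r)."""
    k = p / (p + r)          # Kalman gain
    x_new = x + k * (z - x)  # fused estimate
    p_new = (1.0 - k) * p    # reduced uncertainty after fusing
    return x_new, p_new

# One predict/update cycle for a single frame:
x, p = 0.0, 1.0
x, p = kalman_predict(x, p, u=0.5, q=0.01)   # odometry says we moved 0.5 m
x, p = kalman_update(x, p, z=0.55, r=0.04)   # visual pose suggests 0.55 m
print(round(x, 3), round(p, 3))
```

The gain `k` weights the visual measurement by its relative confidence, so a noisy camera pose pulls the fused estimate less than a precise one.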
Added support for multi-camera SLAM using RTAB-Map#
Enabled multi-camera support for up to four Intel® RealSense™ depth cameras for VSLAM
PCL Optimization Libraries Updates#
Added oneAPI version of Sample Consensus Initial Alignment
Added oneAPI version of Normal Estimation
Added oneAPI version of Statistical Outlier Removal
Added OpenMP version of Greedy Projection Triangulation
Added OpenMP version of Statistical Outlier Removal
Added GPU memory manager to support all oneAPI PCL modules
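As an illustration of what the Statistical Outlier Removal module computes: points whose mean distance to their k nearest neighbors exceeds the global mean by more than a multiple of the standard deviation are discarded. The oneAPI and OpenMP versions parallelize this same computation; the function below is a textbook pure-Python restatement, not code from the EI for AMR libraries:

```python
import math

def statistical_outlier_removal(points, k=2, std_mult=1.0):
    """Drop points whose mean k-nearest-neighbor distance exceeds
    the global mean by more than std_mult standard deviations.
    Pure-Python sketch of the classic PCL algorithm."""
    mean_knn = []
    for p in points:
        ds = sorted(math.dist(p, q) for q in points if q is not p)
        mean_knn.append(sum(ds[:k]) / k)        # mean distance to k nearest

    mu = sum(mean_knn) / len(mean_knn)
    sigma = math.sqrt(sum((d - mu) ** 2 for d in mean_knn) / len(mean_knn))
    threshold = mu + std_mult * sigma
    return [p for p, d in zip(points, mean_knn) if d <= threshold]

cloud = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10)]  # one obvious outlier
print(statistical_outlier_removal(cloud))
```

The accelerated variants change only how the neighbor searches are scheduled (GPU offload vs. OpenMP threads), not the filtering criterion.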
Follow-me Application#
This is an object-tracking application for autonomous mobile robots based on the Intel®-patented ADBSCAN clustering algorithm.
Supported on the Pengo robot, which follows a moving object or person while keeping a safe distance; tracking is not impacted by surrounding clutter or temporary occlusion.
Provides ready-to-use launch and parameter configuration files for 2D Lidar, 3D Lidar, and Intel® RealSense™ depth camera inputs.
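ADBSCAN is an Intel®-patented algorithm whose internals are not public, so the sketch below shows only the plain DBSCAN clustering idea it builds on: grouping nearby scan points into object clusters and marking sparse points as noise. All names and parameters are illustrative, not taken from the Follow-me application:

```python
import math

def dbscan(points, eps, min_pts):
    """Label each point with a cluster id, or -1 for noise.
    Plain DBSCAN: a point with at least min_pts neighbors within
    eps seeds a cluster that grows through other such core points."""
    labels = [None] * len(points)          # None = unvisited
    def neighbors(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seed_nbrs = neighbors(i)
        if len(seed_nbrs) < min_pts:
            labels[i] = -1                 # noise (may become a border point later)
            continue
        labels[i] = cluster
        stack = list(seed_nbrs)
        while stack:
            j = stack.pop()
            if labels[j] == -1:            # previously marked noise: border point
                labels[j] = cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = neighbors(j)
            if len(j_nbrs) >= min_pts:     # j is a core point: keep expanding
                stack.extend(j_nbrs)
        cluster += 1
    return labels

# Two tight groups of scan points plus one stray reading:
scan = [(0, 0), (0.5, 0), (1, 0), (10, 0), (10.5, 0), (11, 0), (50, 50)]
print(dbscan(scan, eps=1.0, min_pts=3))
```

Clustering scan points this way is what lets the follower keep a lock on the target object despite surrounding clutter.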
ITS Global Path Planner#
In this release, we added support for Ackermann steering, which is used by car-like robots with a limited turning radius. This version of the planner is based on the concept of Dubins paths. The default version of the ITS planner is based on differential drive.
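As background on Dubins paths: a Dubins path is the shortest curve between two poses subject to a minimum turning radius, composed of circular arcs and straight segments. The helper below computes the length of one path type, LSL (left arc, straight, left arc), using the standard Dubins formulas; it is an independent illustration, not code from the ITS planner:

```python
import math

def mod2pi(theta):
    return theta % (2.0 * math.pi)

def dubins_lsl_length(start, goal, rho):
    """Length of the LSL (left-straight-left) Dubins path between two
    (x, y, heading) poses for minimum turning radius rho.
    Returns None when no LSL path exists for this configuration."""
    dx, dy = goal[0] - start[0], goal[1] - start[1]
    d = math.hypot(dx, dy) / rho           # normalized distance
    theta = math.atan2(dy, dx)
    alpha = mod2pi(start[2] - theta)       # headings relative to the baseline
    beta = mod2pi(goal[2] - theta)
    p_sq = 2 + d * d - 2 * math.cos(alpha - beta) \
           + 2 * d * (math.sin(alpha) - math.sin(beta))
    if p_sq < 0:
        return None
    tmp = math.atan2(math.cos(beta) - math.cos(alpha),
                     d + math.sin(alpha) - math.sin(beta))
    t = mod2pi(-alpha + tmp)               # first left arc (radians)
    p = math.sqrt(p_sq)                    # straight segment (normalized)
    q = mod2pi(beta - tmp)                 # second left arc (radians)
    return (t + p + q) * rho

# Quarter-turn, 2 m straight, quarter-turn: length pi + 2 for rho = 1.
print(round(dubins_lsl_length((0, 0, 0), (0, 4, math.pi), 1.0), 4))
```

A full Dubins planner evaluates all six path types (LSL, RSR, LSR, RSL, RLR, LRL) and picks the shortest; only one type is shown here.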
Enabled Lidar SLAM based on SLAM toolbox#
Enabled “Lidar-only” SLAM on the AAEON robot and Pengo robot using the SLAM Toolbox algorithm
Supported Lidars: RP Lidar A1, RP Lidar A3, and Sick nanoScan
Added a sample application, which uses one Lidar sensor, runs synchronous mapping, and uses that map to navigate in localization mode
Fleet Management#
SOTA update supported with Ubuntu* 22.04 on the robot
Improved error handling for robot onboarding
Updated OpenVINO™ to 2022.3 LTS#
Updated to the ROS* 2-based OpenVINO™ Toolkit
Updated the Open Model Zoo examples and models
Kudan Visual SLAM (Simultaneous Localization and Mapping)#
Updated core algorithm of the Kudan Visual SLAM system to Release 2.2.1.0
Works with the latest ORB Feature Extractor (Version 2023.1), which is optimized for Intel® GPUs
Updated launch file format (Python to XML)
Updated tutorials for Kudan Visual SLAM using a video stream from an Intel® RealSense™ camera, including new launch files and configuration data
New tutorials for Kudan Visual SLAM using a video stream from a stereo camera
New tutorial for Wandering App based on Kudan SLAM using AAEON robot kits
Extended license validity until end of 2023
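The launch-file format change (Python to XML) can be illustrated with a minimal ROS* 2 XML launch file. All package, node, topic, and parameter names below are placeholders, not the contents of the actual Kudan launch files:

```xml
<!-- Minimal ROS 2 XML launch sketch. Package, node, and parameter
     names are illustrative placeholders only. -->
<launch>
  <node pkg="kudan_slam_examples" exec="vslam_node" name="vslam">
    <param name="camera_topic" value="/camera/color/image_raw"/>
    <param name="config_file"
           value="$(find-pkg-share kudan_slam_examples)/config/realsense.yaml"/>
  </node>
</launch>
```

XML launch files trade the programmability of Python launch scripts for a declarative format that is easier to diff and validate.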
Resolved Issues#
Kudan Visual SLAM#
Fixed an issue where Kudan Visual SLAM with the ORB Feature Extractor on the GPU stopped mapping when using the robot_moving_15fps ROS 2 bag.
Known Issues and Limitations#
It currently takes up to 100 minutes to install the EI for AMR bundle. The time varies depending on the speed of your internet connection and your system’s capabilities.
The POTA implementation in the onboarding flow requires manual input for Product Name and Manufacturer for each type of robot added to the flow.
The Inertial Measurement Unit (IMU) cannot be started on Intel® RealSense™ cameras.
Due to the camera’s discontinuation, the Intel® RealSense™ Lidar L515 camera is no longer supported by EI for AMR.
In the Wandering App, the robot radius is hard-coded to 0.177 m. Depending on the actual robot’s diameter, this may affect the quality of pathfinding and mapping.
RTAB-Map is not ideally suited for indoor navigation; some obstacles may not be detected with high accuracy due to reflections and similar effects.
The oneAPI implementation of the convex hull algorithm might provide incorrect outputs. This will be fixed in a future version of the EI for AMR software.
The Server Solution does not cover SEO telemetry and metric scraping.
A very high frequency of MQTT message transfers between Thingsboard and Turtle Creek might cause performance issues due to packet loss.
Fleet Management SOTA/FOTA might throw a permission-denied error. Refer to the Developer Guide for a workaround.
Due to a compatibility issue with the TensorFlow* library, the object-detection container of Fleet Management does not start when running on an AAEON EHL robot target.
The barometer-collectd service does not start in the Fleet Management cluster.
Intel® Atom® CPU#
The server setup deploys collaborative visual SLAM on Cogniteam’s Pengo robot without checking the type of CPU it has. (The Intel® Atom® CPU in Cogniteam’s Pengo robot is not supported by default.)
The installed TensorFlow* version in EI for AMR contains Intel® Advanced Vector Extensions (Intel® AVX) instructions. These Intel® AVX instructions are not supported by Intel® Atom® CPUs like the CPU in Elkhart Lake platform. Any action, including the OpenVINO™ sample application, fails on a platform with an Intel® Atom® CPU. To be able to run TensorFlow* on an Intel® Atom® CPU, it must be re-compiled without the Intel® AVX instructions using the steps from: How to Build and Install the Latest TensorFlow* Without CUDA GPU.
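To check in advance whether a target CPU advertises the Intel® AVX instructions that the stock TensorFlow* build requires, you can inspect the CPU flags. A small Linux-only sketch (not part of EI for AMR):

```python
# Check whether the CPU advertises AVX support by parsing /proc/cpuinfo.
# Linux-only sketch; stock TensorFlow builds require AVX, which
# Intel Atom CPUs (e.g., Elkhart Lake) lack.

def has_avx(cpuinfo_text):
    """Return True if the first 'flags' line in cpuinfo_text lists avx."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return "avx" in line.split(":", 1)[1].split()
    return False

# Typical usage on a Linux target:
#   with open("/proc/cpuinfo") as f:
#       print("AVX supported:", has_avx(f.read()))
```

If this reports no AVX support, rebuild TensorFlow* without AVX as described in the linked guide before running any TensorFlow*-based samples.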
Gazebo* simulation does not work on the Intel® Atom® 3000 processor family like the Apollo Lake-based UP2 (UP Squared). Intel® recommends creating the Gazebo* simulation environment on more powerful systems that have, for example, 12th or 11th generation Intel® Core™ or Intel® Xeon® Scalable processors.
AAEON’s UP Xtreme i11 Robotic Kit#
It has been observed that, after 20 minutes of run time, an AAEON* robot gets too close to obstacles; because the Intel® RealSense™ camera does not provide depth information below 20 cm, the robot collides with obstacles.
The AAEON’s UP Xtreme i11 Robotic Kit loses its orientation and redraws the walls in the test area multiple times. Due to this improper mapping, the robot cannot correctly identify the position of the obstacles and might collide with them.
Collaborative Visual SLAM#
For long runs, where the edge server accumulates more than 80,000 key frames, the shutdown process takes more time to exit cleanly.
For visual odometry fusion with monocular input, after the visual tracking is lost, the system relies on odometry input (if enabled) to sustain tracking and is never able to switch back to visual tracking.
The visual-inertial fusion is not supported in localization mode.
Fusion of visual, inertial, and odometry data at the same time is not supported.
Map merge does not happen if robots are moving in opposite directions through a common area.
When both the server and tracker are launched, the robot loses tracking while rotating in place.
The collaborative visual SLAM tracker crashes when running in localization mode at 30 FPS.
The current multi-camera implementation works only when the trackers never lose tracking; the multi-camera feature on the server side is not yet supported.
Local Bundle Adjustment and global Bundle Adjustment are not considered in the 2D Lidar support.
Intel® ESDQ for EI for AMR#
The Intel® RealSense™ camera test fails if there is no Intel® RealSense™ camera attached to the target system, reporting this error message:
[ERROR]: No RealSense devices were found
If the internet connection is not stable or GitHub is blocked by a firewall, some Intel® ESDQ tests fail.
On an Intel® Atom® CPU, some of the tests in Intel® ESDQ do not work due to missing Intel® AVX support.