
Building a Lidar Map using Graphs and Open-Source Tools

by Anatoly Kabakov, May 11th, 2023

Too Long; Didn't Read

Lidar data can be used to localize a robot and simultaneously map its environment. For this, the lidar scans must be joined up accurately so that there are no object duplications, overlaps, and other artefacts. Two popular approaches to merging lidar data are: 1. filters (e.g. the Kalman Filter) that iteratively optimize the current pose of the robot based on several uncertainty distributions, and 2. graph-based methods that optimize all of the robot’s poses in a graph using measurements from various sensors. For graph optimization, i.e. to correct the accumulated error in the sensors’ measurements, the least-squares method can be used. To build a lidar map using the graph-based method, open a ROS bag containing lidar scans, odometry, and GNSS values in the hdl_graph_slam package. This tool builds the first rough version of the map, with inconsistencies. Next, load this map in interactive_slam. Run the Automatic Loop Closing function several times with the sets of parameters described in the article until you are happy with the map quality. To dive deeper into graph SLAM, explore the scientific articles and educational videos mentioned in the article.


If you are lucky enough to have lidar (light detection and ranging) equipment on your mobile robot, you can use lidar data to localize the robot (define its pose) and simultaneously map the environment around it.


In short, you can perform simultaneous localization and mapping (SLAM). To build a map, you need to join up all the lidar scans of the surroundings, making sure that:


  1. The map is globally precise; it has minimal deviations from an orthophoto of the same area.

Figure 1. A good-quality lidar map. This image is obtained with the help of the rviz_satellite visualization tool


2. The map is locally precise, i.e. objects in the map can be seen in detail.

Figure 2. Walls, trees, and other objects are clearly defined without duplications and other artefacts


3. Localization works (a robot can move from Point A to Point B using the map and lidar localization).


At Evocargo, a company providing cargo transportation services in supervised areas, I build maps for highly autonomous vehicles (AVs). In order to solve real-life AV tasks, I sometimes need to use modified or custom-built tools and perform operations that are not always needed in other robotics projects. To make this post useful for a wider audience of robotics enthusiasts, I’ll give you a universal algorithm for building a lidar map using free and publicly available ROS tools, so you can follow along and apply it in any project.


In this post, I will:


  • specify which data I use to build a map;

  • briefly outline the two main approaches to data fusion;

  • explain graph optimization;

  • show simple and universal steps of building a lidar map;

  • show how open-source tools – hdl_graph_slam and interactive_slam – can help.


Data used to build a lidar map

First, to build a lidar map, I obviously need lidar scans. Lidar scans are point clouds that should capture as many objects around the robot or vehicle as possible. The better the visibility and variety of the landscape, the more unique objects are seen in a cloud, so algorithms can match scans more precisely and I get better localization results.


For example, if I scan the route of an autonomous vehicle across a desert, there is not much in the scene to catch the eye, and lidar localization won’t work well. Or if lidar covers only two meters ahead and two meters to the sides, it probably won’t capture enough unique objects to match the scans precisely.


Secondly, to join the lidar scans accurately, I need to know the pose (position and orientation) from which each scan was taken. So, I also use:


  • Odometry from lidars or wheels

    I prefer odometry calculated by wheel encoders over lidar odometry because errors in wheel odometry are easier to correct. Moreover, lidar odometry depends too much on unique objects in scans.


  • GNSS

    Wheel and lidar odometry is fairly accurate for short distances, but it is prone to distortion at long distances. GNSS measurements are not that good for short distances and suffer from static errors, but they are globally accurate, so I use poses obtained from satellites to correct accumulated odometry errors.


All these data are written to ROS bags together with other metadata that may also come in handy. For example, it’s useful to match scans and odometry measurements by time.
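For example, with the ROS 1 Python API, pairing each scan with the odometry reading closest to it in time takes only a few lines. This is a minimal sketch under assumed names: drive.bag, /points_raw, and /odom are placeholders for whatever your robot actually records.

```python
import rosbag

bag = rosbag.Bag("drive.bag")  # placeholder file name

# Cache odometry messages with their timestamps (in seconds).
odom = [(t.to_sec(), msg) for _, msg, t in bag.read_messages(topics=["/odom"])]

def nearest_odom(stamp):
    """Return the odometry message recorded closest in time to `stamp`."""
    return min(odom, key=lambda pair: abs(pair[0] - stamp))[1]

# Pair every lidar scan with its nearest odometry reading.
pairs = [(scan, nearest_odom(t.to_sec()))
         for _, scan, t in bag.read_messages(topics=["/points_raw"])]

bag.close()
print(f"matched {len(pairs)} scan/odometry pairs")
```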


Approaches to data fusion

So, to build a map, I use lidar scans, wheel odometry, and GNSS. I need to merge (fuse) data from these three sources and find an optimal pose estimation.


There are two popular approaches to data fusion:

  • Filter-based.
  • Graph-based.


Filter-based methods are incremental by nature, as they iteratively optimize the current pose of the robot based on several uncertainty distributions. The Kalman Filter and Particle Filter are the two most-used methods.


Figure 3. The classic illustration of the Kalman Filter (image source: https://www.mathworks.com/videos/understanding-kalman-filters-part-3-optimal-state-estimator--1490710645421.html)



In this image, the pose of a robot at a certain moment is defined by the X coordinate and an uncertainty distribution. A pose estimate obtained from a satellite is defined by the Y coordinate and a distribution. I can put the two distributions together to get the optimal estimate.
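The arithmetic behind this fusion step is simple enough to show in full. Below is a worked one-dimensional example of my own (not tied to any particular library): multiplying two Gaussian estimates yields a combined estimate that is tighter than either input.

```python
def fuse(mu1, var1, mu2, var2):
    """Product of two 1-D Gaussians -- the Kalman update in its simplest form."""
    mu = (mu1 * var2 + mu2 * var1) / (var1 + var2)
    var = (var1 * var2) / (var1 + var2)
    return mu, var

# Odometry says x = 10.0 m (variance 4.0); GNSS says x = 12.0 m (variance 1.0).
mu, var = fuse(10.0, 4.0, 12.0, 1.0)
print(mu, var)  # 11.6 0.8 -- pulled toward the more certain sensor, tighter than both
```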


Graph-based methods allow for building a graph that represents the robot’s trajectory and optimizing all of the robot’s poses using measurements from various sensors. For this, a graph is built whose nodes are the robot’s poses at different points in time and edges are relations between the poses such as odometry measurements.


To get the best pose estimate, the graph is optimized by minimizing inconsistencies between the robot’s actual poses and measurements. The Gauss-Newton and Levenberg-Marquardt algorithms are often used for this, and the g2o and GTSAM frameworks are popular choices for graph optimization.
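To make this concrete, here is a toy pose graph built with GTSAM’s Python bindings: three 2D poses chained by odometry factors, a prior anchoring the first pose, and Levenberg-Marquardt refining all poses at once. The motions and noise values are invented for illustration.

```python
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.01, 0.01, 0.01]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))

# Anchor the first pose at the origin.
graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0, 0, 0), prior_noise))
# Odometry edges: "the robot moved ~2 m forward between consecutive nodes".
graph.add(gtsam.BetweenFactorPose2(0, 1, gtsam.Pose2(2, 0, 0), odom_noise))
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(2, 0, 0), odom_noise))

# Deliberately noisy initial guesses -- optimization corrects all of them.
initial = gtsam.Values()
initial.insert(0, gtsam.Pose2(0.5, 0.1, 0.1))
initial.insert(1, gtsam.Pose2(2.3, -0.2, 0.0))
initial.insert(2, gtsam.Pose2(4.1, 0.1, -0.1))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result)  # poses snap back toward (0,0,0), (2,0,0), (4,0,0)
```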


Filter-based methods only find the optimal value for the last pose, while graph optimization allows adding more measurements and factors and updates the poses of all the graph nodes. That’s why I trust the results of graph-based methods more and prefer to use them in my work.


Graph optimization

To explain graph optimization, the following example is often used (you can see a similar case explained in the Graph-Based SLAM lecture by Prof. Dr. Wolfram Burgard as part of the “Introduction to Mobile Robotics“ course). A robot moves from Point Xi. According to the encoders, the robot arrives at Point Xj; however, the robot is actually observed at a different point.


Figure 4. The error in the encoder’s measurements



The accumulated error in the sensors’ measurements leads to a difference between the calculated pose and the observed pose. To correct this error, the least-squares method can be used, which finds the best configuration of poses by minimizing the sum of the squared errors. In the standard graph SLAM formulation (see the tutorial cited below):


$$x^* = \arg\min_{x} \sum_{\langle i,j \rangle} e_{ij}(x_i, x_j)^\top \Omega_{ij}\, e_{ij}(x_i, x_j)$$


where e_ij(x_i, x_j) is the error between the measurement on the edge ⟨i, j⟩ and the relative pose predicted from the estimates x_i and x_j, and Ω_ij is the information matrix expressing the confidence in that measurement.


This optimization task can be solved using the popular Gauss-Newton algorithm.


In general, the steps would be as follows (a numeric sketch comes after the list):

  1. Linearize the error function around the current approximation (for example, using Taylor expansion).
  2. Compute its derivative and set it to zero.
  3. Solve the linear system and get a new approximation.
  4. Repeat the steps until convergence.
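Here is a minimal numeric sketch of my own showing that loop on a toy one-parameter problem, fitting y = exp(b·x) to synthetic data. The same linearize / solve / update cycle underlies pose graph optimization, just with many more variables.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])
y = np.exp(0.5 * x)              # synthetic data generated with b = 0.5

b = 1.0                          # deliberately poor initial guess
for _ in range(20):
    r = np.exp(b * x) - y             # 1. residuals at the current estimate
    J = (x * np.exp(b * x))[:, None]  #    Jacobian of residuals w.r.t. b
    # 2-3. Solve the linearized normal equations J^T J * delta = -J^T r.
    delta = np.linalg.solve(J.T @ J, -J.T @ r).item()
    b += delta                        # new approximation
    if abs(delta) < 1e-9:             # 4. stop at convergence
        break

print(b)  # ~0.5
```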


To dive deeper into the method and the calculations, I recommend you read “A Tutorial on Graph-Based SLAM” by G. Grisetti, R. Kummerle, C. Stachniss, and W. Burgard.


Building a lidar map: step-by-step

So, I have lidar scans, odometry measurements, and GNSS written to bags, and I have decided to use a graph-based method of building and optimizing the map. What are my next steps? I follow the proven framework described in the scientific article “Interactive 3D Graph SLAM for Map Correction”.


Figure 5. The interactive framework for map correction proposed in the article



This framework is built on top of the ROS ecosystem, and it allows me to create a map and map correction constraints and refine them in a GUI until I am satisfied with the result. The authors not only describe the given framework but also share the source code in public repositories, so every step can be performed using free open-source tools.


After the map is created according to this framework, final testing is performed depending on the project and safety requirements. For example, at Evocargo, localization checks include tests in a simulated environment and on an autonomous vehicle driving around the testing ground.


Next, let’s take a closer look at what the mentioned open-source tools can do.


Building a map with hdl_graph_slam

The hdl_graph_slam package builds a graph based on the data in your ROS bag file. In simple terms, it does the following (step 1 is illustrated with a short sketch after the list):


  1. filters the point clouds from your bag (downsampling, outlier removal, conversion from the sensor frame to the base link frame if needed);

  2. calculates odometry based on the current and the next point cloud using scan-matching methods;

  3. detects floor planes in filtered point clouds by applying RANSAC;

  4. builds the graph using GNSS, floor plane coefficients, and odometry as the inputs;

  5. optimizes the graph to compensate for accumulated errors of the scan matching.
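For a feel of what step 1 does, here is a sketch using Open3D instead of hdl_graph_slam’s internal PCL-based filters; the file name and parameter values are placeholders.

```python
import open3d as o3d

cloud = o3d.io.read_point_cloud("scan.pcd")  # placeholder file name

# Downsample: keep one point per 10 cm voxel to reduce noise and data volume.
cloud = cloud.voxel_down_sample(voxel_size=0.1)

# Drop points whose mean distance to their 20 nearest neighbors deviates by
# more than 2 standard deviations from the average -- likely stray returns.
cloud, kept = cloud.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

print(cloud)
```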



Figure 6. The logic and functions of the hdl_graph_slam package



In my work, I have good-quality odometry from the vehicle’s wheels, so I can skip the odometry calculations. And I don’t usually use floor detection, because the vehicle drives at real sites where the roads are not flat, so forcing the floor plane to be flat would only make things worse for me. That makes the logic even easier.



Figure 7. The lighter flow defined by the project’s specifics and a better-quality input



Next, I save the produced map:

  • to a .pcd file to view and evaluate the map’s quality in rviz (or any other point cloud viewer; see the sketch after this list).


Figure 8. Inconsistencies can be seen in this map, so it needs improvement


  • as a map dump containing a set of scans and poses so I can do more work on the map using the interactive_slam package.
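As a quick alternative to rviz, the exported .pcd can also be inspected with Open3D; map.pcd below is a placeholder for your exported file.

```python
import open3d as o3d

cloud = o3d.io.read_point_cloud("map.pcd")  # the map exported from hdl_graph_slam
o3d.visualization.draw_geometries([cloud])  # opens an interactive viewer window
```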


Refining the map with interactive_slam

interactive_slam lets me correct mapping failures such as corrupted odometry, wrong loop detection, distorted map, etc. For this, it offers several automatic and semi-automatic functions (loop closing, pose constraint refinement, and plane-based constraints). The pose graph is optimized by the Levenberg-Marquardt optimizer in g2o, a hyper-graph optimization library, and the updated map is displayed.


Out of all the suggested functions, I use Automatic Loop Closing most often, and usually it’s enough to achieve the desired quality. This function matches a scan with neighboring scans within the pre-set distance threshold. If they match, an edge is added between them. Automatic Loop Closing is described in more detail here.


The GUI is super-intuitive. First, I open the map dump saved from hdl_graph_slam.


Figure 9. A map version with inconsistencies opened in interactive_slam for correction



Then I open the Automatic Loop Close settings from the Graph menu, set a few parameters, and click Start. As easy as that.


Figure 10. The Automatic Loop Close settings window


Here I work with three parameters (a sketch of how they interact follows the list):

  • Fitness score thresh is the maximal acceptable scan matching error. The fitness score measures how far the points of one scan end up from their neighboring points in the other scan, so lower scores mean better matches. The more scan pairs meet the specified threshold, the more edges will be added.
    I set 0.400, but if too few scan matches are found, I relax the requirement to 0.500.


  • Distance thresh is the maximal distance (in meters) between neighboring nodes. Scan matching will be performed for node pairs that are closer to each other than the specified value.
    I normally run Automatic Loop Closing several times. First, for sparse closing, I set Distance thresh to 25.000; then, for dense closing, I lower it to 15.000 or even 10.000.


  • Accum distance thresh is the minimal distance (in meters) the robot must travel along its trajectory between two nodes for them to be considered a loop candidate pair; it keeps consecutive scans from being matched against each other.
    For sparse closing, I set 20.000, and for dense closing, I lower it to 0.500.
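To show how these thresholds interact, here is a hypothetical sketch of candidate selection (my illustration, not interactive_slam’s actual code). poses is a list of (x, y) node positions in trajectory order.

```python
import math

def loop_candidates(poses, distance_thresh=25.0, accum_distance_thresh=20.0):
    """Yield node index pairs worth testing with scan matching."""
    # Distance traveled along the trajectory up to each node.
    accum = [0.0]
    for (x0, y0), (x1, y1) in zip(poses, poses[1:]):
        accum.append(accum[-1] + math.hypot(x1 - x0, y1 - y0))

    for i in range(len(poses)):
        for j in range(i + 1, len(poses)):
            straight = math.hypot(poses[j][0] - poses[i][0],
                                  poses[j][1] - poses[i][1])
            # Nodes must be spatially close (Distance thresh) yet far apart
            # along the trajectory (Accum distance thresh) to form a loop.
            if straight < distance_thresh and accum[j] - accum[i] > accum_distance_thresh:
                yield i, j

# Each yielded pair is then scan-matched; an edge is added to the graph only
# if the match error stays under the Fitness score thresh (e.g. 0.400).
```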


After running Automatic Loop Closing a few times with the mentioned parameter values, I get a good-quality map and I save it to a .pcd file.


Figure 11. The lidar map is ready


Figure 12. Zoomed-in map. The walls, trees, and other objects are seen clearly in detail

Conclusion

Building a lidar map can be easy and efficient. In this article, I’ve given an overview of a proven framework for building a map and improving its quality using graphs. All the operations are done in open-source tools, hdl_graph_slam and interactive_slam, so you can try things out right away.


To keep this article from getting too long and hard to read, I’ve outlined only the main ideas of the described approaches and tools and focused on the functions that I find most useful based on my experience. If you want to dive deeper, please follow the links I’ve added throughout the article – there is so much more to learn there.


I hope this was useful! Don’t hesitate to leave your comments, ask questions, and share how you build lidar maps for your projects.