Lidar SLAM: The Ultimate Guide to Simultaneous Localization and Mapping


04 May, 2023


In this article, we will dive deep into the world of simultaneous localization and mapping using Lidar technology. Lidar SLAM has been gaining popularity in recent years, thanks to its versatility and applications across various domains, including autonomous vehicles, mobile robotics, and indoor mapping.

Introduction

In this comprehensive guide, we will explore the fundamentals of Lidar SLAM, its working principles, the role of Lidar sensors, popular algorithms, real-world applications, and the challenges and limitations associated with this technology. We will also discuss future trends and the impact of advancements in Lidar sensor technology and machine learning on Lidar SLAM systems. By the end of this guide, you will have gained a thorough understanding of Lidar SLAM and its practical applications, as well as the technical knowledge required to implement and optimize Lidar SLAM systems. So, let's begin our journey into the fascinating world of Lidar SLAM.

Suggested reading: Autonomous Vehicle Technology Report

What is Lidar SLAM?

Lidar SLAM, or Light Detection and Ranging Simultaneous Localization and Mapping, is a process that enables robots or autonomous systems to build a map of an unknown environment while simultaneously determining their position within that map. This technology relies on Lidar sensors, which emit laser pulses and measure the time it takes for the pulses to bounce back after hitting objects in their path. By analyzing the returned laser signals, Lidar SLAM systems can create detailed 3D point clouds representing the environment and use this information to estimate the robot's position and orientation within the map.

The acronym SLAM stands for Simultaneous Localization and Mapping, which highlights the two primary tasks of the process. Localization refers to determining the robot's position and orientation within the map, while mapping involves creating a spatial representation of the environment. Lidar SLAM systems have become increasingly popular due to their ability to provide highly accurate maps and real-time localization in a variety of applications, such as autonomous vehicles, mobile robotics, and indoor mapping for navigation and asset tracking.

Regardless of the specific sensors and software involved, the majority of SLAM systems consist of at least two key components:

  1. Range measurement:

Every SLAM system has some sort of instrument that enables a robot or other vehicle to see and measure its surroundings. Sonar, Lidar laser scanners, cameras, and other image sensors can all be used for this. In essence, a SLAM system can be built around any apparatus that is capable of measuring physical characteristics such as location, distance, or velocity.

  2. Data Extraction:

A SLAM system must also have some form of software that interprets the measurements produced by the range measurement device. There are a variety of alternatives on this front, from filtering algorithms to various kinds of intricate scan matching. However, the fundamental function of all of these "back-end" solutions is to take the sensory data gathered by the range measurement device and use it to recognise landmarks in an unknown environment. A properly working SLAM solution involves constant interaction between the robot or vehicle itself, the range measurement device, the data extraction software, and any other supporting hardware, software, or processing technologies.

How Lidar SLAM Works

To understand how Lidar SLAM works, it is crucial to comprehend the principles underlying simultaneous localization and mapping itself. In this section, we will delve into the working mechanisms of Lidar SLAM systems, the role of Lidar sensors, and the process of generating maps and estimating robot poses within those maps.

Principles of Simultaneous Localization and Mapping

The core idea behind SLAM is to enable a robot or autonomous system to explore an unknown environment and create a map of that environment while also determining its position within the generated map. This is achieved by fusing data from various sensors, such as Lidar, cameras, and inertial measurement units (IMUs), to estimate the robot's trajectory and the positions of landmarks within the environment.

SLAM algorithms typically consist of two main components: a prediction step and a correction step. The prediction step, also known as motion or odometry update, estimates the robot's new position based on its previous position and movement information provided by the sensors. The correction step, also known as observation or sensor update, refines the predicted position using measurements from the environment, such as the distances to landmarks detected by the Lidar sensor.

By iteratively performing prediction and correction steps, SLAM algorithms can continuously update the robot's position and the map of the environment. The resulting map can be represented in various forms, such as occupancy grids, point clouds, or graphs of spatial relationships between landmarks.
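
To make the predict-correct cycle concrete, here is a minimal Python sketch, assuming a simple 2D robot, noise-free odometry, and a single known landmark. The function names and the proportional correction are illustrative simplifications of what a real filter (an EKF, particle filter, or graph optimizer) would do.

```python
import numpy as np

# Minimal sketch of the SLAM predict-correct cycle for a 2D robot.
# The function names, noise-free odometry, and proportional correction
# are illustrative simplifications, not any particular library's API.

def predict(pose, odometry):
    """Prediction (motion update): apply an odometry increment
    (dx, dy, dtheta), expressed in the robot frame, to the pose."""
    x, y, theta = pose
    dx, dy, dtheta = odometry
    x += dx * np.cos(theta) - dy * np.sin(theta)
    y += dx * np.sin(theta) + dy * np.cos(theta)
    return np.array([x, y, theta + dtheta])

def correct(pose, measured_range, landmark, gain=0.5):
    """Correction (sensor update): nudge the pose so that the predicted
    range to a known landmark better matches the Lidar measurement."""
    dx, dy = landmark[0] - pose[0], landmark[1] - pose[1]
    predicted_range = np.hypot(dx, dy)
    error = measured_range - predicted_range
    direction = np.array([dx, dy]) / predicted_range
    pose[:2] -= gain * error * direction  # move along the landmark bearing
    return pose

pose = np.array([0.0, 0.0, 0.0])
pose = predict(pose, odometry=(1.0, 0.0, 0.1))                 # prediction step
pose = correct(pose, measured_range=3.8, landmark=(5.0, 0.0))  # correction step
print(pose)
```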

Role of Lidar Sensors in SLAM

Lidar sensors play a crucial role in SLAM systems by providing accurate and high-resolution data about the environment. These sensors emit laser pulses that bounce off objects and return to the sensor, enabling it to measure the distance to the objects based on the time it takes for the laser pulses to travel back. Lidar sensors can generate dense point clouds, which are sets of points in 3D space representing the surfaces of objects in the environment.
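
The underlying range calculation is simple time-of-flight geometry: the pulse travels to the surface and back, so the distance is half the round-trip time multiplied by the speed of light. A tiny worked example, assuming an ideal single return:

```python
# Range from time of flight: the pulse travels to the surface and back,
# so distance = (speed of light * round-trip time) / 2.
C = 299_792_458.0  # speed of light in m/s

def range_from_tof(round_trip_seconds):
    return C * round_trip_seconds / 2.0

# A return received ~667 nanoseconds after emission corresponds to a
# surface roughly 100 meters away.
print(range_from_tof(667e-9))  # ~100.0 m
```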

The point cloud data generated by Lidar sensors can be used to identify and track landmarks, such as walls, corners, or other distinct features in the environment. These landmarks are essential for SLAM algorithms, as they provide information that can be used to refine the robot's estimated position and orientation. Moreover, Lidar data can also be used to identify and avoid obstacles, making it particularly valuable for autonomous navigation and safe operation.

Map Generation and Pose Estimation

The process of generating a map using Lidar data and estimating the robot's pose within the map is central to Lidar SLAM. The map can be represented in several forms, such as an occupancy grid, where each cell in the grid indicates the probability of that cell being occupied by an object, or as a point cloud, where the positions of objects in the environment are represented as sets of points in 3D space.
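
As a concrete illustration of the occupancy-grid idea, here is a minimal sketch using the common log-odds update; the grid size and update increments are illustrative choices, not values from any particular SLAM package.

```python
import numpy as np

# Minimal occupancy grid with log-odds updates (illustrative sketch).
grid = np.zeros((100, 100))   # log-odds; 0 corresponds to p = 0.5 (unknown)
L_OCC, L_FREE = 0.85, -0.4    # increments for "hit" and "passed-through" cells

def update_cell(grid, i, j, hit):
    """Raise a cell's log-odds when a beam ends there (a hit), lower it
    when a beam passed through the cell without a return."""
    grid[i, j] += L_OCC if hit else L_FREE

def probability(grid):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 / (1.0 + np.exp(-grid))

update_cell(grid, 50, 50, hit=True)    # beam endpoint: likely occupied
update_cell(grid, 50, 49, hit=False)   # traversed cell: likely free
print(probability(grid)[50, 48:51])    # ~[0.5, 0.4, 0.7]
```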

To estimate the robot's pose, SLAM algorithms use the Lidar data to identify and track landmarks in the environment. By comparing the observed landmarks to the landmarks stored in the map, the algorithm can determine the robot's position and orientation relative to the map. This process typically involves finding the transformation that best aligns the observed Lidar data with the stored map, which can be achieved using techniques such as iterative closest point (ICP) or other optimization algorithms.
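
To make the alignment step concrete, the following is a compact 2D ICP sketch: each iteration matches points to their nearest neighbors and solves for the best rigid transform via SVD (the Kabsch method). It assumes a reasonable initial alignment and is a teaching sketch, not a production implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """One ICP iteration: match each source point to its nearest target
    point, then solve for the rigid transform (R, t) that best aligns
    the matched pairs via SVD (the Kabsch method)."""
    tree = cKDTree(target)
    _, idx = tree.query(source)                  # nearest-neighbor matches
    matched = target[idx]
    src_c, tgt_c = source.mean(axis=0), matched.mean(axis=0)
    H = (source - src_c).T @ (matched - tgt_c)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return source @ R.T + t

# Illustrative use: align a translated copy of a scan back onto itself.
target = np.random.rand(200, 2)
source = target + np.array([0.3, -0.1])
for _ in range(20):                              # iterate until converged
    source = icp_step(source, target)
print(np.abs(source - target).max())             # should be near zero
```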

As the robot explores the environment and collects new Lidar data, the map is continuously updated, and the robot's pose is refined. This iterative process enables Lidar SLAM systems to create accurate maps and provide real-time localization in various applications, from autonomous vehicles to mobile robotics and indoor mapping.

Lidar Sensors for SLAM

Lidar sensors are a critical component of SLAM systems, providing the necessary environmental data for map generation and localization. There are several types of Lidar sensors, each with its unique features and advantages. Choosing the right Lidar sensor for a particular SLAM application depends on factors such as the desired range, resolution, field of view, and cost.

Mechanical Scanning Lidar:

Mechanical scanning Lidar sensors are among the most common types of Lidar sensors used in SLAM applications. They consist of a laser emitter and a detector mounted on a rotating platform, which allows the sensor to scan the environment by directing the laser pulses across a wide field of view. Mechanical scanning Lidar sensors can provide high-resolution point clouds with a range of up to 200 meters, depending on the specific sensor model and configuration.

The Velodyne HDL-64E is an example of a mechanical scanning Lidar sensor commonly used in autonomous vehicles and robotics applications. It features 64 laser beams, a 360-degree horizontal field of view, and a 26.9-degree vertical field of view, with a maximum range of 120 meters and an angular resolution of 0.08 degrees. The high resolution and wide field of view make it suitable for various SLAM applications that require detailed environmental data.

Solid-State Lidar:

Solid-state Lidar sensors represent a newer generation of Lidar technology that does not rely on moving parts to scan the environment. Instead, solid-state Lidar sensors use electronic beam steering methods, such as optical phased arrays or microelectromechanical systems (MEMS) mirrors, to direct the laser pulses across the field of view. Solid-state Lidar sensors offer several advantages over mechanical scanning Lidar sensors, including increased durability, lower power consumption, and reduced size and weight.

The InnovizOne by Innoviz Technologies is an example of a solid-state Lidar sensor designed for automotive and robotics applications. It offers a range of up to 250 meters, a 73-degree horizontal field of view, and a 20-degree vertical field of view. The sensor provides high-resolution point clouds with an angular resolution of 0.1 degrees, making it suitable for SLAM applications that require precise environmental data.

Flash Lidar:

Flash Lidar sensors are another type of solid-state Lidar technology that illuminates the entire scene with a single, wide-angle laser pulse, capturing the reflected light with a 2D sensor array. Flash Lidar sensors can provide rapid, single-shot measurements of the environment, making them well-suited for applications that require high-speed data acquisition, such as aerial mapping or obstacle detection in high-speed autonomous vehicles.

The ASC TigerCub by Advanced Scientific Concepts is an example of a Flash Lidar sensor designed for robotics and aerial mapping applications. It offers a range of up to 150 meters, a 90-degree horizontal field of view, and a 20-degree vertical field of view. The sensor provides point cloud data with a spatial resolution of 1 centimeter, allowing for accurate mapping and localization in SLAM systems.

Selecting the Right Lidar Sensor for SLAM

Choosing the appropriate Lidar sensor for a SLAM application is essential for obtaining the best performance and meeting specific requirements. Below, we examine the factors that influence the selection of a Lidar sensor for SLAM and provide guidance on how to make informed decisions when choosing a sensor for a particular application.

Range and Resolution

The range and resolution of a Lidar sensor directly impact the accuracy and detail of the generated maps and the localization performance within those maps. A longer range allows the SLAM system to detect and map objects at greater distances, which can be crucial for applications such as autonomous vehicles operating at high speeds. High-resolution Lidar sensors can provide more detailed point clouds, enabling the identification of smaller or more complex landmarks in the environment.

For example, the Velodyne HDL-64E offers a maximum range of 120 meters and an angular resolution of 0.08 degrees, making it suitable for applications that require detailed environmental data over a large area. In contrast, a lower-cost Lidar sensor with a shorter range and lower resolution may suffice for indoor robotics applications where the environment is smaller and less complex.

Field of View

The field of view (FoV) of a Lidar sensor determines the spatial extent of the environment that can be captured by the sensor in a single scan. A wide FoV enables the SLAM system to map larger areas of the environment and detect objects at various angles relative to the sensor. This is particularly important for applications that require a comprehensive understanding of the surrounding environment, such as collision avoidance in autonomous vehicles or mapping large indoor spaces.

For instance, the InnovizOne solid-state Lidar sensor provides a 73-degree horizontal FoV and a 20-degree vertical FoV, allowing it to capture a broad view of the environment. This wide FoV is well-suited for applications that require a thorough understanding of the surroundings, such as urban driving or complex indoor environments.

Update Rate and Latency

The update rate and latency of a Lidar sensor can significantly impact the performance of a SLAM system, particularly in dynamic environments where objects and landmarks may change rapidly. A higher update rate enables the SLAM system to capture more frequent snapshots of the environment, providing a more accurate representation of the changing surroundings. Low-latency Lidar sensors can deliver near real-time environmental data, which is crucial for applications that require rapid decision-making, such as obstacle detection and avoidance in fast-moving autonomous vehicles.
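
To get a rough sense of the data rates involved, consider some back-of-the-envelope arithmetic for a spinning sensor. The figures below are illustrative; real sensors interleave firings and rates differently, so actual throughput varies by model.

```python
# Back-of-the-envelope data rate for a spinning Lidar (illustrative
# figures; real sensors interleave firings and rates differently).
beams = 64                  # vertical channels
rotation_hz = 10            # full 360-degree scans per second
horizontal_res_deg = 0.08   # angular step between firings

points_per_scan = beams * (360 / horizontal_res_deg)
points_per_second = points_per_scan * rotation_hz
print(f"{points_per_second:,.0f} points/s")   # 2,880,000 points/s
```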

The ASC TigerCub Flash Lidar sensor is an example of a high-speed Lidar sensor designed for applications that demand quick data acquisition. With a single-shot measurement capability, the TigerCub can capture the entire scene in one instant, providing rapid environmental updates suitable for high-speed autonomous navigation.

Cost and Complexity

The cost and complexity of a Lidar sensor can also be critical factors when selecting a sensor for a SLAM application. High-end Lidar sensors with long ranges, high resolutions, and wide FoVs tend to be more expensive and may require more sophisticated integration and calibration procedures. In contrast, lower-cost sensors may have reduced performance characteristics but can be more accessible and easier to integrate into a SLAM system.

When selecting a Lidar sensor for a specific SLAM application, it is essential to balance the performance requirements against the cost and complexity of the sensor. By carefully considering factors such as range, resolution, FoV, update rate, and latency, alongside the cost and complexity, it is possible to select a Lidar sensor that meets the unique needs of the specific SLAM application.

Depending on the use case, each of these hardware and software components can vary, but they all need to function flawlessly in order for a SLAM system to accurately explore its surroundings.

Popular Lidar SLAM Algorithms

Lidar SLAM algorithms are the cornerstone of generating accurate maps and localizing a robot or autonomous vehicle within those maps. The development and implementation of efficient algorithms have led to significant advancements in the field of SLAM. 

Several Lidar SLAM algorithms have gained popularity due to their performance, robustness, and scalability. In this section, we will delve into some of the most popular algorithms used in the industry and research, highlighting their key features and applications.

GMapping

GMapping is a well-known implementation of the Rao-Blackwellized Particle Filter (RBPF) for Lidar-based SLAM. The algorithm leverages a grid map representation and uses scan matching to align Lidar scans with the map. GMapping is capable of providing accurate mapping and localization results in both small and large-scale environments.

GMapping employs an adaptive resampling strategy, which adjusts the number of particles used in the filter based on the uncertainty in the robot's pose estimate. This strategy allows the algorithm to maintain a balance between computational efficiency and robustness. GMapping has been widely used in robotics applications, including autonomous navigation, indoor mapping, and mobile robot localization.
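
GMapping's adaptive resampling is typically driven by the effective sample size of the particle weights: resampling is triggered only when a few particles dominate. A minimal sketch of that test follows; the threshold fraction is an illustrative choice.

```python
import numpy as np

def effective_sample_size(weights):
    """N_eff = 1 / sum(w_i^2) for normalized weights: high when the
    weights are uniform, low when a few particles dominate."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

def maybe_resample(particles, weights, threshold=0.5):
    """Resample only when N_eff falls below a fraction of the particle
    count, so well-spread particle sets are left untouched."""
    n = len(particles)
    if effective_sample_size(weights) < threshold * n:
        w = np.asarray(weights, dtype=float)
        idx = np.random.choice(n, size=n, p=w / w.sum())
        particles = [particles[i] for i in idx]
        weights = [1.0 / n] * n       # reset to uniform after resampling
    return particles, weights

# One dominant particle -> low N_eff -> resampling is triggered.
particles, weights = maybe_resample(list(range(10)), [0.91] + [0.01] * 9)
```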

Hector SLAM

Hector SLAM is a Lidar-based SLAM algorithm that does not rely on odometry data, making it suitable for platforms without wheel encoders or other motion sensors. The algorithm uses a grid map representation and a multi-resolution approach to process Lidar data at different resolutions, enabling it to handle large-scale environments efficiently.

The core of Hector SLAM is a fast scan matching technique, which aligns consecutive Lidar scans with the map to estimate the robot's motion. Notably, Hector SLAM does not perform explicit loop closure; instead, it relies on the accuracy of its high-rate scan matching to keep accumulated drift low, which proves sufficient in many real-world scenarios. Hector SLAM has been successfully applied to various platforms, including aerial vehicles, ground robots, and even handheld Lidar scanners.

Cartographer

Cartographer is a versatile and scalable Lidar SLAM algorithm developed by Google. It supports both 2D and 3D mapping and is designed to work with various types of sensors, including Lidar, IMU, and odometry data. Cartographer employs a combination of local and global optimization techniques to create consistent and accurate maps of the environment.

In the local optimization phase, Cartographer uses real-time scan matching to estimate the robot's pose and update the submap currently being built. In the global optimization phase, the algorithm employs a pose graph optimization technique to correct for drift and maintain a globally consistent map. Cartographer has been used in a wide range of applications, from indoor mapping and autonomous navigation to large-scale outdoor mapping projects.
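
The global step can be illustrated with a deliberately simplified one-dimensional pose graph: odometry edges accumulate drift, and a single loop-closure edge pulls the trajectory back into consistency. Cartographer solves a much richer nonlinear 2D/3D problem, but the least-squares structure is analogous.

```python
import numpy as np

# 1D pose graph: five poses along a corridor. Drifting odometry claims
# each step is 1.05 m, but a loop-closure edge says pose 4 is back at
# pose 0. Solving all constraints jointly redistributes the drift.
n = 5
A, b = [], []

def add_edge(i, j, measured):
    """Linear constraint: x_j - x_i should equal the measured offset."""
    row = np.zeros(n)
    row[j], row[i] = 1.0, -1.0
    A.append(row)
    b.append(measured)

for k in range(n - 1):
    add_edge(k, k + 1, 1.05)    # odometry edges (with drift)
add_edge(0, 4, 0.0)             # loop closure: we returned to the start

anchor = np.zeros(n)            # fix the first pose at the origin so the
anchor[0] = 1.0                 # least-squares problem is well-posed
A.append(anchor)
b.append(0.0)

x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
print(np.round(x, 3))           # drift is spread across the trajectory
```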

LOAM (Lidar Odometry and Mapping)

LOAM is a real-time Lidar SLAM algorithm specifically designed for 3D mapping using high-resolution Lidar sensors. The algorithm decomposes the problem into two parallel tasks: a high-frequency, lower-fidelity odometry task that estimates the sensor's motion, and a lower-frequency, high-fidelity mapping task that builds a detailed map of the environment.

LOAM utilizes a multi-threaded approach to process the Lidar data efficiently. The algorithm first extracts features from the Lidar scans, such as edges and planar surfaces, and then matches these features between consecutive scans to estimate the robot's motion. The motion estimate is then used to build a detailed 3D map of the environment. LOAM has been successfully applied to autonomous vehicles, drones, and robotic platforms, demonstrating its robustness and accuracy in a variety of environments.
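
Below is a minimal sketch of the curvature test that LOAM-style pipelines use to separate edge candidates from planar candidates along a scan line; the neighborhood size and the synthetic scan are illustrative.

```python
import numpy as np

def curvature(points, k=5):
    """LOAM-style smoothness score: compare each point with the sum of
    its k neighbors on either side along the scan line. Large values
    suggest edges or corners; small values suggest planar surfaces."""
    n = len(points)
    c = np.zeros(n)
    for i in range(k, n - k):
        neighbors = np.vstack([points[i - k:i], points[i + 1:i + k + 1]])
        diff = neighbors.sum(axis=0) - 2 * k * points[i]
        c[i] = np.linalg.norm(diff) / (2 * k * np.linalg.norm(points[i]))
    return c

# Illustrative scan line: a flat wall with one point protruding from it.
scan = np.array([[x, 5.0, 0.0] for x in np.linspace(-2, 2, 41)])
scan[20] = [0.0, 4.0, 0.0]                 # a corner jutting out of the wall
c = curvature(scan)
print("edge candidate at index:", int(np.argmax(c)))   # -> 20
```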

Implementing a Lidar SLAM Algorithm

Implementing a Lidar SLAM algorithm involves several crucial steps, from understanding the problem domain to selecting the right algorithm, configuring the parameters, and finally integrating the solution into the application. In this section, we will discuss these steps in detail, providing guidance on how to successfully implement a Lidar SLAM algorithm for your specific use case.

Understanding the Problem Domain

The first step in implementing a Lidar SLAM algorithm is understanding the problem domain. This includes recognizing the challenges and requirements of the application, such as the scale of the environment, the type of sensor data, the expected motion dynamics, and the desired accuracy and real-time performance. Identifying these factors will help you choose the most suitable SLAM algorithm and configure it appropriately for your needs.

Selecting the Right Algorithm

As discussed earlier, there are various Lidar SLAM algorithms available, each with its strengths and weaknesses. It is essential to choose an algorithm that matches the requirements of your application. For example, if real-time performance is crucial, a computationally efficient algorithm like Hector SLAM, or a lightweight ICP-based scan matcher, may be suitable. Conversely, if high accuracy and scalability are required, a graph-based SLAM algorithm like Cartographer could be the better choice.

Configuring Parameters

Once you have selected the most appropriate Lidar SLAM algorithm, the next step is to configure its parameters. These parameters control various aspects of the algorithm, such as the sensor model, motion model, and optimization settings. Some critical parameters to consider include:

  1. Sensor noise model: The Lidar sensor's noise characteristics, such as the range and angular measurement noise, can significantly affect the performance of the SLAM algorithm. It is essential to model this noise accurately to achieve reliable mapping and localization results.

  2. Motion model: The motion model describes the robot's kinematics and dynamics, which are used by the SLAM algorithm to predict the robot's pose over time. Selecting an appropriate motion model that matches the robot's characteristics is crucial for accurate pose estimation.

  3. Optimization settings: Many SLAM algorithms involve optimization processes, such as graph optimization or particle filtering. The settings for these processes, such as convergence criteria, number of iterations, and optimization algorithms, can significantly impact the algorithm's performance and accuracy.

Configuring these parameters may require experimentation and fine-tuning, as the optimal values may vary depending on the specific application and environment.
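
As a concrete (and entirely hypothetical) example of what such a parameter set might look like, consider the following sketch; the names and values are illustrative and do not correspond to any specific package's configuration schema.

```python
# Hypothetical SLAM parameter set. The names and values below are
# illustrative only, not drawn from any real package's configuration.
slam_config = {
    "sensor": {
        "range_noise_stddev_m": 0.03,     # Lidar range measurement noise
        "angle_noise_stddev_deg": 0.05,   # angular measurement noise
        "max_usable_range_m": 80.0,       # discard returns beyond this
    },
    "motion_model": {
        "type": "differential_drive",     # must match the robot's kinematics
        "odom_translation_noise": 0.02,   # per-meter translation uncertainty
        "odom_rotation_noise": 0.01,      # per-radian rotation uncertainty
    },
    "optimization": {
        "max_iterations": 50,             # cap on solver iterations
        "convergence_epsilon": 1e-6,      # stop when updates are this small
        "num_particles": 30,              # only relevant for filter-based SLAM
    },
}
```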

Integrating the Solution

After configuring the Lidar SLAM algorithm, the final step is to integrate it into your application. This involves interfacing the algorithm with the Lidar sensor, motion data sources, and other relevant systems, such as localization, navigation, and control.

When integrating the SLAM algorithm, consider factors such as data synchronization, data processing rates, and hardware requirements. Ensuring that the data from the Lidar sensor and other sources are properly synchronized and processed at the correct rates is crucial for achieving reliable and accurate SLAM results. Additionally, ensure that the hardware resources, such as processing power and memory, are sufficient for the chosen SLAM algorithm to perform optimally.
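
One common approach to the synchronization problem is nearest-timestamp matching between Lidar scans and higher-rate IMU or odometry samples. A minimal sketch follows, with the tolerance as an illustrative choice; real systems often interpolate between samples instead.

```python
import numpy as np

def nearest_sync(scan_times, imu_times, tolerance=0.01):
    """Pair each Lidar scan timestamp with the closest IMU/odometry
    sample, rejecting pairs further apart than the tolerance (seconds).
    Real systems often interpolate between samples instead."""
    imu_times = np.asarray(imu_times)
    pairs = []
    for t in scan_times:
        i = int(np.argmin(np.abs(imu_times - t)))
        if abs(imu_times[i] - t) <= tolerance:
            pairs.append((t, float(imu_times[i])))
    return pairs

scans = [0.000, 0.100, 0.200]       # 10 Hz Lidar timestamps
imu = np.arange(0.0, 0.25, 0.005)   # 200 Hz IMU timestamps
print(nearest_sync(scans, imu))
```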

Popular Applications of Lidar SLAM in Real Life:

Lidar SLAM (Simultaneous Localization and Mapping) technology has become increasingly popular in recent years, finding applications in a wide range of industries. Here are some popular applications of Lidar SLAM in real life:

Self-Driving Cars:

Lidar SLAM is an integral part of the development of self-driving cars. The Lidar sensors on these vehicles create a 3D map of the surrounding environment, which the car's onboard computer uses to navigate autonomously. Lidar SLAM supports several functions in self-driving cars, including obstacle detection and avoidance, localization, and mapping. The sensors detect and classify obstacles in the car's path, such as other vehicles, pedestrians, and road hazards, and this information is used to plan a safe and efficient route. Lidar SLAM also helps the onboard computer locate the car accurately within the environment, which is essential for planning an optimal driving path and avoiding collisions. Finally, the highly detailed 3D map created by Lidar SLAM can be used to improve the car's perception of the environment and optimize its driving behaviour.

Cleaning Robots: 

Lidar SLAM is used for several applications in cleaning robots, including obstacle detection and avoidance, localization, and mapping. The sensors detect and classify obstacles in the robot's path, such as furniture and other objects, and this information is used to plan an efficient cleaning path that avoids collisions. Lidar SLAM also helps the robot locate itself accurately within the environment, which is essential for planning an optimal cleaning route. Finally, the highly detailed 3D map of the environment created by Lidar SLAM can be used to optimize the robot's cleaning behaviour, ensuring that it covers all areas of the room without missing any spots.

SLAM Drones:

Drones equipped with Lidar sensors can create a 3D map of the surrounding environment, which can be used for a wide range of applications, including surveying, mapping, and inspection. Lidar SLAM serves several functions in autonomous drones, including obstacle detection and avoidance, localization, and mapping. The sensors detect and classify obstacles in the drone's path, such as buildings, trees, and other objects, and this information is used to plan a safe and efficient flight path. Lidar SLAM also helps the drone locate itself accurately within the environment, which is essential for planning an optimal flight path and avoiding collisions. Finally, the highly detailed 3D map of the environment created by Lidar SLAM can be used for applications such as mapping terrain, inspecting infrastructure, and monitoring wildlife populations.

Challenges and Limitations of Lidar SLAM

While Lidar SLAM has shown remarkable success in various applications, it is not without its challenges and limitations. In this section, we will delve into the main challenges and limitations that can arise when implementing and using Lidar SLAM in real-world scenarios.

Sensor Limitations

Lidar sensors, though powerful and accurate, have inherent limitations that can impact the performance of a SLAM algorithm. Some of these limitations include:

  1. Range limitations: Lidar sensors have a maximum range, beyond which they cannot provide accurate or reliable measurements. This range limitation can affect the ability of the SLAM algorithm to map and localize in large environments or when distant objects are of interest.

  2. Resolution and accuracy: The resolution and accuracy of a Lidar sensor can vary depending on the sensor type, the environment, and the operating conditions. Lower resolution and accuracy can lead to less precise maps and localization estimates, particularly in complex environments with small or intricate features.

  3. Sensitivity to environmental factors: Lidar sensors can be sensitive to various environmental factors, such as ambient light, dust, fog, and rain. These factors can degrade the quality of the sensor data and, in turn, affect the performance of the SLAM algorithm.

  4. Limited field of view: Most Lidar sensors have a limited field of view, which can lead to incomplete or partial mapping of the environment. This limitation can be especially problematic in situations where the robot needs to have a comprehensive understanding of its surroundings for safe and efficient navigation.

Computational Complexity

Lidar SLAM algorithms can be computationally intensive, particularly when dealing with large-scale environments and high-resolution sensor data. The computational complexity can pose challenges in terms of real-time performance and hardware requirements. In some cases, optimizing the SLAM algorithm for a specific application or hardware platform may be necessary to achieve the desired performance.
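
A standard way to tame this computational load is to downsample the point cloud with a voxel grid before expensive steps such as scan matching. A minimal numpy sketch follows, with the voxel size as an illustrative choice.

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.2):
    """Keep one representative point (the centroid) per voxel, cutting
    the point count before expensive steps such as scan matching."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    counts = np.bincount(inverse)
    out = np.zeros((counts.size, points.shape[1]))
    for dim in range(points.shape[1]):
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out

cloud = np.random.rand(100_000, 3) * 10    # dense synthetic cloud
sparse = voxel_downsample(cloud, voxel_size=0.5)
print(len(cloud), "->", len(sparse), "points")
```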

Dynamic Environments

Lidar SLAM algorithms are often designed for static environments, where the assumption is that the environment does not change significantly over time. However, in real-world scenarios, dynamic environments with moving objects, such as pedestrians, vehicles, or other robots, are common. Handling dynamic environments in Lidar SLAM can be challenging, as the algorithm must be able to distinguish between static and dynamic objects and update the map accordingly.
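
One common idea is illustrated below: after two consecutive scans have been aligned into a common frame, points in the new scan with no nearby counterpart in the previous scan are flagged as potentially dynamic. The distance threshold and the toy scene are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def flag_dynamic(prev_scan, curr_scan, threshold=0.3):
    """Assuming both scans are already aligned in a common frame, flag
    points in the current scan that have no neighbor in the previous
    scan within the threshold; these are dynamic-object candidates
    (or newly observed areas, which a real system must handle too)."""
    dist, _ = cKDTree(prev_scan).query(curr_scan)
    return dist > threshold

# Toy scene: a static wall plus one person who moved one meter.
wall = np.column_stack([np.linspace(0, 10, 200), np.full(200, 5.0)])
prev_scan = np.vstack([wall, [[3.0, 2.0]]])
curr_scan = np.vstack([wall, [[4.0, 2.0]]])
mask = flag_dynamic(prev_scan, curr_scan)
print(curr_scan[mask])    # -> [[4.0, 2.0]], the moved person
```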

Loop Closure and Global Consistency

One of the main challenges in SLAM is achieving global consistency, particularly when dealing with large-scale environments or long-duration missions. The process of loop closure, where the algorithm recognizes that the robot has returned to a previously visited location, can be challenging in Lidar SLAM due to the high dimensionality of the sensor data and the potential for perceptual aliasing (i.e., different places appearing similar).
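
A toy sketch of place recognition for loop closure follows: each scan is summarized as a histogram of ranges and compared against previously visited places. This simplistic signature also demonstrates perceptual aliasing, since two genuinely different places can produce similar histograms; everything here is illustrative, not a technique from any specific SLAM system.

```python
import numpy as np

def scan_descriptor(ranges, bins=16, max_range=30.0):
    """Summarize a Lidar scan as a normalized histogram of ranges: a
    crude place signature that is invariant to sensor rotation."""
    hist, _ = np.histogram(ranges, bins=bins, range=(0.0, max_range))
    return hist / max(hist.sum(), 1)

def similarity(d1, d2):
    """Histogram intersection: 1.0 means identical signatures."""
    return float(np.minimum(d1, d2).sum())

# Descriptors of 50 previously visited places, then a noisy
# re-observation of place 17. Because the signature is so crude,
# unrelated places can still score highly -- perceptual aliasing.
visited = [scan_descriptor(np.random.uniform(1, 20, 360)) for _ in range(50)]
current = visited[17] + np.random.normal(0, 0.002, 16)
scores = [similarity(current, d) for d in visited]
print("loop-closure candidate:", int(np.argmax(scores)))   # likely 17
```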

Robustness and Reliability

Ensuring robustness and reliability in Lidar SLAM is crucial, particularly in safety-critical applications, such as autonomous vehicles or robotic assistants. Developing algorithms that can handle sensor noise, data outliers, and other uncertainties is an ongoing challenge in the field of SLAM. Additionally, achieving fault tolerance and fail-safe operation in the face of hardware or software failures is essential for real-world applications.

In summary, Lidar SLAM faces several challenges and limitations, including sensor limitations, computational complexity, dynamic environments, loop closure, and robustness. Addressing these challenges and developing reliable and efficient SLAM solutions is an ongoing area of research and development in the field of robotics and autonomous systems.

Conclusion

In this article, we have explored the world of Lidar SLAM, a critical technology that has revolutionized the field of robotics and autonomous systems. Lidar SLAM combines the high-precision measurements provided by Lidar sensors with advanced SLAM algorithms to create accurate maps of the environment and estimate the robot's position within that map.

We have covered the fundamental principles of Lidar SLAM, including its definition, working principles, and the different types of Lidar sensors. We discussed the various factors to consider when selecting the right Lidar sensor for a SLAM application and presented an overview of popular Lidar SLAM algorithms. Additionally, we delved into the implementation process of a Lidar SLAM algorithm and examined the challenges and limitations associated with this technology.

As the field of robotics and autonomous systems continues to advance, Lidar SLAM will undoubtedly play a pivotal role in enabling a wide range of applications, from autonomous vehicles to intelligent robots in smart factories. Researchers and engineers will continue to push the boundaries of Lidar SLAM, addressing its current limitations and exploring new and innovative ways to leverage this powerful technology.

Frequently Asked Questions (FAQs)

1: What are the main applications of Lidar SLAM?

Lidar SLAM has numerous applications, including autonomous vehicles, drones, mobile robots, indoor navigation, surveying, mapping, and even virtual reality. These applications span various industries such as automotive, agriculture, logistics, and construction.

2: Can Lidar SLAM work in darkness or low-light conditions?

Yes, Lidar sensors are active sensors that emit their own light in the form of laser pulses, which enables them to work effectively in darkness or low-light conditions. This is one of the advantages of Lidar SLAM over other techniques that rely on passive sensors, such as camera-based SLAM.

3: How does Lidar SLAM handle moving objects in the environment?

Handling dynamic environments with moving objects can be challenging for Lidar SLAM algorithms. Advanced algorithms typically use techniques such as object tracking, data association, or segmentation to differentiate between static and dynamic objects and update the map accordingly.

4: What is the difference between 2D and 3D Lidar SLAM?

The primary difference between 2D and 3D Lidar SLAM lies in the dimensionality of the data collected by the Lidar sensor. In 2D Lidar SLAM, the sensor provides data in a single plane, whereas, in 3D Lidar SLAM, the sensor captures data in three-dimensional space. As a result, 3D Lidar SLAM can provide more detailed and accurate maps of the environment, but at the cost of increased computational complexity.

5: Is Lidar SLAM suitable for outdoor applications?

Yes, Lidar SLAM is suitable for outdoor applications. However, the performance of Lidar sensors can be affected by certain environmental factors such as rain, fog, or dust. It is crucial to select a Lidar sensor designed for outdoor applications and ensure that the SLAM algorithm can handle these environmental challenges.


