A Primer on Lidar for autonomous vehicles


08 Oct, 2019



Autonomous vehicles, the imminent future everyone is looking towards, have implications ranging from the day-to-day commute of an average Joe to mission-critical, life-saving applications. As part of this exciting journey, Playment builds products that enable perception engineers to create highly accurate datasets at scale to train their models.

Depending on the robustness of its sensors and the performance of its algorithms, a car may be able to perform tasks more or less by itself, with minimal to no human intervention. These varying levels of human intervention are the reason AV technology is segregated into levels of autonomy.

For a car to “see”, it needs several types of sensors that, in one way or another, allow it to navigate safely: radars, high-resolution video cameras, high-precision inertial GPS, ultrasonic sensors and LiDARs. Today we will explain what a LiDAR is, one of the most ubiquitous sensors in autonomous cars, and how it works.

LiDAR stands for ‘Laser Imaging, Detection and Ranging’. Interestingly, it is an acronym that contains another acronym (LASER: Light Amplification by Stimulated Emission of Radiation). Similar to a radar, which emits radio waves that reflect off the objects they hit, a LiDAR emits beams of infrared laser light.

Types of LiDAR

In other contexts, you may have heard them described as laser scanners. They are quite common in topography, geology, architecture and geospatial informatics. LiDARs of this type look very similar to a theodolite or a total station and are likewise placed on a tripod, stationed at a fixed vertex.


Environmental factors play a great role in how well we drive. We all have difficulties driving at night, and so do image (camera) based autonomous driving systems. LiDAR-based systems do not rely solely on the reflectance of object surfaces to perceive the environment; they sample the 3D world at regular intervals to capture illumination-invariant object features.

Of course, LiDARs are not all-powerful. They often struggle in rainy conditions and have a limited range. They can be bulky and require expensive mounting rigs on the vehicle.

Differences between Camera/LiDAR and Radar 

How a LIDAR works

At its most basic, a LiDAR is a focused emitter of infrared laser beams (which therefore cannot be seen with the naked eye) paired with a receiver that captures the reflected beams. Under the conditions of intended use, they are not dangerous to the eyes.

The most basic models are stationary units mounted, for example, on the roof of a car. Other models rotate 360 degrees about their own axis to cover the whole environment.

The laser beams that hit objects are reflected, and the returning rays are detected through the receiver’s lens. A radar emits radio waves, a sonar emits acoustic waves, and similarly a LiDAR emits infrared waves that reflect when they encounter an object in their line of sight. The time difference between the emission and reception of a light pulse is used to calculate the relative position of the object. This is performed simultaneously with multiple vertically stacked lasers, sometimes rotating at high speed.
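The time-of-flight principle above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual firmware; the function name, angle convention (x forward, y left, z up) and parameters are my own assumptions.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_to_point(delta_t_s, azimuth_deg, elevation_deg):
    """Convert a laser round-trip time and beam angles to a 3D point
    in an assumed sensor frame (x forward, y left, z up)."""
    # The pulse travels to the object and back, so halve the round trip.
    r = C * delta_t_s / 2.0
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = r * math.cos(el) * math.cos(az)
    y = r * math.cos(el) * math.sin(az)
    z = r * math.sin(el)
    return (x, y, z)

# A return received 400 ns after emission lies roughly 60 m ahead.
print(tof_to_point(400e-9, 0.0, 0.0))
```

Each rotating laser produces such a point per firing, and stacking all beams over a full sweep yields the point cloud described below.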

Working Principle of LiDAR

This way, a LiDAR device obtains a cloud of points from the environment, from which the computer generates a three-dimensional image. This is done multiple times per second (frames per second) to determine the moving objects and their profiles.

A LiDAR gives you a precise description of the environment through millions of points.

LiDARs are considered very useful in autonomous cars, as they not only allow computers to measure exactly how far away surrounding objects are with great precision (by timing how long each laser beam takes to come back) but also to identify those objects. This way the car can anticipate situations that will occur (for example, the movement of other vehicles or pedestrians whose trajectories might intersect the ego-car’s path), or determine whether there is a danger of grazing or hitting something.
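One simple way to anticipate motion, as described above, is to track an object's point cluster across consecutive sweeps and difference the cluster centroids. This is a deliberately naive sketch (real trackers use filtering and data association); the function names and the 10 Hz frame rate are illustrative assumptions.

```python
def centroid(points):
    """Mean position of a cluster of (x, y, z) points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def estimate_velocity(prev_points, curr_points, frame_rate_hz):
    """Rough object velocity: displacement of the cluster centroid
    between two consecutive LiDAR frames, divided by the frame period."""
    c0, c1 = centroid(prev_points), centroid(curr_points)
    dt = 1.0 / frame_rate_hz
    return tuple((b - a) / dt for a, b in zip(c0, c1))

# A cluster that advances 0.5 m between two 10 Hz sweeps → ~5 m/s.
prev = [(10.0, 0.0, 0.0), (10.2, 0.1, 0.0)]
curr = [(10.5, 0.0, 0.0), (10.7, 0.1, 0.0)]
print(estimate_velocity(prev, curr, 10.0))
```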

Comparing Top LiDAR Sensors

On the one hand, we have the stationary LiDAR. It consists of a fairly compact unit with lenses for emitting laser beams and a lens for capturing the reflected beams. It is placed on the roof of the car, in front of the rear-view mirror, for a better view. Sometimes the unit combines the LiDAR with a video camera to recognize lane lines, pedestrians or traffic signals. These are used primarily for autonomous emergency braking systems; Volvo and Ford, for example, often use such devices.

On the other hand, we have the 360-degree rotating LIDAR. This is an adaptation of the topographic type LIDAR for automotive purposes.

Three LIDAR models from the Velodyne brand (from left to right): HDL-64E, HDL-32E, and VLP-16 (PUCK).

Google’s self-driving car: its LiDAR is a mushroom-like structure placed on the roof of the car, which through successive redesigns is being integrated into the vehicle’s design.

Ouster LiDAR: a peculiar device that captures the environment as an image and then generates the LiDAR point cloud from those images.

Velodyne’s HDL-64E model emits 64 laser beams and covers 360 degrees at 900 turns per minute, capturing the entire environment of the car with up to 2.2 million points per second. It has a range of 50 m for pavement and 120 m for vehicles, pedestrians and trees. A single unit is, in principle, enough to meet an AV’s needs.
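The quoted throughput follows from simple arithmetic: beams × azimuth firings per sweep × sweeps per second. The ~2,300 azimuth steps per revolution below is my back-of-envelope assumption chosen to reproduce the quoted figure, not a number from Velodyne's datasheet.

```python
def points_per_second(num_beams, rpm, azimuth_steps_per_rev):
    """Back-of-envelope LiDAR throughput: every beam fires once per
    azimuth step, and the head completes rpm/60 sweeps each second."""
    revs_per_sec = rpm / 60.0
    return num_beams * azimuth_steps_per_rev * revs_per_sec

# 64 beams spinning at 900 rpm (15 Hz) with ~2,300 azimuth steps per
# sweep lands near the HDL-64E's quoted ~2.2 million points per second.
print(points_per_second(64, 900, 2300))
```

The same formula explains why the 32- and 16-beam models below capture proportionally fewer points per second.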

Velodyne has also developed more compact and cheaper LiDARs. For example, the first autonomous Ford Mondeo prototype uses four units of the HDL-32E model. Each emits 32 laser beams and also rotates 360 degrees around its axis, at 600 turns per minute, capturing up to 0.7 million points per second. The range is 80 to 100 m for objects such as vehicles, pedestrians and trees. In total, more than 2.5 million points are processed per second.

The latest model is also the most compact and the cheapest, the VLP-16. There are three variants: Puck, Puck Lite and Puck Hi-Res. It emits 16 laser beams, rotates 360 degrees, captures up to 0.3 million points per second and reaches a range of up to 100 meters.

Not all car manufacturers use 360-degree LiDAR devices in their autonomous car prototypes (because of their price, or the difficulty of integrating them aesthetically). Sometimes they prefer multiple high-resolution video cameras, frequently stereoscopic, complemented by radars of different range and aperture. Another day we will talk about these alternatives to LiDAR.

Typical LiDARs involve an array of lasers with a mechanical assembly that lets them capture the 360-degree scene. These are bulky, fragile (the mechanical setup can cause wobble-based perturbations in the data) and expensive. With the advent of cheap solid-state LiDARs, however, we can achieve the same functionality in a much smaller form factor. Solid-state devices are limited in field of view, typically between 90 and 120 degrees, but this can be compensated for by using multiple devices to capture the scene.

The point clouds from the individual devices are fused using point cloud “registration” algorithms to achieve an output similar to that of mechanical LiDARs.
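When the extrinsic calibration between the solid-state units is already known, the fusion step reduces to applying a rigid transform to each cloud before concatenating them; registration algorithms such as ICP then refine any residual misalignment. The sketch below shows only the known-extrinsics case, with made-up sensor placements (one forward-facing unit, one rearward-facing).

```python
import math

def transform(points, yaw_deg, tx, ty, tz):
    """Apply a known rigid transform (yaw rotation + translation) to move
    a sensor-frame point cloud into the common vehicle frame."""
    yaw = math.radians(yaw_deg)
    c, s = math.cos(yaw), math.sin(yaw)
    return [(c * x - s * y + tx, s * x + c * y + ty, z + tz)
            for x, y, z in points]

# Two solid-state units: one at the front bumper facing forward,
# one at the rear bumper facing backward (yaw 180 degrees).
front = [(5.0, 0.0, 0.0)]   # 5 m ahead of the front sensor
rear = [(3.0, 0.0, 0.0)]    # 3 m ahead of the rear sensor
merged = (transform(front, 0.0, 1.0, 0.0, 0.0)
          + transform(rear, 180.0, -1.0, 0.0, 0.0))
print(merged)  # both points now expressed in the vehicle frame
```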

Despite range limitations, self-driving cars are required to detect objects as long as they are within the LiDAR’s range. A number of annotation difficulties arise because of this.

LiDAR Point Cloud data labeling challenges

  • Annotating LiDAR data requires an understanding of the 3D → 2D camera projection to reduce false negatives (missing annotations).
  • Working with low-resolution or fewer-beam LiDARs might not give a complete enough profile to understand the object.
  • LiDAR echo, a scene- and device-dependent distortion, can occlude real objects. Annotating for segmentation tasks in such conditions requires near point-level accuracy, which is sometimes difficult when multiple scenes are merged due to echoes and stray points.
  • Indoor LiDAR environments do not really change at the high frame rate the LiDAR is designed to capture. Hence, sequences can be labelled at a lower rate to avoid overfitting your model. Our experience helps you cut down costs in such cases.

More by Mothi Venkatesh


Wevolver 2022