How autonomous robots know where they are


Last updated on 03 Aug, 2019

Image: Highway image using LIDAR - Oregon State University (CC BY-SA 2.0)

A brief introduction to localization and mapping, and how these two functions are performed simultaneously during SLAM (Simultaneous Localization and Mapping).

What sensors do we need for building a fully autonomous robot?

Autonomous robots are equipped with numerous sensors to perceive their surroundings as well as to keep track of their own movements. The first step toward autonomy, i.e. a robot moving from point A to point B on its own without colliding with anything, is awareness of the surrounding environment. Autonomous robots therefore carry a stack of sensors; wheel odometers, an IMU, GPS, lidar, and multiple cameras are the most common. Current developments in the autonomous vehicle space also make use of sensors such as radar and stereo cameras in combination with the above stack.

What is localization?

Localization means to determine the current position and orientation of a body with respect to some coordinate system.
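As a concrete illustration (a minimal Python sketch, not tied to any particular robotics library), a 2D pose is simply a position (x, y) plus a heading θ, which together define a transform between the robot's body frame and the map's coordinate frame:

```python
import numpy as np

def pose_to_transform(x, y, theta):
    """Build a 3x3 homogeneous transform (map <- body) from a 2D pose."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# A point 1 m directly ahead of the robot, expressed in the body frame ...
point_body = np.array([1.0, 0.0, 1.0])

# ... converted to map coordinates, given the robot's pose in that map.
T = pose_to_transform(x=2.0, y=3.0, theta=np.pi / 2)
point_map = T @ point_body   # -> approximately (2.0, 4.0)
```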

How do humans localize, and is it a difficult task?

Humans very naturally determine their current position with respect to the environmental landmarks/features around them. When we localize ourselves, we understand that we are at some distance and at some angle from a house, a tree, a lamp post, or some other landmark.
Given an empty, featureless space, even humans cannot localize. Features are therefore essential for this task: we need a map filled with features/landmarks to localize ourselves against, otherwise the task is impossible.
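For instance, knowing the map position of a lamp post and measuring our distance and bearing to it is enough to pin down where we stand. The toy Python sketch below shows the geometry; the landmark coordinates, range, bearing, and heading values are made up for illustration, and a compass-known heading is assumed:

```python
import numpy as np

# Known landmark position on the map (hypothetical coordinates).
landmark = np.array([10.0, 5.0])

# Measurement: the landmark is 5 m away, 30 degrees to our left,
# and we know our own heading (e.g. from a compass / IMU).
rng, bearing, heading = 5.0, np.deg2rad(30.0), np.deg2rad(90.0)

# Our position is the landmark position minus the measurement vector
# rotated into the map frame.
angle = heading + bearing
position = landmark - rng * np.array([np.cos(angle), np.sin(angle)])
print(position)  # where we must be standing to have seen that measurement
```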

Then how do robots do it?

The SLAM algorithm has two parts: the first is mapping and the second is localization. Autonomous ground robots carry visual sensors such as lidar and cameras that can map the surrounding environment quite well. Through 3D reconstruction of this sensor data, the robot generates what is called an HD map.
HD maps differ from normal maps in that the former contain far more features than the latter. Once this map is ready, the robot starts localizing itself within it. Particle filters, triangulation, and visual odometry are some of the methods used for this purpose, as in the sketch below.
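To make map-based localization concrete, here is a heavily simplified particle filter in Python. It is a 1D toy world with hand-picked landmark positions, motion, and noise values (all assumptions for illustration, not from a real system): each particle is a candidate position, weighted by how well it explains the measured range to the nearest landmark, and the particle set is then resampled.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D world: landmarks at known positions on the map.
landmarks = np.array([2.0, 7.0, 12.0])

def measure(x, noise=0.0):
    """Range to the nearest landmark from position x."""
    return np.min(np.abs(landmarks - x)) + noise

true_x = 6.0
particles = rng.uniform(0.0, 15.0, size=1000)   # candidate positions
weights = np.ones_like(particles) / particles.size

for _ in range(10):
    # Motion update: the robot (and every particle) moves +0.5 m, with noise.
    true_x += 0.5
    particles += 0.5 + rng.normal(0.0, 0.1, size=particles.size)

    # Measurement update: weight each particle by how well it explains
    # the observed range to the nearest landmark.
    z = measure(true_x, noise=rng.normal(0.0, 0.2))
    predicted = np.array([measure(p) for p in particles])
    weights = np.exp(-0.5 * ((z - predicted) / 0.2) ** 2) + 1e-300
    weights /= weights.sum()

    # Resample particles in proportion to their weights.
    idx = rng.choice(particles.size, size=particles.size, p=weights)
    particles = particles[idx]

print("estimated position:", particles.mean(), "true position:", true_x)
```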

Another widely used method for localization is the Extended Kalman Filter (EKF), a non-linear extension of the linear Kalman Filter. It is a state estimator: given the previous state and new measurements, it estimates the current state. In robotics it is commonly used as a data-fusion filter for localization. It takes inputs from the IMU, wheel odometers, and GPS and, for a wheeled ground vehicle, propagates them through a CTRV (Constant Turn Rate and Velocity) motion model to estimate the vehicle's current position and orientation. Often this estimate is fused with visual odometry to reach an accuracy of around 100 mm in the localization task.
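A rough Python sketch of how such a filter fits together is shown below: a CTRV prediction step followed by a GPS position update. The noise matrices, time step, and GPS readings are placeholders, and a real localization stack fuses many more inputs and handles angle wrap-around, timing, and outliers.

```python
import numpy as np

def ctrv_predict(state, dt):
    """CTRV motion model: state = [x, y, v, yaw, yaw_rate]."""
    x, y, v, yaw, yaw_rate = state
    if abs(yaw_rate) > 1e-6:
        x += v / yaw_rate * (np.sin(yaw + yaw_rate * dt) - np.sin(yaw))
        y += v / yaw_rate * (np.cos(yaw) - np.cos(yaw + yaw_rate * dt))
    else:  # driving (almost) straight
        x += v * np.cos(yaw) * dt
        y += v * np.sin(yaw) * dt
    return np.array([x, y, v, yaw + yaw_rate * dt, yaw_rate])

def jacobian(f, x, eps=1e-6):
    """Numerical Jacobian of f at x (keeps the sketch short)."""
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (f(x + dx) - fx) / eps
    return J

# State estimate and covariance.
x = np.array([0.0, 0.0, 1.0, 0.0, 0.1])
P = np.eye(5)
Q = np.diag([0.01, 0.01, 0.1, 0.01, 0.01])    # process noise (placeholder)
R = np.diag([1.0, 1.0])                       # GPS noise (placeholder)
H = np.array([[1, 0, 0, 0, 0],                # GPS measures x and y only
              [0, 1, 0, 0, 0]], dtype=float)
dt = 0.1

for gps in ([0.12, 0.03], [0.21, 0.05], [0.33, 0.09]):  # fake GPS readings
    # Predict with the CTRV model.
    F = jacobian(lambda s: ctrv_predict(s, dt), x)
    x = ctrv_predict(x, dt)
    P = F @ P @ F.T + Q

    # Update with the GPS position measurement.
    residual = np.asarray(gps) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ residual
    P = (np.eye(5) - K @ H) @ P

print("estimated position:", x[:2], "heading:", x[3])
```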

Once a robot knows where it is on the map, it can start planning its path to the destination, point B, which brings us to another interesting field of research: path planning.


More resources on SLAM:

1. Teaching robots presence

2. SLAM

3. Simultaneous Localization and Mapping


More about Tanmay Chakraborty

Mr. Chakraborty is an early-stage researcher and consultant in the domains of AI, CV, and collaborative AI. He has a number of publications and is actively developing novel patent-pending technologies in Industry 4.0 domains.