The what and why of GRAPH SLAM!

‘SLAM’ refers to the method of simultaneous localisation (i.e. estimating the position and orientation) of a sensor with respect to its surroundings, while at the same time mapping the environment. This is an active area of research in the fields of robotics and autonomous systems.

So, let’s get started!

SLAM (Simultaneous Localisation and Mapping) can be implemented for a 2-dimensional world. It combines robot sensor measurements and movement over time to locate the robot and build a map of its environment from only the sensor and motion data the robot gathers. SLAM gives you a way to track the location of a robot in the world in real time and to identify the locations of landmarks such as buildings, trees, rocks, and other world features.

3D map of abandoned underground coal mine in Pennsylvania

The key insight in building a map is that the robot itself might lose track of where it is, by virtue of its motion uncertainty, since there is no existing map to check against (the map is being built as it goes). That’s where SLAM comes into play. Graph SLAM is by far the easiest method to understand (it’s also one of my favourites).

SLAM generated map

A graph-based SLAM approach constructs a simplified estimation problem by abstracting the raw sensor measurements: they are replaced by edges in a graph, which can then be seen as “virtual measurements”.

To understand this in a simple way, let’s assume an initial robot position of X0 = 0 and Y0 = 0; for this example, assume heading direction and compass are not a concern.
Say the robot moves to the right along the x-direction by 10. In a perfect world, the location after motion would be X1 = X0 + 10 and Y1 = 0.

But, as work on Kalman filters has shown, motion is actually uncertain. So it’s better to say that the actual location after the motion is a Gaussian centred around (10, 0), while it’s still possible that the robot is somewhere else.

The maths for the X variable goes as follows:
Rather than setting X1 to X0 + 10, let’s express it as a Gaussian that peaks when these two things are the same. So if we take X1 − X0 − 10, square it, and turn it into a Gaussian, we get a probability distribution that relates X1 and X0. The same can be done for Y; since our motion doesn’t change Y, the constraint keeps Y1 and Y0 as close as possible.
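As a small sketch of that idea (the function name, the unit variance, and the example numbers are my own, not from the original), the squared-and-Gaussian trick boils down to a cost that is lowest exactly when X1 equals X0 + 10:

```python
def motion_constraint_cost(x0, x1, dx, sigma=1.0):
    """Negative log-likelihood (up to a constant) of the Gaussian
    motion constraint: it is lowest when x1 == x0 + dx."""
    residual = x1 - x0 - dx
    return residual ** 2 / (2 * sigma ** 2)

# A pose that exactly matches the commanded +10 motion: zero cost
print(motion_constraint_cost(0.0, 10.0, 10.0))  # 0.0
# A pose estimate that is off by 1 incurs a positive cost
print(motion_constraint_cost(0.0, 9.0, 10.0))   # 0.5
```

Maximising the Gaussian’s likelihood is the same as minimising this squared residual, which is why the constraint “peaks when these two things are the same”.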

The product of these two Gaussians is now our constraint: the goal is to maximise the likelihood of the position X1 given that the position X0 is (0, 0). So what Graph SLAM does is define the probabilities using a sequence of such constraints.
Graph SLAM collects the initial location, which is (0, 0) (the initial constraint); then lots of relative constraints that relate each robot pose to the previous robot pose (relative motion constraints); and, every time the robot sees a landmark, a constraint relating that pose to the landmark’s location (relative measurement constraints).
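In the 1-D case, these constraints can be accumulated into an information matrix and vector (often called omega and xi in the Udacity material this post follows). Here is a minimal sketch under my own assumptions: three poses, two +10 motions, and unit confidence for every constraint:

```python
import numpy as np

# 1-D Graph SLAM sketch: three poses x0, x1, x2 with two motion
# constraints (+10 each) plus the initial constraint x0 = 0.
omega = np.zeros((3, 3))  # information matrix
xi = np.zeros(3)          # information vector

# Initial constraint: x0 = 0
omega[0, 0] += 1.0
xi[0] += 0.0

def add_motion(omega, xi, i, j, dx):
    """Relative motion constraint x_j - x_i = dx."""
    omega[i, i] += 1.0
    omega[j, j] += 1.0
    omega[i, j] -= 1.0
    omega[j, i] -= 1.0
    xi[i] -= dx
    xi[j] += dx

add_motion(omega, xi, 0, 1, 10.0)
add_motion(omega, xi, 1, 2, 10.0)

# The most likely poses solve omega * mu = xi
mu = np.linalg.solve(omega, xi)
print(mu)  # x0 = 0, x1 = 10, x2 = 20
```

Each constraint only touches the rows and columns of the poses it involves, which is what makes the graph structure show up as sparsity in omega.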

Taken together, these constraints yield the most likely configuration of the robot path along with the locations of the landmarks, and that is the mapping process.
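To show a landmark entering the same machinery, here is a sketch with entirely made-up measurement distances: one landmark L is observed from two poses, and solving the system recovers the path and the landmark location together:

```python
import numpy as np

# Variables: [x0, x1, L] — two robot poses and one landmark (1-D).
# Hypothetical data: the robot moves +10 from x0 to x1, sees L at
# distance +5 from x0, and at distance -5 (i.e. behind it) from x1.
omega = np.zeros((3, 3))
xi = np.zeros(3)

def add_constraint(omega, xi, i, j, d):
    """Constraint var_j - var_i = d (used for motion and measurements)."""
    omega[i, i] += 1.0
    omega[j, j] += 1.0
    omega[i, j] -= 1.0
    omega[j, i] -= 1.0
    xi[i] -= d
    xi[j] += d

omega[0, 0] += 1.0                      # initial constraint x0 = 0
add_constraint(omega, xi, 0, 1, 10.0)   # motion: x1 - x0 = 10
add_constraint(omega, xi, 0, 2, 5.0)    # measurement: L - x0 = 5
add_constraint(omega, xi, 1, 2, -5.0)   # measurement: L - x1 = -5

mu = np.linalg.solve(omega, xi)
print(mu)  # x0 = 0, x1 = 10, L = 5
```

Motion and measurement constraints have exactly the same algebraic form here; the only difference is whether the second variable is a pose or a landmark.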

Reference: Computer Vision Nanodegree by Udacity

If you want to see its implementation in code, check out this GitHub link:
Landmark Detection Tracking SLAM

Garima Nishad, a Machine Learning Research scholar who loves to moonlight as a blogger.

A button that says 'Get it on, Google Play', and if clicked it will lead you to the Google Play store
