Monte Carlo Localization Background

This post contains an overview of prior work and terminology related to Monte Carlo Localization.

Posted by Vegard Bergsvik Øvstegård on November 23, 2020


Robot localization

Robot localization refers to a robot’s ability to establish its position and orientation within its frame of reference. Vision-based (or optical) localization uses computer vision techniques and algorithms together with optical sensors to extract the features of the surrounding environment needed to localize an agent.

Particle filter

A particle filter is a type of algorithm used to estimate the state of a dynamical system[1]. Such filters are often called Monte Carlo algorithms because they rely on repeated random sampling to obtain results. In other words, they use randomness to solve problems that might be deterministic in principle. Our problem is locating the UAV on a map, and a Monte Carlo approach is to compare the environment visible to the UAV with randomly sampled locations on the map. The probability of the UAV being in an area is related to how similar the environment around the UAV is to the map around a given random sample.
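To make the idea concrete, here is a minimal Python sketch of this map-matching Monte Carlo approach. It is not the project's actual implementation; the array shapes and the `patch_similarity` measure are illustrative assumptions.

```python
import numpy as np

def patch_similarity(a, b):
    # Simple similarity score in (0, 1]: higher means the patches look more alike.
    return 1.0 / (1.0 + np.mean((a.astype(float) - b.astype(float)) ** 2))

def monte_carlo_scores(observed_patch, full_map, n_samples=1000, rng=None):
    """Score randomly sampled map locations by how similar the local map patch
    is to what the UAV currently observes (higher score = more likely pose)."""
    rng = rng or np.random.default_rng()
    h, w = observed_patch.shape
    H, W = full_map.shape
    samples, scores = [], []
    for _ in range(n_samples):
        # Draw a random candidate location on the map.
        y = int(rng.integers(0, H - h + 1))
        x = int(rng.integers(0, W - w + 1))
        candidate = full_map[y:y + h, x:x + w]
        samples.append((x, y))
        scores.append(patch_similarity(observed_patch, candidate))
    # Normalize the scores into a probability over the sampled locations.
    scores = np.asarray(scores)
    return samples, scores / scores.sum()
```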

Monte Carlo localization

Monte Carlo Localization, a.k.a. particle filter localization[1], is an algorithm robots use to estimate their position and orientation (pose). The algorithm uses a particle filter to represent the distribution of probable states, where each particle represents a possible state, i.e., a hypothesis of where the robot is[1]. Since the robot initially has no information about where it is, it assumes any pose is equally likely, and the algorithm typically starts with particles drawn uniformly at random over the configuration space. Whenever the robot moves, the particles are shifted to predict its new state. Every time the robot senses its environment, the particles are resampled based on how well the sensed data correlate with the predicted state; this cycle of prediction and measurement-based resampling is a form of recursive Bayesian estimation. Over time, the particles should converge towards the actual position of the robot[1].
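The resampling step can be sketched in a few lines of Python. This is a generic importance-resampling helper, not the GIL framework's code; the particle representation is left abstract, and the weights are assumed to come from some sensor model.

```python
import numpy as np

def resample(particles, weights, rng=None):
    """Importance resampling: draw a new particle set in which each existing
    particle is selected with probability proportional to its weight."""
    rng = rng or np.random.default_rng()
    weights = np.asarray(weights, dtype=float)
    probs = weights / weights.sum()
    indices = rng.choice(len(particles), size=len(particles), p=probs)
    return [particles[i] for i in indices]
```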

Description

Consider a robot with an internal map of its surroundings. As the robot moves around, it needs to know its position within that map. Estimating its pose and orientation from its sensor observations is, as previously mentioned, robot localization. Due to imperfections in the robot’s sensors and mechanics, and perhaps randomness in the environment, its movement may not be entirely predictable. For this reason, it generates many random guesses about where it will end up next. These guesses are known as particles. Each particle contains a full description of a possible future state, such as orientation, coordinates, etc. When the robot makes an observation, it discards the guesses that are inconsistent with what it can see and generates more particles around those that appear consistent. Over time, the idea is that most particles, or guesses, converge to where the robot is.
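As an illustration, a single particle could be represented by a small data structure like the following. The fields are an assumption for a planar robot; a real filter might carry more state.

```python
from dataclasses import dataclass

@dataclass
class Particle:
    """One hypothesis of the robot's state."""
    x: float             # position in the map frame
    y: float
    heading: float       # orientation in radians
    weight: float = 1.0  # how well this hypothesis matches the latest observation
```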

Example of a 1D robot:

Consider a robot in a 1D circular corridor with three identical doors, using a sensor that returns either true or false depending on whether there is a door.

t = 0

The algorithm initializes with a uniform distribution of particles. The robot considers itself equally likely to be at any point in space along the corridor, even though it is physically at the first door.
Sensor update: the robot detects a door. It assigns a weight to each of the particles. The particles which are likely to give this sensor reading receive a higher weight.
Resampling: the robot generates a set of new particles, with most of them generated around the previous particles with more weight. It now believes it is at one of the three doors.

t = 1

Motion update: the robot moves some distance to the right. All particles also move right, and some noise is applied. The robot is physically between the second and third doors.
Sensor update: the robot detects no door. It assigns a weight to each of the particles. The particles likely to give this sensor reading receive a higher weight.
Resampling: the robot generates a set of new particles, with most of them generated around the previous particles with more weight. It now believes it is at one of two locations.

t = 2

Motion update: the robot moves some distance to the left. All particles also move left, and some noise is applied. The robot is physically at the second door.
Sensor update: the robot detects a door. It assigns a weight to each of the particles. The particles likely to give this sensor reading receive a higher weight.
Resampling: the robot generates a set of new particles, with most of them generated around the previous particles with more weight. The robot has successfully localized itself.

At the end of the three iterations, most of the particles have converged on the actual position of the robot, as desired.
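The corridor example above can be reproduced with a short, self-contained Python simulation. Everything here (door positions, noise levels, the crude 0.9/0.1 sensor model) is an illustrative assumption, not part of the original example.

```python
import numpy as np

def corridor_mcl(n_particles=1000, corridor_length=100.0, doors=(10.0, 50.0, 80.0),
                 door_half_width=2.0,
                 steps=((0.0, True), (30.0, False), (-10.0, True)),
                 motion_noise=1.0, seed=0):
    """Toy 1D MCL in a circular corridor with three doors.

    `steps` mirrors the example above: one (movement, door_detected) pair per
    time step t = 0, 1, 2."""
    rng = np.random.default_rng(seed)

    def at_door(x):
        # Circular distance to the nearest door.
        return any(min(abs(x - d), corridor_length - abs(x - d)) < door_half_width
                   for d in doors)

    # t = 0: uniform belief over the whole corridor.
    particles = rng.uniform(0.0, corridor_length, n_particles)
    for move, sensed_door in steps:
        # Motion update: shift particles and add noise, wrapping around the corridor.
        particles = (particles + move
                     + rng.normal(0.0, motion_noise, n_particles)) % corridor_length
        # Sensor update: weight particles that agree with the door reading.
        weights = np.array([0.9 if at_door(p) == sensed_door else 0.1 for p in particles])
        weights /= weights.sum()
        # Resampling: keep hypotheses in proportion to their weights.
        particles = rng.choice(particles, size=n_particles, p=weights)
    return particles
```

Running `corridor_mcl()` and plotting a histogram of the returned particles should show most of the probability mass clustered near a single door, mirroring the narrative above.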

Algorithm:

Given an a priori known map of the environment, the goal of the algorithm is for the robot to determine its pose within the environment.

At every time $t$ the algorithm takes as input the previous belief $X_{t-1} = \{x_{t-1}^{[1]}, x_{t-1}^{[2]}, \dots, x_{t-1}^{[M]}\}$, an actuation command $u_t$, and data received from the sensors $z_t$; the algorithm outputs the new belief $X_t$[1].

Monte Carlo Localization algorithm [1]
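In Python, the overall structure of that algorithm can be sketched as follows. The `sample_motion_model` and `measurement_model` functions are placeholders for the robot-specific motion and sensor models described in [1]; this is a sketch of the structure, not a drop-in implementation.

```python
import numpy as np

def mcl(X_prev, u_t, z_t, sample_motion_model, measurement_model, rng=None):
    """One iteration of Monte Carlo Localization.

    X_prev: previous particle set (the belief at time t-1)
    u_t:    actuation command at time t
    z_t:    sensor data at time t
    Returns the new particle set X_t."""
    rng = rng or np.random.default_rng()
    M = len(X_prev)
    X_bar, weights = [], []
    # Prediction and weighting.
    for m in range(M):
        x_t = sample_motion_model(u_t, X_prev[m], rng)  # sample from p(x_t | u_t, x_{t-1})
        w_t = measurement_model(z_t, x_t)               # importance weight p(z_t | x_t)
        X_bar.append(x_t)
        weights.append(w_t)
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    # Resampling: draw M particles with probability proportional to their weights.
    indices = rng.choice(M, size=M, p=weights)
    return [X_bar[i] for i in indices]
```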

GIL Simulation:

Below is a short simulation of the GIL framework demonstrating MCL. The red dot is the UAV and the black dots are particles.

[1] S. Thrun, “Probabilistic robotics,” Communications of the ACM, vol. 45, no. 3, pp. 52–57, 2002, doi: 10.1145/504729.504754.