Robot localization
Robot localization refers to a robot’s ability to establish its position and orientation within its frame of reference. Vision-based localization, or optical localization, uses computer vision techniques and optical sensors to extract the features of the surrounding environment required to localize an agent.
Particle filter
A particle filter is a type of algorithm used to estimate the state of a dynamical system[1]. Particle filters are often called Monte Carlo algorithms because they rely on repeated random sampling to obtain results; in other words, they use randomness to solve problems that might be deterministic in principle. Our problem is locating the UAV on a map, and a Monte Carlo approach is to compare the environment visible to the UAV with randomly chosen locations on the map. The probability that the UAV is in an area is related to how similar the environment around the UAV is to the environment around a given random sample on the map.
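This idea can be sketched in a few lines of Python. The map, the sensor values, and the likelihood weights below are illustrative assumptions, not part of the actual UAV system: random candidate locations are drawn and weighted by how well the map there matches the current observation.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical 1D "map": 1 = door, 0 = wall (values chosen for illustration)
world = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0]

def likelihood(observation, cell):
    """Weight is high when the map at `cell` matches what the UAV senses."""
    return 0.9 if world[cell] == observation else 0.1

# Monte Carlo step: draw random candidate locations and weight each one
# by how well the map there agrees with the current observation.
observation = 1  # the UAV currently senses a door
samples = [random.randrange(len(world)) for _ in range(1000)]
weights = [likelihood(observation, s) for s in samples]

# Normalized weights approximate the probability of being at each location.
total = sum(weights)
posterior = {}
for s, w in zip(samples, weights):
    posterior[s] = posterior.get(s, 0.0) + w / total
```

With enough samples, the door cells (1, 4 and 8 in this toy map) accumulate noticeably more probability mass than the wall cells, which is exactly the "similarity implies probability" intuition described above.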
Monte Carlo localization
Monte Carlo localization, also known as particle filter localization[1], is an algorithm that lets a robot estimate its position and orientation (pose). The algorithm uses a particle filter to represent the distribution of probable states, where each particle represents a possible state, i.e., a hypothesis of where the robot is[1]. Because the robot initially has no information about where it is and assumes every pose is equally likely, the algorithm typically starts with particles drawn from a uniform random distribution over the configuration space. Whenever the robot moves, the particles are shifted to predict its next state. Every time the robot senses its environment, the particles are resampled according to how well the sensed data correlate with the predicted states; this scheme is an instance of recursive Bayesian estimation. Over time, the particles should converge towards the actual position of the robot[1].
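The resampling step can be sketched in isolation. This is a minimal illustration, assuming particles are plain (position, orientation) tuples and that weights have already been computed from a sensor model:

```python
import random

# Four hypothetical particles and their measurement weights (made-up values):
particles = [(0.0, 0.0), (1.0, 0.1), (2.0, 0.2), (3.0, 0.3)]
weights = [0.1, 0.1, 0.7, 0.1]  # how well each particle explains the sensor data

# Draw a new particle set with probability proportional to weight:
# particles consistent with the measurement tend to be duplicated,
# inconsistent ones tend to die out.
random.seed(1)
resampled = random.choices(particles, weights=weights, k=len(particles))
```

Sampling with replacement keeps the particle count constant while concentrating the set on the hypotheses that best explain the observation.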
Description
Consider a robot with an internal map of its surroundings. As the robot moves around, it needs to know its position within said map. As mentioned above, estimating its pose from its sensor observations is known as robot localization. Due to imperfections in the robot’s sensors and mechanics, and perhaps randomness in the environment, the movement might not behave entirely predictably. For this reason, the robot generates many random guesses about where it will be next. These guesses are known as particles. Each particle contains a full description of a possible future state, such as orientation, coordinates, etc. When the robot makes an observation, it discards the guesses inconsistent with what it can see and generates more particles near those that appear consistent. Over time, the idea is that most particles, or guesses, converge to where the robot actually is.
Example of a 1D robot:
Consider a robot in a 1D circular corridor with three identical doors, equipped with a sensor that returns true or false depending on whether it is at a door.
[Figure: particle distributions at t = 0, t = 1, and t = 2.]
At the end of the three iterations, most of the particles have converged to the actual position of the robot, as desired.
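The door example can be made concrete with a short, self-contained simulation. All parameters here are illustrative assumptions (a 100-cell circular corridor, doors at cells 10, 30 and 70, a binary door sensor with a detection width of one cell, and a robot that advances 20 cells per step), not values from the GIL framework:

```python
import random

random.seed(3)
N = 100                 # circular corridor of N cells (assumed size)
DOORS = (10, 30, 70)    # assumed door positions

def circ_dist(a, b):
    """Shortest distance between two cells on the circular corridor."""
    d = abs(a - b) % N
    return min(d, N - d)

def sense(pos):
    # True iff the position is at or directly next to a door
    return any(circ_dist(pos, d) <= 1 for d in DOORS)

n_particles = 500
particles = [random.randrange(N) for _ in range(n_particles)]  # uniform prior
robot = 10  # true position, unknown to the filter

for _ in range(3):  # the three iterations t = 0, 1, 2
    z = sense(robot)
    # measurement update: favour particles that agree with the observation
    weights = [0.9 if sense(p) == z else 0.1 for p in particles]
    particles = random.choices(particles, weights=weights, k=n_particles)
    # motion update: the robot advances 20 cells; particles follow with noise
    robot = (robot + 20) % N
    particles = [(p + 20 + random.choice((-1, 0, 1))) % N for p in particles]

# after three iterations, most particles should sit near the true position
near = sum(1 for p in particles if circ_dist(p, robot) <= 3)
```

After the first door observation the particles cluster around all three doors; the subsequent motion and observations are only consistent with the branch that started at cell 10, so the ambiguity resolves and the largest cluster tracks the robot.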
Algorithm:
Given an a priori known map of the environment, the goal of the algorithm is for the robot to determine its pose within the environment.
At every time t the algorithm takes as input the previous belief X_{t-1} = {x_{t-1}^[1], x_{t-1}^[2], …, x_{t-1}^[M]} of M particles, an actuation command u_t, and the data z_t received from the sensors; the algorithm outputs the new belief X_t.[1]
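One update step in this notation can be sketched as follows. The function names and the two model callbacks are placeholders for the robot's actual probabilistic motion and sensor models, not part of the source:

```python
import random

def mcl_update(X_prev, u_t, z_t, motion_model, measurement_model):
    """Map the previous belief X_{t-1}, control u_t and measurement z_t
    to the new belief X_t (a list of M particles)."""
    # prediction: sample each particle's successor state from the motion model
    X_bar = [motion_model(x, u_t) for x in X_prev]
    # correction: weight each predicted particle by its measurement likelihood
    w = [measurement_model(z_t, x) for x in X_bar]
    # resampling: draw M particles with probability proportional to weight
    return random.choices(X_bar, weights=w, k=len(X_bar))
```

With, say, a 1D motion model x' = x + u_t and a likelihood that peaks where the predicted state matches z_t, repeatedly calling mcl_update realizes the recursive estimation loop described above.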
GIL Simulation:
Below is a short simulation in the GIL framework demonstrating MCL. The red dot is the UAV and the black dots are particles.
[1] S. Thrun, “Probabilistic robotics,” Communications of the ACM, vol. 45, no. 3, pp. 52–57, 2002, doi: 10.1145/504729.504754.