Topic > The concept of SLAM (simultaneous localization and mapping)

Index

- Changing the state of the robot to achieve the goal
- KF techniques
- Particle filter techniques
- Expectation maximization (EM) technique and improvement with missing or hidden data
- Graph-based SLAM techniques

Visual SLAM is an algorithm that simultaneously locates the robot and creates a map of the environment. The basic steps involved in a SLAM problem are:

- Given an unknown environment and a pose of the robot, move through the environment
- Estimate the pose of the robot
- Generate a map of environmental features
- Use the 3D map to identify the target
- Change the state of the robot in order to reach the goal

Simultaneous Localization and Mapping (SLAM) is the process by which a mobile robot constructs a map of an unknown environment and simultaneously calculates its position using that map. SLAM has been formulated and solved as a theoretical problem in many different forms. It has been implemented in many fields, from indoor to outdoor settings, and the possibility of applying robotics to surgical tasks has caught the attention of the medical community. The common point is that navigation accuracy affects the success and outcome of a task, regardless of the domain.

Since its inception, the SLAM problem has been developed and optimized in several ways. There are three main families of algorithms: Kalman filters (KF), particle filters, and graph-based SLAM. The first two are also referred to as filtering techniques, where location and map estimates are augmented and refined by incorporating new measurements as they become available. Due to their incremental nature, these approaches are generally known as online SLAM techniques. In contrast, graph-based SLAM estimates the entire trajectory and map from the complete set of measurements; this formulation is called the full SLAM problem.

KF Techniques

Smith et al.
were the first to present the idea of representing the structure of the navigation area in a discrete state-space-time framework, introducing the concept of a stochastic map. Since the original KF algorithm is based on the assumption of linearity, which is rarely satisfied in practice, two variants have mainly been used since then: the extended KF (EKF) and the information filter (IF). The EKF overcomes the linearity assumption by describing the next-state and measurement probabilities with nonlinear functions. The unscented KF (UKF) has been developed more recently to overcome some major shortcomings of the EKF. It approximates the state distribution with a Gaussian random variable, as in the EKF, but here the distribution is represented by a minimal set of carefully chosen sample points, called sigma points. When propagated through the nonlinear system, these points capture the posterior mean and covariance accurately up to the third order of the Taylor series for any nonlinearity.

The dual of the KF is the information filter, which is based on the same assumptions; the key difference lies in the way the Gaussian belief is represented. The estimated covariance and estimated state are replaced by the information matrix and information vector, respectively. This brings several advantages over the KF: measurements are integrated by simply adding the information matrices and vectors, providing more accurate estimates, and information filtering tends to be numerically more stable in many applications. The KF is more advantageous in the prediction phase, which is additive for it, while the IF prediction involves inverting two matrices, which increases computational complexity for high-dimensional state spaces. These roles are reversed in the measurement phase, illustrating the dual character of the Kalman and information filters. A variant of the extended IF (EIF), the sparse extended information filter, maintains a sparse representation of environmental dependencies to achieve constant-time updates.
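The EKF predict-update cycle described above can be sketched on a toy one-dimensional case: a robot moving along a line observes its range to a single landmark, so the measurement model is nonlinear and must be linearized with a Jacobian. The motion model, landmark position, and noise values below are illustrative assumptions, not taken from any cited system.

```python
import math

# Toy EKF sketch: state x is the robot's 1D position; the robot
# observes its range to a landmark at elevation H_LANDMARK. All
# models and noise values here are illustrative assumptions.

H_LANDMARK = 10.0   # assumed landmark elevation
Q, R = 0.1, 0.5     # assumed motion / measurement noise variances

def ekf_step(x, P, u, z):
    # Predict: propagate the state estimate and its variance
    # through the (linear, for simplicity) motion model x' = x + u.
    x_pred = x + u
    P_pred = P + Q
    # Linearize the nonlinear range measurement around the prediction.
    z_pred = math.sqrt(x_pred ** 2 + H_LANDMARK ** 2)
    J = x_pred / z_pred                  # Jacobian dz/dx
    # Update: the Kalman gain weighs prediction against measurement.
    S = J * P_pred * J + R               # innovation variance
    K = P_pred * J / S
    return x_pred + K * (z - z_pred), (1 - K * J) * P_pred

x, P = 0.0, 1.0
# Measurements consistent with a robot moving 1 unit per step.
for u, z in [(1.0, 10.05), (1.0, 10.20), (1.0, 10.44)]:
    x, P = ekf_step(x, P, u, z)
# The estimate tracks the true position near 3.0.
```

In a full EKF-SLAM state the robot pose and all landmark positions are stacked into one vector, and P becomes the joint covariance matrix; its quadratic growth with the number of landmarks is precisely what motivates the sparse information-filter variants mentioned above.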
They were inspired by other works on SLAM filters that represent relative distances, but none of those are capable of constant-time updating. To overcome the difficulties of both the EKF and the IF and to be more efficient in terms of computational complexity, a combined Kalman-information filter SLAM algorithm (CF-SLAM) was developed. It is a combination of EKF and EIF that makes it possible to run highly efficient SLAM in large environments.

Particle Filter Techniques

Particle filters comprise a large family of sequential Monte Carlo algorithms in which the posterior is represented by a set of random state samples, called particles. Almost any probabilistic robot model that features a Markov chain formulation is suitable for a particle filter implementation. Their accuracy increases with the available computational resources, so they do not require a fixed computation time. They are also relatively easy to implement: there is no need to linearize nonlinear models or to derive closed-form solutions of the conditional probability, as in the KF. Poor performance in higher-dimensional spaces is their main limitation. The need to increase the consistency of the estimate, together with the problem of the degeneracy of the trajectory samples, has led to the adoption of different sampling strategies.

FastSLAM denotes a family of algorithms that integrates particle filters and the EKF. It takes advantage of the fact that feature estimates are conditionally independent given the observations, controls, and the robot's path. This implies that the mapping problem can be divided into separate problems, one for each map feature, since the individual map errors are then also independent. FastSLAM uses a particle filter to estimate the robot's path and, for each particle, uses an EKF to estimate the feature positions, providing computational advantages over plain EKF implementations and handling nonlinear robot motion models well.
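The propagate-weight-resample cycle of a bootstrap particle filter can be sketched for a toy one-dimensional localization problem. The motion noise, measurement model (range to a beacon at the origin), and particle count below are illustrative assumptions, not part of any cited algorithm.

```python
import math
import random

# Bootstrap particle filter sketch for 1D localization. A robot moves
# along a line and measures its distance to a beacon at the origin.
# All parameters here are illustrative assumptions.

N = 500            # number of particles (assumed)
MEAS_STD = 0.5     # assumed measurement noise std deviation

def pf_step(particles, u, z):
    # 1. Propagate each particle through a noisy motion model.
    particles = [p + u + random.gauss(0.0, 0.2) for p in particles]
    # 2. Weight each particle by how well it explains the measurement.
    weights = [math.exp(-0.5 * ((z - abs(p)) / MEAS_STD) ** 2)
               for p in particles]
    # 3. Resample with replacement, proportionally to the weights,
    #    so unlikely hypotheses die out and likely ones multiply.
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
particles = [random.uniform(-10.0, 10.0) for _ in range(N)]
# Controls and range measurements consistent with moving 0 -> 3.
for u, z in [(1.0, 1.0), (1.0, 2.0), (1.0, 3.0)]:
    particles = pf_step(particles, u, z)
estimate = sum(particles) / len(particles)
```

Note how no Jacobian or closed-form posterior is needed, which is exactly the implementation advantage over the KF family noted above; the price is that the number of particles required grows quickly with the state dimension.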
However, the particle approximation does not converge uniformly over time, due to the presence of the map in the state space, which is a static parameter.

Expectation Maximization (EM) Technique and Improvement with Missing or Hidden Data

EM is an efficient iterative procedure for computing parameter estimates in probabilistic models with missing or hidden data. Each iteration consists of two steps: the expectation, or E-step, which estimates the missing data given the current model and the observed data; and the maximization, or M-step, which computes the parameters that maximize the expected log-likelihood found in the E-step. The missing-data estimate from the E-step is used in place of the actual missing data. The algorithm guarantees convergence to a local maximum of the objective function, but it requires that all data be available at each iteration; an online version has therefore been implemented, in which there is no need to store the data, as they are used sequentially. This algorithm has also been used to relax the assumption, made in many SLAM problems, that the environment is static. Most existing methods are robust when mapping static, structured, and small-scale environments, while mapping unstructured, dynamic, or large-scale environments remains an open research problem. In the literature, there are mainly two.
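The E-step/M-step alternation can be illustrated on the classic textbook case of a two-component one-dimensional Gaussian mixture, where the hidden data are the unknown component assignments. The data, variances, and initial means below are illustrative assumptions chosen only to make the sketch runnable.

```python
import math
import random

# EM sketch: fit the means of a two-component 1D Gaussian mixture
# with known, equal variances. The hidden data are the component
# assignments; the E-step estimates them as soft responsibilities,
# the M-step recomputes the means. All values are illustrative.

STD = 1.0  # assumed (known) component standard deviation

def em_fit(data, mu1, mu2, iters=50):
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point,
        # i.e. the expected value of the hidden assignment.
        r = []
        for x in data:
            p1 = math.exp(-0.5 * ((x - mu1) / STD) ** 2)
            p2 = math.exp(-0.5 * ((x - mu2) / STD) ** 2)
            r.append(p1 / (p1 + p2))
        # M-step: responsibility-weighted means maximize the
        # expected log-likelihood computed in the E-step.
        mu1 = sum(ri * x for ri, x in zip(r, data)) / sum(r)
        mu2 = (sum((1 - ri) * x for ri, x in zip(r, data))
               / (len(data) - sum(r)))
    return mu1, mu2

random.seed(1)
data = ([random.gauss(-2.0, 1.0) for _ in range(200)]
        + [random.gauss(3.0, 1.0) for _ in range(200)])
mu1, mu2 = em_fit(data, -1.0, 1.0)
# The means converge toward the true cluster centers, -2 and 3.
```

Each iteration here touches the entire data set, which is the batch requirement mentioned above; an online variant would instead update the responsibilities and means incrementally as each new observation arrives.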