Artificial Intelligence 128 (2001) 99–141
Robust Monte Carlo localization for mobile robots
Sebastian Thrun a,∗, Dieter Fox b, Wolfram Burgard c, Frank Dellaert a

a School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA
b Department of Computer Science and Engineering, University of Washington, Seattle, WA 98195, USA
c Computer Science Department, University of Freiburg, Freiburg, Germany
Received 20 April 2000
Abstract
Mobile robot localization is the problem of determining a robot’s pose from sensor data. This
article presents a family of probabilistic localization algorithms known as Monte Carlo Localization
(MCL). MCL algorithms represent a robot’s belief by a set of weighted hypotheses (samples),
which approximate the posterior under a common Bayesian formulation of the localization problem.
Building on the basic MCL algorithm, this article develops a more robust algorithm called Mixture-MCL, which integrates two complementary ways of generating samples in the estimation. To apply
this algorithm to mobile robots equipped with range finders, a kernel density tree is learned that
permits fast sampling. Systematic empirical results illustrate the robustness and computational
efficiency of the approach.
2001 Published by Elsevier Science B.V.
Keywords: Mobile robots; Localization; Position estimation; Particle filters; Kernel density trees
1. Introduction
Mobile robot localization is the problem of estimating a robot’s pose (location,
orientation) relative to its environment. The localization problem is a key problem in
mobile robotics. It plays a pivotal role in various successful mobile robot systems (see
e.g., [10,25,31,45,59,65,74] and various chapters in [4,41]). Occasionally, it has been
referred to as “the most fundamental problem to providing a mobile robot with autonomous
capabilities” [8].
The mobile robot localization problem comes in many different flavors [4,24]. The
most simple localization problem—which has received by far the most attention in the
literature—is position tracking [4,64,74,75]. Here the initial robot pose is known, and
∗ Corresponding author. E-mail address: thrun@cs.cmu.edu (S. Thrun).
the problem is to compensate for incremental errors in a robot’s odometry. Algorithms for
position tracking often make restrictive assumptions on the size of the error and the shape
of the robot’s uncertainty, required by a range of existing localization algorithms. More
challenging is the global localization problem [6,34,61], where a robot is not told its
initial pose but instead has to determine it from scratch. The global localization problem
is more difficult, since the error in the robot’s estimate cannot be assumed to be small.
Consequently, a robot should be able to handle multiple, distinct hypotheses. Even more
difficult is the kidnapped robot problem [20,24], in which a well-localized robot is teleported to some other place without being told. This problem differs from the global
localization problem in that the robot might firmly believe itself to be somewhere else
at the time of the kidnapping. The kidnapped robot problem is often used to test a robot’s
ability to recover from catastrophic localization failures. Finally, all these problems are particularly hard in dynamic environments, e.g., if robots operate in the proximity of people who corrupt the robot’s sensor measurements [5,71].
The vast majority of existing algorithms address only the position tracking problem
(see e.g., the review [4]). The nature of small, incremental errors makes algorithms
such as Kalman filters [28,37,47,68] applicable, which have been successfully applied
in a range of fielded systems (e.g., [27,42,44,63]). Kalman filters estimate posterior
distributions of robot poses conditioned on sensor data. Exploiting a range of restrictive
assumptions—such as Gaussian noise and Gaussian-distributed initial uncertainty—they
represent posteriors by Gaussians. Kalman filters offer an elegant and efficient algorithm
for localization. However, the restrictive nature of the belief representation makes plain
Kalman filters inapplicable to global localization problems.
This limitation is overcome by two related families of algorithms: localization with
multi-hypothesis Kalman filters and Markov localization. Multi-hypothesis Kalman filters
represent beliefs using mixtures of Gaussians [9,34,60,61], thereby enabling them to
pursue multiple, distinct hypotheses, each of which is represented by a separate Gaussian.
However, this approach inherits from Kalman filters the Gaussian noise assumption. To
meet this assumption, virtually all practical implementations extract low-dimensional
features from the sensor data, thereby ignoring much of the information acquired by a
robot’s sensors. Markov localization algorithms, in contrast, represent beliefs by piecewise
constant functions (histograms) over the space of all possible poses [7,24,30,36,40,50,
55,56,66,70]. Just like Gaussian mixtures, piecewise constant functions are capable of
representing complex, multi-modal representations. Some of these algorithms also rely on
features [36,40,50,55,66,70], hence are subject to similar shortcomings as the algorithms
based on multi-hypothesis Kalman filters. Others localize robots based on raw sensor data
with non-Gaussian noise distributions [7,24]. However, accommodating raw sensor data
requires fine-grained representations, which impose significant computational burdens. To
overcome this limitation, researchers have proposed selective updating algorithms [24] and
tree-based representations that dynamically change their resolution [6]. It is remarkable
that all of these algorithms share the same probabilistic basis. They all estimate posterior
distributions over poses under certain independence assumptions—which will also be the
case for the approach presented in this article.
This article presents a probabilistic localization algorithm called Monte Carlo localization (MCL) [13,21]. MCL solves the global localization and kidnapped robot problem
in a robust and efficient way. It can accommodate arbitrary noise distributions (and non-
linearities in robot motion and perception). Thus, MCL avoids a need to extract features
from the sensor data.
The key idea of MCL is to represent the belief by a set of samples (also called:
particles), drawn according to the posterior distribution over robot poses. In other words,
rather than approximating posteriors in parametric form, as is the case for Kalman
filter and Markov localization algorithms, MCL represents the posteriors by a random
collection of weighted particles which approximates the desired distribution [62]. The
idea of estimating state recursively using particles is not new, although most work on
this topic is very recent. In the statistical literature, it is known as particle filters [17,
18,46,58], and recently computer vision researchers have proposed the same algorithm
under the name of condensation algorithm [33]. Within the context of localization, the
particle representation has a range of characteristics that sets it aside from previous
approaches:
(1) Particle filters can accommodate (almost) arbitrary sensor characteristics, motion
dynamics, and noise distributions.
(2) Particle filters are universal density approximators, weakening the restrictive
assumptions on the shape of the posterior density when compared to previous
parametric approaches.
(3) Particle filters focus computational resources in areas that are most relevant, by
sampling in proportion to the posterior likelihood.
(4) By controlling the number of samples on-line, particle filters can adapt to the
available computational resources. The same code can, thus, be executed on
computers with vastly different speed without modification.
(5) Finally, particle filters are surprisingly easy to implement, which makes them an
attractive paradigm for mobile robot localization. Consequently, MCL has already
been adopted by several research teams [16,43], who have extended the basic
paradigm in interesting new ways.
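The interplay of the points above — motion sampling, importance weighting, and adaptive resampling — can be made concrete with a toy sketch. The following is not the paper's implementation but a minimal one-dimensional MCL update in Python, assuming a hypothetical sensor that observes the pose directly with Gaussian noise:

```python
import math
import random

def mcl_update(particles, action, observation, motion_noise=0.1, sensor_sigma=0.5):
    """One regular MCL update on a toy 1-D state: sample from the motion
    model, weight by the observation likelihood, then resample."""
    # Prediction: propagate each particle through a noisy motion model.
    predicted = [x + action + random.gauss(0.0, motion_noise) for x in particles]

    # Correction: importance-weight each particle by p(o_t | x_t); here the
    # (hypothetical) sensor observes the pose itself with Gaussian noise.
    weights = [math.exp(-0.5 * ((observation - x) / sensor_sigma) ** 2)
               for x in predicted]
    total = sum(weights) or 1e-300  # guard against an all-zero weight set

    # Resampling: draw the next generation in proportion to the weights.
    return random.choices(predicted, weights=[w / total for w in weights],
                          k=len(particles))
```

Iterating this function concentrates the sample set around poses consistent with the observations; the `weights=` argument to `random.choices` performs the importance resampling step.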
However, there are pitfalls, too, arising from the stochastic nature of the approximation.
Some of these pitfalls are obvious: For example, if the sample set size is small, a well-localized robot might lose track of its position just because MCL fails to generate a sample in the right location. The regular MCL algorithm is also unfit for the kidnapped robot
problem, since there might be no surviving samples nearby the robot’s new pose after it has
been kidnapped. Somewhat counter-intuitive is the fact that the basic algorithm degrades
poorly when sensors are too accurate. In the extreme, regular MCL will fail with perfect,
noise-free sensors. All these problems can be overcome, e.g., by augmenting the sample set
through uniformly distributed samples [21], generating samples consistent with the most
recent sensor reading [43] (an idea familiar from multi-hypothesis Kalman filtering [1,34,
61]), or assuming a higher level of sensor noise than actually is the case. While these extensions yield improved performance, they are mathematically questionable. In particular, these extensions do not approximate the correct density, which makes the interpretation of their results difficult.
To overcome these problems, this article describes an extension of MCL closely related
to [43], called Mixture-MCL [72]. Mixture-MCL addresses all these problems in a way
that is mathematically motivated. The key idea is to modify the way samples are generated
in MCL. Mixture-MCL combines regular MCL sampling with a “dual” of MCL, which
basically inverts MCL’s sampling process. More specifically, while regular MCL first
guesses a new pose using odometry, then uses the sensor measurements to assess the
“importance” of this sample, dual MCL guesses poses using the most recent sensor
measurement, then uses odometry to assess the compliance of this guess with the robot’s
previous belief and odometry data. Neither of these sampling methodologies alone is
sufficient; in combination, however, they work very well. In particular, Mixture-MCL
works well if the sample set size is small (e.g., 50 samples), it recovers faster from robot kidnapping than any previous variation of MCL, and it also works well when sensor models
are too narrow for regular MCL. Thus, from a performance point of view, Mixture-MCL
is uniformly superior to regular MCL and particle filters.
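As a rough sketch of the mixing idea (not the authors' code), the next sample set can be assembled from two caller-supplied proposal mechanisms, with a hypothetical mixing ratio `phi` controlling the fraction of dual samples:

```python
import random

def mixture_mcl_step(particles, action, observation, regular_step, dual_step, phi=0.1):
    """Generate the next sample set by mixing two proposal distributions.

    With probability 1 - phi a sample comes from the regular MCL proposal
    (motion model first, sensor weighting second); with probability phi it
    comes from the dual proposal (sensor model first, motion weighting
    second). `regular_step` and `dual_step` are caller-supplied functions
    that each return the requested number of samples.
    """
    n = len(particles)
    # Decide per sample which proposal to use.
    n_dual = sum(1 for _ in range(n) if random.random() < phi)
    samples = regular_step(particles, action, observation, n - n_dual)
    samples += dual_step(particles, action, observation, n_dual)
    return samples
```

In the paper's experiments a small mixing ratio suffices; the point is that either proposal alone fails in some regime, while the mixture covers both.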
The key disadvantage of Mixture-MCL is a requirement for a sensor model that permits
fast sampling of poses. While in certain cases, such a model can trivially be obtained, in
others, such as the navigation domains studied here and in [24], it cannot. To overcome
this difficulty, our approach uses sufficient statistics and density trees to learn a sampling
model from data. More specifically, in a pre-processing phase sensor readings are mapped
into a set of discriminating features, and potential robot poses are then drawn randomly
using the trees generated in this phase. Once the tree has been constructed, dual sampling can be done very
efficiently.
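One simple stand-in for such a learned sampling model (far cruder than the density trees used in the paper) is to bucket training poses by a discriminating feature of the sensor reading; the function names and the feature itself are illustrative assumptions:

```python
import random
from collections import defaultdict

def learn_sampling_model(poses, readings, feature):
    """Pre-processing sketch: group training poses by a discriminating
    feature of the corresponding sensor reading, so that poses can later
    be sampled given a new reading (a flat, hypothetical stand-in for the
    paper's density trees)."""
    buckets = defaultdict(list)
    for pose, reading in zip(poses, readings):
        buckets[feature(reading)].append(pose)
    return buckets

def sample_pose(buckets, reading, feature):
    """Dual sampling: draw a pose consistent with the current reading,
    or None if no training data matched this feature value."""
    candidates = buckets.get(feature(reading))
    return random.choice(candidates) if candidates else None
```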
To shed light onto the performance of Mixture-MCL in practice, empirical results are
presented using a robot simulator and data collected by physical robots. Simulation is
used since it allows us to systematically vary key parameters such as the sensor noise,
thereby enabling us to characterize the degradation of MCL in extreme situations. To verify
the experimental findings obtained with simulation, Mixture-MCL is also applied to two
extensive data sets gathered in a public museum (a Smithsonian museum in Washington,
DC), where during a two-week period in the fall of 1998 our mobile robot Minerva gave
interactive tours to thousands of visitors [71]. One of the data sets comprises laser range
data, where a metric map of the museum is used for localization [71]. The other data set
contains image segments recorded with a camera pointed towards the museum’s ceiling,
using a large-scale ceiling mosaic for cross-referencing the robot’s position [14]. In the
past, these data have been used as a benchmark, since localization in this crowded and
feature-impoverished museum is a challenging problem. Our experiments suggest that our
new MCL algorithm is highly efficient and accurate.
The remainder of this article is organized as follows. Section 2 introduces the regular
MCL algorithm, which includes a mathematical derivation from first principles and an
experimental characterization of MCL in practice. The section also compares MCL with
grid-based Markov localization [24], an alternative localization algorithm capable of
global localization. Section 3 presents examples where regular MCL performs poorly,
along with a brief analysis of the underlying causes. This section is followed by the
description of dual MCL and Mixture-MCL in Section 4. Section 5 describes our approach
to learning trees for efficient sampling in dual MCL. Experimental results are given in
Section 6. Finally, we conclude this article with a description of related work in Section 7,
and a discussion of the strengths and weaknesses of Mixture-MCL (Section 8).
2. Monte Carlo localization
2.1. Bayes filtering
MCL is a recursive Bayes filter that estimates the posterior distribution of robot poses
conditioned on sensor data. Bayes filters address the problem of estimating the state x
of a dynamical system (partially observable Markov chain) from sensor measurements.
For example, in mobile robot localization the dynamical system is a mobile robot and its
environment, the state is the robot’s pose therein (often specified by a position in a two-dimensional Cartesian space and the robot’s heading direction), and measurements may
include range measurements, camera images, and odometry readings. Bayes filters assume
that the environment is Markov, that is, past and future data are (conditionally) independent
if one knows the current state. The Markov assumption will be made more explicit below.
The key idea of Bayes filtering is to estimate a probability density over the state space
conditioned on the data. This posterior is typically called the belief and is denoted
\[
\mathrm{Bel}(x_t) = p(x_t \mid d_{0\ldots t}).
\]
Here $x$ denotes the state, $x_t$ is the state at time $t$, and $d_{0\ldots t}$ denotes the data starting at time 0 up to time $t$. For mobile robots, we distinguish two types of data: perceptual data such as laser range measurements, and odometry data, which carry information about robot motion. Denoting the former by $o$ (for observation) and the latter by $a$ (for action), we have
\[
\mathrm{Bel}(x_t) = p(x_t \mid o_t, a_{t-1}, o_{t-1}, a_{t-2}, \ldots, o_0). \tag{1}
\]
Without loss of generality, we assume that observations and actions arrive in an alternating sequence. Notice that we will use $a_{t-1}$ to refer to the odometry reading that measures the motion that occurred in the time interval $[t-1; t]$, to illustrate that the motion is the result of the control action asserted at time $t-1$.
Bayes filters estimate the belief recursively. The initial belief characterizes the initial
knowledge about the system state. In the absence of such knowledge, it is typically
initialized by a uniform distribution over the state space. In mobile robot localization, a
uniform initial distribution corresponds to the global localization problem, where the initial
robot pose is unknown.
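For illustration, a uniform initial particle set for global localization might be drawn as follows (a sketch, with hypothetical state bounds supplied by the caller):

```python
import math
import random

def uniform_initial_particles(n, x_range, y_range):
    """Global localization: with no prior knowledge, initialize the belief
    with n poses (x, y, heading) drawn uniformly over the state space."""
    return [(random.uniform(*x_range), random.uniform(*y_range),
             random.uniform(0.0, 2.0 * math.pi)) for _ in range(n)]
```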
To derive a recursive update equation, we observe that expression (1) can be transformed by Bayes rule to
\[
\mathrm{Bel}(x_t) = \frac{p(o_t \mid x_t, a_{t-1}, \ldots, o_0)\, p(x_t \mid a_{t-1}, \ldots, o_0)}{p(o_t \mid a_{t-1}, \ldots, o_0)}. \tag{2}
\]
Because the denominator is a constant relative to the variable $x_t$, Bayes rule is usually written as
\[
\mathrm{Bel}(x_t) = \eta\, p(o_t \mid x_t, a_{t-1}, \ldots, o_0)\, p(x_t \mid a_{t-1}, \ldots, o_0), \tag{3}
\]
where $\eta$ is the normalization constant
\[
\eta = p(o_t \mid a_{t-1}, \ldots, o_0)^{-1}. \tag{4}
\]
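In a discrete (histogram) approximation of the belief, the correction implied by Eqs. (3) and (4) reduces to a pointwise product followed by renormalization; the sketch below assumes states indexed $0, \ldots, n-1$:

```python
def bayes_correction(belief, obs_likelihood):
    """Multiply the prior belief by the observation likelihood p(o_t | x_t)
    and renormalize with eta = 1 / p(o_t | past data), as in Eqs. (3)-(4).

    `belief` is a list of probabilities over discrete states; `obs_likelihood`
    maps a state index to p(o_t | x_t)."""
    unnormalized = [obs_likelihood(x) * b for x, b in enumerate(belief)]
    eta = 1.0 / sum(unnormalized)  # Eq. (4): inverse of the evidence term
    return [eta * u for u in unnormalized]
```

Note that $\eta$ never needs to be computed from the denominator of Eq. (2) directly; summing the unnormalized products yields the same constant.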