Improved Techniques for Grid Mapping
with Rao-Blackwellized Particle Filters
Giorgio Grisetti∗†   Cyrill Stachniss‡∗   Wolfram Burgard∗
∗ University of Freiburg, Dept. of Computer Science, Georges-Köhler-Allee 79, D-79110 Freiburg, Germany
† Dipartimento Informatica e Sistemistica, Università “La Sapienza”, Via Salaria 113, I-00198 Rome, Italy
‡ Eidgenössische Technische Hochschule Zurich (ETH), IRIS, ASL, 8092 Zurich, Switzerland
Abstract— Recently, Rao-Blackwellized particle filters have
been introduced as an effective means to solve the simultaneous
localization and mapping problem. This approach uses a particle
filter in which each particle carries an individual map of the
environment. Accordingly, a key question is how to reduce the
number of particles. In this paper, we present adaptive techniques
for reducing this number in a Rao-Blackwellized particle filter
for learning grid maps. We propose an approach to compute
an accurate proposal distribution taking into account not only
the movement of the robot but also the most recent observation.
This drastically decreases the uncertainty about the robot’s pose
in the prediction step of the filter. Furthermore, we present an
approach to selectively carry out resampling operations which
seriously reduces the problem of particle depletion. Experimental
results carried out with real mobile robots in large-scale indoor
as well as in outdoor environments illustrate the advantages of
our methods over previous approaches.
Index Terms—SLAM, Rao-Blackwellized particle filter, adap-
tive resampling, motion-model, improved proposal
I. INTRODUCTION
Building maps is one of the fundamental tasks of mobile
robots. In the literature, the mobile robot mapping problem is
often referred to as the simultaneous localization and mapping
(SLAM) problem [4, 6, 9, 15, 16, 26, 29, 32, 39]. It is
considered to be a complex problem, because for localization
a robot needs a consistent map and for acquiring a map a
robot requires a good estimate of its location. This mutual
dependency between the pose and the map estimates makes
the SLAM problem hard and requires searching for a solution
in a high-dimensional space.
Murphy, Doucet, and colleagues [6, 32] introduced Rao-
Blackwellized particle filters as an effective means to solve
the simultaneous localization and mapping problem. The main
problem of the Rao-Blackwellized approaches is their com-
plexity measured in terms of the number of particles required
to build an accurate map. Therefore, reducing this quantity
is one of the major challenges for this family of algorithms.
Additionally, the resampling step can potentially eliminate
the correct particle. This effect is also known as the particle
depletion problem or as particle impoverishment [44].
In this work, we present two approaches to substantially
increase the performance of Rao-Blackwellized particle filters
applied to solve the SLAM problem with grid maps:
• A proposal distribution that considers the accuracy of the
robot’s sensors and allows us to draw particles in a highly
accurate manner.
• An adaptive resampling technique which maintains a
reasonable variety of particles and in this way enables
the algorithm to learn an accurate map while reducing
the risk of particle depletion.
The proposal distribution is computed by evaluating the like-
lihood around a particle-dependent most likely pose obtained
by a scan-matching procedure combined with odometry in-
formation. In this way, the most recent sensor observation is
taken into account for creating the next generation of particles.
This allows us to estimate the state of the robot according to
a more informed (and thus more accurate) model than the
one obtained based only on the odometry information. The
use of this refined model has two effects. The map is more
accurate since the current observation is incorporated into the
individual maps after considering its effect on the pose of
the robot. This significantly reduces the estimation error so that fewer particles are required to represent the posterior. The second approach, the adaptive resampling strategy, allows us to perform a resampling step only when needed, thereby keeping a reasonable particle diversity. This results in a significantly reduced risk of particle depletion.
The work presented in this paper is an extension of our
previous work [14] as it further optimizes the proposal dis-
tribution to even more accurately draw the next generation
of particles. Furthermore, we added a complexity analysis, a
formal description of the used techniques, and provide more
detailed experiments in this paper. Our approach has been
validated by a set of systematic experiments in large-scale
indoor and outdoor environments. In all experiments, our
approach generated highly accurate metric maps. Additionally,
the number of the required particles is one order of magnitude
lower than with previous approaches.
This paper is organized as follows. After explaining how a
Rao-Blackwellized filter can be used in general to solve the
SLAM problem, we describe our approach in Section III. We
then provide implementation details in Section IV. Experi-
ments carried out on real robots are presented in Section VI.
Finally, Section VII discusses related approaches.
II. MAPPING WITH RAO-BLACKWELLIZED PARTICLE FILTERS
According to Murphy [32], the key idea of the Rao-Blackwellized particle filter for SLAM is to estimate the joint posterior $p(x_{1:t}, m \mid z_{1:t}, u_{1:t-1})$ about the map $m$ and the trajectory $x_{1:t} = x_1, \ldots, x_t$ of the robot. This estimation is performed given the observations $z_{1:t} = z_1, \ldots, z_t$ and the odometry measurements $u_{1:t-1} = u_1, \ldots, u_{t-1}$ obtained by the mobile robot. The Rao-Blackwellized particle filter for SLAM makes use of the following factorization
$$p(x_{1:t}, m \mid z_{1:t}, u_{1:t-1}) = p(m \mid x_{1:t}, z_{1:t}) \cdot p(x_{1:t} \mid z_{1:t}, u_{1:t-1}). \quad (1)$$
This factorization allows us to first estimate only the trajectory
of the robot and then to compute the map given that trajectory.
Since the map strongly depends on the pose estimate of
the robot, this approach offers an efficient computation. This
technique is often referred to as Rao-Blackwellization.
Typically, Eq. (1) can be calculated efficiently since the posterior over maps $p(m \mid x_{1:t}, z_{1:t})$ can be computed analytically using “mapping with known poses” [31] since $x_{1:t}$ and $z_{1:t}$ are known.
To estimate the posterior $p(x_{1:t} \mid z_{1:t}, u_{1:t-1})$ over the potential trajectories, one can apply a particle filter. Each particle
represents a potential trajectory of the robot. Furthermore, an
individual map is associated with each sample. The maps are
built from the observations and the trajectory represented by
the corresponding particle.
One of the most common particle filtering algorithms is
the sampling importance resampling (SIR) filter. A Rao-
Blackwellized SIR filter for mapping incrementally processes
the sensor observations and the odometry readings as they
are available. It updates the set of samples that represents the
posterior about the map and the trajectory of the vehicle. The
process can be summarized by the following four steps:
1) Sampling: The next generation of particles $\{x_t^{(i)}\}$ is obtained from the generation $\{x_{t-1}^{(i)}\}$ by sampling from the proposal distribution $\pi$. Often, a probabilistic odometry motion model is used as the proposal distribution.
2) Importance Weighting: An individual importance weight $w_t^{(i)}$ is assigned to each particle according to the importance sampling principle
$$w_t^{(i)} = \frac{p(x_{1:t}^{(i)} \mid z_{1:t}, u_{1:t-1})}{\pi(x_{1:t}^{(i)} \mid z_{1:t}, u_{1:t-1})}. \quad (2)$$
The weights account for the fact that the proposal distribution $\pi$ is in general not equal to the target distribution of successor states.
3) Resampling: Particles are drawn with replacement pro-
portional to their importance weight. This step is nec-
essary since only a finite number of particles is used to
approximate a continuous distribution. Furthermore, re-
sampling allows us to apply a particle filter in situations
in which the target distribution differs from the proposal.
After resampling, all the particles have the same weight.
4) Map Estimation: For each particle, the corresponding map estimate $p(m^{(i)} \mid x_{1:t}^{(i)}, z_{1:t})$ is computed based on the trajectory $x_{1:t}^{(i)}$ of that sample and the history of observations $z_{1:t}$.
The implementation of this schema requires evaluating the weights of the trajectories from scratch whenever a new
observation is available. Since the length of the trajectory
increases over time, this procedure would lead to an obviously
inefficient algorithm. According to Doucet et al. [7], we obtain
a recursive formulation to compute the importance weights by
restricting the proposal π to fulfill the following assumption
$$\pi(x_{1:t} \mid z_{1:t}, u_{1:t-1}) = \pi(x_t \mid x_{1:t-1}, z_{1:t}, u_{1:t-1}) \cdot \pi(x_{1:t-1} \mid z_{1:t-1}, u_{1:t-2}). \quad (3)$$
Based on Eqs. (2) and (3), the weights are computed as
$$w_t^{(i)} = \frac{p(x_{1:t}^{(i)} \mid z_{1:t}, u_{1:t-1})}{\pi(x_{1:t}^{(i)} \mid z_{1:t}, u_{1:t-1})} \quad (4)$$
$$= \frac{\eta\, p(z_t \mid x_{1:t}^{(i)}, z_{1:t-1})\, p(x_t^{(i)} \mid x_{t-1}^{(i)}, u_{t-1})}{\pi(x_t^{(i)} \mid x_{1:t-1}^{(i)}, z_{1:t}, u_{1:t-1})} \cdot \underbrace{\frac{p(x_{1:t-1}^{(i)} \mid z_{1:t-1}, u_{1:t-2})}{\pi(x_{1:t-1}^{(i)} \mid z_{1:t-1}, u_{1:t-2})}}_{w_{t-1}^{(i)}} \quad (5)$$
$$\propto \frac{p(z_t \mid m_{t-1}^{(i)}, x_t^{(i)})\, p(x_t^{(i)} \mid x_{t-1}^{(i)}, u_{t-1})}{\pi(x_t \mid x_{1:t-1}^{(i)}, z_{1:t}, u_{1:t-1})} \cdot w_{t-1}^{(i)}. \quad (6)$$
Here $\eta = 1/p(z_t \mid z_{1:t-1}, u_{1:t-1})$ is a normalization factor resulting from Bayes' rule that is equal for all particles.
Most of the existing particle filter applications rely on the
recursive structure of Eq. (6). Whereas the generic algorithm
specifies a framework that can be used for learning maps, it
leaves open how the proposal distribution should be computed
and when the resampling step should be carried out. Through-
out the remainder of this paper, we describe a technique that
computes an accurate proposal distribution and that adaptively
performs resampling.
III. RBPF WITH IMPROVED PROPOSALS AND ADAPTIVE RESAMPLING
In the literature, several methods for computing improved
proposal distributions and for reducing the risk of particle
depletion have been proposed [7, 30, 35]. Our approach applies
two concepts that have previously been identified as key prerequisites for efficient particle filter implementations (see
Doucet et al. [7]), namely the computation of an improved
proposal distribution and an adaptive resampling technique.
A. On the Improved Proposal Distribution
As described in Section II, one needs to draw samples from
a proposal distribution π in the prediction step in order to ob-
tain the next generation of particles. Intuitively, the better the
proposal distribution approximates the target distribution, the better the performance of the filter. For instance, if we were
able to directly draw samples from the target distribution, the
importance weights would become equal for all particles and
the resampling step would no longer be needed. Unfortunately,
in the context of SLAM a closed form of this posterior is not
available in general.
As a result, typical particle filter applications [3, 29] use
the odometry motion model as the proposal distribution. This
motion model has the advantage that it is easy to compute for
[Figure 1: plot of the observation likelihood $p(z \mid x)$ and the motion model $p(x \mid x', u)$ over the robot position, with the interval $L^{(i)}$ marked.]
Fig. 1. The two components of the motion model. Within the interval $L^{(i)}$ the product of both functions is dominated by the observation likelihood in case an accurate sensor is used.
most types of robots. Furthermore, the importance weights are then computed according to the observation model $p(z_t \mid m, x_t)$. This becomes clear by replacing $\pi$ in Eq. (6) by the motion model $p(x_t \mid x_{t-1}, u_{t-1})$
$$w_t^{(i)} = w_{t-1}^{(i)} \frac{\eta\, p(z_t \mid m_{t-1}^{(i)}, x_t^{(i)})\, p(x_t^{(i)} \mid x_{t-1}^{(i)}, u_{t-1})}{p(x_t^{(i)} \mid x_{t-1}^{(i)}, u_{t-1})} \quad (7)$$
$$\propto w_{t-1}^{(i)} \cdot p(z_t \mid m_{t-1}^{(i)}, x_t^{(i)}). \quad (8)$$
This proposal distribution, however, is suboptimal especially when the sensor information is significantly more precise than the motion estimate of the robot based on the odometry, which is typically the case if a robot is equipped with a laser range finder (e.g., a SICK LMS). Figure 1 illustrates
a situation in which the meaningful area of the observation
likelihood is substantially smaller than the meaningful area of
the motion model. When using the odometry model as the
proposal distribution in such a case, the importance weights
of the individual samples can differ significantly from each
other since only a fraction of the drawn samples cover the
regions of state space that have a high likelihood under the
observation model (area $L^{(i)}$ in Figure 1). As a result, one needs a comparably high number of samples to sufficiently cover the regions with high observation likelihood.
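This effect can be made concrete with a toy 1-D experiment (the numbers below are my own illustration, not from the paper): samples drawn from a broad odometry-like proposal are weighted by a sharply peaked observation likelihood, and the effective sample size $N_{\mathit{eff}} = 1/\sum_i (w^{(i)})^2$ collapses to a small fraction of the particle count.

```python
import math
import random

def effective_sample_size(weights):
    """Neff = 1 / sum(w_i^2) for normalized weights; equals n when all
    weights are equal and approaches 1 when a single particle dominates."""
    return 1.0 / sum(w * w for w in weights)

random.seed(42)
n = 1000
# Broad proposal: odometry-like uncertainty with std 0.5 m.
samples = [random.gauss(0.0, 0.5) for _ in range(n)]
# Peaked observation likelihood around the true pose 0.2 m (std 0.02 m),
# mimicking an accurate laser range finder.
weights = [math.exp(-0.5 * ((x - 0.2) / 0.02) ** 2) for x in samples]
total = sum(weights)
weights = [w / total for w in weights]
neff = effective_sample_size(weights)
# Only a small fraction of the 1000 samples carries the probability mass.
```

The mismatch between proposal and likelihood widths leaves most of the drawn samples with negligible weight, which is exactly why an improved proposal that incorporates the observation pays off.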
A common approach – especially in localization – is to use
a smoothed likelihood function, which avoids that particles
close to the meaningful area get a too low importance weight.
However, this approach discards useful information gathered
by the sensor and, in our experience, often leads to less accurate maps in the SLAM context.
To overcome this problem, one can consider the most recent sensor observation $z_t$ when generating the next generation of samples. By integrating $z_t$ into the proposal one can focus the sampling on the meaningful regions of the observation likelihood. According to Doucet [5], the distribution
$$p(x_t \mid m_{t-1}^{(i)}, x_{t-1}^{(i)}, z_t, u_{t-1}) = \frac{p(z_t \mid m_{t-1}^{(i)}, x_t)\, p(x_t \mid x_{t-1}^{(i)}, u_{t-1})}{p(z_t \mid m_{t-1}^{(i)}, x_{t-1}^{(i)}, u_{t-1})} \quad (9)$$
is the optimal proposal distribution with respect to the variance
of the particle weights. Using that proposal, the computation
of the weights turns into
$$w_t^{(i)} = w_{t-1}^{(i)} \frac{\eta\, p(z_t \mid m_{t-1}^{(i)}, x_t^{(i)})\, p(x_t^{(i)} \mid x_{t-1}^{(i)}, u_{t-1})}{p(x_t \mid m_{t-1}^{(i)}, x_{t-1}^{(i)}, z_t, u_{t-1})} \quad (10)$$
$$\propto w_{t-1}^{(i)} \frac{p(z_t \mid m_{t-1}^{(i)}, x_t^{(i)})\, p(x_t^{(i)} \mid x_{t-1}^{(i)}, u_{t-1})}{\dfrac{p(z_t \mid m_{t-1}^{(i)}, x_t)\, p(x_t \mid x_{t-1}^{(i)}, u_{t-1})}{p(z_t \mid m_{t-1}^{(i)}, x_{t-1}^{(i)}, u_{t-1})}} \quad (11)$$
$$= w_{t-1}^{(i)} \cdot p(z_t \mid m_{t-1}^{(i)}, x_{t-1}^{(i)}, u_{t-1}) \quad (12)$$
$$= w_{t-1}^{(i)} \cdot \int p(z_t \mid x')\, p(x' \mid x_{t-1}^{(i)}, u_{t-1})\, dx'. \quad (13)$$
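The integral in Eq. (13) can be approximated by a Monte Carlo average over samples drawn from the motion model. The sketch below uses assumed 1-D Gaussian motion and observation models (`motion_sample`, `obs_likelihood` are hypothetical stand-ins, not the paper's grid-map likelihood):

```python
import math
import random

def motion_sample(x_prev, u, sigma=0.1):
    # Hypothetical 1-D odometry model: predicted pose x_prev + u plus noise.
    return random.gauss(x_prev + u, sigma)

def obs_likelihood(z, x, sigma=0.05):
    # Hypothetical 1-D observation model: Gaussian density of z around x.
    return math.exp(-0.5 * ((z - x) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def optimal_proposal_weight(w_prev, x_prev, u, z, k=500):
    """Approximate Eq. (13): w_t = w_{t-1} * (1/k) * sum_j p(z | x'_j),
    with x'_j drawn from the motion model p(x' | x_{t-1}, u)."""
    samples = [motion_sample(x_prev, u) for _ in range(k)]
    integral = sum(obs_likelihood(z, xj) for xj in samples) / k
    return w_prev * integral
```

A particle whose predicted motion agrees with the observation receives a large weight, while a particle whose prediction conflicts with it is suppressed, independently of where exactly the new pose was sampled.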
When modeling a mobile robot equipped with an accurate sensor such as a laser range finder, it is convenient to use
such an improved proposal since the accuracy of the laser
range finder leads to extremely peaked likelihood functions.
In the context of landmark-based SLAM, Montemerlo et
al. [26] presented a Rao-Blackwellized particle filter that uses
a Gaussian approximation of the improved proposal. This
Gaussian is computed for each particle using a Kalman filter
that estimates the pose of the robot. This approach can be used
when the map is represented by a set of features and if the
error affecting the feature detection is assumed to be Gaussian.
In this work, we transfer the idea of computing an improved
proposal to the situation in which dense grid maps are used
instead of landmark-based representations.
B. Efficient Computation of the Improved Proposal
When modeling the environment with grid maps, a closed
form approximation of an informed proposal is not directly
available due to the unpredictable shape of the observation
likelihood function.
In theory, an approximated form of the informed proposal
can be obtained using the adapted particle filter [35]. In this
framework, the proposal for each particle is constructed by
computing a sampled estimate of the optimal proposal given
in Eq. (9). In the SLAM context, one would first have to
sample a set of potential poses $x_j$ of the robot from the motion model $p(x_t \mid x_{t-1}^{(i)}, u_{t-1})$. In a second step, these samples need to be weighted by the observation likelihood to obtain an approximation of the optimal proposal. However, if the observation likelihood is peaked, the number of pose samples $x_j$ that has to be drawn from the motion model is high, since a dense sampling is needed to sufficiently capture the typically small areas of high likelihood. This results in a problem similar to using the motion model as the proposal: a high number of samples is needed to sufficiently cover the meaningful region of the distribution.
One of our observations is that in the majority of cases the target distribution has only a limited number of maxima, and mostly only a single one. This allows us to sample positions $x_j$ covering only the area surrounding these maxima. Ignoring the less meaningful regions of the distribution saves a significant amount of computational resources since it requires fewer samples. In the previous version of this work [14], we approximated $p(x_t \mid x_{t-1}^{(i)}, u_{t-1})$ by a constant $k$ within the interval $L^{(i)}$ (see also Figure 1) given by
$$L^{(i)} = \left\{ x \,\middle|\, p(z_t \mid m_{t-1}^{(i)}, x) > \epsilon \right\}. \quad (14)$$
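Restricting evaluation to $L^{(i)}$ can be sketched as follows; this is a 1-D illustration with a hypothetical Gaussian observation likelihood standing in for the grid-map model, and `x_mode` playing the role of the scan-matching maximum:

```python
import math

def obs_likelihood(z, x, sigma=0.05):
    # Hypothetical 1-D observation model peaked at the true pose.
    return math.exp(-0.5 * ((z - x) / sigma) ** 2)

def sample_interval(z, x_mode, eps=0.01, half_width=0.3, step=0.005):
    """Evaluate p(z | m, x) on a grid of poses around the scan-match mode
    x_mode and keep only those inside L^(i), i.e. with likelihood > eps.
    The motion model is treated as a constant within this interval."""
    n = int(half_width / step)
    candidates = [x_mode + i * step for i in range(-n, n + 1)]
    scored = [(x, obs_likelihood(z, x)) for x in candidates]
    return [(x, l) for (x, l) in scored if l > eps]
```

Only the small neighborhood of the likelihood maximum survives the threshold, so the subsequent weighting touches far fewer pose hypotheses than a dense sampling of the full motion-model support would.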