198 Doucet, Godsill and Andrieu
work includes Müller (1992), West (1993), Gordon, Salmond
and Smith (1993), Kong, Liu and Wong (1994) and Liu and
Chen (1998).
The main objective of this article is to include in a unified
framework many old and more recent algorithms proposed
independently in a number of applied science areas. Both Liu
and Chen (1998) and Doucet (1997, 1998) underline the central
rôle of sequential importance sampling in Bayesian filtering.
However, contrary to Liu and Chen (1998), which emphasizes
hybrid schemes combining elements of importance sampling
with Markov chain Monte Carlo (MCMC), we focus here on
computationally cheaper alternatives. We also describe how
existing methods can be improved via Rao-Blackwellisation
for a useful class of dynamic models. Finally, we show how to
extend these methods to compute the prediction and
fixed-interval smoothing distributions as well as the likelihood.
The paper is organised as follows. In Section 2, we briefly
review the Bayesian filtering problem and propose classical
Bayesian importance sampling for its solution. We then present
a sequential version of this method which allows us to obtain a
general recursive MC filter: the sequential importance sampling
(SIS) filter. Under a criterion of minimum conditional variance of
the importance weights, we obtain the optimal importance func-
tion for this method. Unfortunately, for most models of applied
interest the optimal importance function leads to non-analytic
importance weights, and hence we propose several suboptimal
distributions and show how to obtain as special cases many of
the algorithms presented in the literature. Firstly we consider
local linearisation methods of either the state space model or
the optimal importance function, giving some important exam-
ples. These linearisation methods seem to be a very promising
way to proceed in problems of this type. Secondly we consider
some simple importance functions which lead to algorithms cur-
rently known in the literature. In Section 3, a resampling scheme
is used to limit the degeneracy of the algorithm in practice. In
Section 4, we apply the Rao-Blackwellisation method to SIS
and obtain efficient hybrid analytical/MC filters. In Section 5,
we show how to use the MC filter to compute the prediction and
fixed-interval smoothing distributions as well as the likelihood.
Finally, simulations are presented in Section 6.
II. Filtering via Sequential Importance Sampling
A. Preliminaries: Filtering for the state space model
The state sequence {x_k; k ∈ ℕ}, x_k ∈ ℝ^(n_x), is assumed to be an
unobserved (hidden) Markov process with initial distribution
p(x_0) (which we subsequently denote as p(x_0 | x_{-1}) for no-
tational convenience) and transition distribution p(x_k | x_{k-1}),
where n_x is the dimension of the state vector. The observa-
tions {y_k; k ∈ ℕ}, y_k ∈ ℝ^(n_y), are conditionally independent given
the process {x_k; k ∈ ℕ} with distribution p(y_k | x_k), where n_y is the
dimension of the observation vector. To sum up, the model is a
hidden Markov (or state space) model (HMM) described by

p(x_k | x_{k-1})   for k ≥ 0    (1)
p(y_k | x_k)       for k ≥ 0    (2)
We denote by x_{0:n} ≜ {x_0, ..., x_n} and y_{0:n} ≜ {y_0, ..., y_n}, re-
spectively, the state sequence and the observations up to time n.
Our aim is to estimate recursively in time the distribution
p(x_{0:n} | y_{0:n}) and its associated features, including p(x_n | y_{0:n})
and expectations of the form

I(f_n) = ∫ f_n(x_{0:n}) p(x_{0:n} | y_{0:n}) dx_{0:n}    (3)

for any p(x_{0:n} | y_{0:n})-integrable f_n : ℝ^((n+1) × n_x) → ℝ. A recur-
sive formula for p(x_{0:n} | y_{0:n}) is given by:
p(x_{0:n+1} | y_{0:n+1}) = p(x_{0:n} | y_{0:n}) × [p(y_{n+1} | x_{n+1}) p(x_{n+1} | x_n)] / p(y_{n+1} | y_{0:n})    (4)
The denominator of this expression cannot typically be com-
puted analytically, thus rendering an analytic approach infeasi-
ble except in the special cases mentioned above. It will later be
assumed that samples can easily be drawn from p(x_k | x_{k-1}) and
that we can evaluate p(x_k | x_{k-1}) and p(y_k | x_k) pointwise.
B. Bayesian Sequential Importance Sampling (SIS)
Since it is generally impossible to sample from the state pos-
terior p(x_{0:n} | y_{0:n}) directly, we adopt an importance sampling
(IS) approach. Suppose that samples {x_{0:n}^(i); i = 1, ..., N} are
drawn independently from a normalised importance function
π(x_{0:n} | y_{0:n}) whose support includes that of the state
posterior. Then an estimate Î_N(f_n) of the posterior expectation
I(f_n) is obtained using Bayesian IS (Geweke 1989):
Î_N(f_n) = Σ_{i=1}^{N} f_n(x_{0:n}^(i)) w̃_n^(i),    w̃_n^(i) = w_n^*(i) / Σ_{j=1}^{N} w_n^*(j)    (5)
where w_n^*(i) = p(y_{0:n} | x_{0:n}^(i)) p(x_{0:n}^(i)) / π(x_{0:n}^(i) | y_{0:n}) is the unnor-
malised importance weight. Under weak assumptions, Î_N(f_n)
converges to I(f_n); see for example Geweke (1989). However,
this method is not recursive. We now show how to obtain a
sequential MC filter using Bayesian IS.
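The self-normalised estimator (5) can be sketched on a toy one-step problem (a hypothetical example of ours, not from the paper): prior p(x) = N(0, 1), likelihood p(y | x) = N(y; x, 1) with observed y = 1, so the true posterior is N(1/2, 1/2) and E[x | y] = 0.5. We take a broad Gaussian N(0, 4) as the importance function:

```python
import numpy as np

rng = np.random.default_rng(1)
y = 1.0

def log_unnorm_posterior(x):
    # log p(x) + log p(y | x), additive constants dropped
    # (they cancel after self-normalisation).
    return -0.5 * x**2 - 0.5 * (y - x) ** 2

# Importance function pi(x | y) = N(0, 4), covering the posterior support.
N = 100_000
x = rng.normal(0.0, 2.0, size=N)
log_pi = -0.5 * x**2 / 4.0 - np.log(2.0)  # log N(x; 0, 4), same constants dropped

log_w = log_unnorm_posterior(x) - log_pi  # unnormalised log-weights log w*
w = np.exp(log_w - log_w.max())           # subtract max for numerical stability
w_tilde = w / w.sum()                     # normalised weights, as in (5)

posterior_mean = np.sum(w_tilde * x)      # I_N(f) with f(x) = x; approx. 0.5
```

Note that all x-independent constants cancel in the normalisation step, which is why Bayesian IS only requires the posterior up to proportionality. The estimator here is static: the whole weight must be recomputed for each n, which motivates the sequential construction below.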
Suppose one chooses an importance function of the form

π(x_{0:n} | y_{0:n}) = π(x_0 | y_0) ∏_{k=1}^{n} π(x_k | x_{0:k-1}, y_{0:k})    (6)
Such an importance function allows recursive evaluation in time
of the importance weights as successive observations y_k become