CHAPTER 8

APPLICATIONS IN NONLINEAR FILTERING
8.1 NONLINEAR FILTERING
So far in this text we have seen that the optimal linear filtering theorems
and algorithms are clean and powerful. The fact that the filter equation and
the performance calculations together with the filter gain calculations are
decoupled is particularly advantageous, since the performance calculations
and filter gain calculations can be performed offline; and as far as the on-line
filter calculations are concerned, the equations involved are no more com-
plicated than the signal model equations. The filtered estimates and the per-
formance measures are simply the means and covariances of the a posteriori
probability density functions, which are gaussian. The vector filtered estimates
together with the matrix performance covariances are clearly sufficient statis-
tics* of these a posteriori state probability densities.
By comparison, optimal
nonlinear filtering is far less precise, and we
must work hard to achieve even a little. The most we attempt in this book is
to see what happens when we adapt some of the linear algorithms to non-
linear environments.
*Sufficient statistics are collections of quantities which uniquely determine a prob-
ability density in its entirety.
So as not to depart very far from the linear gaussian signal model, in the
first instance we will work with the model

x_{k+1} = f_k(x_k) + g_k(x_k)w_k        (1.1)

z_k = h_k(x_k) + v_k                    (1.2)
where the quantities F_k x_k, H_k' x_k, and G_k of earlier linear models are
replaced by f_k(x_k), h_k(x_k), and g_k(x_k), with f_k(·), h_k(·) nonlinear (in
general) and g_k(·) nonconstant (in general). The subscript on f_k(·), etc., is
included to denote a possible time dependence. Otherwise the above model is
identical to the linear gaussian models of earlier chapters. In particular,
{v_k} and {w_k} are zero mean, white gaussian processes, and x_0 is a gaussian
random variable. We shall assume {v_k}, {w_k}, and x_0 are mutually
independent, that E[v_k v_k'] = R_k, E[w_k w_k'] = Q_k,
and x_0 is N(x̄_0, P_0). Throughout the chapter we denote

F_k = ∂f_k(x)/∂x |_{x = x̂_{k|k}}        H_k' = ∂h_k(x)/∂x |_{x = x̂_{k|k-1}}

[This means that the ij component of F_k is the partial derivative with respect
to x_j of the ith component of f_k(·), and similarly for H_k', each derivative
being evaluated at the point indicated.]
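Since F_k and H_k' are defined entrywise by partial derivatives, they can be checked or approximated numerically when closed forms are awkward. The following sketch is illustrative only; the function `fk` and the evaluation point are assumptions, not from the text:

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    """Approximate the Jacobian of f at x by central differences.

    The ij entry approximates the partial derivative of the i-th
    component of f with respect to x_j, matching the definition of
    F_k and H_k' above.
    """
    x = np.asarray(x, dtype=float)
    f0 = np.asarray(f(x))
    J = np.zeros((f0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (np.asarray(f(x + dx)) - np.asarray(f(x - dx))) / (2 * eps)
    return J

# Illustrative f_k(x) = [x0 + sin(x1), x0*x1], linearized at x = [1, 0];
# the analytic Jacobian there is [[1, cos(0)], [x1, x0]] = [[1, 1], [0, 1]].
fk = lambda x: np.array([x[0] + np.sin(x[1]), x[0] * x[1]])
Fk = jacobian(fk, np.array([1.0, 0.0]))
```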
In the next section, approximations are introduced to derive a clearly
suboptimal filter for the signal model above, known as an extended Kalman
filter.
The filter equations are applied to achieve quasi-optimal demodulation
of FM (frequency modulation) signals in low noise. A special class of ex-
tended Kalman filters is defined in Sec. 8.3 involving cone-bounded non-
linearities, and upper bounds on performance are derived. In Sec. 8.4, a
more sophisticated “gaussian sum”
nonlinear estimation theory is derived,
where, as the name suggests, the a posteriori densities are approximated by
a sum of gaussian densities. The nonlinear filter algorithms involve a bank of
extended Kalman filters, where each extended Kalman filter keeps track of
one term in the gaussian sum. The gaussian sum filter equations are applied
to achieve quasi-optimal demodulation of FM signals in high noise. Other
nonlinear filtering techniques outside the scope of this text use different means
for keeping track of the a posteriori probability distributions than the gaus-
sian sum approach of Sec. 8.4. For example, there is the point-mass approach
of Bucy and Senne [1], the spline function approach of de Figueiredo and
Jan [2], and the Fourier series expansion approach used successfully in [3],
to mention just a few of the many references in these fields.
Problem 8.1. (Formal Approach to Nonlinear Filtering). Suppose that

x_{k+1} = f(x_k) + g(x_k)w_k        z_k = h(x_k) + v_k

with {w_k}, {v_k} independent gaussian, zero mean sequences. Show that
p(x_{k+1} | x_k, Z_k) is known and, together with p(x_k | Z_k), determines
p(x_{k+1} | Z_k) by integration. (This is the time-update step.) Show that
p(z_{k+1} | x_{k+1}, Z_k) is known and, together with p(x_{k+1} | Z_k),
determines p(x_{k+1} | Z_{k+1}) by integration. (This is the measurement-update
step.) A technical problem which can arise is that if g(x_k) is singular,
p(x_{k+1} | x_k, Z_k) is not well defined; in this case, one needs to work with
characteristic functions rather than density functions.
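The two steps of Problem 8.1 can be sketched numerically for a scalar model by carrying the densities on a grid. The model, noise variances, and the observed measurement below are illustrative assumptions, not from the text:

```python
import numpy as np

# Grid-based sketch of the time- and measurement-update steps for an
# illustrative scalar model:
#   x_{k+1} = 0.8 x_k + w_k,    z_k = x_k**2 / 2 + v_k
x = np.linspace(-5, 5, 401)          # state grid
dx = x[1] - x[0]
Q, R = 0.5, 0.5                      # assumed noise variances

def gauss(u, var):
    return np.exp(-0.5 * u**2 / var) / np.sqrt(2 * np.pi * var)

# start from a prior p(x_k | Z_k) = N(0, 1)
p = gauss(x, 1.0)

# time update: p(x_{k+1} | Z_k) = integral of p(x_{k+1} | x_k) p(x_k | Z_k) dx_k
trans = gauss(x[:, None] - 0.8 * x[None, :], Q)   # p(x_{k+1} | x_k)
p_pred = trans @ p * dx

# measurement update with an assumed observation z_{k+1} = 1.0
lik = gauss(1.0 - x**2 / 2, R)                    # p(z_{k+1} | x_{k+1})
p_post = lik * p_pred
p_post /= p_post.sum() * dx                       # normalize to a density
```

Both `p_pred` and `p_post` integrate to approximately one on the grid, which is a quick sanity check on the two updates.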
8.2 THE EXTENDED KALMAN FILTER
We retain the notation introduced in the last section. The nonlinear
functions f_k(x_k), g_k(x_k), and h_k(x_k), if sufficiently smooth, can be
expanded in Taylor series about the conditional means x̂_{k|k} and x̂_{k|k-1} as

f_k(x_k) = f_k(x̂_{k|k}) + F_k(x_k − x̂_{k|k}) + ···

g_k(x_k) = g_k(x̂_{k|k}) + ··· = G_k + ···

h_k(x_k) = h_k(x̂_{k|k-1}) + H_k'(x_k − x̂_{k|k-1}) + ···

Neglecting higher order terms and assuming knowledge of x̂_{k|k} and x̂_{k|k-1}
enables us to approximate the signal model (1.1) and (1.2) as

x_{k+1} = F_k x_k + G_k w_k + u_k        (2.1)

z_k = H_k' x_k + v_k + y_k               (2.2)

where u_k and y_k are calculated on line from the equations

u_k = f_k(x̂_{k|k}) − F_k x̂_{k|k}
y_k = h_k(x̂_{k|k-1}) − H_k' x̂_{k|k-1}        (2.3)
The Kalman filter for this approximate signal model is a trivial variation of
that derived in earlier chapters. Its equations are as follows:

EXTENDED KALMAN FILTER EQUATIONS:

x̂_{k|k} = x̂_{k|k-1} + L_k[z_k − h_k(x̂_{k|k-1})]        (2.4)

x̂_{k+1|k} = f_k(x̂_{k|k})                               (2.5)

L_k = Σ_{k|k-1} H_k Ω_k^{-1},    Ω_k = H_k' Σ_{k|k-1} H_k + R_k        (2.6)

Σ_{k|k} = Σ_{k|k-1} − Σ_{k|k-1} H_k [H_k' Σ_{k|k-1} H_k + R_k]^{-1} H_k' Σ_{k|k-1}        (2.7)

Σ_{k+1|k} = F_k Σ_{k|k} F_k' + G_k Q_k G_k'              (2.8)

Initialization is provided by Σ_{0|-1} = P_0, x̂_{0|-1} = x̄_0.
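One cycle of (2.4) through (2.8) can be sketched for a scalar model. The choices f_k(x) = x + sin x, h_k(x) = x², g_k(x) = 1, and the noise levels are illustrative assumptions, so that F_k = 1 + cos(x̂_{k|k}) and H_k = 2 x̂_{k|k-1}:

```python
import numpy as np

Q, R = 0.1, 0.2  # assumed process and measurement noise variances

def ekf_step(x_pred, S_pred, z):
    """Measurement update (2.4), (2.6), (2.7), then time update (2.5), (2.8)."""
    H = 2.0 * x_pred                                # Jacobian of h at x_hat_{k|k-1}
    Om = H * S_pred * H + R                         # innovation covariance, (2.6)
    L = S_pred * H / Om                             # filter gain, (2.6)
    x_filt = x_pred + L * (z - x_pred**2)           # (2.4)
    S_filt = S_pred - S_pred * H / Om * H * S_pred  # (2.7)
    F = 1.0 + np.cos(x_filt)                        # Jacobian of f at x_hat_{k|k}
    x_next = x_filt + np.sin(x_filt)                # (2.5)
    S_next = F * S_filt * F + Q                     # (2.8)
    return x_filt, S_filt, x_next, S_next

# one cycle from an assumed predicted state and covariance
x_filt, S_filt, x_next, S_next = ekf_step(x_pred=1.0, S_pred=0.5, z=1.2)
```

Note how the gain and covariance recursions use the Jacobians evaluated at the current estimates, which is exactly the coupling discussed next.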
The significance of x̂_{k+1|k} and Σ_{k+1|k}. The above extended Kalman filter
is nothing other than a standard and exact Kalman filter for the signal model
(2.1)-(2.3). When applied to the original signal model (1.1) and (1.2), it is
no longer linear or optimal, and the notations x̂_{k+1|k} and Σ_{k+1|k} are now
loose and denote approximate conditional means and covariances, respectively.
Coupling of conditional mean, filter gain, and filter performance equations.
The equations for calculating the filter gain L_k are coupled to the filter
equations, since H_k and F_k are functions of x̂_{k|k-1} and x̂_{k|k}. The same
is true for the approximate performance measure Σ_{k|k-1}. We conclude that,
in general, the calculation of L_k and Σ_{k|k-1} cannot be carried out off
line. Of course, in any particular application it may be well worthwhile to
explore approximations which would allow decoupling of the filter and filter
gain equations. In the next section, a class of filters is considered of the
form of (2.4) and (2.5), where the filter gain L_k is chosen as a result of
some off-line calculations. For such filters, there is certainly no coupling
to a covariance equation.
Quality of approximation. The approximation involved in passing from (1.1)
and (1.2) to (2.1) and (2.2) will be better the smaller are ||x_k − x̂_{k|k}||²
and ||x_k − x̂_{k|k-1}||². Therefore, we would expect that in high
signal-to-noise ratio situations, there would be fewer difficulties in using an
extended Kalman filter. When a filter is actually working, so that the
quantities trace(Σ_{k|k}) and trace(Σ_{k|k-1}) become available, one can use
these as guides to ||x_k − x̂_{k|k}||² and ||x_k − x̂_{k|k-1}||², and this in
turn allows review of the amount of approximation involved. Another
possibility for determining whether in a given situation an extended Kalman
filter is or is not working well is to check how white the pseudo-innovations
are, for the whiter these are, the more nearly optimal is the filter. Again,
off-line Monte Carlo simulations can be useful, even if tedious and perilous,
or the application of performance bounds such as described in the next section
may be useful in certain cases when there exist cone-bounded conditions on the
nonlinearities.
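The whiteness check on the pseudo-innovations can be sketched as a sample autocorrelation test. The function name, the 2/sqrt(N) rule of thumb, and the test sequence below are illustrative assumptions, not from the text:

```python
import numpy as np

def innovation_whiteness(innov, max_lag=10):
    """Return normalized sample autocorrelations r_1 .. r_max_lag.

    For a white (near-optimal) pseudo-innovation sequence, these
    should all be small relative to the lag-zero value of one.
    """
    e = np.asarray(innov, dtype=float)
    e = e - e.mean()
    r0 = np.dot(e, e) / len(e)                     # lag-zero autocovariance
    return np.array([np.dot(e[:-m], e[m:]) / len(e) / r0
                     for m in range((1), max_lag + 1)])

# sanity check on a synthetic white sequence: for N samples, a common
# rule of thumb is that |r_m| should stay within about 2/sqrt(N)
rng = np.random.default_rng(0)
white = innovation_whiteness(rng.standard_normal(5000))
```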
Selection of a suitable coordinate basis. We have already seen that for
a certain nonlinear filtering problem (the two-dimensional tracking problem
discussed in Chap. 3, Sec. 4) one coordinate basis can be more convenient
than others. This is generally the case in nonlinear filtering, and in [4], an
even more significant observation is made. For some coordinate basis
selections, the extended Kalman filter may diverge and be effectively useless,
whereas for other selections it may perform well. This phenomenon is studied
further in [5], where it is seen that V_k = x̃'_{k|k-1} Σ^{-1}_{k|k-1} x̃_{k|k-1}
is a Lyapunov function ensuring stability of the autonomous filter for certain
coordinate basis selections, but not for others.
Variations of the extended Kalman filter. There are a number of variations
on the above extended Kalman filter algorithm, depending on the derivation
technique employed and the assumptions involved in the derivation.
For example, filters can be derived by including more terms in the Taylor
series expansions of f_k(x_k) and h_k(x_k); the filters that result when two
terms are involved are called second order extended Kalman filters. Again,
there are algorithms (see problems) in which the reference trajectory is
improved by iteration techniques, the resulting filters being termed iterated
extended Kalman filters. Any one of these algorithms may be superior to the
standard extended Kalman filter in a particular filtering application, but
there are no real guidelines here, and each case has to be studied separately
using Monte Carlo simulations. Other texts [6, 7] should be consulted for
derivations and examples. For the case when cone-bounded nonlinearities are
involved in an extended Kalman filter, it may well be, as shown in [5], that
the extended Kalman filter performs better if the nonlinearities in the filter
are modified by tightening the cone bounds. This modification can be
conveniently effected by introducing dither signals prior to the
nonlinearities, and compensating for the resulting bias using a filtered
version of the error caused by the cone bound adjustment.
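The iterated measurement update mentioned above can be sketched for a scalar model: relinearize h about the current iterate rather than about x̂_{k|k-1}, and repeat. The model h(x) = x² and the noise value are illustrative assumptions; with a single iteration the recursion reduces to the standard update (2.4):

```python
import numpy as np

R = 0.2  # assumed measurement noise variance

def iterated_update(x_pred, S_pred, z, h, dh, n_iter=5):
    """Iterated measurement update (scalar sketch).

    Each pass relinearizes h about the current iterate xi; n_iter=1
    gives back the standard extended Kalman update (2.4).
    """
    xi = x_pred
    for _ in range(n_iter):
        H = dh(xi)                                  # Jacobian at current iterate
        L = S_pred * H / (H * S_pred * H + R)       # gain with relinearized H
        xi = x_pred + L * (z - h(xi) - H * (x_pred - xi))
    S = S_pred - L * H * S_pred                     # covariance with final L, H
    return xi, S

x_it, S_it = iterated_update(1.0, 0.5, 1.2, h=lambda x: x**2, dh=lambda x: 2 * x)
```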
Gaussian sum approach. There are nonlinear algorithms which involve
collections of extended Kalman filters, and thereby become both more powerful
and more complex than the algorithm of this section. In these algorithms,
discussed in a later section, the a posteriori density function p(x_k | Z_k)
is approximated by a sum of gaussian density functions, and assigned to each
gaussian density function is an extended Kalman filter. In situations where
the estimation error is small, the a posteriori density can be approximated
adequately by one gaussian density, and in this case the gaussian sum filter
reduces to the extended Kalman filter of this section.
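The representation underlying the gaussian sum approach can be sketched as evaluating a weighted mixture of gaussian densities; each term's mean and covariance would be tracked by its own extended Kalman filter. The weights, means, and variances below are illustrative assumptions:

```python
import numpy as np

def gaussian_sum_pdf(x, weights, means, variances):
    """Evaluate a scalar gaussian mixture density on the points x."""
    x = np.asarray(x, dtype=float)[:, None]
    comps = np.exp(-0.5 * (x - means)**2 / variances) \
            / np.sqrt(2 * np.pi * variances)        # each gaussian term
    return comps @ weights                          # weighted sum

# illustrative two-term approximation of an a posteriori density
w = np.array([0.6, 0.4])        # weights summing to one
mu = np.array([-1.0, 2.0])      # term means
var = np.array([0.5, 1.0])      # term variances
grid = np.linspace(-6, 8, 1401)
p = gaussian_sum_pdf(grid, w, mu, var)
```

Because the weights sum to one, the mixture integrates to one; when the error is small, a single term suffices and the sketch collapses to one gaussian, mirroring the reduction to the extended Kalman filter noted above.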
The following theorem gives some further insight into the quality of the
approximations involved in the extended Kalman filter algorithm and is of
key importance in demonstrating the power of the gaussian sum algorithms
of later sections. In particular, the theorem shows that under certain
conditions, the notion that the errors ||x_k − x̂_{k|k}||² and
||x_k − x̂_{k|k-1}||² have to be small to ensure that the extended Kalman
filter is near optimal can be relaxed to requiring only that Σ_{k|k} and
Σ_{k|k-1} (or their traces) be small. With γ[x − x̂, Σ] denoting the gaussian
density with mean x̂ and covariance Σ, the following result can be established:

THEOREM 2.1. For the signal model (1.1) and (1.2) and filter of (2.4)
through (2.8), if

p(x_k | Z_{k-1}) = γ[x_k − x̂_{k|k-1}, Σ_{k|k-1}]        (2.9)

then for fixed h_k(·), x̂_{k|k-1}, and R_k

p(x_k | Z_k) → γ[x_k − x̂_{k|k}, Σ_{k|k}]

uniformly in x_k and z_k as Σ_{k|k-1} → 0. Again, if

p(x_k | Z_k) = γ[x_k − x̂_{k|k}, Σ_{k|k}]        (2.10)