Robust estimation (also called resistant estimation in surveying) was proposed to remedy the vulnerability of least squares to gross errors; its aim is to construct estimators that strongly resist the influence of outliers. Since G. E. P. Box first introduced the concept of robustness in 1953, Tukey, Huber, Hampel, Rousseeuw, and others have carried out highly productive research on robust parameter estimation, and after decades of work by mathematical statisticians, robust estimation has developed into a branch discipline attracting attention from many fields.
Journal of Process Control 23 (2013) 1555–1561

Contents lists available at ScienceDirect

Journal of Process Control

journal homepage: www.elsevier.com/locate/jprocont
Short communication

Robust derivative-free Kalman filter based on Huber's M-estimation methodology
Lubin Chang a,∗, Baiqing Hu a, Guobin Chang a,b, An Li a

a Department of Navigation Engineering, Naval University of Engineering, Wuhan, People's Republic of China
b Tianjin Institute of Hydrographic Surveying and Charting, Tianjin 300000, People's Republic of China
ARTICLE INFO

Article history:
Received 28 November 2012
Received in revised form 8 April 2013
Accepted 23 May 2013
Available online 28 June 2013

Keywords:
Huber's M-estimation
Nonlinear filtering
Robust regression
Unscented Kalman filter
ABSTRACT

In this study, a discrete-time robust nonlinear filtering algorithm is proposed to deal with contaminated Gaussian noise in the measurements; it is based on a robust modification of the derivative-free Kalman filter. By interpreting the Kalman-type filter (KTF) as a recursive Bayesian approximation, the innovation is reformulated using Huber's M-estimation methodology. The proposed algorithm achieves not only the robustness of M-estimation but also the accuracy and flexibility of the derivative-free Kalman filter for nonlinear problems. The reliability and accuracy of the proposed algorithm are tested on the Univariate Nonstationary Growth Model.

© 2013 Elsevier Ltd. All rights reserved.
1. Introduction
The Kalman filter (KF) [1] is concerned with estimating the dynamic state from noisy measurements in the class of estimation problems where the dynamic and measurement processes can be approximated by linear Gaussian state-space models. Unfortunately, the performance of the KF can be severely degraded when the distribution of the true noise deviates from the assumed Gaussian model, often by having heavier tails and generating high-intensity noise realizations, called outliers [2]. Contaminated Gaussian noise and outliers arise naturally in many areas of engineering. Heavy-tailed densities occur in applications involving glint noise, air turbulence, and asset returns [3]. Outliers can appear due to clutter, glint noise, and model mismatch caused by, e.g., linearization [4]. Therefore, there is considerable motivation for considering robust filters that perform fairly well in non-Gaussian environments.
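Such heavy-tailed measurement noise is commonly modeled as an ε-contaminated Gaussian mixture. As an illustration (not taken from the paper; the function name and parameters are our own), a minimal sampler for this model:

```python
import numpy as np

def contaminated_gaussian(n, sigma=1.0, epsilon=0.1, kappa=10.0, rng=None):
    """Draw n samples from the epsilon-contaminated model
    (1 - epsilon) * N(0, sigma^2) + epsilon * N(0, (kappa * sigma)^2)."""
    rng = np.random.default_rng() if rng is None else rng
    is_outlier = rng.random(n) < epsilon        # flag contaminated draws
    scale = np.where(is_outlier, kappa * sigma, sigma)
    return rng.normal(0.0, scale)

rng = np.random.default_rng(0)
noise = contaminated_gaussian(100_000, epsilon=0.1, kappa=10.0, rng=rng)
# The mixture has much heavier tails than the nominal N(0, 1):
# its variance is (1 - 0.1) * 1 + 0.1 * 100 = 10.9, not 1.
```

Even a 10% contamination fraction inflates the noise variance by an order of magnitude, which is why least-squares-type filters degrade so quickly.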
To handle non-Gaussian noise, many methods have been proposed. By minimizing the worst-case estimation error averaged over all samples, the H∞-based KF can be used to treat modeling errors and uncertainties as unknown-but-bounded noise [5,6]. However, it breaks down in the presence of randomly occurring outliers, since the design matrices of the H∞ filter, just like the covariance matrix in the classical KF, cannot accommodate the outliers induced by the thick tails of a noise distribution [13].
∗ Corresponding author.
E-mail address: changlubin@163.com (L. Chang).
The Gaussian sum methodology can also yield a robust estimate, but at an extreme computational cost: the number of terms kept in the Gaussian sum grows exponentially with time [7]. A more general and versatile method is particle filtering (PF), which approximates the minimum-variance estimate using stochastic simulation. However, the PF is also very computationally intensive, since it requires Monte Carlo integration [8,9]. In addition, it typically requires delicate tuning of the proposal densities to improve its convergence rate.
The extension of Huber's M-estimation methodology to the problem of robust Kalman filtering has been widely researched and emphasized over the other approaches, since it is motivated by maximum likelihood estimation, which makes it both natural and rather easy to apply [10–12]. Huber's M-estimation methodology is essentially based on replacing the quadratic cost function in the KF with the robust convex Huber ρ-function, which exhibits l2-norm properties for small residuals and l1-norm properties for large residuals. Possessing a moderate breakdown point (BP), high efficiency, and manageable computational complexity, Huber's M-estimation methodology has remained a subject of considerable research interest in recent years [13–17].
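The ρ-function just described can be sketched as follows (an illustration, not the paper's code; δ = 1.345 is a common tuning constant giving roughly 95% efficiency at the Gaussian, and the paper's own constant may differ):

```python
import numpy as np

def huber_rho(r, delta=1.345):
    """Huber's rho-function: quadratic (l2) for |r| <= delta,
    linear (l1) for |r| > delta."""
    r = np.asarray(r, dtype=float)
    quad = 0.5 * r**2                       # l2 branch for small residuals
    lin = delta * np.abs(r) - 0.5 * delta**2  # l1 branch, continuous at |r| = delta
    return np.where(np.abs(r) <= delta, quad, lin)

# Small residuals are penalized quadratically, large ones only linearly,
# so a single gross outlier cannot dominate the cost.
print(huber_rho([0.5, 1.0, 10.0], delta=1.345))
```

The two branches and their first derivatives agree at |r| = δ, which is what makes the function convex and the resulting estimator well behaved.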
However, most of the previous work is limited to the linear case. Although these methods can be directly extended to nonlinear problems by making use of the extended Kalman filter (EKF), the crude approximation and the cumbersome derivation and evaluation of Jacobian matrices in the EKF may degrade their performance.
In recent years, there has been renewed interest in the topic of nonlinear filtering with the discovery of the "sigma-point" Kalman filters (SPKF) [18]. The various sigma-point algorithms differ from each other in their choice of the sigma points, for instance the unscented Kalman filter (UKF) [19–22], the divided difference filter (DDF) [23], and so on. By eliminating the cumbersome derivation and evaluation of Jacobian matrices, these filters are much easier to implement and perform better than the EKF. Combining the M-estimation methodology with these derivative-free filters has also been researched [16,17]. However, these methods are all based on the concept of statistical linearization [24] and have not made full use of the advantages of the derivative-free filters.

0959-1524/$ – see front matter © 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.jprocont.2013.05.004
In this paper, we focus on deriving the robust derivative-free KF using Huber's M-estimation methodology, without linearization or statistical linearization. Based on the recursive Bayesian interpretation of the KF, we point out that the M-estimation methodology used to modify the quadratic cost function in the KF affects only the innovation. In this respect, any nonlinear Kalman-type filter (KTF) can be robustified by modifying the innovation using the M-estimation methodology. The UKF, taken as a representative of the derivative-free filters, is then robustified using the proposed methodology. The proposed robust derivative-free algorithm: (i) can handle contaminated Gaussian noise and outliers in the measurements; (ii) exhibits the accuracy and flexibility of the UKF for nonlinear problems; (iii) performs well under nominal conditions (i.e., with no contaminated Gaussian noise or outliers present); (iv) outperforms the conventional robust KTFs based on linearization or statistical linearization.
The rest of the paper is organized as follows. Section 2 presents the Bayesian recursive interpretation of the KF. The method of embedding Huber's M-estimation methodology into the KF is discussed in Section 3. Combining Huber's M-estimation methodology with the UKF is the subject of Section 4. Simulations are presented in Section 5, and Section 6 gives some concluding remarks.
2. Bayesian regression
The KF is best known as the solution to the optimal filtering problem when the following linear state-space model subject to Gaussian noise is considered:

$$x_k = F_{k-1} x_{k-1} + q_{k-1}, \qquad y_k = H_k x_k + r_k \tag{1}$$
where $x_k \in \mathbb{R}^n$ and $y_k \in \mathbb{R}^m$ denote the state and measurement vectors at time step $k$; $q_k$ and $r_k$ are mutually independent, zero-mean nominal noise vectors, each independent across time, with respective covariance matrices $Q_k$ and $R_k$; $F_{k-1}$ is the dynamic model matrix and $H_k$ is the measurement model matrix.
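For reference, one predict/update cycle of the classical KF for model (1) can be sketched as follows (a minimal illustration with our own variable names, using the simple, non-Joseph covariance update):

```python
import numpy as np

def kf_step(x, P, y, F, H, Q, R):
    """One predict/update cycle of the standard KF for model (1)."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    s = y - H @ x_pred                      # innovation s_k
    S = H @ P_pred @ H.T + R                # innovation covariance S_k
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ s
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Scalar example: a near-constant state observed directly.
F = np.array([[1.0]]); H = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[1.0]])
x, P = np.array([0.0]), np.array([[1.0]])
x, P = kf_step(x, P, np.array([1.0]), F, H, Q, R)
```

The derivation below rewrites exactly this update in terms of the score of the predictive density, which is the opening needed for the Huber modification.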
Let $\hat{x}_{k|l}$ ($l = k-1, k$) denote the linear least-squares estimate of $x_k$ given $\{y_j,\ j \le l\}$, and let $P_{k|l}$ denote the corresponding error covariance matrix. Now consider the posterior mean estimate of the KF:

$$\hat{x}_{k|k} = \int x_k\, p(x_k \mid y_{1:k})\, dx_k \tag{2}$$
where $p(x_k \mid y_{1:k})$ denotes the conditional density given measurements up to time $k$. Making use of Bayes' rule and the fact that the observation sequence is conditionally independent given the current state, the posterior density can be factored as [18]

$$p(x_k \mid y_{1:k}) = \frac{p(y_k \mid x_k)\, p(x_k \mid y_{1:k-1})}{p(y_k \mid y_{1:k-1})} \tag{3}$$
Writing $p(x_k \mid y_{1:k-1}) = p(\hat{x}_{k|k-1})$ for brevity and substituting (3) into (2), (2) can be rewritten after some algebra as

$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + \frac{1}{p(y_k \mid y_{1:k-1})}\, P_{k|k-1} \int p(y_k \mid x_k)\, \big(P_{k|k-1}^{-1}(x_k - \hat{x}_{k|k-1})\big)\, p(\hat{x}_{k|k-1})\, dx_k \tag{4}$$
In the KF, the prior state estimate (before the new observation is received) is assumed to be distributed according to a Gaussian probability density function, i.e.

$$p(\hat{x}_{k|k-1}) = N(x_k;\ \hat{x}_{k|k-1},\ P_{k|k-1}) \tag{5}$$
Differentiating (5) with respect to $x_k$ results in

$$\frac{\partial}{\partial x_k} p(\hat{x}_{k|k-1}) = -P_{k|k-1}^{-1}(x_k - \hat{x}_{k|k-1})\, p(\hat{x}_{k|k-1}) \tag{6}$$
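Identity (6) is simply the score of the Gaussian density in (5); it can be checked numerically against a central-difference gradient (illustrative values of our own):

```python
import numpy as np

def gauss_pdf(x, mean, P):
    """Density of N(mean, P) evaluated at x."""
    d = x - mean
    n = len(mean)
    norm = 1.0 / np.sqrt((2 * np.pi)**n * np.linalg.det(P))
    return norm * np.exp(-0.5 * d @ np.linalg.inv(P) @ d)

# Check Eq. (6): d/dx p(x) = -P^{-1} (x - mean) p(x).
mean = np.array([0.3, -0.1])
P = np.array([[2.0, 0.4], [0.4, 1.0]])
x = np.array([1.0, 0.5])
eps = 1e-6
grad_num = np.array([
    (gauss_pdf(x + eps * e, mean, P) - gauss_pdf(x - eps * e, mean, P)) / (2 * eps)
    for e in np.eye(2)
])
grad_ana = -np.linalg.inv(P) @ (x - mean) * gauss_pdf(x, mean, P)
print(np.allclose(grad_num, grad_ana, atol=1e-8))
```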
Substituting (6) into (4) results in

$$\hat{x}_{k|k} = \hat{x}_{k|k-1} - \frac{1}{p(y_k \mid y_{1:k-1})}\, P_{k|k-1} \int p(y_k \mid x_k)\, \frac{\partial}{\partial x_k} p(\hat{x}_{k|k-1})\, dx_k \tag{7}$$
If the observations are considered to be independent and identically distributed, the likelihood density function reduces to a function only of the residual between the estimate and the observation, i.e.

$$p(y_k \mid x_k) = p(y_k - H_k x_k) \tag{8}$$
Substituting (8) into (7) and integrating by parts results in

$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + \frac{1}{p(y_k \mid y_{1:k-1})}\, P_{k|k-1} \int p(\hat{x}_{k|k-1})\, \frac{\partial}{\partial x_k} p(y_k - H_k x_k)\, dx_k \tag{9}$$
Since

$$\frac{\partial}{\partial x_k} p(y_k - H_k x_k) = -H_k^T\, \frac{\partial}{\partial y_k} p(y_k - H_k x_k) \tag{10}$$
Eq. (9) can be rewritten as

$$\hat{x}_{k|k} = \hat{x}_{k|k-1} - \frac{1}{p(y_k \mid y_{1:k-1})}\, P_{k|k-1} \int p(\hat{x}_{k|k-1})\, H_k^T\, \frac{\partial}{\partial y_k} p(y_k - H_k x_k)\, dx_k \tag{11}$$
Noticing that the observations are independent and identically distributed,

$$\frac{\partial}{\partial y_k} p(\hat{x}_{k|k-1}) = \frac{\partial}{\partial y_k} p(x_k \mid y_{1:k-1}) = 0 \tag{12}$$
Combining (11) and (12) gives

$$\hat{x}_{k|k} = \hat{x}_{k|k-1} - \frac{1}{p(y_k \mid y_{1:k-1})}\, P_{k|k-1} H_k^T \int \frac{\partial}{\partial y_k}\big(p(y_k - H_k x_k)\, p(\hat{x}_{k|k-1})\big)\, dx_k \tag{13}$$
Changing the order of differentiation and integration yields

$$\hat{x}_{k|k} = \hat{x}_{k|k-1} - \frac{1}{p(y_k \mid y_{1:k-1})}\, P_{k|k-1} H_k^T\, \frac{\partial}{\partial y_k} p(y_k \mid y_{1:k-1}) \tag{14}$$
which can be rewritten as

$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + P_{k|k-1} H_k^T \left(-\frac{\partial}{\partial y_k} \ln p(y_k \mid y_{1:k-1})\right) \tag{15}$$
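In the nominal Gaussian case, $p(y_k \mid y_{1:k-1}) = N(y_k;\ H_k \hat{x}_{k|k-1},\ S_k)$, so the score term in (15) equals $S_k^{-1}(y_k - H_k \hat{x}_{k|k-1})$ and the update reduces to the standard KF correction $\hat{x}_{k|k-1} + P_{k|k-1} H_k^T S_k^{-1} s_k$. A quick numerical check of this reduction, with illustrative values of our own:

```python
import numpy as np

H = np.array([[1.0, 0.0]])
P_pred = np.array([[2.0, 0.3], [0.3, 1.0]])
R = np.array([[0.5]])
x_pred = np.array([1.0, -2.0])
y = np.array([1.7])

S = H @ P_pred @ H.T + R        # innovation covariance
s = y - H @ x_pred              # innovation

def neg_log_pred(y_):
    """-ln p(y | y_{1:k-1}) in the Gaussian case, up to an additive constant."""
    d = y_ - H @ x_pred
    return 0.5 * d @ np.linalg.inv(S) @ d

# Score -d/dy ln p(y | y_{1:k-1}) by central differences (m = 1 here).
eps = 1e-6
score = np.array([(neg_log_pred(y + eps) - neg_log_pred(y - eps)) / (2 * eps)])

# Eq. (15) with the numerical score ...
x_15 = x_pred + P_pred @ H.T @ score
# ... equals the standard KF update with gain K = P H^T S^{-1}.
x_kf = x_pred + P_pred @ H.T @ np.linalg.inv(S) @ s
print(np.allclose(x_15, x_kf, atol=1e-6))
```

For non-Gaussian predictive densities the score is no longer linear in the innovation, which is precisely what the Huber modification exploits.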
Denote the innovation as $s_k = y_k - H_k \hat{x}_{k|k-1}$, with corresponding covariance $S_k$, and denote the transformed innovation as $\nu_k = S_k^{-1/2} s_k$, so that

$$y_k = S_k^{1/2} \nu_k + H_k \hat{x}_{k|k-1} \tag{16}$$
Combining (15) and (16) results in

$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + P_{k|k-1} H_k^T S_k^{-1/2} \left(-\frac{\partial}{\partial \nu_k} \ln p(\nu_k \mid y_{1:k-1})\right) \tag{17}$$
Define the influence function $\varphi(\nu_k)$ as the negative likelihood score of the transformed innovation density:

$$\varphi(\nu_k) = -\frac{\partial}{\partial \nu_k} \ln p(\nu_k \mid y_{1:k-1}) \tag{18}$$
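For a Gaussian transformed-innovation density, $\varphi(\nu_k) = \nu_k$ and (17) recovers the standard KF update. Huber's methodology instead uses the bounded influence function obtained by differentiating the ρ-function, which clips large transformed innovations. A sketch with the common tuning constant δ = 1.345 (the clipping form follows Huber's M-estimation generally, not necessarily the paper's exact notation):

```python
import numpy as np

def huber_psi(nu, delta=1.345):
    """Influence function for Huber's rho: the identity for |nu| <= delta
    (Gaussian / l2 behaviour), clipped to +/-delta beyond it (l1 behaviour)."""
    return np.clip(nu, -delta, delta)

nu = np.array([0.2, -1.0, 8.0, -20.0])
print(huber_psi(nu))  # large transformed innovations are bounded at +/-1.345
```

Because the influence is bounded, an arbitrarily large outlier contributes at most a fixed-size correction to the state estimate in (17), instead of the unbounded correction produced by the Gaussian score.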