(2.1) \min\{\psi(p) : \|Dp\| \le \Delta\} .
In theory D is any given nonsingular matrix, but in our implementation D is a diagonal matrix which takes into account the scaling of the problem. In either case, p lies in the hyperellipsoid

(2.2) E = \{p : \|Dp\| \le \Delta\} ,

but if D is diagonal, then E has axes along the coordinate directions and the length of the ith semi-axis is \Delta / d_i.
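As a small illustration (the matrix D, radius \Delta, and step p below are hypothetical values, not taken from the text), membership of a step in E and the semi-axis lengths \Delta / d_i can be checked directly:

```python
import numpy as np

# Hypothetical diagonal scaling D and trust-region radius Delta.
d = np.array([2.0, 0.5, 1.0])   # diagonal entries d_i of D
Delta = 1.0

# The semi-axis of E along the i-th coordinate direction has length Delta / d_i.
semi_axes = Delta / d
print(semi_axes)                 # [0.5 2.  1. ]

# A step p lies in E = {p : ||Dp|| <= Delta} iff its scaled norm is at most Delta.
p = np.array([0.3, 0.4, 0.2])
in_E = np.linalg.norm(d * p) <= Delta
print(in_E)                      # True, since ||Dp|| ~ 0.663
```

Note that a large d_i shrinks the ellipsoid in direction i, so poorly scaled variables are restricted more tightly.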
We now consider the solution of (2.1) in some generality, and thus the problem

(2.3) \min\{\|f + Jp\| : \|Dp\| \le \Delta\}

where f \in R^m and J is any m by n matrix. The basis for the Levenberg-Marquardt method is the result that if p is a solution to (2.3), then p = p(\lambda) for some \lambda \ge 0 where

(2.4) p(\lambda) = -(J^T J + \lambda D^T D)^{-1} J^T f .

If J is rank deficient and \lambda = 0, then (2.4) is defined by the limiting process

Dp(0) \equiv \lim_{\lambda \to 0^+} Dp(\lambda) = -(J D^{-1})^{\dagger} f .

There are two possibilities: either \lambda = 0 and \|Dp(0)\| \le \Delta, in which case p(0) is the solution to (2.3) for which \|Dp\| is least, or \lambda > 0 and \|Dp(\lambda)\| = \Delta, and then p(\lambda) is the unique solution to (2.3).
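A short numerical sketch (with made-up J, f, and D) of how p(\lambda) in (2.4) behaves: \|Dp(\lambda)\| is a decreasing function of \lambda \ge 0, which is what makes it possible to match \|Dp(\lambda)\| = \Delta, and for full-rank J the pseudoinverse formula above agrees with the \lambda = 0 solve:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3
J = rng.standard_normal((m, n))   # hypothetical Jacobian (full rank a.s.)
f = rng.standard_normal(m)        # hypothetical residual vector
D = np.diag([1.0, 2.0, 0.5])      # hypothetical diagonal scaling

def p_of_lambda(lam):
    # Solve (J^T J + lam D^T D) p = -J^T f, i.e. equation (2.4).
    A = J.T @ J + lam * D.T @ D
    return np.linalg.solve(A, -J.T @ f)

# ||D p(lam)|| decreases monotonically as lam grows.
norms = [np.linalg.norm(D @ p_of_lambda(lam)) for lam in (0.0, 0.1, 1.0, 10.0)]
assert all(a >= b for a, b in zip(norms, norms[1:]))

# The limiting formula D p(0) = -(J D^{-1})^+ f reproduces the lam = 0 step.
Dp0 = -np.linalg.pinv(J @ np.linalg.inv(D)) @ f
assert np.allclose(Dp0, D @ p_of_lambda(0.0))
```

The monotonicity is what the algorithm below exploits: if \|Dp(0)\| exceeds \Delta, there is exactly one \lambda > 0 at which \|Dp(\lambda)\| = \Delta.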
The above results suggest the following iteration.
(2.5) Algorithm

(a) Given \Delta_k > 0, find \lambda_k \ge 0 such that if

(J_k^T J_k + \lambda_k D_k^T D_k) p_k = -J_k^T f_k ,

then either \lambda_k = 0 and \|D_k p_k\| \le \Delta_k, or \lambda_k > 0 and \|D_k p_k\| = \Delta_k.

(b) If \|F(x_k + p_k)\| < \|F(x_k)\|, set x_{k+1} = x_k + p_k and evaluate J_{k+1}; otherwise set x_{k+1} = x_k and J_{k+1} = J_k.

(c) Choose \Delta_{k+1} and D_{k+1}.
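The iteration can be sketched as follows. This is only a minimal sketch, not the implementation developed in the following sections: it takes D_k = I, keeps \Delta_k fixed (a trivial step (c)), and finds \lambda_k in step (a) by simple bisection; the test problem at the end is made up.

```python
import numpy as np

def lm_iteration(F, jac, x0, Delta=1.0, iters=20):
    """Sketch of Algorithm (2.5) with D_k = I and Delta_k held fixed."""
    x = np.asarray(x0, dtype=float)
    J = jac(x)
    for _ in range(iters):
        f = F(x)

        def step(lam):
            # Solve (J^T J + lam I) p = -J^T f, as in step (a).
            return np.linalg.solve(J.T @ J + lam * np.eye(len(x)), -J.T @ f)

        # Step (a): lam = 0 if the unconstrained step fits in the region,
        # otherwise bisect for lam > 0 with ||p(lam)|| ~ Delta.
        p = step(0.0)
        if np.linalg.norm(p) > Delta:
            lo, hi = 0.0, 1.0
            while np.linalg.norm(step(hi)) > Delta:
                hi *= 2.0
            for _ in range(50):
                mid = 0.5 * (lo + hi)
                if np.linalg.norm(step(mid)) > Delta:
                    lo = mid
                else:
                    hi = mid
            p = step(hi)

        # Step (b): accept the step only if it reduces ||F||.
        if np.linalg.norm(F(x + p)) < np.linalg.norm(f):
            x = x + p
            J = jac(x)
        # Step (c) is trivial here: Delta and D = I stay fixed.
    return x

# Usage on a tiny linear least-squares problem F(x) = A x - b,
# whose exact solution is x = (1, 2) with zero residual.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
x = lm_iteration(lambda x: A @ x - b, lambda x: A, np.zeros(2))
print(x)   # close to [1. 2.]
```

Because the residual here is linear, every constrained step reduces \|F\| and is accepted; once the iterate is within \Delta of the minimizer, the \lambda = 0 step lands on it exactly.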
In the next four sections we elaborate on how (2.5) leads to a very robust and
efficient implementation of the Levenberg-Marquardt algorithm.