Levenberg-Marquardt:
There is an artificial assumption that the second term in equation (7) can be approximated by

    λ^(k) D ∆c^(k) ,    (10)

where D is a suitable diagonal matrix of scales. It is often chosen as the unit matrix I or as the diagonal of the matrix A^(k). Equation (7) is then transformed into
    A^(k) ∆c^(k) + λ^(k) D ∆c^(k) = −v^(k) .    (11)
The idea is due to Levenberg; Marquardt improved the modification strategy, and Fletcher later refined it further.
Equation (11) may be converted into the form

    (A^(k) + λ^(k) D) ∆c^(k) = −v^(k) .    (12)
If D = diag(A^(k)), the diagonal of the system matrix is strongly influenced by the scale parameter λ. The higher λ is, the closer the result is to the stable solution of steepest descent. For λ = 0, the method approaches the Newton–Raphson method, which is less stable and may diverge.
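The two limiting cases of equation (12) can be checked numerically. A minimal sketch in Python with NumPy; the matrix A and vector v below are illustrative values chosen for this sketch, not taken from the text:

```python
import numpy as np

# Illustrative A^(k) and v^(k) (chosen for this sketch, not from the text)
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
v = np.array([1.0, 2.0])
D = np.diag(np.diag(A))            # Marquardt's scaling D = diag(A^(k))

def damped_step(lam):
    """Solve the damped system (A + lam*D) dc = -v of equation (12)."""
    return np.linalg.solve(A + lam * D, -v)

# lam -> 0: the step approaches the Newton-Raphson step solve(A, -v)
newton = np.linalg.solve(A, -v)
assert np.allclose(damped_step(1e-10), newton)

# lam large: the step approaches the scaled steepest-descent direction
# -(lam*D)^(-1) v, i.e. a short step against the gradient
lam = 1e8
assert np.allclose(damped_step(lam), -v / (lam * np.diag(A)))
```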
The strategy is based on comparing a forecast of the solution for the next iteration with the actual result. If the forecast is close to reality, λ may be lowered; if it is poor, λ should be raised in order to stabilize the process. The multipliers 2 and 10 chosen by the authors are heuristic; other values could be used.
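This λ control can be sketched as a small Python function. The gain-ratio form and the thresholds 0.25 and 0.75 below are a common textbook choice, not a quote of Fletcher's exact rule; only the multipliers 2 and 10 come from the text above:

```python
def update_lambda(lam, predicted_drop, actual_drop, grow=10.0, shrink=2.0):
    """Heuristic damping update: compare the predicted cost reduction with
    the one actually achieved, and adjust lambda accordingly."""
    if predicted_drop <= 0.0:
        return lam * grow          # nonsensical forecast: stabilize
    ratio = actual_drop / predicted_drop
    if ratio > 0.75:               # forecast close to reality: relax damping
        return lam / shrink
    if ratio < 0.25:               # poor forecast: stabilize the process
        return lam * grow
    return lam                     # otherwise keep lambda unchanged
```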
The Levenberg–Marquardt method in Fletcher's modification [1] for the solution of nonlinear least-squares problems was implemented in MATLAB some time ago in a simplified version under the name LMFsolve (see [2]), and it is widely used by the MATLAB community. The convergence and stability of the function were adversely affected both by the simplification of the code and by a bug in the application of the analytical form of the Jacobian matrix. This is the reason why a new version of the function, LMFnlsq2, has been built. It is an almost unchanged transcription of the original Fletcher's FORTRAN code into MATLAB structures, except for the initial part containing the option settings, the finite-difference evaluation of the Jacobian matrix, and the printout module. The new function is stable and efficient.
Unconstrained optimization
A script named LMFnlsq2test is provided for testing LMFnlsq2. It covers both the unconstrained and the constrained minimization of Rosenbrock's function

    f(x) = 100 (x_2 − x_1^2)^2 + (1 − x_1)^2    (13)

expressed as a sum of squares of residuals, f(x) = f_1^2(x) + f_2^2(x), where f_1(x) = 10 (x_2 − x_1^2) and f_2(x) = 1 − x_1. The results of this solution are shown in graphical form in the left picture of figure 1.
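The unconstrained case can be reproduced outside MATLAB with a minimal hand-written Levenberg–Marquardt loop. The Python sketch below uses the residuals f_1 and f_2 defined above; the function name, damping constants, and acceptance rule are choices of this sketch, not of LMFnlsq2:

```python
import numpy as np

def lm_rosenbrock(x0, lam=1e-3, tol=1e-10, max_iter=500):
    """Minimal Levenberg-Marquardt loop for the Rosenbrock residuals
    f1 = 10*(x2 - x1^2), f2 = 1 - x1 (a sketch, not LMFnlsq2 itself)."""
    x = np.asarray(x0, dtype=float)

    def residuals(x):
        return np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])

    for _ in range(max_iter):
        f = residuals(x)
        J = np.array([[-20.0 * x[0], 10.0],     # analytical Jacobian
                      [-1.0,          0.0]])
        A = J.T @ J                  # approximate Hessian A^(k)
        v = J.T @ f                  # gradient v^(k)
        # damped system (A + lam*D) dc = -v with D = diag(A), eq. (12)
        dc = np.linalg.solve(A + lam * np.diag(np.diag(A)), -v)
        x_new = x + dc
        if residuals(x_new) @ residuals(x_new) < f @ f:
            x = x_new                # step accepted: relax the damping
            lam = max(lam / 2.0, 1e-14)
            if np.linalg.norm(dc) < tol:
                break
        else:
            lam *= 10.0              # step rejected: stabilize
    return x
```

From the classical starting point (−1.2, 1) the loop should converge to the minimum (1, 1) of (13).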
Constrained optimization
An additional condition has to be stated in the case of a constrained problem. If the feasible domain is circular, with its center at the origin of coordinates and radius r, the condition can be formulated as

    x_1^2 + x_2^2 ≤ r^2 .    (14)
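One common way to impose a condition like (14) in a least-squares solver is to append a penalty residual that is nonzero only outside the feasible circle. The Python sketch below uses scipy.optimize.least_squares for brevity; the radius r, the weight w, and the penalty form are illustrative assumptions of this sketch and not necessarily what LMFnlsq2test does internally:

```python
import numpy as np
from scipy.optimize import least_squares

r = 0.5     # radius of the feasible circle (illustrative value)
w = 1.0e3   # penalty weight (illustrative value)

def residuals(x):
    # Rosenbrock residuals f1, f2 plus a penalty residual that
    # activates only where x1^2 + x2^2 > r^2
    penalty = w * max(0.0, x[0] ** 2 + x[1] ** 2 - r ** 2)
    return [10.0 * (x[1] - x[0] ** 2), 1.0 - x[0], penalty]

sol = least_squares(residuals, x0=[0.0, 0.0])
# sol.x should lie (approximately) inside the circle of radius r
```

With a large enough weight w the minimizer satisfies (14) up to a small violation of order 1/w; an exact treatment would require a constrained solver.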