In March 2001 [10–13], Zhang et al. proposed a special kind of RNN for the real-time inversion of time-varying matrices, differing from gradient-based neural networks (GNN) for constant matrix inversion [6,14–16]. To solve for a time-varying matrix inverse $A^{-1}(t) \in \mathbb{R}^{n \times n}$, the design of the neural network is based on the defining equation $A(t)X(t) - I = 0$ over time $t \in [0, +\infty)$, where $A(t) \in \mathbb{R}^{n \times n}$ is a smoothly time-varying nonsingular matrix, and $I$ denotes an appropriately-dimensioned identity matrix. As proposed and investigated in [10–13,21–25], the following recurrent neural network [simply termed the Zhang neural network (ZNN), for ease of presentation] can be used to solve for the time-varying matrix inverse $A^{-1}(t)$ in real time $t$:

$$A(t)\dot{X}(t) = -\dot{A}(t)X(t) - \Gamma F\bigl(A(t)X(t) - I\bigr) \tag{1}$$
where $X(t) \in \mathbb{R}^{n \times n}$, starting from initial condition $X(0) \in \mathbb{R}^{n \times n}$, is the activation state matrix corresponding to the theoretical inverse $A^{-1}(t)$; the matrix-valued parameter $\Gamma \in \mathbb{R}^{n \times n}$ could simply be $\gamma I$ with $\gamma > 0 \in \mathbb{R}$; $\dot{A}(t) \in \mathbb{R}^{n \times n}$ denotes the derivative of matrix $A(t)$ with respect to time $t$ [or, simply termed, the time derivative of matrix $A(t)$]; and $F(\cdot): \mathbb{R}^{n \times n} \to \mathbb{R}^{n \times n}$ denotes an activation-function array of the neural network, of which a simple example is the linear one, i.e., $F(A(t)X(t) - I) = A(t)X(t) - I$.
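As a concrete illustration of dynamics (1) with the linear activation array, the following is a minimal Python/NumPy sketch (the paper itself uses MATLAB; the test matrix, gain, step size, and function names here are our own illustrative choices): it integrates the ZNN by explicit Euler steps and checks that the state approaches the time-varying inverse.

```python
import numpy as np

def znn_step(At, Adt, X, gamma, dt):
    """One explicit-Euler step of ZNN model (1) with linear activation:
    A(t) Xdot = -Adot(t) X - gamma * (A(t) X - I)."""
    n = At.shape[0]
    rhs = -Adt @ X - gamma * (At @ X - np.eye(n))  # right-hand side of (1)
    Xdot = np.linalg.solve(At, rhs)                # A(t) acts as a mass matrix
    return X + dt * Xdot

# Illustrative smoothly time-varying, always-nonsingular test matrix.
A  = lambda t: np.array([[2 + np.sin(t),  np.cos(t)],
                         [-np.cos(t),     2 + np.sin(t)]])
Ad = lambda t: np.array([[np.cos(t), -np.sin(t)],       # dA/dt
                         [np.sin(t),  np.cos(t)]])

dt, gamma, T = 1e-4, 100.0, 5.0
X = np.zeros((2, 2))                    # initial condition X(0) = 0
for k in range(int(T / dt)):
    X = znn_step(A(k * dt), Ad(k * dt), X, gamma, dt)

err = np.linalg.norm(A(T) @ X - np.eye(2))
print(err)  # residual ||A(T)X(T) - I|| stays tiny: X tracks the inverse
```

Note that each step solves a linear system with coefficient $A(t)$ rather than forming $A^{-1}(t)$ explicitly, mirroring the implicit (mass-matrix) form of (1).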
As compared to gradient-based recurrent neural networks, the difference and novelty of the ZNN model lie in the following facts.
(1) The design of ZNN model (1) is based on the elimination of every entry $e_{ij}(t)$ of the matrix-valued error function $E(t) = A(t)X(t) - I$, where $i, j \in \{1, 2, \ldots, n\}$. In contrast, the design of the GNN model

$$\dot{X}(t) = -\gamma A^{T} F\bigl(AX(t) - I\bigr) \tag{2}$$

is based on the elimination of the scalar-valued norm-based energy function $\varepsilon(t) = \|AX(t) - I\|^{2}$ (note that, in such a GNN design, matrix $A$ can only be constant) [6,14–16].
(2) ZNN model (1) is depicted in an implicit dynamics, i.e., $A(t)\dot{X}(t) = \cdots$, which coincides well with systems in nature and in practice (e.g., with analogue electronic circuits and mechanical systems, owing to Kirchhoff's and Newton's laws, respectively [10–13,21–25]). Evidently, with simple operations, the implicit systems can be transformed into explicit systems [13,30], if necessary. In contrast, GNN model (2) is depicted in an explicit dynamics, i.e., $\dot{X}(t) = \cdots$, which is usually associated with conventional Hopfield-type and/or gradient-based neural networks [6,14–16]. Comparing the implicit and explicit dynamic systems, it can be seen that the former has a greater ability to preserve physical parameters, e.g., even in the coefficient matrix on the left-hand side of the system [i.e., $A(t)$ in (1)].
(3) ZNN model (1) could systematically and methodologically exploit the time-derivative information (or, say, the trend) of coefficient matrix $A(t)$ during its real-time inverting process. This appears to be an important reason why the neural state $X(t)$ of ZNN model (1) could globally converge to the exact inverse of a time-varying matrix [12]. On the other hand, GNN model (2) does not exploit this important information of $\dot{A}(t)$, and thus may not be effective enough on time-varying matrix inversion.
(4) By making good use of the time-derivative information of matrix $A(t)$, ZNN model (1) belongs to a predictive approach, which is more effective on the system's convergence to a time-varying theoretical inverse [10–13]. In contrast, GNN model (2) belongs to a tracking approach, which adapts to the change of matrix $A(t)$ only in a passive, a posteriori manner.
(5) As shown in the following sections, the neural state $X(t)$ computed by ZNN model (1) can globally exponentially converge to the theoretical time-varying inverse $A^{-1}(t)$. In contrast, GNN model (2) can only generate approximations of such a theoretical inverse $A^{-1}(t)$ with much larger steady-state errors, and is thus less effective on time-varying matrix inversion.
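Points (3)–(5) can be made concrete numerically. The following Python sketch (again standing in for the paper's MATLAB experiments; the test matrix, gain, and step size are our own illustrative choices) Euler-integrates GNN model (2) and ZNN model (1) on the same smoothly time-varying matrix and compares the final residuals.

```python
import numpy as np

A  = lambda t: np.array([[2 + np.sin(t),  np.cos(t)],
                         [-np.cos(t),     2 + np.sin(t)]])
Ad = lambda t: np.array([[np.cos(t), -np.sin(t)],       # dA/dt
                         [np.sin(t),  np.cos(t)]])

dt, gamma, T = 1e-4, 100.0, 5.0
I = np.eye(2)
Xg = np.zeros((2, 2))  # GNN state
Xz = np.zeros((2, 2))  # ZNN state
for k in range(int(T / dt)):
    At, Adt = A(k * dt), Ad(k * dt)
    # GNN (2): Xdot = -gamma * A^T (A X - I); no derivative information used.
    Xg = Xg + dt * (-gamma * At.T @ (At @ Xg - I))
    # ZNN (1): A Xdot = -Adot X - gamma (A X - I); exploits Adot.
    Xz = Xz + dt * np.linalg.solve(At, -Adt @ Xz - gamma * (At @ Xz - I))

err_gnn = np.linalg.norm(A(T) @ Xg - I)
err_znn = np.linalg.norm(A(T) @ Xz - I)
print(err_gnn, err_znn)  # GNN lags behind the moving inverse; ZNN's residual is far smaller
```

The GNN state keeps chasing a target that has already moved on (the a posteriori behaviour of point (4)), so its residual settles at a nonzero lag, while the ZNN residual is limited essentially by the integration accuracy.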
To the best of the authors' knowledge, to date, few studies have been published dealing with the simulation of such recurrent neural networks. In this paper, the simulation and verification of ZNN model (1) using MATLAB [31] are investigated. To simulate such a special implicit dynamic system, four important simulation techniques (as briefed below) are employed.
The Kronecker product of matrices is introduced to transform matrix differential equation (1) into a vector differential equation (VDE), i.e., finally, a standard ordinary differential equation (ODE) formulation.
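The transformation rests on the identity $\mathrm{vec}(AXB) = (B^{T} \otimes A)\,\mathrm{vec}(X)$ with column-stacked $\mathrm{vec}(\cdot)$ (MATLAB's `X(:)` and `kron` follow the same convention). A small NumPy check of the special case used here, with our own illustrative random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A, X = rng.standard_normal((n, n)), rng.standard_normal((n, n))

# vec() stacks columns, matching MATLAB's X(:) convention.
vec = lambda M: M.flatten(order="F")

# Identity: vec(A @ X) == (I kron A) @ vec(X)
lhs = vec(A @ X)
rhs = np.kron(np.eye(n), A) @ vec(X)
print(np.allclose(lhs, rhs))  # True
```

Applying this identity term by term to (1) with linear activation yields the VDE $(I \otimes A(t))\,\dot{x}(t) = -(I \otimes \dot{A}(t))\,x(t) - \gamma\bigl((I \otimes A(t))\,x(t) - \mathrm{vec}(I)\bigr)$ in the $n^{2}$-vector $x = \mathrm{vec}(X)$, with mass matrix $I \otimes A(t)$.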
The MATLAB routine "ode45" with a mass-matrix property is introduced to simulate the transformed initial-value ODE system, where matrix $A(t)$ on the left-hand side of ZNN model (1) is termed a mass matrix.
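For readers without MATLAB, the same initial-value system can be sketched in SciPy (our own illustrative stand-in; `solve_ivp` has no mass-matrix option like `odeset('Mass', ...)`, so the mass matrix is solved on the fly inside the right-hand-side function, and the matrix, gain, and tolerances are assumptions):

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 2
I = np.eye(n)
A  = lambda t: np.array([[2 + np.sin(t),  np.cos(t)],
                         [-np.cos(t),     2 + np.sin(t)]])
Ad = lambda t: np.array([[np.cos(t), -np.sin(t)],       # dA/dt
                         [np.sin(t),  np.cos(t)]])
gamma = 10.0

def rhs(t, x):
    # Vectorized ZNN right-hand side; solving with A(t) on the fly plays
    # the role of the mass matrix kron(I, A(t)) in the transformed VDE.
    X = x.reshape(n, n, order="F")
    Xdot = np.linalg.solve(A(t), -Ad(t) @ X - gamma * (A(t) @ X - I))
    return Xdot.flatten(order="F")

sol = solve_ivp(rhs, (0.0, 10.0), np.zeros(n * n), rtol=1e-6, atol=1e-9)
X_end = sol.y[:, -1].reshape(n, n, order="F")
res = np.linalg.norm(A(10.0) @ X_end - I)
print(res)  # small residual: the state has converged to A^(-1)(t)
```

This is only a sketch of the setup; the paper's experiments use ode45 with the mass matrix supplied directly through the solver options.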
Matrix derivatives [e.g., $\dot{A}(t)$] are obtained using the MATLAB routine "diff" and the Symbolic Math Toolbox [31].
In addition, different types of activation-function arrays $F(\cdot)$, together with various implementation errors, are coded and simulated in order to illustrate the characteristics of ZNN model (1) working in different situations.
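To give a feel for why the choice of activation array matters, the following sketch compares two common entry-wise choices, the linear array and an odd-power array (these particular choices and parameters are our own illustration, not necessarily the paper's exact set), on the scalar error dynamics $\dot{e} = -\gamma f(e)$ implied by the ZNN design:

```python
import numpy as np

# Two illustrative entry-wise activation functions for the array F(.).
f_linear = lambda e: e          # linear array
f_power3 = lambda e: e ** 3     # odd-power array (p = 3)

def error_decay(f, e0=1.0, gamma=1.0, dt=1e-3, T=5.0):
    """Euler-integrate the ZNN error dynamics edot = -gamma * f(e)."""
    e = e0
    for _ in range(int(T / dt)):
        e -= dt * gamma * f(e)
    return e

e_lin, e_pow = error_decay(f_linear), error_decay(f_power3)
print(e_lin, e_pow)  # from e0 = 1: linear decays exponentially; cubic slows near zero
```

The linear array gives the exponential decay $e(t) = e(0)\,e^{-\gamma t}$, while the cubic array decays faster for $|e| > 1$ but only polynomially once $|e| < 1$, which is why composite arrays are of interest.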
2 Y. Zhang et al. / Simulation Modelling Practice and Theory xxx (2009) xxx–xxx
Please cite this article in press as: Y. Zhang et al., Simulation and verification of Zhang neural network for online time-varying matrix inversion, Simulat. Modell. Pract. Theory (2009), doi:10.1016/j.simpat.2009.07.001