NASA Technical Memorandum 101466
ICOMP-89-2

On the Equivalence of Gaussian Elimination and Gauss-Jordan Reduction in Solving Linear Equations
Nai-kuan Tsao
Wayne State University
Detroit, Michigan
and Institute for Computational Mechanics in Propulsion
Lewis Research Center
Cleveland, Ohio

February 1989
On the Equivalence of Gaussian Elimination and Gauss-Jordan Reduction in Solving Linear Equations
Nai-kuan Tsao*
Wayne State University
Detroit, Michigan
48202
and Institute for Computational Mechanics in Propulsion
Lewis Research Center
Cleveland, Ohio
44135
Abstract

A novel general approach to round-off error analysis using the error complexity concepts is described. This is applied to the analysis of the Gaussian Elimination and Gauss-Jordan schemes for solving linear equations. The results show that the two algorithms are equivalent in terms of our error complexity measures. Thus the inherently parallel Gauss-Jordan scheme can be implemented with confidence if parallel computers are available.
*This work was supported in part by the Space Act Agreement CWG while the author was visiting ICOMP, NASA Lewis Research Center.
1. Introduction

In past decades, the backward error analysis of Wilkinson [3] has enabled us to gain much insight into the behavior of round-off error propagation in numerical algebraic algorithms. One is required, naturally, to possess varying degrees of mathematical sophistication in order to apply his approach to the analysis of the numerical algorithms at hand.
The algorithms of Gaussian Elimination (GE) and Gauss-Jordan reduction (GJ) are well known in solving linear equations. The numerical stability of Gaussian Elimination with partial pivoting is shown in [3], and the stability of Gauss-Jordan reduction is shown in [4], all using Wilkinson's approach. The results show that in Gaussian Elimination the computed solution x of a given system Ax = b is the exact solution of some neighboring system (A + E)x = b with a reasonable bound for E, whereas in Gauss-Jordan reduction the computed solution x is such that each component of x belongs to the exact solution of a different neighboring system.
In this paper we extend the error complexity concepts in [2] to the division operation and apply them to the analysis of the Gauss-Jordan and Gaussian Elimination algorithms to show, from an alternative point of view, that the two algorithms are indeed equivalent in terms of our error complexity measures. Thus one can implement the inherently parallel Gauss-Jordan reduction algorithm with confidence on parallel computers.
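To make the two schemes being compared concrete, here is a minimal Python sketch (my own illustration, not from the memorandum; the function names are invented) of both reductions applied to a dense system, without pivoting:

```python
def gaussian_elimination(A, b):
    """Forward elimination to upper-triangular form, then back-substitution."""
    n = len(A)
    A = [row[:] for row in A]   # work on copies; keep the caller's data intact
    b = b[:]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]          # multiplier (no pivoting here)
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):         # sequential back-substitution
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

def gauss_jordan(A, b):
    """Eliminate above AND below each pivot; no back-substitution needed."""
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n):
        for i in range(n):
            if i != k:                     # every non-pivot row updates
                m = A[i][k] / A[k][k]      # independently at this step
                for j in range(k, n):
                    A[i][j] -= m * A[k][j]
                b[i] -= m * b[k]
    return [b[i] / A[i][i] for i in range(n)]
```

In the Gauss-Jordan loop every non-pivot row is updated independently at each step k, which is the natural parallelism the text refers to; Gaussian Elimination instead defers the work above the diagonal to a sequential back-substitution.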
Some preliminary results are presented in Section 2. They are applied in Section 3 to the error complexity analysis of the two algorithms for solving linear systems of equations.
2. Some Preliminary Results

Consider the simple problem of evaluating the 3 by 3 determinant

$$\det(A) = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}. \qquad (2.1)$$

A straightforward algorithm would evaluate (2.1) as

$$\det(A) = a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{23}a_{31}) + a_{13}(a_{21}a_{32} - a_{22}a_{31}).$$
On the other hand, one can also express (2.1) as

$$\det(A) = a_{11}\, a'_{22}\, a''_{33}, \qquad (2.2)$$

where

$$a'_{ij} = a_{ij} - \frac{a_{i1}}{a_{11}}\, a_{1j} \quad (i, j = 2, 3), \qquad a''_{33} = a'_{33} - \frac{a'_{32}}{a'_{22}}\, a'_{23}.$$

Expanding (2.2), we have

$$\det(A) = a_{11}a_{22}a_{33} - a_{13}a_{22}a_{31} - a_{12}a_{21}a_{33} - a_{11}a_{23}a_{32} + a_{13}a_{21}a_{32} + a_{12}a_{23}a_{31} + \frac{a_{12}a_{13}a_{21}a_{31}}{a_{11}} - \frac{a_{12}a_{13}a_{21}a_{31}}{a_{11}},$$

which is equivalent to (2.1) once the term $a_{12}a_{13}a_{21}a_{31}/a_{11}$ is cancelled out. This cancellation is of course unlikely in actual computation due to round-off errors. Thus the latter approach is more likely to incur round-off errors during the computational process.
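The point can be checked numerically; the following Python sketch (my own illustration, with invented function names) evaluates the determinant both ways. In exact rational arithmetic the extra pair of terms cancels and the two routes agree; in floating point the cancellation is only approximate.

```python
from fractions import Fraction

def det_direct(a):
    # Route (2.1): cofactor expansion along the first row
    return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
          - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
          + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

def det_elimination(a):
    # Route (2.2): det(A) = a11 * a'22 * a''33 via elimination multipliers
    a22p = a[1][1] - (a[1][0] / a[0][0]) * a[0][1]
    a23p = a[1][2] - (a[1][0] / a[0][0]) * a[0][2]
    a32p = a[2][1] - (a[2][0] / a[0][0]) * a[0][1]
    a33p = a[2][2] - (a[2][0] / a[0][0]) * a[0][2]
    a33pp = a33p - (a32p / a22p) * a23p
    return a[0][0] * a22p * a33pp

# With exact Fractions the extra terms cancel and the two routes agree.
F = Fraction
A = [[F(2), F(1), F(1)], [F(1), F(3), F(2)], [F(1), F(1), F(4)]]
assert det_direct(A) == det_elimination(A) == 16
```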
Our approach in the error analysis of different algorithms designed for the same problem strives to provide such information as the number of round-off error occurrences, as well as the extra terms created during the computational process, for easy comparison, and at the same time enables us to gain more insight into the details of how each algorithm works.
Given a normalized floating-point system with a t-digit base-β mantissa, the following equations can be assumed to facilitate the error analysis of general arithmetic expressions using only +, −, ×, or / operations [3]:

$$fl(x \circ y) = (x \circ y)\Delta, \qquad \circ \in \{+, -, \times, /\}, \qquad (2.3)$$

where

$$|\Delta| \le 1 + u, \qquad u = \begin{cases} \tfrac{1}{2}\beta^{1-t} & \text{for rounded operations,} \\ \beta^{1-t} & \text{for chopped operations,} \end{cases}$$

and x and y are given machine floating-point numbers and fl(·) is used to denote the computed floating-point result of the given argument. We shall call Δ the unit Δ-factor.
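In IEEE double-precision Python (a binary, rounded system rather than the general base-β one above, so u = 2⁻⁵³), one can observe a unit Δ-factor directly by comparing a hardware product against an exact rational reference; `delta_factor` is my own illustrative helper, not notation from the paper:

```python
from fractions import Fraction

U = 2.0 ** -53           # unit roundoff for rounded binary64 arithmetic

def delta_factor(x, y):
    """Return the Δ-factor actually incurred by one multiplication."""
    exact = Fraction(x) * Fraction(y)   # error-free reference product
    computed = Fraction(x * y)          # what the hardware delivered
    return computed / exact             # Δ = fl(xy) / (xy)

d = delta_factor(1.0 / 3.0, 0.1)
assert abs(d - 1) <= U                  # |Δ - 1| <= u, as (2.3) asserts
```

When the product happens to be exactly representable, Δ is exactly 1 and no round-off occurs.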
In general one can apply (2.3) repeatedly to a sequence of arithmetic steps, and the computed result z can be expressed as

$$z = \frac{z_n}{z_d} = \frac{\sum_i z_{n_i} \Delta^{k_i}}{\sum_j z_{d_j} \Delta^{k_j}}, \qquad (2.4)$$

where each $z_{n_i}$ or $z_{d_j}$ is an exact product of error-free data, and $\Delta^k$ stands for the product of k possibly different Δ-factors. We should emphasize that all common factors between the numerator and denominator should have been factored out before z can be expressed in its final rational form of (2.4).
Following [1], we shall henceforth call such an exact product of error-free data a basic term, or simply a term. Thus $\lambda(z_n)$ or $\lambda(z_d)$ is then the total number of such terms whose sum constitutes $z_n$ or $z_d$, respectively, and $\sigma(z_{n_i})$ or $\sigma(z_{d_j})$ gives the possible number of round-off occurrences during the computational process. We define the following two measures:
maximum error complexity: $\displaystyle \max_{i,j}\,\{\sigma(z_{n_i}),\; \sigma(z_{d_j})\}$,

cumulative error complexity: $\displaystyle \sum_{i=1}^{\lambda(z_n)} \sigma(z_{n_i}) + \sum_{j=1}^{\lambda(z_d)} \sigma(z_{d_j})$.
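As a toy sketch of this bookkeeping (the expression, the helper name, and the σ-counts below are my own illustration, not the paper's): for $z = fl((ab + cd)/e)$ with error-free data, each product picks up one Δ-factor, the sum one more per term, and the division one more, so both numerator terms carry 3 Δ-factors while the lone denominator term e carries none.

```python
def measures(numer_sigmas, denom_sigmas):
    """Given sigma(z_n_i) and sigma(z_d_j) counts for every basic term,
    return (maximum error complexity, cumulative error complexity)."""
    maximum = max(numer_sigmas + denom_sigmas)          # worst single term
    cumulative = sum(numer_sigmas) + sum(denom_sigmas)  # total occurrences
    return maximum, cumulative

# z = fl((a*b + c*d)/e): numerator terms a*b and c*d carry 3 Delta-factors
# each, the denominator term e carries 0.
print(measures([3, 3], [0]))  # -> (3, 6)
```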