AN INTRODUCTION TO
OPTIMIZATION
SOLUTIONS MANUAL
Fourth Edition
Edwin K. P. Chong and Stanisław H. Żak
A JOHN WILEY & SONS, INC., PUBLICATION
1. Methods of Proof and Some Notation
1.1
A   B   not A   not B   A ⇒ B   (not B) ⇒ (not A)
F   F     T       T       T             T
F   T     T       F       T             T
T   F     F       T       F             F
T   T     F       F       T             T
1.2
A   B   not A   not B   A ⇒ B   not (A and (not B))
F   F     T       T       T             T
F   T     T       F       T             T
T   F     F       T       F             F
T   T     F       F       T             T
1.3
A   B   not (A and B)   not A   not B   (not A) or (not B)
F   F         T           T       T             T
F   T         T           T       F             T
T   F         T           F       T             T
T   T         F           F       F             F
1.4
A   B   A and B   A and (not B)   (A and B) or (A and (not B))
F   F      F            F                      F
F   T      F            F                      F
T   F      F            T                      T
T   T      T            F                      T
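The four truth tables above can also be verified exhaustively in code. The short Python sketch below (an illustrative aid, not part of the original solutions) encodes each identity and checks all four rows; the encoding of "p ⇒ q" as `(not p) or q` is the standard one.

```python
# Exhaustive checks of the propositional identities behind the truth
# tables in 1.1-1.4.  "p => q" is encoded as (not p) or q.

def implies(p, q):
    return (not p) or q

for a in (False, True):
    for b in (False, True):
        # 1.1: A => B  is equivalent to its contrapositive (not B) => (not A)
        assert implies(a, b) == implies(not b, not a)
        # 1.2: A => B  is equivalent to  not (A and (not B))
        assert implies(a, b) == (not (a and (not b)))
        # 1.3: De Morgan: not (A and B)  is  (not A) or (not B)
        assert (not (a and b)) == ((not a) or (not b))
        # 1.4: (A and B) or (A and (not B))  reduces to  A
        assert ((a and b) or (a and (not b))) == a
print("all four identities hold on all rows")
```

Since each identity is checked over every truth assignment, passing the loop is a complete proof of the corresponding table.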
1.5
The cards that you should turn over are 3 and A. The remaining cards are irrelevant to ascertaining the truth or falsity of the rule. The card with S is irrelevant because S is not a vowel. The card with 8 is irrelevant because the rule does not say that if a card has an even number on one side, then it has a vowel on the other side.
Turning over the A card directly verifies the rule, while turning over the 3 card verifies its contrapositive.
2. Vector Spaces and Matrices
2.1
We show this by contradiction. Suppose n < m. Then the number of columns of A is n. Since rank A is the maximum number of linearly independent columns of A, rank A cannot exceed n < m, which contradicts the assumption that rank A = m.
2.2
⇒: Since there exists a solution, then by Theorem 2.1, rank A = rank[A, b]. So, it remains to prove that rank A = n. For this, suppose that rank A < n (note that it is impossible for rank A > n since A has only n columns). Hence, there exists y ∈ R^n, y ≠ 0, such that Ay = 0 (this is because the columns of A are linearly dependent, and Ay is a linear combination of the columns of A). Let x be a solution to Ax = b. Then clearly x + y ≠ x is also a solution. This contradicts the uniqueness of the solution. Hence, rank A = n.
⇐: By Theorem 2.1, a solution exists. It remains to prove that it is unique. For this, let x and y be
solutions, i.e., Ax = b and Ay = b. Subtracting, we get A(x − y) = 0. Since rank A = n and A has n
columns, then x − y = 0 and hence x = y, which shows that the solution is unique.
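The two directions above hinge on comparing rank A, rank[A, b], and n. As a numerical sanity check, the sketch below uses an assumed example (a 3 × 2 system with rank A = rank[A, b] = n = 2, so the solution exists and is unique) together with a small exact-arithmetic rank routine; both the matrix and the routine are illustrative, not from the text.

```python
from fractions import Fraction

def rank(rows):
    """Row-reduce a matrix (list of row lists) over the rationals; count pivots."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Assumed example: an overdetermined but consistent system with n = 2.
A = [[1, 0], [0, 1], [1, 1]]
b = [1, 2, 3]
Ab = [row + [bi] for row, bi in zip(A, b)]
n = 2
print(rank(A) == rank(Ab) == n)  # True: Ax = b has the unique solution x = (1, 2)
```

By the equivalence proved above, rank A = rank[A, b] guarantees existence (Theorem 2.1), and rank A = n upgrades that to uniqueness.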
2.3
Consider the vectors $\bar{a}_i = [1, a_i^\top]^\top \in \mathbb{R}^{n+1}$, $i = 1, \ldots, k$. Since $k \geq n+2$, the vectors $\bar{a}_1, \ldots, \bar{a}_k$ must be linearly dependent in $\mathbb{R}^{n+1}$. Hence, there exist $\alpha_1, \ldots, \alpha_k$, not all zero, such that
\[
\sum_{i=1}^{k} \alpha_i \bar{a}_i = 0.
\]
The first component of the above vector equation is $\sum_{i=1}^{k} \alpha_i = 0$, while the last $n$ components have the form $\sum_{i=1}^{k} \alpha_i a_i = 0$, completing the proof.
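A concrete instance may help. The sketch below uses an assumed example (n = 1, k = 3 ≥ n + 2, points chosen for illustration): the lifted vectors $(1, a_i)$ in $\mathbb{R}^2$ are necessarily dependent, and a dependence relation found by inspection exhibits coefficients with both required properties.

```python
# Assumed example for 2.3: n = 1, k = 3 points on the real line.
# The lifted vectors (1, a_i) in R^2 must be linearly dependent, yielding
# alphas, not all zero, with sum(alpha_i) = 0 and sum(alpha_i * a_i) = 0.

a = [0.0, 1.0, 2.0]        # k = 3 points in R^n, n = 1
alpha = [1.0, -2.0, 1.0]   # a dependence relation among (1, a_i), by inspection

print(sum(alpha))                                 # 0.0  (first component)
print(sum(al * ai for al, ai in zip(alpha, a)))   # 0.0  (last n components)
```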
2.4
a. We first postmultiply M by the matrix
\[
\begin{bmatrix} I_k & O \\ -M_{m-k,k} & I_{m-k} \end{bmatrix}
\]
to obtain
\[
\begin{bmatrix} M_{m-k,k} & I_{m-k} \\ M_{k,k} & O \end{bmatrix}
\begin{bmatrix} I_k & O \\ -M_{m-k,k} & I_{m-k} \end{bmatrix}
=
\begin{bmatrix} O & I_{m-k} \\ M_{k,k} & O \end{bmatrix}.
\]
Note that the determinant of the postmultiplying matrix is 1. Next we postmultiply the resulting product by
\[
\begin{bmatrix} O & I_k \\ I_{m-k} & O \end{bmatrix}
\]
to obtain
\[
\begin{bmatrix} O & I_{m-k} \\ M_{k,k} & O \end{bmatrix}
\begin{bmatrix} O & I_k \\ I_{m-k} & O \end{bmatrix}
=
\begin{bmatrix} I_{m-k} & O \\ O & M_{k,k} \end{bmatrix}.
\]
Notice that
\[
\det M = \det\left(\begin{bmatrix} I_{m-k} & O \\ O & M_{k,k} \end{bmatrix}\right)
\det\left(\begin{bmatrix} O & I_k \\ I_{m-k} & O \end{bmatrix}\right),
\]
where
\[
\det\left(\begin{bmatrix} O & I_k \\ I_{m-k} & O \end{bmatrix}\right) = \pm 1.
\]
The above easily follows from the fact that the determinant changes its sign if we interchange columns, as discussed in Section 2.2. Moreover,
\[
\det\left(\begin{bmatrix} I_{m-k} & O \\ O & M_{k,k} \end{bmatrix}\right)
= \det(I_{m-k}) \det(M_{k,k}) = \det(M_{k,k}).
\]
Hence,
\[
\det M = \pm \det M_{k,k}.
\]
b. We can see this with the following examples. We assume, without loss of generality, that $M_{m-k,k} = O$ and let $M_{k,k} = 2$, so that $k = 1$. First consider the case when $m = 2$. Then we have
\[
M = \begin{bmatrix} O & I_{m-k} \\ M_{k,k} & O \end{bmatrix}
= \begin{bmatrix} 0 & 1 \\ 2 & 0 \end{bmatrix}.
\]
Thus,
\[
\det M = -2 = \det(-M_{k,k}).
\]
Next consider the case when $m = 3$. Then
\[
\det \begin{bmatrix} O & I_{m-k} \\ M_{k,k} & O \end{bmatrix}
= \det \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 2 & 0 & 0 \end{bmatrix}
= 2 \neq \det(-M_{k,k}).
\]
Therefore, in general,
\[
\det M \neq \det(-M_{k,k}).
\]
However, when $k = m/2$, that is, when all blocks are square and of the same dimension, it is true that
\[
\det M = \det(-M_{k,k}).
\]
See [121].
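The two examples above (m = 2 and m = 3, both with $M_{k,k} = 2$) can be checked directly. The cofactor-expansion `det` helper below is an illustrative sketch, not part of the original solution.

```python
def det(m):
    """Determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

# m = 2, k = 1, M_{k,k} = [2]:
M2 = [[0, 1],
      [2, 0]]
print(det(M2))   # -2  =  det(-M_{k,k})

# m = 3, k = 1, M_{k,k} = [2]:
M3 = [[0, 1, 0],
      [0, 0, 1],
      [2, 0, 0]]
print(det(M3))   # 2, which differs from det(-M_{k,k}) = -2 only in sign
```

Both values agree with det M = ± det M_{k,k} from part (a); the sign flip between the two cases is exactly the ±1 contributed by the column interchanges.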
2.5
Let
\[
M = \begin{bmatrix} A & B \\ C & D \end{bmatrix}
\]
and suppose that each block is $k \times k$. John R. Silvester [121] showed that if at least one of the blocks is equal to $O$ (the zero matrix), then the desired formula holds. Indeed, if a row or column block is zero, then the determinant is equal to zero, as follows from the determinant's properties discussed in Section 2.2. That is, if $A = B = O$, or $A = C = O$, and so on, then obviously $\det M = 0$. This includes the case when any three or all four blocks are zero matrices.
If $B = O$ or $C = O$, then
\[
\det M = \det \begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det(AD).
\]
The only case left to analyze is when $A = O$ or $D = O$. We will show that in either case,
\[
\det M = \det(-BC).
\]
Without loss of generality, suppose that $D = O$. Following the arguments of John R. Silvester [121], we premultiply $M$ by the product of three matrices whose determinants are unity:
\[
\begin{bmatrix} I_k & -I_k \\ O & I_k \end{bmatrix}
\begin{bmatrix} I_k & O \\ I_k & I_k \end{bmatrix}
\begin{bmatrix} I_k & -I_k \\ O & I_k \end{bmatrix}
\begin{bmatrix} A & B \\ C & O \end{bmatrix}
=
\begin{bmatrix} -C & O \\ A & B \end{bmatrix}.
\]
Hence,
\[
\det \begin{bmatrix} A & B \\ C & O \end{bmatrix}
= \det \begin{bmatrix} -C & O \\ A & B \end{bmatrix}
= \det(-C) \det B
= \det(-I_k) \det C \det B.
\]
Thus we have
\[
\det \begin{bmatrix} A & B \\ C & O \end{bmatrix} = \det(-BC) = \det(-CB).
\]
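As a sanity check of this formula, the sketch below evaluates both sides for assumed $2 \times 2$ blocks $A$, $B$, $C$ (chosen arbitrarily for illustration) with $D = O$, using small pure-Python helpers.

```python
def det(m):
    """Determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def matmul(x, y):
    """Product of two matrices given as lists of rows."""
    return [[sum(x[i][t] * y[t][j] for t in range(len(y)))
             for j in range(len(y[0]))] for i in range(len(x))]

def neg(x):
    return [[-v for v in row] for row in x]

# Assumed 2x2 blocks, arbitrary values; D = O.
A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = [[2, 0], [1, 1]]
M = [[1, 2, 0, 1],
     [3, 4, 1, 0],
     [2, 0, 0, 0],
     [1, 1, 0, 0]]  # the block matrix [[A, B], [C, O]]

print(det(M) == det(neg(matmul(B, C))) == det(neg(matmul(C, B))))  # True
```

Note that $A$ plays no role in the final value, exactly as the derivation predicts.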
2.6
We represent the given system of equations in the form $Ax = b$, where
\[
A = \begin{bmatrix} 1 & 1 & 2 & 1 \\ 1 & -2 & 0 & -1 \end{bmatrix}, \quad
x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}, \quad \text{and} \quad
b = \begin{bmatrix} 1 \\ -2 \end{bmatrix}.
\]
Using elementary row operations yields
\[
A = \begin{bmatrix} 1 & 1 & 2 & 1 \\ 1 & -2 & 0 & -1 \end{bmatrix}
\rightarrow
\begin{bmatrix} 1 & 1 & 2 & 1 \\ 0 & -3 & -2 & -2 \end{bmatrix},
\quad \text{and} \quad
[A, b] = \begin{bmatrix} 1 & 1 & 2 & 1 & 1 \\ 1 & -2 & 0 & -1 & -2 \end{bmatrix}
\rightarrow
\begin{bmatrix} 1 & 1 & 2 & 1 & 1 \\ 0 & -3 & -2 & -2 & -3 \end{bmatrix},
\]
from which rank A = 2 and rank[A, b] = 2. Therefore, by Theorem 2.1, the system has a solution.
We next represent the system of equations as
\[
\begin{bmatrix} 1 & 1 \\ 1 & -2 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
=
\begin{bmatrix} 1 - 2x_3 - x_4 \\ -2 + x_4 \end{bmatrix}.
\]
Assigning arbitrary values to $x_3$ and $x_4$ ($x_3 = d_3$, $x_4 = d_4$), we get
\[
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
=
\begin{bmatrix} 1 & 1 \\ 1 & -2 \end{bmatrix}^{-1}
\begin{bmatrix} 1 - 2x_3 - x_4 \\ -2 + x_4 \end{bmatrix}
= -\frac{1}{3}
\begin{bmatrix} -2 & -1 \\ -1 & 1 \end{bmatrix}
\begin{bmatrix} 1 - 2x_3 - x_4 \\ -2 + x_4 \end{bmatrix}
=
\begin{bmatrix} -\frac{4}{3} d_3 - \frac{1}{3} d_4 \\[2pt] 1 - \frac{2}{3} d_3 - \frac{2}{3} d_4 \end{bmatrix}.
\]
Therefore, a general solution is
\[
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}
=
\begin{bmatrix} -\frac{4}{3} d_3 - \frac{1}{3} d_4 \\[2pt] 1 - \frac{2}{3} d_3 - \frac{2}{3} d_4 \\ d_3 \\ d_4 \end{bmatrix}
=
d_3 \begin{bmatrix} -\frac{4}{3} \\[2pt] -\frac{2}{3} \\ 1 \\ 0 \end{bmatrix}
+ d_4 \begin{bmatrix} -\frac{1}{3} \\[2pt] -\frac{2}{3} \\ 0 \\ 1 \end{bmatrix}
+ \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix},
\]
where $d_3$ and $d_4$ are arbitrary values.
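The general solution can be spot-checked by substituting it back into the two original equations for a few values of $d_3$ and $d_4$. The Python sketch below does this with exact rational arithmetic; it is a verification aid, not part of the original solution.

```python
from fractions import Fraction as F

def solution(d3, d4):
    """General solution of 2.6 as a function of the free parameters d3, d4."""
    d3, d4 = F(d3), F(d4)
    x1 = -F(4, 3) * d3 - F(1, 3) * d4
    x2 = 1 - F(2, 3) * d3 - F(2, 3) * d4
    return [x1, x2, d3, d4]

for d3, d4 in [(0, 0), (1, 0), (0, 1), (5, -7)]:
    x1, x2, x3, x4 = solution(d3, d4)
    assert x1 + x2 + 2 * x3 + x4 == 1   # first equation of Ax = b
    assert x1 - 2 * x2 - x4 == -2       # second equation of Ax = b
print("general solution verified for sample parameters")
```

Because the parameters enter linearly, checking a handful of (d3, d4) pairs beyond the two basis directions already exercises every term of the solution formula.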
2.7
1. Apply the definition of | −a|:
| −a| =
−a if −a > 0
0 if −a = 0
−(−a) if −a < 0
=
−a if a < 0
0 if a = 0
a if a > 0
= |a|.
2. If a ≥ 0, then |a| = a. If a < 0, then |a| = −a > 0 > a. Hence |a| ≥ a. On the other hand, |− a| ≥ −a
(by the above). Hence, a ≥ −| − a| = −|a| (by property 1).
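Both properties can also be sanity-checked numerically with Python's built-in `abs`, over a few sample values chosen for illustration:

```python
# Numeric spot-check of 2.7: |-a| = |a| and -|a| <= a <= |a|.
for a in (-3.5, -1, 0, 2, 7.25):
    assert abs(-a) == abs(a)         # property 1
    assert -abs(a) <= a <= abs(a)    # property 2 and its mirror
print("abs properties hold on sample values")
```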