Linear Algebra and Its Applications (Lax), 2nd Edition: Solutions to Exercises

Required points/C-coins: 11 | 2011-12-25 23:23:25 | 19.66 MB | PDF

Solutions for Linear Algebra and Its Applications (by Peter D. Lax), English edition; very easy to follow!
Table of contents (concluding entries):
19.15.3 Decomposition of a finite dimensional linear vector space under a linear operator  50
19.15.4 Computation of elementary divisors and invariant factors
19.15.5 Examples
19.16 Numerical range  53

This is a solution manual for the textbook Linear Algebra and Its Applications, 2nd Edition, by Peter Lax (John Wiley & Sons, 2007). This version omits the following problems: Exercises 2 and 9 of Chapter 8; Exercise 3 of Appendix 3; the exercise problems of Appendices 4, 5, 8 and 11.

1 Fundamentals

1. Proof. Suppose 0 and 0′ are two zero elements for vector addition. Then by the definition of zero and commutativity we have 0 = 0 + 0′ = 0′ + 0 = 0′.

2. Proof. For any x = (x1, …, xn) ∈ K^n, we have x + 0 = (x1, …, xn) + (0, …, 0) = (x1 + 0, …, xn + 0) = x. So 0 = (0, …, 0) is the zero element of classical vector addition.

3. Proof. The isomorphism T can be defined as T((a1, …, an)) = a1 + a2 x + … + an x^(n-1).

4. Proof. Suppose S = {s1, …, sn}. The isomorphism T can be defined as T(f) = (f(s1), …, f(sn)) for every function f on S.

5. Proof. For any p(x) = a1 + a2 x + … + an x^(n-1), we define T(p) = p(x), where p on the left side of the equation is regarded as a polynomial over R while p(x) on the right side is regarded as a function defined on S = {s1, …, sn}. To prove T is an isomorphism, it suffices to prove T is one-to-one. This is seen through the observation that (p(s1), …, p(sn)) is obtained from (a1, …, an) by applying the Vandermonde matrix with rows (1, si, si^2, …, si^(n-1)), and the Vandermonde matrix is invertible for distinct s1, s2, …, sn.
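The invertibility claim at the heart of Exercise 5 is easy to spot-check numerically. The snippet below is my own sketch, not part of the manual; it assumes NumPy is available. It builds the Vandermonde matrix for a few distinct points and recovers a polynomial's coefficients from its values.

    import numpy as np

    # Distinct sample points s1, ..., sn (any distinct values work).
    s = np.array([0.5, 1.0, 2.0, 3.5])
    # Vandermonde matrix with rows (1, si, si^2, si^3).
    V = np.vander(s, increasing=True)
    print(np.linalg.matrix_rank(V))          # 4, i.e. full rank, hence invertible

    # Recover the coefficients of p(x) = 1 - 2x + 3x^3 from its values at s.
    coeffs = np.array([1.0, -2.0, 0.0, 3.0])
    values = V @ coeffs                      # (p(s1), ..., p(sn))
    print(np.linalg.solve(V, values))        # [ 1. -2.  0.  3.]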
6. Proof. For any y, y′ ∈ Y, z, z′ ∈ Z and k ∈ K, we have (by commutativity and the associative law) (y + z) + (y′ + z′) = (y + y′) + (z + z′) ∈ Y + Z and k(y + z) = ky + kz ∈ Y + Z. So Y + Z is a linear subspace of X if Y and Z are.

7. Proof. For any x1, x2 ∈ Y ∩ Z, since Y and Z are linear subspaces of X, x1 + x2 ∈ Y and x1 + x2 ∈ Z. Therefore x1 + x2 ∈ Y ∩ Z. For any k ∈ K and x ∈ Y ∩ Z, since Y and Z are linear subspaces of X, kx ∈ Y and kx ∈ Z. Therefore kx ∈ Y ∩ Z. Combined, we conclude Y ∩ Z is a linear subspace of X.

8. Proof. By the definition of the zero vector, 0 + 0 = 0 ∈ {0}. For any k ∈ K, k0 = k(0 + 0) = k0 + k0, so k0 = 0 ∈ {0}. Combined, we conclude {0} is a linear subspace of X.

9. Proof. Define Y = {k1 x1 + … + kj xj : k1, …, kj ∈ K}. Then clearly x1 = 1·x1 + 0·x2 + … + 0·xj ∈ Y. Similarly, we can show x2, …, xj ∈ Y. Since for any k1, …, kj, k1′, …, kj′ ∈ K,
(k1 x1 + … + kj xj) + (k1′ x1 + … + kj′ xj) = (k1 + k1′) x1 + … + (kj + kj′) xj ∈ Y,
and for any k1, …, kj, k ∈ K,
k(k1 x1 + … + kj xj) = (k k1) x1 + … + (k kj) xj ∈ Y,
we can conclude Y is a linear subspace of X containing x1, …, xj. Finally, if Z is any linear subspace of X containing x1, …, xj, it is clear that Y ⊂ Z, as Z must be closed under scalar multiplication and vector addition. Combined, we have proven Y is the smallest linear subspace of X containing x1, …, xj.

10. Proof. We prove by contradiction. Without loss of generality, assume x1 = 0. Then 1·x1 + 0·x2 + … + 0·xj = 0. This shows x1, …, xj are linearly dependent, a contradiction. So x1 ≠ 0. We can similarly prove x2, …, xj ≠ 0.

11. Proof. Suppose Yi has a basis yi1, …, yi,mi. Then it suffices to prove y11, …, y1,m1, …, yl1, …, yl,ml form a basis of X. By the definition of direct sum, these vectors span X, so we only need to show they are linearly independent. In fact, if not, then 0 has two distinct representations: 0 = 0 + … + 0 and 0 = Σ_i (ai1 yi1 + … + ai,mi yi,mi), where not all aik are zero. This contradicts the definition of direct sum. So we must have linear independence, which implies the vectors yik form a basis of X. Consequently, dim X = Σ_i dim Yi.

12. Proof. Fix a basis x1, …, xn of X. Any element x ∈ X can be uniquely represented as Σ_{i≤n} ai(x) xi for some ai(x) ∈ K, i = 1, …, n. We define the isomorphism as x ↦ (a1(x), …, an(x)). Clearly this isomorphism depends on the basis, and by varying the choice of basis we obtain different isomorphisms.

13. Proof. For any x1, x2 ∈ X, if x1 ≡ x2, i.e. x1 − x2 ∈ Y, then x2 − x1 = −(x1 − x2) ∈ Y, i.e. x2 ≡ x1. This is symmetry. For any x ∈ X, x − x = 0 ∈ Y, so x ≡ x. This is reflexivity. Finally, if x1 ≡ x2 and x2 ≡ x3, then x1 − x3 = (x1 − x2) + (x2 − x3) ∈ Y, i.e. x1 ≡ x3. This is transitivity.

14. Proof. For any x1, x2 ∈ X, we can find y ∈ {x1} ∩ {x2} if and only if x1 − y ∈ Y and x2 − y ∈ Y, in which case x1 − x2 = (x1 − y) − (x2 − y) ∈ Y. So {x1} ∩ {x2} ≠ ∅ if and only if {x1} = {x2}.

15. Proof. If {x} = {x′} and {y} = {y′}, then x − x′, y − y′ ∈ Y. So (x + y) − (x′ + y′) = (x − x′) + (y − y′) ∈ Y. This shows {x + y} = {x′ + y′}. Also, for any k ∈ K, kx − kx′ = k(x − x′) ∈ Y, so k{x} = {kx} = {kx′} = k{x′}.

16. Proof. By the theory of polynomials, we have Y = {q(t)·∏_{i≤j}(t − ti) : q(t) is a polynomial of degree < n − j}. Then it's easy to see dim Y = n − j and dim X/Y = dim X − dim Y = j.

17. Proof. By Theorem 6, dim X/Y = dim X − dim Y = 0, which implies X/Y = {0}. So X = Y.

18. Proof. Define Y1 = {(x, 0) : x ∈ X1, 0 ∈ X2} and Y2 = {(0, x) : 0 ∈ X1, x ∈ X2}. Then Y1 and Y2 are linear subspaces of X1 ⊕ X2. It is easy to see Y1 is isomorphic to X1, Y2 is isomorphic to X2, and Y1 ∩ Y2 = {(0, 0)}. So by Theorem 7, dim(X1 ⊕ X2) = dim Y1 + dim Y2 − dim(Y1 ∩ Y2) = dim X1 + dim X2 − 0 = dim X1 + dim X2.

19. Proof. By Exercise 18 and Theorem 6, dim(Y ⊕ X/Y) = dim Y + dim(X/Y) = dim Y + dim X − dim Y = dim X. Since linear spaces of the same finite dimension are isomorphic (by a one-to-one mapping between their bases), Y ⊕ X/Y is isomorphic to X.

20. Proof. (a) is not, since {x : x1 ≥ 0} is not closed under scalar multiplication by −1. (b) is. (c) is not, since x1 + x2 + 1 = 0 and x1′ + x2′ + 1 = 0 imply (x1 + x1′) + (x2 + x2′) + 1 = −1 ≠ 0. (d) is. (e) is not, since x1 being an integer does not guarantee r·x1 is an integer for every r ∈ R.

21. Proof. See the textbook's solution, page 279.

2 Duality

1. Proof. Suppose e1, …, en is a basis of X and suppose x1 = Σ_{i≤n} ai ei. If the underlying field is R, we define a linear function l by setting l(ei) = ai (i = 1, …, n) and extending its definition to X via linear combination. If the underlying field is C, we define l similarly by setting l(ei) = āi, where āi is the complex conjugate of ai (i = 1, …, n). In either case we have l(x1) = |x1|^2 ≠ 0, where |·| is the Euclidean norm of R^n or C^n.
To generalize the above result to a general linear vector space X over a field K, we clearly need some notion of norm. This is exactly the starting point of the Hahn-Banach Theorem, which claims a similar result for general linear vector spaces, not necessarily finite-dimensional (see Lax [6]). So this exercise problem needs extra conditions if we want to go beyond K = R or K = C.

2. Proof. For any l1 and l2 ∈ Y⊥, we have (l1 + l2)(y) = l1(y) + l2(y) = 0 for any y ∈ Y. For any k ∈ K, (kl)(y) = k(l(y)) = k·0 = 0 for any y ∈ Y, so kl ∈ Y⊥. Combined, we conclude Y⊥ is a subspace of X′.

3. Proof. Since S ⊂ Y, Y⊥ ⊂ S⊥. For the reverse inclusion, let x1, …, xm be a maximal linearly independent subset of S. Then S ⊂ span(x1, …, xm) and Y = {Σ_{i≤m} ai xi : a1, …, am ∈ K} by Exercise 9 of Chapter 1. By the definition of the annihilator, for any l ∈ S⊥ and y = Σ_{i≤m} ai xi ∈ Y, we have l(y) = Σ_{i≤m} ai l(xi) = 0. So l ∈ Y⊥. By the arbitrariness of l, S⊥ ⊂ Y⊥. Combined, we have S⊥ = Y⊥.

4. Proof. Suppose three linearly independent polynomials p1, p2 and p3 are plugged into formula (9). Then m1, m2 and m3 must satisfy the linear equations
p1(t1) m1 + p1(t2) m2 + p1(t3) m3 = ∫ p1(t) dt,
p2(t1) m1 + p2(t2) m2 + p2(t3) m3 = ∫ p2(t) dt,
p3(t1) m1 + p3(t2) m2 + p3(t3) m3 = ∫ p3(t) dt.
We take p1(t) = 1, p2(t) = t and p3(t) = t^2, with t1 = −a, t2 = 0, t3 = a. The above equations become
m1 + m2 + m3 = 2,  −a·m1 + a·m3 = 0,  a^2·m1 + a^2·m3 = 2/3,
so m1 = m3 = 1/(3a^2) and m2 = 2 − 2/(3a^2). Then it's easy to see that for a > √(1/3), all three weights are positive.
To show formula (9) holds for all polynomials of degree < 6 when a = √(3/5), we note that for any odd n ∈ N, ∫_{−1}^{1} t^n dt = 0, m1 p(−a) + m3 p(a) = 0 since m1 = m3 and p(−a) = −p(a), and m2 p(0) = 0. So (9) holds for any t^n of odd degree n, in particular for p(t) = t^3 and p(t) = t^5. For p(t) = t^4, we have ∫_{−1}^{1} t^4 dt = 2/5 and m1 p(t1) + m2 p(t2) + m3 p(t3) = 2 m1 a^4 = (2/3) a^2, which equals 2/5 exactly when a = √(3/5). So formula (9) holds for p(t) = t^4 when a = √(3/5). Combined, we conclude that for a = √(3/5), (9) holds for all polynomials of degree < 6.

Remark 1. In this exercise problem and Exercise 5 below, "Theorem 6" should be corrected to "Theorem …".
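As a quick sanity check on Exercise 4 (my own addition, not part of the manual; it assumes NumPy), the following snippet verifies that with a = √(3/5) the rule m1 p(−a) + m2 p(0) + m3 p(a) reproduces the integral of t^k over [−1, 1] for k = 0, …, 5 and fails at k = 6.

    import numpy as np

    a = np.sqrt(3 / 5)
    m1 = m3 = 1 / (3 * a**2)            # = 5/9
    m2 = 2 - 2 / (3 * a**2)             # = 8/9
    for k in range(7):
        exact = 2 / (k + 1) if k % 2 == 0 else 0.0   # integral of t^k over [-1, 1]
        rule = m1 * (-a)**k + m2 * 0.0**k + m3 * a**k
        print(k, np.isclose(rule, exact))            # True for k = 0..5, False for k = 6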
5. Proof. We take p1(t) = 1, p2(t) = t, p3(t) = t^2, and p4(t) = t^3, with nodes t1 = −a, t2 = −b, t3 = b, t4 = a. Then m1, m2, m3, and m4 solve the following equations:
m1 + m2 + m3 + m4 = 2,
−a·m1 − b·m2 + b·m3 + a·m4 = 0,
a^2·m1 + b^2·m2 + b^2·m3 + a^2·m4 = 2/3,
−a^3·m1 − b^3·m2 + b^3·m3 + a^3·m4 = 0.
Solving them (for a^2 ≠ b^2) gives
m1 = m4 = (1/3 − b^2)/(a^2 − b^2),  m2 = m3 = (a^2 − 1/3)/(a^2 − b^2).
So the weights are positive if and only if one of the following two mutually exclusive cases holds: (1) b^2 > 1/3 and a^2 < 1/3; (2) a^2 > 1/3 and b^2 < 1/3.

6. Proof. (from the textbook's solution) (a) Suppose there is a linear relation a·l1(p) + b·l2(p) + c·l3(p) = 0. Set p = p1(x) = (x − s2)(x − s3). Then p1(s2) = p1(s3) = 0 and p1(s1) ≠ 0, so we get from the above relation that a = 0. Similarly b = 0 and c = 0.
(b) Since dim P2 = 3, also dim P2′ = 3. Since l1, l2, l3 are linearly independent, they span P2′.
(c1) We define l1 by setting l1(ej) = 1 if j = 1 and l1(ej) = 0 if j ≠ 1, and extending l1 to V by linear combination, i.e. l1(Σ_{j≤n} aj ej) := Σ_{j≤n} aj l1(ej) = a1. l2, …, ln can be constructed similarly. If there exist a1, …, an such that a1 l1 + … + an ln = 0, we have 0 = a1 l1(ej) + … + an ln(ej) = aj for each j. So l1, …, ln are linearly independent. Since dim V′ = dim V = n, (l1, …, ln) is a basis of V′.
(c2) We define
p1(x) = (x − s2)(x − s3)/((s1 − s2)(s1 − s3)),  p2(x) = (x − s1)(x − s3)/((s2 − s1)(s2 − s3)),  p3(x) = (x − s1)(x − s2)/((s3 − s1)(s3 − s2)).

7. Proof. (from the textbook's solution) l(x) has to be zero for x = (1, 0, −1, 2) and x = (2, 3, 1, 1). These yield two equations for c1, …, c4:
c1 − c3 + 2c4 = 0,  2c1 + 3c2 + c3 + c4 = 0.
We express c1 and c2 in terms of c3 and c4. From the first equation, c1 = c3 − 2c4. Setting this into the second equation gives c2 = −c3 + c4.

3 Linear Mappings

1. (a) Proof. For any y, y′ ∈ T(X), there exist x, x′ ∈ X such that T(x) = y and T(x′) = y′. So y + y′ = T(x) + T(x′) = T(x + x′) ∈ T(X). For any k ∈ K, ky = kT(x) = T(kx) ∈ T(X). Combined, we conclude T(X) is a linear subspace of U.
(b) Proof. Suppose V is a linear subspace of U. For any x, x′ ∈ T^{-1}(V), there exist y, y′ ∈ V such that T(x) = y and T(x′) = y′. Since T(x + x′) = T(x) + T(x′) = y + y′ ∈ V, x + x′ ∈ T^{-1}(V). For any k ∈ K, since T(kx) = kT(x) = ky ∈ V, kx ∈ T^{-1}(V). Combined, we conclude T^{-1}(V) is a linear subspace of X.

2. Proof. (from the textbook's solution) Suppose we drop the i-th equation; if the remaining equations do not determine x uniquely, there is an x that is mapped into a vector whose components, except the i-th, are zero. If this were true for all i = 1, …, m, the range of the mapping x → u would be m-dimensional; but according to Theorem 2, the dimension of the range is at most n < m. Therefore one of the equations may be dropped without losing uniqueness; by induction, m − n of the equations may be omitted.
Alternative solution: Uniqueness of the solution x implies the column vectors of the matrix T = (tij) are linearly independent. Since the column rank of a matrix equals its row rank (see Chapter 4), it is possible to select a subset of n of these equations which uniquely determines the solution.

Remark 2. The textbook's solution is a proof that the column rank of a matrix equals its row rank.

3. Proof. S∘T(ax + by) = S(T(ax + by)) = S(aT(x) + bT(y)) = aS(T(x)) + bS(T(y)) = a·S∘T(x) + b·S∘T(y). So S∘T is also a linear mapping.
Proof. (R + S)∘T(x) = (R + S)(T(x)) = R(T(x)) + S(T(x)) = (R∘T + S∘T)(x), and S∘(T + P)(x) = S((T + P)(x)) = S(T(x) + P(x)) = S(T(x)) + S(P(x)) = (S∘T + S∘P)(x).

4. Proof. Linearity of S and T is easy to see. For non-commutativity, consider the polynomial s. Then TS(s) = T(s^2) = 2s ≠ s = S(1) = ST(s). So ST ≠ TS.
Proof. For any x = (x1, x2, x3) ∈ X, S(x) = (x1, x3, −x2) and T(x) = (x3, x2, −x1). So it's easy to see S and T are linear. For non-commutativity, note ST(x) = S(x3, x2, −x1) = (x3, −x1, −x2) and TS(x) = T(x1, x3, −x2) = (−x2, x3, −x1). So ST ≠ TS in general.

Remark 3. Note the problem does not specify the direction of the rotation, so it is also possible that S(x) = (x1, −x3, x2) and T(x) = (−x3, x2, x1). There are in total four choices of (S, T), but the corresponding proofs are similar to the one presented here.
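To make Exercise 4 of this chapter concrete, here is a small check of my own (not from the manual; it assumes NumPy), writing the maps S(x) = (x1, x3, −x2) and T(x) = (x3, x2, −x1) as matrices acting on column vectors and confirming that they do not commute.

    import numpy as np

    S = np.array([[1, 0, 0],
                  [0, 0, 1],
                  [0, -1, 0]])              # S(x) = (x1, x3, -x2)
    T = np.array([[0, 0, 1],
                  [0, 1, 0],
                  [-1, 0, 0]])              # T(x) = (x3, x2, -x1)
    print(S @ T)                            # matrix of ST: x -> (x3, -x1, -x2)
    print(T @ S)                            # matrix of TS: x -> (-x2, x3, -x1)
    print(np.array_equal(S @ T, T @ S))     # False, so ST != TS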
5. Proof. TT^{-1}(x) = T(T^{-1}(x)) = x by definition. So TT^{-1} = id.

6. Proof. Suppose T : X → U is invertible. Then for any y, y′ ∈ U, there exist a unique x ∈ X and a unique x′ ∈ X such that T(x) = y and T(x′) = y′. So T(x + x′) = T(x) + T(x′) = y + y′, and by the injectivity of T, T^{-1}(y + y′) = x + x′ = T^{-1}(y) + T^{-1}(y′). For any k ∈ K, since T(kx) = kT(x) = ky, injectivity of T implies T^{-1}(ky) = kx = kT^{-1}(y). Combined, we conclude T^{-1} is linear.

Proof. Suppose T : X → U and S : U → V. First, by the definition of multiplication, ST is a linear map. Second, if x ∈ X is such that ST(x) = 0 ∈ V, the injectivity of S implies T(x) = 0 ∈ U and the injectivity of T further implies x = 0 ∈ X. So ST is one-to-one. For any z ∈ V, there exists y ∈ U such that S(y) = z. Also, we can find x ∈ X such that T(x) = y. So ST(x) = S(y) = z. This shows ST is onto. Combined, we conclude ST is invertible.
By associativity, (T^{-1}S^{-1})(ST) = T^{-1}(S^{-1}S)T = id_X. Replacing S with T^{-1} and T with S^{-1}, we also have (ST)(T^{-1}S^{-1}) = id_V. Therefore, we can conclude (ST)^{-1} = T^{-1}S^{-1}.

Proof. Suppose T : X → U and S : U → V are linear maps. Then for any given l ∈ V′, ((ST)′l, x) = (l, STx) = (S′l, Tx) = (T′S′l, x) for every x ∈ X. Therefore (ST)′l = T′S′l. Letting l run through every element of V′, we conclude (ST)′ = T′S′.

Proof. Suppose T and R are both linear maps from X to U. For any given l ∈ U′, we have ((T + R)′l, x) = (l, (T + R)x) = (l, Tx + Rx) = (l, Tx) + (l, Rx) = (T′l, x) + (R′l, x) = ((T′ + R′)l, x) for every x ∈ X. Therefore (T + R)′l = (T′ + R′)l. Letting l run through every element of U′, we conclude (T + R)′ = T′ + R′.

Proof. Suppose T is an isomorphism from X to U; then T^{-1} is a well-defined linear map. We first show T′ is an isomorphism from U′ to X′. Indeed, if l ∈ U′ is such that T′l = 0, then for any x ∈ X, 0 = (T′l, x) = (l, Tx). As x varies and goes through every element of X, Tx goes through every element of U, so l = 0 and T′ is one-to-one. For any given m ∈ X′, define l = mT^{-1}; then l ∈ U′. For any x ∈ X, we have (m, x) = (m, T^{-1}(Tx)) = (l, Tx) = (T′l, x). Since x is arbitrary, m = T′l, and T′ is therefore onto. Combined, we conclude T′ is an isomorphism from U′ to X′, and (T′)^{-1} is hence well-defined. By the identity (ST)′ = T′S′ proved above, (T^{-1})′T′ = (TT^{-1})′ = (id_U)′ = id_{U′} and T′(T^{-1})′ = (T^{-1}T)′ = (id_X)′ = id_{X′}. This shows (T^{-1})′ = (T′)^{-1}.
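The adjoint identities above have a familiar matrix form: with respect to dual bases the adjoint is represented by the transpose. The snippet below is my own illustration, not from the manual; it assumes NumPy and checks (ST)′ = T′S′, (ST)^{-1} = T^{-1}S^{-1}, and (T^{-1})′ = (T′)^{-1} for random 3×3 matrices.

    import numpy as np

    rng = np.random.default_rng(0)
    S = rng.standard_normal((3, 3))
    T = rng.standard_normal((3, 3))          # generically invertible
    print(np.allclose((S @ T).T, T.T @ S.T))                                       # (ST)' = T'S'
    print(np.allclose(np.linalg.inv(S @ T), np.linalg.inv(T) @ np.linalg.inv(S)))  # (ST)^{-1} = T^{-1}S^{-1}
    print(np.allclose(np.linalg.inv(T).T, np.linalg.inv(T.T)))                     # (T^{-1})' = (T')^{-1}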

yang05052002: This is the solution manual for Lax's Linear Algebra and Its Applications.
2014-10-28

aaq1238: Just so-so!!
2014-08-27

学习study: The material is not complete.
2014-07-02

撸起袖子加油干: A great author's book paired with the answers, really wonderful.
2014-06-11

wshaoxin: A very nice find, saved it!
2013-10-26

evogame: Linear Algebra and Its Applications is Peter Lax's celebrated text in linear algebra education, an important reference for scientists and engineers who want to raise their level of linear algebra, and a bridge from finite dimensions to infinite dimensions, that is, toward further study of functional analysis (the same author also wrote Functional Analysis, likewise a classic). This material contains detailed solutions to all the exercises in the main text of the book (except Exercises 2 and 9 of Chapter 8) and is extremely helpful to learners, especially self-learners. Thanks to "ai shu xue" for providing the solutions.
2012-02-09