### Proper orthogonal decomposition and its applications (Y. C. Liang et al.)

An analysis of the POD principle with detailed proofs, including a proof of the equivalence of the three methods SVD, KLD, and PCA.
The objective of the POD is to find a set of basis vectors that satisfies the following extreme value problem:

$$\min \varepsilon^2(l) = E\{\|x - \hat{x}(l)\|^2\} \quad \text{s.t.} \quad \phi_i^{\mathrm{T}}\phi_j = \delta_{ij}, \; i, j = 1, 2, \ldots, l, \tag{3}$$

where

$$\hat{x}(l) = \sum_{i=1}^{l} y_i \phi_i \quad (l < m). \tag{4}$$

In order to obtain the same form of expressions for the mean-square errors from the three different POD methods, centralization of the processed data is assumed, i.e., the expectation of the random vector $x$ is zero. The three POD methods are introduced in the following three sections respectively.

2.1. THE PRINCIPAL COMPONENT ANALYSIS (PCA)

The PCA is a statistical technique and the idea behind it is quite old. The earliest descriptions of the technique were given by Pearson (1901) and Hotelling (1933). The purpose of the PCA is to identify the dependence structure behind a multivariate stochastic observation in order to obtain a compact description of it. The PCA can be seen equivalently as either a variance-maximization technique or a least-mean-squares technique.

The central idea of the PCA is to reduce the dimensionality of a data set consisting of a large number of interrelated variables, while retaining as much as possible of the variation present in the data set. This is achieved by transforming the original variables into a new set of variables, the principal components, which are uncorrelated and are ordered so that the first few retain most of the variation present in all of the original variables.

There exist different versions of the description of the PCA [20-22]. In order to keep the presentation of the three POD approaches consistent, the method of realizing the POD based on the PCA is given as follows.

Suppose that $x \in \mathbb{R}^m$ is a random vector, and $y_1, y_2, \ldots, y_m \in \mathbb{R}$ are the 1st, 2nd, ..., $m$th principal components respectively. In terms of the requirement of the PCA, let the first principal component $y_1$ be a linear combination of the elements of the original random vector, i.e.,

$$y_1 = \sum_{i=1}^{m} a_{i1} x_i = \alpha_1^{\mathrm{T}} x,$$

where $\alpha_1 = (a_{11}, a_{21}, \ldots, a_{m1})^{\mathrm{T}}$ is a constant vector.
The variance of $y_1$ is

$$s_{y_1}^2 = V(y_1) = E\{(y_1 - E\{y_1\})^2\} = E\{(\alpha_1^{\mathrm{T}} x - E\{\alpha_1^{\mathrm{T}} x\})(\alpha_1^{\mathrm{T}} x - E\{\alpha_1^{\mathrm{T}} x\})^{\mathrm{T}}\} = \alpha_1^{\mathrm{T}} E\{(x - E\{x\})(x - E\{x\})^{\mathrm{T}}\}\alpha_1 = \alpha_1^{\mathrm{T}} \Sigma_x \alpha_1, \tag{5}$$

where $\Sigma_x = E\{(x - E\{x\})(x - E\{x\})^{\mathrm{T}}\}$ is the $m \times m$ covariance matrix corresponding to the random vector $x$ and $E\{x\}$ is the expectation of $x$. From linear algebra, $\Sigma_x \in \mathbb{R}^{m \times m}$ is a positive semi-definite matrix. It is apparent that the maximum of $s_{y_1}^2$ will not be achieved for a finite $\alpha_1$, so a normalization constraint must be imposed. The most convenient constraint is $\alpha_1^{\mathrm{T}}\alpha_1 = 1$. The problem of finding the first principal component is then transformed into a conditional extreme value problem:

$$\max s_{y_1}^2 = \alpha_1^{\mathrm{T}} \Sigma_x \alpha_1 \quad \text{s.t.} \quad \alpha_1^{\mathrm{T}}\alpha_1 = 1. \tag{6}$$

Introducing the Lagrangian multiplier $\lambda_1$ gives

$$L(\alpha_1, \lambda_1) = \alpha_1^{\mathrm{T}} \Sigma_x \alpha_1 + \lambda_1 (1 - \alpha_1^{\mathrm{T}}\alpha_1). \tag{7}$$

Differentiating with respect to $\alpha_1$ yields

$$\frac{\partial L(\alpha_1, \lambda_1)}{\partial \alpha_1} = 2(\Sigma_x - \lambda_1 I)\alpha_1.$$

Letting the right-hand side of the above equation be zero, we have

$$\Sigma_x \alpha_1 = \lambda_1 \alpha_1. \tag{8}$$

It can be seen that the solutions $\lambda_1$ and $\alpha_1$ of the extreme value problem are an eigenvalue and the corresponding eigenvector of the covariance matrix $\Sigma_x$ respectively. Note that $s_{y_1}^2 = \alpha_1^{\mathrm{T}} \Sigma_x \alpha_1 = \lambda_1$, so $\lambda_1$ must be as large as possible. Thus $\lambda_1$ must be selected as the maximum eigenvalue of $\Sigma_x$.

Now let us find the second principal component. Let

$$y_2 = \sum_{i=1}^{m} a_{i2} x_i = \alpha_2^{\mathrm{T}} x, \tag{9}$$

where $\alpha_2 = (a_{12}, a_{22}, \ldots, a_{m2})^{\mathrm{T}}$. The variance of $y_2$ is

$$s_{y_2}^2 = V(y_2) = E\{(y_2 - E\{y_2\})^2\} = E\{(\alpha_2^{\mathrm{T}} x - E\{\alpha_2^{\mathrm{T}} x\})(\alpha_2^{\mathrm{T}} x - E\{\alpha_2^{\mathrm{T}} x\})^{\mathrm{T}}\} = \alpha_2^{\mathrm{T}} \Sigma_x \alpha_2. \tag{10}$$

To find the $\alpha_2$ which enables the maximum $s_{y_2}^2$ to be attained, a normalization constraint $\alpha_2^{\mathrm{T}}\alpha_2 = 1$ is necessary. The second principal component $y_2$ must be uncorrelated with the first principal component $y_1$, thus

$$0 = \operatorname{cov}(y_1, y_2) = E\{(\alpha_1^{\mathrm{T}} x - E\{\alpha_1^{\mathrm{T}} x\})(\alpha_2^{\mathrm{T}} x - E\{\alpha_2^{\mathrm{T}} x\})\}. \tag{11}$$

Using equation (11) and the symmetry of $\Sigma_x$, we have $\alpha_2^{\mathrm{T}} \Sigma_x \alpha_1 = 0$.
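The derivation of equations (5)-(8) can be checked numerically. A minimal sketch with hypothetical synthetic data: after centering the samples, the variance-maximizing unit vector is the eigenvector of the sample covariance matrix with the largest eigenvalue, and the variance of $y_1 = \alpha_1^{\mathrm{T}} x$ equals that eigenvalue.

```python
import numpy as np

# Sketch of equations (5)-(8) on synthetic (hypothetical) data.
rng = np.random.default_rng(1)
m, n = 3, 100_000
X = rng.normal(size=(m, m)) @ rng.normal(size=(m, n))  # raw samples, one per column
X -= X.mean(axis=1, keepdims=True)                     # centralization: sample mean = 0

Sigma = np.cov(X)                                      # m x m covariance matrix Sigma_x
eigvals, eigvecs = np.linalg.eigh(Sigma)               # eigenvalues in ascending order
lam1, alpha1 = eigvals[-1], eigvecs[:, -1]             # largest eigenvalue and eigenvector

y1 = alpha1 @ X                                        # first principal component y_1
print(np.isclose(y1.var(), lam1, rtol=1e-3))           # variance of y_1 equals lambda_1

# No other unit vector achieves a larger sample variance:
a = rng.normal(size=(m,)); a /= np.linalg.norm(a)      # an arbitrary unit vector
print((a @ X).var() <= lam1)                           # True
```

The first check is the statement below equation (8) that $s_{y_1}^2 = \lambda_1$; the second is the variance-maximization property itself.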
Note that $\alpha_1$ is an eigenvector of $\Sigma_x$, thus $\lambda_1 \alpha_2^{\mathrm{T}}\alpha_1 = 0$. If $\lambda_1 = 0$, then because $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_m \ge 0$, it follows that $\lambda_1 = \lambda_2 = \cdots = \lambda_m = 0$, i.e., all the eigenvalues are zero. Note that $\Sigma_x$ is a real symmetric matrix, therefore there exists an orthogonal matrix $P \in \mathbb{R}^{m \times m}$ such that

$$P^{\mathrm{T}} \Sigma_x P = \Lambda = 0. \tag{12}$$

Premultiplying equation (12) by $P$ and postmultiplying the result by $P^{\mathrm{T}}$ give

$$\Sigma_x = P \Lambda P^{\mathrm{T}} = 0. \tag{13}$$

Thus

$$\operatorname{cov}(x_i, x_j) = 0, \quad i, j = 1, 2, \ldots, m, \tag{14}$$

and especially

$$\operatorname{cov}(x_i, x_i) = V(x_i) = E\{(x_i - E\{x_i\})^2\} = 0. \tag{15}$$

This means that the value of each random variable $x_i$ ($i = 1, 2, \ldots, m$) is concentrated at its expectation, so it can be considered as a constant rather than a random variable, and the values of $x_i$ ($i = 1, 2, \ldots, m$) can be replaced completely by their expectations.

If $\lambda_1 > 0$, then $\alpha_2^{\mathrm{T}}\alpha_1 = 0$ must hold, i.e., $\alpha_2$ is orthogonal to $\alpha_1$. Thus, the problem of finding the second component can be transformed into the following extreme value problem:

$$\max s_{y_2}^2 = \alpha_2^{\mathrm{T}} \Sigma_x \alpha_2 \quad \text{s.t.} \quad \alpha_2^{\mathrm{T}}\alpha_2 = 1, \quad \alpha_2^{\mathrm{T}}\alpha_1 = 0. \tag{16}$$

To solve the conditional extreme value problem, the Lagrangian multipliers $\lambda_2$ and $\mu$ are introduced and the Lagrangian function is written as

$$L(\alpha_2, \lambda_2, \mu) = \alpha_2^{\mathrm{T}} \Sigma_x \alpha_2 + \lambda_2 (1 - \alpha_2^{\mathrm{T}}\alpha_2) + \mu \alpha_2^{\mathrm{T}}\alpha_1.$$

Differentiation with respect to $\alpha_2$ gives

$$\frac{\partial L(\alpha_2, \lambda_2, \mu)}{\partial \alpha_2} = 2(\Sigma_x - \lambda_2 I)\alpha_2 + \mu \alpha_1.$$

Letting the right-hand side of the above equation be zero gives

$$2(\Sigma_x - \lambda_2 I)\alpha_2 + \mu \alpha_1 = 0. \tag{17}$$

Multiplying the two sides of equation (17) by $\alpha_1^{\mathrm{T}}$ gives

$$2\alpha_1^{\mathrm{T}} \Sigma_x \alpha_2 - 2\lambda_2 \alpha_1^{\mathrm{T}}\alpha_2 + \mu = 0.$$

Because of the symmetry of $\Sigma_x$ and the fact that $\alpha_1$ is an eigenvector of $\Sigma_x$, we have $\alpha_1^{\mathrm{T}} \Sigma_x \alpha_2 = \lambda_1 \alpha_1^{\mathrm{T}}\alpha_2 = 0$, thus $\mu = 0$. From equation (17) it follows that

$$\Sigma_x \alpha_2 = \lambda_2 \alpha_2. \tag{18}$$

Once again, $\alpha_2$ is an eigenvector of $\Sigma_x$. For the same reason as before, in order to reach the maximum variation of $y_2$, we can only take $\alpha_2$ to be the eigenvector corresponding to the second largest eigenvalue of $\Sigma_x$. Then the variance of $y_2$ is the second largest eigenvalue of $\Sigma_x$.

The remaining principal components can be found in a similar manner. In general, the $i$th principal component of $x$ is $y_i = \alpha_i^{\mathrm{T}} x$ and $s_{y_i}^2 = V(y_i) = \lambda_i$, where $\lambda_i$ is the $i$th largest eigenvalue of $\Sigma_x$ and $\alpha_i$ is the corresponding eigenvector. As stated above, it can be shown that for the third, the fourth, ...
, and the $l$th principal components, the vectors of coefficients $\alpha_3, \alpha_4, \ldots, \alpha_l$ are the eigenvectors of $\Sigma_x$ corresponding to $\lambda_3, \lambda_4, \ldots, \lambda_l$, the third, the fourth, ..., and the $l$th largest eigenvalues respectively.

To sum up, the objective function for finding the optimal basis vectors in the PCA is equivalent to

$$\max \alpha_i^{\mathrm{T}} \Sigma_x \alpha_i \quad \text{s.t.} \quad \alpha_i^{\mathrm{T}}\alpha_j = \delta_{ij}. \tag{19}$$

Then, when the first $l$ principal components are used to approximate the original random vector, the mean-square error is

$$\varepsilon^2(l) = E\{\|x - \hat{x}(l)\|^2\} = E\left\{\left\|\sum_{i=l+1}^{m} y_i \alpha_i\right\|^2\right\} = \sum_{i=l+1}^{m} E\{y_i^2\}, \tag{20}$$

where $x = \sum_{i=1}^{m} y_i \alpha_i$ and $\hat{x}(l) = \sum_{i=1}^{l} y_i \alpha_i$. Note that $E\{y_i\} = E\{\alpha_i^{\mathrm{T}} x\} = \alpha_i^{\mathrm{T}} E\{x\} = 0$; therefore

$$E\{y_i^2\} = E\{(y_i - E\{y_i\})^2\} = \lambda_i. \tag{21}$$

Then the mean-square error is

$$\varepsilon^2(l) = \sum_{i=l+1}^{m} E\{y_i^2\} = \sum_{i=l+1}^{m} \lambda_i. \tag{22}$$

In fact, the original random variables can be expressed exactly by all the principal components. Suppose that all of the principal components $y_i$ ($i = 1, 2, \ldots, m$) are found, i.e., we have

$$y_i = \alpha_i^{\mathrm{T}} x \quad (i = 1, 2, \ldots, m). \tag{23}$$

Premultiplying equation (23) on the two sides by $\alpha_i$ gives $y_i \alpha_i = \alpha_i \alpha_i^{\mathrm{T}} x$ ($i = 1, 2, \ldots, m$). Summation of the equation on the two sides from 1 to $m$ yields

$$\sum_{i=1}^{m} y_i \alpha_i = \left(\sum_{i=1}^{m} \alpha_i \alpha_i^{\mathrm{T}}\right) x,$$

where $\alpha_i \alpha_i^{\mathrm{T}}$ is an $m \times m$ matrix. Denoting

$$B^{(k)} = \alpha_k \alpha_k^{\mathrm{T}} \quad (k = 1, 2, \ldots, m),$$

the element of $B^{(k)}$ is $b_{ij}^{(k)} = a_{ik} a_{jk}$. Let $B = \sum_{k=1}^{m} B^{(k)}$; then the element of $B$ is $b_{ij} = \sum_{k=1}^{m} a_{ik} a_{jk} = \delta_{ij}$, thus $B = I$. In fact, from the orthonormality of the eigenvectors, $(\alpha_1, \alpha_2, \ldots, \alpha_m)(\alpha_1, \alpha_2, \ldots, \alpha_m)^{\mathrm{T}} = I$, it follows that $b_{ij} = \sum_{k=1}^{m} a_{ik} a_{jk} = \delta_{ij}$. Thus $x = \sum_{i=1}^{m} y_i \alpha_i$, where $\alpha_i$ ($i = 1, 2, \ldots, m$) are the eigenvectors of $\Sigma_x$ corresponding to the eigenvalues of $\Sigma_x$ in descending order. Now, the proper orthogonal decomposition of the sampled vector is completed using the PCA: the orthonormal basis vectors are found, and the mean-square error of the approximate expression for the original random data is given.

2.2. THE KARHUNEN-LOÈVE DECOMPOSITION (KLD)

During the 1940s, Karhunen and Loève independently developed a theory regarding optimal series expansions of continuous-time stochastic processes [22, 24]. Their results extend the PCA to the case of infinite-dimensional spaces, such as the space of continuous-time functions.
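The truncation-error result of equation (22) can be verified on synthetic data (the sample matrix and sizes below are hypothetical): reconstructing from the first $l$ principal components leaves a mean-square error equal to the sum of the $m - l$ smallest eigenvalues of the sample covariance matrix.

```python
import numpy as np

# Sketch of equation (22) on synthetic (hypothetical) data.
rng = np.random.default_rng(2)
m, n, l = 5, 200_000, 2
X = rng.normal(size=(m, m)) @ rng.normal(size=(m, n))  # raw samples, one per column
X -= X.mean(axis=1, keepdims=True)                     # centralization, as assumed

Sigma = np.cov(X)
eigvals, eigvecs = np.linalg.eigh(Sigma)
order = np.argsort(eigvals)[::-1]                      # descending eigenvalue order
lam, A = eigvals[order], eigvecs[:, order]             # alpha_i as columns of A

Y = A.T @ X                                            # all components y_i = alpha_i^T x
X_hat = A[:, :l] @ Y[:l]                               # keep only the first l components
mse = np.mean(np.sum((X - X_hat) ** 2, axis=0))        # sample E{||x - x_hat(l)||^2}
print(np.isclose(mse, lam[l:].sum(), rtol=1e-2))       # True: equals sum of discarded eigenvalues
```

With `l = m` the reconstruction is exact, which is the completeness statement around equation (23).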
The KLD analysis uses single-parameter functions instead of vectors, and two-parameter functions representing the autocorrelation instead of matrices. The KLD can easily be extended to discrete-time processes. In terms of optimality, the partial KLD has the same optimal properties of least-squares reconstruction and variance maximization as the PCA.

The discrete KLD is stated as follows. Let $x \in \mathbb{R}^m$ be a random vector, and let $\{\phi_i\}_{i=1}^{m}$ be a set of orthonormal basis vectors in $\mathbb{R}^m$; then there exist $y_i = \phi_i^{\mathrm{T}} x$ such that

$$x = \sum_{i=1}^{m} y_i \phi_i. \tag{25}$$

Approximate $x$ by

$$\hat{x}(l) = \sum_{i=1}^{l} y_i \phi_i + \sum_{i=l+1}^{m} b_i \phi_i \quad (l < m), \tag{26}$$

where $b_i$ ($i = l+1, \ldots, m$) are constants. It can easily be verified that $b_i = 0$ ($i = l+1, \ldots, m$) after the centralization of the samples, i.e., after processing the random vector $x$ such that $E\{x\} = 0$. Let $\Delta x(l) = x - \hat{x}(l) = \sum_{i=l+1}^{m} (y_i - b_i)\phi_i$; since $x$ and $\hat{x}(l)$ are random vectors, $\Delta x(l)$ is also a random vector. In order to examine the quality of the expression of $x$, we choose the mean-square error as a measure, i.e.,

$$\varepsilon^2(l) = E\{\|\Delta x(l)\|^2\} = E\left\{\left\|\sum_{i=l+1}^{m} (y_i - b_i)\phi_i\right\|^2\right\} = \sum_{i=l+1}^{m} E\{(y_i - b_i)^2\}. \tag{27}$$

To enable $\varepsilon^2(l)$ to be a minimum, the derivative of $\varepsilon^2(l)$ with respect to $b_i$ ($i = l+1, l+2, \ldots, m$) is calculated, which yields

$$\frac{\partial \varepsilon^2(l)}{\partial b_i} = -2E\{y_i - b_i\}.$$

Setting the right-hand side of the above equation to zero, we have

$$b_i = E\{y_i\} \quad (i = l+1, l+2, \ldots, m). \tag{28}$$

It can be seen that $b_i = 0$ after the centralization of the samples; then $\hat{x}(l) = \sum_{i=1}^{l} y_i \phi_i$ ($l < m$) is the required form of the POD. To keep the derivation general, substituting equation (28) into equation (27) gives

$$\varepsilon^2(l) = \sum_{i=l+1}^{m} E\{(y_i - E\{y_i\})^2\} = \sum_{i=l+1}^{m} \phi_i^{\mathrm{T}} E\{(x - E\{x\})(x - E\{x\})^{\mathrm{T}}\}\phi_i = \sum_{i=l+1}^{m} \phi_i^{\mathrm{T}} \Sigma_x \phi_i = \operatorname{tr}(\Phi_{m-l}^{\mathrm{T}} \Sigma_x \Phi_{m-l}), \tag{29}$$

where $\Sigma_x = E\{(x - E\{x\})(x - E\{x\})^{\mathrm{T}}\}$ is the covariance matrix of $x$ and $\Phi_{m-l} = [\phi_{l+1}, \phi_{l+2}, \ldots, \phi_m]$. Then the KLD problem is transformed into a conditional extreme value problem:

$$\min \sum_{i=l+1}^{m} \phi_i^{\mathrm{T}} \Sigma_x \phi_i \quad \text{s.t.} \quad \phi_i^{\mathrm{T}}\phi_j = \delta_{ij}, \quad i, j = l+1, l+2, \ldots, m. \tag{30}$$

Introducing Lagrangian multipliers $u_{ij}$ ($i, j = l+1, l+2, \ldots, m$) gives

$$L = \sum_{i=l+1}^{m} \phi_i^{\mathrm{T}} \Sigma_x \phi_i - \sum_{i=l+1}^{m} \sum_{j=l+1}^{m} u_{ij}(\phi_i^{\mathrm{T}}\phi_j - \delta_{ij}).$$

Differentiation with respect to $\phi_i$ on the two sides of the above equation yields

$$\frac{\partial L}{\partial \phi_i} = 2(\Sigma_x \phi_i - \Phi_{m-l} u_i),$$

where $u_i = (u_{l+1,i}, u_{l+2,i}, \ldots, u_{m,i})^{\mathrm{T}}$ ($i = l+1, l+2, \ldots, m$).
Writing the above equation in matrix form gives

$$\frac{\partial L}{\partial \Phi_{m-l}} = 2(\Sigma_x \Phi_{m-l} - \Phi_{m-l} U_{m-l}),$$

where $U_{m-l} = (u_{l+1}, u_{l+2}, \ldots, u_m)$. Letting the right-hand side of the above equation be zero gives

$$\Sigma_x \Phi_{m-l} = \Phi_{m-l} U_{m-l}. \tag{31}$$

It can be seen that all the orthonormal basis vectors satisfying equation (30) must satisfy equation (31), where there are no special constraints on $\Phi_{m-l}$ and $U_{m-l}$. Next, let us prove that any $\Phi_{m-l}$ satisfying equation (31) can be formed from the eigenvectors of $\Sigma_x$, and that $U_{m-l}$ is then the diagonal matrix consisting of the corresponding eigenvalues of $\Sigma_x$.

The above conclusion is proved as follows. Multiplying equation (31) by $\Phi_{m-l}^{\mathrm{T}}$ yields

$$U_{m-l} = \Phi_{m-l}^{\mathrm{T}} \Sigma_x \Phi_{m-l}. \tag{32}$$

Note that $y_i = \phi_i^{\mathrm{T}} x$, so $U_{m-l}$ in equation (32) is the covariance matrix of the vector formed by the last $m-l$ elements of the random vector $y$ after the transformation $y = \Phi^{\mathrm{T}} x$. Thus $U_{m-l}$ is a positive semi-definite matrix of dimensions $(m-l) \times (m-l)$. Let the diagonal matrix formed by the eigenvalues of $U_{m-l}$ be $\Lambda_{m-l}$, and the square matrix formed by the corresponding eigenvectors be $\Psi_{m-l}$. Performing the transformation $z = \Psi_{m-l}^{\mathrm{T}} y$ gives

$$\Lambda_{m-l} = \Psi_{m-l}^{\mathrm{T}} U_{m-l} \Psi_{m-l}. \tag{33}$$

Substituting equation (32) into equation (33) yields

$$\Lambda_{m-l} = (\Phi_{m-l}\Psi_{m-l})^{\mathrm{T}} \Sigma_x (\Phi_{m-l}\Psi_{m-l}). \tag{34}$$

It can be seen that the diagonal elements of $\Lambda_{m-l}$ are $m-l$ eigenvalues of $\Sigma_x$, and the eigenvectors corresponding to these eigenvalues form the matrix $(\Phi_{m-l}\Psi_{m-l})_{m \times (m-l)}$. Denote this eigenvector matrix by $\Phi_{m-l}^{*}$; thus $\Phi_{m-l}^{*} = \Phi_{m-l}\Psi_{m-l}$. Then the mean-square error is

$$\varepsilon^2(l) = \operatorname{tr}(\Phi_{m-l}^{\mathrm{T}} \Sigma_x \Phi_{m-l}) = \operatorname{tr}(\Psi_{m-l} \Phi_{m-l}^{*\mathrm{T}} \Sigma_x \Phi_{m-l}^{*} \Psi_{m-l}^{\mathrm{T}}) = \operatorname{tr}(\Psi_{m-l} \Lambda_{m-l} \Psi_{m-l}^{\mathrm{T}}) = \operatorname{tr}(\Lambda_{m-l}) = \sum_{k=1}^{m-l} \lambda_{s_k}, \tag{35}$$

where $\lambda_{s_k}$ ($k = 1, 2, \ldots, m-l$) are the eigenvalues corresponding to the columns of $\Phi_{m-l}^{*}$.

Once $x$ is mapped onto the $(m-l)$-dimensional subspace spanned by $m-l$ eigenvectors of $\Sigma_x$, further application of an orthonormal transformation does not change the mean-square error. Therefore, $\Phi_{m-l}$ and $U_{m-l}$ in equation (31) can be chosen simply as the matrices formed by the eigenvectors and eigenvalues of $\Sigma_x$ respectively. Let the eigenvalues of $\Sigma_x$ in descending order be $\lambda_1, \lambda_2, \ldots, \lambda_m$, and the corresponding eigenvectors be $\phi_1, \phi_2, \ldots, \phi_m$.
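The invariance argument of equations (32)-(35) can be checked numerically: the error $\operatorname{tr}(\Phi_{m-l}^{\mathrm{T}} \Sigma_x \Phi_{m-l})$ depends only on the subspace spanned by the discarded basis vectors, so rotating them by any orthonormal $\Psi_{m-l}$ leaves it unchanged. A minimal sketch with a hypothetical $\Sigma_x$:

```python
import numpy as np

# Sketch of equations (32)-(35) with a hypothetical covariance matrix.
rng = np.random.default_rng(3)
m, l = 6, 2
A = rng.normal(size=(m, m))
Sigma = A @ A.T                                  # a positive semi-definite Sigma_x

eigvals, eigvecs = np.linalg.eigh(Sigma)         # eigenvalues in ascending order
Phi = eigvecs[:, :m - l]                         # eigenvectors of the m-l smallest eigenvalues

err = np.trace(Phi.T @ Sigma @ Phi)              # mean-square error tr(Phi^T Sigma Phi)
print(np.isclose(err, eigvals[:m - l].sum()))    # True: sum of the discarded eigenvalues

Psi, _ = np.linalg.qr(rng.normal(size=(m - l, m - l)))  # a random orthonormal Psi
Phi_rot = Phi @ Psi                              # rotated basis of the same subspace
print(np.isclose(np.trace(Phi_rot.T @ Sigma @ Phi_rot), err))  # True: error unchanged
```

This is why the eigenvectors of $\Sigma_x$ can be chosen directly in equation (31) without loss of generality.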
It can be seen that in order to solve the minimum value problem, the orthonormal basis vectors can be selected as the eigenvectors of $\Sigma_x$, and the mean-square error of approximating $x$ by using the first $l$ basis vectors is $\varepsilon^2(l) = \sum_{i=l+1}^{m} \lambda_i$.

2.3. THE SINGULAR-VALUE DECOMPOSITION (SVD)

Klema and Laub indicated that the SVD was established for real square matrices in the 1870s by Beltrami and Jordan, for complex square matrices in 1902 by Autonne, and for general rectangular matrices in 1939 by Eckart and Young. The SVD can be viewed as the extension of the eigenvalue decomposition to the case of non-square matrices. As far as the proper orthogonal decomposition is concerned, the SVD can also be seen as an extension to non-symmetric matrices. Because the SVD is much more general than the eigenvalue decomposition and is intimately related to matrix rank and reduced-rank least-squares approximation, it is a very important and fundamental working tool in many areas such as matrix theory, linear systems, statistics, and signal analysis [25-29].

The third method to realize the POD is the SVD, which uses the singular-value decomposition to find the basis vectors satisfying the POD requirement in the sample space. The process for realizing the POD by using the SVD is stated as follows. The basic concept is the same as that which appears in most references, such as [25-29], but we try to use statements which keep the description of the three POD methods consistent.

Suppose that $n$ samples $x_1, x_2, \ldots, x_n$ are given, where $x_i \in \mathbb{R}^m$ ($i = 1, 2, \ldots, n$). Consider the samples to be more than enough such that $n > m$. Let $X = (x_1, x_2, \ldots, x_n)$; then $X \in \mathbb{R}^{m \times n}$, and $XX^{\mathrm{T}} \in \mathbb{R}^{m \times m}$ is a positive semi-definite matrix. Let the eigenvalues of $XX^{\mathrm{T}}$ be arranged in decreasing order as $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_r > \lambda_{r+1} = \cdots = \lambda_m = 0$. In the SVD of matrices, $\sigma_i = \sqrt{\lambda_i}$ ($i = 1, 2, \ldots, m$) are called the singular values of $X$.
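The relation $\sigma_i = \sqrt{\lambda_i}$ between the singular values of $X$ and the eigenvalues of $XX^{\mathrm{T}}$ can be checked with a small synthetic sample matrix (the sizes are hypothetical):

```python
import numpy as np

# Sketch: singular values of X equal the square roots of the eigenvalues of X X^T.
rng = np.random.default_rng(4)
m, n = 4, 10                                 # n > m samples, as assumed in the text
X = rng.normal(size=(m, n))                  # hypothetical sample matrix

lam = np.linalg.eigvalsh(X @ X.T)[::-1]      # eigenvalues of X X^T, descending
sigma = np.sqrt(lam)                         # sigma_i = sqrt(lambda_i)

print(np.allclose(sigma, np.linalg.svd(X, compute_uv=False)))  # True
```

`np.linalg.svd` returns the singular values in descending order, matching the ordering convention used here.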
Let the eigenvectors of $XX^{\mathrm{T}}$ with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_m$ be $v_1, v_2, \ldots, v_m$. Define $V = [V_1, V_2]$, where $V_1 = (v_1, v_2, \ldots, v_r)$, $V_2 = (v_{r+1}, v_{r+2}, \ldots, v_m)$, and the subscript $r$ is the index of the smallest positive eigenvalue of $XX^{\mathrm{T}}$. Then the matrix $V$ is an $m \times m$ orthonormal matrix and we have

$$XX^{\mathrm{T}} V = V\Lambda. \tag{36}$$

Premultiplying equation (36) by $V^{\mathrm{T}}$ gives

$$V^{\mathrm{T}} XX^{\mathrm{T}} V = [V_1, V_2]^{\mathrm{T}} XX^{\mathrm{T}} [V_1, V_2] = \begin{pmatrix} \Sigma_1^2 & 0 \\ 0 & 0 \end{pmatrix}, \tag{37}$$

where $\Sigma_1 = \operatorname{diag}(\sigma_1, \sigma_2, \ldots, \sigma_r)$. Define

$$U_1 = X^{\mathrm{T}} V_1 \Sigma_1^{-1};$$

then we have

$$U_1^{\mathrm{T}} U_1 = (X^{\mathrm{T}} V_1 \Sigma_1^{-1})^{\mathrm{T}} (X^{\mathrm{T}} V_1 \Sigma_1^{-1}) = \Sigma_1^{-1} V_1^{\mathrm{T}} XX^{\mathrm{T}} V_1 \Sigma_1^{-1} = \Sigma_1^{-1} \Sigma_1^2 \Sigma_1^{-1} = I. \tag{38}$$

From equation (38) it can be seen that the columns of the matrix $U_1$ are mutually orthonormal. Denoting $U_1 = (u_1, u_2, \ldots, u_r)$, according to the basis extension theorem in vector spaces, there exist $n-r$ orthonormal vectors in $\mathbb{R}^n$ that are orthogonal to the columns of $U_1$; let these $n-r$ orthonormal vectors be $u_{r+1}, u_{r+2}, \ldots, u_n$. In the singular-value decomposition, $v_1, v_2, \ldots, v_m$ and $u_1, u_2, \ldots, u_n$ are called the left and right singular vectors of $X$ corresponding to the eigenvalues
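The construction in equations (36)-(38) can be sketched numerically (sizes hypothetical; the random $X$ has full row rank almost surely, so $r = m$): building $U_1 = X^{\mathrm{T}} V_1 \Sigma_1^{-1}$ from the eigenvectors of $XX^{\mathrm{T}}$ indeed yields orthonormal columns.

```python
import numpy as np

# Sketch of equations (36)-(38) with a hypothetical sample matrix.
rng = np.random.default_rng(6)
m, n = 4, 10
X = rng.normal(size=(m, n))                  # full row rank almost surely, so r = m

eigvals, V = np.linalg.eigh(X @ X.T)         # eigen-decomposition, ascending order
lam, V1 = eigvals[::-1], V[:, ::-1]          # descending eigenvalues and eigenvectors
Sigma1_inv = np.diag(1.0 / np.sqrt(lam))     # Sigma_1^{-1} = diag(1/sigma_i)

U1 = X.T @ V1 @ Sigma1_inv                   # n x r matrix of right singular vectors
print(np.allclose(U1.T @ U1, np.eye(m)))     # True: columns of U_1 are orthonormal
```

Equation (38) guarantees this result: $U_1^{\mathrm{T}} U_1 = \Sigma_1^{-1} V_1^{\mathrm{T}} XX^{\mathrm{T}} V_1 \Sigma_1^{-1} = I$.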

