Tensor Decompositions and Applications

Tamara G. Kolda and Brett W. Bader

[Uploader's note: on the application of tensors in hyperspectral imaging (HSI); a tensor decomposition.] The widespread use of multisensor technology and the emergence of big data sets have highlighted the limitations of standard flat-view matrix models and the necessity to move toward more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such versatile analyses.
(CP) [38, 90] and Tucker [226] tensor decompositions can be considered to be higher-order generalizations of the matrix singular value decomposition (SVD) and principal component analysis (PCA). In section 3 we discuss the CP decomposition, its connection to tensor rank and tensor border rank, conditions for uniqueness, algorithms and computational issues, and applications. The Tucker decomposition is covered in section 4, where we discuss its relationship to compression, the notion of n-rank, algorithms and computational issues, and applications. Section 5 covers other decompositions, including INDSCAL [38], PARAFAC2 [92], CANDELINC [39], DEDICOM [93], and PARATUCK2 [100], and their applications. Section 6 provides information about software for tensor computations. We summarize our findings in section 7.

2. Notation and Preliminaries. In this review, we have tried to remain as consistent as possible with terminology that would be familiar to applied mathematicians and with the terminology of previous publications in the area of tensor decompositions. The notation used here is very similar to that proposed by Kiers [122]. Other standards have been proposed as well; see Harshman [94] and Harshman and Hong [95].

The order of a tensor is the number of dimensions, also known as ways or modes.³ Vectors (tensors of order one) are denoted by boldface lowercase letters, e.g., a. Matrices (tensors of order two) are denoted by boldface capital letters, e.g., A. Higher-order tensors (order three or higher) are denoted by boldface Euler script letters, e.g., X. Scalars are denoted by lowercase letters, e.g., a.

The ith entry of a vector a is denoted by a_i, element (i,j) of a matrix A is denoted by a_{ij}, and element (i,j,k) of a third-order tensor X is denoted by x_{ijk}. Indices typically range from 1 to their capital version, e.g., i = 1, ..., I. The nth element in a sequence is denoted by a superscript in parentheses; e.g.,
A^(n) denotes the nth matrix in a sequence.

Subarrays are formed when a subset of the indices is fixed. For matrices, these are the rows and columns. A colon is used to indicate all elements of a mode. Thus, the jth column of A is denoted by a_{:j}, and the ith row of a matrix A is denoted by a_{i:}. Alternatively, the jth column of a matrix, a_{:j}, may be denoted more compactly as a_j.

Fibers are the higher-order analogue of matrix rows and columns. A fiber is defined by fixing every index but one. A matrix column is a mode-1 fiber and a matrix row is a mode-2 fiber. Third-order tensors have column, row, and tube fibers, denoted by x_{:jk}, x_{i:k}, and x_{ij:}, respectively; see Figure 2.1. When extracted from the tensor, fibers are always assumed to be oriented as column vectors.

Slices are two-dimensional sections of a tensor, defined by fixing all but two indices. Figure 2.2 shows the horizontal, lateral, and frontal slices of a third-order tensor X, denoted by X_{i::}, X_{:j:}, and X_{::k}, respectively. Alternatively, the kth frontal slice of a third-order tensor, X_{::k}, may be denoted more compactly as X_k.

The norm of a tensor X ∈ R^{I_1 × I_2 × ... × I_N} is the square root of the sum of the squares of all its elements, i.e.,

    \| \mathcal{X} \| = \sqrt{ \sum_{i_1=1}^{I_1} \sum_{i_2=1}^{I_2} \cdots \sum_{i_N=1}^{I_N} x_{i_1 i_2 \cdots i_N}^2 }.

This is analogous to the matrix Frobenius norm, which is denoted \| A \|_F for a matrix A.

³In some fields, the order of the tensor is referred to as the rank of the tensor. In much of the literature and this review, however, the term rank means something quite different; see section 3.1.

Fig. 2.1: Fibers of a third-order tensor: (a) mode-1 (column) fibers x_{:jk}; (b) mode-2 (row) fibers x_{i:k}; (c) mode-3 (tube) fibers x_{ij:}.

Fig. 2.2: Slices of a third-order tensor: (a) horizontal slices X_{i::}; (b) lateral slices X_{:j:}; (c) frontal slices X_{::k} (or X_k).

The inner product of two same-sized tensors X, Y ∈ R^{I_1 × I_2 × ... × I_N} is the sum of the products of their entries, i.e.,

    \langle \mathcal{X}, \mathcal{Y} \rangle = \sum_{i_1=1}^{I_1} \sum_{i_2=1}^{I_2} \cdots \sum_{i_N=1}^{I_N} x_{i_1 i_2 \cdots i_N} \, y_{i_1 i_2 \cdots i_N}.

It follows immediately that \langle \mathcal{X}, \mathcal{X} \rangle = \| \mathcal{X} \|^2.
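The norm and inner product above are straightforward to compute with NumPy, treating an ndarray as an N-way tensor. This is a minimal sketch; the helper names are ours, not from any tensor library:

```python
import numpy as np

def tensor_norm(X):
    """Square root of the sum of squares of all elements of X."""
    return np.sqrt(np.sum(X ** 2))

def tensor_inner(X, Y):
    """Sum of the elementwise products of two same-sized tensors."""
    return np.sum(X * Y)

X = np.arange(24, dtype=float).reshape(3, 4, 2)
# <X, X> = ||X||^2, as noted above
assert np.isclose(tensor_inner(X, X), tensor_norm(X) ** 2)
```

Both reduce over all modes at once, so the same two functions work for tensors of any order.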
2.1. Rank-One Tensors. An N-way tensor X ∈ R^{I_1 × I_2 × ... × I_N} is rank one if it can be written as the outer product of N vectors, i.e.,

    \mathcal{X} = \mathbf{a}^{(1)} \circ \mathbf{a}^{(2)} \circ \cdots \circ \mathbf{a}^{(N)}.

The symbol "∘" represents the vector outer product. This means that each element of the tensor is the product of the corresponding vector elements:

    x_{i_1 i_2 \cdots i_N} = a_{i_1}^{(1)} a_{i_2}^{(2)} \cdots a_{i_N}^{(N)} \quad \text{for all } 1 \le i_n \le I_n.

Figure 2.3 illustrates X = a ∘ b ∘ c, a third-order rank-one tensor.

Fig. 2.3: Rank-one third-order tensor, X = a ∘ b ∘ c. The (i,j,k) element of X is given by x_{ijk} = a_i b_j c_k.

2.2. Symmetry and Tensors. A tensor is called cubical if every mode is the same size, i.e., X ∈ R^{I × I × I × ... × I} [49]. A cubical tensor is called supersymmetric (though this term is challenged by Comon et al. [49], who instead prefer just "symmetric") if its elements remain constant under any permutation of the indices. For instance, a three-way tensor X ∈ R^{I × I × I} is supersymmetric if

    x_{ijk} = x_{ikj} = x_{jik} = x_{jki} = x_{kij} = x_{kji} \quad \text{for all } i, j, k = 1, \dots, I.

Tensors can be (partially) symmetric in two or more modes as well. For example, a three-way tensor X ∈ R^{I × I × K} is symmetric in modes one and two if all its frontal slices are symmetric, i.e.,

    \mathbf{X}_k = \mathbf{X}_k^{\mathsf T} \quad \text{for all } k = 1, \dots, K.

Analysis of supersymmetric tensors, which can be shown to be bijectively related to homogeneous polynomials, predates even the work of Hitchcock [105, 106], which was mentioned in the introduction; see [49, 50] for details.

2.3. Diagonal Tensors. A tensor X ∈ R^{I_1 × I_2 × ... × I_N} is diagonal if x_{i_1 i_2 \cdots i_N} ≠ 0 only if i_1 = i_2 = ... = i_N. Figure 2.4 illustrates a cubical tensor with ones along the superdiagonal.

Fig. 2.4: Three-way tensor of size I × I × I with ones along the superdiagonal.

2.4. Matricization: Transforming a Tensor into a Matrix. Matricization, also known as unfolding or flattening, is the process of reordering the elements of an N-way array into a matrix. For instance, a 2 × 3 × 4 tensor can be arranged as a 6 × 4 matrix or a 3 × 8 matrix, and so on. In this review, we consider only the special case of mode-n matricization because it is the only form relevant to our discussion.
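As a quick illustration of the rank-one definition (our own snippet, not code from the paper), a third-order rank-one tensor can be formed from three vectors with `np.einsum`, which makes the elementwise formula explicit:

```python
import numpy as np

# Rank-one tensor X = a ∘ b ∘ c: element (i,j,k) equals a[i]*b[j]*c[k]
a = np.array([1.0, 2.0])
b = np.array([3.0, 4.0, 5.0])
c = np.array([6.0, 7.0])
X = np.einsum('i,j,k->ijk', a, b, c)

assert X.shape == (2, 3, 2)
assert X[1, 2, 0] == a[1] * b[2] * c[0]  # 2 * 5 * 6 = 60
```

The same pattern extends to N vectors by lengthening the subscript string.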
A more general treatment of matricization can be found in Kolda [134]. The mode-n matricization of a tensor X ∈ R^{I_1 × I_2 × ... × I_N} is denoted by X_{(n)} and arranges the mode-n fibers to be the columns of the resulting matrix. Though conceptually simple, the formal notation is clunky. Tensor element (i_1, i_2, ..., i_N) maps to matrix element (i_n, j), where

    j = 1 + \sum_{\substack{k=1 \\ k \ne n}}^{N} (i_k - 1) J_k, \quad \text{with } J_k = \prod_{\substack{m=1 \\ m \ne n}}^{k-1} I_m.

The concept is easier to understand using an example. Let the frontal slices of X ∈ R^{3 × 4 × 2} be

(2.1)    \mathbf{X}_1 = \begin{bmatrix} 1 & 4 & 7 & 10 \\ 2 & 5 & 8 & 11 \\ 3 & 6 & 9 & 12 \end{bmatrix}, \quad
         \mathbf{X}_2 = \begin{bmatrix} 13 & 16 & 19 & 22 \\ 14 & 17 & 20 & 23 \\ 15 & 18 & 21 & 24 \end{bmatrix}.

Then the three mode-n unfoldings are

    \mathbf{X}_{(1)} = \begin{bmatrix} 1 & 4 & 7 & 10 & 13 & 16 & 19 & 22 \\ 2 & 5 & 8 & 11 & 14 & 17 & 20 & 23 \\ 3 & 6 & 9 & 12 & 15 & 18 & 21 & 24 \end{bmatrix},

    \mathbf{X}_{(2)} = \begin{bmatrix} 1 & 2 & 3 & 13 & 14 & 15 \\ 4 & 5 & 6 & 16 & 17 & 18 \\ 7 & 8 & 9 & 19 & 20 & 21 \\ 10 & 11 & 12 & 22 & 23 & 24 \end{bmatrix},

    \mathbf{X}_{(3)} = \begin{bmatrix} 1 & 2 & 3 & 4 & \cdots & 12 \\ 13 & 14 & 15 & 16 & \cdots & 24 \end{bmatrix}.

Different papers sometimes use different orderings of the columns for the mode-n unfolding; compare the above with [63] and [122]. In general, the specific permutation of columns is not important so long as it is consistent across related calculations; see [134] for further details.

Last, we note that it is also possible to vectorize a tensor. Once again the ordering of the elements is not important so long as it is consistent. In the example above, the vectorized version is

    \operatorname{vec}(\mathcal{X}) = \begin{bmatrix} 1 \\ 2 \\ \vdots \\ 24 \end{bmatrix}.

2.5. Tensor Multiplication: The n-Mode Product. Tensors can be multiplied together, though obviously the notation and symbols for this are much more complex than for matrices. For a full treatment of tensor multiplication see, e.g., Bader and Kolda [16]. Here we consider only the tensor n-mode product, i.e., multiplying a tensor by a matrix (or a vector) in mode n.

The n-mode (matrix) product of a tensor X ∈ R^{I_1 × I_2 × ... × I_N} with a matrix U ∈ R^{J × I_n} is denoted by X ×_n U and is of size I_1 × ... × I_{n-1} × J × I_{n+1} × ... × I_N. Elementwise, we have

    (\mathcal{X} \times_n \mathbf{U})_{i_1 \cdots i_{n-1} \, j \, i_{n+1} \cdots i_N} = \sum_{i_n=1}^{I_n} x_{i_1 i_2 \cdots i_N} \, u_{j i_n}.

Each mode-n fiber is multiplied by the matrix U.
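The mode-n unfolding and vectorization can be reproduced in a few lines of NumPy. This sketch (our own helper, not the paper's software) uses Fortran-order reshapes so that earlier modes vary fastest, matching the column ordering of the example above:

```python
import numpy as np

def unfold(X, n):
    """Mode-n matricization: mode-n fibers become columns, with the
    remaining indices (earlier modes first) varying fastest."""
    return np.reshape(np.moveaxis(X, n, 0), (X.shape[n], -1), order='F')

# The 3 x 4 x 2 example: frontal slices hold 1..12 and 13..24,
# filled down the columns of each slice.
X = np.arange(1, 25).reshape(3, 4, 2, order='F')

assert unfold(X, 0)[0].tolist() == [1, 4, 7, 10, 13, 16, 19, 22]
assert unfold(X, 1)[0].tolist() == [1, 2, 3, 13, 14, 15]
assert unfold(X, 2)[0].tolist() == list(range(1, 13))
assert X.reshape(-1, order='F').tolist() == list(range(1, 25))  # vec(X)
```

Using `order='F'` throughout is one consistent choice of column permutation, in the spirit of the remark above that only consistency matters.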
The idea can also be expressed in terms of unfolded tensors:

    \mathcal{Y} = \mathcal{X} \times_n \mathbf{U} \quad \Longleftrightarrow \quad \mathbf{Y}_{(n)} = \mathbf{U} \mathbf{X}_{(n)}.

The n-mode product of a tensor with a matrix is related to a change of basis in the case when a tensor defines a multilinear operator.

As an example, let X be the tensor defined above in (2.1) and let

    \mathbf{U} = \begin{bmatrix} 1 & 3 & 5 \\ 2 & 4 & 6 \end{bmatrix}.

Then the product Y = X ×_1 U ∈ R^{2 × 4 × 2} is

    \mathbf{Y}_1 = \begin{bmatrix} 22 & 49 & 76 & 103 \\ 28 & 64 & 100 & 136 \end{bmatrix}, \quad
    \mathbf{Y}_2 = \begin{bmatrix} 130 & 157 & 184 & 211 \\ 172 & 208 & 244 & 280 \end{bmatrix}.

A few facts regarding n-mode matrix products are in order. For distinct modes in a series of multiplications, the order of the multiplication is irrelevant, i.e.,

    \mathcal{X} \times_m \mathbf{A} \times_n \mathbf{B} = \mathcal{X} \times_n \mathbf{B} \times_m \mathbf{A} \quad (m \ne n).

If the modes are the same, then

    \mathcal{X} \times_n \mathbf{A} \times_n \mathbf{B} = \mathcal{X} \times_n (\mathbf{B}\mathbf{A}).

The n-mode (vector) product of a tensor X ∈ R^{I_1 × I_2 × ... × I_N} with a vector v ∈ R^{I_n} is denoted by X ×̄_n v. The result is of order N − 1, i.e., the size is I_1 × ... × I_{n-1} × I_{n+1} × ... × I_N. Elementwise,

    (\mathcal{X} \, \bar{\times}_n \, \mathbf{v})_{i_1 \cdots i_{n-1} i_{n+1} \cdots i_N} = \sum_{i_n=1}^{I_n} x_{i_1 i_2 \cdots i_N} \, v_{i_n}.

The idea is to compute the inner product of each mode-n fiber with the vector v. For example, let X be as given in (2.1) and define v = [1 2 3 4]^T. Then

    \mathcal{X} \, \bar{\times}_2 \, \mathbf{v} = \begin{bmatrix} 70 & 190 \\ 80 & 200 \\ 90 & 210 \end{bmatrix}.

When it comes to mode-n vector multiplication, precedence matters because the order of the intermediate results changes. In other words,

    \mathcal{X} \, \bar{\times}_m \, \mathbf{a} \, \bar{\times}_n \, \mathbf{b} = (\mathcal{X} \, \bar{\times}_m \, \mathbf{a}) \, \bar{\times}_{n-1} \, \mathbf{b} = (\mathcal{X} \, \bar{\times}_n \, \mathbf{b}) \, \bar{\times}_m \, \mathbf{a} \quad \text{for } m < n.

See [16] for further discussion of the concepts in this subsection.

2.6. Matrix Kronecker, Khatri-Rao, and Hadamard Products. Several matrix products are important in the sections that follow, so we briefly define them here. The Kronecker product of matrices A ∈ R^{I × J} and B ∈ R^{K × L} is denoted by A ⊗ B. The result is a matrix of size (IK) × (JL) and defined by

    \mathbf{A} \otimes \mathbf{B} =
    \begin{bmatrix}
      a_{11}\mathbf{B} & a_{12}\mathbf{B} & \cdots & a_{1J}\mathbf{B} \\
      a_{21}\mathbf{B} & a_{22}\mathbf{B} & \cdots & a_{2J}\mathbf{B} \\
      \vdots & \vdots & \ddots & \vdots \\
      a_{I1}\mathbf{B} & a_{I2}\mathbf{B} & \cdots & a_{IJ}\mathbf{B}
    \end{bmatrix}
    = [\, \mathbf{a}_1 \otimes \mathbf{b}_1 \;\; \mathbf{a}_1 \otimes \mathbf{b}_2 \;\; \cdots \;\; \mathbf{a}_J \otimes \mathbf{b}_{L-1} \;\; \mathbf{a}_J \otimes \mathbf{b}_L \,].

The Khatri-Rao product [200] is the "matching columnwise" Kronecker product. Given matrices A ∈ R^{I × K} and B ∈ R^{J × K}, their Khatri-Rao product is denoted by A ⊙ B. The result is a matrix of size (IJ) × K defined by

    \mathbf{A} \odot \mathbf{B} = [\, \mathbf{a}_1 \otimes \mathbf{b}_1 \;\; \mathbf{a}_2 \otimes \mathbf{b}_2 \;\; \cdots \;\; \mathbf{a}_K \otimes \mathbf{b}_K \,].

If a and b are vectors, then the Khatri-Rao and Kronecker products are identical, i.e., a ⊗ b = a ⊙ b.

The Hadamard product is the elementwise matrix product. Given matrices A and B, both of size I × J, their Hadamard product is denoted by A ∗ B.
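Using the unfolding identity Y_(n) = U X_(n), the n-mode product can be sketched in NumPy as follows (helper names are ours); the assertions reproduce the worked examples with U and v above:

```python
import numpy as np

def unfold(X, n):
    return np.reshape(np.moveaxis(X, n, 0), (X.shape[n], -1), order='F')

def fold(M, n, shape):
    """Inverse of unfold for a target tensor of the given shape."""
    rest = [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape([shape[n]] + rest, order='F'), 0, n)

def mode_n_product(X, U, n):
    """Y = X x_n U, computed via the unfolded form Y_(n) = U @ X_(n)."""
    shape = list(X.shape)
    shape[n] = U.shape[0]
    return fold(U @ unfold(X, n), n, shape)

X = np.arange(1, 25).reshape(3, 4, 2, order='F')  # the tensor from (2.1)
U = np.array([[1, 3, 5], [2, 4, 6]])
Y = mode_n_product(X, U, 0)
assert Y[:, :, 0].tolist() == [[22, 49, 76, 103], [28, 64, 100, 136]]
assert Y[:, :, 1].tolist() == [[130, 157, 184, 211], [172, 208, 244, 280]]

# n-mode vector product: contract mode n with v (the result drops that mode)
v = np.array([1, 2, 3, 4])
Z = np.tensordot(X, v, axes=([1], [0]))
assert Z.tolist() == [[70, 190], [80, 200], [90, 210]]
```

`np.tensordot` handles the vector case directly because contracting one mode with a vector is exactly the fiber-by-fiber inner product described above.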
The result is also of size I × J and defined by

    \mathbf{A} \ast \mathbf{B} =
    \begin{bmatrix}
      a_{11} b_{11} & a_{12} b_{12} & \cdots & a_{1J} b_{1J} \\
      a_{21} b_{21} & a_{22} b_{22} & \cdots & a_{2J} b_{2J} \\
      \vdots & \vdots & \ddots & \vdots \\
      a_{I1} b_{I1} & a_{I2} b_{I2} & \cdots & a_{IJ} b_{IJ}
    \end{bmatrix}.

These matrix products have properties [200, 227] that will prove useful in our discussions:

         (\mathbf{A} \otimes \mathbf{B})(\mathbf{C} \otimes \mathbf{D}) = \mathbf{A}\mathbf{C} \otimes \mathbf{B}\mathbf{D},
         (\mathbf{A} \otimes \mathbf{B})^\dagger = \mathbf{A}^\dagger \otimes \mathbf{B}^\dagger,
         \mathbf{A} \odot \mathbf{B} \odot \mathbf{C} = (\mathbf{A} \odot \mathbf{B}) \odot \mathbf{C} = \mathbf{A} \odot (\mathbf{B} \odot \mathbf{C}),
         (\mathbf{A} \odot \mathbf{B})^{\mathsf T} (\mathbf{A} \odot \mathbf{B}) = \mathbf{A}^{\mathsf T}\mathbf{A} \ast \mathbf{B}^{\mathsf T}\mathbf{B},
(2.2)    (\mathbf{A} \odot \mathbf{B})^\dagger = \big( (\mathbf{A}^{\mathsf T}\mathbf{A}) \ast (\mathbf{B}^{\mathsf T}\mathbf{B}) \big)^\dagger (\mathbf{A} \odot \mathbf{B})^{\mathsf T}.

Here A^† denotes the Moore-Penrose pseudoinverse of A [84].

As an example of the utility of the Kronecker product, consider the following. Let X ∈ R^{I_1 × I_2 × ... × I_N} and A^{(n)} ∈ R^{J_n × I_n} for all n ∈ {1, ..., N}. Then, for any n ∈ {1, ..., N}, we have

    \mathcal{Y} = \mathcal{X} \times_1 \mathbf{A}^{(1)} \times_2 \mathbf{A}^{(2)} \cdots \times_N \mathbf{A}^{(N)} \quad \Longleftrightarrow \quad
    \mathbf{Y}_{(n)} = \mathbf{A}^{(n)} \mathbf{X}_{(n)} \big( \mathbf{A}^{(N)} \otimes \cdots \otimes \mathbf{A}^{(n+1)} \otimes \mathbf{A}^{(n-1)} \otimes \cdots \otimes \mathbf{A}^{(1)} \big)^{\mathsf T}.

See [134] for a proof of this property.

3. Tensor Rank and the CANDECOMP/PARAFAC Decomposition. In 1927, Hitchcock [105, 106] proposed the idea of the polyadic form of a tensor, i.e., expressing a tensor as the sum of a finite number of rank-one tensors; and in 1944 Cattell [40, 41] proposed ideas for parallel proportional analysis and the idea of multiple axes for analysis (circumstances, objects, and features). The concept finally became popular after its third introduction, in 1970 to the psychometrics community, in the form of CANDECOMP (canonical decomposition) by Carroll and Chang [38] and PARAFAC (parallel factors) by Harshman [90]. We refer to the CANDECOMP/PARAFAC decomposition as CP, per Kiers [122]. Möcks [166] independently discovered CP in the context of brain imaging and called it the topographic components model. Table 3.1 summarizes the different names for the CP decomposition.

Table 3.1: Some of the many names for the CP decomposition.

    Name                                         | Proposed by
    ---------------------------------------------|------------------------------
    Polyadic form of a tensor                    | Hitchcock, 1927 [105]
    PARAFAC (parallel factors)                   | Harshman, 1970 [90]
    CANDECOMP or CAND (canonical decomposition)  | Carroll and Chang, 1970 [38]
    Topographic components model                 | Möcks, 1988 [166]
    CP (CANDECOMP/PARAFAC)                       | Kiers, 2000 [122]

Fig. 3.1: CP decomposition of a three-way array.

The CP decomposition factorizes a tensor into a sum of component rank-one tensors.
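These products map directly onto NumPy: `np.kron` is the Kronecker product, `A * B` is the Hadamard product, and the Khatri-Rao product can be written as a one-line einsum (our helper, not a library function). The sketch also checks the property (A ⊙ B)^T (A ⊙ B) = A^T A ∗ B^T B numerically:

```python
import numpy as np

def khatri_rao(A, B):
    """Columnwise Kronecker product: (I*J) x K from I x K and J x K."""
    I, K = A.shape
    J, K2 = B.shape
    assert K == K2, "A and B must have the same number of columns"
    return np.einsum('ik,jk->ijk', A, B).reshape(I * J, K)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((5, 3))
AB = khatri_rao(A, B)

assert AB.shape == (20, 3)
# (A ⊙ B)^T (A ⊙ B) = A^T A * B^T B, with * the Hadamard product
assert np.allclose(AB.T @ AB, (A.T @ A) * (B.T @ B))
# For single columns, the Khatri-Rao and Kronecker products coincide
assert np.allclose(AB[:, :1], np.kron(A[:, :1], B[:, :1]))
```

This identity is why the Khatri-Rao product appears in the normal equations of CP fitting: the Gram matrix of A ⊙ B never has to be formed at full (IJ) × (IJ) size.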
For example, given a third-order tensor X ∈ R^{I × J × K}, we wish to write it as

(3.1)    \mathcal{X} \approx \sum_{r=1}^{R} \mathbf{a}_r \circ \mathbf{b}_r \circ \mathbf{c}_r,

where R is a positive integer and a_r ∈ R^I, b_r ∈ R^J, and c_r ∈ R^K for r = 1, ..., R. Elementwise, (3.1) is written as

    x_{ijk} \approx \sum_{r=1}^{R} a_{ir} b_{jr} c_{kr} \quad \text{for } i = 1, \dots, I, \; j = 1, \dots, J, \; k = 1, \dots, K.

This is illustrated in Figure 3.1.

The factor matrices refer to the combination of the vectors from the rank-one components, i.e., A = [a_1 a_2 ... a_R] and likewise for B and C. Using these definitions, (3.1) may be written in matricized form (one per mode; see section 2.4):

(3.2)    \mathbf{X}_{(1)} \approx \mathbf{A} (\mathbf{C} \odot \mathbf{B})^{\mathsf T}, \quad
         \mathbf{X}_{(2)} \approx \mathbf{B} (\mathbf{C} \odot \mathbf{A})^{\mathsf T}, \quad
         \mathbf{X}_{(3)} \approx \mathbf{C} (\mathbf{B} \odot \mathbf{A})^{\mathsf T}.

Recall that ⊙ denotes the Khatri-Rao product from section 2.6. The three-way model is sometimes written in terms of the frontal slices of X (see Figure 2.2):

    \mathbf{X}_k \approx \mathbf{A} \mathbf{D}^{(k)} \mathbf{B}^{\mathsf T}, \quad \text{where } \mathbf{D}^{(k)} \equiv \operatorname{diag}(\mathbf{c}_{k:}) \text{ for } k = 1, \dots, K.

Analogous equations can be written for the horizontal and lateral slices. In general, though, slicewise expressions do not easily extend beyond three dimensions. Following Kolda [134] (see also Kruskal [141]), the CP model can be concisely expressed as

    \mathcal{X} \approx [\![ \mathbf{A}, \mathbf{B}, \mathbf{C} ]\!] \equiv \sum_{r=1}^{R} \mathbf{a}_r \circ \mathbf{b}_r \circ \mathbf{c}_r.

It is often useful to assume that the columns of A, B, and C are normalized to length one with the weights absorbed into the vector λ ∈ R^R so that

(3.3)    \mathcal{X} \approx [\![ \boldsymbol{\lambda} ; \mathbf{A}, \mathbf{B}, \mathbf{C} ]\!] \equiv \sum_{r=1}^{R} \lambda_r \, \mathbf{a}_r \circ \mathbf{b}_r \circ \mathbf{c}_r.

We have focused on the three-way case because it is widely applicable and sufficient for many needs. For a general Nth-order tensor X ∈ R^{I_1 × I_2 × ... × I_N}, the CP decomposition is

    \mathcal{X} \approx [\![ \boldsymbol{\lambda} ; \mathbf{A}^{(1)}, \mathbf{A}^{(2)}, \dots, \mathbf{A}^{(N)} ]\!] \equiv \sum_{r=1}^{R} \lambda_r \, \mathbf{a}_r^{(1)} \circ \mathbf{a}_r^{(2)} \circ \cdots \circ \mathbf{a}_r^{(N)},

where λ ∈ R^R and A^{(n)} ∈ R^{I_n × R} for n = 1, ..., N. In this case, the mode-n matricized version is given by

    \mathbf{X}_{(n)} \approx \mathbf{A}^{(n)} \boldsymbol{\Lambda} \big( \mathbf{A}^{(N)} \odot \cdots \odot \mathbf{A}^{(n+1)} \odot \mathbf{A}^{(n-1)} \odot \cdots \odot \mathbf{A}^{(1)} \big)^{\mathsf T},

where Λ = diag(λ).

3.1. Tensor Rank. The rank of a tensor X, denoted rank(X), is defined as the smallest number of rank-one tensors (see section 2.1) that generate X as their sum [105, 141]. In other words, this is the smallest number of components in an exact CP decomposition, where "exact" means that there is equality in (3.1). Hitchcock [105] first proposed this definition of rank in 1927, and Kruskal [141] did so independently 50 years later.
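The CP model and its matricized forms can be checked numerically. In this sketch (helper names are ours, and the unfolding uses Fortran-order column ordering as an assumed convention) we build X = [[A, B, C]] from random factors and confirm the identities X_(1) = A(C ⊙ B)^T and companions:

```python
import numpy as np

def khatri_rao(A, B):
    I, K = A.shape
    J, _ = B.shape
    return np.einsum('ik,jk->ijk', A, B).reshape(I * J, K)

def unfold(X, n):
    return np.reshape(np.moveaxis(X, n, 0), (X.shape[n], -1), order='F')

def cp_to_tensor(A, B, C):
    """X = [[A, B, C]]: x_ijk = sum_r a_ir * b_jr * c_kr."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((s, 2)) for s in (3, 4, 5))
X = cp_to_tensor(A, B, C)

# Matricized forms; exact equalities here because X is built from the factors
assert np.allclose(unfold(X, 0), A @ khatri_rao(C, B).T)
assert np.allclose(unfold(X, 1), B @ khatri_rao(C, A).T)
assert np.allclose(unfold(X, 2), C @ khatri_rao(B, A).T)
```

Weights λ are absorbed here by scaling one factor's columns; a separate diag(λ) term can be inserted exactly as in the matricized formula above.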
For the perspective of tensor rank from an algebraic complexity point of view, see [24, 37, 107, 130] and references therein. An exact CP decomposition with R = rank(X) components is called the rank decomposition.

The definition of tensor rank is an exact analogue to the definition of matrix rank, but the properties of matrix and tensor ranks are quite different. One difference is that the rank of a real-valued tensor may actually be different over R and C. For example, let X be a tensor whose frontal slices are defined by

    \mathbf{X}_1 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \quad \text{and} \quad
    \mathbf{X}_2 = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}.

This is a tensor of rank three over R and rank two over C. The rank decomposition over R is X = [[A, B, C]], where

    \mathbf{A} = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & -1 \end{bmatrix}, \quad
    \mathbf{B} = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix}, \quad \text{and} \quad
    \mathbf{C} = \begin{bmatrix} 1 & 1 & 0 \\ -1 & 1 & 1 \end{bmatrix},

whereas the rank decomposition over C has the following factor matrices instead:

    \mathbf{A} = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ i & -i \end{bmatrix}, \quad
    \mathbf{B} = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ -i & i \end{bmatrix}, \quad \text{and} \quad
    \mathbf{C} = \begin{bmatrix} 1 & 1 \\ i & -i \end{bmatrix}.

This example comes from Kruskal [142], and the proof that it is rank three over R and the methodology for computing the factors can be found in Ten Berge [212]. See also Kruskal [143] for further discussion.

Another major difference between matrix and tensor rank is that (except in special cases such as the example above) there is no straightforward algorithm to determine the rank of a specific given tensor.
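The real-versus-complex rank example is easy to verify numerically. The slices and factor matrices below are our reconstruction of the example (from Kruskal [142]); the assertions confirm that both factorizations reproduce X exactly:

```python
import numpy as np

def cp_to_tensor(A, B, C):
    # x_ijk = sum_r a_ir * b_jr * c_kr
    return np.einsum('ir,jr,kr->ijk', A, B, C)

# Frontal slices of the example tensor
X = np.zeros((2, 2, 2))
X[:, :, 0] = np.eye(2)
X[:, :, 1] = [[0, 1], [-1, 0]]

# Rank-three decomposition over the reals
A = np.array([[1, 0, 1], [0, 1, -1]], dtype=float)
B = np.array([[1, 0, 1], [0, 1, 1]], dtype=float)
C = np.array([[1, 1, 0], [-1, 1, 1]], dtype=float)
assert np.allclose(cp_to_tensor(A, B, C), X)

# Rank-two decomposition over the complex numbers
s = 1 / np.sqrt(2)
Ac = s * np.array([[1, 1], [1j, -1j]])
Bc = s * np.array([[1, 1], [-1j, 1j]])
Cc = np.array([[1, 1], [1j, -1j]])
assert np.allclose(cp_to_tensor(Ac, Bc, Cc), X)
```

The imaginary parts of the two-component reconstruction cancel exactly, which is what lets a real tensor have a shorter decomposition over C.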
