arXiv:0803.0811v3 [cs.NA] 8 Jan 2009


Subspace Pursuit for Compressive Sensing Signal Reconstruction

Wei Dai and Olgica Milenkovic
Department of Electrical and Computer Engineering
University of Illinois at Urbana-Champaign

Abstract—We propose a new method for reconstruction of sparse signals with and without noisy perturbations, termed the subspace pursuit algorithm. The algorithm has two important characteristics: low computational complexity, comparable to that of orthogonal matching pursuit techniques when applied to very sparse signals, and reconstruction accuracy of the same order as that of LP optimization methods. The presented analysis shows that in the noiseless setting, the proposed algorithm can exactly reconstruct arbitrary sparse signals provided that the sensing matrix satisfies the restricted isometry property with a constant parameter. In the noisy setting, and in the case that the signal is not exactly sparse, it can be shown that the mean squared error of the reconstruction is upper bounded by constant multiples of the measurement and signal perturbation energies.

Index Terms—Compressive sensing, orthogonal matching pursuit, reconstruction algorithms, restricted isometry property, sparse signal reconstruction.

I. INTRODUCTION

Compressive sensing (CS) is a sampling method closely connected to transform coding, which has been widely used in modern communication systems involving large-scale data samples. A transform code converts input signals, embedded in a high-dimensional space, into signals that lie in a space of significantly smaller dimension. Examples of transform coders include the well-known wavelet transforms and the ubiquitous Fourier transform.
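The sparsifying effect of such transforms can be illustrated with a small numpy sketch (the signal, length, and threshold below are illustrative choices, not taken from the paper): a signal that is dense in time may have only a handful of significant coefficients in the Fourier basis.

```python
import numpy as np

# A signal that is dense in the time domain but sparse in the Fourier basis:
# a sum of two sinusoids at integer frequencies has only four significant
# DFT coefficients (the positive- and negative-frequency bins of each tone).
N = 256
t = np.arange(N)
x = np.sin(2 * np.pi * 5 * t / N) + 0.5 * np.sin(2 * np.pi * 20 * t / N)

X = np.fft.fft(x) / N                      # normalized transform coefficients
significant = np.sum(np.abs(X) > 0.01)     # count the significant coefficients

print(significant)  # 4 of 256 coefficients are significant
```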

Compressive sensing techniques perform transform coding successfully whenever applied to so-called compressible and/or K-sparse signals, i.e., signals that can be represented by K ≪ N significant coefficients over an N-dimensional basis. Encoding of a K-sparse, discrete-time signal x of dimension N is accomplished by computing a measurement vector y that consists of m ≪ N linear projections of the vector x. This can be compactly described via

y = Φx.

Here, Φ represents an m × N matrix, usually over the field of real numbers. Within this framework, the projection basis is assumed to be incoherent with the basis in which the signal has a sparse representation [1].
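A minimal sketch of this measurement model, assuming numpy and an i.i.d. Gaussian sensing matrix (a common choice; the dimensions N, m, K below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: a K-sparse signal of length N observed through
# m << N linear projections, y = Phi @ x.
N, m, K = 256, 64, 8

x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)   # K nonzero coordinates
x[support] = rng.standard_normal(K)

Phi = rng.standard_normal((m, N)) / np.sqrt(m)   # i.i.d. Gaussian sensing matrix
y = Phi @ x                                      # m measurements of the N-dim signal

print(y.shape)  # (64,)
```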

Although the reconstruction of the signal x ∈ R^N from the possibly noisy random projections is an ill-posed problem, the strong prior knowledge of signal sparsity allows for recovering x using only m ≪ N projections. One of the outstanding results in CS theory is that the signal x can be reconstructed using optimization strategies aimed at finding the sparsest signal that matches the m projections. In other words, the reconstruction problem can be cast as an l_0-minimization problem [2]. It can be shown that to reconstruct a K-sparse signal x, l_0-minimization requires only m = 2K random projections when the signal and the measurements are noise-free. Unfortunately, the l_0-optimization problem is NP-hard. This issue has led to a large body of work in CS theory and practice centered around the design of measurement and reconstruction algorithms with tractable reconstruction complexity.

This work is supported by NSF Grants CCF 0644427, 0729216 and the DARPA Young Faculty Award of the second author.
Wei Dai and Olgica Milenkovic are with the Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801-2918 USA (e-mail: weidai07@uiuc.edu; milenkov@uiuc.edu).

The work by Donoho and Candès et al. [1], [3], [4], [5] demonstrated that CS reconstruction is, indeed, a polynomial-time problem - albeit under the constraint that more than 2K measurements are used. The key observation behind these findings is that it is not necessary to resort to l_0 optimization to recover x from the under-determined inverse problem; a much easier l_1 optimization, based on Linear Programming (LP) techniques, yields an equivalent solution, as long as the sampling matrix Φ satisfies the so-called restricted isometry property (RIP) with a constant parameter.
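The l_1 problem can be recast as a linear program by the standard splitting x = u − v with u, v ≥ 0 (a textbook reformulation, not code from the paper; dimensions and solver choice below are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# min ||x||_1  s.t.  Phi x = y  is equivalent, with x = u - v and u, v >= 0, to
#   min 1'(u + v)  s.t.  [Phi, -Phi][u; v] = y,  u >= 0, v >= 0.
N, m, K = 60, 30, 4
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

Phi = rng.standard_normal((m, N)) / np.sqrt(m)
y = Phi @ x_true

c = np.ones(2 * N)                     # objective: sum of u and v entries
A_eq = np.hstack([Phi, -Phi])          # equality constraint [Phi, -Phi][u; v] = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:N] - res.x[N:]

# When m is large enough relative to K (RIP regime), x_hat typically
# coincides with x_true; print the deviation rather than assume it.
print(np.max(np.abs(x_hat - x_true)))
```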

While LP techniques play an important role in designing computationally tractable CS decoders, their complexity is still highly impractical for many applications. In such cases, the need for faster decoding algorithms - preferably operating in linear time - is of critical importance, even if one has to increase the number of measurements. Several classes of low-complexity reconstruction techniques were recently put forward as alternatives to LP-based recovery, including group testing methods [6] and algorithms based on belief propagation [7].

Recently, a family of iterative greedy algorithms received significant attention due to their low complexity and simple geometric interpretation. They include the Orthogonal Matching Pursuit (OMP), the Regularized OMP (ROMP), and the Stagewise OMP (StOMP) algorithms. The basic idea behind these methods is to find the support of the unknown signal sequentially. At each iteration of the algorithms, one or several coordinates of the vector x are selected for testing based on the correlation values between the columns of Φ and the regularized measurement vector. If deemed sufficiently reliable, the candidate column indices are subsequently added to the current estimate of the support set of x. The pursuit algorithms iterate this procedure until all the coordinates in