$\ell_1$-magic: Recovery of Sparse Signals via Convex Programming
Emmanuel Candès and Justin Romberg, Caltech
October 2005
1 Seven problems
A recent series of papers [3–8] develops a theory of signal recovery from highly incomplete
information. The central results state that a sparse vector $x_0 \in \mathbb{R}^N$ can be
recovered from a small number of linear measurements $b = Ax_0 \in \mathbb{R}^K$, $K \ll N$
(or $b = Ax_0 + e$ when there is measurement noise) by solving a convex program.
As a companion to these papers, this package includes MATLAB code that implements this
recovery procedure in the seven contexts described below. The code is not meant to be
cutting-edge; rather, it is a proof of concept showing that these recovery procedures are
computationally tractable, even for large-scale problems where the number of data points is
in the millions.
The problems fall into two classes: those which can be recast as linear programs (LPs), and
those which can be recast as second-order cone programs (SOCPs). The LPs are solved using
a generic path-following primal-dual method. The SOCPs are solved with a generic log-barrier
algorithm. The implementations follow Chapter 11 of [2].
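To make the log-barrier idea concrete, here is a minimal sketch in Python (not the package's MATLAB solver; the function name `logbarrier_lp`, the toy LP, and all parameter choices are invented for illustration). Each outer stage Newton-centers the barrier objective $t\,c^Tx - \sum_i \log(h_i - g_i^Tx)$ and then increases $t$; the quantity $m/t$ bounds the duality gap, exactly as in Chapter 11 of [2].

```python
import numpy as np

def logbarrier_lp(c, G, h, x, mu=10.0, gap_tol=1e-8):
    """Minimal log-barrier method for: minimize c^T x subject to G x <= h.
    x must be strictly feasible; m/t bounds the duality gap at each stage."""
    m = len(h)
    t = 1.0
    while m / t > gap_tol:
        for _ in range(50):                        # Newton centering steps
            d = h - G @ x                          # slacks (strictly positive)
            grad = t * c + G.T @ (1.0 / d)
            hess = G.T @ ((1.0 / d**2)[:, None] * G)
            step = np.linalg.solve(hess, -grad)
            # backtrack: stay strictly feasible and decrease the barrier objective
            f = lambda z: t * c @ z - np.sum(np.log(h - G @ z))
            s = 1.0
            while (np.any(h - G @ (x + s * step) <= 0)
                   or f(x + s * step) > f(x) + 0.25 * s * (grad @ step)):
                s *= 0.5
                if s < 1e-12:
                    break
            if s < 1e-12:
                break                              # centering has stalled
            x = x + s * step
        t *= mu                                    # tighten the barrier
    return x

# Toy LP: minimize x1 + 2*x2 subject to x >= 0 and x1 + x2 <= 1; minimizer (0, 0).
c = np.array([1.0, 2.0])
G = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
h = np.array([0.0, 0.0, 1.0])
x = logbarrier_lp(c, G, h, x=np.array([0.25, 0.25]))
print(x)  # approaches the vertex (0, 0)
```

The package's solvers follow this same outer/inner structure; the expensive part is the linear solve for the Newton step, which is what the small-scale/large-scale distinction below is about.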
For maximum computational efficiency, the solvers for each of the seven problems are imple-
mented separately. They all have the same basic structure, however, with the computational
bottleneck being the calculation of the Newton step (this is discussed in detail below). The code
can be used in either “small scale” mode, where the system is constructed explicitly and solved
exactly, or in “large scale” mode, where an iterative matrix-free algorithm such as conjugate
gradients (CG) is used to approximately solve the system.
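The "large scale" mode hinges on one observation: CG needs only matrix–vector products, never the matrix itself. A short generic sketch in Python (not the package's implementation; `cg_solve` and the example system are invented here):

```python
import numpy as np

def cg_solve(apply_H, g, tol=1e-10, maxiter=1000):
    """Minimal conjugate gradients for H x = g, where the SPD matrix H is
    available only through the matrix-vector product apply_H (matrix-free)."""
    x = np.zeros_like(g)
    r = g.copy()                  # residual g - H x, with x = 0 initially
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Hp = apply_H(p)
        alpha = rs / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Example: solve the SPD system (A^T A + I) x = A^T b without forming A^T A.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 400))
b = rng.standard_normal(100)
apply_H = lambda v: A.T @ (A @ v) + v
x = cg_solve(apply_H, A.T @ b)
print(np.linalg.norm(apply_H(x) - A.T @ b))  # residual is small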
Our seven problems of interest are:
• Min-$\ell_1$ with equality constraints. The program

  $(P_1) \quad \min \|x\|_1 \quad \text{subject to} \quad Ax = b,$

  also known as basis pursuit, finds the vector with smallest $\ell_1$ norm

  $\|x\|_1 := \sum_i |x_i|$

  that explains the observations b. As the results in [4, 6] show, if a sufficiently sparse $x_0$
  exists such that $Ax_0 = b$, then $(P_1)$ will find it. When x, A, b have real-valued entries, $(P_1)$
  can be recast as an LP (this is discussed in detail in [10]).
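The standard recast introduces auxiliary variables u with $-u \le x \le u$ and minimizes $\mathbf{1}^T u$. A small Python sketch of this (using scipy's generic LP solver rather than the package's primal-dual code; the toy dimensions, sparsity level, and random seed are arbitrary choices for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# Synthetic instance: a 4-sparse x0 in R^60 observed through 30 Gaussian measurements.
rng = np.random.default_rng(1)
N, K, S = 60, 30, 4
x0 = np.zeros(N)
x0[rng.choice(N, S, replace=False)] = rng.standard_normal(S)
A = rng.standard_normal((K, N))
b = A @ x0

# Recast (P1) as an LP in (x, u): minimize 1^T u  s.t.  x - u <= 0, -x - u <= 0, Ax = b.
c = np.concatenate([np.zeros(N), np.ones(N)])
I = np.eye(N)
A_ub = np.block([[I, -I], [-I, -I]])
b_ub = np.zeros(2 * N)
A_eq = np.hstack([A, np.zeros((K, N))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
              bounds=[(None, None)] * N + [(0, None)] * N)
x_hat = res.x[:N]
print(np.max(np.abs(x_hat - x0)))  # recovery error, small up to solver tolerance
```

With these sizes the sparse vector is recovered exactly (up to solver tolerance) with overwhelming probability, illustrating the results of [4, 6].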
• Minimum $\ell_1$ error approximation. Let A be an M × N matrix with full rank. Given
  $y \in \mathbb{R}^M$, the program

  $(P_A) \quad \min_x \|y - Ax\|_1$