ROBUST SPARSE RECOVERY FOR COMPRESSIVE SENSING IN IMPULSIVE NOISE USING $\ell_p$-NORM MODEL FITTING
Fei Wen^1, Peilin Liu^1, Yipeng Liu^2, Robert C. Qiu^1, Wenxian Yu^1
^1 Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai, China
^2 School of Electronic Engineering, University of Electronic Science and Technology of China, China
{wenfei, liupeilin, wxyu}@sjtu.edu.cn, yipengliu@uestc.edu.cn, rcqiu@sjtu.edu.cn
ABSTRACT
This work considers the robust sparse recovery problem in compressive sensing (CS) in the presence of impulsive measurement noise. We propose a robust formulation for sparse recovery using the generalized $\ell_p$-norm with $0 < p < 2$ as the metric for the residual error under $\ell_1$-norm regularization. An alternating direction method (ADM) is proposed to solve this formulation efficiently. Moreover, a smoothing strategy is used to derive a convergent method for the nonconvex case of $p < 1$. Convergence conditions of the proposed algorithm are provided for both the convex and nonconvex cases. Numerical simulations demonstrate that the new algorithm achieves state-of-the-art robust performance in highly impulsive noise.
Index Terms— Compressive sensing, robust sparse recovery, alternating direction method, $\ell_p$-norm data-fitting
1. INTRODUCTION
Compressive sensing (CS) allows us to acquire sparse signals at a significantly lower rate than classical Nyquist sampling [1]. The CS theory states that if a signal $x \in \mathbb{R}^n$ is sparse, only a small number of linear measurements $y = Ax \in \mathbb{R}^m$ ($m < n$) of the signal suffice to accurately reconstruct it, where $A \in \mathbb{R}^{m \times n}$ is the sensing matrix. Taking the inevitable measurement noise into consideration, the compressed measurements can be modeled as
$$y = Ax + n,$$
where $n \in \mathbb{R}^m$ is additive measurement noise.
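To make this measurement model concrete, the following minimal Python sketch generates a $k$-sparse signal and its noisy compressive measurements. The dimensions, sparsity level, and noise scale are arbitrary illustrative choices, not values from this paper.

```python
import numpy as np

# Illustrative sketch of the CS measurement model y = Ax + n
# (dimensions and noise scale are arbitrary, not from the paper).
rng = np.random.default_rng(0)
n, m, k = 256, 64, 8          # signal length, measurements (m < n), sparsity

x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.standard_normal(k)           # k-sparse signal

A = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian sensing matrix
noise = 0.01 * rng.standard_normal(m)         # additive measurement noise
y = A @ x + noise
```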
In the CS setting, the recovery of $x$ from $y$ is generally ill-posed since $m < n$. However, provided that $x$ is sparse and $A$ satisfies certain stable embedding conditions [2], $x$ can be reliably recovered with an error upper bounded by the noise strength. To reconstruct $x$, the BPDN and LASSO formulations [3], [4] are among the most popular, e.g., LASSO
$$\min_x \; \frac{1}{\mu} \|Ax - y\|_2^2 + \|x\|_1 \qquad (1)$$
where $\mu > 0$ is a regularization parameter.

This work was supported in part by the National Natural Science Foundation of China (NSFC) under grants 61401501, 61472442 and 61171171.
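For reference, (1) can be solved by standard proximal-gradient iterations. The sketch below implements ISTA (iterative soft-thresholding), one common LASSO solver; the helper names `ista_lasso` and `soft_threshold` are illustrative, and this is not the ADM algorithm proposed in this paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_lasso(A, y, mu, n_iter=500):
    """Minimize (1/mu)*||Ax - y||_2^2 + ||x||_1 by iterative soft-thresholding.

    Uses the fixed step size 1/L, where L is the Lipschitz constant of the
    gradient of the quadratic data-fitting term.
    """
    L = 2.0 * np.linalg.norm(A, 2) ** 2 / mu    # Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = (2.0 / mu) * A.T @ (A @ x - y)   # gradient of (1/mu)||Ax-y||^2
        x = soft_threshold(x - grad / L, 1.0 / L)
    return x
```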
As in BPDN, LASSO and many other variants, the $\ell_2$-norm data-fitting model, which is optimal for Gaussian noise in the maximum likelihood sense, is the most widely used one. However, in practical applications, the measurement noise may be of different kinds or combinations. Impulsive noise in measurements may come from missing data in the measurement process, transmission problems [5]–[7], faulty memory locations [8], and buffer overflow [9], and has been addressed in many image and video processing works [10]–[13]. In these cases, the $\ell_2$-norm model is inefficient as it is highly sensitive to outliers in the observations.
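A common way to simulate impulsive measurement noise is a two-component Gaussian mixture. The sketch below (the mixture parameters are illustrative assumptions, not from this paper) shows how a small fraction of high-variance outliers dominates the squared-$\ell_2$ residual while inflating the $\ell_1$ residual far less.

```python
import numpy as np

# Two-component Gaussian mixture model of impulsive noise: with probability
# 1 - c a sample is nominal noise, with probability c it is an outlier whose
# variance is kappa times larger (parameters are illustrative).
rng = np.random.default_rng(1)
m, c, sigma, kappa = 1000, 0.1, 0.01, 1000.0

outlier = rng.random(m) < c
n = sigma * rng.standard_normal(m)
n[outlier] *= np.sqrt(kappa)

# The squared-l2 loss is dominated by the few outliers, while the l1 loss
# grows far more slowly with their amplitude.
print("l2 loss:", np.sum(n**2),
      " outlier share:", np.sum(n[outlier]**2) / np.sum(n**2))
print("l1 loss:", np.sum(np.abs(n)),
      " outlier share:", np.sum(np.abs(n[outlier])) / np.sum(np.abs(n)))
```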
Recently, various robust formulations have been proposed for CS to suppress outliers in the measurements. In [14], the Lorentzian norm has been employed as the metric for the residual error. In [15], the $\ell_1$-norm has been used as the data-fitting model to obtain the robust formulation
$$\min_x \; \frac{1}{\mu} \|Ax - y\|_1 + \|x\|_1. \qquad (2)$$
It has been shown in [15] that, when the measurements contain impulsive noise, the $\ell_1$-loss can result in dramatically better performance than the $\ell_2$-loss. Subsequently, the Huber penalty function has been used to design a robust formulation for sparse recovery in [16].
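Problem (2) is convex and, via a standard reformulation, can be recast as a linear program. The sketch below is one illustrative way to solve it (the splitting $x = x^+ - x^-$ and the helper `l1_l1_lp` are the author's illustration, not the method of [15] or of this paper).

```python
import numpy as np
from scipy.optimize import linprog

def l1_l1_lp(A, y, mu):
    """Solve min_x (1/mu)*||Ax - y||_1 + ||x||_1 as a linear program.

    Standard LP reformulation: x = xp - xm with xp, xm >= 0, and a bound
    t >= |Ax - y| on the residual magnitudes.
    """
    m, n = A.shape
    # Variable vector: [xp (n), xm (n), t (m)]
    c = np.concatenate([np.ones(2 * n), np.ones(m) / mu])
    # Encode  A(xp - xm) - y <= t  and  -(A(xp - xm) - y) <= t.
    A_ub = np.block([[A, -A, -np.eye(m)],
                     [-A, A, -np.eye(m)]])
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    xp, xm = res.x[:n], res.x[n:2 * n]
    return xp - xm
```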
In this work, we use the generalized $\ell_p$-norm, $0 < p < 2$, as the loss function for the residual error, which yields the following robust formulation
$$\min_x \; \frac{1}{\mu} \|Ax - y\|_p^p + \|x\|_1. \qquad (3)$$
When $0 < p < 1$, $\|\cdot\|_p^p$ is the $\ell_p$ quasi-norm, defined in the same manner as for $p \geq 1$, i.e., $\|v\|_p^p = \sum_{i=1}^m |v_i|^p$. The intuition behind utilizing the $\ell_p$-norm loss function is that, compared with the quadratic function, it increases less rapidly when $p < 2$ and, accordingly, is less sensitive to large outliers, especially when $p$ is small.
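The sketch below illustrates this intuition numerically: for a single residual of growing amplitude, the $\ell_p$ penalty with smaller $p$ grows much more slowly than the quadratic one (the sample amplitudes are arbitrary).

```python
import numpy as np

def lp_loss(r, p):
    """Generalized lp loss sum_i |r_i|^p; a quasi-norm for 0 < p < 1."""
    return np.sum(np.abs(r) ** p)

# How each loss penalizes a single residual growing from 1 to 100: the
# smaller p is, the more slowly the penalty grows, hence the weaker the
# influence of a gross outlier on the fit.
for r in (1.0, 10.0, 100.0):
    print(f"r = {r:6.1f}: "
          f"p=2 -> {lp_loss(np.array([r]), 2):10.1f}, "
          f"p=1 -> {lp_loss(np.array([r]), 1):6.1f}, "
          f"p=0.5 -> {lp_loss(np.array([r]), 0.5):5.2f}")
```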
Except for the special case of $p = 1$, problem (3) has not yet been well addressed. When $1 < p < 2$, it can be solved by traditional convex optimization methods such as