Compressed Sensing Theory and Applications
Introduction to Compressed
Sensing
Mark A. Davenport
Stanford University, Department of Statistics
Marco F. Duarte
Duke University, Department of Computer Science
Yonina C. Eldar
Technion, Israel Institute of Technology, Department of Electrical Engineering
Stanford University, Department of Electrical Engineering (Visiting)
Gitta Kutyniok
University of Osnabrueck, Institute for Mathematics
In recent years, compressed sensing (CS) has attracted considerable attention
in areas of applied mathematics, computer science, and electrical engineering
by suggesting that it may be possible to surpass the traditional limits of sam-
pling theory. CS builds upon the fundamental fact that we can represent many
signals using only a few non-zero coefficients in a suitable basis or dictionary.
Nonlinear optimization can then enable recovery of such signals from very few
measurements. In this chapter, we provide an up-to-date review of the basic
theory underlying CS. After a brief historical overview, we begin with a dis-
cussion of sparsity and other low-dimensional signal models. We then treat the
central question of how to accurately recover a high-dimensional signal from a
small set of measurements and provide performance guarantees for a variety of
sparse recovery algorithms. We conclude with a discussion of some extensions
of the sparse recovery framework. In subsequent chapters of the book, we will
see how the fundamentals presented in this chapter are extended in many excit-
ing directions, including new models for describing structure in both analog and
discrete-time signals, new sensing design techniques, more advanced recovery
results, and emerging applications.
1.1 Introduction
We are in the midst of a digital revolution that is driving the development and
deployment of new kinds of sensing systems with ever-increasing fidelity and
resolution. The theoretical foundation of this revolution is the pioneering work
of Kotelnikov, Nyquist, Shannon, and Whittaker on sampling continuous-time
band-limited signals [162, 195, 209, 247]. Their results demonstrate that signals,
images, videos, and other data can be exactly recovered from a set of uniformly
spaced samples taken at the so-called Nyquist rate of twice the highest frequency
present in the signal of interest. Capitalizing on this discovery, much of signal
processing has moved from the analog to the digital domain and ridden the wave
of Moore’s law. Digitization has enabled the creation of sensing and processing
systems that are more robust, flexible, cheaper and, consequently, more widely
used than their analog counterparts.
As a result of this success, the amount of data generated by sensing systems
has grown from a trickle to a torrent. Unfortunately, in many important and
emerging applications, the resulting Nyquist rate is so high that we end up with
far too many samples. Alternatively, it may simply be too costly, or even physi-
cally impossible, to build devices capable of acquiring samples at the necessary
rate [146, 241]. Thus, despite extraordinary advances in computational power, the
acquisition and processing of signals in application areas such as imaging, video,
medical imaging, remote surveillance, spectroscopy, and genomic data analysis
continues to pose a tremendous challenge.
To address the logistical and computational challenges involved in dealing
with such high-dimensional data, we often depend on compression, which aims
at finding the most concise representation of a signal that is able to achieve
a target level of acceptable distortion. One of the most popular techniques for
signal compression is known as transform coding, and typically relies on finding
a basis or frame that provides sparse or compressible representations for signals
in a class of interest [31, 77, 106]. By a sparse representation, we mean that for
a signal of length n, we can represent it with k ≪ n nonzero coefficients; by a
compressible representation, we mean that the signal is well-approximated by
a signal with only k nonzero coefficients. Both sparse and compressible signals
can be represented with high fidelity by preserving only the values and locations
of the largest coefficients of the signal. This process is called sparse approxima-
tion, and forms the foundation of transform coding schemes that exploit signal
sparsity and compressibility, including the JPEG, JPEG2000, MPEG, and MP3
standards.
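The keep-the-largest-coefficients idea behind transform coding can be sketched in a few lines. The following is an illustrative toy (not any particular codec such as JPEG): the signal, the choice of DCT basis, and the sizes n and k are all assumed example values.

```python
import numpy as np
from scipy.fft import dct, idct

# Sparse approximation sketch: keep only the k largest transform
# coefficients and measure the resulting approximation error.
n, k = 256, 16                           # assumed example sizes

t = np.linspace(0, 1, n)
x = np.cos(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 11 * t)

c = dct(x, norm='ortho')                 # transform coefficients
keep = np.argsort(np.abs(c))[::-1][:k]   # locations of the k largest coefficients
c_k = np.zeros_like(c)
c_k[keep] = c[keep]                      # preserve only values and locations

x_k = idct(c_k, norm='ortho')            # best k-term approximation in this basis
rel_err = np.linalg.norm(x - x_k) / np.linalg.norm(x)
print(f"relative error of the {k}-term approximation: {rel_err:.3f}")
```

Because the test signal is a sum of two slow oscillations, its DCT coefficients decay rapidly, so 16 out of 256 coefficients already give a small relative error; this is exactly the compressibility that transform coding exploits.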
Leveraging the concept of transform coding, compressed sensing (CS) has
emerged as a new framework for signal acquisition and sensor design. CS enables
a potentially large reduction in the sampling and computation costs for sensing
signals that have a sparse or compressible representation. While the Nyquist-
Shannon sampling theorem states that a certain minimum number of samples
is required in order to perfectly capture an arbitrary bandlimited signal, when
the signal is sparse in a known basis we can vastly reduce the number of mea-
surements that need to be stored. Consequently, when sensing sparse signals we
might be able to do better than suggested by classical results. This is the fun-
damental idea behind CS: rather than first sampling at a high rate and then
compressing the sampled data, we would like to find ways to directly sense the
data in a compressed form — i.e., at a lower sampling rate. The field of CS grew
out of the work of Candès, Romberg, and Tao and of Donoho, who showed that
a finite-dimensional signal having a sparse or compressible representation can
be recovered from a small set of linear, nonadaptive measurements [3, 33, 40–
42, 44, 82]. The design of these measurement schemes and their extensions to
practical data models and acquisition systems are central challenges in the field
of CS.
While this idea has only recently gained significant attention in the signal
processing community, there have been hints in this direction dating back as far
as the eighteenth century. In 1795, Prony proposed an algorithm for the estima-
tion of the parameters associated with a small number of complex exponentials
sampled in the presence of noise [201]. The next theoretical leap came in the early
1900s, when Carathéodory showed that a positive linear combination of any k
sinusoids is uniquely determined by its value at t = 0 and at any other 2k points
in time [46, 47]. This represents far fewer samples than the number of Nyquist-
rate samples when k is small and the range of possible frequencies is large. In the
1990’s, this work was generalized by George, Gorodnitsky, and Rao, who studied
sparsity in biomagnetic imaging and other contexts [134–136, 202]. Simultane-
ously, Bresler, Feng, and Venkataramani proposed a sampling scheme for acquir-
ing certain classes of signals consisting of k components with nonzero bandwidth
(as opposed to pure sinusoids) under restrictions on the possible spectral sup-
ports, although exact recovery was not guaranteed in general [29, 117, 118, 237].
In the early 2000s, Blu, Marziliano, and Vetterli developed sampling methods
for certain classes of parametric signals that are governed by only k param-
eters, showing that these signals can be sampled and recovered from just 2k
samples [239].
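The recover-k-components-from-2k-samples claim can be illustrated with a Prony-style annihilating filter, the classical tool underlying both Prony's 1795 algorithm and the parametric sampling results above. This is a hedged toy sketch: the two frequencies, the amplitudes, and the noiseless setting are assumed example values.

```python
import numpy as np

# Prony-style sketch: k = 2 complex exponentials recovered from just
# 2k = 4 samples via an annihilating filter. All parameters are
# made-up example values; real data would require noise handling.
k = 2
freqs = np.array([0.13, 0.37])               # normalized frequencies in [0, 1)
amps = np.array([1.0, 0.5])
z = np.exp(2j * np.pi * freqs)

n = np.arange(2 * k)                         # only 2k = 4 samples
x = (amps * z[None, :] ** n[:, None]).sum(axis=1)

# The annihilating filter h (with leading coefficient 1) satisfies
# x[n] + h[0] x[n-1] + h[1] x[n-2] = 0, because its polynomial roots
# are exactly the signal's exponentials z_i.
T = np.array([[x[1], x[0]],
              [x[2], x[1]]])
h = np.linalg.solve(T, -x[2:4])
z_hat = np.roots(np.concatenate(([1.0], h)))
f_hat = np.mod(np.angle(z_hat) / (2 * np.pi), 1.0)

# Amplitudes follow from a small Vandermonde system on the first k samples.
V = z_hat[None, :] ** np.arange(k)[:, None]
a_hat = np.linalg.solve(V, x[:k])
print("recovered frequencies:", np.sort(f_hat))
```

Four samples determine the four unknowns (two frequencies, two amplitudes) exactly, far below the Nyquist rate that the higher frequency alone would dictate.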
A related problem focuses on recovery of a signal from partial observation of
its Fourier transform. Beurling proposed a method for extrapolating these obser-
vations to determine the entire Fourier transform [22]. One can show that if the
signal consists of a finite number of impulses, then Beurling’s approach will cor-
rectly recover the entire Fourier transform (of this non-bandlimited signal) from
any sufficiently large piece of its Fourier transform. His approach — to find the
signal with smallest ℓ1 norm among all signals agreeing with the acquired Fourier
measurements — bears a remarkable resemblance to some of the algorithms used
in CS.
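Beurling's minimize-the-ℓ1-norm idea can be sketched as a small basis pursuit problem. This is a hedged illustration, not his construction: for simplicity it uses real DCT rows in place of complex Fourier samples, and all sizes and the random seed are assumed toy choices.

```python
import numpy as np
from scipy.fft import dct
from scipy.optimize import linprog

# l1-minimization sketch: recover a spike train from a few
# transform-domain samples by basis pursuit. Sizes are illustrative.
rng = np.random.default_rng(1)
n, m, k = 64, 32, 3

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)  # impulses

F = dct(np.eye(n), axis=0, norm='ortho')     # full DCT matrix (F @ x == dct(x))
rows = rng.choice(n, size=m, replace=False)
A = F[rows]                                  # partial transform measurements
y = A @ x

# min ||x||_1 subject to Ax = y, as a linear program via x = u - v, u, v >= 0
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=[(0, None)] * (2 * n), method="highs")
x_hat = res.x[:n] - res.x[n:]
print("l1 recovery error:", np.linalg.norm(x - x_hat))
```

The linear-program reformulation (splitting x into its positive and negative parts) is the standard way to pose basis pursuit, and the same convex-optimization viewpoint reappears in the CS recovery guarantees later in this chapter.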
More recently, Candès, Romberg, Tao [33, 40–42, 44], and Donoho [82] showed
that a signal having a sparse representation can be recovered exactly from a
small set of linear, nonadaptive measurements. This result suggests that it may
be possible to sense sparse signals by taking far fewer measurements, hence the
name compressed sensing. Note, however, that CS differs from classical sampling
in three important respects. First, sampling theory typically considers infinite
length, continuous-time signals. In contrast, CS is a mathematical theory focused
on measuring finite-dimensional vectors in R^n. Second, rather than sampling the
signal at specific points in time, CS systems typically acquire measurements in
the form of inner products between the signal and more general test functions.
This is in fact in the spirit of modern sampling methods which similarly acquire
signals by more general linear measurements [113, 230]. We will see throughout
this book that randomness often plays a key role in the design of these test
functions. Third, the two frameworks differ in the manner in which they deal
with signal recovery, i.e., the problem of recovering the original signal from the
compressive measurements. In the Nyquist-Shannon framework, signal recovery
is achieved through sinc interpolation — a linear process that requires little
computation and has a simple interpretation. In CS, however, signal recovery is
typically achieved using highly nonlinear methods.[1] See Section 1.6 as well as
the survey in [226] for an overview of these techniques.
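The linear, low-complexity nature of classical Nyquist-Shannon recovery contrasted here can be seen in a short sketch of sinc interpolation. The band limit, the 3 Hz test tone, and the truncation of the (in principle infinite) sinc series are assumed example choices.

```python
import numpy as np

# Classical linear recovery: a bandlimited signal reconstructed from
# uniform Nyquist-rate samples by (truncated) sinc interpolation.
B = 4.0                                   # assumed band limit in Hz
fs = 2 * B                                # Nyquist rate
n = np.arange(-50, 51)                    # finite truncation of the sinc series
samples = np.cos(2 * np.pi * 3 * n / fs)  # uniform samples x[n] = x(n / fs)

t = np.linspace(-1.0, 1.0, 201)           # dense reconstruction grid
# Shannon reconstruction: x(t) = sum_n x[n] sinc(fs * t - n)
x_rec = samples @ np.sinc(fs * t[None, :] - n[:, None])
x_true = np.cos(2 * np.pi * 3 * t)
print("max interpolation error:", np.max(np.abs(x_rec - x_true)))
```

Note that the entire reconstruction is one matrix-vector product: a fixed linear map applied to the samples, with no optimization involved, which is precisely the contrast with the nonlinear recovery methods used in CS.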
CS has already had notable impact on several applications. One example is
medical imaging [178–180, 227], where it has enabled speedups by a factor of
seven in pediatric MRI while preserving diagnostic quality [236]. Moreover, the
broad applicability of this framework has inspired research that extends the
CS framework by proposing practical implementations for numerous applica-
tions, including sub-Nyquist sampling systems [125, 126, 186–188, 219, 224, 225,
228], compressive imaging architectures [99, 184, 205], and compressive sensor
networks [7, 72, 141].
The aim of this book is to provide an up-to-date review of some of the impor-
tant results in CS. Many of the results and ideas in the various chapters rely
on the fundamental concepts of CS. Since the focus of the remaining chapters
is on more recent advances, we concentrate here on many of the basic results in
CS that will serve as background material to the rest of the book. Our goal in
this chapter is to provide an overview of the field and highlight some of the key
technical results, which are then more fully explored in subsequent chapters. We
begin with a brief review of the relevant mathematical tools, and then survey
many of the low-dimensional models commonly used in CS, with an emphasis
on sparsity and the union of subspaces models. We next focus attention on the
theory and algorithms for sparse recovery in finite dimensions. To facilitate our
goal of providing both an elementary introduction as well as a comprehensive
overview of many of the results in CS, we provide proofs of some of the more
technical lemmas and theorems in the Appendix.
1.2 Review of Vector Spaces
For much of its history, signal processing has focused on signals produced by
physical systems. Many natural and man-made systems can be modeled as linear.
Thus, it is natural to consider signal models that complement this kind of linear
structure. This notion has been incorporated into modern signal processing by
modeling signals as vectors living in an appropriate vector space. This captures

[1] It is also worth noting that it has recently been shown that nonlinear methods can be used in the context of traditional sampling as well, when the sampling mechanism is nonlinear [105].