is rather natural to look for such adaptive discretizations also in the compressed sensing frame-
work, because one looks for solutions with only few degrees of freedom (associated with the
nonzero entries in u). One might hope to construct methods that slowly increase the support
of the iterates towards the one of the sparsest solution, solving only (low-dimensional) linear
problems on the support of the iterates in each step. Usually such adaptive methods are guided
by a-posteriori error analysis and, as we shall see below, a simple a-posteriori approach to $\ell^1$-minimization indicates that the supremum norm of $A^*(Au - f)$ should be made small, which is exactly what the inverse scale space achieves (examine for instance the right-hand side in (1.9) and the convergence analysis in [BGO$^+$06, BFO$^+$07] reviewed in the next section).
Our motivations suggest that the inverse scale space method applied to the compressed
sensing setup will provide an efficient and adaptive approach when carefully analyzed and im-
plemented, and this is indeed one of our main findings in this paper. We will show that there
exists a discrete set of time steps $t_k$ at which the primal solution of the inverse scale space is changing and where $u_k = u(t_k)$ can be computed by solving low-dimensional systems of linear
equations with nonnegativity constraints (which can often be even omitted in practice). The
evolution of the dual variable is characterized as linear between the discrete time steps and hence
can be computed explicitly as well. Moreover, the discrete time steps can be obtained from the
supremum norm of the dual variable (respectively of the residual). Finally, the behaviour of $p$ also provides the key to the adaptive refinement, i.e. to the small index sets on which $A$ has to be considered and on which the linear equations need to be solved. In addition to the setup in (1.2) we
shall also consider the regularized problem of minimizing the functional in (1.3). This yields an
inverse scale space flow of the form
\[
\partial_t p(t) = -\alpha p(t) + A^*(f - Au(t)), \qquad p(t) \in \partial J(u(t)), \tag{1.10}
\]
which has not yet been investigated in detail in the literature, but for which we can obtain results analogous to the case $\alpha = 0$.
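To make the role of the supremum norm concrete, the following sketch computes the first break time of the flow, assuming (1.9) is the flow (1.10) with $\alpha = 0$ and $J(u) = \|u\|_1$: starting from $u(0) = 0$, $p(0) = 0$, the dual variable grows linearly, $p(t) = t\,A^* f$, until some entry reaches the $\ell^\infty$-bound of the subdifferential. The function name and the NumPy setting are illustrative, not taken from the paper.

```python
import numpy as np

def first_breakpoint(A, f):
    # For J(u) = ||u||_1 and u(0) = 0 the dual variable grows linearly,
    # p(t) = t * A^T f, and the primal solution first changes when some
    # entry reaches |p_i(t)| = 1, i.e. at t_1 = 1 / ||A^T f||_inf.
    g = A.T @ f
    gmax = np.max(np.abs(g))
    t1 = 1.0 / gmax                                        # first break time
    support = np.flatnonzero(np.isclose(np.abs(g), gmax))  # active indices
    return t1, support
```

The active indices returned here are exactly the small index set on which a low-dimensional linear system would then be solved.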
The convergence behavior of the explicit discretization of the inverse scale space flow, also known as linearized Bregman, has been studied before by Osher et al. in [OMD$^+$10], where it
was discovered that the primal variable alternates between quickly converging to a new solution
and stagnating, i.e. staying constant for long times. This fact was used to determine the
stagnation time and immediately ‘kick’ the primal variable to a time where the next change of
solution happens. Hence, ‘kicking’ was the first technique to exploit the discrete nature of the inverse scale space flow. However, as opposed to the method we propose in this paper, linearized Bregman is a
discretization of (1.9). By analyzing the behavior of the subgradient we will be able to solve the
(continuous) flow (1.9) exactly, without discretization of the underlying differential equation. The
times at which the solution changes can be calculated exactly, so that stagnation is entirely
avoided. Any change in the continuous inverse scale space flow solution happens instantaneously
at discrete times.
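The stagnation-and-kicking behaviour described above can be illustrated by the following sketch of a linearized Bregman iteration. The shrinkage step, the stagnation test, and the step-count formula for the kick are a plausible rendering of the idea in [OMD$^+$10], not the authors' exact scheme; all names and parameters are illustrative.

```python
import numpy as np

def shrink(v, mu):
    # Soft-thresholding: componentwise sign(v) * max(|v| - mu, 0).
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearized_bregman_kicking(A, f, mu, delta, n_iter=500, tol=1e-10):
    m, n = A.shape
    v = np.zeros(n)          # dual-like accumulation variable
    u = np.zeros(n)          # primal iterate
    for _ in range(n_iter):
        r = f - A @ u
        if np.linalg.norm(r) < tol:
            break
        w = A.T @ r
        v_new = v + w
        u_new = delta * shrink(v_new, mu)
        if np.allclose(u_new, u):
            # Stagnation: u (hence r and w) would stay constant while v
            # grows linearly.  Jump ahead by the number of plain steps
            # needed for the first inactive entry to cross the threshold.
            inactive = (u == 0) & (w != 0)
            if not inactive.any():
                break
            s = np.ceil((mu * np.sign(w[inactive]) - v_new[inactive]) / w[inactive])
            v_new[inactive] += max(int(s.min()), 1) * w[inactive]
            u_new = delta * shrink(v_new, mu)
        v, u = v_new, u_new
    return u
```

Without the kick the iteration still converges, but it may spend many steps with an unchanged $u$ while $v$ slowly accumulates towards the shrinkage threshold.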
Besides linearized Bregman with kicking our approach is also related to the technique of
orthogonal matching pursuit (OMP) (cf. [PRK93, MZ93, TG07]), which is a greedy method for solving the sparse approximation problem that iteratively adds to the support of $u$ those components whose correlation with the current residual is maximal. To be more precise, the pseudo code for OMP is given in Algorithm 1 below. Although the motivation of OMP is very different from that of inverse scale space methods, we will see that the first iteration of OMP coincides with
the first step of our proposed approach to solve (1.9). We will discuss the differences to OMP
in more detail throughout the paper and will furthermore provide a numerical comparison in
Section 5.
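As a point of reference for the comparison in Section 5, here is a minimal NumPy rendering of the OMP greedy loop, assuming a fixed number of iterations as the stopping rule; Algorithm 1 in the paper may differ in such details.

```python
import numpy as np

def omp(A, f, n_nonzero):
    m, n = A.shape
    support = []                 # active index set
    u = np.zeros(n)
    r = f.astype(float).copy()
    for _ in range(n_nonzero):
        # Greedy step: pick the column most correlated with the residual.
        i = int(np.argmax(np.abs(A.T @ r)))
        if i not in support:
            support.append(i)
        # Refit u by least squares on the current support.
        coef, *_ = np.linalg.lstsq(A[:, support], f, rcond=None)
        u = np.zeros(n)
        u[support] = coef
        r = f - A @ u
    return u
```

The greedy selection step maximizes $|A^* r|$ over the inactive columns, which is why the first OMP iteration picks the same index set as the first break time of the inverse scale space flow.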