Monday, November 24, 2008

CS: Computing Verifiable Sufficient Conditions for Sparse Signal Recovery via l_1 Minimization.

Anatoli Juditsky and Arkadii Nemirovski have just released the code associated with their paper entitled On Verifiable Sufficient Conditions for Sparse Signal Recovery via l_1 Minimization. To remind you what this is about, let me quote again the abstract of the paper:
We propose novel necessary and sufficient conditions for a sensing matrix to be "$s$-good" -- to allow for exact $\ell_1$-recovery of sparse signals with $s$ nonzero entries when no measurement noise is present. Then we express the error bounds for imperfect $\ell_1$-recovery (nonzero measurement noise, nearly $s$-sparse signal, near-optimal solution of the optimization problem yielding the $\ell_1$-recovery) in terms of the characteristics underlying these conditions. Further, we demonstrate (and this is the principal result of the paper) that these characteristics, although difficult to evaluate, lead to verifiable sufficient conditions for exact sparse $\ell_1$-recovery and to efficiently computable upper bounds on those $s$ for which a given sensing matrix is $s$-good. We establish also instructive links between our approach and the basic concepts of the Compressed Sensing theory, like Restricted Isometry or Restricted Eigenvalue properties.
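For readers who want the recovery program spelled out, the $\ell_1$-recovery the abstract refers to is the usual basis pursuit problem (my notation, not necessarily the paper's):

```latex
\hat{x} \;=\; \arg\min_{x \in \mathbb{R}^n} \|x\|_1
\quad \text{subject to} \quad A x = y ,
```

and the sensing matrix $A$ is $s$-good precisely when this program returns every $s$-sparse $x$ from its noiseless measurements $y = Ax$.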
 The code is located here:


Arkadii Nemirovski also tells me that the code doesn't rely on the MOSEK commercial code anymore.

I note that they mention the following with regard to the other computational approach of Alexandre d'Aspremont and Laurent El Ghaoui (Testing the Nullspace Property using Semidefinite Programming):
Besides, [11] proposes an efficiently computable upper bound on γ^_s(A) based on semidefinite relaxation; this bound is essentially different from our, and it could be interesting to find out if one of these bounds is “stronger” than the other.
First of all, let's call γ^_s(A, β) the Juditsky-Nemirovski constant, or JN constant, in order to differentiate it from the Restricted Isometry Constant. Second, making their code available is sure to go a long way toward comparing these bounds in the future.

Why is this matter important to the Compressive Sensing community?

As explained earlier, much of the current compressive sensing hardware development is "constrained" by the need to map the hardware acquisition process onto measurement matrices known to fulfill the Restricted Isometry Property (take for instance Roummel Marcia and Rebecca Willett's coded aperture architecture in Compressive Coded Aperture Superresolution Image Reconstruction - additional material can be found here while the slides are here -). While that is an interesting exercise in its own right, one could argue that, to be more universal, we need tools that let people check whether their own measurement matrices, multiplexing algorithms, or PDE discretization schemes have the possibility of reconstructing a sparse solution using Linear Programming techniques. In other words, we now have a new tool for evaluating the underdetermined systems that arise in many areas of science and engineering.
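To make the Linear Programming connection concrete, here is a minimal sketch (not the Juditsky-Nemirovski verification code) of how anyone can test a candidate measurement matrix empirically: solve basis pursuit as an LP via the standard split x = u - v and see whether a known sparse signal comes back. The matrix sizes, seed, and function name are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, b):
    """Solve min ||x||_1 subject to Ax = b (basis pursuit) as an LP.

    Split x = u - v with u, v >= 0, so that ||x||_1 = sum(u + v);
    the LP variables are the stacked vector [u; v]."""
    m, n = A.shape
    c = np.ones(2 * n)              # objective: sum of u and v entries
    A_eq = np.hstack([A, -A])       # A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
    u, v = res.x[:n], res.x[n:]
    return u - v

# Demo: a random Gaussian matrix is s-good with high probability for
# small s, so an s-sparse signal is recovered exactly from noiseless
# measurements b = A x0.
rng = np.random.default_rng(0)
m, n, s = 40, 100, 3
A = rng.standard_normal((m, n))
x0 = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x0[support] = rng.standard_normal(s)
b = A @ x0
x_hat = l1_recover(A, b)
print(np.max(np.abs(x_hat - x0)))  # tiny residual when recovery is exact
```

A failed recovery (large residual) for some s-sparse test vector shows the matrix is not s-good, though passing random trials is of course only evidence, not the kind of certificate the paper's verifiable conditions provide.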


Credit: NASA/JPL-Caltech/University of Arizona/Texas A&M, Last photo taken by Phoenix on Mars.
