Tuesday, October 09, 2007

Compressed Sensing: Reweighted L1 and a nice summary on Compressed Sampling


In a previous entry, I noted that Emmanuel Candès had presented a new iterative scheme at IMA that looked sure to improve on the current L1 reconstruction techniques used in Compressed Sensing. The scheme is called reweighted L1, and it is now the subject of a technical report: Enhancing sparsity by reweighted L1 minimization by Emmanuel Candès, Mike Wakin and Stephen Boyd.
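
For readers who want to try the idea, here is a minimal sketch of the reweighting loop under my own assumptions: noiseless equality-constrained recovery, scipy's linprog as the inner weighted-L1 solver, and a fixed eps of 0.1 (the report discusses better choices of eps and stopping rules).

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_min(A, b, w):
    """Solve min_x sum_i w_i |x_i| subject to A x = b, posed as a linear program
    over z = [x, t] with auxiliary variables t_i >= |x_i|."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), w])                # objective: sum_i w_i t_i
    A_ub = np.block([[np.eye(n), -np.eye(n)],           #  x - t <= 0
                     [-np.eye(n), -np.eye(n)]])         # -x - t <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])             # A x = b
    bounds = [(None, None)] * n + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                  bounds=bounds, method="highs")
    return res.x[:n]

def reweighted_l1(A, b, n_iter=5, eps=0.1):
    """Iteratively reweighted L1: start from plain basis pursuit (unit weights),
    then penalize small entries more heavily at each pass."""
    w = np.ones(A.shape[1])
    for _ in range(n_iter):
        x = weighted_l1_min(A, b, w)
        w = 1.0 / (np.abs(x) + eps)
    return x

# toy run: recover a 5-sparse vector of length 128 from 40 random measurements
rng = np.random.default_rng(0)
n, m, k = 128, 40, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
x_hat = reweighted_l1(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))
```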

It details the technique, but it also features a discussion of reconstruction with overcomplete dictionaries, a comparison with the l2-based FOCUSS algorithm, and a brief comparison with Chartrand's work. I noted two items from this paper that I am sure will be developed further:

  • The potential for analysis-based reconstruction to be superior to synthesis-based l1 recovery (the two were shown to differ in the case of overcomplete dictionaries; see [1],[2]); a small sketch contrasting the two formulations follows this list
  • The potential usefulness of nonconvex penalties in sparse signal recovery
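
To make the first point concrete, here is a toy sketch of the two formulations with a redundant dictionary Psi (my own illustration using cvxpy, not code from the paper): synthesis-based recovery looks for sparse coefficients alpha and sets x = Psi alpha, while analysis-based recovery penalizes Psi^T x directly, and the two generally give different answers when Psi is overcomplete.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, p = 64, 24, 128                                  # signal size, measurements, dictionary atoms
Psi = rng.standard_normal((n, p)) / np.sqrt(n)         # redundant (overcomplete) dictionary
A = rng.standard_normal((m, n))                        # measurement matrix
coef = np.zeros(p)
coef[rng.choice(p, 4, replace=False)] = rng.standard_normal(4)
x_true = Psi @ coef                                    # signal sparse in Psi
b = A @ x_true

# synthesis-based l1: recover sparse coefficients, then synthesize x = Psi @ alpha
alpha = cp.Variable(p)
cp.Problem(cp.Minimize(cp.norm1(alpha)), [(A @ Psi) @ alpha == b]).solve()
x_syn = Psi @ alpha.value

# analysis-based l1: work on x itself and penalize its dictionary analysis coefficients
x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(Psi.T @ x)), [A @ x == b]).solve()
x_ana = x.value

print(np.linalg.norm(x_syn - x_ana))                   # nonzero in general: the two recoveries differ
```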

It also looks like there is an upcoming issue of IEEE Signal Processing Magazine on Compressed Sensing, in which Candès and Wakin have written the following: People hearing without listening: an introduction to compressive sampling.

This is a well-written introduction to Compressed Sensing, with particular attention to explaining clearly the Restricted Isometry Property. There is also a good summary of the Analog-to-Information (A/I) work, with a mention of how reweighted L1 is being used in that context. Much of it had already been shown in the IMA videos; now it is on paper. The most fascinating part of the A/I effort is the reason Compressed Sensing is really needed there: the ability to sample effectively at a much higher rate than is currently feasible.
Whereas it may not be possible to digitize an analog signal at a very high rate, it may be quite possible to change its polarity at a high rate. The idea is then to multiply the signal by a pseudo-random sequence of plus and minus ones, integrate the product over time windows, and digitize the integral at the end of each time interval. In a parallel architecture, one has several of these random multiplier-integrator pairs running in parallel, using distinct or even nearly independent pseudo-random sign sequences.
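
As a rough illustration of that architecture (a toy simulation with my own made-up numbers, where a plain sum stands in for the analog integrator), one such channel might look like this:

```python
import numpy as np

def random_demodulator(signal, window, rng):
    """One A/I channel: flip the sign of the high-rate samples with a pseudo-random
    +/-1 chipping sequence, integrate over fixed windows, keep one sample per window."""
    chips = rng.choice([-1.0, 1.0], size=signal.size)              # fast polarity flips
    mixed = signal * chips
    n_win = signal.size // window
    # integrate (here: sum) each window, then "digitize" once per window
    samples = mixed[:n_win * window].reshape(n_win, window).sum(axis=1)
    return samples, chips

# toy run: a frequency-sparse signal and four parallel channels with independent sign sequences
rng = np.random.default_rng(1)
t = np.arange(4096)
x = np.cos(2 * np.pi * 0.05 * t) + 0.5 * np.cos(2 * np.pi * 0.13 * t)
channels = [random_demodulator(x, window=64, rng=rng) for _ in range(4)]
y = np.concatenate([s for s, _ in channels])
print(y.shape)        # 4 * 64 = 256 low-rate measurements instead of 4096 high-rate samples
```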
When reading this, I could not stop thinking about how it could be used in optical interferometry.

References:
[1] M. Elad, P. Milanfar, and R. Rubinstein, "Analysis Versus Synthesis in Signal Priors", Inverse Problems, Vol. 23, No. 3, pp. 947-968, June 2007.

[2] J.-L. Starck, M. Elad, and D.L. Donoho, "Redundant Multiscale Transforms and their Application for Morphological Component Analysis", Advances in Imaging and Electron Physics, Vol. 132, pp. 287-348, 2004.
