Friday, November 02, 2007

Compressed Sensing: Toeplitz, Radar, SAR, Bayesian Approach, Non Convex Minimization, Angle Preserving RIP




There is a new batch of papers showing up on the Compressed Sensing Repository at Rice.

* Toeplitz-structured compressed sensing matrices by Waheed Bajwa, Jarvis Haupt, Gil Raz, Stephen Wright, and Robert Nowak. In the current Compressed Sensing literature, the random matrices used to produce CS measurements are built by drawing every entry independently from a specific probability distribution. This paper shows that Toeplitz-structured matrices, with entries drawn independently from the same distributions, also yield good CS measurement matrices. The abstract reads:
The problem of recovering a sparse signal x ∈ Rn from a relatively small number of its observations of the form y = Ax ∈ Rk, where A is a known matrix and k ≪ n, has recently received a lot of attention under the rubric of compressed sensing (CS) and has applications in many areas of signal processing such as data compression, image processing, dimensionality reduction, etc. Recent work has established that if A is a random matrix with entries drawn independently from certain probability distributions then exact recovery of x from these observations can be guaranteed with high probability. In this paper, we show that Toeplitz-structured matrices with entries drawn independently from the same distributions are also sufficient to recover x from y with high probability, and we compare the performance of such matrices with that of fully independent and identically distributed ones. The use of Toeplitz matrices in CS applications has several potential advantages: (i) they require the generation of only O(n) independent random variables; (ii) multiplication with Toeplitz matrices can be efficiently implemented using fast Fourier transform, resulting in faster acquisition and reconstruction algorithms; and (iii) Toeplitz-structured matrices arise naturally in certain application areas such as system identification.
I wonder how building these matrices would fit into the work of Benjamin Recht, Maryam Fazel, and Pablo Parrilo on minimum rank approximation (Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization).
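To make the structure concrete, here is a small numpy sketch (my own toy illustration, not code from the paper) that builds a partial circulant matrix, a close cousin of the Toeplitz construction, from only O(n) random signs and applies it with the FFT rather than an explicit matrix-vector product:

```python
import numpy as np
from scipy.linalg import circulant

n, k = 256, 64                       # signal length and number of measurements
rng = np.random.default_rng(0)

# Only O(n) random variables (here random +/-1 signs) define the whole operator.
c = rng.choice([-1.0, 1.0], size=n)
A = circulant(c)[:k]                 # keep the first k rows: a k x n partial circulant matrix

# A sparse test signal.
x = np.zeros(n)
x[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)

# Multiplying by a circulant matrix is a circular convolution, so the same
# measurements can be computed with FFTs in O(n log n) instead of O(k n) time.
y_direct = A @ x
y_fft = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))[:k]
print(np.allclose(y_direct, y_fft))  # True
```

The same FFT trick carries over to genuinely Toeplitz (non-circulant) matrices by embedding them into a larger circulant matrix.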

* In Situ Compressive Sensing (short version) by Lawrence Carin, Dehong Liu, and Ya Xue. The abstract reads:
It has been demonstrated that the fields observed at a point in a complex propagation medium may be viewed as a linear combination of the fields from the same source, as radiated into a homogeneous environment. In the context of compressive sensing (CS), the former fields represent the data v and the latter u. Exploiting the compressible nature of most fields, as a function of position and frequency, one may use CS to infer u from v, with the number of required measurements for v small relative to those typically required for u. This has been referred to as in situ CS, because the complex propagation medium is itself exploited to perform the compressive measurements. The basic concept has been demonstrated for the problem of a source radiating in the presence of parallel plates, with encouraging results demonstrated. The linkages between CS and the superresolution properties of time-reversal [8] have been made, and it has been informally asserted that media that are appropriate for time-reversal will likely also be appropriate for in situ CS. There is significant fundamental research required to formalize the requirements on the propagation media to optimize the performance of in situ CS.
* Exact reconstructions of sparse signals via nonconvex minimization by Rick Chartrand is a useful complement to Iteratively reweighted algorithms for compressive sensing by Rick Chartrand and Wotao Yin, which was the subject of a previous entry. The abstract reads:

Several authors have shown recently that it is possible to reconstruct exactly a sparse signal from fewer linear measurements than would be expected from traditional sampling theory. The methods used involve computing the signal of minimum l1 norm among those having the given measurements. We show that by replacing the l1 norm with the lp norm with p less than 1, exact reconstruction is possible with substantially fewer measurements. We give a theorem in this direction, and many numerical examples, both in one complex dimension, and larger scale examples in two real dimensions.

* There is also: Stable sparse approximations via nonconvex optimization by Rayan Saab, Rick Chartrand and Özgür Yilmaz.

We present theoretical results pertaining to the ability of lp minimization to recover sparse and compressible signals from incomplete and noisy measurements. In particular, we extend the results of Candes, Romberg and Tao [1] to the p less than 1 case. Our results indicate that depending on the restricted isometry constants (see, e.g., [2] and [3]) and the noise level, lp minimization with certain values of p less than 1 provides better theoretical guarantees in terms of stability and robustness than l1 minimization does. This is especially true when the restricted isometry constants are relatively large.
The paper takes on the issue of stable recovery of the signal in the presence of noise with the lp approach.
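To give a feel for how the nonconvex lp problem is actually solved, here is a rough numpy sketch of an iteratively reweighted least squares scheme in the spirit of the Chartrand and Chartrand-Yin papers; the epsilon-update rule below is a simplification of theirs, and none of this is their actual code:

```python
import numpy as np

def irls_lp(A, y, p=0.5, iters=200):
    """Rough IRLS sketch for min ||x||_p^p subject to Ax = y, with p < 1."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]       # minimum-l2-norm starting point
    eps = 1.0
    for _ in range(iters):
        # Weights from the current iterate; eps smooths the nonconvex |x_i|^p term.
        q = (x**2 + eps) ** (1 - p / 2)            # Q = diag(q), roughly |x_i|^(2-p)
        x_new = q * (A.T @ np.linalg.solve((A * q) @ A.T, y))
        if np.linalg.norm(x_new - x) < np.sqrt(eps) / 100:
            eps /= 10                              # tighten the smoothing once the iterate settles
        x = x_new
        if eps < 1e-8:
            break
    return x

# Tiny demo: recover an 8-sparse signal of length 128 from 40 Gaussian measurements.
rng = np.random.default_rng(1)
n, k = 128, 40
x0 = np.zeros(n); x0[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
A = rng.standard_normal((k, n)) / np.sqrt(k)
print(np.linalg.norm(irls_lp(A, A @ x0, p=0.5) - x0))   # should be tiny
```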

* Compressive sampling for signal detection by Jarvis Haupt and Robert Nowak. The abstract reads:

Compressive sampling (CS) refers to a generalized sampling paradigm in which observations are inner products between an unknown signal vector and user-specified test vectors. Among the attractive features of CS is the ability to reconstruct any sparse (or nearly sparse) signal from a relatively small number of samples, even when the observations are corrupted by additive noise. However, the potential of CS in other signal processing applications is still not fully known. This paper examines the performance of CS for the problem of signal detection. A generalized restricted isometry property (GRIP) is introduced, which guarantees that angles are preserved, in addition to the usual norm preservation, by CS. The GRIP is leveraged to derive error bounds for a CS matched filtering scheme, and to show that the scheme is robust to signal mismatch.
This approach introduces the GRIP, a restricted isometry property that also preserves angles. I wonder how it will influence results in manifold learning. I can think of many different problems where this is of interest.
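To see what angle preservation buys you, here is a quick numpy sanity check (my own illustration with a Gaussian matrix, not the paper's exact scheme): random projections approximately preserve inner products between sparse signals, so a matched filter can be applied directly to the compressed data.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 1024, 128
A = rng.standard_normal((k, n)) / np.sqrt(k)     # random Gaussian CS matrix

def sparse_unit_signal(s=10):
    x = np.zeros(n)
    x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    return x / np.linalg.norm(x)

# Inner products (hence angles) between sparse signals are roughly preserved.
u, v = sparse_unit_signal(), sparse_unit_signal()
print(np.dot(u, v), np.dot(A @ u, A @ v))        # the two numbers should be close

# Compressive matched filter: correlate the compressed data with the compressed template.
s = sparse_unit_signal()
y = A @ (3.0 * s) + 0.05 * rng.standard_normal(k)   # "signal present" with amplitude 3, plus noise
print(np.dot(y, A @ s))                          # detection statistic, close to <3s, s> = 3
```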

* Detecting signal structure from randomly-sampled data by Frank Boyle, Gerald Fudge, Jarvis Haupt and Robert Nowak. The abstract reads:
Recent theoretical results in Compressive Sensing (CS) show that sparse (or compressible) signals can be accurately reconstructed from a reduced set of linear measurements in the form of projections onto random vectors. The associated reconstruction consists of a nonlinear optimization that requires knowledge of the actual projection vectors. This work demonstrates that random time samples of a data stream could be used to identify certain signal features, even when no time reference is available. Since random sampling suppresses aliasing, a small (sub-Nyquist) set of samples can represent high-bandwidth signals. Simulations were carried out to explore the utility of such a procedure for detecting and classifying signals of interest.
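The claim that random sampling suppresses aliasing is easy to check numerically. In this little numpy experiment (the tone frequency, window length and sample count are arbitrary choices of mine), a 900 Hz tone sampled uniformly at 200 Hz is indistinguishable from its 100 Hz alias, whereas with 200 random samples over the same window the correlation peak sits at the true frequency:

```python
import numpy as np

rng = np.random.default_rng(3)
f0, T, m = 900.0, 1.0, 200            # a 900 Hz tone observed for 1 s with only 200 samples

def spectrum(t, x, freqs):
    # Correlate the samples against candidate tones (a crude periodogram for nonuniform samples).
    return np.abs(np.exp(-2j * np.pi * np.outer(freqs, t)) @ x)

# Uniform sampling at 200 Hz: the 100 Hz alias fits the samples exactly as well as the truth.
t_uni = np.arange(m) * (T / m)
x_uni = np.cos(2 * np.pi * f0 * t_uni)
s_uni = spectrum(t_uni, x_uni, np.array([100.0, 900.0]))
print(s_uni[0] / s_uni[1])            # about 1: the frequency is ambiguous

# Random sampling at the same average rate: alias energy is spread out like noise.
t_rnd = np.sort(rng.uniform(0, T, m))
x_rnd = np.cos(2 * np.pi * f0 * t_rnd)
s_rnd = spectrum(t_rnd, x_rnd, np.array([100.0, 900.0]))
print(s_rnd[0] / s_rnd[1])            # well below 1
print(np.argmax(spectrum(t_rnd, x_rnd, np.arange(1000.0))))   # about 900
```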
* Bayesian compressive sensing by Shihao Ji, Ya Xue, and Lawrence Carin (I covered this one here) and Bayesian Compressive Sensing and Projection Optimization by Shihao Ji and Lawrence Carin.

The abstract reads:
This paper introduces a new problem for which machine-learning tools may make an impact. The problem considered is termed “compressive sensing”, in which a real signal of dimension N is measured accurately based on K ≪ N real measurements. This is achieved under the assumption that the underlying signal has a sparse representation in some basis (e.g., wavelets). In this paper we demonstrate how techniques developed in machine learning, specifically sparse Bayesian regression and active learning, may be leveraged to this new problem. We also point out future research directions in compressive sensing of interest to the machine learning community.
This approach is indeed very interesting, as they seem to obtain faster convergence in the reconstruction stage than the best greedy algorithms, and they also claim sparser results.
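For readers who want to see the mechanics, here is a bare-bones sparse Bayesian learning loop of the Tipping type-II maximum-likelihood flavor applied to a CS problem. It is only a sketch of the general idea (the noise variance is assumed known, there is no pruning, and the numerics are kept deliberately simple); it is not the fast algorithm of the papers above.

```python
import numpy as np

def bayesian_cs_sketch(A, y, sigma2=1e-4, iters=50):
    """Minimal sparse Bayesian learning (RVM-style) loop for CS reconstruction."""
    k, n = A.shape
    alpha = np.ones(n)                               # per-coefficient precision hyperparameters
    for _ in range(iters):
        # Gaussian posterior over x given the current hyperparameters.
        Sigma = np.linalg.inv(A.T @ A / sigma2 + np.diag(alpha))
        mu = Sigma @ A.T @ y / sigma2
        # Type-II ML re-estimation of alpha; a large alpha_i prunes coefficient i toward zero.
        gamma = np.clip(1.0 - alpha * np.diag(Sigma), 1e-10, 1.0)
        alpha = np.minimum(gamma / (mu**2 + 1e-12), 1e10)
    return mu                                        # posterior mean; Sigma supplies "error bars"

# Tiny demo on a synthetic 6-sparse signal.
rng = np.random.default_rng(4)
n, k = 128, 48
x0 = np.zeros(n); x0[rng.choice(n, 6, replace=False)] = rng.standard_normal(6)
A = rng.standard_normal((k, n)) / np.sqrt(k)
y = A @ x0 + 0.01 * rng.standard_normal(k)
print(np.linalg.norm(bayesian_cs_sketch(A, y) - x0))   # should be small
```

A nice by-product of the Bayesian view is that the posterior covariance quantifies the uncertainty of the reconstruction, which is what the projection-optimization paper exploits to choose the next measurement.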


* In a previous entry, we saw that one could do a better job with radars. The paper is now out:
Compressive radar imaging by Richard Baraniuk and Philippe Steeghs.

The abstract reads:
We introduce a new approach to radar imaging based on the concept of compressive sensing (CS). In CS, a low-dimensional, nonadaptive, linear projection is used to acquire an efficient representation of a compressible signal directly using just a few measurements. The signal is then reconstructed by solving an inverse problem either through a linear program or a greedy pursuit. We demonstrate that CS has the potential to make two significant improvements to radar systems: (i) eliminating the need for the pulse compression matched filter at the receiver, and (ii) reducing the required receiver analog-to-digital conversion bandwidth so that it need operate only at the radar reflectivity’s potentially low “information rate” rather than at its potentially high Nyquist rate. These ideas could enable the design of new, simplified radar systems, shifting the emphasis from expensive receiver hardware to smart signal recovery algorithms.

This is the other paper, along with the work of Carin and co-workers (see In Situ Compressive Sensing by Lawrence Carin, Dehong Liu, and Ya Xue above), that solves an integral equation strictly using CS. It aims at reducing the amount of electronics that goes into the processing stage of radars in order to be able to deal with objects of different complexity.

* Fast encoding of synthetic aperture radar raw data using compressed sensing by Sujit Bhattacharya, Thomas Blumensath, Bernard Mulgrew, and Mike Davies.

The abstract reads:
Synthetic Aperture Radar (SAR) is an active and coherent microwave high-resolution imaging system, which has the capability to image in all weather and day-night conditions. SAR transmits chirp signals and the received echoes are sampled into In-phase (I) and Quadrature (Q) components, generally referred to as raw SAR data. The various modes of SAR coupled with the high resolution and wide swath requirements result in a huge amount of data, which will easily exceed the on-board storage and downlink bandwidth of a satellite. This paper addresses the compression of the raw SAR data by sampling the signal below the Nyquist rate using ideas from Compressed Sensing (CS). Due to the low computational resources available onboard a satellite, the idea is to use a simple encoder, with a 2D FFT and a random sampler. Decoding is then based on convex optimization or uses greedy algorithms such as Orthogonal Matching Pursuit (OMP).

It looks like they are using OMP because they deal with images that are inherently large. Since current l1 reconstruction capabilities are limited, one has to resort to greedy algorithms. One wonders how the Lp-L2 approach of Chartrand (see above) would fare...
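The encoder they describe is simple enough to sketch: a 2D FFT followed by a random sampler, with OMP as a greedy decoder. Below is a toy numpy version on a tiny synthetic scene that is sparse in the spatial domain (my own illustration; real raw SAR data is of course not a handful of point reflectors, and the paper works with the chirped I/Q data directly):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 16                                    # tiny N x N "scene" so all matrices stay explicit
n, k = N * N, 80                          # 256 unknowns, 80 compressive measurements

# Sparse scene: a few bright reflectors on a dark background.
scene = np.zeros((N, N))
scene.flat[rng.choice(n, 6, replace=False)] = rng.uniform(1, 2, 6)

# Encoder sketch from the abstract: 2D FFT, then a random sampler keeping k coefficients.
keep = rng.choice(n, k, replace=False)
y = np.fft.fft2(scene).flatten()[keep]

# The equivalent explicit sensing matrix: kept rows of the 2D DFT (Kronecker of two 1D DFTs).
F = np.fft.fft(np.eye(N))
A = np.kron(F, F)[keep]                   # k x n partial 2D DFT matrix

def omp(A, y, sparsity):
    """Plain Orthogonal Matching Pursuit, the greedy decoder mentioned in the paper."""
    r, support = y.copy(), []
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.conj().T @ r))))   # most correlated column
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coeffs                            # update the residual
    x[support] = coeffs
    return x

recovered = omp(A, y, 6).real.reshape(N, N)
print(np.max(np.abs(recovered - scene)))  # should be near zero if the reflectors are found
```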


Reference: Rice Compressed Sensing Repository
Image Credit: NASA/JPL/Space Science Institute, Rings of Saturn taken on June 12, 2007 by the Cassini probe.
