Monday, June 01, 2009

CS: More Benchmarks, A fast algorithm for solutions to underdetermined systems, Adaptive compressed image sensing, Sparse Online Learning and a talk



Following up on the entry promoting a larger set of compressive sensing benchmarks, with the intent of testing the newer reconstruction solvers, I received some additional responses:
Kudos to these two groups!

As mentioned in Deanna Needell's thesis, entitled Topics in Compressed Sensing, randomness can help in certain linear algebra algorithms, as in the Randomized Kaczmarz algorithm. Maybe others could join to become a new random-projection LAPACK. As it turns out, here is one: A fast algorithm for computing minimal-norm solutions to underdetermined systems of linear equations by Mark Tygert. The abstract reads:
We introduce a randomized algorithm for computing the minimal-norm solution to an underdetermined system of linear equations. Given an arbitrary full-rank matrix A_{m×n} with m < n, any vector b_{m×1}, and any positive real number ε less than 1, the procedure computes a vector x_{n×1} approximating to relative precision ε or better the vector p_{n×1} of minimal Euclidean norm satisfying A_{m×n} p_{n×1} = b_{m×1}. The algorithm typically requires O(mn log(√n/ε) + m³) floating-point operations, generally less than the O(m²n) required by the classical schemes based on QR decompositions or bidiagonalization. We present several numerical examples illustrating the performance of the algorithm.
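
As a point of reference for what the randomized algorithm approximates, here is a minimal NumPy sketch of the classical minimal-norm computation; the sizes are arbitrary illustration values, and this is the O(m²n) baseline, not Tygert's method:

```python
import numpy as np

# Classical (non-randomized) minimal-norm solution of an underdetermined
# system A p = b; this is the baseline the randomized algorithm speeds up.
m, n = 50, 500                          # m < n: underdetermined
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n))         # full rank with probability 1
b = rng.standard_normal(m)

# np.linalg.lstsq returns the minimum Euclidean-norm solution when the
# system is underdetermined and consistent.
p, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(A @ p, b)

# Equivalent closed form, p = A^T (A A^T)^{-1} b:
p2 = A.T @ np.linalg.solve(A @ A.T, b)
assert np.allclose(p, p2)
```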

One aspect of Compressive Sensing is the potential for performing manifold processing. Here is a new paper on the subject: Optimal Image Alignment with Random Measurements by Effrosyni Kokiopoulou, Daniel Kressner and Pascal Frossard. The abstract reads:
We consider the problem of image alignment using random measurements. More specifically, this paper is concerned with estimating a transformation that aligns a given reference image with a query image, assuming that not the images themselves but only random measurements are available. According to the theory behind compressed sensing, random projections of signal manifolds nearly preserve pairwise Euclidean distances when the reduced space is sufficiently large. This suggests that image alignment can be performed effectively based on a sufficient number of random measurements. We build on our previous work in order to show that the corresponding objective function can be decomposed as the difference of two convex functions (DC). Thus, the optimization problem becomes equivalent to a DC program that can be solved by an outer-approximation cutting plane method, which always converges to the globally optimal solution.
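
The distance-preservation property the paper leans on is easy to check numerically. Here is a small sketch; the dimensions, point count, and Gaussian measurement matrix are my illustrative choices, not the paper's setup:

```python
import numpy as np

# Random projections approximately preserve pairwise Euclidean distances
# (Johnson-Lindenstrauss flavor), which is what lets alignment objectives
# be evaluated from measurements instead of full images.
rng = np.random.default_rng(1)
d, k, npts = 4096, 256, 20              # ambient dim, measurements, points
X = rng.standard_normal((npts, d))      # stand-ins for vectorized images

Phi = rng.standard_normal((k, d)) / np.sqrt(k)  # random measurement matrix
Y = X @ Phi.T                           # k random measurements per image

def pairwise_dists(Z):
    diff = Z[:, None, :] - Z[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

iu = np.triu_indices(npts, k=1)
ratio = pairwise_dists(Y)[iu] / pairwise_dists(X)[iu]
print(f"distance distortion: min={ratio.min():.3f}, max={ratio.max():.3f}")
# The ratios cluster near 1 once k is large enough.
```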

There is also a new paper on adaptive compressed sensing:

Adaptive compressed image sensing based on wavelet modeling and direct sampling by Shai Dekel, Shay Deutsch and Amir Averbuch. The abstract reads:
We present Adaptive Direct Sampling (ADS), an algorithm for image acquisition and compression which does not require the data to be sampled at its highest resolution. In some cases, our approach simplifies and improves upon the existing methodology of Compressed Sensing (CS), by replacing the ‘universal’ acquisition of pseudo-random measurements with a direct and fast method of adaptive wavelet coefficient acquisition. The main advantages of this direct approach are that the decoding algorithm is significantly faster and that it allows more control over the compressed image quality, in particular, the sharpness of edges.
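
To make the contrast with pseudo-random measurements concrete, here is a toy sketch of direct adaptive wavelet acquisition using PyWavelets. It is not the ADS algorithm itself, and the test image, wavelet, level, and coefficient budget are all arbitrary choices:

```python
import numpy as np
import pywt  # PyWavelets

# Toy version of the idea: acquire the large wavelet coefficients directly
# instead of taking pseudo-random measurements, then invert the transform.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0                 # piecewise-constant image, sharp edges

coeffs = pywt.wavedec2(img, "db4", level=3)
arr, slices = pywt.coeffs_to_array(coeffs)

k = 400                                 # coefficient budget (illustrative)
thresh = np.sort(np.abs(arr).ravel())[-k]
arr_kept = np.where(np.abs(arr) >= thresh, arr, 0.0)

recon = pywt.waverec2(
    pywt.array_to_coeffs(arr_kept, slices, output_format="wavedec2"), "db4")
recon = recon[:img.shape[0], :img.shape[1]]  # waverec2 can overshoot a pixel
err = np.linalg.norm(recon - img) / np.linalg.norm(img)
print(f"relative error keeping ~{k} coefficients: {err:.3f}")
```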
Also, Giuseppe Paleologo reminded me of this paper: Sparse Online Learning via Truncated Gradient by John Langford, Lihong Li and Tong Zhang. The abstract reads:
We propose a general method called truncated gradient to induce sparsity in the weights of online learning algorithms with convex loss functions. This method has several essential properties:
1. The degree of sparsity is continuous—a parameter controls the rate of sparsification from no sparsification to total sparsification.
2. The approach is theoretically motivated, and an instance of it can be regarded as an online counterpart of the popular L1-regularization method in the batch setting. We prove that small rates of sparsification result in only small additional regret with respect to typical online-learning guarantees.
3. The approach works well empirically.
We apply the approach to several data sets and find that, for data sets with large numbers of features, substantial sparsity is discoverable.
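
Here is a small sketch of the core idea: SGD on a convex loss, with the weights pushed toward zero every K steps by a truncation operator. It follows my reading of the paper's T1 operator, and the squared loss and all hyperparameters are illustrative choices, not the paper's experimental setup:

```python
import numpy as np

def truncate(w, alpha, theta):
    # T1-style operator: shrink coefficients whose magnitude is at most
    # theta toward zero by alpha; leave larger coefficients untouched.
    out = w.copy()
    small = np.abs(w) <= theta
    out[small] = np.sign(w[small]) * np.maximum(np.abs(w[small]) - alpha, 0.0)
    return out

def truncated_gradient_sgd(X, y, eta=0.005, g=0.1, K=10, theta=np.inf, epochs=10):
    """Online least squares with truncated-gradient sparsification."""
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in range(n):
            t += 1
            grad = (X[i] @ w - y[i]) * X[i]   # squared-loss gradient
            w -= eta * grad
            if t % K == 0:                    # sparsify every K steps
                w = truncate(w, K * eta * g, theta)
    return w

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 100))
w_true = np.zeros(100)
w_true[:5] = 1.0                              # only 5 relevant features
y = X @ w_true + 0.01 * rng.standard_normal(500)

w = truncated_gradient_sgd(X, y)
print("nonzero weights:", np.count_nonzero(np.abs(w) > 1e-8))
```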

Finally, Yilun Wang, a doctoral candidate at Rice, will give a presentation entitled: Enhanced Compressed Sensing using Iterative Support Detection
Monday, June 22, 2009
10:00 AM to 12:00 PM
3076 Duncan Hall

The abstract: I present a new compressive recovery algorithm aiming to simultaneously achieve a low measurement requirement and fast recovery. The algorithm alternates between detecting partial support information of the original signal and solving the resulting truncated L1 minimization problem, in which the detected components are left out of the L1 norm. I generalize the Null Space Property to a so-called Truncated Null Space Property and exploit it for the theoretical analysis of this truncated L1 minimization algorithm with iterative support detection. Numerical experiments indicate its advantages over many other state-of-the-art algorithms, such as the iterative reweighted L1 minimization algorithm (IRL1) and the iterative reweighted least squares algorithm (IRLS).
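
The alternation is straightforward to prototype. Below is a hedged sketch with CVXPY; the thresholding rule used to detect the support, and all problem sizes and constants, are my illustrative choices rather than anything from the talk:

```python
import cvxpy as cp
import numpy as np

# Sketch of the alternation: solve a truncated L1 problem in which the
# detected components carry zero weight (i.e., are left out of the norm),
# then re-detect the support from the solution, and repeat.
rng = np.random.default_rng(4)
m, n, k = 40, 100, 8
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
idx = rng.choice(n, size=k, replace=False)
x_true[idx] = 3.0 * rng.standard_normal(k)
b = A @ x_true

support = np.zeros(n, dtype=bool)          # detected support, initially empty
for _ in range(4):
    x = cp.Variable(n)
    weights = np.where(support, 0.0, 1.0)  # zero weight = out of the L1 norm
    objective = cp.Minimize(cp.sum(cp.multiply(weights, cp.abs(x))))
    cp.Problem(objective, [A @ x == b]).solve()
    xv = x.value
    support = np.abs(xv) > 0.1 * np.abs(xv).max()  # simple threshold detector

print("relative error:", np.linalg.norm(xv - x_true) / np.linalg.norm(x_true))
```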
