Wednesday, November 07, 2007

Compressed Sensing: Extending Reed-Solomon codes.

Mehmet Akçakaya and Vahid Tarokh have written a series of papers in which they use the theoretical framework of algebraic coding/decoding to do compressive sampling. The measurement matrices are what they call Vandermonde frames, which generalize Reed-Solomon codes using Vandermonde matrices. Reconstruction of the signal can be computed using many well-known Reed-Solomon-type decoding algorithms. In the noiseless case, they obtain better recovery guarantees (in terms of sparsity) than previously known.
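To give a flavor of the Reed-Solomon connection, here is a small numerical sketch (my own illustration, not the authors' actual algorithm): a k-sparse signal is measured with a 2k x n Vandermonde matrix whose nodes are roots of unity, and the support is located by a Prony-type step, the same annihilating-polynomial idea that underlies syndrome decoding of Reed-Solomon codes.

```python
import numpy as np

n, k = 32, 3                      # signal length, sparsity
rng = np.random.default_rng(0)

# A k-sparse complex signal.
x = np.zeros(n, dtype=complex)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k) + 1j * rng.standard_normal(k)

# 2k x n Vandermonde measurement matrix: row m holds z_j^m for nodes
# z_j = exp(2*pi*i*j/n) on the unit circle (Reed-Solomon style).
z = np.exp(2j * np.pi * np.arange(n) / n)
V = np.vstack([z ** m for m in range(2 * k)])
y = V @ x                         # 2k measurements ("syndromes")

# Prony / annihilating-filter step: the syndromes satisfy a linear
# recurrence whose monic characteristic polynomial has roots z_j
# exactly at the support nodes.
H = np.array([[y[m + i] for i in range(k)] for m in range(k)])
p = np.linalg.solve(H, -y[k:2 * k])          # low-order coefficients
roots = np.roots(np.concatenate(([1.0], p[::-1])))

# Match each root to the nearest node to locate the support,
# then solve a small least-squares problem for the amplitudes.
found = sorted(int(np.argmin(np.abs(z - r))) for r in roots)
coeffs, *_ = np.linalg.lstsq(V[:, found], y, rcond=None)

x_rec = np.zeros(n, dtype=complex)
x_rec[found] = coeffs
```

Note that only 2k measurements are used, which is the information-theoretic minimum for exact recovery of a k-sparse signal in the noiseless setting.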


Shannon Theoretic Limits on Noisy Compressive Sampling by Mehmet Akçakaya, Vahid Tarokh. The abstract reads:
In this paper, we study the number of measurements required to recover a sparse signal in ${\mathbb C}^M$ with $L$ non-zero coefficients from compressed samples in the presence of noise. For a number of different recovery criteria, we prove that $O(L)$ (an asymptotically linear multiple of $L$) measurements are necessary and sufficient if $L$ grows linearly as a function of $M$. This improves on the existing literature that is mostly focused on variants of a specific recovery algorithm based on convex programming, for which $O(L\log(M-L))$ measurements are required. We also show that $O(L\log(M-L))$ measurements are required in the sublinear regime ($L = o(M)$).

On Sparsity, Redundancy and Quality of Frame Representations by Mehmet Akçakaya, Vahid Tarokh. The abstract reads:

We consider approximations of signals by the elements of a frame in a complex vector space of dimension N and formulate both the noiseless and the noisy sparse representation problems. The noiseless representation problem is to find sparse representations of a signal r given that such representations exist. In this case, we explicitly construct a frame, referred to as the Vandermonde frame, for which the noiseless sparse representation problem can be solved uniquely using O(N^2) operations, as long as the number of non-zero coefficients in the sparse representation of r is εN for some 0 ≤ ε < 0.5, thus improving on a result of Candes and Tao [4]. The noisy sparse representation problem is to find sparse representations of a signal r satisfying a distortion criterion. In this case, we establish a lower bound on the trade-off between the sparsity of the representation, the underlying distortion and the redundancy of any given frame.
A Frame Construction and A Universal Distortion Bound for Sparse Representations by Mehmet Akçakaya, Vahid Tarokh. The abstract reads:
We consider approximations of signals by the elements of a frame in a complex vector space of dimension N and formulate both the noiseless and the noisy sparse representation problems. The noiseless representation problem is to find sparse representations of a signal r given that such representations exist. In this case, we explicitly construct a frame, referred to as the Vandermonde frame, for which the noiseless sparse representation problem can be solved uniquely using O(N^2) operations, as long as the number of non-zero coefficients in the sparse representation of r is εN for some 0 ≤ ε < 0.5. It is known that ε < 0.5 cannot be relaxed without violating uniqueness. The noisy sparse representation problem is to find sparse representations of a signal r satisfying a distortion criterion. In this case, we establish a lower bound on the trade-off between the sparsity of the representation, the underlying distortion and the redundancy of any given frame.

4 comments:

Unknown said...

For a k-sparse signal in the noiseless case, it seems that a deterministic measurement matrix with probability-1 reconstruction of the signal has been found, and the so-called Vandermonde matrix not only is easy to construct but also has the ideal least number of rows, that is, 2k rows, in compressive sensing theory. So, can we say that the deterministic matrix problem for compressive sensing is completely solved (of course, for strictly sparse signals)?

A likely problem is that the decoding algorithm in Mehmet Akcakaya's paper is, of course, only suitable for integer coefficients, for both the matrix and the signal. But maybe this is not even a problem, since everything is quantized in digital systems.

I am not really sure about my understanding above. Excuse me, Igor, could you explain a little more about Mehmet Akcakaya's work? Thanks in advance; I look forward to your explanation.

Igor said...

Dear Wei,

There are currently at least two other lines of research yielding deterministic compressed sensing measurement matrices, that of Mark Iwen (http://nuit-blanche.blogspot.com/2007/08/cs-is-not-just-compressed-sampling-nor.html)
and Piotr Indyk as Muthu Muthukrishnan is pointing out:
http://mysliceofpizza.blogspot.com/2007/10/deterministic-compressed-sensing.html

Does that answer your question ?


Igor.

Anonymous said...

Hi,

I think it is fair to say that the 2k x n Vandermonde matrix completely solves the k-sparse recovery problem "as posed". However, I would interpret the above as saying that the formulation itself was not entirely satisfactory. The problem is that representing the entries of a Vandermonde matrix requires Omega(k) bits of precision, which means that once you move from the world of reals into the world of bits, the Vandermonde encoding blows up the representation size by a factor of k.

To avoid this problem, one would need to assume, e.g., that the entries of the encoding matrix are small integers (say, representable using O(log n) bits).
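[Editor's note: Piotr's precision point can be glimpsed numerically, under the (informal) assumption that the condition number of the measurement matrix is a proxy for the precision the decoder needs. A 2k x k Vandermonde submatrix with nodes spread evenly around the unit circle has orthogonal columns, while one with k adjacent nodes is badly ill-conditioned:]

```python
import numpy as np

def vandermonde_cond(k, nodes):
    """Condition number of the 2k x k Vandermonde matrix built on `nodes`."""
    V = np.vstack([nodes ** m for m in range(2 * k)])
    return np.linalg.cond(V)

k = 8
n = 4 * k
z = np.exp(2j * np.pi * np.arange(n) / n)   # n-th roots of unity

# Support spread evenly around the circle: columns are orthogonal,
# so the condition number is (numerically) 1.
cond_spread = vandermonde_cond(k, z[::4])

# Support of k adjacent nodes: columns are nearly parallel and the
# condition number blows up, reflecting the precision cost.
cond_clustered = vandermonde_cond(k, z[:k])
```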

Hope this helps.

Cheers,

Piotr

Igor said...

Thank you Piotr.

Do you have any comments on this other entry:

http://nuit-blanche.blogspot.com/2007/11/compressed-sensing-algorithms-for.html

Cheers,

Igor.
