Friday, November 28, 2008

CS: Compressive Sampling of Non-Negative Signals, A Question, a meeting and two comments.

When reference [2] came out a year and eight days ago, we hinted at the subject. More recently, a month ago, we saw a connection between Linear Programming and Non-Negative Matrix Factorization (NMF). That entry also recalled some of the empirical findings of folks like Gabriel Peyré, i.e. that NMF did not seem to need a sparseness constraint to yield a sparse solution. Well, it looks like Paul O'Grady and Scott Rickard are making the connection between CS and NMF more obvious in: Compressive Sampling of Non-Negative Signals. The abstract reads:
Traditional Nyquist-Shannon sampling dictates that a continuous-time signal be sampled at twice its bandwidth to achieve perfect recovery. However, it has recently been demonstrated that, by exploiting the structure of the signal, it is possible to sample a signal below the Nyquist rate and achieve perfect reconstruction using a random projection, sparse representation and an l_1-norm minimisation. These methods constitute a new and emerging theory known as Compressive Sampling (or Compressed Sensing). Here, we apply Compressive Sampling to non-negative signals, and propose an algorithm---Non-negative Underdetermined Iteratively Reweighted Least Squares (NUIRLS)---for signal recovery. NUIRLS is derived within the framework of Non-negative Matrix Factorisation (NMF) and utilises Iteratively Reweighted Least Squares as its objective, recovering non-negative minimum l_p-norm solutions, 0 ≤ p ≤ 1. We demonstrate that---for sufficiently sparse non-negative signals---the signals recovered by NUIRLS and NMF are essentially the same, which suggests that a non-negativity constraint is enough to recover sufficiently sparse signals.
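To get a feel for the reweighting idea, here is a minimal sketch of a non-negative IRLS iteration for an underdetermined system. This is my own toy illustration, not the authors' NUIRLS code: the function name `nn_irls` and all parameters are invented for the example, and non-negativity is enforced by a crude clipping rather than by the NMF-style updates of the paper.

```python
import numpy as np

def nn_irls(A, y, p=1.0, n_iter=50, eps=1e-8):
    """Toy non-negative IRLS for the underdetermined system A x = y.

    Each pass solves a weighted least-squares problem whose weights
    |x_i|^(2-p) push the iterate toward a minimum l_p-norm solution;
    small or negative entries are down-weighted so the result stays
    (approximately) in the non-negative orthant.
    """
    x = np.ones(A.shape[1])                      # strictly positive start
    for _ in range(n_iter):
        w = (np.maximum(x, 0.0) + eps) ** (2.0 - p)
        WAt = w[:, None] * A.T                   # W A^T
        x = WAt @ np.linalg.solve(A @ WAt, y)    # min weighted-norm solution
    return np.maximum(x, 0.0)

# Toy check: a 3-sparse non-negative signal from 15 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((15, 40))
x_true = np.zeros(40)
x_true[[3, 17, 29]] = [1.0, 2.0, 0.5]
x_hat = nn_irls(A, A @ x_true, p=1.0)
```

Note that each iteration requires solving a fresh m-by-m linear system, which already hints at why reweighted schemes are so much slower than plain NMF updates.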

The associated talk is here. Following up on an e-mail exchange, Paul also made his code available and added:
...I have placed the code for NUIRLS on my website at:

The main aim of my MLSP08 paper is to demonstrate that sparse norms are not always necessary for the special case where CS is entirely non-negative. Furthermore, in December I will present a paper [1] at a workshop in England where NMF is performed in the CS domain---recovering sparse coefficients AND a sparse basis from a compressively sampled matrix. 
I can't wait to see the paper [1]. I then replied asking him:
Is it your experience that NUIRLS is fast?
Paul kindly responded with:
My experience with standard NMF is that it is fast; further extensions such as adding sparseness constraints to convolutive models make it slow, and when you include reweighting into the objective---as is the case for NUIRLS---it becomes very, very slow. The graphs in the MLSP paper took a number of weeks. Luckily, though, we can use standard NMF updates if the signal is sufficiently sparse.
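The "standard NMF updates" mentioned above are the Lee-Seung multiplicative updates, where each iteration is just a couple of matrix-vector products, which is why they are cheap compared with reweighted variants. A minimal sketch of the single-vector case (my own illustration, not Paul's code; it assumes the mixing matrix and measurements are entrywise non-negative):

```python
import numpy as np

def nmf_update(A, y, n_iter=2000, eps=1e-12):
    """Lee-Seung multiplicative update specialised to one column:
    minimise ||y - A x||_2 over x >= 0 via x <- x * (A^T y)/(A^T A x).
    Non-negativity is preserved automatically since every factor is >= 0.
    """
    x = np.full(A.shape[1], 0.1)          # positive initialisation
    for _ in range(n_iter):
        x *= (A.T @ y) / (A.T @ (A @ x) + eps)
    return x

rng = np.random.default_rng(2)
A = rng.uniform(size=(15, 40))            # entrywise non-negative mixing
x_true = np.zeros(40)
x_true[[5, 11]] = [1.0, 0.3]
y = A @ x_true
x_hat = nmf_update(A, y)
```

Per-iteration cost here is O(mn) with no linear solve, in contrast with the reweighted scheme, which explains the speed gap Paul describes.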
Much of the geometric argument [3] has the positivity constraint built into it, and it looks to me as though the general use of the argument based on the Restricted Isometry Property (RIP) is there to avoid relying on the positivity of the signal. We also know that RIP, on top of being difficult to prove, is not a tight constraint. We also know that the positivity constraint can be shown to be a powerful one [5] and that the geometric argument can be extended to non-sparse signals [4]. So my question is:

Is CS predominantly about Positivity or Sparsity?

If it is about positivity, would message passing algorithms be faster than current algorithms?
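One quick (and admittedly unscientific) way to probe the question numerically is to drop the l_1 penalty altogether and ask a plain non-negative least-squares solver to invert the underdetermined system. This is a toy experiment of mine, not taken from any of the referenced papers; the dimensions and the choice of a non-negative sensing matrix are assumptions made for illustration:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
m, n, k = 15, 40, 3
# Entrywise non-negative sensing matrix: the uniqueness results for
# non-negative recovery need a strictly positive vector in the row span.
A = rng.uniform(size=(m, n))

x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.uniform(0.5, 2.0, size=k)
y = A @ x_true

# min ||A x - y||_2 subject to x >= 0 -- positivity only, no sparsity term
x_hat, resid = nnls(A, y)
```

When the signal is sparse enough, the positivity constraint alone typically pins down the solution, which is exactly the behaviour the NUIRLS paper reports.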

The Workshop in England is near Oxford and is hosted by the IMA. It will feature many talks on CS. I have added the date to the calendar. One can register for the meeting until December 8th. I am adding the link to the NUIRLS algorithm to the Big Picture section on code reconstruction.

About yesterday's entry, Piotr Indyk made the following comment on my characterization of their work/table:

A quick comment: Anna's table actually contains *all* best known (to us) bounds for sparse recovery that work for arbitrary signals. I.e., it is not limited to schemes based on RIP-1 principle, but it also contains schemes based on usual RIP and Euclidean width properties. You can tell that by the last column (RIP-1 property leads to approximation bounds in the l1 norm, while other properties yield mixed norm or l2 norm bounds).
Thanks Piotr for the correction!

In this entry, I mentioned a dimensionality reduction toolbox with many different techniques created by Laurens van der Maaten. Alas, the link is not working anymore. Taraz Buck noted the following:

The license allows for it, so I've posted a copy of v0.4b on my site:

Taraz also added on his site:
The license allows for redistribution non-commercially, as you can see in the included Readme.txt file. If you have a newer version or his site comes back up, please let me know.
Thank you Taraz!


[1] Paul D. O'Grady and Scott T. Rickard. Non-negative Matrix Factorisation of Compressively Sampled Non-Negative Signals. In Proceedings of the 8th IMA International Conference on Mathematics in Signal Processing, December 2008, Cirencester, UK.




2 comments:

Anonymous said...

laurens' new site is at:

http://ticc.uvt.nl/~lvdrmaaten/Laurens_van_der_Maaten/Home.html

Igor said...

Thanks for the update. I'll put it in the old post.

Igor.
