Monday, November 03, 2008

CS: Introduction to Compressive Sensing, Identification of Matrices having a Sparse Representation


There is now a video of Richard Baraniuk giving his latest introduction to Compressed Sensing at Microsoft Research on August 4, 2008. Please be aware that the address does not seem to work with Firefox, Chrome, or Opera (I tried); it only works with Internet Explorer. The original link, which can be seen from any browser, is here; it eventually leads to the IE-only presentation.

The presentation keeps getting better, or I am just discovering new points every time I see it. In the matched-filter sequence, Rich reminds us that random projection classification works with a nearest-neighbor concept (since the projection is an embedding that approximately preserves distances). I just read a new paper on NN and SVM classifiers entitled In Defense of Nearest-Neighbor Based Image Classification by Oren Boiman, Eli Shechtman and Michal Irani. The abstract reads:

State-of-the-art image classification methods require an intensive learning/training stage (using SVM, Boosting, etc.). In contrast, non-parametric Nearest-Neighbor (NN) based image classifiers require no training time and have other favorable properties. However, the large performance gap between these two families of approaches rendered NN-based image classifiers useless. We claim that the effectiveness of non-parametric NN-based image classification has been considerably undervalued. We argue that two practices commonly used in image classification methods have led to the inferior performance of NN-based image classifiers: (i) Quantization of local image descriptors (used to generate “bags-of-words”, codebooks). (ii) Computation of ‘Image-to-Image’ distance, instead of ‘Image-to-Class’ distance. We propose a trivial NN-based classifier – NBNN (Naive-Bayes Nearest-Neighbor), which employs NN distances in the space of the local image descriptors (and not in the space of images). NBNN computes direct ‘Image-to-Class’ distances without descriptor quantization. We further show that under the Naive-Bayes assumption, the theoretically optimal image classifier can be accurately approximated by NBNN. Although NBNN is extremely simple, efficient, and requires no learning/training phase, its performance ranks among the top leading learning-based image classifiers. Empirical comparisons are shown on several challenging databases (Caltech-101, Caltech-256 and Graz-01).
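
For concreteness, here is a minimal sketch of the NBNN decision rule as the abstract describes it, assuming local descriptors (SIFT or similar) have already been extracted; the function and variable names are mine, and a real implementation would use an approximate nearest-neighbor search instead of this brute-force computation:

```python
import numpy as np

def nbnn_classify(query_descriptors, class_descriptors):
    """Return the class with the smallest 'Image-to-Class' distance:
    the sum, over the query image's local descriptors, of the squared
    distance to the nearest descriptor pooled from that class's
    training images. No quantization, no training phase.

    query_descriptors: (n, d) array of the query's local descriptors.
    class_descriptors: dict mapping class label -> (m, d) array pooling
        the descriptors of all training images of that class.
    """
    best_label, best_cost = None, np.inf
    for label, pool in class_descriptors.items():
        # (n, m) matrix of squared Euclidean distances.
        d2 = ((query_descriptors[:, None, :] - pool[None, :, :]) ** 2).sum(-1)
        # Image-to-Class distance: nearest-neighbor distance per descriptor.
        cost = d2.min(axis=1).sum()
        if cost < best_cost:
            best_label, best_cost = label, cost
    return best_label
```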

This is interesting because, to a certain extent, the random projections used in the face recognition studies are also evaluated with respect to the nearest class, not the nearest neighboring face. Hence, one could imagine a close relationship between random projections and other features normally found in Machine Learning, such as SIFT. While we are on the subject of Machine Learning, Andrew Ng's course on the subject at Stanford is now on YouTube.
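
To see why nearest-neighbor search survives a random projection, here is a toy illustration of my own (not from the talk): a query close to one database point keeps the same nearest neighbor after a random Gaussian projection, as the Johnson-Lindenstrauss lemma suggests. The sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 10000, 50            # points, ambient dim, projected dim

X = rng.standard_normal((n, d))     # database of points
q = X[17] + 0.1 * rng.standard_normal(d)   # query close to point 17

# Random Gaussian projection; the 1/sqrt(k) scaling keeps pairwise
# distances approximately unchanged (Johnson-Lindenstrauss).
Phi = rng.standard_normal((k, d)) / np.sqrt(k)

def nearest(points, query):
    """Index of the Euclidean nearest neighbor."""
    return np.argmin(((points - query) ** 2).sum(axis=1))

print(nearest(X, q))                 # 17: nearest neighbor in ambient space
print(nearest(X @ Phi.T, Phi @ q))   # 17 again, with high probability
```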

Both of these video links are now on the CSVideos page.

I also found this paper on the interwebs: Identification of Matrices having a Sparse Representation by Götz Pfander, Holger Rauhut and Jared Tanner. The abstract reads:

We consider the problem of recovering a matrix from its action on a known vector in the setting where the matrix can be represented efficiently in a known matrix dictionary. Connections with sparse signal recovery allow for the use of efficient reconstruction techniques such as Basis Pursuit. Of particular interest is the dictionary of time-frequency shift matrices and its role for channel estimation and identification in communications engineering. We present recovery results for Basis Pursuit with the time-frequency shift dictionary and various dictionaries of random matrices.
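
To make the setup concrete, here is a minimal numerical sketch, assuming a dictionary of random Gaussian matrices (one of the cases the abstract mentions) and solving Basis Pursuit through its standard linear-programming reformulation; all names and problem sizes are mine:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, N, s = 20, 80, 3                # matrix size, dictionary size, sparsity

# Dictionary of random matrices.
G = rng.standard_normal((N, n, n))

# Unknown matrix Gamma = sum_j x_j G_j with an s-sparse coefficient vector x.
x_true = np.zeros(N)
x_true[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
Gamma = np.tensordot(x_true, G, axes=1)

# Probing with a known vector v makes the observation linear in x:
# y = Gamma @ v = B @ x, where column j of B is G_j @ v.
v = rng.standard_normal(n)
y = Gamma @ v
B = np.stack([Gj @ v for Gj in G], axis=1)

# Basis Pursuit: minimize ||x||_1 subject to B x = y, written as a
# linear program via the split x = u - w with u >= 0, w >= 0.
res = linprog(c=np.ones(2 * N), A_eq=np.hstack([B, -B]), b_eq=y,
              bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]
print(np.abs(x_hat - x_true).max())   # tiny when recovery succeeds
```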


Finally, here are the stats for the first Saturday Morning Cartoon. I was surprised to see a peak in the traffic to the website a day after its publication. Anyway, it fits the rule of thumb that the first two days bring half of the eventual traffic over the week following publication.
