## Friday, November 30, 2012

### MindMapping the Nuit Blanche world

Join our Reddit Experiment, Join the CompressiveSensing subreddit and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

### Accelerating Particle Filter using Randomized Multiscale and Fast Multipole Type Methods

Here is another way of using randomization:

Accelerating Particle Filter using Randomized Multiscale and Fast Multipole Type Methods by Gil Shabat, Yaniv Shmueli, Amit Bermanis and Amir Averbuch. The abstract reads:
We present a multiscale based method that accelerates the Particle Filter computation. Particle Filter is a powerful method that tracks the state of a target based on non-linear observations. Unlike the conventional way that calculates weights over all particles in each cycle of the algorithm, we sample a small subset from the source particles using matrix decomposition methods. Then, we apply a function extension algorithm that uses the particle subset to recover the density function for all the rest of the particles. As often happens, the computational effort is substantial, especially when tracking multiple objects takes place. The proposed algorithm reduces significantly the computational load. By using the Fast Gaussian Transform, the particle selection step changes from O(k²n log n) operations to O(n + k log n) operations, where n is the number of particles and k the selected subset size. We demonstrate our method on both simulated and real data such as tracking in video sequences.
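The subset-then-extend idea can be mimicked in a few lines. The sketch below is my own toy illustration, not the authors' code: it draws a random particle subset (the paper selects it with matrix decomposition methods and extends with multiscale/fast-multipole machinery), evaluates the expensive weight function only on that subset, and extends it to all particles with plain Gaussian-kernel interpolation. The bandwidth `eps`, the subset size and the test likelihood are all made-up choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def subsample_and_extend(particles, weight_fn, k, eps=1.0):
    """Evaluate the (expensive) weight function on only k particles,
    then extend it to all n particles by Gaussian-kernel interpolation,
    a crude stand-in for the paper's multiscale function extension."""
    n = len(particles)
    idx = rng.choice(n, size=k, replace=False)   # random subset; the paper
                                                 # uses matrix decompositions here
    sub = particles[idx]
    w_sub = weight_fn(sub)                       # only k expensive evaluations
    # kernel ridge interpolation of the weights on the subset ...
    K_ss = np.exp(-(sub[:, None] - sub[None, :]) ** 2 / (2 * eps ** 2))
    coef = np.linalg.solve(K_ss + 1e-8 * np.eye(k), w_sub)
    # ... extended to every particle
    K = np.exp(-(particles[:, None] - sub[None, :]) ** 2 / (2 * eps ** 2))
    w_all = K @ coef
    return w_all / w_all.sum()                   # normalized importance weights

particles = rng.normal(0.0, 2.0, size=500)

def likelihood(x):                               # toy observation at 1.0
    return np.exp(-0.5 * (x - 1.0) ** 2)

w_hat = subsample_and_extend(particles, likelihood, k=50)
w_ref = likelihood(particles)
w_ref /= w_ref.sum()
print(np.abs(w_hat - w_ref).max())
```

On this smooth toy likelihood the extended weights closely match the exact ones while calling the weight function only 50 times instead of 500.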


## Thursday, November 29, 2012

### Fast Marginalized Block SBL Algorithm - implementation -

Zhilin just mentioned it on his blog: Fast Marginalized Block SBL Algorithm by Benyuan Liu, Zhilin Zhang, Hongqi Fan, Zaiqi Lu, Qiang Fu. The abstract reads:
The performance of sparse signal recovery can be improved if both sparsity and correlation structure of signals can be exploited. One typical correlation structure is intra-block correlation in block sparse signals. To exploit this structure, a framework, called block sparse Bayesian learning (BSBL) framework, has been proposed recently. Algorithms derived from this framework showed promising performance but their speed is not very fast, which limits their applications. This work derives an efficient algorithm from this framework, using a marginalized likelihood maximization method. Thus it can exploit block sparsity and intra-block correlation of signals. Compared to existing BSBL algorithms, it has close recovery performance to them, but has much faster speed. Therefore, it is more suitable for recovering large scale datasets.

The attendant code to recreate the examples featured in this paper is here.
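To see what "block sparsity with intra-block correlation" means concretely, here is a hypothetical toy generator (mine, not from the paper) that builds such a signal with AR(1)-correlated entries inside a few active blocks:

```python
import numpy as np

rng = np.random.default_rng(1)

def block_sparse_signal(n, block_len, active_blocks, rho=0.9):
    """Generate a block-sparse signal whose few active blocks contain
    AR(1)-correlated entries (correlation coefficient rho) -- the
    intra-block correlation structure BSBL is designed to exploit."""
    x = np.zeros(n)
    n_blocks = n // block_len
    for b in rng.choice(n_blocks, size=active_blocks, replace=False):
        v = np.empty(block_len)          # AR(1) process inside the block
        v[0] = rng.normal()
        for t in range(1, block_len):
            v[t] = rho * v[t - 1] + np.sqrt(1 - rho**2) * rng.normal()
        x[b * block_len:(b + 1) * block_len] = v
    return x

x = block_sparse_signal(n=200, block_len=10, active_blocks=3)
print(np.count_nonzero(x))   # 30 nonzeros concentrated in 3 blocks
```

A plain L1 solver ignores both the block structure and the correlation; BSBL-type algorithms use exactly this extra structure to recover with fewer measurements.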


## Wednesday, November 28, 2012

### Around the blogs in 80 summer hours

It's mostly a hardware hacking edition today. Compressive sensing has opened up a whole area of phase space where we can experiment with new concepts. It is therefore important to also follow any of the rapid prototyping technologies that can help make some of the CS hardware a reality. The video at the top shows quite clearly that one can do something better than the raster mode for back-scatter X-ray, but I ain't gonna try it, no sir:-).

Thanks to silicon being cheaper, some of the microcontrollers out there are getting to be very interesting. Case in point, I went to a Meetup yesterday on the Flyport platform and how it can be integrated in a number of interesting projects. It was organized by G-Media, one of the companies that distributes this hardware in France. Anyway, for those of you reading this blog, my interest stems from the rapid prototyping capability that this hardware can offer. I was not disappointed. They provided some small workshops around three projects. The first one was a small weather station that I mentioned earlier. It looked deceptively simple to set up (no soldering required). Next there was a workshop on how Flyport could communicate with the Enocean line of products. Again, no soldering, and the ability to be in touch with hardware that uses its environment to harvest energy (the sensors do not require batteries). The three energy harvesting technologies rely on solar, but also on motion, and finally on the Peltier effect. I wonder when somebody is going to come up with a pressure harvesting technology. Finally, they sell a wifi card that can interface with the NXT Mindstorm robot, so you could (and we did) move the robot with any smartphone as the robot now becomes a webserver. More information can be found here. The other news was that OpenPicus now offers a Flyport GPRS/3G, so you can really prototype hardware that can go a long way (and no soldering). Wow, just wow.

I kept on asking about low resolution cameras that one could hook up to a microcontroller; evidently, I got many blank stares, but I am accustomed to them :) There is the possibility of using three photodiodes at most on these boards, so we would have to drive some sort of modulator to have some CS camera. Meh, I think we can do better than that. They also mentioned a Geiger counter project. With two of those, one could start doing some muon tomography!

For those of you who read French, the presentation is here. Going back to the other blogs we generally cover:

Greg reports on a different type of hardware hack with the Through-Wall Imaging Radar, MIT Lincoln Laboratory Journal

Vladimir let us know that the inventor of the Bayer filter, Bryce Bayer, has died. If you recall this entry, you'll notice that as CMOS is getting smaller, the Bayer pattern is becoming a mixing pattern thanks to the diffraction limit. If there is mixing, compressive sensing techniques are not far.

Albert Theuwissen published a report on the Symposium on Microoptical Imaging and Projection

On the more "theoretical" side of things:
Danny has some interesting Miscellaneous updates
Bob talks about the general inverse problem with MAP estimation and continues his investigation in Music genre recognition with compressive sampling, pt. 2
MathBlogging features a new Mathematical Instruments: Peter Cameron’s Blog
Larry has a guest post: The Density Cluster Tree: A Guest Post
Christian talks about MCMC convergence assessment
Alex has a list of γ2 norm problems
Terry talks about The closed graph theorem in various categories
Andrej presents the NIPS papers in a nicer way, wow!

Finally, the NYT had a piece on Deep Learning, and Yann Le Cun will be on NPR tomorrow. From the Compressive Sensing standpoint, Deep Learning is really just one way to learn better dictionaries.


## Tuesday, November 27, 2012

### A Meeting Review and a New Capability at InView Corp.

Pierre just wrote down a review of the SIGMA meeting he attended last week.

Bob made a mention of a new capability at Inview Corp.

A new workstation designed for Compressive Sensing researchers
InView Technology Corporation, the world leader in Compressive Sensing (CS) imaging products, has launched the first in a series of CS Workstations. The InView220 provides CS researchers with the ability to easily prototype and test novel CS algorithms on a complete hardware imaging platform. Researchers are provided with unprecedented ability to apply novel measurement bases to a high-speed, high-definition spatial light modulator, and to process the light levels measured from Shortwave Infrared (SWIR) detectors.
The major components of the InView220 are an imaging Sensor, which includes a 1920x1080 digital micromirror device and a 32-diode detector array, and a Sensor Controller, which is preloaded with the InView Development Environment (IDE) software.
For more check the InView Corp. site out.


### Statistical physics, Optimization, Inference and Message-Passing algorithms

If you recall how message passing algorithms are changing our approach to compressive sensing, you might be interested in the following announcement sent to me by Lenka Zdeborova:

Dear Colleague,
It is our pleasure to announce that from September 30 to October 11, 2013, the Les Houches center for theoretical physics will be hosting an autumn school "Statistical physics, Optimization, Inference and Message-Passing algorithms" organized by Florent Krzakala, Federico Ricci-Tersenghi, Lenka Zdeborova and Riccardo Zecchina.
The aim of this two-week school is to teach in a widely accessible manner about Statistical physics, Optimization, Inference and Message-Passing algorithms. Lectures will be given both by physicists and by researchers from computer science, machine learning and signal processing. The goal is to present methods and applications from several different angles and to help the participants cross bridges between different disciplines. The school is aimed not only at PhD students and postdocs but also at established researchers who wish to better understand the presented topics. More information and a list of confirmed lecturers can be found on the workshop website:
All students and researchers wishing to participate should apply on the website by April 30, 2013. According to the number of applicants they will be informed in June about subsequent steps.
Please forward the link to the website to all students, postdocs and researcher that might be interested.
With best regards, and looking forward to seeing many of you next October in Les Houches,
The organizers
The planned program looks like:

First week - FUNDAMENTALS: 5 sets of long lectures (titles tentative, ~6h to 7h each)
** Cris Moore (Santa Fe Institute): The nature of computation
** Andrea Montanari (Stanford Univ.): Signal processing, sparse inference and message passing
** Marc Mezard (ENS Paris): Cavity method: message passing from a physics perspective.
** Giorgio Parisi (La Sapienza Roma): Replica theory and spin glasses
** Devavrat Shah (MIT): Inference and machine learning
Second week - APPLICATIONS: several sets of lectures (titles tentative, ~3h each)
** Riccardo Zecchina (Politecnico Torino): Stochastic optimization or inverse dynamical problems
** Manfred Opper (Berlin Univ.): Expectation-propagation, topics in inference
** Rudiger Urbanke (EPFL): Error correcting codes
** Amin Coja-Oghlan (Frankfurt Univ.): Rigorous results for constraint satisfaction problems
** Possible other topics include (lecturers to be confirmed): compressed sensing; tomography; inference in biological systems; inverse Ising model; image processing; rigorous results related to the cavity and replica method; message passing beyond BP, etc.

Thanks Lenka !


## Monday, November 26, 2012

### COSMOS: sparse modeling software - implementation -

Ignacio Ramirez, who did his PhD with Guillermo Sapiro, just sent me the following:

Hello Igor,
.....
The reason I'm writing to you is regarding a sparse modeling software that I just released. The package is called COSMOS and is currently in its alpha version.
The interesting thing about it, which is the core of my PhD, [Thesis: Second Generation Sparse Models ] is that it works with automatic model selection criteria, so that things such as desired sparsity in the solutions, or dictionary sizes, are automatically chosen based on, for example, information theoretic principles such as MDL, AIC or BIC.
It also contains some interesting variants on traditional dictionary learning methods, such as bounded and total variation based atom learning.
Please check it out at http://iie.fing.edu.uy/~nacho/cosmos/
I have other stuff that may be of interest to your blog. Please visit http://iie.fing.edu.uy/~nacho (especially the Software and Publications sections) to see them.
Regards,
Ignacio.

COSMOS stands for Codelength Oriented Sparse Modeling System and will be featured in both the big picture in compressive sensing as well as the Matrix Factorization Jungle page. Thanks Ignacio!

PS: By the way I am impressed that one can retrieve the shadow of the pedestrians!


## Sunday, November 25, 2012

### Sunday Morning Insight: So what is missing in Compressive Imaging and Uncertainty Quantification ?

Paul Graham, in one of his essays, recently mentioned the following about finding start-up ideas:

"Once you're living in the future in some respect, the way to notice startup ideas is to look for things that seem to be missing. If you're really at the leading edge of a rapidly changing field, there will be things that are obviously missing. What won't be obvious is that they're startup ideas. So if you want to find startup ideas, don't merely turn on the filter "What's missing?" Also turn off every other filter, particularly "Could this be a big company?" There's plenty of time to apply that test later. But if you're thinking about that initially, it may not only filter out lots of good ideas, but also cause you to focus on bad ones."

If you read this blog, you are already on the edge of knowledge, in a rapidly moving field and to a certain extent you are already living in the future.

So what is missing ?

Compressive sensing is a way of gathering data efficiently while knowing full well that what you generally acquire follows a power law of some sort. With that in mind, people have pushed concentration of measure results farther, to more complex objects like matrices, and will eventually cross over to tensors (especially if we can leverage the matrix approach featured in the QTT format).

In a way, compressive sensing is successful because the framework has a wide appeal beyond signal processing. Indeed, if you read again what the analysis operator approach does, it is nothing more than solving a differential equation subject to some measurements. The co-sparsity parameter represents sources and/or the inhomogeneous part of these partial differential equations that are themselves constrained in the loss function.

All is well but there are dark clouds.

The last time I mentioned Uncertainty Quantification in the blog, it was to say that while a Polynomial Chaos approach could follow the traditional compressive sensing framework, in all likelihood, given the Donoho-Tanner phase transition, you probably had to go through the extra complexity of wavelet chaos in order to find a winner. If you have ever dealt with polynomial series expansions, you know that all kinds of problems come from the coefficient thresholding we expect in compressive sensing. Even if some expansion coefficients are small, they still matter ... at least empirically. It's known as the Gibbs phenomenon.
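A quick numerical reminder of why truncated expansions are treacherous: the partial Fourier sums of a square wave keep overshooting the jump by roughly 9% no matter how many terms are kept.

```python
import numpy as np

# Partial Fourier sums of a square wave: truncation keeps overshooting
# near the discontinuity -- the Gibbs phenomenon.
x = np.linspace(-np.pi, np.pi, 20001)

def square_wave_partial_sum(x, n_terms):
    s = np.zeros_like(x)
    for k in range(1, 2 * n_terms, 2):   # odd harmonics only
        s += (4 / np.pi) * np.sin(k * x) / k
    return s

for n in (10, 100):
    # the max stays near 1.18 (the Gibbs constant), never approaches 1.0
    print(n, square_wave_partial_sum(x, n).max())
```

Adding more terms narrows the oscillation but never removes the overshoot, which is why small expansion coefficients cannot be casually discarded.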

Similarly, if you are part of the compressive sensing group on LinkedIn, you have seen that seemingly small questions lead to not so favorable answers for compressive imaging. In this instance, you realize that the only way to reconstruct a very nice image through some of the most advanced compressive sensing reconstruction algorithms is to ... cheat. You first have to acquire an image, threshold its series expansion and then acquire the compressive measurements from that thresholded version. When you do that, you get indeed near perfect reconstructions, but it is doubtful there is a CS camera that can do that.

The underlying reason for this disillusion in both CS imaging and UQ is that while CS works fine for sparse signals, it is not all that great for unknowns that are merely compressible, i.e. not sparse. In short, if you hope to do well because the object of interest is only compressible in terms of some eigenfunction expansion, you might be making a mistake. In both UQ and imaging, what is missing is a result for compressible signals, and it looks like the one we have is just not good enough.
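That gap is easy to see numerically. The toy experiment below (my own sketch, with a plain ISTA solver standing in for fancier algorithms, and hypothetical problem sizes) recovers an exactly sparse vector and a merely compressible one from the same Gaussian measurements; the compressible case fares much worse:

```python
import numpy as np

rng = np.random.default_rng(2)

def ista(A, b, lam=0.05, iters=500):
    """Plain iterative soft thresholding for min ||Ax-b||^2/2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - b) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return x

n, m, k = 400, 120, 10
A = rng.normal(size=(m, n)) / np.sqrt(m)

# exactly sparse target
x_sparse = np.zeros(n)
x_sparse[rng.choice(n, k, replace=False)] = rng.normal(size=k)

# merely compressible target: slow power-law decay, random signs and positions
mags = 1.0 / np.arange(1, n + 1) ** 0.5
x_comp = rng.permutation(mags * rng.choice([-1.0, 1.0], size=n))

err_sparse = np.linalg.norm(ista(A, A @ x_sparse) - x_sparse) / np.linalg.norm(x_sparse)
err_comp = np.linalg.norm(ista(A, A @ x_comp) - x_comp) / np.linalg.norm(x_comp)
print(err_sparse, err_comp)
```

The sparse target comes back almost exactly; the compressible one is off by a large fraction of its norm, because its energy never concentrates on few coefficients.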

As Paul said earlier "What is missing ?"

I think two ideas could take us out of this slump.

One: Maybe our sampling scheme for the eigenfunction expansion is chosen too early and we ought to rethink it in light of the work on infinite-dimensional compressive sensing (see [1] [2]).

Two: The second idea revolves around learning the right analysis operator in both imaging applications and UQ.

Unlike the traditional approach in compressive imaging that relied on an eigenfunction expansion (which eventually led us to this slump), the analysis approach, on the other hand, goes for what is obvious. The zeros are on one side of the homogeneous equation fulfilled by the field of interest. They are not epsilons, just zeros, the stuff that makes something sparse.

In imaging applications, you traditionally acquire the eigenfunction expansion of a 2D projection of the plenoptic function. The TV-norm, a special case of analysis operator, is successful because the PSF of most cameras is degenerate. If the PSF were not degenerate, what sort of analysis operator should be used?

In UQ, you try to estimate how changes in the coefficients of the transfer equation will affect the solution of said equation. The question being answered is: what region of coefficient phase space is consistent with the measurements?

Both problems eventually converge on the need to develop an analysis approach of the following sort:

min_f || L(f) - c || subject to A f = b

where L represents the discretized field transport operator (with unknown coefficients in UQ), c the boundary conditions or sources, A the hardware sensing mechanism, and b the measurements.
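On a toy discretization this kind of formulation is already solvable. In the sketch below (my own, with hypothetical choices throughout), a 1-D second-difference matrix stands in for the discretized transport operator, and the equality-constrained least-squares problem is solved through its KKT system:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy instance of  min_f ||L f - c||  s.t.  A f = b :
# L is a 1-D second-difference operator (a stand-in for the discretized
# transport operator), A a random sensing matrix, b the measurements.
n, m = 100, 20
L = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
A = rng.normal(size=(m, n)) / np.sqrt(m)

f_true = np.sin(np.linspace(0, 3 * np.pi, n))   # a smooth "field"
c = L @ f_true                                   # sources / boundary terms
b = A @ f_true                                   # hardware measurements

# equality-constrained least squares via the KKT system
KKT = np.block([[L.T @ L, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([L.T @ c, b])
f_hat = np.linalg.solve(KKT, rhs)[:n]

print(np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
```

Since f_true is feasible and attains a zero objective here, the KKT solve returns it essentially exactly; the interesting cases are of course those where L's coefficients are themselves unknown, which is where the learning of the analysis operator would come in.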

Thank you Leslie Smith for the fruitful discussion.

Reference:


## Friday, November 23, 2012

### Compressive Sensing Solvers in Python and the ones that ought be in Python, R,....

After being featured on the SciPyTip Twitter stream yesterday, and prompted by Jaidev's question, I went through the list of compressive solvers and tried to find the ones that were developed in Python. What developers need to realize is that the vast majority of solvers in the compressive sensing big picture page are in Matlab, mostly because those are research prototypes. Python codes are somewhat rare because the field is fluctuating fast: the initial solvers were a thousand times slower than some that appeared four years later.
If you are a developer and want to be a hipster when it comes to providing some of the best algorithms out there in compressive sensing (i.e., finding the sparsest solution to underdetermined linear systems of equations), then this blog entry is for you.
Here is a list of current solvers written in Python. As noted yesterday, while L1 solvers are OK to see "if compressive sensing works" on small problems, they become slow when it comes to larger ones. For that you really need to look at the AMP solvers I mentioned yesterday. Hence I am also adding another list of solvers that ought to be implemented, because they are among the most efficient and their coding complexity is very much reduced compared to earlier solvers.

An element of the compressive sensing framework relies on the building of dictionaries from training signals, for that there is:
Please also note that KL1P provides a C++ implementation of several solvers (including AMP solvers). Here is the list of algorithms that ought to be implemented in Python or other languages:
GAMP, mostly because this is an AMP solver that is continually supported by several researchers; Turbo AMP, which seems to do well for images; SL0, because it is very simple in terms of coding complexity; and BSBL, as it seems to provide good results even when the signals are not sparse. If you do implement any of these algorithms please do let me know and I'll feature them on the blog and the big picture.
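To give a feel for how little code SL0 actually needs, here is a minimal Python sketch following the published algorithm's structure (the parameter values are my own ad hoc choices, so treat it as a starting point rather than a reference implementation):

```python
import numpy as np

rng = np.random.default_rng(4)

def sl0(A, b, sigma_min=1e-3, sigma_decrease=0.6, mu=2.0, inner=5):
    """Minimal SL0 sketch: maximize a smoothed L0 measure of sparsity
    over the affine set {x : Ax = b}, with a decreasing smoothing width."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ b                        # minimum-L2 feasible start
    sigma = 2.0 * np.abs(x).max()
    while sigma > sigma_min:
        for _ in range(inner):
            delta = x * np.exp(-x**2 / (2 * sigma**2))
            x = x - mu * delta            # gradient step on the smoothed L0
            x = x - A_pinv @ (A @ x - b)  # project back onto Ax = b
        sigma *= sigma_decrease
    return x

n, m, k = 256, 100, 12
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
b = A @ x_true
x_hat = sl0(A, b)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The whole solver is a dozen lines around a pseudoinverse, which is exactly why it is attractive as a first port to a new language.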
Image Credit: NASA/JPL/Space Science Institute
W00077074.jpg was taken on November 21, 2012 and received on Earth November 23, 2012. The camera was pointing toward SATURN at approximately 1,136,886 miles (1,829,640 kilometers) away, and the image was taken using the CB2 and CL2 filters.


## Thursday, November 22, 2012

### Approximate Message Passing and Compressive Sensing

Happy Thanksgiving y'all !
The Approximate Message Passing algorithm, an instance of belief propagation in graphical models, has taken the field of compressive sensing by storm. This can be witnessed in the increasing number of implementations listed at the end of the reconstruction solvers' list of the compressive sensing big picture page. It is not surprising, as the algorithm is very fast and compressive sensing is really in need of fast solvers. With this in mind, and because of its simplicity, several folks are trying to implement it as close as possible to silicon through either an FPGA or a VLSI implementation [1,2], while others are using it to deal with problems that could barely be handled with previous generation solvers [3,4,5]. All this is well, but reading the more applied papers [3,4], Felix reminds us of some of the caveats:
Evidently, this promise comes with the caveat that message-passing algorithms are specifically designed to solve sparse-recovery problems for Gaussian matrices while methods such as SPGL1 are versatile and solve BP for any A and b as long as one is willing to spend a sufficient number of iterations to bring the residual down.
I also wonder if AMP will gain any additional insights from advances beyond belief propagation?
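For readers wondering what the fuss is about, here is a bare-bones AMP iteration in Python (a hedged sketch, not any of the implementations above; the residual-based threshold rule and the tuning constant alpha are common but ad hoc choices). Note the Onsager correction, the one ingredient that separates AMP from plain iterative soft thresholding:

```python
import numpy as np

rng = np.random.default_rng(5)

def amp(A, b, iters=50, alpha=1.5):
    """Bare-bones AMP with soft thresholding for b = A x,
    A i.i.d. Gaussian with roughly unit-norm columns."""
    m, n = A.shape
    x = np.zeros(n)
    z = b.copy()
    for _ in range(iters):
        theta = alpha * np.linalg.norm(z) / np.sqrt(m)  # threshold from residual energy
        pseudo = x + A.T @ z
        x = np.sign(pseudo) * np.maximum(np.abs(pseudo) - theta, 0.0)
        # Onsager correction: (active coefficients / m) times the previous
        # residual -- drop this term and you get plain IST
        z = b - A @ x + z * (np.count_nonzero(x) / m)
    return x

n, m, k = 500, 250, 25
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
b = A @ x_true
x_hat = amp(A, b)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The per-iteration cost is two matrix-vector products, which is why the FPGA/VLSI and GPU implementations below are so natural.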
[1] Implementing the Approximate Message Passing (AMP) Algorithm on a GPU by Lukas Cavigelli, Pascal Alexander Hager. The abstract reads:
We consider the recovery of sparse signals from a limited number of noisy observations using the AMP algorithm. In this paper, we present two fast implementations of this algorithm that run on a CPU and on a GPU and which can either be used for arbitrary unstructured measurement matrices or take advantage of the structure of a DCT matrix to give an even faster implementation. Our results show that for small problem sizes, the CPU based implementation is the fastest, but for large problem sizes, a GPU based implementation has the highest throughput.
[2] The abstract reads:
Sparse signal recovery finds use in a variety of practical applications, such as signal and image restoration and the recovery of signals acquired by compressive sensing. In this paper, we present two generic VLSI architectures that implement the approximate message passing (AMP) algorithm for sparse signal recovery. The first architecture, referred to as AMP-M, employs parallel multiply-accumulate units and is suitable for recovery problems based on unstructured (e.g., random) matrices. The second architecture, referred to as AMP-T, takes advantage of fast linear transforms, which arise in many real-world applications. To demonstrate the effectiveness of both architectures, we present corresponding VLSI and FPGA implementation results for an audio restoration application. We show that AMP-T is superior to AMP-M with respect to silicon area, throughput, and power consumption, whereas AMP-M offers more flexibility.
[3] Accelerated large-scale inversion with message passing by Felix J. Herrmann. The summary reads:
To meet current-day challenges, exploration seismology increasingly relies on more and more sophisticated algorithms that require multiple paths through all data. This requirement leads to problems because the size of seismic data volumes is increasing exponentially, exposing bottlenecks in IO and computational capability. To overcome these bottlenecks, we follow recent trends in machine learning and compressive sensing by proposing a sparsity-promoting inversion technique that works on small randomized subsets of data only. We boost the performance of this algorithm significantly by modifying a state-of-the-art ℓ1-norm solver to benefit from message passing, which breaks the build up of correlations between model iterates and the randomized linear forward model. We demonstrate the performance of this algorithm on a toy sparse-recovery problem and on a realistic reverse-time-migration example with random source encoding. The improvements in speed, memory use, and output quality are truly remarkable.
[4] Approximate Message Passing Meets Exploration Seismology by Felix J. Herrmann. The abstract reads:
Data collection, data processing, and imaging in exploration seismology increasingly hinge on large-scale sparsity promoting solvers to remove artifacts caused by efforts to reduce costs. We show how the inclusion of a “message term“ in the calculation of the residuals improves the convergence of these iterative solvers by breaking correlations that develop between the model iterate and the linear system that needs to be inverted. We compare this message-passing scheme to state-of-the-art solvers for problems in missing-trace interpolation and in dimensionality-reduced imaging with phase encoding.
[5] Compressive Video Sampling with Approximate Message Passing Decoding by Jianwei Ma, Gerlind Plonka, M. Yousuff Hussaini. The abstract reads:
In this paper, we apply compressed sensing to video compression. Compressed sensing (CS) techniques exploit the observation that one needs much fewer random measurements than given by the Shannon-Nyquist sampling theory to recover an object if this object is compressible (i.e., sparse in the spatial domain or in a transform domain). In the CS framework, we can achieve sensing, compression and denoising simultaneously. We propose a fast and simple online encoding by application of pseudo-random downsampling of the two-dimensional fast Fourier transform to video frames. For off-line decoding, we apply a modification of the recently proposed approximate message passing (AMP) algorithm. The AMP method has been derived using the statistical concept of 'state evolution', and it has been shown to considerably accelerate the convergence rate in special CS-decoding applications. We shall prove that the AMP method can be rewritten as a forward-backward splitting algorithm. This new representation enables us to give conditions that ensure convergence of the AMP method and to modify the algorithm in order to achieve higher robustness. The success of reconstruction methods for video decoding also essentially depends on the chosen transform, where sparsity of the video signals is assumed. We propose to incorporate the 3D dual-tree complex wavelet transform that possesses sufficiently good properties of directional selectivity and shift invariance while being computationally less expensive and less redundant than other directional 3D wavelet transforms.


## Wednesday, November 21, 2012

### Small-sample brain mapping: sparse recovery on spatially correlated designs with randomization and clustering

Functional neuroimaging can measure the brain’s response to an external stimulus. It is used to perform brain mapping: identifying from these observations the brain regions involved. This problem can be cast into a linear supervised learning task where the neuroimaging data are used as predictors for the stimulus. Brain mapping is then seen as a support recovery problem. On functional MRI (fMRI) data, this problem is particularly challenging as i) the number of samples is small due to limited acquisition time and ii) the variables are strongly correlated. We propose to overcome these difficulties using sparse regression models over new variables obtained by clustering of the original variables. The use of randomization techniques, e.g. bootstrap samples, and hierarchical clustering of the variables improves the recovery properties of sparse methods. We demonstrate the benefit of our approach on an extensive simulation study as well as two publicly available fMRI datasets.
The video presentation is here.
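The cluster-then-sparsify idea is easy to prototype. Below is a toy sketch (mine, not the authors' pipeline; ridge regression stands in for their sparse estimator and the bootstrap randomization is omitted) showing that hierarchical clustering of strongly correlated predictors makes the active regions easy to find:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(6)

# 300 predictors in 30 strongly correlated blocks of 10, only 60 samples
n, p, n_clusters = 60, 300, 30
base = rng.normal(size=(n, n_clusters))
X = np.repeat(base, p // n_clusters, axis=1) + 0.1 * rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:20] = 1.0                               # first two "regions" are active
y = X @ beta + 0.1 * rng.normal(size=n)

# cluster the predictors, then regress on cluster averages
labels = fcluster(linkage(X.T, method="ward"), t=n_clusters, criterion="maxclust")
Xc = np.stack([X[:, labels == c].mean(axis=1)
               for c in range(1, n_clusters + 1)], axis=1)
w = np.linalg.solve(Xc.T @ Xc + 1e-3 * np.eye(n_clusters), Xc.T @ y)

top = np.argsort(np.abs(w))[-2:]              # the two dominant clusters
print(sorted(int(t) + 1 for t in top))
```

With p ≫ n and near-duplicate columns, selecting individual predictors is hopeless, but selection at the cluster level becomes a well-posed 30-variable problem.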

Gael, Alexandre and I had a small discussion on this issue of RIP being a good condition against which to test any design matrix for sparse recovery (in their case, the measurement matrix is really the fMRI images put in columns), but what came out of this interesting discussion was the following paper, for which I have only an abstract (see below). In it, they tried to make a connection between the scattering transform and fMRI activity; maybe some work ought to be undertaken with SIFT or FREAK as a way to find out which one of those models connects better to actual brain activity. And looking at the latest capabilities, then maybe we could go into the dream reconstruction business... wow, just wow.

The scattering transform is a hierarchical signal transformation that has been designed to be robust to signal deformations. It can be used to compute representations with invariance or tolerance to any transformation group, such as translations, rotations or scaling. In image analysis, going beyond edge detection, its second layer captures higher order features, providing a fine-grain dissection of the signal. Here we use the output coefficients to fit blood oxygen level dependent (BOLD) signal in visual areas using functional magnetic resonance imaging. Significant improvement in the prediction accuracy is shown when using the second layer in addition to the first, suggesting biological relevance of the features extracted in layer two or linear combinations thereof.
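For readers unfamiliar with the scattering transform, here is a crude two-layer sketch in NumPy. The wavelet, the low-pass filter, and the pooling of each path to a single scalar are all my own simplifications for brevity; this is the general shape of the computation, not the actual transform used in the paper.

```python
import numpy as np

def morlet(scale, n=64):
    # Crude real Morlet-like wavelet at a given scale.
    t = np.arange(-n // 2, n // 2)
    return np.exp(-t**2 / (2 * scale**2)) * np.cos(5 * t / scale)

def scatter(x, scales=(2, 4, 8)):
    phi = np.ones(16) / 16                # crude low-pass for averaging
    U1, S1, S2 = [], [], []
    for s in scales:                      # layer 1: wavelet + modulus
        u = np.abs(np.convolve(x, morlet(s), mode='same'))
        U1.append(u)
        S1.append(np.convolve(u, phi, mode='same').mean())
    for u in U1:                          # layer 2: scatter the moduli
        for s in scales:
            v = np.abs(np.convolve(u, morlet(s), mode='same'))
            S2.append(np.convolve(v, phi, mode='same').mean())
    return np.array(S1), np.array(S2)

x = np.sin(np.linspace(0.0, 20.0, 256))
S1, S2 = scatter(x)
print(S1.shape, S2.shape)                 # (3,) (9,)
```

The modulus after each wavelet convolution is what buys deformation stability, and the second layer is exactly the same operation applied to the first layer's moduli.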


## Tuesday, November 20, 2012

### ReProCS: Recursive Robust PCA or Recursive Sparse Recovery in Large but Structured Noise - implementation -

We've talked about ReProCS before, but here it looks like the analysis is more complete. I also like that it addresses the case where the noise lies in a low-dimensional manifold.

This work studies the recursive robust principal components analysis (PCA) problem. Here, "robust" refers to robustness to both independent and correlated sparse outliers. If the outlier is the signal-of-interest, this problem can be interpreted as one of recursively recovering a time sequence of sparse vectors, St, in the presence of large but structured noise, Lt. The structure that we assume on Lt is that Lt is dense and lies in a low dimensional subspace that is either fixed or changes "slowly enough". A key application where this problem occurs is in video surveillance where the goal is to separate a slowly changing background (Lt) from moving foreground objects (St) on-the-fly. To solve the above problem, we introduce a novel solution called Recursive Projected CS (ReProCS). Under mild assumptions, we show that, with high probability (w.h.p.), ReProCS can exactly recover the support set of St at all times; and the reconstruction errors of both St and Lt are upper bounded by a time-invariant and small value at all times. We also show how the algorithm and its guarantees extend to the undersampled measurements case.
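The core projection step of ReProCS is easy to sketch. The toy below is my own simplification: the background subspace is assumed known rather than recursively estimated, and a plain soft-threshold stands in for the compressed-sensing recovery step of the paper. Each measurement is projected orthogonal to the subspace, which nulls the structured noise Lt and leaves a lightly perturbed copy of the sparse St:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, T = 100, 3, 50
U = np.linalg.qr(rng.standard_normal((n, r)))[0]   # background subspace

def reprocs_step(m, U_hat, lam=0.1):
    # Project the measurement orthogonal to the subspace estimate,
    # nulling the dense structured noise L_t ...
    y = m - U_hat @ (U_hat.T @ m)
    # ... then recover the sparse part; soft-thresholding stands in
    # for the CS recovery step used in the paper.
    s_hat = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)
    return s_hat, m - s_hat

errs = []
for _ in range(T):
    L_t = U @ rng.standard_normal(r)               # dense, low-dimensional
    S_t = np.zeros(n)
    S_t[rng.choice(n, 5, replace=False)] = 5.0     # sparse foreground
    s_hat, _ = reprocs_step(L_t + S_t, U)          # subspace assumed known
    errs.append(np.linalg.norm(s_hat - S_t) / np.linalg.norm(S_t))
print("mean relative error on S_t:", round(float(np.mean(errs)), 3))
```

The projection leaks a little of St's energy back into the residual (hence the small but nonzero error); the paper's analysis is precisely about controlling that leakage when the subspace is only estimated.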

## Monday, November 19, 2012

### Variational Properties of Value Functions

Variational Properties of Value Functions by Aleksandr Aravkin, James Burke, and Michael Friedlander. The abstract reads:

Regularization plays a key role in a variety of optimization formulations of inverse problems. A recurring question in regularization approaches is the selection of regularization parameters, and its effect on the solution and on the optimal value of the optimization problem. The sensitivity of the value function to the regularization parameter can be linked directly to the Lagrange multipliers. In this paper, we fully characterize the variational properties of the value functions for a broad class of convex formulations, which are not all covered by standard Lagrange multiplier theory. We also present an inverse function theorem that links the value functions of different regularization formulations (not necessarily convex). These results have implications for the selection of regularization parameters, and the development of specialized algorithms. We give numerical examples that illustrate the theoretical results.

The software used is SPGL1 and the example provided in the paper is here. SPGL1 can now be forked on GitHub.
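A tiny numerical illustration of the sensitivity idea, for ridge regression rather than the paper's broader class of formulations: by the envelope theorem, the derivative of the value function with respect to the regularization parameter is just the penalty term evaluated at the minimizer, which a finite difference confirms.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 8))
b = rng.standard_normal(20)

def value(lam):
    # v(lam) = min_x ||Ax - b||^2 + lam * ||x||^2, in closed form.
    x = np.linalg.solve(A.T @ A + lam * np.eye(8), A.T @ b)
    return x, np.linalg.norm(A @ x - b) ** 2 + lam * np.linalg.norm(x) ** 2

lam, h = 0.5, 1e-6
x, v = value(lam)
# Envelope theorem: dv/dlam equals ||x(lam)||^2 at the minimizer.
fd = (value(lam + h)[1] - value(lam - h)[1]) / (2 * h)
print(abs(fd - np.linalg.norm(x) ** 2) < 1e-4)  # True
```

This is the same mechanism SPGL1 exploits on the Pareto curve: knowing the slope of the value function for free lets a root-finder pick the regularization parameter efficiently.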


### Fast Earth Mover's Distance (EMD) implementation

As the subject of the Earth Mover's Distance is heating up (see, for instance, Learning Manifolds in the Wild and Sparse Recovery for Earth Mover Distance), I came across this FastEMD algorithm implementation. From the page:

Fast Earth Mover's Distance (EMD) Code
(C++ and Matlab and Java wrappers)
The code efficiently computes the Earth Mover's Distance (EMD) between two histograms or sparse histograms (signatures). The EMD is also known as the Mallows, 1st Wasserstein, Monge-Kantorovich, Match and Transportation distances. The approach was described in the paper:
"Fast and Robust Earth Mover's Distances".
EMD-HAT (a better definition of the EMD for non-normalized histograms) was presented in the paper:
"A Linear Time Histogram Metric for Improved SIFT Matching".
One of the demos (demo_FastEMD4) includes a C++ implementation of the CIEDE2000 color distance. The CIEDE2000 C++ code is an adaptation of Prof. Gaurav Sharma's Matlab code (used with permission). Other demos include comparison of David Lowe's SIFT descriptors, simple 1d histogram comparison and grayscale image comparison.
Quadratic Chi (QC) - code that computes the new Quadratic Chi histogram distances (proposed at ECCV 2010) very fast.
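As a reminder of what the EMD measures, here is the textbook 1D special case (not the FastEMD algorithm itself): for histograms on the same bins with equal total mass, the EMD reduces to the L1 distance between cumulative sums.

```python
import numpy as np

def emd_1d(p, q):
    # For 1D histograms on the same bins with equal total mass, the EMD
    # equals the L1 distance between the cumulative distributions.
    return float(np.abs(np.cumsum(p) - np.cumsum(q)).sum())

p = np.array([0.0, 1.0, 0.0, 0.0])    # all mass in bin 1
q = np.array([0.0, 0.0, 0.0, 1.0])    # all mass in bin 3
print(emd_1d(p, q))                   # 2.0: one unit of mass moved 2 bins
```

The general case on arbitrary ground distances is a transportation linear program, which is what FastEMD is engineered to solve quickly.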

## Sunday, November 18, 2012

### You know what hurts ?

Nuit Blanche is only ranked 2,230,439th on the interweb


## Friday, November 16, 2012

### Astounding Compression and Beyond Belief Propagation

You thought you'd go home for the weekend with no homework. Too bad you started reading this entry (via Richard Green, Alejandro Weinstein).

Compressing 35GB of Data in 35 Pages of Numbers by Philon Nguyen
Usual information theoretical results show a logarithmic compression factor of value spaces to digital binary spaces using p-adic numbering systems. The following paper discusses a less commonly used case. It applies the same results to the difference space of bijective mappings of $n$-dimensional spaces to the line. It discusses a method where the logarithmic compression factor is provided over the Hamming radius of the code. An example is provided using the 35GB data dump of the Wikipedia website. This technique was initially developed for the study and computation of large permutation vectors on small clusters.

Combinatorial Spaces And Order Topologies by Philon Nguyen
An archetypal problem discussed in computer science is the problem of searching for a given number in a given set of numbers. Other than sequential search, the classic solution is to sort the list of numbers and then apply binary search. The binary search problem has a complexity of O(log N) for a list of N numbers, while the sorting problem cannot be better than O(N) on any sequential computer following the usual assumptions. Whenever deciding the partial order can be done in O(1), a variation of the problem on some bounded list of numbers is to apply binary search without resorting to sorting. The overall complexity of the problem is then O(log R) for some radius R. A logarithmic upper-bound for finite encodings is shown. Also, the topology of orderings can provide efficient algorithms for search problems in combinatorial spaces. The main characteristic of those spaces is that they have typical exponential space complexities. The factorial case describes an order topology that can be illustrated using the combinatorial polytope. When a known order topology can be combined with a given formulation of a search problem, the resulting search problem has a polylogarithmic complexity. This logarithmic complexity can then become useful in combinatorial search by providing a logarithmic break-down. These algorithms can be termed the class of search algorithms that do not require reads and are equivalent to the class of logarithmically recursive functions.
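The "binary search without sorting" idea can be loosely illustrated as follows. This is my own toy, with an O(n) membership predicate standing in for the assumed O(1) partial-order decision: bisect on the value range of radius R rather than on positions in a sorted array.

```python
def search_bounded(member, lo, hi):
    # Bisect on values, not positions: if "is there an element <= m" is
    # decidable in O(1), finding the minimum takes O(log R) probes for a
    # value radius R = hi - lo, with no sorting pass at all.
    while lo < hi:
        mid = (lo + hi) // 2
        if member(mid):          # some element is <= mid
            hi = mid
        else:
            lo = mid + 1
    return lo

data = {3, 17, 42}
res = search_bounded(lambda m: any(x <= m for x in data), 0, 100)
print(res)  # 3, the minimum, found in ~7 probes instead of a sort
```

The probe count depends only on the radius of the value space, never on the number of elements, which is the trade the abstract is pointing at.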

Iterative Decoding Beyond Belief Propagation by Shiva Kumar Planjery, Shashi Kiran Chilappagari, Bane Vasic, David Declercq, Ludovic Danjean. The abstract reads:

At the heart of modern coding theory lies the fact that low-density parity-check (LDPC) codes can be efficiently decoded by belief propagation (BP). The BP is an inference algorithm which operates on a graphical model of a code, and lends itself to low-complexity and high-speed implementations, making it the algorithm of choice in many applications. It has unprecedentedly good error rate performance, so good that when decoded by the BP, LDPC codes approach theoretical limits of channel capacity. However, this capacity approaching property holds only in the asymptotic limit of code length, while codes of practical lengths suffer abrupt performance degradation in the low noise regime known as the error floor phenomenon. Our study of the error floor has led to an interesting and surprising finding that it is possible to design iterative decoders which are much simpler yet better than belief propagation! These decoders do not propagate beliefs but a rather different kind of messages that reflect the local structure of the code graph. This has opened a plethora of exciting theoretical problems and applications. This paper introduces this new paradigm.
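To get a feel for decoders that do not propagate beliefs, here is a classic Gallager-style bit-flipping decoder, a much older and simpler relative of the decoders in the paper, shown on a small Hamming code rather than an LDPC code:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (a small stand-in for
# a low-density parity-check code).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(y, H, max_iter=10):
    # Gallager-style bit flipping: no beliefs are propagated; each bit
    # counts its failed parity checks and the worst offender is flipped.
    y = y.copy()
    for _ in range(max_iter):
        syndrome = H @ y % 2
        if not syndrome.any():
            break                        # all parity checks satisfied
        fails = H.T @ syndrome           # unsatisfied checks per bit
        y[np.argmax(fails)] ^= 1
    return y

received = np.zeros(7, dtype=int)        # the all-zero codeword is valid
received[2] = 1                          # one bit flipped by the channel
decoded = bit_flip_decode(received, H)
print(decoded)                           # [0 0 0 0 0 0 0]
```

The messages here are just small integer counts tied to the local graph structure, which is the spirit, if not the letter, of the "beyond BP" decoders the paper introduces.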


### Around the blogs in 80 summer hours

One person reading this entry must be doing so under his/her summer season. First and foremost, we have two summaries of two different meetings:
Thank you Laurent and Anna for these summaries.

Anna also featured her Lecture 15 on CS
Bob presented the ACM MIRUM video of his GTZAN dataset analysis and the ACM Workshop on Music Information Retrieval with User-Centered and Multimodal Strategies
Suresh talked about On the elections, Nate Silver, and lessons for data mining and Data, Dimensions and Geometry oh my !
Math Blogging featured Mathematical Instruments: cp’s mathem-o-blog
Larry
Emmanuel has some 1-bit news that we mentioned earlier.
Danny talked about Raspberry Pi (we agree these little critters are gonna make Big Data bigger)

Albert talked about Curved CCD sensor and more at Vision Stuttgart 2012
Martin talked about Lund University HumLab eye tracking equipped classroom/lab.

Doyung talked about Belief Propagation for music recommendations with MapReduce and Giraph
Brian suggested to us How to invest in machine vision
