## Friday, March 30, 2012

### Around the blogs in 80 hours and Random Thoughts

Looking at the blogs, I found:
For those of you not following, this is a photo taken by Don Pettit, an astronaut on the International Space Station, as the European ATV is about to dock. Wow.

In other news, if you recall the video of Elaine Mardis two weeks ago on Next-Generation Sequencing Technologies, she mentioned that some company had come up with a new way of sequencing DNA; she also called it vaporware since there weren't any results using the technique yet.

Two weeks later, the technology has been used to get some sequencing data. The instrumentation field certainly seems to be growing fast, but I was blown away by another example of personalized medicine featured in Examining His Own Body, Stanford Geneticist Stops Diabetes in Its Tracks. Obviously, for everybody to have that, we'll need the cloud and the smarts for interpreting all this.

Credit: NASA, Slide by John Wright

Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

### Experimental compressive phase space tomography

Lei Tian just sent me the following:

Dear Igor,

I'm Lei, a graduate student working in Dr. Barbastathis' lab (http://3doptics.mit.edu/) at MIT. Your blog has become my must-read every day since I started working on projects related to compressive imaging. I very much appreciate your efforts to maintain such a good platform for this vibrant community.
We have just published a work in Optics Express titled "Experimental compressive phase space tomography" (http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-20-8-8296), which I thought might interest you.
In this work, we showed that using the tomographic formalism (i.e. phase space tomography) commonly used in the statistical optics community, a partially spatially coherent source can be reconstructed from a heavily under-sampled measurement. The reconstruction method was based on matrix completion theory. This method can be used given the fact that the partially coherent source can be represented with high accuracy using a small number of independent coherence modes (analogous to the Karhunen-Loève theorem). This enables formulating the recovery problem as a low-rank matrix recovery problem.

Best,
-Lei
Thanks Lei !

Phase space tomography estimates correlation functions entirely from snapshots in the evolution of the wave function along a time or space variable. In contrast, traditional interferometric methods require measurement of multiple two–point correlations. However, as in every tomographic formulation, undersampling poses a severe limitation. Here we present the first, to our knowledge, experimental demonstration of compressive reconstruction of the classical optical correlation function, i.e. the mutual intensity function. Our compressive algorithm makes explicit use of the physically justifiable assumption of a low–entropy source (or state.) Since the source was directly accessible in our classical experiment, we were able to compare the compressive estimate of the mutual intensity to an independent ground–truth estimate from the van Cittert–Zernike theorem and verify substantial quantitative improvements in the reconstruction.

### Sparse Measurement Matrices: What are they good for ? (Part deux)

With regards to sparse measurement matrices [Sparse Measurement Matrices: What are they good for ?], Phil Schniter reminded me of this contribution (the first version dates back to 2010) where, according to the authors, we have:

"The first deterministic construction of compressed sensing measurement matrices with an order-optimal number of measurements."

The paper is LDPC Codes for Compressed Sensing by Alexandros Dimakis, Roxana Smarandache, and Pascal Vontobel. The construction is in Corollary 18:

Let dv, dc ∈ Z>0. Consider a measurement matrix HCS ∈ {0, 1}^(m×n) whose Tanner graph is a (dv, dc)-regular bipartite graph with Ω(log n) girth. This measurement matrix succeeds in recovering a randomly supported k = αn sparse vector with probability 1 − o(1) if α is below some threshold function α'(dv, dc, m/n).
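To see what such a matrix looks like, here is a small sketch using Gallager's classic tiered construction of a (dv, dc)-regular binary matrix. Note that this toy version does not control girth, so it only illustrates the degree structure, not the Ω(log n) girth condition of the corollary.

```python
import numpy as np

def gallager_matrix(n, dv, dc, seed=0):
    """Random (dv, dc)-regular binary matrix built from dv stacked
    permutation tiers: every column gets dv ones, every row dc ones."""
    assert n % dc == 0
    m0 = n // dc                    # rows per tier; total rows m = n * dv / dc
    rng = np.random.default_rng(seed)
    tiers = []
    for _ in range(dv):
        perm = rng.permutation(n)
        T = np.zeros((m0, n), dtype=np.uint8)
        for row in range(m0):
            T[row, perm[row * dc:(row + 1) * dc]] = 1
        tiers.append(T)
    return np.vstack(tiers)

H = gallager_matrix(n=24, dv=3, dc=6)   # a 12 x 24 measurement matrix
```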

Also of related interest is Sparse Binary Matrices of LDPC codes for Compressed Sensing by Weizhi Lu, Kidiyo Kpalma, and Joseph Ronsin, but the authors do not seem to be aware of the LDPC Codes for Compressed Sensing paper mentioned above.

Thanks Phil !


## Thursday, March 29, 2012

### EM-GM-AMP: An Algorithm for Sparse Reconstruction (implementation)

"who has the patience to tweak yet another CS algorithm!"

is one of the motivations for the new algorithm devised by Jeremy Vila and Phil Schniter, as featured in Expectation-Maximization Gaussian-Mixture Approximate Message Passing. The abstract reads:
When recovering a sparse signal from noisy compressive linear measurements, the distribution of the signal's non-zero coefficients can have a profound effect on recovery mean-squared error (MSE). If this distribution were a priori known, one could use efficient approximate message passing (AMP) techniques for nearly minimum MSE (MMSE) recovery. In practice, though, the distribution is unknown, motivating the use of robust algorithms like Lasso—which is nearly minimax optimal—at the cost of significantly larger MSE for non-least-favorable distributions. As an alternative, we propose an empirical-Bayesian technique that simultaneously learns the signal distribution while MMSE-recovering the signal—according to the learned distribution—using AMP. In particular, we model the non-zero distribution as a Gaussian mixture, and learn its parameters through expectation maximization, using AMP to implement the expectation step. Numerical experiments confirm the state-of-the-art performance of our approach on a range of signal classes.

From the web page:

EM-GM-AMP: An Algorithm for Sparse Reconstruction

The Expectation-Maximization Gaussian-Mixture Approximate Message Passing (EM-GM-AMP) is an algorithm designed to recover a signal from (possibly) noisy linear measurements. One particularly interesting case is when the measurements are undersampled, i.e., when there are fewer measurements than signal coefficients. With sufficient signal sparsity and a well-conditioned mixing matrix, signal recovery is possible.
EM-GM-AMP assumes a sparsity-promoting i.i.d. Gaussian-mixture signal prior and an i.i.d. additive Gaussian noise prior. It then uses the Generalized Approximate Message Passing (GAMP) algorithm to evaluate the means (MMSE estimates) and variances of the posterior. After running GM-AMP, we then find the Expectation-Maximization (EM) updates of the prior parameters and repeat GM-AMP until convergence. (See publications below for details.)
The EM-GM-AMP MATLAB function attempts to recover a signal through measurements observed from the above linear model. In addition, EM-GM-AMP has some optional inputs to improve computation time/accuracy. Currently, EM-GM-AMP supports only real signals. Examples of the function's usage can be seen in the Examples section.
EM-GM-AMP is a more "flexible" version of EM-Bernoulli-Gaussian AMP. It allows for the GM prior to learn a broader class of signal priors, such as the Bernoulli-Rademacher signals. In this fashion, we aim to leverage the empirical pdf of an arbitrary signal to aid in its recovery.
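EM-GM-AMP itself learns the mixture parameters, which takes some machinery; as a much simpler illustration of the AMP recursion at its core, here is a toy AMP loop with a fixed soft-thresholding denoiser. The sizes, the threshold rule, and the iteration count are illustrative assumptions, not the package's defaults.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 400, 200, 20
A = rng.standard_normal((M, N)) / np.sqrt(M)        # i.i.d. Gaussian sensing matrix
x0 = np.zeros(N)
x0[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = A @ x0                                          # noiseless measurements

def amp_soft(y, A, alpha=1.4, iters=50):
    """Toy AMP: soft-threshold denoising plus the Onsager correction term."""
    M, N = A.shape
    x, z = np.zeros(N), y.copy()
    for _ in range(iters):
        theta = alpha * np.linalg.norm(z) / np.sqrt(M)   # threshold from residual energy
        r = x + A.T @ z                                  # pseudo-data
        x = np.sign(r) * np.maximum(np.abs(r) - theta, 0.0)
        z = y - A @ x + (z / M) * np.count_nonzero(x)    # residual with Onsager term
    return x

xhat = amp_soft(y, A)
rel_err = np.linalg.norm(xhat - x0) / np.linalg.norm(x0)
```

EM-GM-AMP replaces the fixed soft threshold with an MMSE denoiser under the learned Gaussian-mixture prior.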

The implementation is available here and is listed on the compressive sensing big picture page.


### Sparse Measurement Matrices: What are they good for ?

In GraphLab workshop, Why should you care ?, I listed a series of sparse measurement matrices, as they may hold the key to faster reconstruction algorithms once you use cloud computing. The underlying idea is that, since the matrices are sparse, the solvers could use that property and the GraphLab framework to sparsify the computations and distribute them optimally over several CPUs. With that in mind, I listed the following ensembles:
Let us note that these families include matrices with both types of elements: 0/1 and real numbers. On the numerical side, here are the obvious pros for these measurement matrices:

Sparse measurement matrices are indeed useful when it comes to numerical issues, but the real question remains: Is there any other part of the process for which these matrices are useful ? How do they instantiate in real hardware, and why would you map a sparse measurement matrix to a particular sensor or measurement process ?

Whenever you think of compressive sensing, on the acquisition side, you have to think about its main function: multiplexing. Hence the reasoning for sparse measurement matrices resides in asking what the pros and cons of sparse multiplexing are. Here is a list:

Cons:
• In the case of coded aperture or any DMD-based system like the single pixel camera: less luminosity is detected with each measurement, yielding (Poisson) noise issues.
• Coefficients must generally be 0 or 1 in order to have an immediate application (some families of sparse measurement matrices fulfill that condition, however)
• If the multiplexing technology is not accurate enough, any (multiplicative noise) error on one coefficient will immediately result in larger errors.
Pros:

• Storage of the measurement matrix is reduced, thereby enabling instances of these mixing matrices on embedded sensors without much design effort for accessing the coefficients of the matrix,
• If the multiplexing technology is accurate enough, there are fewer possible errors on the coefficients (and therefore less multiplicative noise)
• Potentially fewer measurements
• Some families of sparse measurement matrices have 0/1 coefficients and therefore can have an immediate application
• Measurements are less likely to have clipping issues thereby enabling low dynamic range sensors
• By reducing the number of elements being multiplexed, one keeps the linearity of the function being investigated and keeps the number of automated mixing operations low. Let us take the example of the pooling design ([1][2]): fewer elements being mixed means a smaller likelihood of nonlinearity between reagents and fewer robotic mixing operations
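To make the storage argument concrete, here is a small sketch (illustrative sizes) of a sparse 0/1 measurement matrix where each signal entry feeds only a handful of sensors; compressed sparse row storage is a small fraction of what the equivalent dense real matrix would require.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
N, M, d = 10_000, 1_000, 8          # signal length, sensors, sensors fed per entry

# Sparse 0/1 measurement matrix: each signal entry is mixed into only d sensors
rows = np.concatenate([rng.choice(M, d, replace=False) for _ in range(N)])
cols = np.repeat(np.arange(N), d)
Phi = sparse.csr_matrix((np.ones(N * d), (rows, cols)), shape=(M, N))

x = np.zeros(N)
x[rng.choice(N, 50, replace=False)] = 1.0           # a sparse scene
y = Phi @ x                                         # each sensor mixes ~N*d/M entries

dense_bytes = M * N * 8                             # cost of a dense real matrix
sparse_bytes = Phi.data.nbytes + Phi.indices.nbytes + Phi.indptr.nbytes
```

Each sensor here mixes about 80 of the 10,000 scene elements, which is exactly the "fewer elements being multiplexed" regime discussed above.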

References:
[1] QUAPO : Quantitative Analysis of Pooling in High-Throughput Drug Screening a presentation by Raghu Kainkaryam (a joint work with Anna Gilbert, Paul Shearer and Peter Woolf).

## Wednesday, March 28, 2012

### A Postdoc position at MIT

Piotr Indyk asked me to post the following:

Applications are invited for a Postdoctoral Research Assistant position for the MIT-Shell-Draper Lab research project "Applications of compressive sensing and sparse structure exploitation in hydrocarbon reservoir exploration and surveillance"
The goal of the project is to develop novel compressive sensing algorithms for geoscience problems in the area of hydrocarbon field exploration and surveillance. The appointment is initially for one-year, with the possibility of renewal for up to 3 years. The appointment should start either during the summer (the preferred option) or the fall of 2012.
Duties and Responsibilities:
• Provide expertise on and contribute to the development of compressive
• sensing and sparse recovery algorithms for geoscience applications
• Help MIT faculty in coordination of research projects, including periodic
• reporting and documentation as required by the program timeline
• Frequent travel to Shell facilities in Houston

Minimum Qualifications
• -Ph.D. in Computer Science, Electrical Engineering, Mathematics or related
• disciplines

Preferred Qualifications
• Expertise in sparse recovery,  compressive sensing, streaming/sketching,
• machine learning and statistical inference algorithms
• Experience and/or interest in research projects of interdisciplinary nature
• Programming experience (MATLAB)

Applications (including CV and three reference letters) should be submitted to
https://postdoc.csail.mit.edu/searches/sparsityp-search/
ideally by April 15, 2012. However, applications will be considered until the position is filled.

Thanks Piotr.


## Tuesday, March 27, 2012

### This Week in Compressive Sensing

Felix Krahmer gives a talk at MIT on Compressed sensing bounds via improved estimates for Rademacher chaos. At Johns Hopkins, there is a project on a biomorphic Asynchronous Time-based Imaging Sensor (ATIS). But if you want to know more about compressive sensing, today we also have some pretty interesting papers. Enjoy!

First, when I met Yonina at MIA2012, she mentioned that this paper would be out. If you recall, this was a big controversy, at least to the science journalists who don't understand how science really works: Designing and using prior data in Ankylography: Recovering a 3D object from a single diffraction intensity pattern by Eliyahu Osherovich, Oren Cohen, Yonina C. Eldar, and Mordechai Segev. The abstract reads:
We present a novel method for Ankylography: three-dimensional structure reconstruction from a single shot diffraction intensity pattern. Our approach allows reconstruction of objects containing many more details than was ever demonstrated, in a faster and more accurate fashion.

In this paper, we look at combinatorial algorithms for Compressed Sensing from a different perspective. We show that  certain combinatorial solvers  are in fact recursive  implementations of convex relaxation methods for solving compressed sensing, under the assumption  of sparsity for  the projection matrix.  We extend the notion of sparse binary projection matrices to sparse real-valued ones. We prove that, contrary to their binary counterparts, this class of  sparse-real  matrices has the  Restricted Isometry Property. Finally, we generalize  the voting mechanism (employed in combinatorial algorithms) to  notions of isolation/alignment and present the required solver for real-valued sparse projection matrices based on such isolation/alignment mechanisms.
Combinatorial selection and debiasing ? I have also seen these here, even though the big idea here is gradient sketching: Sublinear Time, Approximate Model-based Sparse Recovery For All by Anastasios Kyrillidis and Volkan Cevher. The abstract reads:

We describe a probabilistic, {\it sublinear} runtime, measurement-optimal system for model-based sparse recovery problems through dimensionality reducing, {\em dense} random matrices. Specifically, we obtain a linear sketch $u \in \mathbb{R}^M$ of a vector $x \in \mathbb{R}^N$ in high dimensions through a matrix $\Phi \in \mathbb{R}^{M\times N}$ ($M < N$). We assume this vector can be well approximated by $K$ non-zero coefficients (i.e., it is $K$-sparse). In addition, the nonzero coefficients of $x$ can obey additional structure constraints such as matroid, totally unimodular, or knapsack constraints, which we dub model-based sparsity. We construct the dense measurement matrix using a probabilistic method so that it satisfies the so-called restricted isometry property in the $\ell_2$-norm. While recovery using such matrices is measurement-optimal as they require the smallest sketch sizes $M = O(K \log(N/K))$, the existing algorithms require superlinear runtime $\Omega(N\log(N/K))$ with the exception of Porat and Strauss, which requires $O(\beta^5\epsilon^{-3}K(N/K)^{1/\beta}), ~\beta \in \mathbb{Z}_{+},$ but provides an $\ell_1/\ell_1$ approximation guarantee. In contrast, our approach features $O\big(\max \lbrace L K \log^{O(1)} N, ~L K^2 \log^2 (N/K) \rbrace\big)$ complexity where $L \in \mathbb{Z}_{+}$ is a design parameter, independent of $N$, requires a smaller sketch size, can accommodate model sparsity, and provides a stronger $\ell_2/\ell_1$ guarantee. Our system applies to "for all" sparse signals, is robust against bounded perturbations in $u$ as well as perturbations on $x$ itself.

Magnetic Resonance Imaging (MRI) is one of the fields in which compressed sensing theory is well utilized to reduce the scan time significantly, leading to faster imaging or higher resolution images. It has been shown that a small fraction of the overall measurements is sufficient to reconstruct images with the combination of compressed sensing and parallel imaging. Various reconstruction algorithms have been proposed for compressed sensing, among which Augmented Lagrangian based methods have been shown to often perform better than others for many different applications. In this paper, we propose new Augmented Lagrangian based solutions to the compressed sensing reconstruction problem with analysis and synthesis prior formulations. We also propose a computational method which makes use of properties of the sampling pattern to significantly improve the speed of the reconstruction for the proposed algorithms in Cartesian sampled MRI. The proposed algorithms are shown to outperform earlier methods especially for the case of dynamic MRI for which the transfer function tends to be a very large matrix and significantly ill conditioned. It is also demonstrated that the proposed algorithm can be accelerated much further than other methods in case of a parallel implementation with graphics processing units (GPUs).

The least absolute shrinkage and selection operator (LASSO) for linear regression exploits the geometric interplay of the $\ell_2$-data error objective and the $\ell_1$-norm constraint to arbitrarily select sparse models. Guiding this uninformed selection process with sparsity models has been precisely the center of attention over the last decade in order to improve learning performance. To this end, we alter the selection process of LASSO to explicitly leverage combinatorial sparsity models (CSMs) via the combinatorial selection and least absolute shrinkage (CLASH) operator. We provide concrete guidelines how to leverage combinatorial constraints within CLASH, and characterize CLASH's guarantees as a function of the set restricted isometry constants of the sensing matrix. Finally, our experimental results show that CLASH can outperform both LASSO and model-based compressive sensing in sparse estimation.
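For contrast with CLASH's model-guided selection, the uninformed LASSO selection it improves upon is easy to sketch with plain iterative soft thresholding (ISTA); the sizes and the regularization weight below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, K = 200, 100, 10
A = rng.standard_normal((M, N)) / np.sqrt(M)
x0 = np.zeros(N)
x0[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = A @ x0

def ista(y, A, lam=0.02, iters=500):
    """Plain LASSO by iterative soft thresholding: a gradient step on the
    l2 data error followed by l1 shrinkage (the uninformed selection step
    that model-guided methods replace with a combinatorial projection)."""
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return x

xhat = ista(y, A)
rel_err = np.linalg.norm(xhat - x0) / np.linalg.norm(x0)
```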

Compressed sensing (CS) studies the recovery of high dimensional signals from their low dimensional linear measurements under a sparsity prior. This paper is focused on the CS problem with quantized measurements. There have been research results dealing with different scenarios including a single/multiple bits per measurement, noiseless/noisy environment, and an unsaturated/saturated quantizer. While the existing methods are only for one or more specific cases, this paper presents a framework to unify all the above mentioned scenarios of the quantized CS problem. Under the unified framework, a variational Bayesian inference based algorithm is proposed which is applicable to the signal recovery of different application cases. Numerical simulations are carried out to illustrate the improved signal recovery accuracy of the unified algorithm in comparison with state-of-the-art methods for both multiple and single bit CS problems.

Analysis of Sparse MIMO Radar by Thomas Strohmer and Benjamin Friedlander. The abstract reads:
We consider a multiple-input-multiple-output radar system and derive a theoretical framework for the recoverability of targets in the azimuth-range domain and the azimuth-range-Doppler domain via sparse approximation algorithms. Using tools developed in the area of compressive sensing, we prove bounds on the number of detectable targets and the achievable resolution in the presence of additive noise. Our theoretical findings are validated by numerical simulations.

Sparsity Constrained Nonlinear Optimization: Optimality Conditions and Algorithms by Amir Beck, Yonina C. Eldar . The abstract reads:

This paper treats the problem of minimizing a general continuously differentiable function subject to sparsity constraints. We present and analyze several different optimality criteria which are based on the notions of stationarity and coordinate-wise optimality. These conditions are then used to derive three numerical algorithms aimed at finding points satisfying the resulting optimality criteria: the iterative hard thresholding method and the greedy and partial sparse-simplex methods. The first algorithm is essentially a gradient projection method while the remaining two algorithms are of coordinate descent type. The theoretical convergence of these methods and their relations to the derived optimality conditions are studied. The algorithms and results are illustrated by several numerical examples.
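Of the three algorithms, iterative hard thresholding is the easiest to sketch; here is a toy version on a synthetic sparse recovery problem (the sensing matrix is rescaled so that a unit step size is stable, and all sizes are illustrative).

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, K = 256, 128, 8
A = rng.standard_normal((M, N))
A /= np.linalg.norm(A, 2)           # rescale so ||A|| <= 1 and a unit step is stable
x0 = np.zeros(N)
x0[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = A @ x0

def iht(y, A, K, iters=300):
    """Iterative hard thresholding: gradient step on the data misfit,
    then keep only the K largest-magnitude entries."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + A.T @ (y - A @ x)
        idx = np.argsort(np.abs(g))[-K:]     # support of the K largest entries
        x = np.zeros_like(x)
        x[idx] = g[idx]
    return x

xhat = iht(y, A, K)
rel_err = np.linalg.norm(xhat - x0) / np.linalg.norm(x0)
```

The hard-threshold step is a projection onto the sparsity constraint set, which is exactly the gradient projection viewpoint taken in the paper.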

Spread spectrum magnetic resonance imaging by Gilles Puy, Jose P. Marques, Rolf Gruetter, Jean-Philippe Thiran, Dimitri Van De Ville, Pierre Vandergheynst, Yves Wiaux. The abstract reads:
We propose a novel compressed sensing technique to accelerate the magnetic resonance imaging (MRI) acquisition process. The method, coined spread spectrum MRI or simply s2MRI, consists of pre-modulating the signal of interest by a linear chirp before random k-space under-sampling, and then reconstructing the signal with non-linear algorithms that promote sparsity. The effectiveness of the procedure is theoretically underpinned by the optimization of the coherence between the sparsity and sensing bases. The proposed technique is thoroughly studied by means of numerical simulations, as well as phantom and in vivo experiments on a 7T scanner. Our results suggest that s2MRI performs better than state-of-the-art variable density k-space under-sampling approaches

We present an ordinary differential equations approach to the analysis of algorithms for constructing $l_1$ minimizing solutions to underdetermined linear systems of full rank. It involves a relaxed minimization problem whose minimum is independent of the relaxation parameter. An advantage of using the ordinary differential equations is that energy methods can be used to prove convergence. The connection to the discrete algorithms is provided by the Crandall-Liggett theory of monotone nonlinear semigroups. We illustrate the effectiveness of the discrete optimization algorithm in some sparse array imaging problems.
In this paper, we investigate the fundamental performance limits of data gathering with compressive sensing (CS) in wireless sensor networks, in terms of both energy and latency. We consider two scenarios in which n nodes deliver data in centralized and distributed fashions, respectively. We take a new look at the problem of data gathering with compressive sensing from the perspective of in-network computation and formulate it as distributed function computation. We propose tree-based and gossip-based computation protocols and characterize the scaling of energy and latency requirements for each protocol. The analytical results of computation complexity show that the proposed CS-based protocols are efficient for the centralized fashion. In particular, we show the proposed CS-based protocol can save energy and reduce latency by a factor of Θ(√(n log n)/m) when m = O(√(n log n)) in noiseless networks, where m is the number of random projections for signal recovery. We also show that our proposed protocol can save energy by a factor of Θ(√n/(m√(log n))) compared with the traditional transmission approach when m = O(√(n/log n)) in noisy networks. For the distributed fashion, we show that the proposed gossip-based protocol can improve upon the scheme using randomized gossip, which needs fewer transmissions. Finally, simulations are also presented to demonstrate the effectiveness of our proposed protocols.
In this paper, we study data gathering with compressive sensing from the perspective of in-network computation in random networks, in which n nodes are uniformly and independently deployed in a unit square area. We formulate the problem of data gathering to compute multiround random linear function. We study the performance of in-network computation with compressive sensing in terms of energy consumption and latency in centralized and distributed fashions. For the centralized approach, we propose a tree-based protocol for computing multiround random linear function. The complexity of computation shows that the proposed protocol can save energy and reduce latency by a factor of Θ(√n/log n) for data gathering compared with the traditional approach. For the distributed approach, we propose a gossip-based approach and study the performance of energy and latency through theoretical analysis. We show that our approach needs fewer transmissions than the scheme using randomized gossip.

Proof of Convergence and Performance Analysis for Sparse Recovery via Zero-point Attracting Projection by Xiaohan Wang, Yuantao Gu, and Laming Chen. The abstract reads:

A recursive algorithm named Zero-point Attracting Projection (ZAP) is proposed recently for sparse signal reconstruction. Compared with the reference algorithms, ZAP demonstrates rather good performance in recovery precision and robustness. However, any theoretical analysis about the mentioned algorithm, even a proof on its convergence, is not available. In this work, a strict proof on the convergence of ZAP is provided and the condition of convergence is put forward. Based on the theoretical analysis, it is further proved that ZAP is non-biased and can approach the sparse solution to any extent, with the proper choice of step-size. Furthermore, the case of inaccurate measurements in noisy scenario is also discussed. It is proved that disturbance power linearly reduces the recovery precision, which is predictable but not preventable. The reconstruction deviation of $p$-compressible signal is also provided. Finally, numerical simulations are performed to verify the theoretical analysis.
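ZAP alternates a zero-attracting gradient step with a projection back onto the solution space {x : Ax = y}. Here is a toy version that uses the plain l1 subgradient as the attracting term (the paper uses an approximate l0 penalty; the step size and iteration count are illustrative).

```python
import numpy as np

rng = np.random.default_rng(4)
N, M, K = 100, 50, 5
A = rng.standard_normal((M, N))
x0 = np.zeros(N)
x0[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = A @ x0

Ap = np.linalg.pinv(A)
P = np.eye(N) - Ap @ A              # projector onto the null space of A
x = Ap @ y                          # start from the minimum-norm solution
l1_start = np.linalg.norm(x, 1)

for _ in range(2000):
    # zero-attracting step (here an l1 subgradient), projected so Ax = y still holds
    x = x - 0.005 * P @ np.sign(x)

l1_end = np.linalg.norm(x, 1)
```

Because the attracting step is projected onto the null space of A, every iterate stays consistent with the measurements while its sparsity-promoting norm shrinks.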

In this paper, we consider compressed sensing (CS) of block-sparse signals, i.e., sparse signals that have nonzero coefficients occurring in clusters. An efficient algorithm, called zero-point attracting projection (ZAP) algorithm, is extended to the scenario of block CS. The block version of ZAP algorithm employs an approximate $l_{2,0}$ norm as the cost function, and finds its minimum in the solution space via iterations. For block sparse signals, an analysis of the stability of the local minimums of this cost function under the perturbation of noise reveals an advantage of the proposed algorithm over its original non-block version in terms of reconstruction error. Finally, numerical experiments show that the proposed algorithm outperforms other state of the art methods for the block sparse problem in various respects, especially the stability under noise.

As one of the recently proposed algorithms for sparse system identification, $l_0$ norm constraint Least Mean Square ($l_0$-LMS) algorithm modifies the cost function of the traditional method with a penalty of tap-weight sparsity. The performance of $l_0$-LMS is quite attractive compared with its various precursors. However, there has been no detailed study of its performance. This paper presents all-around and throughout theoretical performance analysis of $l_0$-LMS for white Gaussian input data based on some reasonable assumptions. Expressions for steady-state mean square deviation (MSD) are derived and discussed with respect to algorithm parameters and system sparsity. The parameter selection rule is established for achieving the best performance. Approximated with Taylor series, the instantaneous behavior is also derived. In addition, the relationship between $l_0$-LMS and some previous arts and the sufficient conditions for $l_0$-LMS to accelerate convergence are set up. Finally, all of the theoretical results are compared with simulations and are shown to agree well in a large range of parameter setting.
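The flavor of the algorithm is easy to sketch: a standard LMS update plus a zero-attracting term derived from a smoothed l0 penalty. All the parameter values below are illustrative, not the paper's tuned ones.

```python
import numpy as np

rng = np.random.default_rng(8)
N, T, K = 64, 20_000, 4
h = np.zeros(N)
h[rng.choice(N, K, replace=False)] = rng.standard_normal(K)   # sparse unknown system

x = rng.standard_normal(T + N)          # white Gaussian input
w = np.zeros(N)                         # adaptive filter taps
mu, kappa, beta = 0.005, 2e-5, 5.0      # step size, sparsity weight, l0 smoothing
noise_std = 0.01

for t in range(T):
    u = x[t:t + N]                      # current input regressor
    d = h @ u + noise_std * rng.standard_normal()
    e = d - w @ u
    # standard LMS update plus a zero-attracting term from a smoothed l0 penalty
    zero_attract = beta * np.sign(w) * np.maximum(0.0, 1.0 - beta * np.abs(w))
    w += mu * e * u - kappa * zero_attract

msd = np.linalg.norm(w - h) ** 2        # steady-state mean square deviation
```

The attractor only acts on taps smaller than 1/beta, so large active taps are left essentially unbiased while near-zero taps are pulled to exactly zero.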

A new sparse signal recovery algorithm for multiple-measurement vectors (MMV) problem is proposed in this paper. The sparse representation is iteratively drawn based on the idea of zero-point attracting projection (ZAP). In each iteration, the solution is first updated along the negative gradient direction of sparse constraint, and then projected to the solution space to satisfy the under-determined equation. A variable step size scheme is adopted further to accelerate the convergence as well as to improve the recovery accuracy. Numerical simulations demonstrate that the performance of the proposed algorithm exceeds the references in various aspects, as well as when applied to the Modulated Wideband Converter, where recovering MMV problem is crucial to its performance.

And finally, a PhD thesis: Numerical methods for phase retrieval by Eliyahu Osherovich. The abstract reads:

In this work we consider the problem of reconstruction of a signal from the magnitude of its Fourier transform, also known as phase retrieval. The problem arises in many areas of astronomy, crystallography, optics, and coherent diffraction imaging (CDI). Our main goal is to develop an efficient reconstruction method based on continuous optimization techniques. Unlike current reconstruction methods, which are based on alternating projections, our approach leads to a much faster and more robust method. However, all previous attempts to employ continuous optimization methods, such as Newton-type algorithms, to the phase retrieval problem failed. In this work we provide an explanation for this failure, and based on this explanation we devise a sufficient condition that allows development of new reconstruction methods---approximately known Fourier phase. We demonstrate that a rough (up to $\pi/2$ radians) Fourier phase estimate practically guarantees successful reconstruction by any reasonable method. We also present a new reconstruction method whose reconstruction time is orders of magnitude faster than that of the current method-of-choice in phase retrieval---Hybrid Input-Output (HIO). Moreover, our method is capable of successful reconstruction even in the situations where HIO is known to fail. We also extended our method to other applications: Fourier domain holography, and interferometry. Additionally we developed a new sparsity-based method for sub-wavelength CDI. Using this method we demonstrated experimental resolution exceeding several times the physical limit imposed by the diffraction light properties (so called diffraction limit).
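The alternating-projection baseline that the thesis improves upon (error reduction, the ancestor of HIO) is easy to sketch in 1-D; the signal, support size, and iteration count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n, s = 64, 16
x0 = np.zeros(n)
x0[:s] = rng.random(s)                  # nonnegative signal on a known support
mag = np.abs(np.fft.fft(x0))            # measured Fourier magnitude
support = np.arange(n) < s

def error_reduction(mag, support, iters=200, seed=6):
    """Alternating projections: impose the measured magnitude in Fourier
    space, then the support/nonnegativity constraints in object space."""
    rng = np.random.default_rng(seed)
    x = rng.random(len(mag)) * support  # random feasible starting guess
    errs = []
    for _ in range(iters):
        X = np.fft.fft(x)
        X = mag * np.exp(1j * np.angle(X))      # keep phase, replace magnitude
        x = np.real(np.fft.ifft(X))
        x = np.where(support & (x > 0), x, 0.0) # object-domain constraints
        errs.append(np.linalg.norm(np.abs(np.fft.fft(x)) - mag))
    return x, errs

xhat, errs = error_reduction(mag, support)
```

The error here is nonincreasing by construction (Fienup's classic result), but the iteration is well known to stagnate; the continuous-optimization approach with a rough Fourier phase estimate is precisely what the thesis proposes to escape that behavior.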


## Monday, March 26, 2012

### Some Software Packages for Partial SVD Computation (implementation)

Following up on the Online SVD/PCA resources entry that Danny just put up on his blog, I wanted to mention some resources that, after some checking, I seemingly failed to mention before. They are all listed in: Some Software Packages for Partial SVD Computation by Zhouchen Lin. The abstract reads:
This technical report introduces some software packages for partial SVD computation, including optimized PROPACK, modified PROPACK for computing singular values above a threshold and the corresponding singular vectors, and block Lanczos with warm start (BLWS). The current version is preliminary. The details will be enriched soon.
If you recall, most folks, as recently as the Matrix ALPS work, seem to be using PROPACK:

The software package PROPACK contains a set of functions for computing the singular value decomposition of large and sparse or structured matrices. The SVD routines are based on the Lanczos bidiagonalization algorithm with partial reorthogonalization (BPRO). The Lanczos routines can also be used directly, and form the basis of efficient algorithms for solving linear systems of equations and linear least squares problems, in particular for systems with multiple right-hand sides.
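For a quick experiment without installing PROPACK, SciPy's `svds` computes the same kind of Lanczos-based partial SVD (recent SciPy versions even expose a `solver="propack"` option); the sparse matrix below is a made-up example:

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import svds

# A large-ish sparse matrix for which a full dense SVD would be wasteful
A = sprandom(500, 400, density=0.02, random_state=0, format="csr")

# Partial SVD: only the k leading singular triplets, computed by
# Lanczos-type iterations that touch A only through mat-vec products
u, s, vt = svds(A, k=6)
s = s[::-1]  # svds returns singular values in ascending order
```

Only matrix-vector products with A and its transpose are needed, which is exactly why these methods scale to the large sparse or structured matrices PROPACK targets.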
Eventually, I would not be surprised if it were used for algorithms such as SL0 and its family. Zhouchen went on to produce the following (links to implementations):

For more on the BLWS, the attendant paper is: A Block Lanczos with Warm Start Technique for Accelerating Nuclear Norm Minimization Algorithms  by Zhouchen Lin and Siming Wei. The abstract reads:
Recent years have witnessed the popularity of using rank minimization as a regularizer for various signal processing and machine learning problems. As rank minimization problems are often converted to nuclear norm minimization (NNM) problems, they have to be solved iteratively and each iteration requires computing a singular value decomposition (SVD). Therefore, their solution suffers from the high computation cost of multiple SVDs. To relieve this issue, we propose using the block Lanczos method to compute the partial SVDs, where the principal singular subspaces obtained in the previous iteration are used to start the block Lanczos procedure. To avoid the expensive reorthogonalization in the Lanczos procedure, the block Lanczos procedure is performed for only a few steps. Our block Lanczos with warm start (BLWS) technique can be adopted by different algorithms that solve NNM problems. We present numerical results on applying BLWS to Robust PCA and Matrix Completion problems. Experimental results show that our BLWS technique usually accelerates its host algorithm by at least two to three times.
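To see where the repeated SVD cost arises, here is a bare-bones singular value thresholding (SVT) loop for matrix completion; the dense `np.linalg.svd` call inside the loop is the step a warm-started block Lanczos scheme such as BLWS would replace. Sizes and parameters are illustrative, not from the paper:

```python
import numpy as np

def svt_complete(M_obs, mask, tau=5.0, step=1.2, n_iter=100):
    """Singular value thresholding for matrix completion. Each iteration
    shrinks the singular values of Y, which only requires the leading
    part of the SVD -- and the principal subspace changes little between
    iterations, which is precisely what warm starting exploits."""
    Y = np.zeros_like(M_obs)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)  # the costly step
        X = U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)  # shrink singular values
        Y = Y + step * mask * (M_obs - X)                 # feed back observed residual
    return X

# Toy rank-2 completion problem with roughly 60% of entries observed
rng = np.random.default_rng(0)
M = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 40))
mask = (rng.random((40, 40)) < 0.6).astype(float)
X = svt_complete(M * mask, mask)
```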

As partial SVD is important in algorithms dedicated to nuclear norm minimization, these resources are listed in the Matrix Factorization Jungle.


### Convex feasibility modeling and projection methods for sparse signal recovery (implementation)

Dick Gordon mentioned to me the following: Convex feasibility modeling and projection methods for sparse signal recovery by Avishy Carmi, Yair Censor and Pini Gurfil. The abstract reads:

A computationally-efficient method for recovering sparse signals from a series of noisy observations, known as the problem of compressed sensing (CS), is presented. The theory of CS usually leads to a constrained convex minimization problem. In this work, an alternative outlook is proposed. Instead of solving the CS problem as an optimization problem, it is suggested to transform the optimization problem into a convex feasibility problem (CFP), and solve it using feasibility-seeking Sequential and Simultaneous Subgradient Projection methods, which are iterative, fast, robust and convergent schemes for solving CFPs. As opposed to some of the commonly-used CS algorithms, such as Bayesian CS and Gradient Projections for sparse reconstruction, which become inefficient as the problem dimension and sparseness degree increase, the proposed methods exhibit robustness with respect to these parameters. Moreover, it is shown that the CFP-based projection methods are superior to some of the state-of-the-art methods in recovering the signal's support. Numerical experiments show that the CFP-based projection methods are viable for solving large-scale CS problems with compressible signals.
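The flavor of the feasibility-seeking viewpoint can be conveyed with a toy sketch: treat each measurement |a_i.x - y_i| <= eps as a convex slab and project onto the slabs cyclically, with a small soft-threshold pass nudging the iterate toward sparsity. This is a simplified stand-in, not the authors' exact CSP algorithm, and all sizes below are made up:

```python
import numpy as np

def csp_sketch(A, y, eps=1e-3, shrink=1e-4, n_sweeps=300):
    """Cyclic projections onto the measurement slabs |a_i.x - y_i| <= eps,
    with a small soft-threshold after each sweep to favor sparse iterates.
    A rough illustration of the convex feasibility (CFP) approach."""
    m, n = A.shape
    x = np.zeros(n)
    row_norm2 = (A * A).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(m):
            r = A[i] @ x - y[i]
            if abs(r) > eps:                      # outside slab i: project onto it
                x -= (r - np.sign(r) * eps) / row_norm2[i] * A[i]
        x = np.sign(x) * np.maximum(np.abs(x) - shrink, 0.0)
    return x

# Toy compressed sensing problem: 8-sparse signal, 80 noiseless measurements
rng = np.random.default_rng(3)
A = rng.standard_normal((80, 200))
x0 = np.zeros(200)
x0[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
y = A @ x0
x_rec = csp_sketch(A, y)
```

Each inner step is a cheap rank-one update, which is why these schemes stay viable as the problem dimension grows.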

The Cyclic Subgradient Projection (CSP) is implemented here.
Thanks Dick.


## Sunday, March 25, 2012

### The Bet

Imagine a room filled with conference attendees listening carefully to the latest derivation of an approximation of the linear transport equation. It's time for questions. Some attendees don't care, but most do not want to expose their sub-optimal understanding of what was just presented to them. Then, from the back of the room, quite confidently, Paul Zweifel, who minutes ago was outside the room, raises his hand and asks: "What you describe here... does that work with low-energy neutrons?" Paul was conspicuous when he entered the room, so the presenter is a little flabbergasted at first, but the crowd can clearly hear a definitive, slamdunkish "Yes, Paul," followed by "You were not here earlier, but there was no assumption about speed, so the derivation also works for those energy levels. Thanks for asking." Sensing that he is losing a bet he just made, that one can always ask an insightful question at a presentation one has barely seen, Paul quips back, "I guess I wasn't clear, but I meant really, really low-speed neutrons." Upon hearing those words, the presenter's face quite literally lights up: "Oh, you mean ultracold neutrons. Well, this is interesting, because in that case one can still avoid a full quantum mechanical treatment of the transport equation by replacing this term and....." The feverish explanation goes on for five extremely long minutes... All agreed Paul won his bet.

What's the moral of the story? Everybody has a different set of assumptions.


### You explore the world with the sensor you have, not the sensor you wished you had.

Within the realm of Compressive Sensing, the answer to the following question seems obvious, initially at least, but then, it is not:
I have a problem with the reconstruction algorithm based on wavelet sampling. For example, y = Ex, where E is a partial wavelet transform that can select the significant wavelet coefficients. Is there a reconstruction algorithm for that, or any papers discussing this problem?
The canned answer, stating that you have to have an incoherent basis for sampling, is not optimal for several reasons:

• Who says the problem is for a signal that is sparse in some wavelet basis?
• Who says that curvelets are not the real basis in which you want to decompose your signal? I mean, you need many wavelets to get the equivalent of a curvelet.
• Who says the undersampling needs to be optimal? Who are you to think that the practitioner needs to have an optimal set-up? There are many pairs of bases for which incoherence is not optimal, but then the requirement on the number of samples is not optimal either. You explore the world with the sensor you have, not the sensor you wished you had.
• Who says that, using any one of these solvers, the undersampling will not get better? I mean, most results are stated in O notation and most problems are finite.
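To make the question concrete, here is a tiny numerical sketch with a hand-rolled Haar transform and a made-up signal: if the operator E really does grab the significant wavelet coefficients, even the plain zero-filled adjoint is a sensible reconstruction, before any incoherence argument or sparse solver enters the picture:

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar wavelet transform matrix, n a power of two."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    averages = np.kron(h, [1.0, 1.0])
    details = np.kron(np.eye(n // 2), [1.0, -1.0])
    return np.vstack([averages, details]) / np.sqrt(2.0)

n = 64
W = haar_matrix(n)
rng = np.random.default_rng(0)
# Piecewise-constant signal: sparse in the Haar basis
x = np.cumsum(rng.standard_normal(n) * (rng.random(n) < 0.1))

# y = E x, with E the rows of W that pick the 16 largest coefficients
keep = np.argsort(-np.abs(W @ x))[:16]
y = (W @ x)[keep]

# Zero-filled adjoint reconstruction: inverse transform of the kept
# coefficients, the rest set to zero
c = np.zeros(n)
c[keep] = y
x_rec = W.T @ c
```

By Parseval, the reconstruction error is exactly the energy of the discarded coefficients, which is the point of sampling the significant ones in the first place.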

Thanks Akshay and Zheng for this insightful discussion.


## Saturday, March 24, 2012

### GMR killed the Media Star

Jacques Mattheij makes the case that the movie/media industry is dying because

"...The reason why I believe this to be the case is a very simple technical one: it will not be very long now when a reasonably small package (say 3.5" x 5.25" x 1") will hold all the movies and all the music that we've ever made. Right now using a 4T drive you can store 5700 movies at a usable resolution or 1.3 million songs. That's more content than anybody can consume in a lifetime...."
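The quoted figures check out with back-of-the-envelope arithmetic, assuming the 4 TB drive holds 4×10^12 bytes:

```python
# 4 TB split across the quoted media counts
drive_bytes = 4e12
per_movie = drive_bytes / 5700      # roughly 700 MB per movie
per_song = drive_bytes / 1.3e6      # roughly 3 MB per song
```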
All these years the internet was seen as a proxy for how times are changing, when in fact the real culprit has been a technology powered by the serendipitous 1988 discovery of Giant Magnetoresistance (GMR). It makes you wonder how warped some of our current concepts can be. For instance, our current understanding of compressive sensing, computational photography, or Big Data can be nicely framed in this slide by John Wright

How would that understanding change in light of Colossal Magnetoresistance technologies and the daily improvements in computational methods?


## Friday, March 23, 2012

### Calibration Time is Too Damn High (part 2)

Following up on Part I:

The other meeting I attended was the annual JIONC, which stands for Journées Imagerie Optique Non Conventionnelle, a workshop focused on unconventional optical imagery. I just could not pass up attending something with unconventional imaging in the title. As we all know, Mark Neifeld said it well in this slide

even conventional imaging systems are already compressive, so unconventional imagery has to have even more compressive sensing built in. I was not surprised that it did, but one of the presentations I particularly liked was that of Pauline Trouvé, who is involved in building a depth-sensing camera that exploits the fact that the PSFs of the RGB channels differ when using a coded aperture.

In particular, she mentioned to me that the calibration of that camera, taking into account the Bayer pattern, required about 9 PSFs evenly distributed over the whole CCD (with the hope that they do not change drastically across it), and that a whopping 70 measurements were needed, for distances between 1 and 5 meters, to determine each PSF. Let us recall that if one wanted to do the same for a random lens imager, one could not even count on symmetry across the CCD, and one would have to evaluate many more than 9 PSFs (each requiring more than 70 measurements). I am really thinking that we ought to use robots to help at that stage. There is a calibration club on LinkedIn, let's start a conversation....

A related publication is: Single Image Local Blur Identification by P. Trouvé, F. Champagnat, G. Le Besnerais and J. Idier. The abstract reads:
We present a new approach for spatially varying blur identification using a single image. Within each local patch in the image, the local blur is selected from a finite set of candidate PSFs by a maximum likelihood approach. We propose to work with a Generalized Likelihood to reduce the number of parameters, and we use the Generalized Singular Value Decomposition to limit the computing cost while making proper image boundary hypotheses. The resulting method is fast and demonstrates good performance on simulated and real examples originating from applications such as motion blur identification and depth from defocus.
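The selection step can be mimicked in a toy setting. The sketch below scores each candidate PSF by a Gaussian log-likelihood in the Fourier domain with the unknown gain profiled out; this is a simplification in the spirit of the paper's Generalized Likelihood, not its actual method (no GSVD, circular boundary conditions, made-up PSFs):

```python
import numpy as np

def select_psf(patch, psfs, sigma2=1e-2):
    """Pick the candidate PSF under which the observed patch is most
    likely, modeling the patch spectrum as Gaussian with per-frequency
    variance proportional to |H|^2 + sigma2, the overall gain being
    profiled out (its ML estimate plugged back in)."""
    Y2 = np.abs(np.fft.fft2(patch)) ** 2
    N = Y2.size
    best, best_ll = None, -np.inf
    for k, h in enumerate(psfs):
        v = np.abs(np.fft.fft2(h, s=patch.shape)) ** 2 + sigma2
        gain = np.mean(Y2 / v)                  # ML estimate of the unknown gain
        ll = -N * np.log(gain) - np.sum(np.log(v))
        if ll > best_ll:
            best, best_ll = k, ll
    return best

# Toy test: blur white noise with a 3x3 box and see if it is identified
rng = np.random.default_rng(1)
x = rng.standard_normal((64, 64))
delta = np.zeros((3, 3)); delta[0, 0] = 1.0    # candidate 0: no blur
box = np.ones((3, 3)) / 9.0                    # candidate 1: 3x3 box blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(box, s=x.shape)))
patch = blurred + 0.1 * rng.standard_normal((64, 64))
chosen = select_psf(patch, [delta, box])
```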


### Calibration Time is Too Damn High (part I)

I went to two meetings this week, and the topic reared its ugly head in some insidious fashion. First there was this presentation by Bruno Maisonnier, the CEO of Aldebaran Robotics, the makers of NAO,

The distinct impression I got from the presentation was that the company was trying to cross over from the industrial market into a more public one. Their setting up of an app store also points to them trying to be the Apple of robotics. The reason you have an Apple fanbase that makes any company envious has a lot to do with the fact that the killer apps on the Mac were initially focused on designers and artistic types. The machine was out of the way, and the artists just seamlessly cranked out their art. With NAO, it would seem to me that the behavioral engine still takes too much time before the robot can actually mesmerize its artist owners. And I think the underlying reason is that supervised or unsupervised learning by robots, which one could call the calibration stage, still takes too much time, even with the engineer's time factored in. Most artists don't deal well with SDKs. Like New York City rents, the calibration time is too damn high.

There is a calibration club on LinkedIn, let's start a conversation....

## Thursday, March 22, 2012

### Reconstruction of hidden 3D shapes using diffuse reflections and Flutter Shutter Video Camera for Compressive Sensing

Three additional papers caught my interest this week when it came to compressive imaging (the first two are related). I just have one word for these: Wow.

We analyze multi-bounce propagation of light in an unknown hidden volume and demonstrate that the reflected light contains sufficient information to recover the 3D structure of the hidden scene. We formulate the forward and inverse theory of secondary and tertiary scattering reflection using ideas from energy front propagation and tomography. We show that a careful choice of approximations, such as the Fresnel approximation, greatly simplifies this problem, and that the inversion can be achieved via a backpropagation process. We provide a theoretical analysis of the invertibility, uniqueness and choices of space-time-angle dimensions using synthetic examples. We show that a 2D streak camera can be used to discover and reconstruct hidden geometry. Using a 1D high-speed time-of-flight camera, we show that our method can be used to recover 3D shapes of objects "around the corner".

The recovery of objects obscured by scattering is an important goal in imaging and has been approached by exploiting, for example, coherence properties, ballistic photons or penetrating wavelengths. Common methods use scattered light transmitted through an occluding material, although these fail if the occluder is opaque. Light is scattered not only by transmission through objects, but also by multiple reflection from diffuse surfaces in a scene. This reflected light contains information about the scene that becomes mixed by the diffuse reflections before reaching the image sensor. This mixing is difficult to decode using traditional cameras. Here we report the combination of a time-of-flight technique and computational reconstruction algorithms to untangle image information mixed by diffuse reflection. We demonstrate a three-dimensional range camera able to look around a corner using diffusely reflected light that achieves sub-millimetre depth precision and centimetre lateral precision over 40 cm×40 cm×40 cm of hidden space.
Thanks Sylvain for the heads-up. Let me note that the team used CoSaMP and SPGL1, and that CoSaMP seems to be working better. I wonder where that comes from? Maybe because CoSaMP is solving a different problem than an L1 minimization. I also note that this is the second wow of the week, and that in both instances the solvers used are great, but improvement could probably be gained by using some of the newer and faster ones.

Video cameras are invariably bandwidth limited and this results in a trade-off between spatial and temporal resolution. Advances in sensor manufacturing technology have tremendously increased the available spatial resolution of modern cameras while simultaneously lowering the costs of these sensors. In stark contrast, hardware improvements in temporal resolution have been modest. One solution to enhance temporal resolution is to use high bandwidth imaging devices such as high speed sensors and camera arrays. Unfortunately, these solutions are expensive. An alternate solution is motivated by recent advances in computational imaging and compressive sensing. Camera designs based on these principles, typically, modulate the incoming video using spatio-temporal light modulators and capture the modulated video at a lower bandwidth. Reconstruction algorithms, motivated by compressive sensing, are subsequently used to recover the high bandwidth video at high fidelity. Though promising, these methods have been limited since they require complex and expensive light modulators that make the techniques difficult to realize in practice. In this paper, we show that a simple coded exposure modulation is sufficient to reconstruct high speed videos. We propose the Flutter Shutter Video Camera (FSVC) in which each exposure of the sensor is temporally coded using an independent pseudo-random sequence. Such exposure coding is easily achieved in modern sensors and is already a feature of several machine vision cameras. We also develop two algorithms for reconstructing the high speed video; the first based on minimizing the total variation of the spatiotemporal slices of the video and the second based on a data driven dictionary based approximation. We perform evaluation on simulated videos and real data to illustrate the robustness of our system.
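The measurement model at the heart of the FSVC is easy to state: each captured frame is a code-weighted sum of the unobserved high-speed frames. A minimal sketch of that forward model (sizes and code are made up):

```python
import numpy as np

# One flutter-shutter exposure: the sensor integrates the high-speed
# frames x_t, gated by a pseudo-random binary shutter code c_t.
rng = np.random.default_rng(0)
T, H, W = 16, 32, 32                       # 16 high-speed frames per exposure
video = rng.random((T, H, W))              # unknown high-speed video
code = rng.integers(0, 2, size=T)          # open/closed shutter pattern
frame = np.tensordot(code, video, axes=1)  # what the sensor actually records

# Recovering `video` from a sequence of such frames is then a linear
# inverse problem, regularized e.g. by total variation of the
# spatio-temporal slices or by a learned dictionary, as in the paper.
```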
