Wednesday, November 02, 2011

EM-BG-AMP A Compressive Sensing Algorithm

Today we have a new solver, EM-BG-AMP, by Jeremy Vila and Philip Schniter, that produces recovery rates well beyond the Donoho-Tanner phase transition. In some cases, like the one above, its results are equivalent to those of the recent solver of Florent Krzakala, Marc Mézard, François Sausset, Yifan Sun, and Lenka Zdeborová (Statistical physics-based reconstruction in compressed sensing). The main difference is that the KMSSZ solver is tailored to a particular measurement matrix, whereas the EM-BG-AMP solver is not. The accompanying paper to the EM-BG-AMP solver is Expectation-Maximization Bernoulli-Gaussian Approximate Message Passing by Jeremy Vila and Philip Schniter. The abstract reads:

The approximate message passing (AMP) algorithm originally proposed by Donoho, Maleki, and Montanari yields a computationally attractive solution to the usual ℓ1-regularized least-squares problem faced in compressed sensing, whose solution is known to be robust to the signal distribution. When the signal is drawn i.i.d. from a marginal distribution that is not least-favorable, better performance can be attained using a Bayesian variation of AMP. The latter, however, assumes that the distribution is perfectly known. In this paper, we navigate the space between these two extremes by modeling the signal as i.i.d. Bernoulli-Gaussian (BG) with unknown prior sparsity, mean, and variance, and the noise as zero-mean Gaussian with unknown variance, and we simultaneously reconstruct the signal while learning the prior signal and noise parameters. To accomplish this task, we embed the BG-AMP algorithm within an expectation-maximization (EM) framework. Numerical experiments confirm the excellent performance of our proposed EM-BG-AMP on a range of signal types.

From the website:

EM-BG-AMP: An Algorithm for Sparse Reconstruction

Overview

Expectation-Maximization Bernoulli-Gaussian Approximate Message Passing (EM-BG-AMP) is an algorithm designed to recover a signal x from (possibly) noisy measurements y = Ax + w. One particularly interesting case is when the measurements are undersampled (i.e., M < N). With sufficient signal sparsity and a well-conditioned mixing matrix, signal recovery is possible.
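
As a concrete illustration, here is a minimal sketch of this measurement model in Python/NumPy rather than the authors' MATLAB; all dimensions, the sparsity rate, and the noise level below are arbitrary illustrative choices, not values from the paper.

# A minimal sketch of the measurement model y = Ax + w, in Python/NumPy
# rather than the authors' MATLAB. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, M = 1000, 400        # signal length and number of measurements (M < N)
lam = 0.1               # sparsity rate: fraction of nonzero coefficients

# Bernoulli-Gaussian signal: each entry is 0 with prob. 1 - lam,
# or drawn from N(theta, phi) with prob. lam.
theta, phi = 0.0, 1.0
support = rng.random(N) < lam
x = support * (theta + np.sqrt(phi) * rng.standard_normal(N))

A = rng.standard_normal((M, N)) / np.sqrt(M)   # well-conditioned mixing matrix
psi = 1e-4                                     # noise variance
y = A @ x + np.sqrt(psi) * rng.standard_normal(M)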
EM-BG-AMP assumes a sparsity-promoting i.i.d. Bernoulli-Gaussian prior, p(x) = (1 - lambda) delta(x) + lambda N(x; theta, phi), and i.i.d. additive Gaussian noise, p(w) = N(w; 0, psi). It then uses the Generalized Approximate Message Passing (GAMP) algorithm to evaluate the means (MMSE estimates) and variances of the posterior p(x|y). After running BG-AMP, we then find the expectation-maximization (EM) updates of the parameters lambda, theta, phi, and psi, and repeat BG-AMP until convergence. (See publications below for details.)
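
To make these two ingredients concrete, here is a sketch (again Python/NumPy, not the authors' implementation) of the scalar Bernoulli-Gaussian MMSE denoiser that AMP applies once it has reduced each coefficient to an equivalent Gaussian channel r = x + N(0, mu), together with the closed-form EM updates of lambda, theta, and phi. The EM update of the noise variance psi, which needs the measurement residual, is omitted here for brevity.

import numpy as np

def bg_denoise(r, mu, lam, theta, phi):
    """Posterior mean and variance of x under the Bernoulli-Gaussian prior,
    given the pseudo-observation r = x + noise of variance mu."""
    # Evidence of r under the spike (x = 0) and the slab (x ~ N(theta, phi)).
    spike = (1 - lam) * np.exp(-r**2 / (2 * mu)) / np.sqrt(2 * np.pi * mu)
    slab = lam * np.exp(-(r - theta)**2 / (2 * (phi + mu))) \
           / np.sqrt(2 * np.pi * (phi + mu))
    pi = slab / (spike + slab + 1e-300)      # posterior support probability
    nu = 1.0 / (1.0 / mu + 1.0 / phi)        # posterior variance given support
    gamma = nu * (r / mu + theta / phi)      # posterior mean given support
    xhat = pi * gamma                        # MMSE estimate of x
    var = pi * (nu + gamma**2) - xhat**2     # posterior variance of x
    return xhat, var, pi, gamma, nu

def em_update(pi, gamma, nu):
    """Closed-form EM updates of the prior parameters from posterior quantities."""
    lam = pi.mean()                                          # new sparsity rate
    theta = (pi * gamma).sum() / pi.sum()                    # new active mean
    phi = (pi * ((gamma - theta)**2 + nu)).sum() / pi.sum()  # new active variance
    return lam, theta, phi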
The EM-BG-AMP MATLAB function attempts to recover a signal from measurements generated by the above linear model. Unlike some other compressive sensing algorithms (e.g., LASSO), EM-BG-AMP does not require any tuning parameters. EM-BG-AMP does, however, have some optional inputs to improve computation time and accuracy. Currently, EM-BG-AMP supports both real and complex signals. Examples of the function's usage can be seen in the Examples section; a toy sketch of the overall loop follows below.
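
Tying the pieces together, the following toy loop (a sketch under the assumptions above, not the released MATLAB function) alternates a fixed number of BG-AMP iterations with EM parameter refinement, reusing the problem instance and helper functions from the previous snippets; the iteration counts and initial guesses are arbitrary.

xhat = np.zeros(N)                       # current signal estimate
v = np.zeros(M)                          # Onsager-corrected residual
mu = 1.0                                 # effective scalar-channel noise variance
lam_hat, theta_hat, phi_hat = 0.2, 0.0, 1.0   # crude initial parameter guesses
vbar = lam_hat * phi_hat                 # average posterior variance

for em_it in range(10):                  # outer EM loop
    for amp_it in range(20):             # inner BG-AMP loop, parameters fixed
        # Residual with the AMP Onsager correction term.
        v = y - A @ xhat + (N / M) * (vbar / mu) * v
        mu = (v @ v) / M                 # update effective noise variance
        r = xhat + A.T @ v               # per-coefficient pseudo-observations
        xhat, var, pi, gamma, nu = bg_denoise(r, mu, lam_hat, theta_hat, phi_hat)
        vbar = var.mean()
    # Closed-form EM refinement of the prior parameters.
    lam_hat, theta_hat, phi_hat = em_update(pi, gamma, nu)

print("NMSE (dB):", 10 * np.log10(np.sum((xhat - x)**2) / np.sum(x**2)))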

Advantages of EM-BG-AMP

There are plenty of compressive sensing algorithms available. Why use EM-BG-AMP? Here are a few advantages of our approach:
  1. Excellent recovery performance over a wide range of signal/matrix types.
  2. Very good complexity scaling with problem dimensions.
  3. No required tuning parameters.

Authors

Please contact the authors, Jeremy Vila and Philip Schniter, regarding any questions.

Publications

The theory behind EM-BG-AMP is detailed in:

Papers

  1. J. Vila and P. Schniter, "Expectation-Maximization Bernoulli-Gaussian Approximate Message Passing," to appear in Proc. Asilomar Conf. on Signals, Systems, and Computers (Pacific Grove, CA), Nov. 2011.

Posters

  1. J. Vila and P. Schniter, "An Empirical-Bayes Approach to Compressive Sensing via Approximate Message Passing," presented at the Sensing and Analysis of High Dimensional Data (SAHD) Workshop (Durham, NC), Jul. 2011.

Support

This work has been supported in part by NSF-I/UCRC grant IIP-0968910, by NSF grant CCF-1018368, and by DARPA/ONR grant N66001-10-1-4090.


