Tuesday, April 21, 2015

Thesis: Empirical-Bayes Approaches to Recovery of Structured Sparse Signals via Approximate Message Passing, Jeremy Vila

In recent years, there have been massive increases in both the dimensionality and sample size of data, driven by ever-increasing consumer demand coupled with relatively inexpensive sensing technologies. These high-dimensional datasets bring challenges, such as computational complexity, along with numerous opportunities. Though many signals of interest live in a high-dimensional ambient space, they often have a much smaller inherent dimensionality which, if leveraged, leads to improved recovery. For example, the notion of sparsity is a requisite in the compressive sensing (CS) field, which allows for accurate signal reconstruction from sub-Nyquist-sampled measurements under certain conditions.

When recovering a sparse signal from noisy compressive linear measurements, the distribution of the signal’s non-zero coefficients can have a profound effect on recovery mean-squared error (MSE). If this distribution is known a priori, then one could use computationally efficient approximate message passing (AMP) techniques that yield approximate minimum-MSE (MMSE) estimates or critical points of the maximum a posteriori (MAP) estimation problem. In practice, though, the distribution is unknown, motivating the use of robust, convex algorithms such as LASSO, which is nearly minimax optimal, at the cost of significantly larger MSE for non-least-favorable distributions. As an alternative, this dissertation focuses on empirical-Bayesian techniques that simultaneously learn the underlying signal distribution using the expectation-maximization (EM) algorithm while recovering the signal. These techniques are well justified in the high-dimensional setting since, in the large-system limit under specific problem conditions, the MMSE version of AMP’s posteriors converge to the true posteriors, and a generalization of the resulting EM procedure yields consistent parameter estimates. Furthermore, in many practical applications, we can exploit additional signal structure beyond simple sparsity for improved MSE.
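To get a feel for the AMP machinery the abstract refers to, here is a minimal sketch of the classic soft-threshold AMP recursion (the LASSO-style denoiser, not the thesis's EM-learned MMSE denoiser). It assumes an i.i.d. Gaussian sensing matrix and a noiseless sparse signal; the dimensions, the threshold scaling `1.5 * tau`, and the iteration count are illustrative choices, not taken from the thesis:

```python
import numpy as np

# Set up a synthetic compressive sensing problem.
rng = np.random.default_rng(0)
n, m, k = 500, 250, 25                        # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)  # i.i.d. Gaussian sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                # noiseless compressive measurements

def soft(u, t):
    """Soft-threshold denoiser: eta(u; t) = sign(u) * max(|u| - t, 0)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

x = np.zeros(n)
z = y.copy()
delta = m / n                                 # undersampling ratio
for _ in range(30):
    tau = np.linalg.norm(z) / np.sqrt(m)      # effective-noise level estimate
    r = x + A.T @ z                           # AMP pseudo-data: approx. x + Gaussian noise
    x_new = soft(r, 1.5 * tau)                # componentwise denoising
    # Onsager correction term: average derivative of the denoiser at r,
    # i.e. the fraction of coefficients surviving the threshold.
    eta_prime = np.mean(np.abs(x_new) > 0)
    z = y - A @ x_new + (eta_prime / delta) * z
    x = x_new

nmse = np.sum((x - x_true) ** 2) / np.sum(x_true ** 2)
```

The Onsager term is what distinguishes AMP from plain iterative soft thresholding: it decouples the iterations so that the pseudo-data `r` behaves like the true signal in Gaussian noise, which is also what makes the EM parameter-learning step in the thesis tractable.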
In this dissertation, we investigate signals that are non-negative, obey linear equality constraints, and exhibit amplitude correlation/structured sparsity across their elements. To perform statistical inference on these structured signals, we first demonstrate how to incorporate these structures into our Bayesian model, then employ a technique called “turbo” approximate message passing on the underlying factor graph. Specifically, we partition the factor graph into Markov and generalized-linear-model subgraphs, the latter of which can be efficiently implemented using approximate message passing methods, and combine the subgraphs using a “turbo” message passing approach. Numerical experiments on compressive sensing and hyperspectral unmixing applications confirm the state-of-the-art performance of our approach, in both reconstruction error and runtime, on both synthetic and real-world datasets.
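The Markov subgraph mentioned above captures structured sparsity: persistence in the signal's support can be modeled by a binary Markov chain, whose posterior support probabilities are computed by forward-backward message passing and exchanged with the AMP subgraph in the turbo scheme. Below is a minimal sketch of that forward-backward step in isolation, with hypothetical per-index likelihoods standing in for the extrinsic messages the AMP subgraph would supply; the transition probabilities are illustrative:

```python
import numpy as np

def forward_backward(L, T, p0):
    """Posterior marginals p(s_i | all evidence) on a 2-state Markov chain.

    L:  (n, 2) per-index likelihoods p(evidence_i | s_i)
    T:  (2, 2) transition matrix, T[a, b] = p(s_i = b | s_{i-1} = a)
    p0: (2,) prior on the first state s_1
    """
    n = L.shape[0]
    alpha = np.zeros((n, 2))
    beta = np.ones((n, 2))
    alpha[0] = p0 * L[0]
    alpha[0] /= alpha[0].sum()
    for i in range(1, n):                    # forward pass
        alpha[i] = (alpha[i - 1] @ T) * L[i]
        alpha[i] /= alpha[i].sum()           # normalize for numerical stability
    for i in range(n - 2, -1, -1):           # backward pass
        beta[i] = T @ (beta[i + 1] * L[i + 1])
        beta[i] /= beta[i].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

# Persistent support: active (1) and inactive (0) states tend to form runs.
T = np.array([[0.95, 0.05],
              [0.05, 0.95]])
p0 = np.array([0.5, 0.5])
# Hypothetical extrinsic likelihoods: first half of the indices look active,
# second half look inactive.
L = np.array([[0.2, 0.8]] * 5 + [[0.8, 0.2]] * 5)
post = forward_backward(L, T, p0)
```

In a full turbo iteration, these support posteriors would be fed back to the AMP subgraph as updated activity priors, and the two subgraphs alternate until the exchanged messages agree.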

Join the CompressiveSensing subreddit or the Google+ Community and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.

