Thursday, May 17, 2012

Compressive Sensing This Week


Today we have a long list of very interesting papers, but before we get to them, let us note the numerous papers on Radar and Compressive Sensing presented at CoSeRa 2012, the International Workshop on Compressed Sensing applied to Radar.


In the meantime, after reading Dustin Mixon's entry on Achieving the Welch Bound With Difference Sets, I commented to him on the possibility of a connection with the paper of Basarab Matei and Yves Meyer (A variant on the compressed sensing of Emmanuel Candes), which links a deterministic compressive sensing approach to a sampling set resembling that found in quasicrystals, a field that Meyer studied some years ago. Here is Dustin's answer:



It’s certainly relevant. Two things come to mind. First, Meyer’s paper is explicitly motivated by the fact that harmonic frames with a prime number of columns map sparse vectors injectively. This is an immediate consequence of a theorem by Chebotarev, as I discuss and generalize here:
Second, Meyer’s approach is quite different, since it deals with continuous functions as opposed to finite-dimensional vectors. The results are presented in the context of sampling and interpolation, which are introduced rather nicely here:
It’s striking that Meyer gets L1-minimization guarantees from mere injectivity. (Is the reconstruction stable if the measurements are corrupted with noise?) The guarantee requires the sparse function to be nonnegative, but is there an analogous result with finite-dimensional harmonic frames?
My short answer is: I am not aware of any.

My longer answer is: This paper is really a UFO. It has a deterministic construction and an additional positivity constraint that most CS approaches do not enforce automatically. I am sure Dustin and others will eventually make that connection with our current approaches. I spoke about Yves Meyer before on this blog. Thank you, Dustin!

In a different direction, Aaditya Ramdas wrote two interesting surveys (for a class?)


The following three papers are interesting on many levels, one of them being the involvement of Siemens through its engineers and the fact that the papers are hosted on Siemens' site:

Compressed Sensing using Prior Rank, Intensity and Sparsity Model (PRISM): Applications in Cardiac Cine MRI by Hao Gao, Stanislas Rapacchi, Da Wang, John Moriarty, Conor Meehan, James Sayre, Gerhard Laub, Paul Finn, and Peng Hu
Introduction: Compressed sensing (CS) has recently been applied to MR image reconstruction to shorten data acquisition time [1,2]. In this work, we propose a novel CS method for dynamic MRI applications using the Prior Rank, Intensity and Sparsity Model (PRISM) [3,4] and evaluate this technique for cardiac cine MRI.
Reconstruction for Dynamic 2D-Radial Cardiac MRI Using Prior Enhanced Compressed Sensing by Ti-chiun Chang, Mariappan S. Nadar, Jens Guehring, Michael O. Zenge, Kai T. Block, Peter Speier, Michael S. Hansen, and Edgar Mueller
Introduction: Radially encoded MR imaging (MRI) has been well adopted in the field of fast imaging, due to its robustness to motion and relatively high signal-to-noise ratio (SNR) compared to direct Fourier imaging. In many of its applications, however, the trajectory is undersampled to reduce scan time and capture fast physiological changes. Conventional gridding reconstruction from undersampled data introduces severe streaking artifacts and results in low SNR. Recently, compressed sensing (CS) theory has emerged as a promising approach that can accurately reconstruct a signal even when its indirect measurement is severely undersampled. In practice, measurement imperfections and noise are present, so the data reduction factor promised by CS is clearly reduced. In the sense of a Bayesian formulation, sparsity is the only prior knowledge exploited in the basic CS approach. Effort, for instance [1,2], has been devoted to incorporating more prior estimates in the CS framework. In this work, it is shown that the reconstruction results for dynamic 2D radial cardiac MRI can be improved by an additional prior obtained from combining the interleaved samples in the dynamic image sequence.

Improved Compressed Sensing Reconstruction with Overcomplete Wavelet Transforms by Alicia W Yang, Li Feng, Jian Xu, Ivan Selesnick, Daniel K Sodickson, and Ricardo Otazo
INTRODUCTION: Compressed sensing (CS) MRI exploits the ability of an image to be represented with few coefficients in a sparse domain to reconstruct undersampled k-space data without aliasing artifacts [1]. Typically, orthogonal sparsifying transforms such as the discrete wavelet transform (DWT) are used in compressed sensing MRI due to their fast implementation [1]. The disadvantages of the DWT, however, are that it is shift variant, lacks directionality, and is not efficient when dealing with point singularities in natural images [2]. Overcomplete wavelet transforms, such as the dual-tree wavelet transform (DTWT) and curvelets, have been demonstrated to improve reconstruction performance in 2D images [3, 4]. However, they are not often used due to their computational complexity and minimal improvement in reconstruction when using a globally constant threshold. In this work, we propose to optimize compressed sensing reconstructions in 3D images using overcomplete wavelet transforms by (1) using local thresholds that adapt to the signal power in each band and (2) decreasing the thresholds at each step of the iterative reconstruction algorithm such that broad image features are reconstructed in the early iterations and sharp image features are reconstructed in the later iterations. Performance is demonstrated on retrospectively undersampled 3D coronary MRA (CMRA) and 3D brain datasets.
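The two ingredients of that abstract, thresholds that adapt to each band and thresholds that decrease across iterations, are easy to play with on a toy problem. Here is a small sketch in that spirit: it is not the authors' reconstruction code, just a generic iterative soft-thresholding loop on a canonical-basis-sparse signal with a Gaussian measurement matrix (both my own assumptions), showing the decreasing-threshold schedule.

```python
import numpy as np

# Toy compressed sensing recovery by iterative soft-thresholding with a
# threshold that decreases geometrically across iterations, in the spirit of
# "broad features first, sharp features later". NOT the authors' MRI code:
# the signal is sparse in the canonical basis and measurements are Gaussian.

rng = np.random.default_rng(0)
n, m, k = 256, 100, 10                 # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)    # measurement matrix
y = A @ x_true                                   # undersampled data

def soft(z, t):
    """Soft-thresholding operator."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2           # gradient step size
lam = np.max(np.abs(A.T @ y))                    # large initial threshold
for it in range(200):
    x = soft(x + step * A.T @ (y - A @ x), step * lam)
    lam *= 0.95                                  # decrease threshold each iteration

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```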

and finally, here are the other interesting papers of the day I found thanks to the ArXiv and Google:

This paper deals with the computational complexity of conditions which guarantee that the NP-hard problem of finding the sparsest solution to an underdetermined linear system can be solved by efficient algorithms. In the literature several such conditions have been introduced. The most well-known ones are the mutual coherence, the restricted isometry property (RIP), and the nullspace property (NSP). While evaluating the mutual coherence of a given matrix is easy, it has been suspected for some time that evaluating RIP and NSP is computationally intractable in general. We confirm these conjectures by showing that for a given matrix A and positive integer k, computing the best constants for which the RIP or NSP hold is NP-hard in general. We also (re)provide a proof that determining the spark of a matrix is NP-hard, since our results are based on this fact.
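Since the spark (the smallest number of linearly dependent columns) is at the heart of these hardness results, here is a toy illustration, not taken from the paper, of why the quantity is painful to evaluate: the naive computation below enumerates column subsets and therefore blows up exponentially with the matrix size.

```python
import numpy as np
from itertools import combinations

def spark(A, tol=1e-10):
    """Naive spark computation: smallest number of linearly dependent columns.

    Enumerates all column subsets of increasing size, so the cost grows
    exponentially with the number of columns -- a concrete hint of why
    certifying spark (and hence RIP/NSP constants) is hard in general.
    """
    m, n = A.shape
    for size in range(1, n + 1):
        for cols in combinations(range(n), size):
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < size:
                return size          # found a linearly dependent subset
    return np.inf                    # all columns are linearly independent

# Small example: a 3x5 random matrix has spark 4 with probability one.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))
print("spark(A) =", spark(A))
```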

Grouped Incoherent Measurements for Compressive Sensing by Adam C. Polak, Marco Duarte and Dennis L. Goeckel. The abstract reads:
Compressive sensing (CS) allows for acquisition of sparse signals at sampling rates significantly lower than the Nyquist rate required for bandlimited signals. Recovery guarantees for CS are generally derived based on the assumption that measurement projections are selected independently at random. However, for many practical signal acquisition applications, this assumption is violated as the projections must be taken in groups. In this paper, we consider such applications and derive requirements on the number of measurements needed for successful recovery of signals when groups of dependent projections are taken at random. We find a penalty factor on the number of required measurements with respect to the standard CS scheme that employs conventional independent measurement selection and verify the predicted penalty through simulations.

In this paper, we introduce a new support recovery algorithm from noisy measurements called Bayesian hypothesis test via belief propagation (BHT-BP). BHT-BP focuses on sparse support recovery rather than sparse signal estimation. The key idea behind BHT-BP is to detect the support set of a sparse vector using a hypothesis test where the posterior densities used in the test are obtained with the aid of belief propagation (BP). Since BP provides precise posterior information using the noise statistics, BHT-BP can recover the support robustly against measurement noise. In addition, BHT-BP has low computational cost compared to other algorithms thanks to the use of BP. We show the support recovery performance of BHT-BP as a function of the parameters (N, M, K, SNR) and compare the performance of BHT-BP to OMP and Lasso via numerical results.

Sparsity Averaging Reweighted Analysis (SARA): a novel algorithm for radio-interferometric imaging  by  R. E. Carrillo, J. D. McEwen, Y. Wiaux. The abstract reads:
We propose a novel algorithm for image reconstruction in radio interferometry. The ill-posed inverse problem associated with the incomplete Fourier sampling identified by the visibility measurements is regularized by the assumption of average signal sparsity over representations in multiple wavelet bases. The algorithm, defined in the versatile framework of convex optimization, is dubbed Sparsity Averaging Reweighted Analysis (SARA). We show through simulations that the proposed approach largely outperforms state-of-the-art imaging methods in the field, which are based on the assumption of signal sparsity in a single basis only.
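For those curious about what the reweighted analysis prior looks like operationally, here is a heavily simplified sketch of a reweighted analysis-l1 loop. The toy analysis operator (identity stacked with a random orthonormal basis), the noise level, and the use of CVXPY for the convex subproblems are all my own assumptions, and none of the radio-interferometric specifics of SARA are modeled.

```python
import numpy as np
import cvxpy as cp

# Simplified reweighted analysis-l1 loop in the spirit of SARA, assuming a toy
# analysis operator built from two "bases" stacked together. Illustration only.

rng = np.random.default_rng(0)
n, m = 64, 32
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
Psi = np.vstack([np.eye(n), Q.T])            # concatenation of two bases

x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
A = rng.standard_normal((m, n)) / np.sqrt(m)
sigma = 1e-3
y = A @ x_true + sigma * rng.standard_normal(m)

w = np.ones(Psi.shape[0])                    # start unweighted
x = cp.Variable(n)
for _ in range(4):                           # a few reweighting rounds
    objective = cp.Minimize(cp.norm1(cp.multiply(w, Psi @ x)))
    constraints = [cp.norm(y - A @ x, 2) <= 2 * sigma * np.sqrt(m)]
    cp.Problem(objective, constraints).solve()
    coeffs = Psi @ x.value
    w = 1.0 / (np.abs(coeffs) + 1e-3)        # penalize small coefficients more

print("relative error:", np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))
```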

Performance Bounds for Grouped Incoherent Measurements in Compressive Sensing by Adam C. Polak, Marco F. Duarte, and Dennis L. Goeckel. The abstract reads:
Compressive sensing (CS) allows for acquisition of sparse signals at sampling rates significantly lower than the Nyquist rate required for bandlimited signals. Recovery guarantees for CS are generally derived based on the assumption that measurement projections are selected independently at random. However, for many practical signal acquisition applications, this assumption is violated as the projections must be taken in groups. In this paper, we consider such applications and derive requirements on the number of measurements needed for successful recovery of signals when groups of dependent projections are taken at random. We find a penalty factor on the number of required measurements with respect to the standard CS scheme that employs conventional independent measurement selection and verify the predicted penalty through simulations.
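To make the "grouped" part concrete, here is a toy sketch contrasting independently selected DFT rows with rows taken in contiguous groups. The contiguous-group structure is just my illustrative assumption, not the specific acquisition constraint studied in the paper.

```python
import numpy as np

# Toy illustration of grouped versus independent measurement selection for a
# partial-DFT sensing matrix. Illustration only.

rng = np.random.default_rng(0)
n, m, g = 256, 64, 8                       # ambient dim, measurements, group size
F = np.fft.fft(np.eye(n)) / np.sqrt(n)     # unitary DFT matrix

# Independent selection: m rows chosen uniformly at random.
rows_indep = rng.choice(n, m, replace=False)

# Grouped selection: m/g groups of g consecutive rows chosen at random.
group_starts = rng.choice(n // g, m // g, replace=False) * g
rows_grouped = np.concatenate([np.arange(s, s + g) for s in group_starts])

A_indep = F[rows_indep, :]
A_grouped = F[rows_grouped, :]
print(A_indep.shape, A_grouped.shape)      # both are m x n sensing matrices
```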

Localization and Bearing Estimation via Structured Sparsity Models by  Marco Duarte . The abstract reads:
Recent work has leveraged sparse signal models for parameter estimation purposes in applications including localization and bearing estimation. A dictionary whose elements correspond to observations for a sampling of the parameter space is used for sparse approximation of the received signals; the resulting sparse coefficient vector’s support identifies the parameter estimates. While increasing the parameter space sampling resolution provides better sparse approximations for arbitrary observations, the resulting high dictionary coherence hampers the performance of standard sparse approximation, preventing accurate parameter estimation. To alleviate this shortcoming, this paper proposes the use of structured sparse approximation that rules out the presence of pairs of coherent dictionary elements in the sparse approximation of the observed data. We show through simulations that our proposed algorithms offer significantly improved performance when compared with their standard sparsity-based counterparts. We also verify their robustness to noise and applicability to both full-rate and compressive sensing data acquisition.
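One simple way to rule out pairs of coherent dictionary elements in a greedy solver is a band-exclusion step: once an atom is selected, its most coherent neighbors are removed from further consideration. The sketch below grafts that idea onto plain orthogonal matching pursuit; it is my own minimal illustration, not the authors' algorithm.

```python
import numpy as np

def band_excluded_omp(D, y, k, mu_max=0.9):
    """OMP variant that forbids selecting any atom too coherent (inner product
    above mu_max) with an already selected atom. Minimal illustration only."""
    n_atoms = D.shape[1]
    residual = y.copy()
    support, excluded = [], np.zeros(n_atoms, dtype=bool)
    gram = np.abs(D.T @ D)                       # pairwise atom coherences
    for _ in range(k):
        corr = np.abs(D.T @ residual)
        corr[excluded] = -np.inf                 # skip excluded atoms
        j = int(np.argmax(corr))
        support.append(j)
        excluded |= gram[j] > mu_max             # exclude coherent neighbors
        excluded[j] = True
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return support, coef

# Toy dictionary of unit-norm atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((32, 200))
D /= np.linalg.norm(D, axis=0)
y = D[:, 17] + 0.5 * D[:, 120]
print(band_excluded_omp(D, y, k=2)[0])
```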

Compressed Matrix Multiplication by Rasmus Pagh. The abstract reads:
Motivated by the problems of computing sample covariance matrices, and of transforming a collection of vectors to a basis where they are sparse, we present a simple algorithm that computes an approximation of the product of two n-by-n real matrices A and B. Let $\|AB\|_F$ denote the Frobenius norm of AB, and b be a parameter determining the time/accuracy trade-off. Given 2-wise independent hash functions $h_1, h_2 : [n] \to [b]$, and sign functions $s_1, s_2 : [n] \to \{-1,+1\}$, the algorithm works by first "compressing" the matrix product into the polynomial
$$p(x) = \sum_{k=1}^{n} \left( \sum_{i=1}^{n} A_{ik}\, s_1(i)\, x^{h_1(i)} \right) \left( \sum_{j=1}^{n} B_{kj}\, s_2(j)\, x^{h_2(j)} \right).$$
Using FFT for polynomial multiplication, we can compute $c_0, \ldots, c_{b-1}$ such that $\sum_i c_i x^i = (p(x) \bmod x^b) + (p(x) \operatorname{div} x^b)$ in time $\tilde{O}(n^2 + nb)$. An unbiased estimator of $(AB)_{ij}$ with variance at most $\|AB\|_F^2 / b$ can then be computed as $C_{ij} = s_1(i)\, s_2(j)\, c_{(h_1(i)+h_2(j)) \bmod b}$. Our approach also leads to an algorithm for computing AB exactly, whp., in time $\tilde{O}(N + nb)$ in the case where A and B have at most N nonzero entries, and AB has at most b nonzero entries. Also, we use error-correcting codes in a novel way to recover significant entries of AB in near-linear time.
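A compact way to see the construction is as a count-sketch of the outer-product expansion of AB, with one circular convolution (computed via the FFT) per index k. Here is a toy rendering that follows the formulas of the abstract; it makes no attempt at the Õ(n² + nb) implementation details, and the demo set-up (a product dominated by a few large entries) is my own choice.

```python
import numpy as np

def compressed_matmul_estimate(A, B, b, rng):
    """Sketch-based estimate of AB following the construction in the abstract:
    hash and sign the row/column indices, fold each term A[:, k] * B[k, :] into
    a length-b circular convolution computed with the FFT, then read per-entry
    estimates back out. Per-entry std is at most ||AB||_F / sqrt(b)."""
    n = A.shape[0]
    h1, h2 = rng.integers(0, b, n), rng.integers(0, b, n)       # hashes [n] -> [b]
    s1, s2 = rng.choice([-1, 1], n), rng.choice([-1, 1], n)     # signs [n] -> {-1,+1}

    c_hat = np.zeros(b, dtype=complex)
    for k in range(n):
        u, v = np.zeros(b), np.zeros(b)
        np.add.at(u, h1, A[:, k] * s1)          # "compress" column k of A
        np.add.at(v, h2, B[k, :] * s2)          # "compress" row k of B
        c_hat += np.fft.fft(u) * np.fft.fft(v)  # polynomial product mod (x^b - 1)
    c = np.fft.ifft(c_hat).real                 # c_0, ..., c_{b-1}

    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.outer(s1, s2) * c[(h1[i] + h2[j]) % b]

# Demo in the regime the sketch is meant for: AB dominated by a few large entries.
rng = np.random.default_rng(0)
n, b = 100, 4096
A, B = np.zeros((n, n)), np.zeros((n, n))
A[:5, :5] = rng.standard_normal((5, 5))
B[:5, :5] = rng.standard_normal((5, 5))
est, exact = compressed_matmul_estimate(A, B, b, rng), A @ B
print("largest |AB| entry:", np.abs(exact).max(),
      "max estimation error:", np.abs(est - exact).max())
```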


Compressed sensing algorithms for fan-beam computed tomography image reconstruction by Jun Zhang, Jun Wang, Hongquan Zuo, Guangwu Xu, and Jean-Baptiste Thibault. The abstract reads:
Compressed sensing can recover a signal that is sparse in some way from a small number of samples. For computed tomography (CT) imaging, this has the potential to obtain good reconstruction from a smaller number of projections or views, thereby reducing the amount of radiation that a patient is exposed to. In this work, we applied compressed sensing to fan-beam CT image reconstruction, which is a special case of an important 3-D CT problem (cone-beam CT). We compared the performance of two compressed sensing algorithms, denoted as the LP and the QP, in simulation. Our results indicate that the LP generally provides smaller reconstruction error and converges faster; therefore, it is preferable.


Gated viewing laser imaging with compressive sensing by Li Li, Lei Wu, Xingbin Wang, and Ersheng Dang. The abstract reads:
We present a prototype of gated viewing laser imaging with compressive sensing (GVLICS). With the new framework of compressive sensing, it is possible to perform laser imaging using a single-pixel detector while still obtaining transverse spatial resolution. Moreover, combining compressive sensing with gated viewing, the three-dimensional (3D) scene can be reconstructed by the time-slicing technique. Simulations are carried out to evaluate the characteristics of the proposed GVLICS prototype. Qualitative analysis of Lissajous-type eye-pattern figures indicates that the range accuracy of the reconstructed 3D images is affected by the sampling rate, image noise, and the complexity of the scenes.



Introduction: Compressed sensing (CS) [1] has recently emerged as a valuable technique for speeding up data acquisition in MRI by exploiting the sparse representation of MR images. However, the performance of CS using a conventional Cartesian trajectory is limited because undersampling can only be employed along the phase-encoding dimensions, thus neither sparsity nor incoherence can be exploited along the readout dimension. Radial sampling is an attractive alternative for CS due to its unique properties, such as dense oversampling in the center of k-space and high incoherence in multiple directions, which enable exploitation of sparsity and incoherence along the frequency-encoding dimension [2-3]. Additionally, it is known that radial trajectories are less sensitive to motion, allowing for better performance in capturing dynamic information. The use of the golden-angle approach in dynamic radial MRI, where uniform coverage of k-space is obtained by grouping a number of consecutive spokes, allows for continuous data acquisition and retrospective reconstruction with arbitrary temporal resolution. K-t SPARSE-SENSE is a recently developed technique that combines CS and parallel imaging to highly accelerate dynamic imaging by jointly exploiting temporal sparsity and coil sensitivity encoding [4-5]. In this work, we propose k-t RAdial SParseSense (k-t RASPS) for accelerated dynamic MRI using stack-of-stars 3D golden-angle radial trajectories. The performance of this technique is demonstrated for highly accelerated 3D free-breathing liver perfusion imaging with both high spatial and temporal resolution.
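The golden-angle spoke ordering mentioned above is easy to picture: consecutive spokes are rotated by roughly 111.25 degrees, so any contiguous chunk of spokes covers k-space nearly uniformly and can be regrouped retrospectively into frames. The snippet below only generates those spoke angles as an illustration; it is not the k-t RASPS reconstruction code.

```python
import numpy as np

# Toy generation of golden-angle radial spoke orientations. Consecutive spokes
# are separated by ~111.25 degrees, so any group of consecutive spokes covers
# k-space nearly uniformly and can be regrouped into frames of arbitrary
# temporal resolution. Illustration only.

golden_angle = np.pi * 2.0 / (1.0 + np.sqrt(5.0))        # ~111.25 degrees in radians
n_spokes, spokes_per_frame = 144, 13
angles = (np.arange(n_spokes) * golden_angle) % np.pi    # spoke orientations in [0, pi)

# Retrospective regrouping: any contiguous chunk of spokes forms one frame.
frames = angles[: (n_spokes // spokes_per_frame) * spokes_per_frame]
frames = frames.reshape(-1, spokes_per_frame)
print("frame 0 spoke angles (deg):", np.round(np.degrees(frames[0]), 1))
```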


By coding a query sample as a sparse linear combination of all training samples and then classifying it by evaluating which class leads to the minimal coding residual, sparse representation based classification (SRC) leads to interesting results for robust face recognition. It is widely believed that the l1-norm sparsity constraint on the coding coefficients plays a key role in the success of SRC, while its use of all training samples to collaboratively represent the query sample is rather overlooked. In this paper we discuss how SRC works, and show that the collaborative representation mechanism used in SRC is much more crucial to its success in face classification. SRC is a special case of collaborative representation based classification (CRC), which has various instantiations obtained by applying different norms to the coding residual and coding coefficients. More specifically, the l1 or l2 norm characterization of the coding residual is related to the robustness of CRC to outlier facial pixels, while the l1 or l2 norm characterization of the coding coefficients is related to the degree of discrimination of facial features. Extensive experiments were conducted to verify the face recognition accuracy and efficiency of CRC with different instantiations.
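Since the l2-regularized collaborative representation has a closed-form solution, the resulting classifier fits in a few lines. Here is a minimal sketch, with training samples stacked as unit-norm columns and the residual normalized by the coefficient energy; treat it as an illustration of the idea rather than the authors' released code.

```python
import numpy as np

def crc_classify(X, labels, y, lam=1e-3):
    """Collaborative representation based classification (CRC) with l2-regularized
    coding: code the query over ALL training samples, then pick the class whose
    sub-coefficients give the smallest normalized reconstruction residual."""
    # Closed-form ridge coding of the query over all training samples.
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    best_class, best_score = None, np.inf
    for c in np.unique(labels):
        idx = labels == c
        resid = np.linalg.norm(y - X[:, idx] @ alpha[idx])
        score = resid / (np.linalg.norm(alpha[idx]) + 1e-12)
        if score < best_score:
            best_class, best_score = c, score
    return best_class

# Toy usage: two classes of noisy prototypes, columns are (vectorized) samples.
rng = np.random.default_rng(0)
d, per_class = 50, 10
protos = rng.standard_normal((d, 2))
X = np.hstack([protos[:, [c]] + 0.1 * rng.standard_normal((d, per_class)) for c in range(2)])
X /= np.linalg.norm(X, axis=0)                     # unit-norm training columns
labels = np.repeat(np.arange(2), per_class)
query = protos[:, 1] + 0.1 * rng.standard_normal(d)
print("predicted class:", crc_classify(X, labels, query / np.linalg.norm(query)))
```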

Matching pursuits are a class of greedy algorithms commonly used in signal processing for solving the sparse approximation problem. They rely on an atom selection step that requires the calculation of numerous projections, which can be computationally costly for large dictionaries and limits their competitiveness in coding applications. We propose using a non-adaptive random sequence of subdictionaries in the decomposition process, thus parsing a large dictionary in a probabilistic fashion with no additional projection cost nor parameter estimation. A theoretical model based on order statistics is provided, along with experimental evidence showing that the novel algorithm can be efficiently used on sparse approximation problems. An application to audio signal compression with multiscale time-frequency dictionaries is presented, along with a discussion of the complexity and practical implementations.
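Here is a minimal sketch of the idea, assuming a generic unit-norm dictionary: at each matching pursuit iteration only a random sub-dictionary is scored, so every selection touches a fraction of the atoms. This is my own illustrative rendering, not the authors' implementation.

```python
import numpy as np

def random_subdict_mp(D, y, n_iter, sub_size, rng):
    """Matching pursuit where each iteration scores only a random sub-dictionary,
    cutting the per-iteration projection cost from O(#atoms) to O(sub_size)."""
    residual = y.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        sub = rng.choice(D.shape[1], sub_size, replace=False)  # random sub-dictionary
        corr = D[:, sub].T @ residual
        j = sub[np.argmax(np.abs(corr))]                       # best atom within the subset
        a = D[:, j] @ residual                                 # MP coefficient update
        coeffs[j] += a
        residual -= a * D[:, j]
    return coeffs, residual

rng = np.random.default_rng(0)
n, n_atoms = 128, 1024
D = rng.standard_normal((n, n_atoms))
D /= np.linalg.norm(D, axis=0)                                 # unit-norm atoms
y = D[:, 3] - 0.7 * D[:, 500]
coeffs, residual = random_subdict_mp(D, y, n_iter=50, sub_size=64, rng=rng)
print("residual energy:", np.linalg.norm(residual) ** 2)
```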


Penalty Decomposition Methods for Rank Minimization by Zhaosong Lu and Yong Zhang. The abstract reads:
In this paper we consider general rank minimization problems with the rank appearing in either the objective function or a constraint. We first show that a class of matrix optimization problems can be solved as lower-dimensional vector optimization problems. As a consequence, we establish that a class of rank minimization problems have closed-form solutions. Using this result, we then propose penalty decomposition (PD) methods for general rank minimization problems. The convergence results of the PD methods are shown in the longer version of the paper [19]. Finally, we test the performance of our methods by applying them to matrix completion and nearest low-rank correlation matrix problems. The computational results demonstrate that our methods generally outperform the existing methods in terms of solution quality and/or speed.
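As a reminder of the kind of rank-constrained subproblem that admits a closed-form solution (which is what penalty decomposition methods lean on), here is the classical nearest rank-k matrix in Frobenius norm via a truncated SVD. This is a generic illustration of that fact, not the authors' PD solver.

```python
import numpy as np

def nearest_rank_k(M, k):
    """Closed-form solution of  min_X ||X - M||_F  s.t.  rank(X) <= k,
    obtained by keeping the k largest singular values (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 15))
X = nearest_rank_k(M, 3)
print("rank:", np.linalg.matrix_rank(X), " fit error:", np.linalg.norm(X - M))
```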


An Adaptive Compressed Sensing Method in Speech by Xu Feng, Wang Xia, Zheng Xiao-Dong, and Wang Hao. The abstract reads:
The application of an adaptive compressive sensing method to speech signal processing is proposed in this paper. First, wavelet-transform thresholding is used to preprocess the speech signal. Then, according to the parameters of each speech frame, each frame is adaptively assigned a number of measurements. Finally, the measurement matrix is used to reconstruct the speech signal. Experimental results show that the proposed method can improve the SNR of the reconstructed speech signal and is highly robust at different noise intensities.
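Here is a hedged sketch of the allocation idea: each frame receives a number of measurements proportional to a crude activity estimate obtained after thresholding. The threshold, the activity proxy, and the Gaussian measurement matrices are all my own illustrative assumptions, not the wavelet-based scheme of the paper.

```python
import numpy as np

# Illustrative per-frame measurement allocation for a framed 1-D signal:
# frames that look "busier" (more above-threshold samples) receive
# proportionally more compressive measurements. Illustration only.

rng = np.random.default_rng(0)
frame_len, n_frames = 128, 8
signal = rng.standard_normal(frame_len * n_frames) * \
         np.repeat(rng.uniform(0.1, 1.0, n_frames), frame_len)   # varying frame activity
frames = signal.reshape(n_frames, frame_len)

thresh = 0.5 * np.std(signal)
activity = (np.abs(frames) > thresh).sum(axis=1) + 1              # crude sparsity proxy
budget = 256                                                      # total measurement budget
m_per_frame = np.maximum(8, (budget * activity / activity.sum()).astype(int))

measurements = [rng.standard_normal((m, frame_len)) / np.sqrt(m) @ f
                for m, f in zip(m_per_frame, frames)]
print("measurements per frame:", m_per_frame.tolist())
```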


A compressive sensing framework is described for hyperspectral imaging. It is based on the widely used linear mixing model (LMM), which represents hyperspectral pixels as convex combinations of small numbers of endmember (material) spectra. The coefficients of the endmembers for each pixel are called proportions. The endmembers and proportions are often the sought-after quantities; the full image is an intermediate representation used to calculate them. Here, a method for estimating proportions and endmembers directly from compressively sensed hyperspectral data based on the LMM is shown. Consequently, proportions and endmembers can be calculated directly from compressively sensed data with no need to reconstruct full hyperspectral images. If spectral information is required, endmembers can be reconstructed using compressive sensing reconstruction algorithms. Furthermore, given known endmembers, the proportions of the associated materials can be measured directly using a compressive sensing imaging device. This device would produce a multiband image; the bands would directly represent the material proportions.
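When the endmembers are known, the per-pixel proportions can indeed be fitted directly against the compressed endmember matrix. Here is a minimal sketch under assumptions of my own (nonnegative least squares followed by renormalization as a crude stand-in for a full simplex-constrained solver); it is not the authors' framework.

```python
import numpy as np
from scipy.optimize import nnls

# Minimal illustration: estimate abundance proportions of one pixel directly
# from compressive spectral measurements, given known endmember spectra.
# Model: y = Phi @ x, x = E @ p, p >= 0, sum(p) = 1 (approximated here by
# NNLS followed by renormalization, an illustrative simplification).

rng = np.random.default_rng(0)
n_bands, n_endmembers, m = 200, 4, 30
E = np.abs(rng.standard_normal((n_bands, n_endmembers)))        # endmember spectra
p_true = rng.dirichlet(np.ones(n_endmembers))                   # true proportions
x = E @ p_true                                                  # full-resolution pixel spectrum

Phi = rng.standard_normal((m, n_bands)) / np.sqrt(m)            # compressive sensing matrix
y = Phi @ x                                                     # compressed pixel measurement

p_hat, _ = nnls(Phi @ E, y)                                     # fit proportions in the compressed domain
p_hat /= p_hat.sum()                                            # crude projection onto the simplex
print("true:", np.round(p_true, 3), "estimated:", np.round(p_hat, 3))
```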




Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

