Wednesday, February 29, 2012

What does a compressive sensing approach bring to the table?

Following up on yesterday's rant, let me give some perspective on what compressive sensing brings to the table through a crop of papers chosen from arXiv over the past two weeks.

Clearly, the next two studies fall into the new-algorithmic-tools category, with the second one making an inference that had not been made before by specialists in the field. In other words, CS provides new insight into an older problem, insight that had not been recognized by the dedicated community.

1. Semi-Quantitative Group Testing by Amin Emad and Olgica Milenkovic. The abstract reads:
We consider a novel group testing procedure, termed semi-quantitative group testing, motivated by a class of problems arising in genome sequence processing. Semi-quantitative group testing (SQGT) is a non-binary pooling scheme that may be viewed as a combination of an adder model followed by a quantizer. For the new testing scheme we define the capacity and evaluate the capacity for some special choices of parameters using information theoretic methods. We also define a new class of disjunct codes suitable for SQGT, termed SQ-disjunct codes. We also provide both explicit and probabilistic code construction methods for SQGT with simple decoding algorithms.
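To get a feel for the SQGT measurement model, here is a minimal Python sketch of the adder-followed-by-quantizer channel described in the abstract. The pool density, quantizer thresholds and problem sizes are my own illustrative assumptions, not the code constructions of the paper:

```python
# Minimal sketch of the SQGT channel: an adder followed by a quantizer.
# Pool density, thresholds and sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_tests, n_defective = 100, 30, 3

x = np.zeros(n_items, dtype=int)
x[rng.choice(n_items, n_defective, replace=False)] = 1   # unknown defectives

# Random pooling design: test t includes item i with probability 0.1
A = (rng.random((n_tests, n_items)) < 0.1).astype(int)

# Adder model: each test counts how many defectives it contains ...
adder_output = A @ x

# ... followed by a quantizer, so only a coarse, non-binary level is reported.
# Hypothetical thresholds: report 0 if count is 0, 1 if count is 1 or 2, 2 otherwise.
thresholds = np.array([1, 3])
sqgt_output = np.digitize(adder_output, thresholds)

print("adder outputs:", adder_output)
print("SQGT outputs :", sqgt_output)
```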
2. The abstract of the second paper reads:
We propose a novel framework for studying causal inference of gene interactions using a combination of compressive sensing and Granger causality techniques. The gist of the approach is to discover sparse linear dependencies between time series of gene expressions via a Granger-type elimination method. The method is tested on the Gardner dataset for the SOS network in E. coli, for which both known and unknown causal relationships are discovered.
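The gist of the approach, sparse linear dependencies between time-shifted expression profiles, can be mimicked in a few lines. The sketch below uses a plain Lasso on synthetic data as a stand-in for the paper's Granger-type elimination method (it is not the Gardner SOS data):

```python
# Toy sketch of sparse, Granger-style network inference: regress each gene at
# time t on all genes at time t-1 and keep the few nonzero predictors.
# A plain Lasso stands in for the paper's Granger-type elimination method,
# and the data are synthetic, not the Gardner SOS dataset.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n_genes, T = 8, 200

# Synthetic network: gene 0 is driven by genes 2 and 5 only
X = rng.standard_normal((T, n_genes))
for t in range(1, T):
    X[t, 0] = 0.8 * X[t - 1, 2] - 0.6 * X[t - 1, 5] + 0.1 * rng.standard_normal()

# Sparse regression of gene 0 at time t on all genes at time t-1
lasso = Lasso(alpha=0.05).fit(X[:-1, :], X[1:, 0])
parents = np.flatnonzero(np.abs(lasso.coef_) > 0.1)
print("inferred parents of gene 0:", parents)        # ideally [2 5]
```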

This next paper describes a new imaging system that is clearly at a low technology readiness level. It has some potential.


An imaging system based on single photon counting and compressive sensing (ISSPCCS) is developed to reconstruct a sparse image in absolute darkness. A single photon avalanche detector and a spatial light modulator (SLM) of aluminum micro-mirrors are employed in the imaging system, while convex optimization is used in the reconstruction algorithm. The image of an object in very dim light can be reconstructed from an under-sampled data set, yet with very high SNR and robustness. Compared with the traditional single-pixel camera that uses a photomultiplier tube (PMT) as the detector, the ISSPCCS realizes photon-counting imaging, and the count of photons not only carries the fluctuations of light intensity but is also more intuitive.
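For the curious, here is a toy single-pixel-style simulation of the measurement model: random SLM patterns, Poisson photon counts at a single detector, and an l1-regularized reconstruction from fewer measurements than pixels. Sizes, photon rates and the solver are assumptions on my part, not the actual ISSPCCS design:

```python
# Toy single-pixel, photon-counting acquisition: random SLM patterns, Poisson
# counts at one detector, l1-regularized recovery from fewer measurements than
# pixels. Sizes, photon rate and solver are assumptions, not the ISSPCCS design.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n_pixels, n_meas, k = 64, 32, 5                    # 8x8 scene, 2x undersampled

x_true = np.zeros(n_pixels)
x_true[rng.choice(n_pixels, k, replace=False)] = 1.0       # sparse scene

A = rng.integers(0, 2, (n_meas, n_pixels)).astype(float)   # random SLM patterns

# Photon counting: the single detector records Poisson counts around the flux
flux_per_unit = 100.0
y = rng.poisson(flux_per_unit * (A @ x_true)).astype(float)

# Convex (l1-regularized least-squares) reconstruction from the counts
x_hat = Lasso(alpha=0.05, positive=True).fit(A, y / flux_per_unit).coef_

print("brightest recovered pixels:", np.sort(np.argsort(x_hat)[-k:]))
print("true bright pixels        :", np.sort(np.flatnonzero(x_true)))
```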

The following paper shows a change in the software used in the data-processing chain after the data have been acquired. There is no change in hardware here, even though it might ultimately lead to one.


Emerging sonography techniques often require increasing the number of transducer elements involved in the imaging process. Consequently, larger amounts of data must be acquired and processed. The significant growth in the amounts of data affects both machinery size and power consumption. Within the classical sampling framework, state-of-the-art systems reduce processing rates by exploiting the bandpass bandwidth of the detected signals. It has recently been shown that a much more significant sample-rate reduction may be obtained by treating ultrasound signals within the Finite Rate of Innovation framework. These ideas follow the spirit of Xampling, which combines classic methods from sampling theory with recent developments in Compressed Sensing. Applying such low-rate sampling schemes to individual transducer elements, which detect energy reflected from biological tissues, is limited by the noisy nature of the signals. This often results in erroneous parameter extraction, bringing forward the need to enhance the SNR of the low-rate samples. In our work, we achieve SNR enhancement by beamforming the sub-Nyquist samples obtained from multiple elements. We refer to this process as "compressed beamforming". Applying it to cardiac ultrasound data, we successfully image macroscopic perturbations, while achieving a nearly eight-fold reduction in sample-rate compared to standard techniques.
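As a very rough illustration of what beamforming low-rate samples buys you in SNR, the sketch below keeps only a few Fourier coefficients per element (a crude stand-in for the Xampling/FRI samples), undoes each element's focusing delay with a phase rotation, and averages across elements. The geometry, delays and sizes are made up, and the FRI parameter extraction of the paper is not reproduced:

```python
# Crude illustration of SNR gain from beamforming low-rate samples: keep a few
# Fourier coefficients per element (stand-in for the Xampling/FRI samples),
# undo each focusing delay with a phase rotation, and average across elements.
# Geometry, delays and sizes are made up; FRI parameter extraction is omitted.
import numpy as np

rng = np.random.default_rng(3)
n_elem, n_samp = 32, 512
t = np.arange(n_samp)

# Each element sees the same echo, shifted by a known focusing delay, plus noise
delays = rng.integers(0, 40, n_elem)
pulse = np.exp(-0.5 * ((t - 200) / 5.0) ** 2)
signals = np.stack([np.roll(pulse, d) for d in delays])
signals = signals + 0.5 * rng.standard_normal(signals.shape)

# Keep only a few low-frequency DFT coefficients per element ("low-rate" data)
kept = np.arange(1, 25)
coeffs = np.fft.rfft(signals, axis=1)[:, kept]

# Beamform in the frequency domain: phase-rotate each element, then average
freqs = kept / n_samp
phase = np.exp(2j * np.pi * np.outer(delays, freqs))
beamformed = (coeffs * phase).mean(axis=0)

clean = np.fft.rfft(pulse)[kept]                    # noise-free reference
print("single-element error:", np.linalg.norm(coeffs[0] * phase[0] - clean))
print("beamformed error    :", np.linalg.norm(beamformed - clean))  # ~sqrt(32) smaller
```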

The next paper falls under the category of a conceptual study for a new architecture:

Smart Grids measure energy usage in real-time and tailor supply and delivery accordingly, in order to improve power transmission and distribution. For the grids to operate effectively, it is critical to collect readings from massively-installed smart meters at control centers in an efficient and secure manner. In this paper, we propose a secure compressed reading scheme to address this critical issue. We observe that our collected real-world meter data express strong temporal correlations, indicating they are sparse in certain domains. We adopt the Compressed Sensing technique to exploit this sparsity and design an efficient meter data transmission scheme. Our scheme achieves the substantial efficiency offered by compressed sensing, without the need to know beforehand in which domain the meter data are sparse. This is in contrast to traditional compressed-sensing-based schemes, where such sparse-domain information is required a priori. We then design a specific dependable scheme to work with our compressed-sensing-based data transmission scheme to make our meter reading reliable and secure. We provide performance guarantees for the correctness, efficiency, and security of our proposed scheme. Through analysis and simulations, we demonstrate the effectiveness of our schemes and compare their performance to prior art.
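A bare-bones version of compressed meter reading looks like this: the meter transmits a few random projections of its readings and the control center recovers the full trace by sparse regression in a transform basis. I use an exactly DCT-sparse synthetic load as a stand-in for real, temporally correlated meter data; note that the paper's scheme precisely avoids fixing the sparse domain in advance and adds a security layer not shown here:

```python
# Bare-bones compressed meter reading: transmit a few random projections of the
# readings, recover the full trace by sparse regression in a DCT basis. The
# DCT-sparse synthetic load, sizes and solver are illustrative assumptions.
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
n, m = 96, 32                                 # 96 readings per day, 32 values sent

Psi = idct(np.eye(n), norm='ortho', axis=0)   # columns = DCT synthesis atoms

# Synthetic load trace, exactly sparse in the DCT domain (a stand-in for real,
# temporally correlated meter data, which are merely compressible)
s_true = np.zeros(n)
s_true[[0, 2, 5, 9]] = [12.0, 3.0, -1.5, 0.8]
x = Psi @ s_true

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # meter-side random projection
y = Phi @ x                                      # what is actually transmitted

# Control center: recover the sparse DCT coefficients, then synthesize the trace
s_hat = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000).fit(Phi @ Psi, y).coef_
x_hat = Psi @ s_hat

print("relative recovery error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```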

Finally, the last paper shows an improvement in the reconstruction rather than in the actual hardware.

Purpose: To retrospectively evaluate the fidelity of magnetic resonance (MR) spectroscopic imaging data preservation at a range of accelerations by using compressed sensing.
Materials and Methods: The protocols were approved by the institutional review board of the university, and written informed consent to acquire and analyze MR spectroscopic imaging data was obtained from the subjects prior to the acquisitions. This study was HIPAA compliant. Retrospective application of compressed sensing was performed on 10 clinical MR spectroscopic imaging data sets, yielding 600 voxels from six normal brain data sets, 163 voxels from two brain tumor data sets, and 36 voxels from two prostate cancer data sets for analysis. The reconstructions were performed at acceleration factors of two, three, four, five, and 10 and were evaluated by using the root mean square error (RMSE) metric, metabolite maps (choline, creatine, N-acetylaspartate [NAA], and/or citrate), and statistical analysis involving a voxelwise paired t test and one-way analysis of variance for metabolite maps and ratios for comparison of the accelerated reconstruction with the original case.
Results: The reconstructions showed high fidelity for accelerations up to 10 as determined by the low RMSE (< 0.05). Similar means of the metabolite intensities and hot-spot localization on metabolite maps were observed up to a factor of five, with lack of statistically significant differences compared with the original data. The metabolite ratios of choline to NAA and choline plus creatine to citrate did not show significant differences from the original data for up to an acceleration factor of five in all cases and up to that of 10 for some cases.
Conclusion: A reduction of acquisition time by up to 80%, with negligible loss of information as evaluated with clinically relevant metrics, has been successfully demonstrated for hydrogen 1 MR spectroscopic imaging.
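The retrospective evaluation can be mimicked on a toy 1D example: a synthetic spectrum with a few metabolite-like peaks is reconstructed from 1/R of its time-domain samples and compared to the fully sampled reference with the RMSE metric. Peak positions, acceleration factors and the solver are my own assumptions, not the clinical protocol of the paper:

```python
# Toy 1D version of the retrospective evaluation: a spectrum with a few
# metabolite-like peaks is recovered from 1/R of its time-domain samples and
# compared to the fully sampled reference with the RMSE metric. Peak positions,
# acceleration factors and the solver are assumptions, not the clinical protocol.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
n = 256

spectrum = np.zeros(n)
spectrum[[40, 90, 150]] = [1.0, 0.7, 0.5]          # stand-ins for Cho, Cr, NAA

F = np.fft.fft(np.eye(n)) / np.sqrt(n)             # DFT matrix (spectrum -> FID)

for R in (2, 4, 10):                               # acceleration factors
    m = n // R
    rows = rng.choice(n, m, replace=False)         # retrospective undersampling
    A = np.vstack([F[rows].real, F[rows].imag])    # keep the problem real-valued
    y = A @ spectrum

    rec = Lasso(alpha=1e-4, fit_intercept=False, max_iter=200000).fit(A, y).coef_
    rmse = np.sqrt(np.mean((rec - spectrum) ** 2))
    print(f"acceleration x{R}: RMSE = {rmse:.4f}")
```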


All in all, what do we see? A compressive sensing approach currently:

  • provides a means of designing new hardware at a low TRL [3]
  • provides a means of defining new architectures [5]
  • provides a means of changing the current computational chain, yielding gains at the operational level [1, 4, 6] (high TRL), and even permits discovery [2]! At this stage there is no change in hardware, but this is likely the first step before new hardware is designed to complement the new data-processing pipeline [4].
In short, at this stage, the changes made possible by compressive sensing are either invisible or very few (only one new sensor [3]), and those are at a very low technology maturity level.

Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle, and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
