Thursday, January 31, 2008

Predicting NROL-21's fall: When and How.


Here is a different kind of debris. Aviation Week reports that an NRO spacecraft is uncontrollably descending into the atmosphere "at a rate of about 2,310 feet per day, according to Canadian analyst Ted Molczan." The reporter seems to be saying that the radar and recovery data from Columbia's fateful reentry are aiding in the analysis of the potential for accidents with the debris of that satellite. I am a little bit skeptical on several counts. When people tried to evaluate when a small disco ball would reenter the atmosphere, few guessed the day and time it eventually did. Also, when I last looked into it, most of the modeling based on specific small-scale experiments did not seem to have real predictive capability. It seems that this conclusion was also reached by some of the SCM folks who eventually did a new kind of statistical study on this very problem for the French Space Agency. Finally, the two photos below show before and after photographs of our camera that was on board the Columbia Space Shuttle. It is made of aluminum, and even though the melting point of aluminum is about 660 C, well below the temperature felt outside of the craft on reentry, the base plate was nearly intact on impact (our camera was outside). The rest melted away except for the optical component made of Invar.


Compressed Sensing: Thoughts on Coded Aperture and Compressed Sensing

In previous entries, we saw the large body of work at Duke and the University of Arizona using Coded Aperture in order to perform hyperspectral measurements, superresolution and many other things.

It was also mentioned that coded aperture has been used in Astronomy for the past forty years and that a recent surge of interest has been developing in X-ray and medical imaging (here and here). In this entry, we also saw the use of coded aperture to perform gamma radiation detection over large fields of view. In another post, one could learn more about computational optical imaging, but the basic question is:

What is Coded Aperture ?

I took the following picture from [1], who took it from [2]:


It certainly looks to me like a scheme that resembles that of Compressed Sensing or Compressive Sampling. In other words, the recorded data needs a computer decoding procedure to reconstruct the initial image. But the next question is:
Why would you want to do that ?

Think of every hole in the mask as the equivalent of a pinhole camera. The reason you want several holes is that you want as much light (or as many other particles) as possible in your image in order to reduce the overall noise, or you want to see the same object from different angles. With several holes, one now has several images superimposed over each other. On the one hand, more light reaches the image (less noise, higher SNR); on the other hand, one has to have a procedure in place to reassemble all these images together.
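To make the superposition concrete, here is a minimal 1D sketch (my own toy example, not taken from any of the referenced papers): the recorded pattern is the circular convolution of a sparse scene of point sources with a random binary mask, i.e. a sum of shifted copies of the scene, one per open hole.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 64                                        # detector / scene cells (1D toy model)
scene = np.zeros(n)
scene[[5, 23, 40]] = [1.0, 0.5, 2.0]          # a few point sources

mask = (rng.random(n) < 0.5).astype(float)    # binary aperture, ~50% open fraction

# Every open element acts as a pinhole: the detector records the superposition
# of shifted copies of the scene, i.e. the circular convolution of scene and mask.
record = np.real(np.fft.ifft(np.fft.fft(scene) * np.fft.fft(mask)))
print(record.round(2))
```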


But how do you get that image back ?

The answer is well explained in Roberto Accorsi's thesis [3]:

Different point sources are characterized by the pattern shift. In this sense the signal from the source is encoded. The first consequence is that a point source is not counted once, but once for every pinhole of the coded aperture, which is expected to increase the counting statistics and, consequently, the SNR. The second consequence is that each detector point presents information about many points of the source: in this sense information is multiplexed. The third consequence is that the recorded pattern R must be decoded to obtain an image. In intuitive terms, R must be scanned looking for known patterns, which must be replaced with the point source that cast them. This is the same as separating the overlapped copies and is done by an operation known in signal processing as "matched filtering" ([5]). The technique prescribes taking the correlation of the collected data with the known pattern, in our case A. In a more general case a pattern may be sought not through the pattern itself, but through an associated decoding pattern (or decoding array) G such that:

A x G = \delta

where x indicates periodic correlation, the matched filtering process is:

R x G

The result of this operation is to produce a perfect copy of the object O. In fact, given the linearity of correlation operations and eq. (1.1), one can write (using appendix A.3):

O_bar = R x G = O * (A x G)

where O_bar is by definition the estimate of the object or reconstructed image. This chain of equalities shows that the output of the imaging system is not directly the object but, as in all linear systems, a convolution of the object with a kernel, in this case A x G.
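Here is a hedged numerical sketch of that decoding chain (again my own toy setup): for a random binary mask A, a balanced decoding array G = 2A - 1 makes A x G approximately a delta, so correlating the recorded pattern with G returns the point sources on top of a noisy background. URA masks are designed precisely so that this correlation is an exact delta.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
scene = np.zeros(n); scene[[5, 23, 40]] = [1.0, 0.5, 2.0]
mask = (rng.random(n) < 0.5).astype(float)
record = np.real(np.fft.ifft(np.fft.fft(scene) * np.fft.fft(mask)))   # R = O * A

# Balanced decoding array: +1 where the mask is open, -1 where it is closed.
# For a URA, (A x G) is an exact delta; for this random mask it is only
# approximately so, hence a noisy background in the reconstruction.
G = 2.0 * mask - 1.0

# Periodic correlation R x G, computed in Fourier space.
O_bar = np.real(np.fft.ifft(np.fft.fft(record) * np.conj(np.fft.fft(G))))
print(O_bar.round(1))   # peaks should emerge at (or near) cells 5, 23 and 40
```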

And so most of the effort in the literature has been dedicated to finding the right pair of mask A and decoding mask G that fulfills the periodic correlation result. I think this is where Compressed Sensing can bring a new approach to the subject. Whereas in the usual Coded Aperture work the emphasis is on finding the right mask whose combination with a decoding array yields a clean autocorrelation function for all signals, what about a mask that only needs to work for sparse signals, using an L0-L1 reconstruction technique instead of a decoding mask G? Coded aperture already leans on sparsity to a certain extent, in that there is an expectation that one is looking at determining the location of points of light (or radiation) in a sea of background noise (especially in Astronomy or Medical Imaging).
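As a hedged illustration of that idea (my own toy setup, not a result from the coded aperture literature): treat the circulant matrix built from the mask as the sensing matrix, keep fewer detector readings than scene cells, and recover the point sources by L1 minimization (here a nonnegative basis pursuit solved as a linear program) rather than by correlation with a decoding array.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m = 64, 48                                   # scene cells, detector readings kept

scene = np.zeros(n)
scene[[5, 23, 40]] = [1.0, 0.5, 2.0]            # sparse scene of point sources
mask = (rng.random(n) < 0.5).astype(float)      # random binary aperture

# Circulant matrix implementing the circular convolution with the mask,
# then keep only m of the n detector cells.
idx = (np.arange(n)[:, None] - np.arange(n)[None, :]) % n
A = mask[idx][rng.choice(n, m, replace=False)]
b = A @ scene

# Nonnegative basis pursuit: min sum(x) subject to A x = b, x >= 0.
res = linprog(c=np.ones(n), A_eq=A, b_eq=b, bounds=[(0, None)] * n)
print(np.nonzero(res.x > 1e-6)[0])              # should typically report [ 5 23 40]
```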

One should note that random masks also exist in that literature:




Clearly the method seems to have problems with images that have features other than point sources (either light or radiation). In the context of using coded apertures for radiation systems, and especially in medical imaging where one is starving for photons/particles, Roberto Accorsi [3] mentions:
In conclusion, coded aperture imaging can produce a perfect copy of the object. It is a two step process: the first is physical, the projection of the source through the aperture; the second is computational decoding. The motivation to go through this complication is the potential of achieving a higher SNR. Chapter 4 is dedicated to quantifying this potential and understanding the hypothesis under which it is actually present.

If the object is extended, the advantage is not as large. Several point sources project overlapping mask patterns, so each detector records information from more than one source and measurements are not independent. Overlaps, i.e. multiplexing, cause some costs in terms of SNR, depending on the extent of superposition. We shall see that if an object takes the whole field, the SNR advantage may be lower than 1, i.e. a pinhole image would offer a better SNR, a result well known in literature ([31],[52]). Two conflicting needs arise: opening more pinholes, to increase the SNR advantage sqrt(N), and opening fewer pinholes, to reduce superposition. This trade-off is the necessary (but, as we shall see, not sufficient) condition for the existence of an optimal mask open fraction.


Indeed from [4], "Because the reconstructed image of the URA contains virtually uniform noise regardless of the original object's structure, the improvement over the single pinhole camera is much larger for the bright points than it is for the low intensity points. "


I note that sometimes the issue is not just SNR, as in the particle detection case, but rather the ability to reconstruct a 3-D view of the scene of interest. I also note the words from [3, page 192] about reconstructing 3D information:

A thorough investigation shows that the restricted view angle results in severe undersampling of some regions of Fourier space ([72]). Criteria for successful imaging (concerning the extent of the object in real and Fourier space) have also been given starting from the formulation of a sampling theorem which shows that the finite aperture bandlimits the spatial frequencies present in the object ([73]).

mmmuhh, severe undersampling of some regions of Fourier space. But it looks like, six years later, the framework makes it possible to think about obtaining that third dimension [5]. In Analytic derivation of the longitudinal component of the three-dimensional point-spread function in coded-aperture laminography by Roberto Accorsi [5] one can read the following abstract:
Near-field coded-aperture data from a single view contain information useful for three-dimensional (3D) reconstruction. A common approach is to reconstruct the 3D image one plane at a time. An analytic expression is derived for the 3D point-spread function of coded-aperture laminography. Comparison with computer simulations and experiments for apertures with different size, pattern, and pattern family shows good agreement in all cases considered. The expression is discussed in the context of the completeness conditions for projection data and is applied to explain an example of nonlinear behavior inherent in 3D laminographic imaging.

Irrespective of that, it would seem that the current effort in compressed sensing should allow for better reconstructions. Furthermore, I also note the relative similarity between the density of the coded aperture array, which describes how much light passes through the mask, and the sparse Bernoulli matrices in the construction of Radu Berinde and Piotr Indyk.

Since many copies of the same scene are superimposed on each other and the reconstruction amounts to a matched filter approach, could we use the smashed filter approach devised by the group at Rice to find all the copies of the same item in the image plane ?

Another intriguing paper is that of Netanel Ratner, Yoav Y. Schechner, and Felix Goldberg on Optimal multiplexed sensing: bounds, conditions and a graph theory link [6].

Particularly interesting are the following words:
"...Each of the measurements is prone to noise. This measurement noise propagates to an error in the estimation of the desired array of variables. This problem is exacerbated by constraints on the system and the object. For example, according to Ref. [3], constrains on system weight, price and power consumption of infrared spectrometers may require the use of uncooled detectors, which are generally noisy. In another example, the radiating object may be wide and diffuse [4], making its coupling into a narrow slit of a spectrometer inefficient, thus yielding a low signal, relative to the noise. For a given acquisition time, however, noise can be efficiently reduced using multiplexing [2–7,9,18–23]. Here, in each sequential measurement, multiple elements of the sought array (e.g., multiple spectral bands) are linearly weighted (multiplexed) and sensed simultaneously at a detector. In sequential measurements, the multiplex weighting changes. The set of weights is termed a multiplexing code. Once the measurements are done, computational demultiplexing derives the sought variable array.......The sequential measurements acquired at a detector are denoted by the vector a = (a 1,a2, . . . ,aN)t , given by a = Wi+υ . (1)
Here υ is the measurement noise. Any bias to this noise is assumed to be compensated for. Let the noise υ be uncorrelated in different measurements, and let its variance be σ 2 a .
Once measurements have been acquired under multiplexed sources, those measurements are de-multiplexed computationally. This derives estimates for the irradiance values of the object under the individual radiation sourcesˆi. The best unbiased linear estimator in the sense of mean square error (MSE) for the irradiance corresponding to the individual-sources is ˆi =W−1a . (2)
The MSE of this estimator [9, 18] is MSEˆi = σ 2 a N traceWtW−1 .......Hadamard codes are special cases of our analysis, hence this work significantly generalizes the known art..."

Strangely enough, the objective used in the optimization problem devised to reduce the mean square error (MSE) in the reconstructed signal is the trace of the inverse of the Gram matrix of the mask. This is reminiscent of similar techniques used to squash low-dimensional manifolds in dimensionality reduction techniques (Trace, what is it good for ?).
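A small numerical check of that trace criterion (a sketch of the classical Harwit-Sloane multiplex advantage, not code from the paper): compare trace[(W^t W)^{-1}] for one-at-a-time measurements (W = I) and for a Hadamard S-matrix, which opens about half the elements per measurement.

```python
import numpy as np
from scipy.linalg import hadamard

def demux_mse(W, sigma2=1.0):
    """Average MSE of the demultiplexed estimate i_hat = W^{-1} a,
    i.e. (sigma_a^2 / N) * trace[(W^t W)^{-1}] from the expression quoted above."""
    return sigma2 / W.shape[0] * np.trace(np.linalg.inv(W.T @ W))

n = 7
I = np.eye(n)                        # measure one element at a time
H = hadamard(n + 1)                  # Sylvester Hadamard matrix of order 8
S = (1 - H[1:, 1:]) / 2              # S-matrix: about half the elements open per shot

print(demux_mse(I))                  # 1.0
print(demux_mse(S))                  # ~0.44
print(demux_mse(I) / demux_mse(S))   # ~(n+1)^2 / (4n) = 64/28 ≈ 2.3, the multiplex gain
```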

In all, I cannot shake the sentiment that this coded aperture business is at the confluence of several fields (sketching, linear programming, optics, radiation transport...) and is bound to grow using a compressed sensing perspective.

On a different note, the isotopic element used for the coded aperture medical imager comes from this reactor.

[1] High-Sensitivity Dynamic Coded Aperture Imaging, Roberto Accorsi and Richard C. Lanza, Nuclear Science Symposium Conference Record, 2003 IEEE, 19-25 Oct. 2003, pp. 1833-1837, Vol. 3

[2] E. E. Fenimore, and T. M. Cannon, “Coded aperture imaging with uniformly redundant arrays,” Applied Optics, vol. 17, pp. 337-347, 1978.

[3] Roberto Accorsi, Design of near-field coded aperture cameras for high resolution medical and industrial gamma-ray imaging. June 2001, MIT.

[4] Coded aperture imaging: predicted performance of uniformly redundant arrays, E. E. Fenimore, 17 November 1978, Vol. 17, No. 22, Applied Optics.

[5] Analytic derivation of the longitudinal component of the three-dimensional point-spread function in coded-aperture laminography, Roberto Accorsi, Applied Optics, Vol. 44, Issue 28, pp. 5872-5883

[6] Optimal multiplexed sensing: bounds, conditions and a graph theory link, Netanel Ratner, Yoav Y. Schechner, and Felix Goldberg, Optics Express, Vol. 15, Issue 25, pp. 17072-17092

Wednesday, January 30, 2008

Compressed Sensing: SPGL1 new version, Lp reconstruction from Random and Fourier Sampling, Random matrix norm, Randomness and Trees

SPGL1, one of the solvers for large-scale sparse reconstruction, just released version 1.3. It is here: .zip, .tgz. More information can be found on their webpage.

Rick Chartrand, Nonconvex compressive sensing and reconstruction of gradient-sparse images: random vs. tomographic Fourier sampling. The abstract reads:

Previous compressive sensing papers have considered the example of recovering an image with sparse gradient from a surprisingly small number of samples of its Fourier transform. The samples were taken along radial lines, this being equivalent to a tomographic reconstruction problem. The theory of compressive sensing, however, considers random sampling instead. We perform numerical experiments to compare the two approaches, in terms of the number of samples necessary for exact recovery, algorithmic performance, and robustness to noise. We use a nonconvex approach, this having previously been shown to allow reconstruction with fewer measurements and greater robustness to noise, as confirmed by our results here.

They use the lp algorithm mentioned before in the Monday Morning Algorithm and find some surprising results with regard to convergence with random projections and Fourier samples. Usually, Fourier sampling needs more samples to enable a good reconstruction, but in this case, they seem to show the contrary. Then again, they only tried random Gaussian ensembles.
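As far as I understand it, the lp algorithm in question is an iteratively reweighted least squares scheme for p < 1 with a gradually decreased regularization epsilon; here is a minimal, hedged sketch of that idea on a random Gaussian ensemble (my own parameters, not the paper's experiments).

```python
import numpy as np

def irls_lp(A, b, p=0.5, n_iter=50, eps=1.0):
    """Iteratively reweighted least squares for min sum |x_i|^p s.t. A x = b.
    A rough sketch of the nonconvex (p < 1) approach; eps is decreased
    gradually, in the spirit of the epsilon-regularization strategy used
    in Chartrand's papers (not his exact code)."""
    x = A.T @ np.linalg.solve(A @ A.T, b)        # minimum-L2-norm starting point
    for _ in range(n_iter):
        q = (x**2 + eps) ** (1 - p / 2)          # inverse weights (diagonal of Q)
        Q = np.diag(q)
        x = Q @ A.T @ np.linalg.solve(A @ Q @ A.T, b)
        eps = max(eps * 0.5, 1e-8)
    return x

rng = np.random.default_rng(0)
n, m, k = 128, 40, 8
A = rng.standard_normal((m, n))                  # random Gaussian measurements
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_rec = irls_lp(A, A @ x0, p=0.5)
print(np.linalg.norm(x_rec - x0) / np.linalg.norm(x0))   # should be small in this regime
```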

In a previous post, I mentioned that Rick Chartrand and Valentina Staneva had just submitted a paper on the subject of Restricted isometry properties and nonconvex compressive sensing. It is now available.

In another, more theoretical area, Mark Rudelson and Roman Vershynin just released The Littlewood-Offord Problem and invertibility of random matrices. From the commentary section:
Here we prove two basic conjectures on the invertibility of random matrices. One of them has been a prediction of Von Neumann and his associates, who speculated that the condition number of a random n by n matrix with i.i.d. entries should be linear in n (with high probability). The condition number is the ratio of the largest to the smallest singular values of the matrix, or equivalently the product of the norms of the matrix and its inverse. ..... Thus the norms of a random matrix and of its inverse are both of order n^{1/2} with high probability. This confirms Von Neumann's prediction in full generality.
The abstract reads:
We prove two basic conjectures on the distribution of the smallest singular value of random n×n matrices with independent entries. Under minimal moment assumptions, we show that the smallest singular value is of order n^(−1/2), which is optimal for Gaussian matrices. Moreover, we give an optimal estimate on the tail probability. This comes as a consequence of a new and essentially sharp estimate in the Littlewood-Offord problem: for i.i.d. random variables X_k and real numbers a_k, determine the probability p that the sum $\sum_k a_k X_k$ lies near some number v. For arbitrary coefficients a_k of the same order of magnitude, we show that they essentially lie in an arithmetic progression of length 1/p.
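A quick numerical sanity check of those scalings (my own throwaway experiment, nothing from the paper): for i.i.d. Gaussian n×n matrices, the largest singular value grows like n^{1/2}, the smallest shrinks like n^{-1/2}, and the condition number grows roughly linearly in n.

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (100, 400, 1600):
    s = np.linalg.svd(rng.standard_normal((n, n)), compute_uv=False)
    # s[0] / sqrt(n) and s[-1] * sqrt(n) should stay O(1) as n grows;
    # the last column is the condition number divided by n.
    print(n, s[0] / np.sqrt(n), s[-1] * np.sqrt(n), s[0] / s[-1] / n)
```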
On the same subject of theoretical approaches to randomness, Terry Tao took notes at the Distinguished Lecture Series by Avi Wigderson entitled “The power and weakness of randomness in computation”. He mentions these very interesting slides.

Finally, Sanjoy Dasgupta and Yoav Freund describe the intriguing Random projection trees and low dimensional manifolds. The abstract reads:
We present a simple variant of the k-d tree which automatically adapts to intrinsic low dimensional structure in data without having to explicitly learn this structure.
and its companion NIPS paper (Yoav Freund, Sanjoy Dasgupta, Mayank Kabra, Nakul Verma, Learning the structure of manifolds using random projections). I wonder when a matlab code will be made available.


Credit Photo: SPI coded aperture detector, one of the main instruments on the ESA INTEGRAL satellite. From F. Sanchez, R. Chato, J. L. Gasent, J. Rodrigo, T. Velasco, A coded mask for gamma-ray astronomy. Design and calibration, Nuclear Instruments and Methods in Physics Research A 500 (2003) 253-262.

Tuesday, January 29, 2008

Compressed Sensing: Computational Optical Imaging and Spectroscopy videos, I need some more popcorn.

In the previous entry, we saw how Compressed Sensing (or Compressive Sampling) can be included in optical systems. In order to understand how the field has evolved to that level of integration of CS, one may want to watch the videos of David Brady at the 2005 IMA workshop on Radar and Optical Imaging, where he presented a week of talks on Computational Optical Imaging and Spectroscopy. The slides are here whereas the videos are here:


Some additional information can be found here. A very nice explanation of how a hologram is generated can be found at the beginning of presentation 9. Presentation 10 is also useful in order to better understand previous work at DISP. I liked the response to a question when he talks about a hierarchical set of lenses at 33 minutes into talk 10, as it touches on a question asked in a previous entry. One also gets an insight as to why one wants to use multiple apertures and why, in focal plane coding, CMOS technology might bring an advantage by allowing pixels to be distributed "randomly". I also note that there is no reference to NMF with regard to imposing non-negativity constraints as done by Bob Plemmons for some specific hyperspectral problems.

Credit photo: Wikipedia/National Nuclear Security Administration, Operation Plumbbob.

Monday, January 28, 2008

Compressed Sensing: Hardware Implementations in Computational Imaging, Coded Apertures and Random Materials

[Update Nov. 08: I have listed most of the Compressed Sensing Hardware in the following page]

I have mentioned some hardware implementations of Compressed Sensing (or Compressive Sensing or Compressive Sampling) before ([L1], [L2], [L3], [L4], [L5], [L6]). However, owing to my ignorance, I did not give full credit to some of the work that made the most contributions to the subject. I think it had mostly to do with the fact that some websites were not up at the time I looked at the issue or that too many articles required full access to some journals (some still do). Much work on the subject of optical Compressed Sensing has been performed at the Duke Imaging and Spectroscopy Program or DISP led by David Brady and at the Optical Computing and Processing Laboratory led by Mark Neifeld at the University of Arizona.

To give some perspective on this new and evolving field, here is the story as told by the Duke researchers in Compressive Imaging Sensors [1]:

An optical image has been understood as an intensity field distribution representing a physical object or group of objects. The image is considered two dimensional because the detectors are typically planar, although the objects may not be. This understanding of the optical intensity field as the image has persisted even as electronic focal planes have replaced photochemical films. Lately, however, more imaginative conceptions of the relationship between the detected field and the reconstructed image have emerged. Much of this work falls under the auspices of "computational optical sensing and imaging" [1], which was pioneered in Cathey and Dowski's use of deliberate image aberrations to extend the depth of field [2,3] and by computed spectral tomography as represented, for example, in the work by Descour and Dereniak [4]. More recently, both extended depth of field and spectral features in imaging systems have been considered by many research groups. Images of physical objects have many features, such as lines and curves as well as areas separated or segmented by lines and curves. The most fundamental feature of images is the fascinating fact that an image is not an array of independent random data values. Tremendous progress has been made in the past decade in feature extraction and compression of images via post digital processing. Only recently has intelligent sampling and compression at the physical layer become a major interest. The work of Neifeld is particularly pioneering in this regard [5,6]. The DISP group at Duke University has also focused in several studies on data representation at the optical sampling layer and on physical layer compression [7-12]. The interest in data compression at the physical layer is also encouraged by the mathematical results by Donoho et al., who measure general functionals of a compressible and discretized function and recover n values from O(n^{1/4} log^{5/2}(n)) measurements. In particular, the 1-norm of the unknown signal in its representation with respect to an orthonormal basis is used as the minimization objective, subject to a condition on the sparsity in the representation coefficients [13,14]. Rapid progress along these lines by Candes, Baraniuk and others is summarized in publications on line www-dsp.rice.edu/CS/.

And therefore it is becoming obvious that, in the optics world, researchers have known for some time that one could get more information out of scenes as long as a physical layer allowed some type of interference between the signal and some physical device (preferably a random one). Before this entry, I had mentioned the Hyperspectral Imager at Duke and the page of Ashwin Wagadarikar, but I had not seen the full list of publications at DISP that lists most of their work for the past five years on the subject. In line with the Random Lens Imager at MIT and the ability to detect movement as mentioned by Rich Baraniuk, here are some articles that caught my eye:


Multiple order coded aperture spectrometer by S. D. Feller, Haojun Chen, D. J. Brady, Michael Gehm, Chaoray Hsieh, Omid Momtahan, and Ali Adibi [2]. The abstract reads:
We introduce a multiple order coded aperture (MOCA) spectrometer. The MOCA is a system that uses a multiplex hologram and a coded aperture to increase the spectral range and throughput of the system over conventional spectrometers while maintaining spectral resolution. This results in an order of magnitude reduction in system volume with no loss in resolution.
This is, I believe, a follow-up to the work on Coded Aperture Snapshot Spectral Imaging (CASSI) mentioned before. At the end of this page there is a comparison between the two types of CASSI concepts tried by the DISP group. Very informative. Then there is this series of papers on reference structure imaging, i.e., put something well designed in between the imager and the object and try to see what properties can be detected. In this case, they look at tracking objects:


Reference structure tomography, by David J. Brady, Nikos P. Pitsianis, and Xiaobai Sun, [3] The abstract reads:

Reference structure tomography (RST) uses multidimensional modulations to encode mappings between radiating objects and measurements. RST may be used to image source-density distributions, estimate source parameters, or classify sources. The RST paradigm permits scan-free multidimensional imaging, data-efficient and computation-efficient source analysis, and direct abstraction of physical features. We introduce the basic concepts of RST and illustrate the use of RST for multidimensional imaging based on a geometric radiation model.


Lensless sensor system using a reference structure by P. Potuluri, U. Gopinathan, J. R. Adleman, and D. J. Brady. The abstract reads:
We describe a reference structure based sensor system for tracking the motion of an object. The reference structure is designed to implement a Hadamard transformation over a range of angular perspectives. We implemented a reference structure with an angular resolution of 5° and a field of view of 40°.

But then, after having put a well-known structure between the object and the imager, they insert random objects in Imaging with random 3D reference structures, by P. Potuluri, M. Xu, and D. J. Brady. The abstract reads:
Three dimensional (3D) reference structures segment source spaces based on whether particular source locations are visible or invisible to the sensor. A lensless 3D reference structure based imaging system measures projections of this source space on a sensor array. We derive and experimentally verify a model to predict the statistics of the measured projections for a simple 2D object. We show that the statistics of the measurement can yield an accurate estimate of the size of the object without ever forming a physical image. Further, we conjecture that the measured statistics can be used to determine the shape of 3D objects and present preliminary experimental measurements for 3D shape recognition.
and in Imaging with random 3D reference structures by Prasant Potuluri, Mingbo Xu and David J. Brady


The abstract reads:
We describe a sensor system based on 3D ‘reference structures’ which implements a mapping from a 3D source volume on to a 2D sensor plane. The reference structure used here is a random three dimensional distribution of polystyrene beads. We show how this bead structure spatially segments the source volume and present some simple experimental results of 2D and 3D imaging.




And so they begin to detect size, shape and motion of objects! The shape feature has also been looked at by some folks at Rice in a small course on Compressed Sensing (course given on CNX.org).

The motion is studied in Coded apertures for efficient pyroelectric motion tracking, by U. Gopinathan, D. J. Brady, and N. P. Pitsianis

The abstract reads

Coded apertures may be designed to modulate the visibility between source and measurement spaces such that the position of a source among N resolution cells may be discriminated using logarithm of N measurements. We use coded apertures as reference structures in a pyroelectric motion tracking system. This sensor system is capable of detecting source motion in one of the 15 cells uniformly distributed over a 1.6 m × 1.6 m domain using 4 pyroelectric detectors.





The size and shape are investigated in Size and shape recognition using measurement statistics and random 3D reference structures by Arnab Sinha and David J. Brady


The abstract reads
Three dimensional (3D) reference structures segment source spaces based on whether particular source locations are visible or invisible to the sensor. A lensless 3D reference structure based imaging system measures projections of this source space on a sensor array. We derive and experimentally verify a model to predict the statistics of the measured projections for a simple 2D object. We show that the statistics of the measurement can yield an accurate estimate of the size of the object without ever forming a physical image. Further, we conjecture that the measured statistics can be used to determine the shape of 3D objects and present preliminary experimental measurements for 3D shape recognition.




A similar argument was used by the folks at the University of Arizona to obtain superresolution when they looked at the pseudorandom phase-enhanced lens (PRPEL) imager in Pseudorandom phase masks for superresolution imaging from subpixel shifting by Amit Ashok and Mark A. Neifeld [9]. The abstract reads:
We present a method for overcoming the pixel-limited resolution of digital imagers. Our method combines optical point-spread function engineering with subpixel image shifting. We place an optimized pseudorandom phase mask in the aperture stop of a conventional imager and demonstrate the improved performance that can be achieved by combining multiple subpixel shifted images. Simulation results show that the pseudorandom phase-enhanced lens (PRPEL) imager achieves as much as 50% resolution improvement over a conventional multiframe imager. The PRPEL imager also enhances reconstruction root-mean-squared error by as much as 20%. We present experimental results that validate the predicted PRPEL imager performance.

The idea is, through the use of a random phase material, to spread out the point spread function (it is generally a point/Dirac in normal cameras) so that it is bigger and more easily delineated from other points. The expectation is that the pixel-limited resolution is pushed, since one can now delineate one "spread" peak from another more easily (the two peaks are not spread symmetrically thanks to the random material). The ability to reach higher resolution will then come from the ability, as in compressed sensing, to find the sparsest set of shifted point spread functions that can explain the image.
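Here is a hedged 1D toy illustration of why a spread PSF helps (my own construction, not the PRPEL system): with a narrow PSF, two sources one subpixel apart often produce identical detector readings, whereas a broad pseudorandom PSF integrated over the detector pixels keeps them distinguishable, which is what a sparse reconstruction over shifted PSFs can then exploit.

```python
import numpy as np

rng = np.random.default_rng(0)
up = 8                          # subpixel source positions per detector pixel
n_pix = 32                      # detector pixels
n = up * n_pix                  # fine-grid source positions

def coarse_response(psf):
    """Detector response (n_pix values) for a point source at each of the n
    fine-grid positions, i.e. the dictionary of shifted, pixel-integrated PSFs."""
    D = np.empty((n_pix, n))
    for j in range(n):
        fine = np.roll(psf, j)                     # PSF centered at fine position j
        D[:, j] = fine.reshape(n_pix, up).sum(1)   # integrate over each detector pixel
    return D

psf_point = np.zeros(n); psf_point[0] = 1.0                      # ideal narrow PSF
psf_rand = np.zeros(n); psf_rand[:3 * up] = rng.random(3 * up)   # broad pseudorandom PSF
psf_rand /= psf_rand.sum()

for name, psf in [("narrow PSF", psf_point), ("random broad PSF", psf_rand)]:
    D = coarse_response(psf)
    # smallest distance between detector responses of two sources one subpixel apart:
    # zero for the narrow PSF (indistinguishable), positive for the spread PSF
    d = min(np.linalg.norm(D[:, j] - D[:, j + 1]) for j in range(n - 1))
    print(name, d)
```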


Some more explanatory figures (see above) can be found in Imager Design using Object-Space Prior Knowledge, a presentation at IMA 2005 by Mark Neifeld.

All in all, it seems to me that the major issue when designing these random imagers is calibration, which seems to be very cumbersome. Is there a way to do this faster ? Can machine learning help ?

On a related note, Dharmpal Takhar will defend his thesis on Compressed Sensing for Imaging Applications. It'll be on Monday, February 4, 2008, from 10:00 AM to 12:00 PM in 3076 Duncan Hall at Rice University.

His abstract is:
Compressed sensing is a new sampling theory which allows reconstructing signals using sub-Nyquist measurements/sampling. This can significantly reduce the computation required for image/video acquisition/encoding, at least at the sensor end. Compressed sensing works on the concept of sparsity of the signal in some known domain, which is incoherent with the measurement domain. We exploit this technique to build a single pixel camera based on an optical modulator and a single photosensor. Random projections of the signal (image) are taken by optical modulator, which has random matrix displayed on it corresponding to the measurement domain (random noise). This random projected signal is collected on the photosensor and later used for reconstructing the signal. In this scheme we are making a tradeoff between the spatial extent of sampling array and a sequential sampling over time with a single detector. In addition to this method, we will also demonstrate a new design which overcomes this shortcoming by parallel collection of many random projections simultaneously. Applications of this technique in hyperspectral and infrared imaging will be discussed.


This is going to be interesting; I can't wait to see how the random projections are gathered simultaneously. Good luck Dharmpal!
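For readers new to the single pixel camera idea described in the abstract, here is a minimal, hedged simulation of the measurement model (my own toy numbers, not the Rice implementation): each measurement is the inner product of the scene with one random ±1 pattern (in hardware, a ±1 pattern can be realized as the difference of two 0/1 mirror configurations), and the sparse scene is recovered greedily with orthogonal matching pursuit.

```python
import numpy as np

rng = np.random.default_rng(0)
npix, m, k = 16 * 16, 120, 6

# Toy scene: a 16x16 image with a handful of bright pixels (sparse directly in
# the pixel basis to keep the sketch short; real scenes are sparse in a wavelet
# basis and the same machinery applies with Phi replaced by Phi @ Psi).
x0 = np.zeros(npix)
x0[rng.choice(npix, k, replace=False)] = rng.random(k) + 0.5

# One photodetector reading per random +/-1 pattern displayed on the modulator.
Phi = rng.choice([-1.0, 1.0], size=(m, npix))
y = Phi @ x0

# Orthogonal matching pursuit: repeatedly pick the column most correlated with
# the residual, then re-fit by least squares on the selected support.
support, x_hat = [], np.zeros(npix)
for _ in range(k):
    r = y - Phi @ x_hat
    support.append(int(np.argmax(np.abs(Phi.T @ r))))
    cols = sorted(set(support))
    coef, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
    x_hat = np.zeros(npix)
    x_hat[cols] = coef

print(sorted(np.nonzero(x0)[0]), cols)   # the two supports should typically match
```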


References:
[1] N. Pitsianis, D. Brady, A. Portnoy, X. Sun, M. Fiddy, M. Feldman, R. TeKolste, Compressive Imaging Sensors, Proceedings of SPIE, Vol. SPIE-6232, pp. 43-51 (2006)

[2] Multiple order coded aperture spectrometer, S. D. Feller, Haojun Chen, D. J. Brady, M. E. Gehm, Chaoray Hsieh, Omid Momtahan, and Ali Adibi , Optics Express, Vol. 15, Issue 9, pp. 5625-5630

[3] David J. Brady, Nikos P. Pitsianis, and Xiaobai Sun, Reference structure tomography, J. Opt. Soc. Am. A/Vol. 21, No. 7/July 2004

[4] P. Potuluri, U. Gopinathan, J. R. Adleman, and D. J. Brady, ‘‘Lensless sensor system using a reference structure,’’ Opt. Express 11, 965–974 (2003).

[5] Coded apertures for efficient pyroelectric motion tracking, U. Gopinathan, D. J. Brady, and N. P. Pitsianis Opt. Express 11, 2142–2152 (2003).

[6] Size and shape recognition using measurement statistics and random 3D reference structures, Arnab Sinha and David J. Brady


[9] A. Ashok and M. A. Neifeld, " Pseudorandom phase masks for superresolution imaging from subpixel shifting," Appl. Opt. 46, 2256-2268 (2007)

Friday, January 25, 2008

Compressed Sensing: Building an Overcomplete Basis to Make Your Signal Sparse: Texture Decomposition using NMF

The problem of finding the sparsest decomposition of a signal is intrinsically linked to finding one or several bases for the signal at hand. Either one is lucky enough to choose from a certain set of known functions, like the Gabor functions in EEG signals, or these functions are so involved that a more empirical approach is required, i.e., find as many examples as possible and try to decompose these training examples into 'atoms' that form some type of basis. I previously noted the very fast approach of Honglak Lee, Alexis Battle, Rajat Raina, Andrew Ng in Efficient sparse coding algorithms but was also surprised that no use of Non-Negative Matrix Factorization had been undertaken. I was wrong: Gabriel Peyre did that for textures in Non-negative Sparse Modeling of Textures

The abstract reads:
This paper presents a statistical model for textures that uses a non-negative decomposition on a set of local atoms learned from an exemplar. This model is described by the variances and kurtosis of the marginals of the decomposition of patches in the learned dictionary. A fast sampling algorithm allows to draw a typical image from this model. The resulting texture synthesis captures the geometric features of the original exemplar. To speed up synthesis and generate structures of various sizes, a multi-scale process is used. Applications to texture synthesis, image inpainting and texture segmentation are presented.


Please note his comment that the sparsity requirement does not help much:

An explicit sparsity prior can be added into this process [23] to enforce the constraints $||x_j||_0 < \tau$ required by equation (1). In practice, this did not result in a noticeable enhancement for texture modeling.
This is surprising (see Sparsity or positivity ?) but one might not be so lucky for other types of signals. Let us not forget that while curvelets or contourlets seem to do a good job of describing contours and therefore enable better Compressive Sampling, they do not address textures (which is the reason for this paper), nor time-related or hyperspectral compressible bases.
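For the curious, here is a small, hedged sketch of the core ingredient of such a model: learning a non-negative dictionary of local patch atoms with NMF and decomposing patches over it. It uses scikit-learn on a synthetic texture and is only an illustration of non-negative patch decomposition, not Gabriel Peyre's algorithm.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)

# Synthetic "exemplar" texture (stripes plus noise) standing in for a real image.
xx, yy = np.meshgrid(np.arange(128), np.arange(128))
texture = 0.5 + 0.4 * np.sin(0.4 * xx + 0.2 * yy) + 0.05 * rng.random((128, 128))

# Non-negative 8x8 patches as rows of the data matrix.
p = 8
patches = extract_patches_2d(texture, (p, p), max_patches=2000, random_state=0)
V = patches.reshape(len(patches), -1)            # (2000, 64), all entries >= 0

# Learn non-negative local atoms and the non-negative coefficients of each
# patch in that dictionary: V ≈ coeffs @ atoms.
model = NMF(n_components=20, init="nndsvda", max_iter=400, random_state=0)
coeffs = model.fit_transform(V)                  # (2000, 20) patch coefficients
atoms = model.components_                        # (20, 64) learned atoms

print(np.linalg.norm(V - coeffs @ atoms) / np.linalg.norm(V))  # relative reconstruction error
# The statistics of the marginals of these coefficients (variance, kurtosis)
# are what the texture model in the paper is built on.
```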

Thermoacoustic Flame Extinguisher ?

When George Mozurkewich came to town in the late '90s, I learned that there were more than three heat transfer mechanisms. The ones everybody knows are: conduction, convection and radiation. A fourth one presented by George was thermoacoustics, whereby, given the right frequency, sound waves can be modulated to move heat from one place to another. I wonder why these thermoacoustic oscillations are not even considered as a plausible explanation for why fires can be extinguished with sound waves. One could try to check this by making a $25 tabletop thermoacoustic engine (you may want to check this document to do that) and trying it out against different types of flames and frequencies. And while space might be a good place to think about using this technology, I am sure that other areas such as fire prevention and containment would be much better served by investigating it. A device using this technique could be called the Muad'Dib Fire Extinguisher.

Thursday, January 24, 2008

Compressed Sensing: SSP 2007, Some papers and posters and the ICASSP 2008 Abstracts

I found the following papers and posters on the Compressed Sensing subject at the 2007 IEEE Statistical Signal Processing Workshop. I know it is a little late but I have not seen some of them on the Rice Repository yet. It took place at the end of August 2007.

Sparse MRI Reconstruction via Multiscale L0-Continuation by Joshua Trzasko, Armando Manduca, Eric Borisch. The abstract reads:
“Compressed Sensing” and related L1-minimization methods for reconstructing sparse magnetic resonance images (MRI) acquired at sub-Nyquist rates have shown great potential for dramatically reducing exam duration. Nonetheless, the nontriviality of numerical implementation and computational intensity of these reconstruction algorithms has thus far precluded their widespread use in clinical practice. In this work, we propose a novel MRI reconstruction framework based on homotopy continuation of the L0 semi-norm using redescending M-estimator functions. Following analysis of the continuation scheme, the sparsity measure is extended to multiscale form and a simple numerical solver that can achieve accurate reconstructions in a matter of seconds on a standard desktop computer is presented.
Differences between Observation and Sampling Error in Sparse Signal Reconstruction by Galen Reeves, Michael Gastpar. The abstract reads:
The field of Compressed Sensing has shown that a relatively small number of random projections provide sufficient information to accurately reconstruct sparse signals. Inspired by applications in sensor networks in which each sensor is likely to observe a noisy version of a sparse signal and subsequently add sampling error through computation and communication, we investigate how the distortion differs depending on whether noise is introduced before sampling (observation error) or after sampling (sampling error). We analyze the optimal linear estimator (for known support) and an $\ell_1$ constrained linear inverse (for unknown support). In both cases, observation noise is shown to be less detrimental than sampling noise and low sampling rates. We also provide sampling bounds for a non-stochastic $\ell_\infty$ bounded noise model.
Previously, it was mentioned that EEG signals were interesting for CS because of their sparsity. Selin Aviyente made a similar finding (Sparse Representation for Signal Classification by Ke Huang and Selin Aviyente) and continued her investigation by looking at an implementation of Compressed Sensing for EEG signals in
Compressed Sensing Framework for EEG Compression by Selin Aviyente. The abstract reads:
Many applications in signal processing require the efficient representation and processing of data. The traditional approach to efficient signal representation is compression. In recent years, there has been a new approach to compression at the sensing level. Compressed sensing (CS) is an emerging field which is based on the revelation that a small collection of linear projections of a sparse signal contains enough information for reconstruction. In this paper, we propose an application of compressed sensing in the field of biomedical signal processing, particularly electroencephalogram (EEG) collection and storage. A compressed sensing framework is introduced for efficient representation of multichannel, multiple trial EEG data. The proposed framework is based on the revelation that EEG signals are sparse in a Gabor frame. The sparsity of EEG signals in a Gabor frame is utilized for compressed sensing of these signals. A simultaneous orthogonal matching pursuit algorithm is shown to be effective in the joint recovery of the original multiple trial EEG signals from a small number of projections.

DNA Array Decoding from Nonlinear Measurements by Belief Propagation by Mona Sheikh, Shriram Sarvotham, Olgica Milenkovic, Richard Baraniuk. The abstract reads:
We propose a signal recovery method using Belief Propagation (BP) for nonlinear Compressed Sensing (CS) and demonstrate its utility in DNA array decoding. In a CS DNA microarray, the array spots identify DNA sequences that are shared between multiple organisms, thereby reducing the number of spots required in a traditional DNA array. The sparsity in DNA sequence commonality between different organisms translates to conditions that render Belief Propagation (BP) ideal for signal decoding. However a high concentration of target molecules has a nonlinear effect on the measurements - it causes saturation in the spot intensities. We propose a tailored BP technique to estimate the target signal in spite of the nonlinearity and show that the original signal coefficients can be recovered from saturated values of their linear combinations.
Related report/papers from the Rice repository are: Compressed Sensing DNA Microarrays and DNA array decoding from nonlinear measurements by belief propagation


Rate-Distortion Bounds for Sparse Approximation by Alyson Fletcher, Sundeep Rangan, Vivek Goyal. The abstract reads:
Sparse signal models arise commonly in audio and image processing. Recent work in the area of compressed sensing has provided estimates of the performance of certain widely-used sparse signal processing techniques such as basis pursuit and matching pursuit. However, the optimal achievable performance with sparse signal approximation remains unknown. This paper provides bounds on the ability to estimate a sparse signal in noise. Specifically, we show that there is a critical minimum signal-to-noise ratio (SNR) that is required for reliable detection of the sparsity pattern of the signal. We furthermore relate this critical SNR to the asymptotic mean squared error of the maximum likelihood estimate of a sparse signal in additive Gaussian noise. The critical SNR is a simple function of the problem dimensions.



Variable Projection and Unfolding in Compressed Sensing by Joel Goodman, Benjamin Miller, Gil Raz, Andrew Bolstad. The abstract reads:
The performance of linear programming techniques that are applied in the signal identification and reconstruction process in compressed sensing (CS) is governed by both the number of measurements taken and the number of non-zero coefficients in the discrete orthonormal basis used to represent the signal. To enhance the capabilities of CS, we have developed a technique called Variable Projection and Unfolding (VPU) to extend the identification and reconstruction capability of linear programming techniques to signals with a much greater number of non-zero coefficients in the orthonormal basis in which the signals are best compressible.
Some details of this study can be found in this report entitled Analog-to-Information Study Phase, with the following abstract:
Many communications and Radar receivers must process data over a very wide band, which requires either high-rate analog-to-digital converters (ADCs) or multichannel receivers. The information content of that wideband data, however, is often sparse in some basis. Analog-to-Information (A2I) receivers exploit this sparseness in both the digital and analog domains by non-adaptively spreading the signal energy (analog) and using digital signal processing to recover the signal from an ADC sampling at a sub-Nyquist rate. A subsampled ADC implies the use of fewer receiver channels or less expensive, lower-rate devices. This report documents the signal processing techniques for such receivers developed by the MIT Lincoln Laboratory/GMR Research and Technology team in the study phase of the A2I program. We have developed two new A2I signal processing methods, both significantly outperforming compressed sensing (CS) techniques currently in the literature, which typically fail when signals occupy more than 15-20% of the downsampled band. One of our methods, Nonlinear Affine processing (NoLaff), uses a nonlinear front-end to spread signal energy before the sub-Nyquist ADC, and uses hypothesis testing to reconstruct the signal. In simulations, this technique has shown that it can reconstruct wideband signals occupying up to 72% of the downsampled basis. It is also much less sensitive to the difficulties CS has detecting signals with large magnitude variation in the compressible basis. Our other method, called Variable Projection and Unfolding (VPU), spreads the signal energy using random linear projections similar to those used in compressed sensing, but is able to reconstruct signals occupying nearly 100% of the downsampled basis. VPU achieves this using a technique similar to matching pursuit; the key difference being that VPU searches over blocks of consecutive columns rather than one column at a time.
So in short the VPU is a greedy algorithm.


Colored Random Projections for Compressed Sensing by Zhongmin Wang, G. R. Arce, and J. L. Paredes. The abstract reads:
The emerging theory of compressed sensing (CS) has led to the remarkable result that signals having a sparse representation in some known basis can be represented (with high probability) by a small sample set, taken from random projections of the signal. Notably, this sample set can be smaller than that required by the ubiquitous Nyquist sampling theorem. Much like the generalized Nyquist sampling theorem dictates that the sampling rate can be further reduced for the representation of bandlimited signals, this paper points to similar results for the sampling density in CS. In particular, it is shown that if additional spectral information of the underlying sparse signals is known, colored random projections can be used in CS in order to further reduce the number of measurements needed. Such a priori information is often available in signal processing applications and communications. Algorithms to design colored random projection vectors are developed. Further, an adaptive CS sampling method is developed for applications where non-uniform spectral characteristics of the signal are expected but are not known a priori.

Compressed Sensing for Wideband Cognitive Radios by Zhi Tian and G. B. Giannakis. The abstract reads:
In the emerging paradigm of open spectrum access, cognitive radios dynamically sense the radio-spectrum environment and must rapidly tune their transmitter parameters to efficiently utilize the available spectrum. The unprecedented radio agility envisioned, calls for fast and accurate spectrum sensing over a wide bandwidth, which challenges traditional spectral estimation methods typically operating at or above Nyquist rates. Capitalizing on the sparseness of the signal spectrum in open-access networks, this paper develops compressed sensing techniques tailored for the coarse sensing task of spectrum hole identification. Sub-Nyquist rate samples are utilized to detect and classify frequency bands via a wavelet-based edge detector. Because spectrum location estimation takes priority over fine-scale signal reconstruction, the proposed novel sensing algorithms are robust to noise and can afford reduced sampling rates.

Laurent Duval mentioned to me that ICASSP 2008 will take place in Las Vegas and will feature three sessions on Compressed Sensing. Hopefully what will happen in Vegas will not stay in Vegas :-). I note the first Show and Tell presentation session, which is supposed to draw a large crowd. The deadline for proposing one of these show and tell presentations is February 1st. I am sure that a good presentation on a dramatic improvement related to Compressed Sensing would go a long way toward making people think about how it could be used in their own field. An enterprising student might even make a name for her/himself doing that. Here is the list of abstracts from the three sessions.

SS-1: Compressed Sensing I


Session Type: Special Lecture
Time: Tuesday, April 1, 10:30 - 12:30
Location: Lecture Room 1

SS-1.1: COMPRESSIVE SENSING ON A CMOS SEPARABLE TRANSFORM IMAGE SENSOR
Ryan Robucci; Georgia Institute of Technology
Leung Kin Chiu; Georgia Institute of Technology
Jordan Gray; Georgia Institute of Technology
Justin Romberg; Georgia Institute of Technology
Paul Hasler; Georgia Institute of Technology
David Anderson; Georgia Institute of Technology

SS-1.2: COMPUTING PERFORMANCE GUARANTEES FOR COMPRESSED SENSING
Kiryung Lee; University of Illinois at Urbana-Champaign
Yoram Bresler; University of Illinois at Urbana-Champaign

SS-1.3: FINDING NEEDLES IN NOISY HAYSTACKS
Rui Castro; University of Wisconsin - Madison
Jarvis Haupt; University of Wisconsin - Madison
Robert Nowak; University of Wisconsin - Madison
Gil Raz; University of Wisconsin - Madison

SS-1.4: WAVELET-DOMAIN COMPRESSIVE SIGNAL RECONSTRUCTION USING A HIDDEN MARKOV TREE MODEL
Marco Duarte; Rice University
Michael Wakin; University of Michigan
Richard Baraniuk; Rice University

SS-1.5: DISTRIBUTED COMPRESSED SENSING: SPARSITY MODELS AND RECONSTRUCTION ALGORITHMS USING ANNIHILATING FILTER
Ali Hormati; Ecole Polytechnique Federale de Lausanne
Martin Vetterli; Ecole Polytechnique Federale de Lausanne

SS-1.6: ON THE UNIQUENESS OF NON-NEGATIVE SPARSE AND REDUNDANT REPRESENTATIONS
Alfred M. Bruckstein; The Computer-Science Department
Michael Elad; The Computer-Science Department
Michael Zibulevsky; The Computer-Science Department

SPTM-L5: Compressed Sensing II

Session Type: Lecture
Time: Friday, April 4, 09:30 - 11:30
Location: Lecture Room 4

SPTM-L5.1: COMPRESSED SENSING WITH SEQUENTIAL OBSERVATIONS
Dmitry Malioutov; MIT
Sujay Sanghavi; MIT
Alan Willsky; MIT

SPTM-L5.2: RECONSTRUCTING SPARSE SIGNALS FROM THEIR ZERO CROSSINGS
Petros Boufounos; Rice
Richard Baraniuk; Rice

SPTM-L5.3: SPECTRUM-BLIND RECONSTRUCTION OF MULTI-BAND SIGNALS
Moshe Mishali; Technion
Yonina Eldar; Technion

SPTM-L5.4: FAST COMPRESSIVE SAMPLING WITH STRUCTURALLY RANDOM MATRICES
Thong Do; The Johns Hopkins University
Trac Tran; The Johns Hopkins University
Gan Lu; University of Liverpool

SPTM-L5.5: SPARSE RECONSTRUCTION BY SEPARABLE APPROXIMATION
Stephen Wright; University of Wisconsin
Robert Nowak; University of Wisconsin
Mário Figueiredo; Instituto Superior Técnico

SPTM-L5.6: COMPRESSED SENSING - PROBABILISTIC ANALYSIS OF A NULL-SPACE CHARACTERIZATION
Mihailo Stojnic; California Institute of Technology
Weiyu Xu; California Institute of Technology
Babak Hassibi; California Institute of Technology

SPTM-P11: Compressed Sensing III

Session Type: Poster
Time: Friday, April 4, 13:00 - 15:00
Location: Poster Area 5

SPTM-P11.1: FUNDAMENTAL PERFORMANCE BOUNDS FOR A COMPRESSIVE SAMPLING SYSTEM
Anna Gilbert; University of Michigan
Martin Strauss; University of Michigan

SPTM-P11.2: COMPRESSED SIGNAL RECONSTRUCTION USING THE CORRENTROPY INDUCED METRIC
Sohan Seth; University of Florida
Jose C. Principe; University of Florida

SPTM-P11.3: APPROXIMATE LOWER BOUNDS FOR RATE-DISTORTION IN COMPRESSIVE SENSING SYSTEMS
Bernard Mulgrew; The University of Edinburgh
Michael Davies; The University of Edinburgh

SPTM-P11.4: EXPLICIT MEASUREMENTS WITH ALMOST OPTIMAL THRESHOLDS FOR COMPRESSED SENSING
Farzad Parvaresh; California Institute of Technology
Babak Hassibi; California Institute of Technology

SPTM-P11.5: COMPRESSED SENSING – A LOOK BEYOND LINEAR PROGRAMMING
Christian R. Berger; University of Connecticut
Javier Areta; University of Connecticut
Krishna Pattipati; University of Connecticut
Peter Willett; University of Connecticut

SPTM-P11.6: MIXED-SIGNAL PARALLEL COMPRESSED SENSING AND RECEPTION FOR COGNITIVE RADIO
Zhuizhuan Yu; Texas A&M University
Sebastian Hoyos; Texas A&M University
Brian M. Sadler; Army Research Laboratory

SPTM-P11.7: COMPRESSIVE SENSING AND WAVEFORM DESIGN FOR THE IDENTIFICATION OF LINEAR TIME-VARYING SYSTEMS
Jun Zhang; Arizona State University
Antonia Papandreou-Suppappola; Arizona State University

SPTM-P11.8: ITERATIVELY REWEIGHTED ALGORITHMS FOR COMPRESSIVE SENSING
Rick Chartrand; Los Alamos National Laboratory
Wotao Yin; Rice University

SPTM-P11.9: SUBSPACE COMPRESSIVE DETECTION FOR SPARSE SIGNALS
Zhongmin Wang; University of Delaware
Gonzalo R. Arce; University of Delaware
Brian M. Sadler; Army Research Laboratory

SPTM-P11.10: AVERAGE CASE ANALYSIS OF SPARSE RECOVERY WITH THRESHOLDING: NEW BOUNDS BASED ON AVERAGE DICTIONARY COHERENCE
Mohammad Golbabaee; Ecole Polytechnique Federale de Lausanne
Pierre Vandergheynst; Ecole Polytechnique Federale de Lausanne

SPTM-P11.11: COMPLEX-VALUED SPARSE REPRESENTATION BASED ON SMOOTHED L0 NORM
G. Hosein Mohimani; Electrical Engineering Department, Sharif University of Technology
Massoud Babaie-Zadeh; Electrical Engineering Department, Sharif University of Technology
Christian Jutten; Laboratoire des Images et des Signaux (LIS), Institut National Polytechnique de Grenoble (INPG)

SPTM-P11.12: STABLE SPARSE APPROXIMATIONS VIA NONCONVEX OPTIMIZATION
Rayan Saab; The University of British Columbia
Rick Chartrand; Los Alamos National Laboratory
Ozgur Yilmaz; The University of British Columbia


Finally, there is another talk, but not in the Compressed Sensing sessions:
Session: SPCOM-P2: Channel Modeling and Estimation
Location: Poster Area 2
Time: Tuesday, April 1, 10:30 - 12:30
Presentation: Poster
Topic: Signal Processing for Communications and Networking: Signal Transmission and Reception
Title: A COMPRESSED SENSING TECHNIQUE FOR OFDM CHANNEL ESTIMATION IN MOBILE ENVIRONMENTS: EXPLOITING CHANNEL SPARSITY FOR REDUCING PILOTS

Authors: Georg Tauboeck, Franz Hlawatsch
Abstract: We consider the estimation of doubly selective wireless channels within pulse-shaping multicarrier systems (which include OFDM systems as a special case). A new channel estimation technique using the recent methodology of compressed sensing (CS) is proposed. CS-based channel estimation exploits a channel’s delay-Doppler sparsity to reduce the number of pilots and, hence, increase spectral efficiency. Simulation results demonstrate a significant reduction of the number of pilots relative to least-squares channel estimation.


and two panels might be of relevance:
Tuesday, April 1

PANEL-1: Compressed Sensing, Sparse Signal Processing, Rate of Innovation, Real Field Coding, and Irregular Sampling
Organized by Farokh Marvasti

and

Wednesday, April 2

PANEL-2: Convex Optimization Theory and Practice in Signal Processing
Organized by Yonina Eldar

Compressed Sensing: Links, ITA'08 at UCSD and EUSIPCO 2008 in Lausanne

I have been told by one reader how hard it was to find this blog when searching on the subject of compressed sensing. I checked with Google and other sites and found that there is indeed a small issue. It looks as though there is much spam on subjects that are very new, and specifically an inability for Google to reference the subject through the labeling provided by the Blogger/Google system. Some of the difficulty may have to do with the fact that people tend to link to a specific entry rather than to the full list of entries. I think one of the ways to solve this problem is for whoever wants to link to this blog on the subject of Compressed Sensing, Compressive Sampling or Compressive Sensing to link to the address below:


I also use
http://nuit-blanche.blogspot.com/search/label/compressed%20sensing

so there is no need to change anything on the websites that already have that address. These addresses have the distinct appeal of removing all my other ramblings not related to the subject :-)

Jort mentions that the 2008 European Signal Processing Conference (EUSIPCO-2008), taking place in Lausanne, Switzerland, on August 25-29, will feature a special session on Sparsity in Signal Processing organized by Holger Rauhut, Georg Tauboeck, and Franz Hlawatsch. The deadline for submission is February 8th.

Another reader pointed out that several talks will be given at Information Theory and Applications '08 at UCSD that we have not yet seen in the Rice Repository. I am going to list the abstracts here; I would be very interested in seeing the preprints whenever they become available (there is also a post-workshop course taught by some of the leading people in the field).
Compressed sensing and linear codes over real numbers by Fan Zhang and Henry Pfister (Henry, I understand now why the preprint won't be available before the end of the month :-)).

Compressed sensing (CS) is a new area of signal processing and statistics that focuses on signal reconstruction from a small number of linear (e.g., dot product) measurements. In this paper, we analyze CS using tools from coding theory because CS can also be viewed as a syndrome-based source coding of sparse vectors using linear codes over real numbers. While coding theory does not typically deal with codes over real numbers, many results can be modified to apply in this case. In particular, there is a very close relationship between CS and error-correcting codes over large discrete alphabets.

Exploiting recent advances in capacity-approaching codes, we propose new approaches for CS of exactly sparse vectors. These methods are based on irregular low-density parity-check codes and result in provable oversampling thresholds with reconstruction complexities that are linear in the signal dimension $n$. Reconstruction succeeds with high probability (as $n\rightarrow\infty$) using a random signal and measurement model. We also discuss extensions to deterministic signal models and "approximately sparse" signals.


Algorithms and Bounds for Sensing Capacity and Compressed Sensing by M. Zhao, S. Aeron, V. Saligrama

We consider the problem of recovering sparse phenomena from projections of noisy data, a topic of interest in compressed sensing. We consider a notion of Sensing Capacity which is defined as the supremum of the ratio of the number of signal dimensions that can be identified per projection. Sensing Capacity is a function of sensing diversity, SNR, sensed environment (sparsity) as well as desired distortion up to which the sensed phenomena must be reconstructed. In this paper we will focus on the importance of SNR on sensing capacity. First, we prove algorithm independent results, i.e. find necessary and sufficient conditions on the number of projections required for reconstruction to given fidelity using information theoretic inequalities. Next, we present linear programming (LP) methods for recovering sparse signals in the presence of noise. The proposed algorithm does not require tuning parameters in contrast to other algorithms such as Lasso, which use tuning as a means of incorporating prior knowledge of noise statistics. We show that $SNR=O(\log n)$ is a necessary and sufficient condition for perfect recovery of discrete signals and exact support of continuous signals.
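
For readers wondering what the linear programming step looks like in practice, here is a minimal noiseless sketch of LP-based sparse recovery (basis pursuit recast as a linear program). It illustrates the general technique rather than the authors' specific tuning-free algorithm, and it uses scipy's off-the-shelf linprog solver; the problem sizes are mine.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, k = 128, 40, 5    # signal length, number of projections, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian projections
y = A @ x                                      # noiseless measurements

# Basis pursuit, min ||x||_1 s.t. Ax = y, as a linear program:
# write x = u - v with u, v >= 0 and minimize sum(u) + sum(v).
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]

print("l2 recovery error:", np.linalg.norm(x_hat - x))
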
Integer compressed sensing via superimposed coding by Wei Dai and Olgica Milenkovic.

We introduce a new family of codes, termed weighted Euclidean superimposed codes (WESCs). This family generalizes the class of Euclidean superimposed codes, used in multiuser identification systems. WESCs allow for deterministic discrimination of bounded, integer-valued linear combinations of real-valued codewords, and can therefore also be seen as a specialization of compressed sensing schemes. We first present lower and upper bounds on the largest size of a WESC, and show how to use classical coding-theoretic and new compressed sensing analytical tools to devise low-complexity decoding algorithms for these codes. Then we study sparse WESCs, in which the number of nonzero coordinates of codewords is significantly smaller than their length. Our results include a lower bound on the largest rate of sparse WESCs and a deterministic construction based on DeVore's method.


Combining geometry and combinatorics: A unified approach to sparse signal recovery by A.C. Gilbert, P. Indyk, H. Karloff, M.J. Strauss

There are two main algorithmic approaches to sparse signal recovery: geometric and combinatorial. The geometric approach starts with a geometric constraint on the measurement matrix $\Phi$ and then uses linear programming to decode information about $x$ from $\Phi x$. The combinatorial approach constructs $\Phi$ and a combinatorial decoding algorithm to match. We present a unified approach to these two classes of sparse signal recovery algorithms.

The unifying elements are the adjacency matrices of high-quality {\em unbalanced expanders}. We generalize the notion of {\em restricted isometry property} (RIP), crucial to compressed sensing results for signal recovery, from the Euclidean norm to the $\ell_p$ norm for $p > 1$, and then show that unbalanced expanders are essentially equivalent to RIP-$p$ matrices. From known deterministic constructions for such matrices, we obtain new {\em deterministic} measurement matrix constructions and algorithms for signal recovery which, compared to previous deterministic algorithms, are superior in either the number of measurements or in noise tolerance.

Bayesian matching pursuit by Phil Schniter and Lee C. Potter.

We propose a Bayesian approach to the compressive sensing problem. In our formulation, the measurements are modeled as a known linear combination of the sparse coefficients plus additive white Gaussian noise of known variance, and the sparse coefficients are modeled as independent Gaussian random variables whose unknown means and variances are drawn independently from a finite set according to known priors. So that the coefficient vector is sparse, the most a priori probable {mean, variance} combination can be set to {0, 0}. Leveraging the discrete nature of the Gaussian mixture parameters, we propose an efficient tree-search which finds the mixture vectors that account for the dominant posterior probabilities. The MMSE estimate of the sparse-coefficient vector then follows directly from an average over these dominant posteriors. A key feature of our tree-search is a fast metric update which operates on the "matched filter outputs" (i.e., the inner products between the measurement vector and the columns of the combining matrix), hence the name "Fast Bayesian Matching Pursuit."
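
To make the signal model concrete, here is a small sketch in my own notation (not the authors' code) that generates data from the Gaussian-mixture prior described above and computes the "matched filter outputs" that their fast tree-search metric operates on; the dimensions and the particular {mean, variance} set are assumptions of mine.

import numpy as np

rng = np.random.default_rng(2)
n, m = 64, 24
sigma = 0.05   # known noise standard deviation

# Finite set of {mean, variance} pairs with known priors; putting most of the
# prior mass on {0, 0} makes the coefficient vector sparse a priori.
params = [(0.0, 0.0), (0.0, 1.0)]
prior = [0.9, 0.1]

choice = rng.choice(len(params), size=n, p=prior)
mean = np.array([params[c][0] for c in choice])
var = np.array([params[c][1] for c in choice])
x = mean + np.sqrt(var) * rng.standard_normal(n)   # exactly zero where variance is zero

A = rng.standard_normal((m, n)) / np.sqrt(m)       # known combining matrix
y = A @ x + sigma * rng.standard_normal(m)         # measurements

# The "matched filter outputs": inner products between the measurement
# vector and the columns of the combining matrix.
mf = A.T @ y
print("largest matched-filter magnitudes at indices:", np.argsort(-np.abs(mf))[:5])
print("true nonzero indices:", np.flatnonzero(x))
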
Compressive learning and inference by Richard Baraniuk, Chinmay Hegde, Marco Duarte, Mark Davenport, and Michael Wakin

Compressive Sensing (CS) is a new approach to data acquisition in which analog signals are digitized for processing not via uniform sampling but via measurements using more general, even random, test functions. In contrast with conventional wisdom, the new theory asserts that one can combine "low-rate sampling" with an optimization-based reconstruction process for efficient and accurate signal acquisition. While the growing CS literature has focused on reconstructing signals, in many applications, we are ultimately interested only in making inferences about the data. Examples include detecting tumors in medical images, classifying the type of vehicle in airborne surveillance, or estimating the trajectory of an object in a video sequence. In this talk we will review our recent work on new algorithms for compressive learning and inference that enjoy the same general benefits as CS. Specific techniques include compressive versions of the matched filter (dubbed the "smashed filter"), intrinsic dimension estimation for point clouds, and manifold learning algorithms. In our techniques, the number of measurements required is linear in the complexity of the data (sparsity in the case of a basis representation or dimension in the case of a manifold model) and only logarithmic in the ambient dimension.
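
As a rough illustration of the "smashed filter" idea, here is a toy sketch (my own example, not the authors' implementation) that decides which of a few known templates produced a set of compressive measurements, by correlating the measurements directly with the compressed templates and never reconstructing the signal. The template shapes and dimensions are assumptions.

import numpy as np

rng = np.random.default_rng(3)
n, m = 512, 64   # ambient dimension, number of compressive measurements

def template(offset, n=512, width=64):
    # A known signature: a Hanning pulse placed at a given offset.
    t = np.zeros(n)
    t[offset:offset + width] = np.hanning(width)
    return t

templates = [template(100 * i) for i in range(4)]

Phi = rng.standard_normal((m, n)) / np.sqrt(m)     # random measurement matrix

true_class = 2
y = Phi @ templates[true_class] + 0.01 * rng.standard_normal(m)

# Matched filtering carried out in the measurement domain: correlate y with
# the compressed templates and pick the best match; nothing is reconstructed.
scores = [float(y @ (Phi @ t)) for t in templates]
print("decided class:", int(np.argmax(scores)), " true class:", true_class)
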
Compressed channel sensing by R. Nowak, W. Bajwa, and J. Haupt

Channel sensing and estimation is an important component of modern wireless communication systems, and it is a crucial element of emerging systems such as cognitive radio. Wireless channels are multidimensional; delay, Doppler, and spatial diversity provides rich signaling environments that can be exploited to enhance communication capabilities. The identification of such channels is usually carried out by fully exciting or illuminating all dimensions and then applying linear estimation techniques. For high-dimensional channels, this is prohibitive and often ineffective. Linear estimation methods are ineffective because they ignore a key feature of such channels. Because of their high-dimensionality and the physical causes underlying them, wireless channels are often quite sparse, with a small number of dominant modes characterizing their structure. Estimation methods that do not capitalize on this sparsity do not perform well. This paper addresses channel sensing and estimation from a new perspective that exploits the inherent sparsity in high-dimensional wireless communication channels. Adapting ideas from the theory of compressed sensing, we make two key observations: 1) sparse channels can be identified from a small number of properly chosen excitations or illuminations, and exhaustive probing is unnecessary; 2) nonlinear, sparse estimation methods such as the Lasso yield superior channel estimates compared to linear techniques. One of the especially novel mathematical aspects of our work is that we show that Toeplitz-structured sensing matrices, which naturally arise when pseudorandom excitations are employed, are very well suited to the identification of sparse channels.
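
Here is a small sketch of the two observations above (the parameter choices and the use of scikit-learn's Lasso solver are mine, not the paper's): a Toeplitz sensing matrix arises naturally from a pseudorandom probing sequence, and a Lasso fit recovers the sparse channel taps from a short output record.

import numpy as np
from scipy.linalg import toeplitz
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
L, K, M = 100, 4, 40   # channel length, nonzero taps, output samples kept

h = np.zeros(L)
h[rng.choice(L, K, replace=False)] = rng.standard_normal(K)

# Pseudorandom (+/-1) probing sequence; keeping M fully-overlapped samples of
# the convolution probe * h gives y = T h with T an M x L Toeplitz matrix.
probe = rng.choice([-1.0, 1.0], size=L + M - 1)
T = toeplitz(probe[L - 1:], probe[L - 1::-1])
y = T @ h + 0.01 * rng.standard_normal(M)

# Nonlinear sparse estimation via the Lasso (scikit-learn's solver).
h_hat = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000).fit(T, y).coef_

print("true taps:     ", np.sort(np.flatnonzero(h)))
print("estimated taps:", np.flatnonzero(np.abs(h_hat) > 0.05))
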
Iterative signal recovery from incomplete and inaccurate measurements by J. Tropp, D. Needell, and R. Vershynin

Compressive Sampling (CoSa) offers a new paradigm for acquiring signals that are compressible with respect to an orthobasis. The major algorithmic challenge in CoSa is to approximate a compressible signal from noisy samples. Until recently, all provably correct reconstruction techniques have relied on large-scale optimization, which tends to be computationally burdensome.

This talk describes a new iterative, greedy recovery algorithm, called CoSaMP, that delivers the same guarantees as the best optimization-based approaches. Moreover, this algorithm offers rigorous bounds on computational cost and storage. It is likely to be extremely efficient for practical problems because it requires only matrix-vector multiplies with the sampling matrix. For some cases of interest, the running time is just O(N*log^2(N)), where N is the length of the signal.
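
For the curious, here is a compact rendering of a CoSaMP-style iteration based on the description above (a simplified sketch, not the authors' reference implementation): each pass uses only matrix-vector products with the sampling matrix, a small least-squares solve on a candidate support, and a pruning step. The problem sizes in the usage example are mine.

import numpy as np

def cosamp(A, y, s, iters=20):
    """One possible rendering of a CoSaMP-style greedy recovery loop."""
    n = A.shape[1]
    x = np.zeros(n)
    residual = y.copy()
    for _ in range(iters):
        proxy = A.T @ residual                          # proxy from a matrix-vector product
        omega = np.argsort(-np.abs(proxy))[:2 * s]      # 2s largest proxy entries
        support = np.union1d(omega, np.flatnonzero(x))  # merge with the current support
        b, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        idx = np.argsort(-np.abs(b))[:s]                # prune back to s entries
        x = np.zeros(n)
        x[support[idx]] = b[idx]
        residual = y - A @ x
        if np.linalg.norm(residual) < 1e-8:
            break
    return x

# Tiny usage example on a random sparse recovery problem.
rng = np.random.default_rng(5)
n, m, s = 256, 80, 8
x0 = np.zeros(n)
x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n)) / np.sqrt(m)
print("max recovery error:", np.abs(cosamp(A, A @ x0, s) - x0).max())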


Credit: NASA/JPL, Mars Rover Spirit on Sol 1002, or about 417 sols ago (one sol, a Martian day, is about 25 Earth hours).
