Saturday, November 14, 2009

CS: Seeking CS data where the signal prior is not sparse or the noise is non-Gaussian

Danny Bickson, a reader of this blog, asks the following interesting question:
Does anyone have/know of measurement data which captures signals with either non-Gaussian noise, skewed noise, a skewed signal prior, or a non-unimodal signal (a signal whose distribution has several peaks, not just one at zero)? Most of the work until now assumes a sparse signal, so a Laplacian prior captures sparsity very well.
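As a side note on Danny's last remark (my own illustration, not part of his question): with a Gaussian noise model and an i.i.d. Laplacian prior on the signal entries, the MAP estimate is exactly an l1-regularized least-squares (LASSO / basis pursuit denoising) problem, which is why the Laplacian prior is the usual Bayesian reading of sparsity. A minimal sketch of the resulting objective, with made-up variable names:

```python
import numpy as np

def neg_log_posterior(x, A, y, sigma, b):
    """Negative log-posterior (up to a constant) for y = A x + Gaussian noise
    of standard deviation sigma, with an i.i.d. Laplacian prior of scale b on x:
        ||y - A x||^2 / (2 sigma^2) + ||x||_1 / b,
    i.e. exactly a LASSO / basis-pursuit-denoising objective."""
    return np.sum((y - A @ x) ** 2) / (2.0 * sigma ** 2) + np.sum(np.abs(x)) / b
```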

I initially replied that an answer could include most datasets that follow power laws (and are thus not strictly sparse), such as natural images, but this is obviously not what Danny is after. There are also datasets coming out of hyperspectral work, where photon counting follows a Poisson distribution (see for instance: Performance bounds on compressed sensing with Poisson noise by Rebecca Willett and Maxim Raginsky). This happens not just in hyperspectral work, as other types of radiation (radioactive decay, for instance) would also fall in that category, but I know of no CS radiation sensing system other than those for photons so far. In a different direction, there are other kinds of noise as well: in General Deviants: An Analysis of Perturbations in Compressed Sensing by Matthew A. Herman and Thomas Strohmer, the authors discuss a multiplicative noise on the measurement matrix and how it relates to the physics studied in different areas:

....It is important to consider this kind of noise since it can account for precision errors when applications call for physically implementing the measurement matrix A in a sensor. In other CS scenarios, such as when A represents a system model, E can absorb errors in assumptions made about the transmission channel. This can be realized in radar [7], remote sensing [8], telecommunications, source separation [5], [6], and countless other problems....
as Matt later explained here:
It is important to consider perturbations to the measurement matrix A since the physical implementation of sensors is never perfect in the real world (thus the matrix E can represent precision errors or other non-ideal phenomena). Viewed in a different way, the matrix A can also model a channel that a signal passes through such as in radar, telecommunications, or in source separation problems. The models of these types of channels always involve assumptions and/or approximations of their physical characteristics. In that way, the matrix E can absorb errors in these assumptions. Factors such as these must be accounted for in real-world devices.
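To make these two settings concrete, here is a minimal sketch (my own, with arbitrary sizes and noise levels, not code from either paper) of what the measurements look like with signal-dependent Poisson counting noise on one hand, and with a multiplicative perturbation E of the sensing matrix, y = (A + E)x + e, on the other:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8   # signal length, number of measurements, sparsity

# A k-sparse, nonnegative test signal (nonnegative so that photon counts make sense)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 5.0, k)

# --- Poisson (photon-counting) noise: signal-dependent, not additive Gaussian ---
A_mask = rng.integers(0, 2, size=(m, n)).astype(float)  # nonnegative 0/1 sensing matrix
photon_budget = 50.0
y_poisson = rng.poisson(photon_budget * A_mask @ x)

# --- Multiplicative perturbation of the sensing matrix: y = (A + E)x + e ---
A = rng.standard_normal((m, n)) / np.sqrt(m)         # nominal matrix known to the decoder
E = 0.01 * rng.standard_normal((m, n)) / np.sqrt(m)  # unknown perturbation (precision errors, channel mismatch)
e = 0.01 * rng.standard_normal(m)                    # the usual additive noise
y_perturbed = (A + E) @ x + e                        # what the sensor actually delivers
```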

Finally, the only bimodal priors I have ever seen were in sensorimotor studies [2], where they were used to see how humans could learn them and how sleep was essential in that learning process:
It could be that subjects need a consolidation period to adequately learn the distribution. Such improvements in learning contingent upon sleep have also been observed in visual learning [7].
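For readers wondering what such a non-unimodal signal prior could look like, here is a small sketch (mine, only loosely inspired by the two-peak distributions used in [2]) that draws signal entries from a two-component Gaussian mixture whose modes sit away from zero:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10000

# Two-component Gaussian mixture with peaks at -2 and +2 (none at zero),
# i.e. a bimodal prior of the kind Danny is asking about.
component = rng.integers(0, 2, size=n)
samples = np.where(component == 0,
                   rng.normal(-2.0, 0.3, size=n),
                   rng.normal(+2.0, 0.3, size=n))

hist, edges = np.histogram(samples, bins=50)  # the histogram shows two peaks, neither at zero
```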

But I am not sure these examples really fit what Danny was asking for, and beyond them I am a little bit at a loss. Can any expert help?




While we are on the subject, Aaron Clauset mentioned that his paper on power laws [1] has finally been accepted in SIAM Review. I note the following from his blog entry:
Here's a brief summary of the 24 data sets we looked at, and our conclusions as to how much statistical support there is in the data for them to follow a power-law distribution:

Good:
frequency of words (Zipf's law)

Moderate:
frequency of bird sightings
size of blackouts
book sales
population of US cities
size of religions
severity of inter-state wars
number of citations
papers authored
protein-interaction degree distribution
severity of terrorist attacks

With an exponential cut-off:
size of forest fires
intensity of solar flares
intensity of earthquakes (Gutenberg-Richter law)
popularity of surnames
number of web hits
number of web links, with cut-off
Internet (AS) degree distribution
number of phone calls
size of email address book
number of species per genus

None:
HTTP session sizes
wealth
metabolite degree distribution

One should note that while power laws are interesting for spotting phenomena with potentially sparse sets (a few large-probability elements dominate the counting process, while the many low-probability elements are comparable to noise under that same counting process), phenomena that do not follow power laws can evidently produce populations of sparse sets as well. In the end, we are left with the question of whether a CS counting process would be more appropriate than a more conventional one.
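For reference, the maximum-likelihood exponent estimate used in [1] for continuous data above a given x_min is alpha_hat = 1 + n / sum_i ln(x_i / x_min). Here is a minimal sketch of that estimator, leaving aside the x_min selection and the goodness-of-fit machinery that the paper is really about:

```python
import numpy as np

def powerlaw_alpha(x, xmin):
    """MLE of the power-law exponent for continuous data above xmin
    (Clauset, Shalizi & Newman)."""
    x = np.asarray(x, dtype=float)
    tail = x[x >= xmin]
    return 1.0 + tail.size / np.sum(np.log(tail / xmin))

# Sanity check on synthetic data drawn from p(x) ~ x^(-2.5), x >= 1,
# via inverse-transform sampling.
rng = np.random.default_rng(2)
u = rng.uniform(size=100_000)
x = (1.0 - u) ** (-1.0 / (2.5 - 1.0))
print(powerlaw_alpha(x, xmin=1.0))  # should come out close to 2.5
```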


[1] Power-law distributions in empirical data by Aaron Clauset, Cosma Rohilla Shalizi and M. E. J. Newman
[2] Körding, K. P. and Wolpert, D. M. (2003) Bayesian Integration with Multimodal Priors.

3 comments:

JackD said...

About CS with non-Gaussian noise, here are some references I know of (not a comprehensive list):

* Poisson noise:

Compressed sensing performance bounds under Poisson noise
Authors: Maxim Raginsky, Zachary T. Harmany, Roummel F. Marcia, Rebecca M. Willett
http://arxiv.org/abs/0910.5146

* multiplicative noise (e.g. Gamma distribution):

M. Figueiredo and J. Bioucas-Dias, “Multiplicative noise removal using variable splitting and constrained optimization”,
Submitted to the IEEE Transactions on Image Processing, 2009.

J. Bioucas-Dias and M. Figueiredo, “Total variation restoration of speckled images using a split-Bregman algorithm”,
IEEE International Conference on Image Processing – ICIP’2009, Cairo, Egypt, 2009. Available at http://arxiv.org/abs/0903.4162

* Generalized Gaussian noise (GGD) and uniform noise (e.g. quantization; a small sketch follows this reference list):

Dequantizing Compressed Sensing: When Oversampling and Non-Gaussian Constraints Combine
Authors: Laurent Jacques, David Hammond, M. Jalal Fadili
http://arxiv.org/abs/0902.2367
Submitted in May to IEEE TSP

Distortion-Rate Functions for Quantized Compressive Sensing
Wei Dai, Hoa Vinh Pham, Olgica Milenkovic

Quantized Compressive Sensing
Wei Dai, Hoa Vinh Pham, Olgica Milenkovic
http://arxiv.org/abs/0901.0749

+ recent works on quantization and saturation by J. Laska et al.

* Impulsive noise:

Robust Sampling and Reconstruction Methods for Compressed Sensing
Rafael E. Carrillo, Kenneth E. Barner and Tuncer C. Aysal
ICASSP'09 and the special issue on CS in the IEEE Journal of Selected Topics in Signal Processing
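To make the quantization case above concrete, here is a tiny sketch (my own illustration, not code from any of these references) of uniformly quantized CS measurements; the quantization error is bounded and uniform-like rather than Gaussian:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 256, 100, 8

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)

delta = 0.05                                     # quantization bin width (arbitrary)
y = A @ x
y_q = delta * np.floor(y / delta) + delta / 2.0  # uniform scalar (midpoint) quantizer
# The error y_q - y lies in (-delta/2, delta/2]: bounded and roughly uniform,
# which is the non-Gaussian constraint exploited by the dequantizing papers above.
```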

BTW, it could be interesting to create a special section for "CS under different noise distributions" in the Big Picture.

Best,
Laurent

JackD said...

And for sparse noise (sparsely corrupted measurements), the paper:

Jason Laska, Mark Davenport, and Richard Baraniuk, Exact signal recovery from sparsely corrupted measurements through the pursuit of justice. (Asilomar Conf. on Signals, Systems, and Computers, Pacific Grove, California, November 2009)

http://www-dsp.rice.edu/files/publications/conference-paper/2009/ldb-asilomar-2009.pdf
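As I understand it, the idea there is that for measurements y = Ax + e with e sparse (a few grossly corrupted entries), one can recover both x and e by running a standard l1 decoder on the augmented system y = [A I][x; e]. A minimal sketch of that formulation (my own, with arbitrary sizes, using a plain linear-programming basis pursuit):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n, m, k, s = 128, 64, 4, 3   # signal length, measurements, signal sparsity, corrupted entries

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)

e = np.zeros(m)              # sparse corruption of the measurements themselves
e[rng.choice(m, s, replace=False)] = 5.0 * rng.standard_normal(s)
y = A @ x + e

# Augmented system y = [A  I] [x; e]; solve basis pursuit min ||z||_1 s.t. B z = y
# as a linear program with the split z = u - v, u >= 0, v >= 0.
B = np.hstack([A, np.eye(m)])
N = B.shape[1]
res = linprog(np.ones(2 * N), A_eq=np.hstack([B, -B]), b_eq=y,
              bounds=(0, None), method="highs")
z = res.x[:N] - res.x[N:]
x_hat, e_hat = z[:n], z[n:]
print(np.linalg.norm(x_hat - x))  # should be tiny when recovery succeeds
```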

Igor said...

Laurent,

Thanks, yes, I ought to enlarge the Big Picture to include some of the related issues such as this one and, for instance, matrix completion.

Igor.
