Saturday, April 17, 2010

Around the blogs in 80 hours (part deux) and a question on Random Matrix Theory

Here is an addendum to yesterday's series of blog entries. The first entry is from Tanya Khovanova, who is interested in the determination of a bound; my question is: can it be framed in a compressed sensing/group testing manner (as Marc Bernot did a while back in CS: Ain't the Power of Interwebs Great or What ?)? Here is her entry:
David Brady in
Bob Sturm in

Does somebody have a nice and simple explanation of how Compressed Sensing fits into the Random Matrix Theory results? In particular, I am interested in figuring out how the noisy Donoho-Tanner phase transition result [1][2] fits with the results presented in:
  • Raj Rao Nadakuditi and Alan Edelman, Sample eigenvalue based detection of high-dimensional signals in white noise using relatively few samples. The abstract reads: The detection and estimation of signals in noisy, limited data is a problem of interest to many scientific and engineering communities. We present a mathematically justifiable, computationally simple, sample-eigenvalue-based procedure for estimating the number of high-dimensional signals in white noise using relatively few samples. The main motivation for considering a sample-eigenvalue-based scheme is the computational simplicity and the robustness to eigenvector modelling errors which can adversely impact the performance of estimators that exploit information in the sample eigenvectors. There is, however, a price we pay by discarding the information in the sample eigenvectors; we highlight a fundamental asymptotic limit of sample-eigenvalue-based detection of weak or closely spaced high-dimensional signals from a limited sample size. This motivates our heuristic definition of the effective number of identifiable signals which is equal to the number of “signal” eigenvalues of the population covariance matrix which exceed the noise variance by a factor strictly greater than 1 + sqrt(Dimensionality of the system / Sample size). The fundamental asymptotic limit brings into sharp focus why, when there are too few samples available so that the effective number of signals is less than the actual number of signals, underestimation of the model order is unavoidable (in an asymptotic sense) when using any sample-eigenvalue-based detection scheme, including the one proposed herein. The analysis reveals why adding more sensors can only exacerbate the situation. Numerical simulations are used to demonstrate that the proposed estimator, like Wax and Kailath’s MDL-based estimator, consistently estimates the true number of signals in the dimension fixed, large sample size limit and the effective number of identifiable signals, unlike Wax and Kailath’s MDL-based estimator, in the large dimension, (relatively) large sample size limit.
  • Raj Rao Nadakuditi and Jack Silverstein, Fundamental limit of sample eigenvalue based detection of signals in colored noise using relatively few samples.
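The detection threshold in the Nadakuditi-Edelman abstract can be checked numerically. Here is a minimal sketch (my own illustration, not code from the papers), assuming unit noise variance and a single rank-one "signal": a population eigenvalue below 1 + sqrt(p/n) hides inside the Marchenko-Pastur bulk of the sample spectrum, while one above it separates.

```python
import numpy as np

rng = np.random.default_rng(0)

def largest_sample_eigenvalue(signal_strength, p, n, rng):
    """Largest eigenvalue of the sample covariance of n samples drawn from
    N(0, Sigma), where Sigma = I + (signal_strength - 1) * u u^T (unit noise)."""
    X = rng.standard_normal((n, p))
    X[:, 0] *= np.sqrt(signal_strength)   # inject the spike along u = e_1
    S = X.T @ X / n                       # sample covariance matrix
    return np.linalg.eigvalsh(S)[-1]      # eigvalsh returns ascending order

p, n = 200, 400
threshold = 1 + np.sqrt(p / n)            # detectability threshold, ~1.71 here
bulk_edge = (1 + np.sqrt(p / n)) ** 2     # Marchenko-Pastur bulk edge

weak   = largest_sample_eigenvalue(1.2, p, n, rng)   # spike below threshold
strong = largest_sample_eigenvalue(4.0, p, n, rng)   # spike above threshold

# A sub-threshold spike is swallowed by the bulk; a super-threshold
# spike produces a sample eigenvalue that clearly exits the bulk.
print(f"bulk edge {bulk_edge:.2f}, weak spike {weak:.2f}, strong spike {strong:.2f}")
```

This is the "fundamental asymptotic limit" of the abstract in miniature: the weak spike (1.2 < threshold) is indistinguishable from pure noise at this sample size, no matter which sample-eigenvalue-based detector is used.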
My take is that it seems to provide a result similar to the one we have with respect to SNR in compressed sensing: as soon as there is noise - which is, in effect, coherent with the projections used in compressed sensing - you are bound to have a recovery problem for the signal. That recovery problem pushes the Donoho-Tanner phase transition down, and I wonder whether Random Matrix Theory has an insight/asymptotic to provide there. RMT also seems to be a non-constructive theory, in that it says what is possible and what is not, whereas in CS we actually produce a solution. Also, Random Matrix Theory does not seem to make a connection to sparsity; is that because it is implied in some fashion? I don't know. Do you ?
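To make that noise effect concrete, here is a small self-contained experiment (my own sketch, not from the references above; greedy Orthogonal Matching Pursuit stands in for l1 minimization purely to keep the code short): the same sparse signal that is recovered essentially exactly from noiseless Gaussian measurements picks up a substantial error once measurement noise is added.

```python
import numpy as np

rng = np.random.default_rng(1)

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y ~ A x."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

m, N, k = 80, 200, 5                             # measurements, dimension, sparsity
A = rng.standard_normal((m, N)) / np.sqrt(m)     # Gaussian sensing matrix
x_true = np.zeros(N)
idx = rng.choice(N, size=k, replace=False)
x_true[idx] = rng.choice([-1.0, 1.0], size=k) * (1.0 + rng.random(k))

errors = []
for sigma in (0.0, 0.5):                         # noiseless, then noisy measurements
    y = A @ x_true + sigma * rng.standard_normal(m)
    errors.append(np.linalg.norm(omp(A, y, k) - x_true))

print(f"noiseless error {errors[0]:.2e}, noisy error {errors[1]:.2f}")
```

The noiseless case sits comfortably inside the Donoho-Tanner success region for this (m/N, k/m) pair; with noise, the reconstruction error no longer vanishes, which is the degradation of the phase transition referred to above.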
