Saturday, February 27, 2010

CS: Sparse Channel Separation with Random Probes, Turbo Reconstruction of Structured Sparse Signals, Imaging through turbulence using compressive coherence sensing, TCSPC

I am away for a week, and then, baam, not only do we get hit by Wired readers (I'll talk about this later) but we also get a series of really interesting papers trying to make a clearer connection to some of the issues found in building hardware. First, here is a connection to the calibration issue for the random lens imager, or the more recent linear black box discovery problem featured in MMA17. We now have a bound on the number of measurements:

s · log^5(np).

In order to understand these parameters, you need to read the full paper, Sparse Channel Separation using Random Probes by Justin Romberg and Ramesh Neelamani. The abstract reads:

This paper considers the problem of estimating the channel response (or Green's function) between multiple source-receiver pairs. Typically, the channel responses are estimated one-at-a-time: a single source sends out a known probe signal, the receiver measures the probe signal convolved with the channel response, and the responses are recovered using deconvolution. In this paper, we show that if the channel responses are sparse and the probe signals are random, then we can significantly reduce the total amount of time required to probe the channels by activating all of the sources simultaneously. With all sources activated simultaneously, the receiver measures a superposition of all the channel responses convolved with the respective probe signals. Separating this cumulative response into individual channel responses can be posed as a linear inverse problem.
We show that channel response separation is possible (and stable) even when the probing signals are relatively short in spite of the corresponding linear system of equations becoming severely underdetermined. We derive a theoretical lower bound on the length of the source signals that guarantees that this separation is possible with high probability. The bound is derived by putting the problem in the context of finding a sparse solution to an underdetermined system of equations, and then using mathematical tools from the theory of compressive sensing. Finally, we discuss some practical applications of these results, which include forward modeling for seismic imaging, channel equalization in multiple-input multiple-output communication, and increasing the field-of-view in an imaging system by using coded apertures.

Thanks Justin and Neelsh. This is very timely; we now just need to make sure we are crossing the Donoho-Tanner border in the right direction.
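To make the setup concrete, here is a small numerical sketch in Python (a toy of my own, not the authors' code or experimental setup): a few sparse channels are probed simultaneously with random ±1 sequences, the single receiver records the superposition of the circular convolutions, and separating the channels is posed as one underdetermined sparse recovery problem. I use off-the-shelf orthogonal matching pursuit here purely for convenience; the paper analyzes convex recovery.

```python
import numpy as np
from scipy.linalg import circulant
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)

# Toy setup: c sources fire random probe signals at the same time; each channel response is sparse.
c, m, s = 3, 256, 5                              # channels, probe length, nonzero taps per channel
probes = rng.choice([-1.0, 1.0], size=(c, m))    # random +/-1 probe signals
channels = np.zeros((c, m))
for j in range(c):
    channels[j, rng.choice(m, s, replace=False)] = rng.standard_normal(s)

# The receiver sees the superposition of the circular convolutions probe_j (*) h_j.
y = sum(circulant(probes[j]) @ channels[j] for j in range(c))

# Stack the convolution matrices: channel separation becomes one sparse linear inverse problem.
A = np.hstack([circulant(probes[j]) for j in range(c)])   # m x (c*m): severely underdetermined
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=c * s, fit_intercept=False).fit(A, y)
h_hat = omp.coef_.reshape(c, m)

print("relative separation error:", np.linalg.norm(h_hat - channels) / np.linalg.norm(channels))
```

With c·m unknowns and only m received samples, the separation hinges entirely on the sparsity of the channel responses, which is exactly the regime the s · log^5(np) bound speaks to.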

There is also this paper that parallels some of the Imaging With Nature concepts described earlier, as, once again, not only do we want to sense things through some random medium, but we are also striving to find the structure of that random medium at the same time. This is just fascinating: Imaging through turbulence using compressive coherence sensing by Ashwin Wagadarikar, Daniel L. Marks, Kerkil Choi, Ryoichi Horisaki, and David Brady. The abstract reads:
Previous studies have shown that the isoplanatic distortion due to turbulence and the image of a remote object may be jointly estimated from the 4D mutual intensity across an aperture. This paper shows that decompressive inference on a 2D slice of the 4D mutual intensity, as measured by a rotational shear interferometer, is sufficient for estimation of sparse objects imaged through turbulence. The 2D slice is processed using an iterative algorithm that alternates between estimating the sparse objects and estimating the turbulence-induced phase screen. This approach may enable new systems that infer object properties through turbulence without exhaustive sampling of coherence functions.

Phil Schniter also let me know of his upcoming work entitled Turbo Reconstruction of Structured Sparse Signals. The abstract reads:
This paper considers the reconstruction of structured-sparse signals from noisy linear observations. In particular, the support of the signal coefficients is parameterized by a hidden binary pattern, and a structured probabilistic prior (e.g., Markov random chain/field/tree) is assumed on the pattern. Exact inference is discussed and an approximate inference scheme, based on loopy belief propagation (BP), is proposed. The proposed scheme iterates between exploitation of the observation-structure and exploitation of the pattern-structure, and is closely related to noncoherent turbo equalization, as used in digital communication receivers. An algorithm that exploits the observation structure is then detailed based on approximate message passing ideas. The application of EXIT charts is discussed, and empirical phase transition plots are calculated for Markov-chain structured sparsity.

Thanks Phil !

Finally, I found the following article on the interwebs entitled: Single-pixel imaging using 3D scanning time-of-flight photon counting by Robert Lamb and Gerald Buller. I need to add this tech to the compressive sensing hardware page.

Monday, February 22, 2010

CS: This week's longest entry.


Here is this week's long post. I mentioned to you that I am really interested in a compressive sensing EEG system and it looks like I am not the only one. The latest case in point is Compressive sampling of EEG signals with finite rate of innovation by Kok-Kiong Poh and Pina Marziliano. The abstract reads:
Analyses of biomedical signals such as electroencephalogram and subsequent diagnoses can only be done effectively on long term recordings. Furthermore, the morphology of these signals must be preserved in the recordings. Therefore, a reliable, accurate and efficient compression and reconstruction technique is necessary to store and retrieve such data. Currently, electroencephalographic signals are obtained at Nyquist rate or higher, which introduces ’extras’ - redundancies and irrelevancies - in the signals. Existing compression methods then remove these ’extras’, thereby achieving compression. In this article, we propose a compression scheme based on a sampling theory developed for signals with a finite rate of innovation. A signal acquisition scheme which compresses electroencephalographic signals during acquisition is described. The signals can then be effectively represented by a small set of Fourier coefficients corresponding to the signals’ rate of innovation. Using the new sampling theory, we can reconstruct the original signals using this small set of Fourier coefficients. Our scheme will be presented by firstly modelling the electroencephalographic signals as signals with finite rate of innovation and then sampling them at their rate of innovation. We examined our method on 3 sets of electroencephalographic signals comprising 72 hours of recording and present results based on factors commonly used in compression literature such as compression ratio, percent root difference and root mean square error. Metrics which are of greater significance to biomedical signals such as the cross correlation, maximum error and the peak amplitude related error criteria are also measured to study the morphological relations between the original and reconstructed electroencephalographic signals. It is observed that the proposed method achieves results comparable to that of wavelet compression methods, in terms of low reconstruction errors while preserving the morphological information of the electroencephalographic signals. More importantly, it introduces a new framework to acquire electroencephalographic signals at their rate of innovation, thus entailing a less costly low-rate sampling device that does not waste precious computational resources.



Compressive nonstationary spectral estimation using parsimonious random sampling of the ambiguity function by Alexander Jung, Georg Tauböck, and Franz Hlawatsch. The abstract reads:
We propose a compressive estimator for the discrete Rihaczek spectrum (RS) of a time-frequency sparse, underspread, nonstationary random process. The new estimator uses a compressed sensing technique to achieve a reduction of the number of measurements. The measurements are randomly located samples of the ambiguity function of the observed signal. We provide a bound on the mean-square estimation error and demonstrate the performance of the estimator by means of simulation results. The proposed RS estimator can also be used for estimating the Wigner-Ville spectrum (WVS) since for an underspread process the RS and WVS are almost equal.

Information-theoretic limits on sparse signal recovery: Dense versus sparse measurement matrices by Wei Wang, Martin J. Wainwright, Kannan Ramchandran. The abstract of the technical report reads:
We study the information-theoretic limits of exactly recovering the support of a sparse signal using noisy projections defined by various classes of measurement matrices. Our analysis is high-dimensional in nature, in which the number of observations n, the ambient signal dimension p, and the signal sparsity k are all allowed to tend to infinity in a general manner. This paper makes two novel contributions. First, we provide sharper necessary conditions for exact support recovery using general (non-Gaussian) dense measurement matrices. Combined with previously known sufficient conditions, this result yields sharp characterizations of when the optimal decoder can recover a signal for various scalings of the sparsity k and sample size n, including the important special case of linear sparsity (k = Θ(p)) using a linear scaling of observations (n = Θ(p)). Our second contribution is to prove necessary conditions on the number of observations n required for asymptotically reliable recovery using a class of γ-sparsified measurement matrices, where the measurement sparsity γ(n, p, k) ∈ (0, 1] corresponds to the fraction of non-zero entries per row. Our analysis allows general scaling of the quadruplet (n, p, k, γ), and reveals three different regimes, corresponding to whether measurement sparsity has no effect, a minor effect, or a dramatic effect on the information-theoretic limits of the subset recovery problem.

Fast Bayesian Matching Pursuit: Model Uncertainty and Parameter Estimation for Sparse Linear Models by Philipp Schniter, Lee Potter, and Justin Ziniel. The abstract reads:
A low-complexity recursive procedure is presented for model selection and minimum mean squared error (MMSE) estimation in linear regression. Emphasis is given to the case of a sparse parameter vector and fewer observations than unknown parameters. A Gaussian mixture is chosen as the prior on the unknown parameter vector. The algorithm returns both a set of high posterior probability models and an approximate MMSE estimate of the parameter vector. Exact ratios of posterior probabilities serve to reveal potential ambiguity among multiple candidate solutions that are ambiguous due to observation noise or correlation among columns in the regressor matrix. Algorithm complexity is O(MNK), with M observations, N coefficients, and K nonzero coefficients. For the case that hyperparameters are unknown, an approximate maximum likelihood estimator is proposed based on the generalized expectation-maximization algorithm. Numerical simulations demonstrate estimation performance and illustrate the distinctions between MMSE estimation and maximum a posteriori probability model selection.

Video rate spectral imaging using a coded aperture snapshot spectral imager by Ashwin Wagadarikar, Nikos Pitsianis, Xiaobai Sun, and David Brady. The abstract reads:
We have previously reported on coded aperture snapshot spectral imagers (CASSI) that can capture a full frame spectral image in a snapshot. Here we describe the use of CASSI for spectral imaging of a dynamic scene at video rate. We describe significant advances in the design of the optical system, system calibration procedures and reconstruction method. The new optical system uses a double Amici prism to achieve an in-line, direct view configuration, resulting in a substantial improvement in image quality. We describe NeAREst, an algorithm for estimating the instantaneous three-dimensional spatio-spectral data cube from CASSI’s two-dimensional array of encoded and compressed measurements. We utilize CASSI’s snapshot ability to demonstrate a spectral image video of multi-colored candles with live flames captured at 30 frames per second.



From Wotao Yin, we have some new slides about Bregman methods.

Randomized matrix sparsification has proven to be a fruitful technique for producing faster algorithms in applications ranging from graph partitioning to semidefinite programming. In the decade or so of research into this technique, the focus has been--with few exceptions--on ensuring the quality of approximation in the spectral and Frobenius norms. For certain graph algorithms, however, the $(\infty,1)$ norm may be a more natural measure of performance. This paper addresses the problem of approximating a real matrix A by a sparse random matrix X with respect to several norms. It provides the first results on approximation error in the $(\infty, 1)$ and $(\infty, 2)$ norms, and it uses a result of Latala to study approximation error in the spectral norm. These bounds hold for random sparsification schemes which ensure that the entries of X are independent and average to the corresponding entries of A. Optimality of the $(\infty, 1)$ and $(\infty,2)$ error estimates is established. Concentration results for the three norms hold when the entries of X are uniformly bounded. The spectral error bound is used to predict the performance of several sparsification and quantization schemes that have appeared in the literature; the results are competitive with the performance guarantees given by earlier scheme-specific analyses.


Algorithmic linear dimension reduction in the ell-1 norm for sparse vectors (and attendant commentary), by Anna Gilbert, Martin Strauss, Joel Tropp, Roman Vershynin. The abstract reads:

We can recover approximately a sparse signal with limited noise, i.e., a vector of length d with at least d − m zeros or near-zeros, using little more than m log(d) nonadaptive linear measurements rather than the d measurements needed to recover an arbitrary signal of length d. Several research communities are interested in techniques for measuring and recovering such signals and a variety of approaches have been proposed. We focus on two important properties of such algorithms.
• Uniformity. A single measurement matrix should work simultaneously for all signals.
• Computational Efficiency. The time to recover such an m-sparse signal should be close to the obvious lower bound, m log(d/m).
To date, algorithms for signal recovery that provide a uniform measurement matrix with approximately the optimal number of measurements, such as first proposed by Donoho and his collaborators, and, separately, by Candès and Tao, are based on linear programming and require time poly(d) instead of m polylog(d). On the other hand, fast decoding algorithms to date from the Theoretical Computer Science and Database communities fail with probability at least 1/poly(d), whereas we need failure probability no more than around 1/d^m to achieve a uniform failure guarantee. This paper develops a new method for recovering m-sparse signals that is simultaneously uniform and quick. We present a reconstruction algorithm whose run time, O(m log^2(m) log^2(d)), is sublinear in the length d of the signal. The reconstruction error is within a logarithmic factor (in m) of the optimal m-term approximation error in l_1. In particular, the algorithm recovers m-sparse signals perfectly and noisy signals are recovered with polylogarithmic distortion. Our algorithm makes O(m log^2(d)) measurements, which is within a logarithmic factor of optimal. We also present a small-space implementation of the algorithm. These sketching techniques and the corresponding reconstruction algorithms provide an algorithmic dimension reduction in the l_1 norm. In particular, vectors of support m in dimension d can be linearly embedded into O(m log^2 d) dimensions with polylogarithmic distortion. We can reconstruct a vector from its low-dimensional sketch in time O(m log^2(m) log^2(d)). Furthermore, this reconstruction is stable and robust under small perturbations.


poolMC: Smart pooling of mRNA samples in microarray experiments (Supp. Material here) by Raghu Kainkaryam, Angela Bruex, Anna Gilbert, Peter Woolf & John Schiefelbein. The abstract reads:
Background: Typically, pooling of mRNA samples in microarray experiments implies mixing mRNA from several biological-replicate samples before hybridization onto a microarray chip. Here we describe an alternative smart pooling strategy in which different samples, not necessarily biological replicates, are pooled in an information theoretic efficient way. Further, each sample is tested on multiple chips, but always in pools made up of different samples. The end goal is to exploit the compressibility of microarray data to reduce the number of chips used and increase the robustness to noise in measurements.
Results: A theoretical framework to perform smart pooling of mRNA samples in microarray experiments was established and the software implementation of the pooling and decoding algorithms was developed in MATLAB. A proof-of-concept smart pooled experiment was performed using validated biological samples on commercially available gene chips.
Conclusions: The theoretical developments and experimental demonstration in this paper provide a useful starting point to investigate smart pooling of mRNA samples in microarray experiments. Important conditions for its successful implementation include linearity of measurements, sparsity in data, and large experiment size.

Sparse optimization with least-squares constraints by Michael Friedlander. The abstract reads:
The use of convex optimization for the recovery of sparse signals from incomplete or compressed data is now common practice. Motivated by the success of basis pursuit in recovering sparse vectors, new formulations have been proposed that take advantage of different types of sparsity. In this paper we propose an efficient algorithm for solving a general class of sparsifying formulations. For several common types of sparsity we provide applications, along with details on how to apply the algorithm, and experimental results.

Fast L1-Minimization Algorithms and An Application in Robust Face Recognition: A Review by Allen Yang, Arvind Ganesh, Shankar Sastry and Yi Ma. The abstract reads:
L1-minimization solves the minimum L1-norm solution to an underdetermined linear system y=Ax. It has recently received much attention, mainly motivated by the new compressive sensing theory that shows that under certain conditions an L1-minimization solution is also the sparsest solution to that system. Although classical solutions to L1-minimization have been well studied in the past, including primal-dual interior-point methods and orthogonal matching pursuit, they suffer from either expensive computational cost or insufficient estimation accuracy in many real-world, large-scale applications. In the past five years, many new algorithms have been proposed. We provide a comprehensive review of five representative approaches, namely, gradient projection, homotopy, iterative shrinkage-thresholding, proximal gradient, and alternating direction. The repository is intended to fill in a gap in the existing literature to systematically benchmark the performance of these algorithms using a consistent experimental setting. In addition, the experiment will be focused on the application of robust face recognition, where a sparse representation framework has recently been developed to recover human identities from facial images that may be affected by illumination, occlusion, and facial disguise. The paper also provides useful guidelines to practitioners working in similar fields.


There is a new workshop coming up at the Intractability center:


“Barriers in Computational Complexity II” workshop


August 30 to September 3, 2010

Continuing the program that started with the first Barriers in Computational Complexity workshop in 2009, this workshop will focus on identifying and circumventing barriers that are preventing progress in several sub-fields of Computational Complexity.

This workshop will focus on five sub-fields that were not covered in the 2009 workshop: approximation algorithms and hardness of approximation, quantum computation, cryptography, communication and information in complexity and (geo)metric algorithms.

Organizing committee:
Michael Saks (chair), Scott Aaronson, Sanjeev Arora, Paul Beame, Moses Charikar, Shafi Goldwasser, Piotr Indyk, Yuval Ishai, Robert Tarjan, Umesh Vazirani

Tentative schedule of topics:

Monday, August 30
Approximation Algorithms and Hardness of Approximation
Organizer: Moses Charikar
Tuesday, August 31
Quantum computation
Organizers: Scott Aaronson and Umesh Vazirani
Wednesday, September 1
Recent Advances and Challenges in Cryptography
Organizers: Shafi Goldwasser and Yuval Ishai
Thursday, September 2
Communication and information in complexity
Organizer: Paul Beame
Friday, September 3
(Geo)metric algorithms
Organizer: Piotr Indyk

And finally, we have two thesis offers. They were both originally in French, as I could not find an English translation for them (I am not sure what that means), so here they are, translated:

Thesis offer: Compressed sensing for handling large multimedia databases

Audiovisual and multimedia content generates voluminous data streams (audio, video, text, other associated data, etc.). Handling large databases of this type of content requires the development of efficient detection, classification and learning techniques in order to: segment the streams into homogeneous sequences; annotate them according to the words used, the language, the identity of the speaker, and more generally the type of content; index them to facilitate queries and data access, etc. In order to develop the next generations of content-based search tools, it becomes critical to reduce the complexity of the tasks associated with handling large databases of this type of content, which requires a concise description of the data that preserves the information of interest, and learning techniques capable of working directly on the compressed data.

An emerging approach to data compression is the new paradigm of "compressed sensing", which in particular makes it possible to acquire data by sampling well below the Shannon sampling rate, provided the data are compressible in a certain sense. The principle consists in exploiting the fact that the data to be processed, although high-dimensional, can be described concisely with a small set of parameters. Concretely, the data are well approximated by linear combinations of a few basis vectors called atoms, chosen from a highly redundant family called a dictionary [1]. These are known as sparse approximations. In recent years, significant progress has made it possible to understand the statistical foundations of algorithms that compute quasi-optimal sparse approximations. Under controlled conditions, performance guarantees are available for explicit algorithms of limited complexity, notably approaches based on convex optimization. Compressed sensing is a particularly promising emerging application of these results: it reduces the dimension of the data by random projection into a low-dimensional space, while preserving in the resulting projection the essential information contained in the original data. The validity of this approach follows from fine theoretical results on concentration of measure and the theory of large random matrices.

The objective of this thesis is to explore, both numerically and theoretically, the potential of compressed sensing for processing large databases. In particular, the goal is to propose, implement and evaluate learning techniques [2] that work on a strongly compressed version of a large data corpus, with the aim of recovering the essential characteristics of the corpus, such as the statistical distribution of the data composing it. The envisioned approach will combine experiments on low-dimensional synthetic data, the implementation of optimization algorithms in Matlab and then in C/C++, the development of a theoretical analysis of the problem, and the evaluation of the approach on large corpora of audiovisual data.

Bibliography:
[1] S. Mallat, Wavelet Tour of Signal Processing, 3rd edition: The Sparse Way. Academic Press, 2008.
[2] B. Schölkopf and A.J. Smola. Learning with Kernels. MIT Press, 2002.

Contact: Rémi Gribonval



Thesis offer: Sparse modeling of brain activity for brain imaging and brain-computer interfaces


Brain imaging techniques, which measure the spatial and temporal dynamics of brain activity, aim to understand how the brain works, with fundamental applications (understanding cognitive mechanisms) as well as medical ones (observing, diagnosing and treating diseases such as epilepsy) and technological ones (the development of brain-computer interfaces for gaming or for assisting the disabled). The existing acquisition modalities do not currently provide images that are well resolved both spatially and temporally, and some of them require acquisition equipment that is extremely cumbersome to deploy and costly.

Recent studies, including the thesis of Alexandre Gramfort [1], have shown the potential of sparse modeling techniques to improve resolution through the solution of high-dimensional inverse problems, using data acquired with low-cost EEG equipment. However, besides the algorithmic complexity of the optimization techniques involved, which remains high, an important obstacle to the practical use of this type of technique is its sensitivity to the choice of a sparse model, associated with a "dictionary", that is adapted to the spatio-temporal dynamics one seeks to reconstruct while allowing fast computations.

The objective of this thesis is to tackle these obstacles in order to propose, implement and experimentally validate high-resolution brain imaging techniques. In particular, the goal is to develop techniques offering a good resolution/complexity trade-off for localizing sources of activity in the brain from EEG recordings, building on recent sparse inversion algorithms whose main complexity bottlenecks will have to be identified and removed. The automatic learning of dictionaries [4] for sparse modeling will also be studied, along with its impact on the resolution of the resulting imaging.

This work will take place within the METISS team at IRISA, which has developed accurate source localization and separation techniques [5] in an audio context (sound sources corresponding to different speakers or musical instruments contributing to a recording) and has recognized expertise in sparse modeling. EEG data acquisition and processing will be carried out in particular in collaboration with the BUNRAKU team at IRISA, which has the equipment and the experience in brain-computer interfaces.

Bibliography:
[1] Alexandre Gramfort, Mapping, timing and tracking cortical activations with MEG and EEG: Methods and application to human vision, PhD thesis, ENST, 2009, http://tel.archives-ouvertes.fr/tel-00426852/fr/
[2] S. Mallat, Wavelet Tour of Signal Processing, 3rd edition: The Sparse Way. Academic Press, 2008.
[3] K. Kreutz-Delgado & al, Dictionary learning algorithms for sparse representation, Neural Comput., 2003, Vol. 15, No. 2, pp. 349--396
[4] K. Schnass & R. Gribonval, Dictionary Identification - Sparse Matrix-Factorisation via L1-Minimisation, arXiv preprint 0904.4774, April 2009.
[5] S. Arberet, R. Gribonval & F. Bimbot, A Robust Method to Count and Locate Audio Sources in a Multichannel Underdetermined Mixture, to appear in IEEE Trans. Signal Processing, 2010.

Contact: Rémi Gribonval

Thursday, February 18, 2010

CS: Crossing the Donoho-Tanner Border


I was having a discussion the other day about how one could apply compressive sensing to a specific problem featuring sparse linear black box discovery. The sparse linear black box discovery problem is the one where one wants to find all the coefficients of a matrix that is known to be sparse (but is not rank deficient, nor do we know its sparsity pattern), and one also hopes that this can be done with as few measurements as possible. We all know that a trivial solution is to evaluate this matrix by applying it to each of the canonical basis vectors, in N steps, where N is the number of columns of the matrix. But when the matrix is known to be sparse, the number of measurements can be decreased to about k*log(N) with high probability. See the MMA17 post and the attendant algorithm on how to do this.
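For readers who want to play with the idea, here is a minimal sketch (a toy stand-in, not the MMA17 algorithm itself): the unknown sparse matrix is probed with a handful of random vectors, and each row is then recovered by basis pursuit, written here as a plain linear program. The dimensions, the Gaussian probes and the row-by-row LP are all illustrative choices of mine.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Unknown sparse "black box": an N x N matrix with k nonzeros per row (toy example).
N, k, n_probes = 40, 3, 25
A = np.zeros((N, N))
for i in range(N):
    A[i, rng.choice(N, k, replace=False)] = rng.standard_normal(k)

# Probe the black box with n_probes random vectors instead of all N canonical vectors.
X = rng.standard_normal((N, n_probes))   # probe vectors, one per column
Y = A @ X                                # the observed matrix-vector products

# Each row a_i of A satisfies X.T @ a_i = Y[i, :]; recover it by basis pursuit
# (min ||a||_1 s.t. X.T a = y), written as an LP with a = u - v and u, v >= 0.
Phi = X.T
c = np.ones(2 * N)
A_eq = np.hstack([Phi, -Phi])
A_rec = np.zeros_like(A)
for i in range(N):
    res = linprog(c, A_eq=A_eq, b_eq=Y[i, :], bounds=(0, None))
    A_rec[i, :] = res.x[:N] - res.x[N:]

print("max entrywise error:", np.max(np.abs(A_rec - A)))
```

Here 25 probes stand in for the 40 canonical ones; shrinking n_probes toward k*log(N) is exactly where the phase transition discussed below starts to matter.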

Anyway, everything seems fine but sometimes you hit a rock: the matrix being sought is not highly sparse or compressible, and one wonders what to do next. The obvious solution would be to go back to performing the N measurements, as this is a safe bet and will work all the time. One could also, as I jokingly suggested, increase the size of the problem so that something that was barely sparse in one instance becomes highly sparse in the new setting. However, this suggestion is only half of the story, as can be seen in the very important paper of Jared Tanner and David Donoho entitled Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing. In that paper, the authors present their discovery of a phase diagram that seems universal in that it fits many somewhat unrelated high-dimensional problems. Perhaps the figure of most interest to us is Figure 3 of that paper. It is included below; I just added five numbers to it in order to provide some context. Let us say that we have a not-so-sparse matrix to fill in. In the figure, this starting point is labeled 1, with coordinates delta at .52 and rho at .84. In other words, the ratio of nonzeros to measurements (rho = k/n) is a high 84%, while the undersampling ratio (delta = n/N) is 52% (that means we would sample the linear black box with only 0.52N measurements). It's in the red, meaning that the use of l_1 solvers won't allow us to recover the matrix with .52N measurements.



What happens when you increase the size of your problem, i.e. increase N, as I suggested? You go from 1 to 2... not a good path; you really want to end up in the blue region, so that an l_1 solver can actually find that matrix using the algorithm featured on Monday last week. The only way to get to 3 is to increase both the size of the problem N and the number of measurements n, while hoping that the physics of the problem (k) remains the same. That way n/N stays constant while k/n decreases. You could also go to 4, with a path between those of 2 and 3; there, the number of measurements grows more slowly than it does for 3. The element to worry about is whether the new number of measurements ends up as large as the older problem size N. Finally, let us note that the area highlighted by 5 is really the canonical construction (n = N), i.e. reconstruction works all the time and one doesn't care whether the matrix is sparse.
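For the record, here is the small bit of arithmetic behind those paths, with made-up numbers chosen only so that point 1 lands near (delta, rho) = (0.52, 0.84):

```python
# Phase-diagram coordinates: delta = n/N (undersampling ratio), rho = k/n (nonzeros per measurement).
def dt_point(N, n, k):
    return n / N, k / n

N, n, k = 100, 52, 44                       # illustrative numbers for point 1
print("1:", dt_point(N, n, k))              # (0.52, ~0.85): in the red region
print("2:", dt_point(4 * N, n, k))          # growing N alone: delta collapses, rho unchanged -> worse
print("3:", dt_point(4 * N, 4 * n, k))      # growing N and n together: delta fixed, rho drops -> toward blue
print("5:", dt_point(N, N, k))              # canonical probing, n = N: always works
```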

Credit: NASA, The Cupola module was just installed yesterday on the International Space Station.

Out for a week

FYI: I'll be out for a week far away from the interwebs next week.

Wednesday, February 17, 2010

CS: Another postdoc, Spread spectrum MRI, Sparse Underwater Channel Estimation, Collaborative Filtering in a Non-Uniform World


Yves Wiaux just let me know of two items. The first one is a job for a postdoc in his lab; the text of the announcement is:

BASP NODE ANNOUNCEMENT for a POSTDOCTORAL POSITION at LTS2:

Announcement February 2010.

In the context of new funding obtained by Dr Y. Wiaux at the Swiss National Science Foundation (SNSF) for the BASP research node, a two-year postdoctoral position is available as early as April 2010, on the theme "Compressed sensing imaging techniques for radio interferometry". This position will be hosted by the Signal Processing Laboratory 2 (LTS2) of the Signal Processing Laboratory (SP Lab) in the Institute of Electrical Engineering (IEL) of EPFL.

The position is open to any dynamic and highly qualified candidate with a Ph.D. in Electrical Engineering, Physics or equivalent, and a strong background in signal processing. Competence in programming (MATLAB, C) is required. Knowledge of compressed sensing, signal processing on the sphere, or radio-interferometric imaging is a plus. The successful candidate will be in charge of the collaborations between the BASP node and the international radio astronomy community. In addition to his/her main activities, he/she will also be welcome to collaborate with other researchers of the BASP node and of the SP Lab on signal processing for magnetic resonance imaging.

Note that EPFL offers very attractive salaries.

Requests for further information, and in a second stage applications, should be sent to Dr Y. Wiaux, directly by email.


I'll add it shortly to the compressive sensing jobs page. The second item is that the following paper, reported on earlier:
"Spread spectrum for compressed sensing techniques in magnetic resonance imaging" by Y. Wiaux, G. Puy, R. Gruetter, J.-Ph. Thiran, D. Van de Ville, and P. Vandergheynst, EPFL Infoscience preprint, IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI), 2010, IEEE Signal Processing Society, in press,

has been expanded into a journal paper:

Spread spectrum for accelerated acquisition in magnetic resonance imaging by Gilles Puy, Yves Wiaux, R. Gruetter, J.-P. Thiran, D. Van De Ville, and Pierre Vandergheynst. The abstract reads:

We advocate the use of quadratic phase profiles in magnetic resonance imaging (MRI) with the aim of enhancing the achievable reconstruction quality from an incomplete k-space coverage for sparse or compressible signals, in the final perspective of accelerating the acquisition process relative to a standard complete coverage. The technique proposed amounts to the modulation of the image probed by a linear chirp. We analyze the related spread spectrum phenomenon in the context of the recent theory of compressed sensing, and we prove its effectiveness in enhancing the quality of image reconstruction, through a detailed analysis at each scale of a wavelet decomposition. We also establish the stability of this spread spectrum technique relative to noise, as well as its flexibility in terms of the chirp rate value required for optimal reconstruction. Our results rely both on theoretical considerations relative to the mutual coherence between the wavelet sparsity basis and the k-space sensing basis, as well as on extensive numerical simulations on the Shepp-Logan phantom.

The following two preprints just showed up on arxiv:

Compressed Sensing for Sparse Underwater Channel Estimation: Some Practical Considerations by Sushil Subramanian. The abstract reads:

We examine the use of a structured thresholding algorithm for sparse underwater channel estimation using compressed sensing. This method shows some improvements over standard algorithms for sparse channel estimation such as matching pursuit, iterative detection and least squares.

and Collaborative Filtering in a Non-Uniform World: Learning with the Weighted Trace Norm by Ruslan Salakhutdinov, Nathan Srebro. The abstract reads:
We show that matrix completion with trace-norm regularization can be significantly hurt when entries of the matrix are sampled non-uniformly. We introduce a weighted version of the trace-norm regularizer that works well also with non-uniform sampling. Our experimental results demonstrate that the weighted trace-norm regularization indeed yields significant gains on the (highly non-uniformly sampled) Netflix dataset.


Image credit: NASA/JPL/Space Science Institute, via the BadAstronomy blog

Tuesday, February 16, 2010

CS: Blind Compressed Sensing, some more opaque lens and a comment

Here is a new entry from arXiv where the authors want to make compressive sensing truly universal ... or blind, in Blind Compressed Sensing by Sivan Gleichman and Yonina C. Eldar. The abstract reads:
The fundamental principle underlying compressed sensing is that a signal, which is sparse under some basis representation, can be recovered from a small number of linear measurements. However, prior knowledge of the sparsity basis is essential for the recovery process. This work introduces the concept of blind compressed sensing, which avoids the need to know the sparsity basis in both the sampling and the recovery process. We suggest three possible constraints on the sparsity basis that can be added to the problem in order to make its solution unique. For each constraint we prove conditions for uniqueness, and suggest a simple method to retrieve the solution. Under the uniqueness conditions, and as long as the signals are sparse enough, we demonstrate through simulations that without knowing the sparsity basis our methods can achieve results similar to those of standard compressed sensing, which rely on prior knowledge of the sparsity basis. This offers a general sampling and reconstruction system that fits all sparse signals, regardless of the sparsity basis, under the conditions and constraints presented in this work.


Sylvain Gigan pointed out to me the latest work on opaque lenses. The authors now seem to be able to perform some sort of superresolution in:

Optimal Concentration of Light in Turbid Materials by Elbert van Putten, Ad Lagendijk, Allard Mosk. The abstract reads:

In turbid materials it is impossible to concentrate light into a focus with conventional optics. Recently it has been shown that the intensity on a dyed probe inside a turbid material can be enhanced by spatially shaping the wave front of light before it enters a turbid medium. Here we show that this enhancement is due to concentration of light energy to a spot much smaller than a wavelength. We focus light on a dyed probe sphere that is hidden under an opaque layer. The light is optimally concentrated to a focus with an area that is for certain within 68% of the smallest focal area physically possible. A comparison between the intensity enhancements of both the emission and excitation light supports the conclusion of optimal light concentration.

Finally, Dick Gordon, one of the inventors of the ART algorithm, sent me the following:
Steered microbeams and other active optics as compressive sensing
Dear Igor,
A search of Nuit Blanche archives on the words “steer” or “active” yielded surprisingly few relevant hits, except for the single pixel camera. I’ve listed a sampling of the active optics literature below. The sparse steered microbeam literature is reviewed in:

Richard Gordon (2010). Stop breast cancer now! Imagining imaging pathways towards search, destroy, cure and watchful waiting of premetastasis breast cancer [invited]. In: Breast Cancer - A Lobar Disease. T. Tot, Springer: in press.

The basic idea in applying a compressive sampling approach to these imaging problems is as follows. A detector or a radiation source/detector pair is aimed at the target. The data acquired is used to improve an estimate of the image. The image is evaluated for regions (in real or Fourier space) that are a) of interest, b) undersampled. Further data is acquired. Thus an iterative process occurs between the reconstructed image and data acquisition to improve it, in an image dependent manner. Greedy algorithm problems can be minimized by declaring any “uninteresting” region of at least temporary interest precisely for that reason, until proven indeed uninteresting. I’d like to suggest that the whole process has a lot in common with saccadic eye movements in building up understanding of a scene.

Let me give an example. A dark scene is illuminated one photon at a time, using turnstile photons that are emitted on command (Gordon, 2010). As the image is built up, a pattern recognition program starts to pick up interesting features. Now these could be real, or due to random fluctuations. More photons are directed to these features. If they are real, their image is sharpened. If not, it is flattened. This nonlinear approach of active compressive sampling (ACS) may be the key to viewing a scene with the absolute minimum number of photons. For x-ray computed tomography (CT), ACS may be the key to orders of magnitude in dose reduction. Yours, -Dick Gordon gordonr@cc.umanitoba.ca

References:
Girkin, J.M., S. Poland & A.J. Wright (2009). Adaptive optics for deeper imaging of biological samples. Curr Opin Biotechnol 20(1), 106-110.

Hubin, N. & L. Noethe (1993). Active optics, adaptive optics, and laser guide stars. Science 262(5138), 1390-1394.

Hugot, E., M. Ferrari, G.R. Lemaitre, F. Madec, S. Vives, E. Chardin, D. Le Mignant & J.G. Cuby (2009). Active optics for high-dynamic variable curvature mirrors. Opt Lett 34(19), 3009-3011.

Lemaitre, G.R., P. Montiel, P. Joulie, K. Dohlen & P. Lanzoni (2005). Active optics and modified-Rumsey wide-field telescopes: MINITRUST demonstrators with vase- and tulip-form mirrors. Appl Opt 44(34), 7322-7332.

Macmynowski, D.G. (2009). Interaction matrix uncertainty in active (and adaptive) optics. Appl Opt 48(11), 2105-2114.

Signorato, R., O. Hignette & J. Goulon (1998). Multi-segmented piezoelectric mirrors as active/adaptive optics components. J Synchrotron Radiat 5(Pt 3), 797-800.

West, S.C. (2002). Interferometric Hartmann wave-front sensing for active optics at the 6.5-m conversion of the multiple mirror telescope. Appl Opt 41(19), 3781-3789.

Zhang, Y., D. Yang & X. Cui (2004). Measuring seeing with a Shack-Hartmann wave-front sensor during an active-optics experiment. Appl Opt 43(4), 729-734.

Three observations here:
  • I am sure much can be done when steering is feasible, but what would probably be very interesting is to be able to use the surrounding medium as a way to do the focusing (see Wavefront Coding for Random Lens Imagers? for instance).
  • Adaptive compressive sensing is a subject under investigation that sometimes goes by the name of distilled sensing or simply adaptive compressive sensing. This is an area of much interest, as results seem to show that adaptive schemes require fewer measurements at the price of increased sampling complexity and attendant delays (see the small sketch after this list).
  • With regard to X-rays, one would need to be able to do some steering of the X-rays. Unless one is looking at soft X-rays, for which some coating materials can provide an equivalent of a mirror, there is not much one can do with this radiation except shield from it. More important are the lessons to be learned from 40 years of coded aperture imaging, as explained by Gerry Skinner. The discussion is very interesting but it does not fit exactly with compressive sensing. Let us note that Gerry is actively involved in gamma-ray lenses, thereby avoiding coded apertures altogether (see Coded Mask Imagers: What are they good for? The George Costanza "Do the Opposite" Sampling Scheme.)
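Since Dick's description of active compressive sampling is essentially an acquisition loop, here is a deliberately naive toy simulation of it (everything below is a made-up illustration, not Dick's proposal nor any published algorithm): photons are steered preferentially toward locations that currently look bright or undersampled, and the estimate sharpens where the features turn out to be real.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene: a few bright pixels on a dark background (intensities normalized to sum to 1).
n = 64
scene = np.zeros(n)
scene[rng.choice(n, 4, replace=False)] = rng.uniform(5, 10, 4)
scene /= scene.sum()

counts = np.zeros(n)              # photons detected per location
probes = np.ones(n)               # probing effort (dwell) spent per location
budget, batch = 5000, 250

for _ in range(budget // batch):
    estimate = counts / probes
    # Crude "interest" score: looks bright, or has not been probed much yet.
    interest = estimate + 1.0 / probes
    weights = interest / interest.sum()

    # Steer the next batch of probes toward the interesting locations.
    targets = rng.choice(n, size=batch, p=weights)
    for t in targets:
        probes[t] += 1
        counts[t] += rng.poisson(50 * scene[t])   # photon arrivals at the steered location

print("true bright pixels:     ", np.sort(np.flatnonzero(scene)))
print("estimated bright pixels:", np.sort(np.argsort(counts / probes)[-4:]))
```

The 1/probes term plays the role of Dick's "declare uninteresting regions temporarily interesting" safeguard: a location only stops attracting probes once it has been visited enough times to rule it out.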

Monday, February 15, 2010

CS: This week's long entry

Today's long post is a mix of hardware and theoretical papers. Woohoo, enjoy.

A (256x256) Pixel 76.7mW CMOS Imager/ Compressor Based on Real-Time In-Pixel Compressive Sensing by Vahid Majidzadeh, Laurent Jacques, Alexandre Schmid, Pierre Vandergheynst, Yusuf Leblebici. The abstract reads:
A CMOS imager is presented which has the ability to perform localized compressive sensing on-chip. In-pixel convolutions of the sensed image with measurement matrices are computed in real time, and a proposed programmable two-dimensional scrambling technique guarantees the randomness of the coefficients used in successive observation. A power and area-efficient implementation architecture is presented making use of a single ADC. A 256×256 imager has been developed as a test vehicle in a 0.18μm CIS technology. Using an 11-bit ADC, an SNR of 18.6dB with a compression factor of 3.3 is achieved after reconstruction. The total power consumption of the imager is simulated at 76.7mW from a 1.8V supply voltage.
I'll add the reference to the compressive sensing hardware page later this week.

Improved Constructions for Non-adaptive Threshold Group Testing by Mahdi Cheraghchi. The abstract reads:
The basic goal in combinatorial group testing is to identify a set of up to $d$ defective items within a large population of size $n \gg d$ using a pooling strategy. Namely, the items can be grouped together in pools, and a single measurement would reveal whether there are one or more defectives in the pool. The threshold model is a generalization of this idea where a measurement returns positive if the number of defectives in the pool passes a fixed threshold $u$, negative if this number is below a fixed lower threshold $\ell \leq u$, and may behave arbitrarily otherwise. We study non-adaptive threshold group testing (in a possibly noisy setting) and show that, for this problem, $O(d^{g+2} (\log d) \log(n/d))$ measurements (where $g := u-\ell$) suffice to identify the defectives, and also present almost matching lower bounds. This significantly improves the previously known (non-constructive) upper bound $O(d^{u+1} \log(n/d))$. Moreover, we obtain a framework for explicit construction of measurement schemes using lossless condensers. The number of measurements resulting from this scheme is ideally bounded by $O(d^{g+3} (\log d) \log n)$. Using state-of-the-art constructions of lossless condensers, however, we come up with explicit testing schemes with $O(d^{g+3} (\log d) \mathrm{quasipoly}(\log n))$ and $O(d^{g+3+\beta} \mathrm{poly}(\log n))$ measurements, for arbitrary constant $\beta > 0$.
Compressed Channel Sensing: A New Approach to Estimating Sparse Multipath Channels, by Waheed U. Bajwa, Jarvis Haupt, Akbar M. Sayeed, and Robert Nowak. The abstract reads:
High-rate data communication over a multipath wireless channel often requires that the channel response be known at the receiver. Training-based methods, which probe the channel in time, frequency, and space with known signals and reconstruct the channel response from the output signals, are most commonly used to accomplish this task. Traditional training-based channel estimation methods, typically comprising of linear reconstruction techniques, are known to be optimal for rich multipath channels. However, physical arguments and growing experimental evidence suggest that many wireless channels encountered in practice tend to exhibit a sparse multipath structure that gets pronounced as the signal space dimension gets large (e.g., due to large bandwidth or large number of antennas). In this paper, we formalize the notion of multipath sparsity and present a new approach to estimating sparse (or effectively sparse) multipath channels that is based on some of the recent advances in the theory of compressed sensing. In particular, it is shown in the paper that the proposed approach, which is termed as compressed channel sensing, can potentially achieve a target reconstruction error using far less energy and, in many instances, latency and bandwidth than that dictated by the traditional least-squares-based training methods.

Sampling Piecewise Sinusoidal Signals with Finite Rate of Innovation Methods by Jesse Berent, Pier Luigi Dragotti, and Thierry Blu. The abstract reads:
We consider the problem of sampling piecewise sinusoidal signals. Classical sampling theory does not enable perfect reconstruction of such signals since they are not band-limited. However, they can be characterized by a finite number of parameters, namely, the frequency, amplitude, and phase of the sinusoids and the location of the discontinuities. In this paper, we show that under certain hypotheses on the sampling kernel, it is possible to perfectly recover the parameters that define the piecewise sinusoidal signal from its sampled version. In particular, we show that, at least theoretically, it is possible to recover piecewise sine waves with arbitrarily high frequencies and arbitrarily close switching points. Extensions of the method are also presented such as the recovery of combinations of piecewise sine waves and polynomials. Finally, we study the effect of noise and present a robust reconstruction algorithm that is stable down to SNR levels of 7 [dB].

A Split Bregman Method for Non-Negative Sparsity Penalized Least Squares with Applications to Hyperspectral Demixing by Arthur Szlam, Zhaohui Guo and Stanley Osher. The abstract reads:

We will describe an alternating direction (aka split Bregman) method for solving problems of the form min_u ||Au − f||_2^2 + η||u||_1 such that u ≥ 0, where A is an m×n matrix, and η is a nonnegative parameter. The algorithm works especially well for solving large numbers of small to medium overdetermined problems (i.e. m > n) with a fixed A. We will demonstrate applications in the analysis of hyperspectral images.

I note the use of the term alternating direction for split Bregman :-)
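For readers curious about what such an alternating direction iteration looks like on the problem above, here is a bare-bones sketch (a generic ADMM for the non-negative lasso written by me, not the authors' code; the penalty, step and iteration counts are arbitrary): the u-update is a cached linear solve, and the z-update is a soft-threshold followed by clipping at zero.

```python
import numpy as np

def nonneg_lasso_admm(A, f, eta, rho=1.0, iters=200):
    """ADMM / split Bregman sketch for  min_u ||A u - f||_2^2 + eta*||u||_1  s.t. u >= 0."""
    m, n = A.shape
    z = np.zeros(n)
    w = np.zeros(n)
    # Cache the Cholesky factor used by every u-update (cheap when A is reused many times).
    L = np.linalg.cholesky(2.0 * A.T @ A + rho * np.eye(n))
    Atf2 = 2.0 * A.T @ f
    for _ in range(iters):
        u = np.linalg.solve(L.T, np.linalg.solve(L, Atf2 + rho * (z - w)))
        # Prox of eta*||.||_1 plus the non-negativity constraint: shift and clip at zero.
        z = np.maximum(u + w - eta / rho, 0.0)
        w = w + u - z
    return z

# Toy demixing-style problem: overdetermined (m > n), non-negative sparse abundances.
rng = np.random.default_rng(1)
m, n = 200, 50
A = rng.standard_normal((m, n))
u_true = np.maximum(rng.standard_normal(n), 0.0) * (rng.random(n) < 0.2)
f = A @ u_true + 0.01 * rng.standard_normal(m)
u_hat = nonneg_lasso_admm(A, f, eta=0.1)
print("relative error:", np.linalg.norm(u_hat - u_true) / max(np.linalg.norm(u_true), 1e-12))
```

The split is the classical one: the quadratic term keeps u, the l_1-plus-positivity term moves to z, and the Bregman/dual variable w glues the two back together.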

We study the smallest singular value of a square random matrix with i.i.d. columns drawn from an isotropic log-concave distribution. An important example is obtained by sampling vectors uniformly distributed in an isotropic convex body. We deduce that the condition number of such matrices is of the order of the size of the matrix and give an estimate on its tail behavior.
From the Rice Repository, we have:

On performance of greedy algorithms by Vladimir N. Temlyakov and Pavel Zheltov. The abstract reads:
In this paper we show that for dictionaries with small coherence in a Hilbert space the Orthogonal Greedy Algorithm (OGA) performs almost as well as the best m-term approximation for all signals with sparsity almost as high as the best theoretically possible threshold s = (1/2)(M^-1 + 1), by proving a Lebesgue-type inequality for arbitrary signals. On the other hand, we present an example of a dictionary with coherence M and an s-sparse signal for which OGA fails to pick up any atoms from the support, thus showing that the above threshold is sharp. Also, by proving a Lebesgue-type inequality for the Pure Greedy Algorithm (PGA), we show that PGA matches the rate of convergence of the best m-term approximation, even beyond the saturation limit of m^(-1/2).
The general theory of greedy approximation is well developed. Much less is known about how specific features of a dictionary can be used to our advantage. In this paper we discuss incoherent dictionaries. We build a new greedy algorithm which is called the Orthogonal Super Greedy Algorithm (OSGA). OSGA is more efficient than a standard Orthogonal Greedy Algorithm (OGA). We show that the rates of convergence of OSGA and OGA with respect to incoherent dictionaries are the same. Greedy approximation is also a fundamental tool for sparse signal recovery. The performance of Orthogonal Multi Matching Pursuit (OMMP), a counterpart of OSGA in the compressed sensing setting, is also analyzed under RIP conditions.

Finally, Massimo Fornasier and Holger Rauhut seem to be coming up with an introduction to compressive sensing aptly named Compressive Sensing - An Introduction (Feb 11, 2010 version). Holger Rauhut is also writing some lecture notes entitled Compressive sensing and structured random matrices, version of February 9, 2010. These lecture notes are still under construction; if you have comments of any sort, Holger would be happy to receive them.


On the interwebs, I just noticed a blog entitled Error Correcting Codes: Combinatorics, Algorithms and Applications for CSE 545 @ CSE SUNY Buffalo, and a student project at Stanford in EE 360 entitled Primary User Detection in Cognitive Radios with Multi-Resolutional Bayesian Compressive Sensing by Steven Hong.




Image Credit: NASA, In a very unique setting over Earth's colorful horizon, the silhouette of the space shuttle Endeavour is featured in this photo by an Expedition 22 crew member on board the International Space Station, as the shuttle approached for its docking on Feb. 9 during the STS-130 mission. Via the Bad Astronomy blog.

Saturday, February 13, 2010

CS: Acoustic Kaleidoscope, reducing interference in EEG, and visual resolution and cone spacing in the human fovea.


While attending the seminar of Maxime Dahan on Fluorescence imaging using compressed sensing, one of the audience members - I'm told it was Mickael Tanter - mentioned the time reversal kaleidoscope, which has some very eerie similarities with other concepts of using random materials to perform measurements, like the Random Lens Imager. The paper that talks about it (behind a paywall) is The time reversal kaleidoscope: a new concept of smart transducers for 3d ultrasonic imaging by Gabriel Montaldo, Delphine Palacio, Mickael Tanter and Matthias Fink. The abstract reads:
The design of 2D arrays for 3D ultrasonic imaging is a major challenge in medical and non-destructive applications. Thousands of transducers are typically needed for beam focusing and steering in 3D volumes. Here, we report a completely new approach for producing 3D images with a small number of transducers using the combined concepts of time reversal mirrors and chaotic reverberating cavities. Due to multiple reverberations inside the cavity, a “kaleidoscopic” transducer array is created with thousands of virtual transducers equivalent to 2D matrices. Beyond the scope of 3D medical imaging, this work leads to the new concept of “smart” transducer.
It is interesting that they can do imaging in an analog fashion; I am sure that a compressive sensing step could provide a computational element that would (dramatically) enhance the resulting images and even provide some superresolution. We'll see.

As some of you know, I am interested in a compressive sensing EEG system. This past week, the following interesting reference showed up on my radar screen: High-quality recording of bioelectric events. Part 1: Interference reduction, theory and practice, by A. C. Metting van Rijn, A. Peper and C. A. Grimbergen.

My webcrawler found the following paper published in Nature: The relationship between visual resolution and cone spacing in the human fovea by Ethan Rossi, Austin Roorda. [supplemental information] [video1][video2] [video3]. The abstract reads:
Visual resolution decreases rapidly outside of the foveal center. The anatomical and physiological basis for this reduction is unclear. We used simultaneous adaptive optics imaging and psychophysical testing to measure cone spacing and resolution across the fovea, and found that resolution was limited by cone spacing only at the foveal center. Immediately outside of the center, resolution was worse than cone spacing predicted and better matched the sampling limit of midget retinal ganglion cells.
This paper led me to another older one: Psychophysical estimate of extrafoveal cone spacing by Nancy J. Coletta and David R. Williams. The abstract reads:
In the extrafoveal retina, interference fringes at spatial frequencies higher than the resolution limit look like two-dimensional spatial noise, the origin of which has not been firmly established. We show that over a limited range of high spatial frequencies this noise takes on a striated appearance, with the striations running perpendicular to the true fringe orientation. A model of cone aliasing based on anatomical measurements of extrafoveal cone position predicts that this orientation reversal should occur when the period of the interference fringe roughly equals the spacing between cones, i.e., when the fringe spatial frequency is about twice the cone Nyquist frequency. Psychophysical measurements of the orientation reversal at retinal eccentricities from 0.75 to 10 deg are in quantitative agreement with this prediction. This agreement implies that at least part of the spatial noise observed under these conditions results from aliasing by the cone mosaic. The orientation reversal provides a psychophysical method for estimating spacing in less regular mosaics, complementing another psychophysical technique for measuring spacing in the more regular mosaic of foveal cones [D. R. Williams, Vision Res. 25, 195 (1985); Vision Res. (submitted)].
In it one can read:
Previously, Yellott [16] had proposed that disorder of the cone lattice prevents aliasing by smearing high spatial frequencies into broadband noise. The present data confirm that, although the aliasing noise is indeed smeared, it is still accessible to psychophysical observation. In fact, this aliasing noise can be used to draw inferences about the spacing of cones.
In light of this entry on Tilings, Islamic Art, Our Retina, Papoulis Sub-Nyquist Theorem, I had talked to Austin Roorda but never turned that discussion into a blog entry. I am such a putz. I should unearth it sometime.

Credit: NASA / JPL, the Pale Blue Dot, Earth as seen from Voyager.

Friday, February 12, 2010

CS: A short op-ed, some comments, an internship and a postdoc

Can bold work make it through the peer-review process? I am not sure. For instance, it is still a mystery to me why the Random Lens Imager paper has not been published. Another baffling example is the fate of Fast Sparse Representation based on Smoothed l_0 Norm by G. Hosein Mohimani, Massoud Babaie-Zadeh and Christian Jutten, which features the following abstract:
In this paper, a new algorithm for Sparse Component Analysis (SCA) or atomic decomposition on over-complete dictionaries is presented. The algorithm is essentially a method for obtaining sufficiently sparse solutions of underdetermined systems of linear equations. The solution obtained by the proposed algorithm is compared with the minimum l1-norm solution achieved by Linear Programming (LP). It is experimentally shown that the proposed algorithm is about two to three orders of magnitude faster than the state-of-the-art interior-point LP solvers, while providing the same (or better) accuracy.
As advertised in the abstract, SL0 (the code is there) is indeed blazingly fast. In fact, in the cases I ran this week, SL0 was much faster than SPGL1 or GPSR and more accurate than these solvers and some L1-OMP codes. We all know that the peer review process has its flaws: this example of rejection by the IEEE Signal Processing Letters is one obvious instance of mismanagement by the editor(s). No matter what the reviewers were saying, given the astounding claim, the editor should have asked the reviewers to verify the "two to three orders of magnitude faster" assertion. Obviously this is not how it went: this rejection is a disgrace.
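
For readers who want a feel for how SL0 operates, here is a minimal sketch of a smoothed-l0 style solver in Python. This is my own illustrative re-implementation, not the authors' reference SL0 code; the step size, inner-loop count and sigma schedule below are assumptions.

import numpy as np

def sl0(A, y, sigma_min=1e-4, sigma_decrease=0.5, mu=2.0, L=3):
    # Illustrative smoothed-l0 solver: approximate ||x||_0 by a Gaussian
    # surrogate of width sigma and shrink sigma gradually (graduated
    # non-convexity). Not the authors' reference implementation.
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                              # minimum-l2-norm feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(L):
            delta = x * np.exp(-x**2 / (2 * sigma**2))
            x = x - mu * delta                  # gradient step on the surrogate
            x = x - A_pinv @ (A @ x - y)        # project back onto {x : Ax = y}
        sigma *= sigma_decrease
    return x

# toy check: recover a 10-sparse vector from 80 random measurements
rng = np.random.default_rng(0)
n, m, k = 256, 80, 10
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
print(np.linalg.norm(sl0(A, A @ x_true) - x_true))

The speed comes from the fact that each iteration is only a couple of matrix-vector products, with no linear programming anywhere.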

I have had several comments this past week on a number of entries and, since most of you read the blog through an RSS feed, you are probably missing out on those. So here they are:

Regarding the NA Digest question I recently asked:

Subject: Question: Is there a name for this trivial "algorithm" ?

The following may sound like a dumb question because the "algorithm" is so trivial, but I was wondering if there was a name for this algorithm:

You are trying to infer an m x n matrix A with unknown elements. The only method of learning about this matrix is through applying matrix vector products Ax = y. That is, you construct a vector x, perform a multiplication Ax, and finally analyze the resulting vector y of observations. The algorithm trivially revolves around using vectors of the canonical basis thereby providing direct access to the columns of A.

This algorithm can be directly implemented in n steps and therefore requires n matrix vector products.

I am interested in a compressive sensing approach to this matrix discovery algorithm when one knows in advance that the matrix A is sparse and one wants to reduce the number of matrix-vector steps from n to O(k log(n)) where k represents the sparsity of matrix A. See the following link: http://is.gd/7TrsH for more information on how to approach this problem.

If you have any ideas on the name of this algorithm (the trivial one), please send suggestions to: igorcarron@gmail.com

Cheers,

Igor.
http://nuit-blanche.blogspot.com/search/label/CS
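
Before getting to the responses, here is a minimal sketch of both the trivial canonical-basis probing and the compressive variant I had in mind. It is only illustrative: the dimensions and the use of cvxpy for the l1 minimization are my own assumptions, not the MMA17 implementation.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
m, n, p, k = 10, 20, 8, 12          # illustrative sizes: p << n probes, k nonzeros

# unknown sparse "black box" matrix A
A = np.zeros((m, n))
A.flat[rng.choice(m * n, k, replace=False)] = rng.standard_normal(k)

# trivial algorithm: n canonical-basis probes read A off column by column
A_trivial = np.column_stack([A @ np.eye(n)[:, j] for j in range(n)])
assert np.allclose(A_trivial, A)

# compressive variant: p random probes, then l1-recovery of vec(A),
# using y_i = A x_i = (x_i^T kron I_m) vec(A)
X = rng.standard_normal((n, p))
Y = A @ X
Phi = np.vstack([np.kron(X[:, i], np.eye(m)) for i in range(p)])
a = cp.Variable(m * n)
cp.Problem(cp.Minimize(cp.norm1(a)), [Phi @ a == Y.flatten(order="F")]).solve()
A_cs = np.reshape(a.value, (m, n), order="F")
print(np.linalg.norm(A_cs - A) / np.linalg.norm(A))   # small when p is large enough
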
I received some very insightful responses from the wise readers of the list. Here are a few of them:

Trond Steihaug kindly responded first with:
The algorithm you describe using matrix-vector products and known sparsity was proposed by Curtis, Powell and Reid, IMA Journal of Applied Mathematics 1974 13(1):117-119; On the Estimation of Sparse Jacobian Matrices, A. R. CURTIS, M. J. D. POWELL and J. K. REID. I enclose the paper.

Symmetry or other known structural properties can also be utilized.
then Robert Schreiber remarked:
Igor,

As far as I know, there is no name for computing A as A * I; probably shouldn't be.

The interesting issue you raise is this -- when A is sparse, if its sparsity pattern (i.e. the graph of A) is known and its elements are unknown, then can one choose fewer than n vectors {x_1, ..., x_m} with m < n.

In this case, it is well known that m = chi(G(A)) is possible, where chi(G) is the chromatic number of the graph G. Each x is a sum of standard basis vectors. You can figure out the rest of this... This technique is well known as a fast way to compute an approximate Jacobian of a nonlinear function with fewer than n function evaluations.
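
For the curious, here is a minimal sketch of the coloring idea Robert alludes to. The greedy coloring, the helper names and the tridiagonal test case are my own illustrative choices; real implementations use smarter column orderings.

import numpy as np

def cpr_probe(matvec, pattern):
    # Curtis-Powell-Reid style probing when the sparsity pattern of A is known:
    # columns whose nonzero rows do not overlap can share one probe (a sum of
    # standard basis vectors); a greedy coloring of the column conflict graph
    # gives the probe groups. `pattern` is a boolean m x n array of nonzeros.
    m, n = pattern.shape
    color = -np.ones(n, dtype=int)
    for j in range(n):
        taken = {color[i] for i in range(j) if np.any(pattern[:, i] & pattern[:, j])}
        c = 0
        while c in taken:
            c += 1
        color[j] = c
    A_hat = np.zeros((m, n))
    for c in range(color.max() + 1):
        group = np.flatnonzero(color == c)
        x = np.zeros(n)
        x[group] = 1.0                      # one probe per color class
        y = matvec(x)
        for j in group:                     # within a group, the nonzero rows of
            A_hat[pattern[:, j], j] = y[pattern[:, j]]   # column j belong to it alone
    return A_hat, color.max() + 1

# toy check on a tridiagonal matrix: 3 probes instead of n
n = 50
A = np.diag(np.random.rand(n)) + np.diag(np.random.rand(n - 1), 1) + np.diag(np.random.rand(n - 1), -1)
A_hat, nprobes = cpr_probe(lambda x: A @ x, A != 0)
print(nprobes, np.allclose(A_hat, A))
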

and finally, Andreas Stathopoulos noticed:

Hi, I looked at your blog and the algorithm you are proposing. This looks like the classic Monte Carlo method used by many physicists to approximate various functions of A, e.g., diag(inv(A)), trace(inv(A)), trace(log(A)), det(A), etc. Similar methods have been used in data mining.

The principle is that for random vectors the covariance Exp(XX^T) = Identity. So for a large number of random vectors (normalized by their number), XX^T \approx I. Then A(XX^T) = (AX)X^T \approx A, so we only need to compute Y = AX and YX^T. Looking at your method, it looks like your underdetermined system does exactly that without any assumptions about covariance. In the limit, it should turn out to be unnecessary. Moreover, the Monte Carlo methods use {-1,1} random entries in X.

I am unclear how the number of nonzeros in A or the log(n) enter in the runtime of the method. For the noise methods above it is known that they converge as O(1/sqrt(samplesize)). The constant depends on the variance of the function of A that is calculated.
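Just to illustrate Andreas' point about the O(1/sqrt(samplesize)) convergence, here is a tiny sketch (the sizes are arbitrary):

import numpy as np

rng = np.random.default_rng(2)
m, n = 30, 40
A = rng.standard_normal((m, n))
for p in (100, 10_000, 100_000):
    X = rng.choice([-1.0, 1.0], size=(n, p))       # +-1 probes, E[x x^T] = I
    A_hat = (A @ X) @ X.T / p                      # Monte Carlo estimate of A
    print(p, np.linalg.norm(A_hat - A) / np.linalg.norm(A))
# the error shrinks roughly like 1/sqrt(p), which is why this route needs far
# more probes than a sparsity-aware recovery when A really is sparse
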
I should have been clearer in my statement "one knows in advance that the matrix A is sparse" in that the sparsity pattern is unknown a priori. I liked Robert's "there is no name for computing A as A * I; probably shouldn't be" :-) Anyway, the implementation of the algorithm is featured in MMA17. As it turns out, in the MMA17 blog entry, Dick Gordon, one of the authors of the ART algorithm, wrote:

Re: Monte Carlo Algebra
Dear Igor,
Maybe this is just a train of thought, but your Monday Morning Linear Black Box random algorithm reminds me of:

Gordon, R. (1970). On Monte Carlo algebra. J. Appl. Prob. 7, 373-387.
Abstract: A Monte Carlo method is proposed and demonstrated for obtaining an approximate algebraic solution to linear equations with algebraic coefficients arising from first order master equations at steady state. Exact solutions are hypothetically obtainable from the spanning trees of an associated graph, each tree contributing an algebraic term. The number of trees can be enormous. However, because of a high degeneracy, many trees yield the same algebraic term. Thus an approximate algebraic solution may be obtained by taking a Monte Carlo sampling of the trees, which yields an estimate of the frequency of each algebraic term. The accuracy of such solutions is discussed and algorithms are given for picking spanning trees of a graph with uniform probability. The argument is developed in terms of a lattice model for membrane transport, but should be generally applicable to problems in unimolecular kinetics and network analysis. The solution of partition functions and multivariable problems by analogous methods is discussed.

This may be the first, if not only, example of Monte Carlo applied to symbolic math, rather than numbers or arrays of numbers. Perhaps it could be regarded as a compressive sampling of trees?
Yours, -Dick Gordon
In the Compressive Sensing Linear Algebra 101 thread, Stephen Becker mentions the following:
If you change your prior and instead of assuming that A is sparse you now assume that it has rank r (presumably with rank r < size(A)), then you can find A (with high probability) using only 2*r linear measurements.

You need to be able to measure Y = A*X (with X having r columns), as well as Z = A'*W (with W having r columns). You only need to perform a single QR decomposition on W or Z (very cheap) and a matrix multiply. There's no need for a solver like GPSR. In practice, you can do this for a matrix of size m = n = 2^30, and if the rank is small (say, 10) it's very fast (less than 1 second for rank = 10).
I realize that the recent rank minimization work provides some handy and powerful techniques, but sometimes the physics of the system being modeled is not amenable to this constraint.
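
For completeness, here is a minimal sketch of the two-sided probing Stephen describes. The Gaussian probes and the pseudo-inverse of W'Y are my own choices; he mentions a QR-based route instead, and in practice one would never form A explicitly at his 2^30 sizes.

import numpy as np

rng = np.random.default_rng(3)
m, n, r = 500, 400, 10
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # unknown rank-r matrix

X = rng.standard_normal((n, r))     # r probes of A
W = rng.standard_normal((m, r))     # r probes of A'
Y = A @ X                           # what the black box returns
Z = A.T @ W

A_hat = Y @ np.linalg.pinv(W.T @ Y) @ Z.T   # exact (with high probability) when rank(A) <= r
print(np.linalg.norm(A_hat - A) / np.linalg.norm(A))
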

Finally, a commenter asked the following about the passive illumination set-up featured in the following post:

Hi all, I'm trying to implement a setup similar to the one you present here but I'm having some problems in replicating your results.

Regarding the photodetection setup, do you have any constraints there? Or is it a simple photodetector with an amplifier?

Any hints...? Help?
I sent an email out to some of the authors of this set-up and it looks like this is indeed what it is (a simple photodetector with an amplifier), but I am looking forward to an answer from the people who physically built the system.


Finally, Petros Boufounos sent two position announcements at MERL; they are also listed on the compressive sensing jobs page:

Feb 11th, 2010. Postdoctoral Researcher, Compressive Sensing, MERL, Cambridge, MA

MERL is seeking qualified and high-caliber applicants for a postdoctoral researcher position in the Multimedia Group. The successful candidate will be expected to perform original research in the area of Compressive Sensing, with particular application in remote sensing and radar signal processing. The candidate may also elect to devote part of his time to independent research. This is a 1-year appointment with possibility for extension.

Qualifications:
• Applicants are expected to hold a Ph.D. degree in Electrical Engineering, Computer Science, or closely related field.
• Background in signal and/or image processing, especially in the area of sparse approximation and Compressive Sensing.
• Experience with processing remotely sensed data (SAR, radar, multispectral imaging) and/or array processing.
• Applicants must have a strong publication record demonstrating novel research achievements.
• Excellent programming skills (C/C++, Matlab, etc) are required.
• Excellent presentation, written and inter-personal communication skills are expected.

Mitsubishi Electric Research Laboratories, Inc. is an Equal Opportunity Employer.
PRINCIPALS ONLY apply via email to petrosb@merl.com with "postdoctoral researcher" in the subject
No phone calls please.

Feb 11th, 2010. Internship: Compressive Sensing, MERL, Cambridge, MA

Development of new sensing technologies using principles from compressive sensing. The project involves development of hardware and/or simulation of physical systems. The ideal candidate should have significant signal processing experience, and should be comfortable with Matlab and C/C++. Experience in the areas of compressive sensing, sparse approximation, and radar signal processing is desirable but not necessary. Interested candidates apply to Petros Boufounos (petrosb@merl.com) with "summer internship" in the subject. No phone calls please.


Photo: 3D view of an Enceladan landscape. This 3D view of Enceladus is based upon a topographic model calculated using photoclinometry on a single Cassini image of the moon, looking along a newly named set of fractures, Bishangarh Fossae. Credit: Image data: NASA / JPL / SSI; 3D model: Bernhard Braun. Via the Planetary Society blog.

Wednesday, February 10, 2010

Dear Google

Here is my problem with your outstanding Gmail software. Below is the Standard Gmail view:


and here is the Basic HTML view


Did you notice that "Compose Mail" has disappeared in the Standard view? By the way, the "More" option is really not an option.

[Update: It's been fixed, but I am now getting this Buzz thingy that I think I want to avoid.]

CS: MMA17, Quantum Hamiltonians via random measurements, Manifold-Based CS, CS Holography, Spectral reconstructions, Compressive Video Sensors

I asked the Numerical Analysis E-mail list a question about the name of the algorithm mentioned on Monday (Monday Morning Algorithm 17). I was surprised to hear back that it looked like what is called "finite difference" (different from the technique used to discretize PDEs) or "sparse finite differencing". Except that in these techniques, dating back to the early 1970's, the sparsity pattern needs to be known. In our case, that sparsity pattern is obviously not known, which makes the algorithm all the more interesting. More on that later...

Today, we have two preprints from arXiv, a thesis and two papers:

Efficient estimation of many-body quantum Hamiltonians via random measurements by Alireza Shabani, Masoud Mohseni, Seth Lloyd, Robert Kosut, Herschel Rabitz. The abstract reads:
Complete information about the Hamiltonian of a quantum system is sufficient to predict its future behavior with arbitrary accuracy. There is presently no method for efficient estimation of Hamiltonian parameters due to the exponentially rising number of the required experimental configurations with respect to the system degrees of freedom and inherent nonlinear dependence of measurement outcomes on the Hamiltonian parameters. Here, we develop a compressed sensing method for scalable Hamiltonian identification of multipartite quantum systems. We demonstrate that with only O(s log(d)) random measurement settings one can estimate the Hamiltonian of a d-dimensional system, provided that the Hamiltonian is known to be almost s-sparse in a basis. This approach is robust to noise and experimental imperfections in data extraction. We numerically simulate the performance of this algorithm for three- and four-body interactions in spin-coupled quantum dots and atoms in optical lattices. Furthermore, we apply the algorithm to characterize Hamiltonian fine structure and system-bath couplings in open quantum systems.

Manifold-Based Signal Recovery and Parameter Estimation from Compressive Measurements by Michael B. Wakin. The abstract reads:
A field known as Compressive Sensing (CS) has recently emerged to help address the growing challenges of capturing and processing high-dimensional signals and data sets. CS exploits the surprising fact that the information contained in a sparse signal can be preserved in a small number of compressive (or random) linear measurements of that signal. Strong theoretical guarantees have been established on the accuracy to which sparse or near-sparse signals can be recovered from noisy compressive measurements. In this paper, we address similar questions in the context of a different modeling framework. Instead of sparse models, we focus on the broad class of manifold models, which can arise in both parametric and non-parametric signal families. Building upon recent results concerning the stable embeddings of manifolds within the measurement space, we establish both deterministic and probabilistic instance-optimal bounds in $\ell_2$ for manifold-based signal recovery and parameter estimation from noisy compressive measurements. In line with analogous results for sparsity-based CS, we conclude that much stronger bounds are possible in the probabilistic setting. Our work supports the growing empirical evidence that manifold-based models can be used with high accuracy in compressive signal processing.
I found the following thanks to the Google: Compressed Sensing with off-axis, frequency-shifting holography by Marcio M. Marim, Michael Atlan, Elsa Angelini and Jean-Christophe Olivo-Marin. The abstract reads:
This work reveals an experimental microscopy acquisition scheme successfully combining Compressed Sensing (CS) and digital holography in off-axis and frequency-shifting conditions. CS is a recent data acquisition theory involving signal reconstruction from randomly undersampled measurements, exploiting the fact that most images present some compact structure and redundancy. We propose a genuine CS-based imaging scheme for sparse gradient images, acquiring a diffraction map of the optical field with holographic microscopy and recovering the signal from as little as 7% of random measurements. We report experimental results demonstrating how CS can lead to an elegant and effective way to reconstruct images, opening the door for new microscopy applications.
There is also a thesis defended last summer entitled: Optimal spectral reconstructions from deterministic and stochastic sampling geometries using compressive sensing and spectral statistical models by Oliver Jeromin. The abstract of the dissertation reads:
This dissertation focuses on the development of high-quality image reconstruction methods from a limited number of Fourier samples using optimized, stochastic and deterministic sampling geometries. Two methodologies are developed: an optimal image reconstruction framework based on Compressive Sensing (CS) techniques and a new, Spectral Statistical approach based on the use of isotropic models over a dyadic partitioning of the spectrum. The proposed methods are demonstrated in applications in reconstructing fMRI and remote sensing imagery. Typically, a reduction in MRI image acquisition time is achieved by sampling K-space at a rate below the Nyquist rate. Various methods using correlation between samples, sample averaging, and more recently, Compressive Sensing, are employed to mitigate the aliasing effects of under-sampled Fourier data. The proposed solution utilizes an additional layer of optimization to enhance the performance of a previously published CS reconstruction algorithm. Specifically, the new framework provides reconstructions of a desired image quality by jointly optimizing for the optimal K-space sampling geometry and CS model parameters. The effectiveness of each geometry is evaluated based on the required number of FFT samples that are available for image reconstructions of sufficient quality. A central result of this approach is that the fastest geometry, the spiral low-pass geometry has also provided the best (optimized) CS reconstructions. This geometry provided significantly better reconstructions than the stochastic sampling geometries recommended in the literature. An optimization framework for selecting appropriate CS model reconstruction parameters is also provided. Here, the term “appropriate CS parameters” is meant to infer that the estimated parameter ranges can provide some guarantee for a minimum level of image reconstruction performance. Utilizing the simplex search algorithm, the optimal TV-norm and Wavelet transform penalties are calculated for the CS reconstruction objective function. Collecting the functional evaluation values of the simplex search over a large data set allows for a range of objective function weighting parameters to be defined for the sampling geometries that were found to be effective. The results indicate that the CS parameter optimization framework is significant in that it can provide for large improvements over the standard use of non-optimized approaches. The dissertation also develops the use of a new Spectral Statistical approach for spectral reconstruction of remote sensing imagery. The motivation for pursuing this research includes potential applications that include, but are not limited to, the development of better image compression schemas based on a limited number of spectral coefficients. In addition, other applications include the use of spectral interpolation methods for remote sensing systems that directly sample the Fourier domain optically or electromagnetically, which may suffer from missing or degraded samples beyond and/or within the focal plane. For these applications, a new spectral statistical methodology is proposed that reconstructs spectral data from uniformly spaced samples over a dyadic partition of the spectrum. Unlike the CS approach that solves for the 2D FFT coefficients directly, the statistical approach uses separate models for the magnitude and phase, allowing for separate control of the reconstruction quality of each one. 
A scalable solution that partitions the spectral domain into blocks of varying size allows for the determination of the appropriate covariance models of the magnitude and phase spectra bounded by the blocks. The individual spectral models are then applied to solving for the optimal linear estimate, which is referred to in literature as Kriging. The use of spectral data transformations are also presented as a means for producing data that is better suited for statistical modeling and variogram estimation. A logarithmic transformation is applied to the magnitude spectra, as it has been shown to impart intrinsic stationarity over localized, bounded regions of the spectra. Phase spectra resulting from the 2D FFT can be best described as being uniformly distributed over the interval of -pi to pi. In this original state, the spectral samples fail to produce appropriate spectral statistical models that exhibit inter-sample covariance. For phase spectra modeling, an unwrapping step is required to ensure that individual blocks can be effectively modeled using appropriate variogram models. The transformed magnitude and unwrapped phase spectra result in unique statistical models that are optimal over individual frequency blocks, which produce accurate spectral reconstructions that account for localized variability in the spectral domain. The Kriging spectral estimates are shown to produce higher quality magnitude and phase spectra reconstructions than the cubic spline, nearest neighbor, and bilinear interpolators that are widely used. Even when model assumptions, such as isotropy, violate the spectral data being modeled, excellent reconstructions are still obtained. Finally, both of the spectral estimation methods developed in this dissertation are compared against one another, revealing how each one of the methods developed here is appropriate for different classes of images. For satellite images that contain a large amount of detail, the new spectral statistical approach, reconstructing the spectrum much faster, from a fraction of the original high frequency content, provided significantly better reconstructions than the best reconstructions from the optimized CS geometries. This result is supported not only by comparing image quality metrics, but also by visual assessment.

and finally: Compressive video sensors using multichannel imagers by Mohan Shankar, Nikos Pitsianis, and David Brady. The abstract reads:
We explore the possibilities of obtaining compression in video through modified sampling strategies using multichannel imaging systems. The redundancies in video streams are exploited through compressive sampling schemes to achieve low power and low complexity video sensors. The sampling strategies as well as the associated reconstruction algorithms are discussed. These compressive sampling schemes could be implemented in the focal plane readout hardware, resulting in a drastic reduction in data bandwidth and computational complexity.



Credit: NASA/Jim Grossmann, STS-130, the last time you will ever see a Shuttle launch at night. Via the Planetary Society blog.
