## Friday, December 31, 2010

### Nuit Blanche on Five Continents, Phobos Eclipse on Mars.

I am still looking for the most interesting paper on Compressive Sensing you read in 2010. You can remain anonymous: just the title is enough, no need for full references.

This is a map of the last 500 visitors to the site. Wow.
For those of you close to the international dateline, Happy New Year,

Credit: NASA, JPL, Cornell, Texas A&M University. Martian Eclipse with Phobos and the Sun.

## Thursday, December 30, 2010

### Some Stats

The usual stats for a blog amount to pageviews and so forth. For Nuit Blanche we have:
While this looks like a vanity exercise, what does this bring you :-) ?
• Your video may go viral.

### CS: A comment on USPS as a continent sized sensor/ Diatoms and Flower Constellations / Concave regularizations and MAP priors for sparse topic models

I usually illustrate blog entries with the most recent raw images taken by the Cassini spacecraft, as I want to be reminded that we are discovering new worlds as we speak. Two days ago, you may have noticed that Saturn had a feature that looked like something one would expect to find on Jupiter. The Bad Astronomy blog has a piece on this new feature. It looks like a storm ... a large one.

In a comment to yesterday's concept of using the USPS as a continent-sized sensor, Meena Mani said:
"It is simply amazing to have such a forward looking concept being articulated." And now that it has been articulated, it seems amazing that it hadn't been before. I think it has to do with privacy concerns.
The privacy issue is addressed in the report, and I agree that it may indeed be a big deal. However, there are a lot of non-imaging capabilities that could be instantiated (electromagnetic, aerosol, ...). Even some imaging capabilities could be implemented without much risk to privacy. For instance, what about using hyperspectral imaging in support of pollution assessment or precision agriculture? Concepts based on the use of cheap CASSI imagers would be ideal for that purpose. Well, it looks like others are also thinking about giant sensors, such as the people behind the Living Earth Simulator. For one thing, I'd rather we use already existing infrastructure such as the USPS or the electric grid.

On a totally different note, I came across the mention of diatoms (Dick Gordon is working on these things). According to wikipedia,

Diatoms are a major group of algae, and are one of the most common types of phytoplankton. Most diatoms are unicellular, although they can exist as colonies in the shape of filaments or ribbons (e.g. Fragilaria), fans (e.g. Meridion), zigzags (e.g. Tabellaria), or stellate colonies (e.g. Asterionella). Diatoms are producers within the food chain. A characteristic feature of diatom cells is that they are encased within a unique cell wall made of silica (hydrated silicon dioxide) called a frustule. These frustules show a wide diversity in form, but usually consist of two asymmetrical sides with a split between them, hence the group name. Fossil evidence suggests that they originated during, or before, the early Jurassic Period. Diatom communities are a popular tool for monitoring environmental conditions, past and present, and are commonly used in studies of water quality.

I was stunned by the shape of these diatoms, as they all resemble instances of much larger objects.

For instance, the five-star-shaped diatoms occur only inside circles (l_2 norm = constant). This is probably a coincidence, but when Daniele Mortari designs his Flower Constellation orbital trajectories for spacecraft orbiting Earth, the five-star shape also appears within a circular envelope. Let us remember that his trajectories live in a 1/(l_2 norm)^2 potential well. Matt Wilkins' thesis on the subject of flower constellations can be found here. There are many configurations for these orbital trajectories, which seem to depend on six parameters, and I clearly remember that you could produce the straight/convex/concave triangles and squares shown above. Flower constellations around Earth have a size of about 50,000,000 meters, while diatoms have sizes on the order of 2-200 micrometers: that's about 13 orders of magnitude!

I just found this final report for a course at Cornell: Concave regularizations and MAP priors for sparse topic models by Johan Ugander

## Wednesday, December 29, 2010

### CS: The USPS as a Continent Sized Sensor, Disparity-Compensated Compressed-Sensing Reconstruction for Multiview Images

Michael Ravnitzky, the chief counsel to the chairman of the Postal Regulatory Commission, sent me an email asking if I had read his New York Times op-ed entitled The Postman Always Pings Twice. He also sent me a more detailed version of his incredible idea (a first draft of that idea can be found here): use the time-space distribution of the USPS fleet to provide some sort of service (data use by companies, science-based experiments, ...) to a diverse set of stakeholders (federal, state, companies, universities, ...). It is simply amazing to have such a forward looking concept being articulated. From what Michael tells me, this is the beginning of a (well known) process: the USPS has to buy into the idea, then Congress has to do the same ... all in all, it will eventually require the input and the interest of several stakeholders to make it a reality.

In my response to him, I pointed out that in order to help his project, he probably had to use as a model other types of networks set up by the government that cost real money (as opposed to the marginal cost entailed here, since the infrastructure in the case of the USPS fleet is already up and running). A famous example I can think of is the GPS constellation, the array of satellites that eventually provides you with directions on your iPhone. Up until President Clinton signed an Executive Order (i.e. the marginal cost on top of fleet development entailed the ability to deny service), GPS was basically scrambled for precision civilian use. Even after the sign-off, I specifically recall discussions with several people who would argue that GPS could never be used for civilian purposes. We know this is no longer true, as it has since spawned a large set of services.

Michael also tells me that Popular Science magazine will have an article in the February issue, which hits the newsstands around January 15th. He thinks of the idea primarily as a new analytical tool for scientific discovery and pollution reduction.

Finally, here is an "old" paper that just showed up on my radar screen: Disparity-Compensated Compressed-Sensing Reconstruction for Multiview Images by Maria Trocan, Thomas Maugey, James Fowler, and Béatrice Pesquet-Popescu. The abstract reads:
In a multiview-imaging setting, image-acquisition costs could be substantially diminished if some of the cameras operate at a reduced quality. Compressed sensing is proposed to effectuate such a reduction in image quality wherein certain images are acquired with random measurements at a reduced sampling rate via projection onto a random basis of lower dimension. To recover such projected images, compressed-sensing recovery incorporating disparity compensation is employed. Based on a recent compressed-sensing recovery algorithm for images that couples an iterative projection-based reconstruction with a smoothing step, the proposed algorithm drives image recovery using the projection-domain residual between the random measurements of the image in question and a disparity-based prediction created from adjacent, high-quality images. Experimental results reveal that the disparity-based reconstruction significantly outperforms direct reconstruction using simply the random measurements of the image alone.

## Tuesday, December 28, 2010

### CS: Some blog entries, Calibrationless Parallel Imaging with Structured Low-Rank Matrix Completion, Accelerated dynamic MRI and low-rank structure: k-t SLR , Sparse Recovery for Earth Mover Distance, CMUX for multi-channel CS, QP and LP in CS, MMDS 2010, A Sparser JLT, Low rank perturbations of large random matrices

Here are some blog entries of interest:

As I was reading Miki Lustig's summary of his seminar at the Technion this Thursday:
Pixel Club Seminar: Practical Parallel Imaging Compressed Sensing MRI: Summary of Two Years of Experience Accelerating Body MRI of Pediatric Patients
Speaker:
Michael Lustig (Electrical Engineering and Computer Sciences, College of Engineering, UC Berkeley)
Date:
Thursday, 30.12.2010, 11:30
Place:
Room 337-8 Taub Bld.

Magnetic Resonance Imaging has revolutionized diagnostic medicine. It is an excellent tool for disease diagnosis and monitoring, offering superb soft tissue contrast and high anatomic resolution; unlike computed tomography (CT), it involves no ionizing radiation. However, MRI suffers from several shortcomings, one of which is the inherently slow data acquisition. This has limited the penetration of MRI into applications that require sharp images of fast-moving small body parts, such as angiography, cardiovascular imaging, imaging small children and fetal imaging.

To overcome this limitation, many methods for fast imaging by reduced sampling have been proposed. These are based on exploiting various redundancies in the data. Parallel Imaging is the most noteworthy. It is a well-established accelerated imaging technique based on the spatial sensitivity of array receivers. Compressed sensing is an emerging accelerated imaging based on the compressibility of medical images. Synergistic combination of parallel imaging and compressed sensing offers much higher acceleration and better quality imagery.

For the last two years, we have been experimenting with applying compressed sensing parallel imaging for MR body imaging of pediatric patients. It is a joint-effort by teams from UC Berkeley, Stanford University and GE Healthcare. The talk aims to summarize our experience so far. I will start with some background on MR imaging, parallel imaging and compressed sensing MRI. I will then turn to describe our unique approach of data acquisition and reconstruction, our implementation on parallel processors (multi-core and GPGPU), applications and clinical studies. Our approach is implemented and installed at Lucile Packard Children's Hospital at Stanford. So far, it is the first and only place in the world in which compressed sensing is routinely used in clinical practice. We can routinely achieve clinical quality reconstructions of body pediatric MR, at more than 8-fold acceleration. These are reconstructed and displayed in about a minute using our GPU-based parallel processing reconstruction.

Time permitting I will also describe some recent advances in parallel imaging that leads to interesting low-rank structured matrix completion problems.
The last part of the summary caught my attention, so I looked around and saw the following abstract:

A new method for parallel imaging that requires no special autocalibration lines or calibration scans is presented. Instead, the method jointly calibrates and synthesizes missing data from the entire acquired k-space. The proposed method is based on low-rank matrix completion, which is an extension of compressed sensing theory to matrices. It is implemented by iterative singular value thresholding. The method can be used to reconstruct undersampled data, to generate calibration data for GRAPPA-like methods, or just to improve calibration when the calibration area is too small.

Mmuh! Calibrationless! I like this paper already and look forward to it. Others are also looking at matrix completion issues; here is a recent paper:

We introduce a novel algorithm to reconstruct dynamic MRI data from under-sampled k-t space data. In contrast to classical model-based cine MRI schemes that rely on the sparsity or banded structure in Fourier space, we use the compact representation of the data in the Karhunen-Loève transform (KLT) domain to exploit the correlations in the dataset. The use of the data-dependent KL transform makes our approach ideally suited to a range of dynamic imaging problems, even when the motion is not periodic. In comparison to current KLT-based methods that rely on a two-step approach to first estimate the basis functions and then use them for reconstruction, we pose the problem as a spectrally regularized matrix recovery problem. By simultaneously determining the temporal basis functions and their spatial weights from the entire measured data, the proposed scheme is capable of providing high quality reconstructions at a range of accelerations. In addition to using the compact representation in the KLT domain, we also exploit the sparsity of the data to further improve the recovery rate. Validations using numerical phantoms and in-vivo cardiac perfusion MRI data demonstrate the significant improvement in performance offered by the proposed scheme over existing methods.
Here are some recent papers that I had somehow forgotten to feature:

Sparse Recovery for Earth Mover Distance by Rishi Gupta, Piotr Indyk, Eric Price. The abstract reads:
We initiate the study of sparse recovery problems under the Earth-Mover Distance (EMD). Specifically, we design a distribution over m x n matrices A, for m << n, such that for any x, given Ax, we can recover a k-sparse approximation to x under the EMD distance. We also provide an empirical evaluation of the method that, in some scenarios, shows its advantages over the "usual" recovery in the l_p norms.
The recently developed compressive sensing (CS) framework enables the design of sub-Nyquist analog-to-digital converters. Several architectures have been proposed for the acquisition of sparse signals in large swaths of bandwidth. In this paper we consider a more flexible multi-channel signal model consisting of several discontiguous channels where the occupancy of the combined bandwidth of the channels is sparse. We introduce a new compressive acquisition architecture, the compressive multiplexer (CMUX), to sample such signals. We demonstrate that our architecture is CS-feasible and suggest a simple implementation with numerous practical advantages.

Here we show that the problem of realizing a polytope with specified combinatorics is equivalent to a low rank matrix completion problem. This is comparable to known results reducing realizability to solving systems of multinomial equations and inequalities, but the conditions we give here are more simply stated. We see how this matrix is related to a matrix appearing in a similar result by D\'iaz.

Relations between $\beta$ and $\delta$ for QP and LP in Compressed Sensing Computations by Jun Zhang, Jun Wang, Guangwu Xu. The abstract reads:
In many compressed sensing applications, linear programming (LP) has been used to reconstruct a sparse signal. When observation is noisy, the LP formulation is extended to allow an inequality constraint and the solution is dependent on a parameter $\delta$, related to the observation noise level. Recently, some researchers also considered quadratic programming (QP) for compressed sensing signal reconstruction and the solution in this case is dependent on a Lagrange multiplier $\beta$. In this work, we investigated the relation between $\delta$ and $\beta$ and derived an upper and a lower bound on $\beta$ in terms of $\delta$. For a given $\delta$, these bounds can be used to approximate $\beta$. Since $\delta$ is a physically related quantity and easy to determine for an application while there is no easy way in general to determine $\beta$, our results can be used to set $\beta$ when the QP is used for compressed sensing. Our results and experimental verification also provide some insight into the solutions generated by compressed sensing.
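To make the beta/delta correspondence concrete: the QP form min_x 0.5*||Ax - b||^2 + beta*||x||_1 can be solved with a few lines of ISTA (my illustrative choice here, not the paper's method), and the residual norm ||A x(beta) - b|| it produces plays the role of delta in the constrained LP-style formulation; it grows as beta grows:

```python
import numpy as np

def ista(A, b, beta, iters=1000):
    """Solve min_x 0.5*||Ax - b||^2 + beta*||x||_1 by iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - b) / L          # gradient step on the quadratic
        x = np.sign(z) * np.maximum(np.abs(z) - beta / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 80)) / np.sqrt(30)
x0 = np.zeros(80)
x0[[3, 17, 42]] = [2.0, -1.5, 1.0]             # a 3-sparse signal
b = A @ x0 + 0.01 * rng.standard_normal(30)

# a larger beta yields a sparser solution and a larger residual ||A x(beta) - b||,
# i.e. a larger implied delta for the constrained formulation
deltas = [np.linalg.norm(A @ ista(A, b, beta) - b) for beta in (0.01, 0.1, 0.5)]
```

The monotone growth of the residual with beta is exactly why bounds relating the two parameters, as in this paper, are useful for setting beta in practice.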

The 2010 Workshop on Algorithms for Modern Massive Data Sets (MMDS 2010) was held at Stanford University, June 15--18. The goals of MMDS 2010 were (1) to explore novel techniques for modeling and analyzing massive, high-dimensional, and nonlinearly-structured scientific and Internet data sets; and (2) to bring together computer scientists, statisticians, applied mathematicians, and data analysis practitioners to promote cross-fertilization of ideas. MMDS 2010 followed on the heels of two previous MMDS workshops. The first, MMDS 2006, addressed the complementary perspectives brought by the numerical linear algebra and theoretical computer science communities to matrix algorithms in modern informatics applications; and the second, MMDS 2008, explored more generally fundamental algorithmic and statistical challenges in modern large-scale data analysis.

The part of interest to us starts at page 5 of the document. As a reminder, all the presentations made at MMDS 2010 can be found here.

A Sparser Johnson-Lindenstrauss Transform by Daniel M. Kane, Jelani Nelson. The abstract reads:
We give a Johnson-Lindenstrauss transform with column sparsity s = Theta(eps^{-1}log(1/delta)) into optimal dimension k = O(eps^{-2}log(1/delta)) to achieve distortion 1+eps with success probability 1-delta. This is the first distribution to provide an asymptotic improvement over the Theta(k) sparsity bound for all values of eps,delta. Previous work of [Dasgupta-Kumar-Sarlos, STOC 2010] gave a distribution with s = O~(eps^{-1}log^3(1/delta)), with tighter analyses later in [Kane-Nelson, CoRR abs/1006.3585] and [Braverman-Ostrovsky-Rabani, CoRR abs/1011.2590] showing that their construction achieves s = O~(eps^{-1}log^2(1/delta)). As in the previous work, our scheme only requires limited independence hash functions. In fact, potentially one of our hash functions could be made deterministic given an explicit construction of a sufficiently good error-correcting code.
Our linear dependence on log(1/delta) in the sparsity allows us to plug our construction into algorithms of [Clarkson-Woodruff, STOC 2009] to achieve the fastest known streaming algorithms for numerical linear algebra problems such as approximate linear regression and best rank-k approximation. Their reductions to the Johnson-Lindenstrauss lemma require exponentially small delta, and thus a superlinear dependence on log(1/delta) in s leads to significantly slower algorithms.
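As a reminder of what column sparsity buys, here is a toy sparse JL embedding with s nonzero entries of value ±1/sqrt(s) per column. This is the generic construction with made-up parameters, not the authors' exact hash-based scheme:

```python
import numpy as np

def sparse_jl(n, k, s, rng):
    """A k x n embedding with exactly s nonzeros per column, entries ±1/sqrt(s)."""
    A = np.zeros((k, n))
    for j in range(n):
        rows = rng.choice(k, size=s, replace=False)                # s hashed positions
        A[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)  # random signs
    return A

rng = np.random.default_rng(0)
n, k, s = 1000, 256, 4
A = sparse_jl(n, k, s, rng)
x = rng.standard_normal(n)
distortion = abs(np.linalg.norm(A @ x) / np.linalg.norm(x) - 1.0)
```

Multiplying by A costs only s operations per input coordinate instead of k, which is the whole point of making the transform sparse.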
We consider the eigenvalues and eigenvectors of finite, low rank perturbations of random matrices. Specifically, we prove almost sure convergence of the extreme eigenvalues and appropriate projections of the corresponding eigenvectors of the perturbed matrix for additive and multiplicative perturbation models. The limiting non-random value is shown to depend explicitly on the limiting eigenvalue distribution of the unperturbed random matrix and the assumed perturbation model via integral transforms that correspond to very well known objects in free probability theory that linearize non-commutative free additive and multiplicative convolution. Furthermore, we uncover a phase transition phenomenon whereby the large matrix limit of the extreme eigenvalues of the perturbed matrix differs from that of the original matrix if and only if the eigenvalues of the perturbing matrix are above a certain critical threshold. Square root decay of the eigenvalue density at the edge is sufficient to ensure that this threshold is finite. This critical threshold is intimately related to the same aforementioned integral transforms and our proof techniques bring this connection and the origin of the phase transition into focus. Consequently, our results extend the class of 'spiked' random matrix models about which such predictions (called the BBP phase transition) can be made well beyond the Wigner, Wishart and Jacobi random ensembles found in the literature. We examine the impact of this eigenvalue phase transition on the associated eigenvectors and observe an analogous phase transition in the eigenvectors. Various extensions of our results to the problem of non-extreme eigenvalues are discussed.

Image Credit: NASA/JPL/Space Science Institute, W00065998.jpg was taken on December 24, 2010 and received on Earth December 27, 2010. The camera was pointing toward SATURN at approximately 1,793,709 kilometers away, and the image was taken using the MT3 and CL2 filters.

## Monday, December 27, 2010

### CS: IBAS, Superselectors, Compressive sensing for feedback reduction in MIMO broadcast channels, Reduced-Dimension Multiuser Detection, Efficient reconstruction method for L1 regularization in fluorescence molecular tomography, icalp2011gt

Want to read about a compressive sensing system used to detect unexpected events? Here is one: the INTEGRAL spacecraft has several coded aperture cameras on board that are used to detect Gamma Ray Bursts. An arxiv summary of the system is here. A longer version can be found in Diego Götz's thesis, A study of Gamma-Ray Bursts and Soft Gamma Repeaters Detected with the INTEGRAL Burst Alert System. The INTEGRAL Burst Alert System webpage is here. The reconstruction is currently performed using traditional linear convolutions, which do not provide superresolution. The alert system relies on detection performed on the compressed measurements first. When those measurements reach a threshold, this triggers additional processing, which includes the deconvolution and a comparison between the reconstructed image and the prior knowledge we have about that part of the sky.
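The traditional linear-convolution decoding mentioned above amounts to correlating the detector image with the mask pattern. Here is a minimal 1-D toy (a random ±1 pattern instead of a real 0/1 URA mask, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
mask = rng.choice([-1.0, 1.0], size=n)  # toy aperture pattern (a real URA is 0/1)
sky = np.zeros(n)
sky[17] = 10.0                          # one gamma-ray point source

# detector counts: circular convolution of the sky with the mask pattern
det = np.real(np.fft.ifft(np.fft.fft(sky) * np.fft.fft(mask)))

# correlation decoding: cross-correlate the detector image with the mask;
# the mask's sharp autocorrelation turns the source back into a peak
recon = np.real(np.fft.ifft(np.fft.fft(det) * np.conj(np.fft.fft(mask))))
source_pos = int(np.argmax(recon))
```

The detector image looks like noise, yet the peak of the correlation sits exactly at the source position, with height equal to the source intensity times the mask autocorrelation at zero lag.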

Ferdinando Cicalese recently made a presentation in Slovenia entitled: Superselectors: Efficient Constructions and Applications
Superimposed codes represent the main tool for the efficient solution of several problems arising in compressed sensing, cryptography and data security, computational biology, multi-access communication, database theory, pattern matching, distributed colouring, and circuit complexity, among others.

It has also become apparent that combinatorial structures strictly related to superimposed codes lie at the heart of an even vaster series of problems. E.g., selectors were instrumental in obtaining fast broadcasting algorithms in radio networks, (p,k,n)-selectors were the basic tool for the first two-stage group testing algorithm with an information-theoretically optimal number of tests, and (d,\ell)-disjunct matrices were a crucial building block for efficiently decodable non-adaptive group testing procedures.

We shall focus on a new combinatorial structure, superselectors, which encompasses and unifies all of the combinatorial structures mentioned above (and more). When appropriately instantiated, superselectors asymptotically match the best known constructions of (p,k,n)-selectors, (d,l)-list-disjunct matrices, monotone encodings and (k, alpha)-FUT families, and MUT_k(r)-families for the multi-access channel.

We shall show some implementations of superselectors which provide optimal approximate group testing schemes and quasi-optimal additive group testing schemes.

Slides from the lecture are available here
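To get a flavor of what these combinatorial structures buy you, here is the simplest nonadaptive group testing decoder, the COMP rule: any item appearing in a negative pool cannot be defective. All sizes below are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, T = 100, 2, 100             # items, defectives, pooled tests (made-up sizes)
defective = {12, 57}

# random pooling design: pools[t, j] is True if item j takes part in test t
pools = rng.random((T, n)) < 0.3
positive = np.array([bool(row[list(defective)].any()) for row in pools])

# COMP decoding: any item that appears in at least one negative test is cleared
candidates = set(range(n))
for row, pos in zip(pools, positive):
    if not pos:
        candidates -= set(np.flatnonzero(row))
```

With enough tests, the surviving candidate set is exactly the defective set; the fancier structures in the talk are about guaranteeing this with as few tests as information theory allows.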
Meanwhile, on the 23rd, there was a seminar at Tsinghua:
Group: Theory Lunch
Title: Instance Compression for NP Problems/Compressive Data Gathering/Deployment Problem in the Wireless Sensor Networks
Speaker: Bangsheng Tang, Liwen Xu, Xiaohong Hao
Date: 12:00 -- 13:30 Dec 23, 2010
Venue: FIT 1-222
Abstract:

Title: Instance Compression for NP problems
Abstract: The OR-SAT problem asks, given m CNFs, whether at least one of them is satisfiable. In the work of Fortnow and Santhanam, it is proved that OR-SAT cannot be reduced to any set A where the length is bounded by a polynomial in n, unless PH collapses. This result implies upper bounds for several kernelization problems and for the size of PCPs for SAT, and also implies that there are no subexponential-size hard sets for NP. This is a follow-up of Harnik and Naor's work, later followed by Dell and van Melkebeek. If time permits, I will try to explain some possible directions for extending this work.
Title: Compressive Data Gathering

Abstract: Compressive sensing is a new technique in signal processing that collects a low-dimensional projection of a high-dimensional but sparse signal and recovers the original signal by adopting l1-norm optimization, which takes the place of the l0-norm optimization that turns out to be NP-hard. I will talk about a new scheme for data gathering in wireless sensor networks that uses compressive sensing distributedly, and show some newly observed properties of matrices that satisfy the RIP (Restricted Isometry Property).
Title: Deployment problem in the Wireless Sensor Networks
Abstract: Coverage is critical for wireless sensor networks to monitor a region of interest and provide a good quality of service. Here we have two descriptions of the problem. One is to place the minimum number of sensors that achieves the coverage requirement for a region with obstacles. This problem is NP-hard. I will introduce an algorithm based on deployment patterns and triangulation from computational geometry. The other description is: for a given number of sensors, maximize the sensor field coverage. I will present a Virtual Force algorithm as a sensor deployment strategy to enhance the coverage after an initial random placement of sensors.
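The data gathering idea in the second talk is easy to simulate: instead of relaying raw readings, each sensor along the route adds its reading times its own random weight vector to m running sums, so the sink ends up holding m random projections of all n readings and can decode the sparse field with any CS solver. A sketch of the accumulation step (seeding each sensor's weights by its id is my assumption, made for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 12                     # sensors along the route, messages per round
readings = np.zeros(n)
readings[[7, 31]] = [3.0, -1.2]   # a sparse field: only two sensors see anything

# each sensor i derives its own weight column from its id and adds
# weights_i * reading_i to the m running sums forwarded along the route
sums = np.zeros(m)
for i in range(n):
    weights_i = np.random.default_rng(i).standard_normal(m)  # sensor-local weights
    sums += weights_i * readings[i]

# the sink regenerates the full matrix Phi from the sensor ids, so it holds
# sums == Phi @ readings: m random projections ready for any CS decoder
Phi = np.column_stack([np.random.default_rng(i).standard_normal(m)
                       for i in range(n)])
```

The communication win is that every node transmits m numbers regardless of where the events are, instead of the worst-case flood of raw readings toward the sink.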
Today, we have three papers:
Compressive sensing for feedback reduction in MIMO broadcast channels by Syed T. Qaseem, Tareq Y. Al-Naffouri, Mohammed E. Eltayeb and Hamid Reza Bahrami. The abstract reads:
We propose a generalized feedback model and compressive sensing based opportunistic feedback schemes for feedback resource reduction in MIMO Broadcast Channels under the assumption that both uplink and downlink channels undergo block Rayleigh fading. Feedback resources are shared and are opportunistically accessed by users who are strong, i.e. users whose channel quality information is above a certain fixed threshold. Strong users send the same feedback information on all shared channels. They are identified by the base station via compressive sensing. Both analog and digital feedbacks are considered. The proposed analog & digital opportunistic feedback schemes are shown to achieve the same sum-rate throughput as that achieved by dedicated feedback schemes, but with feedback channels growing only logarithmically with number of users. Moreover, there is also a reduction in the feedback load. In the analog feedback case, we show that the proposed scheme reduces the feedback noise which eventually results in better throughput, whereas in the digital feedback case the proposed scheme in a noisy scenario achieves almost the throughput obtained in a noiseless dedicated feedback scenario. We also show that for a given fixed budget of feedback bits, there exists a trade-off between the number of shared channels and the accuracy of the fed-back SNR thresholds.
We present a new framework for reduced-dimension multiuser detection (RD-MUD) that trades off complexity for bit-error-rate (BER) performance. This approach can significantly reduce the number of matched filter branches required by classic multiuser detection designs. We show that RD-MUD can perform similarly to the linear MUD detector when M is sufficiently large relative to N and K, where N and K are the number of total and active users, respectively. We also study the inherent RD-MUD tradeoff between complexity (the number of correlating signals) and BER performance. This leads to a new notion of approximate sufficient statistics, whereby sufficient statistics are approximated to reduce complexity at the expense of some BER performance loss.
This one is behind a paywall: Efficient reconstruction method for L1 regularization in fluorescence molecular tomography by Dong Han, Xin Yang, Kai Liu, Chenghu Qin, Bo Zhang, Xibo Ma, and Jie Tian. The abstract reads:
Fluorescence molecular tomography (FMT) is a promising technique for in vivo small animal imaging. In this paper, the sparsity of the fluorescent sources is considered as the a priori information and is promoted by incorporating L1 regularization. Then a reconstruction algorithm based on stagewise orthogonal matching pursuit is proposed, which treats the FMT problem as the basis pursuit problem. To evaluate this method, we compare it to the iterated-shrinkage-based algorithm with L1 regularization. Numerical simulations and physical experiments show that the proposed method can obtain comparable or even slightly better results. More importantly, the proposed method was at least 2 orders of magnitude faster in these experiments, which makes it a practical reconstruction algorithm.
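For readers who have not met stagewise orthogonal matching pursuit: its ancestor, plain OMP, greedily picks the column most correlated with the residual and re-solves a least-squares problem on the selected support. A generic noiseless sketch (deliberately not the paper's stagewise variant, and with made-up sizes):

```python
import numpy as np

def omp(A, y, k):
    """Plain orthogonal matching pursuit for a k-sparse signal (noiseless toy)."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))  # best-matching column
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef                     # re-fit on the support
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200)) / np.sqrt(50)
x0 = np.zeros(200)
x0[[5, 60, 123]] = [1.0, -2.0, 0.5]
x_hat = omp(A, A @ x0, 3)
```

The stagewise variant used in the paper accepts many columns per iteration instead of one, which is where the reported two-orders-of-magnitude speed-up comes from.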

There is a call for papers for icalp2011gt:
Algorithms and Data Structures for selection, identification and encoding.
Group testing, compressed sensing, multi access communications and more.

SCOPE

The workshop aims at bringing together researchers to exchange ideas and results related to the theoretical and practical aspects of group testing, compressed sensing and combinatorial identification in a broad sense. Papers presenting uses of group testing and selection primitives in pattern matching, data structures for static membership, communication protocols, cryptographic protocols, streaming computation, bioinformatics and computational biology, and compressed sensing, as well as papers focusing on theoretical aspects of combinatorial structures for identification and coding, such as randomness extractors, superimposed codes, list-decodable codes, selectors, and the like, are welcome.

All the above topics are open to both research and industry contributions. Papers reporting on original research unpublished elsewhere are primarily sought. Surveys of important results, especially recent ones, are also invited.

## Sunday, December 26, 2010

### CS: Moore's Law and Compressed Sensing Algorithm Development

A year ago, I wrote the following entry: Are We Hitting a Ceiling? But then this morning, I read about Moore's law and algorithm development. Here is roughly what happened in compressed sensing:

The 2004 finding of Candès, Tao, Romberg and Donoho essentially dropped an NP-hard problem into the P domain. Since then, several teams and algorithms have been working at reducing the time needed to reconstruct sparse signals. If we follow the numbers given in the report, it looks like, had we relied on Moore's law alone, we would be about two orders of magnitude off. I stopped at 2008, as we don't really have a good benchmark to compare all the new solvers. Another thing of interest is the appearance of GPU computing as a commodity. My rule of thumb is that GPU or multicore computing only brings about an order of magnitude improvement, but I want to be convinced otherwise.

Anyway, good job y'all!

(From here)

## Saturday, December 25, 2010

### Does Anything Happen at Random?

Today, you get to learn a magic trick, find out about a phase transition, and discover that there is a business case to be made for randomizing decks of cards in Vegas. Enjoy!

And here is a podcast on Coincidences (1998) from Persi Diaconis.

## Friday, December 24, 2010

### CS: Calibration for Ultrasound Breast Tomography Using Matrix Completion

As most of you know, I pay special attention to the issue of calibration, so this paper really looks like a very interesting way to go about it: Calibration for Ultrasound Breast Tomography Using Matrix Completion by Reza Parhizkar, Amin Karbasi, Sewoong Oh, Martin Vetterli. The abstract reads:
We study the calibration problem in circular ultrasound tomography devices for breast imaging, where the sensor positions deviate from the circumference of a perfect circle. We introduce a new method of calibration based on the time-of-flight (ToF) measurements between sensors when the enclosed medium is homogeneous. In the presence of all the pairwise ToFs, one can estimate the sensor positions using the multi-dimensional scaling (MDS) method. In practice, however, we are facing two major sources of loss. One is due to the transitional behaviour of the sensors and the beam form of the transducers, which makes the ToF measurements for close-by sensors unavailable. The other is due to the random malfunctioning of the sensors, which leads to random missing ToF measurements. On top of the missing entries, in practice an unknown time delay is also added to the measurements. In this work, we show that a matrix defined from all the ToF measurements is of rank at most four. In order to estimate the missing ToFs, we apply a state-of-the-art low-rank matrix completion algorithm, OPTSPACE. Then we use MDS in order to find the correct positions of the sensors. To confirm the functionality of our method in practice, simulations mimicking the measurements of a circular ultrasound tomography device are performed.
The use of low rank matrix completion in this problem reminded me of the use of new techniques like isometric embedding in: Looking Around the Corner using Transient Imaging by Ahmed Kirmani, Tyler Hutchison, James Davis, and Ramesh Raskar, as mentioned a month ago. Both problems involve time-of-flight measurements, but that is where the shared physics ends.
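For readers curious about the mechanics, here is a small numerical sketch of the completion-then-MDS pipeline from the abstract. Everything is illustrative: the rank-4 property holds for the squared ToF matrix of sensors in the plane (ToF is proportional to distance in a homogeneous medium), the completion step uses a simple iterative SVD projection as a stand-in for OptSpace, and the sensor geometry is simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
# sensors near a unit circle, with small radial perturbations
ang = np.linspace(0, 2 * np.pi, n, endpoint=False)
r = 1.0 + 0.05 * rng.standard_normal(n)
X = np.c_[r * np.cos(ang), r * np.sin(ang)]

# squared pairwise "ToF" matrix (homogeneous medium: ToF ~ distance);
# for sensors in the plane this matrix has rank at most 4
G = X @ X.T
D2 = np.add.outer(np.diag(G), np.diag(G)) - 2 * G

# hide ~30% of the entries at random (symmetric mask, diagonal kept)
M = rng.random((n, n)) < 0.7
M = np.triu(M, 1)
M = M | M.T
np.fill_diagonal(M, True)

# complete by iterative rank-4 SVD projection (a simple stand-in for OptSpace)
Z = np.where(M, D2, 0.0)
for _ in range(200):
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    Z = (U[:, :4] * s[:4]) @ Vt[:4]
    Z[M] = D2[M]          # re-enforce the observed entries

# classical MDS: double-center, keep the top-2 eigenpairs
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ Z @ J
w, V = np.linalg.eigh(B)
Xhat = V[:, -2:] * np.sqrt(w[-2:])   # positions, up to rotation/reflection

# sanity check: the recovered pairwise distances match the true ones
D2hat = np.add.outer((Xhat**2).sum(1), (Xhat**2).sum(1)) - 2 * Xhat @ Xhat.T
print(np.abs(D2hat - D2).max())
```

The recovered positions are only determined up to a rigid motion, which is why the check compares distances rather than coordinates.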

From the same authors, we also have these two conference papers:

## Thursday, December 23, 2010

### CS: What's up? Are we in the 100% or the 0.01% improvement business? Two examples: Turbo-AMP and an illumination camera.

Nearly a year ago, I was asking if we were hitting a ceiling for reconstruction solvers. Well, it looks like we have a serious contender in Turbo-AMP, as presented by Phil Schniter in his recent presentation entitled: Turbo-AMP: A Graphical-Models Approach to Compressive Inference

The only one I really know is IRWL1, which is pretty slow in the first place, so a tenfold improvement over a slow solver is probably not much to brag about. But AMP, as the presentation shows, can incorporate assumptions about the signal being sought beyond sparsity. So if speed is not the main discriminator, being able to make more assumptions about the signal looks like the way to go. For pure kicks, I'd love to see a comparison between Turbo-AMP and GPSR or SPGL1. Obviously, we would not be having this conversation if something along the lines of an automatic benchmarking tool existed in the ethers, eh? Anyway, this is a very interesting development in an area thought to have flatlined. I recall being excited about AMP, and this new development (Turbo-AMP) feels just as promising.
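To make the AMP part of the discussion concrete, here is a minimal sketch of the plain (non-turbo) AMP iteration with soft thresholding, in the style of Donoho, Maleki and Montanari. The threshold rule and all parameter values are illustrative choices of mine, not taken from Phil's presentation; the Onsager correction term is what distinguishes AMP from ordinary iterative thresholding.

```python
import numpy as np

def soft(u, t):
    """Soft threshold: the proximal map of the l1 norm."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp(A, y, iters=50, alpha=1.2):
    """Plain AMP with soft thresholding. A is assumed to have
    i.i.d. N(0, 1/m) entries (m rows)."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        tau = alpha * np.linalg.norm(z) / np.sqrt(m)  # empirical threshold
        x_new = soft(x + A.T @ z, tau)
        # Onsager correction: residual times the fraction of active
        # coordinates, scaled by 1/delta = n/m
        onsager = (z / m) * np.count_nonzero(x_new)
        z = y - A @ x_new + onsager
        x = x_new
    return x

rng = np.random.default_rng(1)
m, n, k = 250, 500, 25
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x0

xhat = amp(A, y)
print(np.linalg.norm(xhat - x0) / np.linalg.norm(x0))  # relative error
```

Turbo-AMP replaces the scalar soft-threshold denoiser with messages from a graphical model over the signal, which is exactly the "more assumptions than sparsity" point above.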

In a totally different direction, I am not quite sure what to make of this paper: An Active Illumination Single-Pixel Camera Based on Compressive Sensing by Filipe Magalhaes, Francisco Araujo, Miguel Correia, Mehrdad Abolbashari, and Faramarz Farahi. The abstract reads:
In this paper an optical imaging system based on compressive sensing (CS) is presented along with its principal mathematical aspects. Although CS is undergoing significant advances and empowering many discussions and applications throughout various fields, this article focuses on the analysis of a single-pixel camera. This work was the core for the development of a single-pixel camera approach based on active illumination. Therefore, the active illumination concept is described along with experimental results, which were very encouraging towards the development of compressive sensing based cameras for various applications, such as pixel-level programmable gain imaging.
I believe this is the first publication on an active-illumination camera, though I thought the Arizona folks had already done something similar. I recall that while the Rice folks featured one at a conference, they did not seem to have a publication on it. Anyway, I'll add it to the compressive sensing hardware page.

## Tuesday, December 21, 2010

### CS: The Nuit Blanche Chronicles, 50 Ways to Sample your Signal, Video Compressed Sensing with Multihypothesis, Fast orthogonal sparse approximation algorithms over local dictionaries, a postdoc

The Nuit Blanche Chronicles is a 1,576-page, 52 MB PDF file that features more than 900 entries related to compressive sensing written on the blog between January 1, 2008 and December 20, 2010. I did this because it has become cumbersome to do a search in the blog (the Google search feature does not work well anymore). The text of the abstracts is not well delimited, the table of contents is large, and the titles of the entries are truncated, but it could be useful for those of you macchiato-latte-drinking-iPad-reading yuppies. Get it here.

Today we also have a 50 MB presentation entitled: Sampling Theory and Practice: 50 Ways to Sample your Signal by Martin Vetterli.

But also three new papers:

Frequency extrapolation by nonconvex compressive sensing by Rick Chartrand, Emil Y. Sidky and Xiaochuan Pan. The abstract reads:
Tomographic imaging modalities sample subjects with a discrete, finite set of measurements, while the underlying object function is continuous. Because of this, inversion of the imaging model, even under ideal conditions, necessarily entails approximation. The error incurred by this approximation can be important when there is rapid variation in the object function or when the objects of interest are small. In this work, we investigate this issue with the Fourier transform (FT), which can be taken as the imaging model for magnetic resonance imaging (MRI) or some forms of wave imaging. Compressive sensing has been successful for inverting this data model when only a sparse set of samples are available. We apply the compressive sensing principle to a somewhat related problem of frequency extrapolation, where the object function is represented by a super-resolution grid with many more pixels than FT measurements. The image on the super-resolution grid is obtained through nonconvex minimization. The method fully utilizes the available FT samples, while controlling aliasing and ringing. The algorithm is demonstrated with continuous FT samples of the Shepp-Logan phantom with additional small, high-contrast objects.
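The nonconvex minimization in the abstract can be illustrated with Chartrand-style p-shrinkage inside an iterative thresholding loop. This is only a toy sketch under simplifying assumptions of my own: it uses random Fourier samples rather than the paper's low-frequency set, a spike signal instead of the Shepp-Logan phantom, and illustrative parameter values with a decreasing-threshold continuation.

```python
import numpy as np

def p_shrink(u, lam, p):
    """p-shrinkage thresholding; p = 1 reduces to ordinary soft thresholding,
    p < 1 shrinks large coefficients less (a nonconvex penalty)."""
    mag = np.abs(u)
    with np.errstate(divide='ignore'):
        shrunk = np.maximum(mag - lam ** (2 - p) * mag ** (p - 1), 0.0)
    return shrunk * np.exp(1j * np.angle(u))

rng = np.random.default_rng(2)
n, m, k = 256, 80, 10
F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary DFT on the fine grid
rows = rng.choice(n, m, replace=False)
A = F[rows]                              # m random Fourier samples

x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], k)
y = A @ x0

# iterative thresholding with p-shrinkage and threshold continuation
x = np.zeros(n, dtype=complex)
lam = 0.5
for _ in range(300):
    x = p_shrink(x + A.conj().T @ (y - A @ x), lam, p=0.5)
    lam = max(0.97 * lam, 1e-4)

print(np.linalg.norm(x.real - x0) / np.linalg.norm(x0))  # relative error
```

The step size of 1 is valid because the rows of a unitary DFT matrix are orthonormal, so the forward operator has spectral norm one.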
Fast orthogonal sparse approximation algorithms over local dictionaries by Boris Mailhe, Remi Gribonval, Pierre Vandergheynst, Frederic Bimbot. The abstract reads:
In this work we present a new greedy algorithm for sparse approximation called LocOMP. LocOMP is meant to be run on local dictionaries made of atoms with much shorter supports than the signal length. This notably encompasses shift-invariant dictionaries and time-frequency dictionaries, be they monoscale or multiscale. In this case, very fast implementations of Matching Pursuit are already available. LocOMP is almost as fast as Matching Pursuit while approaching the signal almost as well as the much slower Orthogonal Matching Pursuit.
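As a point of reference for what LocOMP is being compared against, here is a minimal plain Matching Pursuit over a generic dictionary. LocOMP itself adds local orthogonal projections over short-support atoms, which this sketch does not reproduce; the dictionary and signal below are illustrative.

```python
import numpy as np

def matching_pursuit(D, y, n_iter):
    """Plain Matching Pursuit over a dictionary D with unit-norm columns.
    Each step picks the atom most correlated with the residual and
    subtracts its contribution; the residual norm never increases."""
    r = y.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ r
        j = int(np.argmax(np.abs(corr)))
        coeffs[j] += corr[j]
        r -= corr[j] * D[:, j]
    return coeffs, r

rng = np.random.default_rng(3)
m, natoms = 64, 256
D = rng.standard_normal((m, natoms))
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
y = 2.0 * D[:, 5] - 1.5 * D[:, 100]      # a 2-sparse synthesis
coeffs, r = matching_pursuit(D, y, n_iter=50)
print(np.linalg.norm(r))                 # residual norm after 50 steps
```

OMP would re-project y onto all selected atoms at every step (one least-squares solve per iteration), which is what makes it slower and what LocOMP approximates cheaply on local dictionaries.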
Video Compressed Sensing with Multihypothesis by Eric Tramel and Jim Fowler. The abstract reads:
The compressed-sensing recovery of video sequences driven by multihypothesis predictions is considered. Specifically, multihypothesis predictions of the current frame are used to generate a residual in the domain of the compressed-sensing random projections. This residual is typically more compressible than the original frame, leading to improved reconstruction quality. To appropriately weight the hypothesis predictions, a Tikhonov regularization of an ill-posed least-squares optimization is proposed. This method is shown to outperform both independent recovery of each frame and recovery based on single-hypothesis prediction.
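The Tikhonov-weighted combination of hypotheses has a simple closed form, sketched below on synthetic data. The weighting matrix (built from each hypothesis's measurement-domain residual), the regularization constant, and the problem sizes are illustrative guesses of mine, not the paper's exact choices.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, K = 64, 20, 8     # block size, measurements, number of hypotheses
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

x = rng.standard_normal(n)                           # current block
H = x[:, None] + 0.3 * rng.standard_normal((n, K))   # noisy hypotheses
y = Phi @ x                                          # compressive measurements

A = Phi @ H    # hypotheses projected into the measurement domain

# Tikhonov weights: minimize ||y - A w||^2 + lam^2 ||Gamma w||^2,
# where Gamma penalizes hypotheses that explain y poorly
gamma = np.linalg.norm(y[:, None] - A, axis=0)
lam = 0.1
w = np.linalg.solve(A.T @ A + lam**2 * np.diag(gamma**2), A.T @ y)

pred = H @ w   # multihypothesis prediction of the block
print(np.linalg.norm(x - pred) / np.linalg.norm(x))      # combined error
print(np.linalg.norm(x - H[:, 0]) / np.linalg.norm(x))   # one hypothesis alone
```

The point of the regularizer is that the least-squares system is ill-posed when hypotheses are similar; down-weighting poorly-fitting hypotheses stabilizes the solve.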

and finally a job offer:
Postdocs in Signal Processing, Sensor Networks, and Compressive Sensing, Stevens Institute of Technology
* Date Posted: Dec. 20, 2010
* Job Title: Postdocs in Signal Processing, Sensor Networks, and Compressive Sensing
* Department: Electrical and Computer Engineering
* Institution: Stevens Institute of Technology, Hoboken, NJ
* Application Deadline: Open until filled
* Position Start Date: Available immediately
* Apply By E-mail: Hongbin.Li@stevens.edu
* Job Categories: Post-Doc
* Academic Fields: Electrical and/or Electronics, Computer Engineering, Computer Science, Engineering - General

We seek multiple outstanding post-doctoral associates to work in the following areas:

1) statistical signal processing with emphasis on space-time adaptive processing (STAP) and multi-input multi-output (MIMO) radars

2) wireless sensor networks (distributed detection/estimation/computing, consensus, etc.)

3) compressive sensing.

The appointment is for two years starting immediately and may be renewed after the two-year period. A PhD degree in electrical engineering, computer engineering, applied math/statistics, or a related field is required. A strong publication record in recognized journals is highly desirable.

To apply, send your CV, including a list of publications and references, to (email preferred)

Dr. Hongbin Li, Professor
Department of Electrical and Computer Engineering
Stevens Institute of Technology
Hoboken, NJ 07030, USA
Phone: (201) 216-5604; Fax: (201) 216-8246
E-mail: Hongbin.Li@stevens.edu

Founded in 1870, Stevens is a premier private coeducational institution offering baccalaureate, master's, and doctoral degrees in engineering, science, and management. The school is located in Hoboken, New Jersey, a historic town just minutes away from Manhattan, New York City.

* EEO/AA Policy
Stevens is an affirmative action/ equal opportunity employer.

### The Monday Nuit Blanche Mailbag: A need for an automatic benchmarking tool ?

Here is what I received in my mailbox recently. The author wants to remain anonymous:
Dear Igor.

Since your blog has become the go-to place for all information related to CS, it would help if you could maintain an up-to-date comparison of all the recovery algorithms (OMP family/Basis Pursuit/ ???) that are being proposed. This could take the form of a table that includes error bounds, time and space complexity, a chart showing the phase transitions for these algorithms, links to papers, source code, etc.

The chart could compare algorithm performance for standard matrices (eg Gaussian or Bernoulli) with respect to a single set of parameters (sparsity, undersampling ratio, signal type etc).

This would be a great help to anyone who wants to compare their work with the state-of-the-art, which is rapidly changing. Also, someone might just want to pick the best algorithm for a particular application, without being aware of all the work done in the field.

I am aware that individual papers do include such comparisons, but they go out of date quickly, or are not comprehensive enough.

Two thoughts come to mind:
• One of you already had something along these lines running in the cloud; for the moment it has remained private.
• If this is just for greedy algorithms, there is a way to avoid relying on Matlab, so my question really is: how much are people willing to pay for the convenience of finding out these results? I can set up a cloud-based solution for this problem, but are there customers?
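In the meantime, here is the kind of minimal harness such a tool would automate: sweep the undersampling and sparsity parameters, run a solver on random problem instances, and record the empirical exact-recovery rate (the raw material of a phase-transition chart). OMP is written out here only as an example solver; the parameter values are illustrative.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit, stopping after k selected atoms."""
    r, idx = y.copy(), []
    sol = np.zeros(0)
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ r))))
        sol, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ sol
    x = np.zeros(A.shape[1])
    x[idx] = sol
    return x

def success_rate(solver, n=100, delta=0.5, rho=0.2, trials=20, tol=1e-4):
    """Empirical exact-recovery rate at undersampling delta = m/n and
    sparsity rho = k/m, over Gaussian sensing matrices."""
    rng = np.random.default_rng(0)
    m = int(delta * n)
    k = max(1, int(rho * m))
    ok = 0
    for _ in range(trials):
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x0 = np.zeros(n)
        x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        if np.linalg.norm(solver(A, A @ x0, k) - x0) < tol * np.linalg.norm(x0):
            ok += 1
    return ok / trials

print(success_rate(omp, rho=0.1))   # easy regime: should be near 1
print(success_rate(omp, rho=0.6))   # hard regime: should be near 0
```

A real benchmarking service would run this grid over many solvers on standardized matrices and publish the resulting charts, which is essentially what the reader is asking for.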
Relevant posts where this issue of benchmarks was mentioned:

### You're in the middle of an awesome experiment

In about an hour, at 4:53 AM Central Standard Time, you can shoot the Moon (provided the cloud cover cooperates), and the blueish color on the Moon is going to be a reflection of the troposphere above you.

More photos can be found on this Flickr stream.

First: Graphic: Fred Espenak/NASA's Goddard Space Flight Center, Path of the Moon through Earth's umbral and penumbral shadows during the Total Lunar Eclipse of Dec. 21, 2010.

Second Image above: This image shows the last bit of light creeping away as the Moon slips completely into the heart of the Earth's shadow during the Lunar Eclipse on August 28, 2007. The blue color in this image is caused by the Earth's ozone layer, which gives our planet's shadow a turquoise-colored fringe. This image of the Moon was taken by Brian Karczewski of Hemet, California. Credit: Brian Karczewski/SpaceWeather.com

## Monday, December 20, 2010

### CS: Autonomous Group Testing for FPGAs, Workshop on Random Matrices, Information Theory and Applications

This is probably a little old but here it is: Self-Healing Reconfigurable Logic using Autonomous Group Testing by Carthik Sharma, Ronald DeMara and Alireza Sarvi. The abstract reads:
A self-healing, self-organizing evolvable hardware system is developed using SRAM-based reconfigurable Field Programmable Gate Arrays (FPGAs) and group testing principles. It employs adaptive group testing techniques to autonomously maintain resource viability information as an organic means of transient and permanent fault resolution. Reconfigurability of the SRAM-based FPGA is leveraged to identify logic resource faults which are successively excluded by group testing using alternate device configurations. This simplifies the system architect’s role to definition of functionality using a high-level Hardware Description Language (HDL) and system-level performance versus availability operating point. System availability, throughput, and mean time to isolate faults are monitored and maintained using an Observer-Controller model. Dedicated test vectors are unnecessary as the algorithm operates on the output response produced for real-time operational inputs. Results are demonstrated using a Data Encryption Standard (DES) core that occupies approximately 305 FPGA slices on a Xilinx Virtex-II Pro FPGA. With a single simulated stuck-at-fault, the system identifies a completely validated replacement configuration within three to five positive tests. Results also include approaches for optimizing population size, resource redundancy, and availability. The approach demonstrates a readily-implemented yet robust organic hardware application framework featuring a high degree of autonomous self-control.
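The adaptive part of the scheme is classical group testing. Here is a toy sketch of binary-splitting search for a single faulty resource; the oracle function stands in for loading an alternate configuration on the FPGA and observing whether the output response is wrong, and the slice count is borrowed from the abstract's DES example.

```python
def locate_fault(n, is_faulty_group):
    """Adaptive binary-splitting group test: find the single bad index
    among n items using at most ceil(log2(n)) group tests. The oracle
    is_faulty_group(group) returns True iff the group contains the fault."""
    lo, hi = 0, n
    tests = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        tests += 1
        if is_faulty_group(range(lo, mid)):   # does this half contain the fault?
            hi = mid
        else:
            lo = mid
    return lo, tests

# toy fabric of 305 slices with one stuck-at fault at slice 137
bad = 137
oracle = lambda group: bad in group
found, tests = locate_fault(305, oracle)
print(found, tests)   # finds slice 137 in at most ceil(log2(305)) = 9 tests
```

The paper's version is richer (multiple faults, availability constraints, real reconfiguration cost), but the logarithmic test count is the core reason group testing pays off over testing slices one by one.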

On December 13th, there was the Fourth EPFL-UPEMLV Workshop on Random Matrices, Information Theory and Applications in Paris. The program and some slides are listed below:

## Wednesday, December 15, 2010
