Monday, October 31, 2011

SpaRCS: Recovering Low-rank and Sparse matrices from Compressive Measurements

Aswin C. Sankaranarayanan sent me the following:

Hi Igor
Here is a link to a paper that is to appear at NIPS this year.
The paper is on recovering sparse and low-rank matrices from compressive measurements of their sum. It is similar to Robust PCA, but with compressive measurements of the data matrix.
warm regards,
-as

We consider the problem of recovering a matrix M that is the sum of a low-rank matrix L and a sparse matrix S from a small set of linear measurements of the form y = A(M) = A(L + S). This model subsumes three important classes of signal recovery problems: compressive sensing, affine rank minimization, and robust principal component analysis. We propose a natural optimization problem for signal recovery under this model and develop a new greedy algorithm called SpaRCS to solve it. SpaRCS inherits a number of desirable properties from the state-of-the-art CoSaMP and ADMiRA algorithms, including exponential convergence and efficient implementation. Simulation results with video compressive sensing, hyperspectral imaging, and robust matrix completion data sets demonstrate both the accuracy and efficacy of the algorithm.
The extended version is here. The code can be downloaded here and is going to be featured on the Matrix Factorization Jungle. Rich provides additional insight here.
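To fix ideas, here is a toy instance of the measurement model in the abstract, written as a Matlab sketch of my own (the sizes and the dense Gaussian operator are illustrative assumptions, not code from the SpaRCS package):

% Toy instance of y = A(M) = A(L + S): a low-rank plus sparse matrix
% observed through a dense Gaussian measurement operator.
n = 32; r = 2; K = 40; p = 400;    % matrix size, rank, sparsity, # measurements
L = randn(n, r) * randn(r, n);     % rank-r component
S = zeros(n, n);
idx = randperm(n^2);
S(idx(1:K)) = randn(K, 1);         % K-sparse component
Phi = randn(p, n^2) / sqrt(p);     % the linear map A, acting on vec(M)
y = Phi * (L(:) + S(:));           % the compressive measurements

SpaRCS then estimates L and S jointly from y and Phi, with CoSaMP-style steps for the sparse part and ADMiRA-style steps for the low-rank part.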

Thanks Aswin!

Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

A* Orthogonal Matching Pursuit papers and code implementation

I just received this email from Nazim Burak Karahanoglu:
....
We have recently launched the AStarOMP software for compressed sensing signal recovery via the A* Orthogonal Matching Pursuit (A*OMP) algorithm. We will be glad if you could link to our software. The software, code, documentation and a brief introduction to A*OMP are available at

A suitable description for the link would be "AStarOMP: Compressed Sensing via A* search".
A*OMP was first presented in ICASSP 2011. It is a general sparse recovery algorithm employing A* search on a hypothesis tree where the paths are extended and evaluated in a way similar to OMP.


To avoid some possible misunderstanding, we would like to note that the tree search concept in A*OMP is completely general to all sparse signals. A*OMP is an algorithm that aims to find a result closer to the true L0 solution (regardless of the RIP condition); thus the objective is to improve reconstruction quality, not to decrease the computational complexity of finding a greedy solution (such as in list decoding). Furthermore, A*OMP is neither specific to tree-sparse signals nor does it make use of a tree-structured over-complete basis (such as in the tree-based OMP algorithm). The algorithm is not specific to structured sparse signals either. The algorithm is simply aiming to (approximately) solve the NP-complete L0 problem in an efficient manner. There are parameters that can be adjusted for reduced complexity or increased accuracy as desired. Experimental results show that A*OMP improves general recovery as compared to convex relaxation and greedy pursuits (for publications, see http://myweb.sabanciuniv.edu/karahanoglu/publications/).
The new software is capable of performing a fast A*OMP reconstruction via an efficient implementation of the search tree. Moreover, the code can easily be extended to perform A* search in any other subset search problem, simply by binding it to a new problem class that covers the evaluation and growth of tree branches for that problem.

We would be glad to hear any comments and suggestions from you. Please also feel free to report any bugs or problems with the software.
Best regards,
Nazim Burak Karahanoglu and Hakan Erdogan

Here are the attendant publications:

Compressed sensing aims at reconstruction of sparse signals following acquisition in reduced dimensions, which makes the recovery process under-determined. Due to sparsity, the required solution becomes the one with minimum $\ell_0$ norm, which is intractable to solve for. Commonly used reconstruction techniques include $\ell_1$ norm minimization and greedy algorithms. This manuscript proposes a novel semi-greedy approach, namely A* Orthogonal Matching Pursuit (A*OMP), which performs A* search for the sparsest solution on a tree whose paths grow similar to the Orthogonal Matching Pursuit (OMP) algorithm. Paths on the tree are evaluated according to an auxiliary cost function, which should compensate for different path lengths. For this purpose, we suggest three different structures. We show that novel dynamic cost functions provide improved results as compared to a conventional choice. Finally, we provide reconstruction results on both synthetically generated data and images showing that A*OMP outperforms well-known CS reconstruction methods: Basis Pursuit (BP), OMP and Subspace Pursuit (SP).

by Nazim Burak Karahanoglu and Hakan Erdogan. The abstract reads:
Reconstruction of sparse signals acquired in reduced dimensions requires the solution with minimum l0 norm. As solving the l0 minimization directly is impractical, a number of algorithms have appeared for finding an indirect solution. A semi-greedy approach, A* Orthogonal Matching Pursuit (A*OMP), is proposed in [1] where the solution is searched on several paths of a search tree. Paths of the tree are evaluated and extended according to some cost function, for which novel dynamic auxiliary cost functions are suggested. This paper describes the A*OMP algorithm and the proposed cost functions briefly. The novel dynamic auxiliary cost functions are shown to provide improved results as compared to a conventional choice. Reconstruction performance is illustrated on both synthetically generated data and real images, which shows that the proposed scheme outperforms well-known CS reconstruction methods.

Heuristic search has recently been utilized for the compressed sensing (CS) signal recovery problem by the A* Orthogonal Matching Pursuit (A*OMP) algorithm. A*OMP employs A* search on a tree with an OMP-based evaluation of the tree branches, where the search is terminated when the desired path length is achieved. Here, we utilize a different termination criterion, which stops the search when the ℓ2 norm of the residue is small enough. We show this residue-based termination is superior to the original one in terms of both reconstruction accuracy and computation time. We demonstrate A*OMP in a noisy scenario, showing its better reconstruction performance than mainstream CS algorithms.
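For readers who have not met it, here is the plain OMP iteration that A*OMP expands into a tree search, each path of the tree being one candidate support. This is a minimal textbook sketch of mine, not the AStarOMP code:

function x = omp_sketch(A, y, K)
% Minimal textbook OMP: greedily grow a single support of size K.
% A*OMP instead keeps several such paths alive and ranks them with
% auxiliary (path-length compensated) cost functions.
  n = size(A, 2);
  r = y;                           % current residual
  S = [];                          % current support
  for k = 1:K
    [~, j] = max(abs(A' * r));     % atom most correlated with the residual
    S = [S, j];
    xS = A(:, S) \ y;              % least-squares fit over the chosen atoms
    r = y - A(:, S) * xS;          % update the residual
  end
  x = zeros(n, 1);
  x(S) = xS;
end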


Thanks Nazim and Hakan.


Sunday, October 30, 2011

Hanging out on Compressive Sensing

Muthu had a hangout session last week on compressive sensing, but for technical reasons I simply could not connect correctly to that session. I think Hangout as a product has a long way to go before I would recommend it to anybody. In the meantime, Muthu listed some of the subjects mentioned then:



  • Functional CS. Minimize the number of measurements needed not to reconstruct the signal, but to estimate various functions of it. Streaming algorithms can be seen to be in this genre, but they don't provide the typical for-all-signals guarantee or provide insight into what a suitable notion of the class of all ``compressible'' signals is for a function of interest. Eric Tramel, who was in the call and has an image analysis background, proposed ``smoothness'' or total variation distance as a function to estimate. Defined as \sum_i (A[i]-A[i-1])^2, this does not seem to be a new problem: it is a squared L_2 norm of a linear map, hence an inner product computation (see the sketch after this list). But some variation of this may be of interest. Some old thoughts on functional CS are here.
  • Linear measurements are the key to compressed sensing. What is the status of building hardware that takes linear measurements of analog signals faster or more efficiently than standard Nyquist sampling?
  • What is the status of CS for arbitrary dictionaries (not necessarily orthonormal)? Did any new algorithmic technique get developed beyond the usual pursuit + group testing algorithms?
  • What are the latest developments in CS for matrix approximation?
  • What are recent CS conferences? Examples: 12, ..
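Regarding the first item, Eric's smoothness functional is indeed a squared L_2 norm of a linear map, hence computable from inner products; a quick Matlab sketch:

% Smoothness sum_i (A(i) - A(i-1))^2 as a squared l2 norm of a linear map.
n = 256;
a = randn(n, 1);
smoothness = norm(diff(a))^2;   % each diff entry is an inner product with a 2-sparse vector
% The same thing with an explicit (n-1) x n difference matrix:
D = diff(speye(n));             % D * a == diff(a)
smoothness2 = norm(D * a)^2;    % identical: a' * (D'*D) * a, a quadratic form in a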

You can join and enrich that conversation on Google+.

Imaging With Nature: Wet Salt Lakes.

Yesterday, Imaging With Nature used the Moon as a reflector. Today, here is an interesting Imaging With Nature example because the PSF is close to the identity: "when covered with water, the Salar becomes one of the largest mirrors on Earth" (from Wikipedia).



Additional mesmerizing photos can be found here.
Credit: Ezequiel Cabrera.


Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Staying on the Edge of Space for a Long Time

What is the difference between flying to the edge of space on a platform like this one:






and this one ?





How about 15+ hours of stabilized flight at 120,000 feet above the ground? A year ago, I did some computation of what it would mean to use a more recent camera on board that flight. But it looks like people are still amazed that five years ago, you could take a moderately high-priced camera and make some good looking shots. Take a look at what you can do when 100 of these shots are automatically stitched together into a panorama map. If you want to do better than we did five years ago, and there are a lot of ways to do so with a camera like this one or the new GoPro2 plus some Arduino kit, here is your chance in the new HASP program. From Greg Guzik:


Please find attached here the Call for Payloads (CFP) for the September 2012 flight of the High Altitude Student Platform (HASP). HASP can support up to 12 student payloads (providing power, telemetry and commanding) during a flight to an altitude of 124,000 feet for up to 20 hours. We anticipate flying HASP with the assistance of the NASA Balloon Program Office for at least the next three years (2012 through 2014). There is no cost for launch and flight operations. Student teams will need to raise their own funds to support the development of their payload and for travel to Palestine, TX for HASP integration and Ft. Sumner, NM for flight operations.

Details about previous HASP flights and the student payloads flown can be found on the “Flight Information” page of the HASP website at http://laspace.lsu.edu/hasp/Flightinfo-2011.php. Details on the payload constraints and interface with HASP, as well as online access to the CFP materials, can be found on the “Participant Info” page of the HASP website at http://laspace.lsu.edu/hasp/Participantinfo.php.

Applications are due December 16, 2011 and selections will be announced by mid-January 2012.

If you have any questions about the application materials or HASP, feel free to contact us at guzik@phunds.phys.lsu.edu

We will also be conducting a Q&A Teleconference about HASP and the application process on Friday, November 11, 2011 at 10:00 am (central time). Groups who have previously flown on HASP as well as new organizations should plan on attending this teleconference. To participate, dial in to 1-866-717-2684 a few minutes prior to the conference time. When requested enter the conference ID number 6879021 followed by the # key.

Also, please forward this e-mail to any others that you feel might be interested in applying.

Cheers,
Greg Guzik

All of our photos are stored here.



Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Saturday, October 29, 2011

Sharing Code and Data is Compound Interest

In the following video, Victoria Stodden tries to define a path by which science can be made reproducible, i.e., find ways by which data and code can be shared so that science can grow smoothly. This may include top-down decision making at the level of OSTP and NSF. I am a firm believer in bottom-up solutions, and if you look at the NIPS survey Victoria did, the compressive sensing community is doing a much better job of making its science known and reproducible. I believe this is in no small part due to some of the folks involved who have led by example. I can see some holdouts in the matrix factorization community, and it certainly is an issue I am addressing at the level of this blog: whoever shares her/his code generally gets an entry in the most interesting spot of the week. Remember, I want you to be rock stars.

But I also believe that the reason people share their code is self-interest. Astute Principal Investigators will have noticed that this is the only sustainable way of keeping these codes usable in their own group long after the student behind the effort is gone. As Bode, via Hamming, said:

What Bode was saying was this: ``Knowledge and productivity are like compound interest.'' Given two people of approximately the same ability and one person who works ten percent more than the other, the latter will more than twice outproduce the former. The more you know, the more you learn; the more you learn, the more you can do; the more you can do, the more the opportunity - it is very much like compound interest. I don't want to give you a rate, but it is a very high rate. Given two people with exactly the same ability, the one person who manages day in and day out to get in one more hour of thinking will be tremendously more productive over a lifetime.

The same goes for research groups. It's one thing to attract top talent to your group (ability); re-usable computational software is what allows you to go deeper. It's compound interest.







Strike Number 5 in the "Technologies That Do Not Exist" Section

You always wonder when some of these things will show up ... if ever. Daniel Reetz pointed me to this new preprint that instantiates item 5 in the These Technologies Do Not Exist section, i.e. Imaging Earth using the Moon (see here, here and here for some background on Nuit Blanche). The authors do even more and this is fantastic!


Here is the paper: Diffuse Reflectance Imaging with Astronomical Applications by Samuel W. Hasinoff, Anat Levin, Philip R. Goode, William T. Freeman. The abstract reads:

Diffuse objects generally tell us little about the surrounding lighting, since the radiance they reflect blurs together incident lighting from many directions. In this paper we discuss how occlusion geometry can help invert diffuse reflectance to recover lighting or surface albedo. Self-occlusion in the scene can be regarded as a form of coding, creating high frequencies that improve the conditioning of diffuse light transport. Our analysis builds on a basic observation that diffuse reflectors with sufficiently detailed geometry can fully resolve the incident lighting. Using a Bayesian framework, we propose a novel reconstruction method based on high-resolution photography, taking advantage of visibility changes near occlusion boundaries. We also explore the limits of single-pixel observations as the diffuse reflector (and potentially the lighting) vary over time. Diffuse reflectance imaging is particularly relevant for astronomy applications, where diffuse reflectors arise naturally but the incident lighting and camera position cannot be controlled. To test our approaches, we first study the feasibility of using the moon as a diffuse reflector to observe the earth as seen from space. Next we present a reconstruction of Mars using historical photometry measurements not previously used for this purpose. As our results suggest, diffuse reflectance imaging expands our notion of what can qualify as a camera.




Maybe it is time to update both the Imaging With Nature and the These Technologies Do Not Exist pages. Thanks Daniel.

Friday, October 28, 2011

Biomedical and Astronomical Signal Processing Frontiers workshop presentations


The 2011 International Workshop on Biomedical and Astronomical Signal Processing (BASP) took place about two months ago. All the presentation material is available in one 350 MB zip file. Enjoy. Here is the program:

MRI / Magnetic resonance imaging of the brain I - (08:30-10:10)
time [id] title presenter
08:30
[5] The impact of neuroimaging on neuroscience
Prof. FRACKOWIAK, R. (Lausanne University)
09:20
[6] Diffusion MRI imaging for brain connectivity analysis - principles and challenges
Prof. THIRAN, J.-Ph. (Swiss Federal Institute of Technology (EPFL))
09:45
[7] Decoding fMRI functional connectivity - pattern recognition on a restricted class of graphs
Dr. RICHIARDI, J. (Swiss Federal Institute of Technology (EPFL) & UNIGE)
RI / Wide field of view radio interferometric imaging - (10:40-12:20)
time [id] title presenter
10:40
[8] Delivering Transformational Science with the new generation of radio telescopes
Prof. ALEXANDER, P. (University of Cambridge)
11:30
[9] Non Coplanar Radio Imaging Arrays and Computing Efficiency
Dr. COTTON, W. (National Radio Astronomy Observatory (NRAO))
11:55
[10] GPU-enabled Imaging of an irregular nonflat sky
Dr. GREENHILL, L. (Harvard University, Smithsonian Astrophysical Observatory)

SP / Sparsity and sampling in signal processing I - (13:30-15:10)
time [id] title presenter
13:30
[11] Wavelets and Filter Banks on Graphs
Prof. VANDERGHEYNST, P. (Swiss Federal Institute of Technology (EPFL))
14:20
[15] A novel sampling theorem on the sphere with implications for compressive sensing
Dr. MCEWEN, J. (University College London)
14:45
[13] Robust Reconstruction Algorithms for Compressive Imaging
Dr. CARRILLO, R. (Swiss Federal Institute of Technology (EPFL))

SP / Sparsity and sampling in signal processing II - (15:40-17:20)
time [id] title presenter
15:40
[14] Separating Wheat from Chaff in Array Imaging
Prof. STROHMER, T. (University of California Davis)
16:30
[12] Universal compressive sampling by spread spectrum and application to magnetic resonance imaging
Mr. PUY, G. (Swiss Federal Institute of Technology (EPFL))
16:55
[16] The CLASH Operator
Prof. CEVHER, V. (Swiss Federal Institute of Technology)

RI / Direction dependent effects in radio interferometric imaging - (08:30-10:10)
time [id] title presenter
08:30
[17] Synthesis Imaging in Radio Astronomy - Imaging in the presence of direction dependent effects
Dr. BHATNAGAR, S. (National Radio Astronomy Observatory (NRAO))
09:20
[18] Calibrating and correcting direction-dependent effects in radio interferometric observations
Dr. SMIRNOV, O. (Netherlands Institute for Radio Astronomy (ASTRON))
09:45
[19] Synthesis-imaging in radio astronomy - Reconstructing spatial and spectral structure of an astronomical source
Dr. RAU, U. (National Radio Astronomy Observatory (NRAO))
MRI / Magnetic resonance imaging of the brain II - (10:40-12:20)
time [id] title presenter
10:40
[20] Brain dynamics and fractal behavior - about (fast) EEG microstates and (slow) fMRI resting-state networks
Prof. VAN DE VILLE, D. (University of Geneva & Swiss Federal Institute of Technology (EPFL))
11:05
[21] MR-Encephalography (MREG) - ultra-high temporal resolution functional MRI
Dr. LEVAN, P. (University Medical Center Freiburg)
11:30
[22] Structural neuroimaging - experience in the study of brain development
Dr. BACH CUADRA, M. (Radiology Department at Lausanne University Hospital)
11:55
[23] Motion Detection Using FID Navigators
Dr. KOBER, T. (Siemens - CIBM)

MRI / Magnetic resonance imaging technology - (13:30-15:10)
time [id] title presenter
13:30
[24] Technology for encoding with detectors in MRI
Prof. WALD, L. (Massachusetts General Hospital)
14:20
[38] Scanning at High Fields... 7T and Beyond
Dr. MARQUES, J. (Swiss Federal Institute of Technology (EPFL))
14:45
[26] Structural MRI – the good and the bad
Prof. KRUEGER, G. (Swiss Federal Institute of Technology)

SP / Various sparsity and sampling applications - (15:40-17:20)
time [id] title presenter
15:40
[27] A general framework for static and dynamic tomography with regularity constraints
Prof. BOURSIER, Y. (Aix-Marseille University & CPPM, CNRS/IN2P3)
16:05
[28] Ultrasound tomography - inverse problem and regularization
Dr. JOVANOVIC, I. (Medical Imaging Processing Lab.)
16:30
[29] Irregular Wideband Array - a proposal for a radio astronomical and biomedical imaging interferometer
Dr. CAROZZI, T. (University of Glasgow)
16:55
[30] The finite harmonic oscillator and its applications
Prof. SOCHEN, N. (Tel-Aviv University)

MRI / Sparsity and sampling in magnetic resonance imaging I - (08:30-10:10)
time [id] title presenter
08:30
[61] AutoCalibrating Parallel MRI, with or without calibration lines, using Eigen-Vector analysis and structured low-rank matrix completion
Prof. LUSTIG, M. (UC Berkeley)
08:55
[31] Mend It, Don't End It - Improving GRAPPA using Simultaneous Sparsity
Prof. GOYAL, V. (Massachusetts Institute of Technology)
09:20
[33] Improving sub-Nyquist MRI reconstruction performance
Mr. AELTERMAN, J. (University of Gent)
09:45
[34]


MRI / Sparsity and sampling in magnetic resonance imaging II - (10:40-12:20)
time [id] title presenter
10:40
[32] Adaptive Sampling Optimization for Magnetic Resonance Imaging by Bayesian Experimental Design
Prof. SEEGER, M. (Swiss Federal Institute of Technology (EPFL))
11:05
[35] Spatial encoding in MRI with nonlinear fields
Dr. GALLICHAN, D. (University Medical Center Freiburg)
11:30
[36] Multi-dimensional radial self-navigation with non-linear reconstruction for free-breathing coronary MRI
Mr. BONANNO, G. (CIBM/CHUV)
11:55
[37] The Application of Interferometry to Magnetic Resonance Imaging
Dr. JOHNSON, K. (University of Virginia)

Public Lecture - (19:30-20:30)
time [id] title presenter
19:30
[39] Exact Phase Retrieval via Convex Optimization
Prof. CANDÈS, E. (California Institute of Technology (on leave) & Stanford University)




Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Interpreting Simplicity

In A Low Complexity Solver ? I mentioned the possibility of finding low complexity solutions to an underdetermined linear system of equations through a combination of the TV and l_infty norms. Dror Baron kindly sent me the following on the matter of complexity:

I think that you might be interpreting complexity (or simplicity) in an overly simplified way (no pun intended). Yes, a signal whose values are all {-const,+const} as illustrated in this discussion is simple, because its entropy (complexity) is upper bounded by 1 bit per signal entry. Another signal that takes on 10 or so values but is zero 90% of the time also has an entropy below 1 bit per entry, because 90% of the time the signal entry has little information complexity. And Figure 1 in our paper uses a source where each of 2 values happens half the time (technically 1 bit per entry), but they occur in such a predictable pattern that the entropy was only 0.19 bits per entry. My point here is that low entropy corresponds to high probability, and different types of structures - even those that appear complex to the naked eye - might have high probability. Hope that this clarification helps.
By the way, experts can now compress English text at just a bit above 1 bit per character. That is, you get all the capital letters, punctuation, and many different letters for 1 bit per character, because there is a lot of higher order structure.
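To make the arithmetic of Dror's second example concrete, here is a small Matlab sketch of mine: a signal that is zero 90% of the time and takes one of ten equiprobable values otherwise has an empirical entropy near 0.80 bits per entry, indeed below 1:

% Empirical entropy, in bits per entry, of a discrete-valued signal.
n = 1e5;
x = zeros(n, 1);
hit = rand(n, 1) > 0.9;             % nonzero 10% of the time
x(hit) = randi(10, nnz(hit), 1);    % ten equiprobable nonzero values
v = unique(x);
p = histc(x, v) / n;                % empirical probability of each value
H = -sum(p .* log2(p))              % about 0.9*log2(1/0.9) + 0.1*log2(1/0.01) = 0.80 bits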

Thanks Dror!

Videos of Tutorials on Low Rank Modeling, Compressive Sensing and the Netflix prize (Machine Learning Summer School 2011 in Bordeaux and SAHD at Duke)



Thanks to Olivier Grisel for the heads up. Here are several videos of tutorials from the Machine Learning Summer School 2011 in Bordeaux:


Check out the other tutorials here. Enjoy!

The Story of the Netflix prize will take place in DC on November 4th, but if you cannot attend, Robert Bell has two videos from the earlier SAHD meeting at Duke:


Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

I am not amused




Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Thursday, October 27, 2011

A Low Complexity Solver ?


The penalization in ||x||_∞ limits the range of the coefficients, which in turn tend to ‘stick’ their value to ±||x||_∞ [9].

....We will provide a Matlab package to reproduce the analysis comparisons reported in this paper.

.....[9] J-J. Fuchs, “Spread representations,” in ASILOMAR Conference on Signals, Systems, and Computers, November 2011.
Since then, Hervé made the L_infinity solver of Jean-Jacques Fuchs available here. What is curious is that the solution extracted through the l_infty regularization could be made a low complexity one if a second regularizer like the TV norm were applied during the reconstruction stage. The idea is that, according to [9], the number of elements different from ±||x||_∞ is pretty small, and so an additional TV regularization would in effect push most of the components to either -||x||_∞ or +||x||_∞. Could an implementation marrying a TV and an l_infty regularization solver be the equivalent of the low complexity solver of Dror Baron and Marco Duarte? Could this reach the asymptotics described by Shirin Jalali and Arian Maleki?
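To make the speculation concrete, here is what such a marriage could look like with an off-the-shelf solver. This is a toy Matlab/CVX sketch of mine, not an existing package; the weight lambda and the piecewise-constant test signal are arbitrary choices:

% l_inf + TV recovery: push coefficients toward +/-||x||_inf and penalize
% jumps, so that most entries stick to one of the two extreme values.
n = 128; m = 64;
A = randn(m, n);
x0 = kron([1; -1; 1; -1], ones(n/4, 1));   % piecewise-constant +/-1 signal
y = A * x0;
lambda = 0.1;
cvx_begin quiet
  variable x(n)
  minimize( norm(x, inf) + lambda * norm(x(2:end) - x(1:end-1), 1) )
  subject to
    A * x == y;
cvx_end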



Image Credit: NASA/JPL/Space Science Institute. N00177639.jpg was taken on October 25, 2011 and received on Earth October 26, 2011. The camera was pointing toward TITAN at approximately 1,803,967 kilometers away, and the image was taken using the CL1 and CB3 filters. This image has not been validated or calibrated. A validated/calibrated image will be archived with the NASA Planetary Data System in 2012.



Wednesday, October 26, 2011

Compressive Sensing Literature This Week

I knew that one of the side effects of Napoleon was the invention of photography, but I did not realize that another was the Fourier transform. Napoleon removed Fourier from office in March 1815, which led him to take a position back in Paris.

Before we get onto the important stuff, Yaroslav Bulatov has an internship opening at Google on Machine Vision.


Today we have several papers and a Master's thesis:


This paper proposes a binarization scheme for vectors of high dimension based on the recent concept of anti-sparse coding, and shows its excellent performance for approximate nearest neighbor search. Unlike other binarization schemes, this framework allows, up to a scaling factor, the explicit reconstruction from the binary representation of the original vector. The paper also shows that random projections which are used in Locality Sensitive Hashing algorithms, are significantly outperformed by regular frames for both synthetic and real data if the number of bits exceeds the vector dimensionality, i.e., when high precision is required.
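The mechanics are easy to sketch. Here is a toy Matlab/CVX version of mine, not the authors' implementation: spread the vector over a redundant frame under an l_inf objective, then keep only the signs:

% Anti-sparse coding then binarization.
d = 16; m = 48;              % input dimension, code length (m > d)
A = randn(d, m);             % a random frame for R^d
y = randn(d, 1);             % the vector to binarize
cvx_begin quiet
  variable x(m)
  minimize( norm(x, inf) )   % anti-sparsity: spread the energy evenly
  subject to
    A * x == y;
cvx_end
b = sign(x);                 % binary code: most |x(i)| sit at ||x||_inf,
                             % so keeping the signs loses little information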

Hervé let me know that the package which includes the L_infinity solver of Jean-Jacques Fuchs is now online here. Thanks Hervé.


To lower the sampling rate in a spread spectrum communication system using Direct Sequence Spread Spectrum (DSSS), we show that compressive signal processing can be applied to demodulate the received signal. This may lead to a decrease in power consumption in wireless receivers using spread spectrum technology. In classical compressive sensing, the receiver must mix the received signal with a pseudo-random noise signal. Our receiver structure does not require this, as we exploit the spreading of the signal already done in the transmitter. Our theoretical work is exemplified with a numerical experiment using the IEEE 802.15.4 standard and the 2.4 GHz band specification. The numerical results support our theoretical findings and indicate that compressive sensing may be used successfully in spread spectrum communication systems. The penalty is the noise folding that occurs.


In this paper, we address the problem of stabilization in continuous time linear dynamical systems using state feedback when compressive sampling techniques are used for state measurement and reconstruction. In [5], we introduced the concept of using the l1 reconstruction technique, commonly used in sparse data reconstruction, for state measurement and estimation in a discrete time linear system. In this work, we extend the previous scenario to analyse continuous time linear systems. We investigate the effect of switching within a set of sparsifiers, introduced in [5], on the stability of a linear plant in continuous time settings. Initially, we analyze the problem of stabilization in low dimensional systems, following which we generalize the results to address the problem of stabilization in systems of arbitrary dimensions.

(1+eps)-approximate Sparse Recovery by Eric Price and David P. Woodruff. The abstract reads:
The problem central to sparse recovery and compressive sensing is that of stable sparse recovery: we want a distribution of matrices A in R^{m\times n} such that, for any x \in R^n and with probability at least 2/3 over A, there is an algorithm to recover x* from Ax with ||x* - x||_p \le C min_{k-sparse x'} ||x - x'||_p for some constant C > 1 and norm p. The measurement complexity of this problem is well understood for constant C > 1. However, in a variety of applications it is important to obtain C = 1 + eps for a small eps > 0, and this complexity is not well understood. We resolve the dependence on eps in the number of measurements required of a k-sparse recovery algorithm, up to polylogarithmic factors for the central cases of p = 1 and p = 2. Namely, we give new algorithms and lower bounds that show the number of measurements required is (1/eps^{p/2})k polylog(n). For p = 2, our bound of (1/eps) k log(n/k) is tight up to constant factors. We also give matching bounds when the output is required to be k-sparse, in which case we achieve (1/eps^p) k polylog(n). This shows the distinction between the complexity of sparse and non-sparse outputs is fundamental.

On the Power of Adaptivity in Sparse Recovery by Piotr Indyk, Eric Price, David P. Woodruff. The abstract reads:
The goal of (stable) sparse recovery is to recover a $k$-sparse approximation $x*$ of a vector $x$ from linear measurements of $x$. Specifically, the goal is to recover $x*$ such that ||x-x*||_p \le C min_{k-sparse x'} ||x-x'||_q for some constant $C$ and norm parameters $p$ and $q$. It is known that, for $p=q=1$ or $p=q=2$, this task can be accomplished using $m=O(k \log (n/k))$ non-adaptive measurements [CRT06] and that this bound is tight [DIPW10,FPRU10,PW11]. In this paper we show that if one is allowed to perform measurements that are adaptive, then the number of measurements can be considerably reduced. Specifically, for $C=1+eps$ and $p=q=2$ we show - A scheme with $m=O((1/eps)k log log (n eps/k))$ measurements that uses $O(log* k \log \log (n eps/k))$ rounds. This is a significant improvement over the best possible non-adaptive bound. - A scheme with $m=O((1/eps) k log (k/eps) + k \log (n/k))$ measurements that uses /two/ rounds. This improves over the best possible non-adaptive bound. To the best of our knowledge, these are the first results of this type. As an independent application, we show how to solve the problem of finding a duplicate in a data stream of $n$ items drawn from ${1, 2, ..., n-1}$ using $O(log n)$ bits of space and $O(log log n)$ passes, improving over the best possible space complexity achievable using a single pass.

In this paper, we propose a clipping noise cancellation scheme using compressed sensing (CS) for orthogonal frequency division multiplexing (OFDM) systems. In the proposed scheme, only the data tones with high reliability are exploited in reconstructing the clipping noise instead of the whole data tones. For reconstructing the clipping noise using a fraction of the data tones at the receiver, the CS technique is applied. The proposed scheme is also applicable to interleaved orthogonal frequency division multiple access (OFDMA) systems due to the decomposition of the fast Fourier transform (FFT) structure. Numerical analysis shows that the proposed scheme performs well for clipping noise cancellation of both OFDM and OFDMA systems.

Turan's method and compressive sampling by Jean-Pierre Kahane. The abstract reads:
Turan's method, as expressed in his books, is a careful study of trigonometric polynomials from different points of view. The present article starts from a problem asked by Turan: how to construct a sequence of real numbers x(j) (j = 1,2,...n) such that the almost periodic polynomial whose frequencies are the x(j) and whose coefficients are 1 is small (say, its absolute value is less than n d, d given) for all integral values of the variable m between 1 and M = M(n,d)? The best known answer is a random choice of the x(j) modulo 1. Using the random choice as Turan (and before him Erd\"os and Renyi) did, we improve the estimate of M(n,d) and we discuss an explicit construction derived from another chapter of Turan's book. The main part of the paper deals with the corresponding problem when R / Z is replaced by Z / NZ, N prime, and m takes all integral values modulo N except 0. Then it has an interpretation in signal theory, when a signal is represented by a function on the cyclic group G = Z / NZ and the frequencies by the dual cyclic group G^: knowing that the signal is carried by T points, to evaluate the probability that a random choice of a set W of frequencies allows to recover the signal x from the restriction of its Fourier transform to W by the process of minimal extrapolation in the Wiener algebra of G^ (the process of Cand\`es, Romberg and Tao). Some random choices were considered in the original article of CRT and the corresponding probabilities were estimated in two preceding papers of mine. Here we have another random choice, associated with occupancy problems.

Recovering a Clipped Signal in Sparseland by Alejandro Weinstein, Michael Wakin. The abstract reads:
In many data acquisition systems it is common to observe signals whose amplitudes have been clipped. We present two new algorithms for recovering a clipped signal by leveraging the model assumption that the underlying signal is sparse in the frequency domain. Both algorithms employ ideas commonly used in the field of Compressive Sensing; the first is a modified version of Reweighted $\ell_1$ minimization, and the second is a modification of a simple greedy algorithm known as Trivial Pursuit. An empirical investigation shows that both approaches can recover signals with significant levels of clipping.
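Here is a stripped-down cousin of their first approach, a Matlab/CVX sketch of mine that uses plain l_1 in place of the reweighted version (it also needs dctmtx from the Signal Processing Toolbox): keep the unclipped samples, force consistency with the clipping process, and ask for sparsity in the DCT domain:

% Declipping as consistent, frequency-sparse reconstruction.
n = 256; t = (0:n-1)';
x0 = cos(2*pi*5*t/n) + 0.5*cos(2*pi*12*t/n);   % sparse in the DCT domain
c = 0.8;                                       % clipping level
y = min(max(x0, -c), c);                       % what the acquisition system saw
up = (y >= c); dn = (y <= -c); ok = ~(up | dn);
Psi = dctmtx(n);                               % DCT analysis matrix
cvx_begin quiet
  variable x(n)
  minimize( norm(Psi * x, 1) )
  subject to
    x(ok) == y(ok);                            % agree where nothing was clipped
    x(up) >= c;                                % stay above the positive rail
    x(dn) <= -c;                               % and below the negative one
cvx_end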
Computational electromagnetic problems are becoming exceedingly complex and traditional computation methods are simply no longer good enough for our technologically advancing world. Compressive sensing theory states that signals, such as those used in computational electromagnetic problems, have a property known as sparseness. It has been proven that through undersampling, computation runtimes can be substantially decreased while maintaining sufficient accuracy. Lawrence Carin and his team of researchers at Duke University developed an in situ compressive sensing algorithm specifically tailored for computational electromagnetic applications. This algorithm is known as the discrete cosine transform (DCT) method. Using the DCT algorithm, I have developed a compressive sensing software implementation. Through the course of my research I have tested both the accuracy and runtime efficiency of this software implementation, proving its potential for use within the electromagnetic modeling industry. This implementation, once integrated with a commercial electromagnetics solver, reduced the computation cost of a single simulation from seven days to eight hours. Compressive sensing is highly applicable to practical applications and my research highlights both its benefits and areas for improvement and future research.


Sparse coding, which is the decomposition of a vector using only a few basis elements, is widely used in machine learning and image processing. The basis set, also called the dictionary, is learned to adapt to specific data. This approach has proven to be very effective in many image processing tasks. Traditionally, the dictionary is an unstructured "flat" set of atoms. In this paper, we study structured dictionaries which are obtained from an epitome, or a set of epitomes. The epitome is itself a small image, and the atoms are all the patches of a chosen size inside this image. This considerably reduces the number of parameters to learn and provides sparse image decompositions with shift-invariance properties. We propose a new formulation and an algorithm for learning the structured dictionaries associated with epitomes, and illustrate their use in image denoising tasks.

Analyse et synthèse harmoniques by Jean-Pierre Kahane. The abstract reads:
This is the material for two lectures given at Ecole Polytechnique in May 2011 for the math teachers of "classes pr\'eparatoires" (parallel to the undergraduate classes in universities). The introduction is a personal overview of Fourier analysis, its history, the terms in use and a few references. The first part is also an overview, but on a restricted subject: the link between statistics and Fourier, from Legendre and the role of l^2 to Donoho and the role of l^1. The third part starts from a theorem of Cand\`es, Romberg and Tao on compressive sampling and applies the basic idea to a quite different question: the reconstruction of a function whose unknown spectrum has large gaps from its restriction to an interval. The description and extension of the method of Cand\`es, Romberg and Tao in Appendix 2 was developed later in a note aux Comptes rendus and an article submitted to Annales de l'Institut Fourier.



Tuesday, October 25, 2011

Insight on very large SVD computations

First things first: tomorrow, starting at this time of the day, Muthu and I will be having a video Hangout on Google+. You are welcome to join. For that you may want to follow the instructions Muthu gives on his blog.

A discussion with Danny Bickson and Olivier Grisel on the Matrix Factorization group on LinkedIn led to this insightful exchange with Joel Tropp and Piotr Indyk. Here is the email I sent out to them:

Dear Piotr and Joel,

Recently, Danny Bickson at CMU implemented a full SVD on multicore using the GraphLab framework. During a discussion on the Matrix Factorization group on LinkedIn, we wondered if the randomized version could also be implemented on GraphLab and show additional speedups. The answer from Danny, when reading the Matlab code, was the following:

"...Thanks for the link, it is highly relevant and interesting. In fact, I believe it will be pretty straightforward to implement this algorithm in Graphlab, since it is based on SVD as a subroutine. There is only one question I am not sure about and maybe someone from the
group can help. I see in the matlab code that:


%
% Apply A to a random matrix, obtaining H.
%
rand('seed',rand('seed'));

if(isreal(A))
H = A*(2*rand(n,l)-ones(n,l));
end


Namely, a random Gaussian matrix is generated and its mean (1) is subtracted. I wonder if there is some trick to make this matrix sparse? By having a fully dense matrix we lose some of the power of graphlab to scale to very large matrices. ..."

The issue at hand seems to be the use of dense Gaussian matrices. Do you know of any results that would allow us to use sparser matrices (RIP-1, Achlioptas' matrices...)? Any additional insight would be welcome. Thank you in advance for your time,

Cheers,

Igor.

Joel responded first with:

Igor,

If you have a serious interest in implementing large-scale randomized SVDs, the survey paper "Finding Structure with Randomness" discusses in detail the numerical analysis issues related to this problem. It would be best to refer people to this work; the paper is comprehensive.

There are only a few types of transformations that have been studied in detail, such as random Gaussians and randomized Fourier transforms. Some other transforms work well empirically, but there is very little analysis beyond these two examples. See Edo Liberty's dissertation for additional results.

A serious computational issue is that most of the transformations available lead to dense intermediate matrices. I am not aware of any transformation that is guaranteed to preserve sparsity and to work with general input matrices. When the singular vectors of the input matrix are incoherent, it is possible to construct approximations using randomized row and column sampling. These algorithms tend to work in machine learning applications, but they require coherence assumptions.

A couple other caveats:

Randomized SVD methods only make sense for computing partial SVDs, not full SVDs. Most people want partial SVDs, so this is not necessarily an issue.

For large sparse matrices, Lanczos methods can be superior to randomized algorithms, but there are serious implementation difficulties that cannot be overlooked. Indeed, unless the Lanczos method has been implemented by an expert, it is very doubtful that the algorithm actually produces accurate spectral information.

regards,
Joel

Piotr then added:

Hi Igor,

Good question. To add a few more points to Joel's response: as far as I understand, a sufficient condition is that the random matrix satisfies the Johnson-Lindenstrauss lemma. You can construct such matrices (which also support fast matrix-vector multiplication) by "randomizing" Hadamard matrices; see Joel's survey for more. Of course, such matrices are dense.

There is a recent construction of random *sparse* matrices that satisfy the Johnson-Lindenstrauss lemma. See this paper:

Sparser Johnson-Lindenstrauss Transforms
Daniel M. Kane, Jelani Nelson
Proceedings of the 23rd Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2012),
http://arxiv.org/abs/1012.1577

To get provable guarantees, the matrices cannot be *very* sparse. But you can of course run it beyond what is guaranteed. The matrix construction itself is quite simple.

Piotr
Thanks Joel and Piotr.
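For reference, the Achlioptas-style matrices mentioned in my email are trivial to generate; whether they are enough for the randomized SVD pipeline is exactly the open question above. A Matlab sketch:

% Achlioptas-style sparse random projection: entries sqrt(3) * {+1, 0, -1}
% with probabilities 1/6, 2/3, 1/6, so two thirds of the matrix is zeros.
k = 100; n = 10000;                       % target and ambient dimensions
U = rand(k, n);
R = sqrt(3) * ((U < 1/6) - (U > 5/6));
R = R / sqrt(k);                          % normalize so that E||R*v||^2 = ||v||^2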



Newer Versions: Verifiable and computable performance analysis of sparsity recovery, Subspace Methods for Joint Sparse Recovery

Today, I am featuring several papers that have gone through several versions and may not look "new", but in fact they are. I am very happy to be able to do this because arxiv has no mechanism for notifying readers of new versions, so it is difficult for me to figure out when something new has occurred in a preprint. Thanks to the authors for letting me know about these new and better versions.

Gongguo Tang let me know of two papers that evaluate the performance of sensing systems (i.e. measurement matrices) associated with block-sparsity recovery. Gongguo also mentions on his website that you should feel free to email him if you'd like to try the codes featured in the papers:

In this paper, we employ fixed point theory and semidefinite programming to compute the performance bounds on convex block-sparsity recovery algorithms. As a prerequisite for optimal sensing matrix design, a computable performance bound would open doors for wide applications in sensor arrays, radar, DNA microarrays, and many other areas where block-sparsity arises naturally. We define a family of goodness measures for arbitrary sensing matrices as the optimal values of certain optimization problems. The reconstruction errors of convex recovery algorithms are bounded in terms of these goodness measures. We demonstrate that as long as the number of measurements is relatively large, these goodness measures are bounded away from zero for a large class of random sensing matrices, a result parallel to the probabilistic analysis of the block restricted isometry property. As the primary contribution of this work, we associate the goodness measures with the fixed points of functions defined by a series of semidefinite programs. This relation with fixed point theory yields efficient algorithms with global convergence guarantees to compute the goodness measures.


In this paper, we develop verifiable and computable performance analysis of sparsity recovery. We define a family of goodness measures for arbitrary sensing matrices as a set of optimization problems, and design algorithms with a theoretical global convergence guarantee to compute these goodness measures. The proposed algorithms solve a series of second-order cone programs, or linear programs. As a by-product, we implement an efficient algorithm to verify a sufficient condition for exact sparsity recovery in the noise-free case. We derive performance bounds on the recovery errors in terms of these goodness measures. We also analytically demonstrate that the developed goodness measures are non-degenerate for a large class of random sensing matrices, as long as the number of measurements is relatively large. Numerical experiments show that, compared with the restricted isometry based performance bounds, our error bounds apply to a wider range of problems and are tighter, when the sparsity levels of the signals are relatively low.

Kiryung Lee and Yoram Bresler sent me the following:

Dear Igor,
We would like to share our preprint on joint sparse recovery over "Nuit Blanche".
http://arxiv.org/abs/1004.3071
The main idea is to improve the work by Ping Feng and Yoram Bresler in 1997,
the first work that showed the link between sensor array processing and compressed sensing.
We proposed the main ideas on arxiv about a year and a half ago, but we would like to present improvements in both algorithms and analysis. We would appreciate your comments, suggestions, or questions.
Thanks so much.
Best regards,
Kiryung Lee and Yoram Bresler

We propose robust and efficient algorithms for the joint sparse recovery problem in compressed sensing, which simultaneously recover the supports of jointly sparse signals from their multiple measurement vectors obtained through a common sensing matrix. In a favorable situation, the unknown matrix, which consists of the jointly sparse signals, has linearly independent nonzero rows. In this case, the MUSIC (MUltiple SIgnal Classification) algorithm, originally proposed by Schmidt for the direction of arrival problem in sensor array processing and later proposed and analyzed for joint sparse recovery by Feng and Bresler, provides a guarantee with the minimum number of measurements. We focus instead on the unfavorable but practically significant case of rank defect or ill-conditioning. This situation arises with a limited number of measurement vectors, or with highly correlated signal components. In this case MUSIC fails, and in practice none of the existing methods can consistently approach the fundamental limit. We propose subspace-augmented MUSIC (SA-MUSIC), which improves on MUSIC so that the support is reliably recovered under such unfavorable conditions. Combined with subspace-based greedy algorithms also proposed and analyzed in this paper, SA-MUSIC provides a computationally efficient algorithm with a performance guarantee. The performance guarantees are given in terms of a version of the restricted isometry property. In particular, we also present a non-asymptotic perturbation analysis of the signal subspace estimation that has been missing in the previous study of MUSIC.
Kiryung let me know that an implementation of SA-MUSIC should be available shortly. Stay tuned.
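While we wait, the favorable-case MUSIC step at the heart of the abstract is short enough to sketch. This is a toy Matlab version of mine; SA-MUSIC is precisely about fixing the rank-defective regime where this step fails:

% MUSIC for joint sparse recovery, favorable case: X has full row rank K.
m = 32; n = 128; K = 5; T = 10;      % measurements, atoms, row-sparsity, snapshots
A = randn(m, n);
perm = randperm(n); supp = sort(perm(1:K));
X = zeros(n, T); X(supp, :) = randn(K, T);
Y = A * X;                           % multiple measurement vectors, common matrix A
[U, ~, ~] = svd(Y, 'econ');
P = eye(m) - U(:, 1:K) * U(:, 1:K)'; % projector onto the orthogonal complement
score = sum((P * A).^2) ./ sum(A.^2);% how far each atom sticks out of the subspace
[~, order] = sort(score);
supp_est = sort(order(1:K))          % matches supp with high probability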


Image Credit: NASA/JPL/Space Science Institute
N00177486.jpg was taken on October 21, 2011 and received on Earth October 21, 2011. The camera was pointing toward LOGE, and the image was taken using the CL1 and CL2 filters. This image has not been validated or calibrated.

Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Monday, October 24, 2011

Compressive Sensing Hardware Implementations This Week

Here is a compressive sensing hardware entry today. Thinking some more about yesterday's entry, the main crux of the issue is that the system presented has too sparse a PSF. Anyway, I found it very refreshing that the authors of this paper could get some feedback thanks to a tweet. More on the discussion can be found here. If you haven't read this week-end's entries, let me remind you that Muthu and I will be having a video Hangout on Google+ this Wednesday and you are welcome to join.

In the meantime, last week saw two presentations on hardware-related instantiations of compressive sensing:



as well as two papers that will soon be part of the compressive sensing hardware implementation page.


Compressive Sensing in Holography by Adrian Stern, Yair Rivenson. The abstract reads:
Compressive sensing provides a new framework for simultaneous sampling and compression of signals. According to compressive sensing theory one can recover compressible signals and images from far fewer samples or measurements than traditional methods use. Applying compressive sensing theory to holography comes naturally since three-dimensional (3D) data is typically very redundant, and thus also very compressible. Holographic imaging itself is inherently a compressive process that encodes 3D data in 2D images. The application of compressive sensing to holography may reduce the acquisition effort in terms of acquisition time and hardware; it reduces the overall captured data, thus reducing storage requirements and transmission bandwidth. We present a survey of compressive sensing techniques proposed for coherent and incoherent holography. We also present theoretical results that provide applicative practical rules for using compressive sensing theory in holography.

I have mentioned this paper before, but let me do so again:


Photon-counting compressive sensing laser radar for 3D imaging by Gregory Howland, Paul Dixon, and John Howell. The abstract reads:
We experimentally demonstrate a photon-counting, single-pixel, laser radar camera for 3D imaging where transverse spatial resolution is obtained through compressive sensing without scanning. We use this technique to image through partially obscuring objects, such as camouflage netting. Our implementation improves upon pixel-array based designs with a compact, resource-efficient design and highly scalable resolution.
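As a reminder of how little machinery the measurement model itself needs, here is a toy single-pixel acquisition in a few lines of Matlab (a sketch of mine, not the authors' photon-counting setup; all sizes are arbitrary):

% Single-pixel acquisition: each measurement is one inner product between
% the scene and a random binary pattern; no scanning, no pixel array.
N = 16 * 16; M = 150; K = 8;         % pixels, measurements, scene sparsity
x = zeros(N, 1);
pos = randperm(N); x(pos(1:K)) = 1;  % a scene with K bright pixels
Phi = double(rand(M, N) > 0.5);      % one binary pattern per shot
y = Phi * x;                         % photon counts are proportional to these
% any sparse solver then recovers x from (Phi, y) with M << N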

Sunday, October 23, 2011

You keep using that word



I got interested in reading this paper [1] because of the title; alas, the authors keep using that word, but I do not think it means what they think it means. Here is the interesting bit [italics are my emphasis]:

Recent work has investigated the concept of compressive sensing for imaging applications. Compressive sensing is similar in concept to image compression except that it reduces the number of samples acquired rather than removing redundancy from the image post-acquisition. In an example of this, Robucci et al describe an imaging system that uses a custom sensor to perform ‘noiselet’ transforms (a form of pseudo-random wavelet transform) during image acquisition [14]. This is an interesting approach, but it requires specialist sensor hardware that is not yet commercially available and primarily concentrates on image, rather than video, acquisition.

In this paper, we present a camera system that attempts to overcome the limitations of these strategies by utilizing a form of compressed imaging. The camera system adaptively concentrates most of the available bandwidth on a subset of pixels while sampling the remainder of the pixels at low-speed. In this way the camera is able to use most of its available bandwidth to capture images of the ROI at a relatively high-speed while simultaneously acquiring high-resolution images at low-speed using the rest of the bandwidth. In addition, the camera is able to use this full-frame information to calculate the positions of targets and update the high-speed ROIs without interrupting acquisition. This allows the camera to track and image moving targets at high-speed while simultaneously imaging the whole frame at a much lower frame rate. While the overall bandwidth of the camera is low it is utilised efficiently and this allows changing ROIs to be acquired continuously at high speed. Because the total volume of data is reduced considerably by only collecting the necessary information at the source, pressure is reduced on down-stream systems; including real-time data analysis.

Let me first emphasize something: this camera implementation is great, but there is absolutely no need to capture the wording "compressive sampling" for it when the work stands on its own. The first and most important aspect of compressive sensing is that the whole field of the image is sensed all the time. Let us be more specific:

Tracking high-speed ROI with low-speed full-frame.
Finally, after simultaneously acquiring the ROI at high speed and the full-frame at a lower speed, the full-frame image can be analyzed to determine the most appropriate ROI (Figure 9). The Analysis loop can then pass any changes to the ROI addresses to the Acquisition loop. The Acquisition loop gets the most recent set of ROI addresses from the FIFO buffer when it is ready and so continuous acquisition is not interrupted. In this way moving targets can be tracked.

Here the ROI updating goes only as fast as the low-speed full-frame acquisition. This is why the process described in this paper cannot be described as compressive sensing. In CS, the ROI (in the case of adaptive compressive sensing) is updated at the full speed of the acquisition. So let me restate: this is a great implementation, but it is not a compressive sensing implementation.



[1] A Reconfigurable Real-Time Compressive-Sampling Camera for Biological Applications, Bo Fu, Mark C. Pitter, Noah A. Russell.

