Friday, July 22, 2011

Compressive Sensing Literature This Week

In the coming weeks, I'll be in a more reflective mode. Entries might become sparse.

Today's entry is long: it features, in particular, two hardware implementations (a new imaging device and a lightweight GPS receiver), a thesis, several recent arXiv papers, some conference abstracts, and a talk. Enjoy.


Photoacoustic imaging method based on arc-direction compressed sensing and multi-angle observation by  Mingjian Sun, Naizhang Feng, Yi Shen, Xiangli Shen, Liyong Ma, Jiangang Li, and Zhenghua Wu. The abstract reads (also here):
In photoacoustic imaging (PAI), the photoacoustic (PA) signal can be observed only from limited-view angles due to structural limitations. As a result, data-incompleteness artifacts appear and some image details are lost. An arc-direction mask in PA data acquisition and an arc-direction compressed sensing (CS) reconstruction algorithm are proposed in place of the conventional rectangular CS methods for PAI. The proposed method can effectively realize the compression of the PA data along the arc line and exactly recover the PA images from multi-angle observation. Simulation results demonstrate that it has the potential to be applied in high-resolution PAI, yielding highly resolved, artifact-free PA images.




GPS Signal Acquisition via Compressive Multichannel Sampling by Xiao Li, Andrea Rueetschi, Yonina C. Eldar, Anna Scaglione. The abstract reads:
In this paper, we propose an efficient acquisition scheme for GPS receivers. It is shown that GPS signals can be effectively sampled and detected using a bank of randomized correlators with far fewer chip-matched filters than are used in existing GPS signal acquisition algorithms. The latter use correlations with all possible shifted replicas of the satellite-specific C/A code and an exhaustive search for peaking signals over the delay-Doppler space. Our scheme is based on the recently proposed analog compressed sensing framework and consists of a multichannel sampling structure with far fewer correlators.
The compressive multichannel sampler outputs are linear combinations of a vector whose support tends to be sparse; by detecting its support one can identify the strongest satellite signals in the field of view and pinpoint the correct code-phase and Doppler shifts for finer resolution during tracking. The analysis in this paper demonstrates that GPS signals can be detected and acquired via the proposed structure at a lower cost in terms of the number of correlations that need to be computed in the coarse acquisition phase, which in current GPS technology scales like the product of the number of all possible delays and Doppler shifts. In contrast, the required number of correlators in our compressive multichannel scheme scales as the number of satellites in the field of view of the device times the logarithm of the number of delay-Doppler bins explored, as is typical for compressed sensing methods.



Sparse Vector Distributions and Recovery from Compressed Sensing by Bob L. Sturm. The abstract reads:
It is well known that the performance of sparse vector recovery algorithms from compressive measurements can depend on the distribution underlying the non-zero elements of a sparse vector. However, the extent of these effects has yet to be explored, and formally presented. In this paper, I empirically investigate this dependence for seven distributions and fifteen recovery algorithms. The two morals of this work are: 1) any judgement of the recovery performance of one algorithm over that of another must be prefaced by the conditions for which this is observed to be true, including sparse vector distributions, and the criterion for exact recovery; and 2) a recovery algorithm must be selected carefully based on what distribution one expects to underlie the sensed sparse signal.
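If you want to get a quick feel for the effect Bob is documenting, here is a toy numpy sketch (my own, not the paper's code; the choice of plain OMP, the problem size and the exact-recovery criterion below are arbitrary) that measures how often OMP recovers k-sparse vectors whose nonzero entries come from different distributions:

import numpy as np

def omp(A, y, k):
    # Minimal Orthogonal Matching Pursuit: greedily build a k-term approximation of y.
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

def trial(draw_nonzeros, rng, m=50, n=200, k=10):
    A = rng.standard_normal((m, n)) / np.sqrt(m)        # Gaussian sensing matrix
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = draw_nonzeros(rng, k)
    x_hat = omp(A, A @ x, k)
    return np.linalg.norm(x - x_hat) <= 1e-4 * np.linalg.norm(x)   # one possible "exact recovery" criterion

rng = np.random.default_rng(0)
distributions = {
    "gaussian":   lambda rng, k: rng.standard_normal(k),
    "rademacher": lambda rng, k: rng.choice([-1.0, 1.0], k),
    "uniform":    lambda rng, k: rng.uniform(-1.0, 1.0, k),
}
for name, draw in distributions.items():
    print(name, np.mean([trial(draw, rng) for _ in range(200)]))

Constant-amplitude (Rademacher) nonzeros are often the harder case for greedy methods, which is precisely the kind of caveat the paper asks us to state alongside any performance claim.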


Infeasible-Point Subgradient Algorithm and Computational Solver Comparison for l1-Minimization by Dirk A. Lorenz, Marc E. Pfetsch, Andreas M. Tillmann. The abstract reads:
We propose a new subgradient method for finding the minimum-l1-norm solution to an underdetermined linear system, an important problem in Compressed Sensing, where it is also known as Basis Pursuit. The algorithm is a specialization of the Infeasible-Point Subgradient Algorithm and employs adaptive approximate projections using the method of conjugate gradients. Moreover, we propose the so-called heuristic support evaluation as a general tool for l1-minimization, which often allows for early termination by `guessing' a primal-dual optimal pair based on an approximate support. Finally, we provide an extensive numerical comparison of various state-of-the-art l1-solvers that have been proposed during the last decade.
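For readers who want a mental picture of the vanilla approach, here is a minimal projected-subgradient sketch for Basis Pursuit. This is not the authors' ISA (which uses adaptive approximate projections via conjugate gradients plus the heuristic support evaluation); the projection onto {x : Ax = b} below is computed exactly with a pseudoinverse, which only makes sense for small problems:

import numpy as np

def l1_projected_subgradient(A, b, n_iter=5000):
    # Sketch of min ||x||_1 subject to Ax = b via projected subgradient steps.
    pinv = np.linalg.pinv(A)
    x = pinv @ b                                    # feasible start: least-norm solution
    best, best_norm = x.copy(), np.abs(x).sum()
    for t in range(1, n_iter + 1):
        x = x - (1.0 / np.sqrt(t)) * np.sign(x)     # subgradient step on the l1 norm
        x = x - pinv @ (A @ x - b)                  # exact projection back onto {x : Ax = b}
        if np.abs(x).sum() < best_norm:
            best, best_norm = x.copy(), np.abs(x).sum()
    return best                                     # subgradient methods keep the best iterate

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 120))
x_true = np.zeros(120)
x_true[rng.choice(120, 5, replace=False)] = rng.standard_normal(5)
x_hat = l1_projected_subgradient(A, A @ x_true)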

The key idea behind Cognitive Radio (CR) is to allow opportunistic access to temporally and/or geographically unused licensed bands, avoiding conflicts with the rightful license owners in those bands. To achieve this, novel interference management algorithms are required to limit the interference seen by the primary (licensed) users. A key aspect of any interference management scheme is spectrum monitoring, which allows primary users to be detected and tracked. This Ph.D. thesis contributes to the field of CR in two different ways. First, we address the problem of primary user monitoring using novel detection schemes which exploit multiple antennas, wideband processing, and the available a priori knowledge about primary transmissions. Then, we propose a general framework for interference management in cognitive radio networks in which certain interaction is allowed between primary and secondary systems. Specifically, the detection problems investigated in this thesis include multi-antenna detection exploiting a priori spectral information when the noise statistics are assumed known. In this setting we also derive a novel diversity order analysis of the proposed detectors. The case of multi-antenna detection under unknown noise statistics is covered under different hypotheses, including both the detection of primary signals with spatial rank larger than one and detection in the presence of spatially unstructured noise. Additionally, we study the problem of multichannel monitoring. In this context, wideband acquisition can be performed using traditional analog-to-digital converters or the recently proposed analog-to-information converters. When the channelization of the primary network is assumed known, we show that guard bands and weak channels can be used to improve detection performance, both when the detection is performed from a set of samples at the Nyquist rate and when it is performed from a set of compressed measurements. Finally, we propose a general framework for interference management in cognitive radio networks in which the primary network is allowed to dynamically adjust the tolerable interference margin to be met by the secondary system. In particular, we propose a game-theoretical formulation which allows us to study the performance gain that can be expected from this limited interaction between primary and secondary systems. Moreover, we show that certain architectures fulfilling these requirements are implementable in practice and present good performance in both static and dynamic environments.


Bursty Impulse Noise Detection by Compressed Sensing by Lutz Lampe. The abstract reads:
Impulse noise (IN) is one of the major impairments for data transmission over power lines. For power line communications (PLC) systems with bandwidths in the high kHz to MHz range, IN occurs in bursts. As long as those bursts are sufficiently short compared to a signal-processing (e.g. coding) frame, there is hope to successfully mitigate IN. In this paper, we introduce a new IN mitigation technique that is based on the application of block-based compressed sensing (CS). It makes use of null subcarriers in PLC orthogonal frequency division multiplexing (OFDM) transmission systems and the burst structure of IN to detect the location and the values of IN samples in the received signal. We also devise a semi-analytical error-rate performance evaluation for coded OFDM over IN channels, which allows insights into how CS-based IN detection can be used for improved reliability of transmission. Numerical results for typical PLC transmission settings demonstrate the efficacy of the proposed application of CS for IN detection.
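The structural observation at the heart of the scheme, namely that the null subcarriers of an OFDM symbol carry no data and therefore see only the impulse noise, acting as compressive measurements of a sparse (bursty) time-domain vector, can be checked in a few lines of numpy. This is a toy sketch with made-up parameters, not the paper's setup:

import numpy as np

rng = np.random.default_rng(2)
N = 256
null_idx = np.arange(100, 140)                    # hypothetical set of null subcarriers
data_idx = np.setdiff1d(np.arange(N), null_idx)
F = np.fft.fft(np.eye(N)) / np.sqrt(N)            # unitary DFT matrix
A = F[null_idx, :]                                # measurement matrix seen by the impulse noise

# QPSK data on the used subcarriers only; the null tones stay empty.
X = np.zeros(N, dtype=complex)
X[data_idx] = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=data_idx.size)
s = np.sqrt(N) * np.fft.ifft(X)                   # time-domain OFDM symbol

# Bursty impulse noise: one short burst of large samples.
e = np.zeros(N, dtype=complex)
start = rng.integers(0, N - 8)
e[start:start + 8] = 10 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))

r = s + e                                         # received signal (receiver noise omitted)
y = F[null_idx, :] @ r                            # null-tone observations
print(np.allclose(y, A @ e))                      # True: the data is invisible on the null tones

From there, any complex-valued sparse solver applied to y = A e recovers the burst, which can then be subtracted from r before demodulation; the burst structure is what the paper's block-based CS step exploits.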

A Compressed Sensing Receiver for Bursty Communication with UWB Impulse Radio by Anand Oka and Lutz Lampe. The abstract reads:
We propose a novel receiver for Ultra-Wideband Impulse-Radio communication in bursty applications like Wireless Sensor Networks. It is based on the principle of Compressed Sensing, and exploits the sparsity of the transmitted signal to achieve reliable demodulation. Instead of a full-fledged high-rate A/D, a modest number of projections of the received signal are acquired using analog correlators, and a joint decoding of the time of arrival and the data bits is performed from these undersampled measurements via an efficient quadratic program. The receiver does not use wideband analog delay lines, and is robust to large timing uncertainty, hence the transmitter need not waste power on explicit training headers for timing synchronization. Moreover, the receiver can operate in a regime of heavy intersymbol interference (ISI), and allows a very high baud rate (close to the Nyquist rate). Its performance is shown to remain close to that of the maximum likelihood receiver under every scenario of undersampling, timing uncertainty, ISI, and delay spread.




Recovery of sparsely corrupted signals by Christoph Studer, Patrick Kuppinger, Graeme Pope, and Helmut Bölcskei. The abstract reads:
We investigate the recovery of signals exhibiting a sparse representation in a general (i.e., possibly redundant or incomplete) dictionary that are corrupted by additive noise admitting a sparse representation in another general dictionary. This setup covers a wide range of applications, such as image inpainting, super-resolution, signal separation, and recovery of signals that are impaired by, e.g., clipping, impulse noise, or narrowband interference. We present deterministic recovery guarantees based on a novel uncertainty relation for pairs of general dictionaries and we provide corresponding practicable recovery algorithms. The recovery guarantees we find depend on the signal and noise sparsity levels, on the coherence parameters of the involved dictionaries, and on the amount of prior knowledge on the support sets of signal and noise. We finally identify situations under which the recovery guarantees are tight.
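To make the setup concrete, here is a small sketch of the simplest instance of the model: a signal sparse in an orthonormal DCT dictionary corrupted by sparse spikes (identity dictionary), separated by running plain ISTA on the concatenated dictionary. This is a toy stand-in for the paper's algorithms and says nothing about its recovery guarantees; scipy's dct/idct provide the dictionary:

import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(3)
n, k_sig, k_spk, lam = 256, 5, 5, 0.05

# Signal sparse in the DCT dictionary, corruption sparse in the identity dictionary.
a = np.zeros(n); a[rng.choice(n, k_sig, replace=False)] = rng.standard_normal(k_sig)
e = np.zeros(n); e[rng.choice(n, k_spk, replace=False)] = 5 * rng.standard_normal(k_spk)
z = idct(a, norm='ortho') + e                 # observed: sparsely corrupted signal

# ISTA on the concatenated dictionary D = [C, I]; since ||D||_2^2 = 2, the step size is 1/2.
u = np.zeros(2 * n)                           # u = [a_hat; e_hat]
for _ in range(3000):
    resid = idct(u[:n], norm='ortho') + u[n:] - z
    grad = np.concatenate([dct(resid, norm='ortho'), resid])
    u = u - 0.5 * grad
    u = np.sign(u) * np.maximum(np.abs(u) - 0.5 * lam, 0.0)   # soft thresholding

signal_hat, corruption_hat = idct(u[:n], norm='ortho'), u[n:]  # approximate separation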


Compressed-Sensing (Decision-Feedback) Differential Detection in Impulse-Radio Ultra-Wideband Systems by Andreas Schenk and Robert F.H. Fischer. The abstract reads:
Building upon recent approaches for detection of impulse-radio ultra-wideband (IR-UWB) signals based on compressed sensing (CS), we combine this approach with techniques known from advanced autocorrelation-based detection of IR-UWB. We thereby avoid the respective drawbacks, in particular analog delay lines and complex reconstruction algorithms, and thus retain the low hardware complexity of CS-based detection and the low computational complexity of autocorrelation-based detection. We give an analytical treatment of CS-based differential detection and extend this scheme following the principle of decision-feedback differential detection. By means of numerical simulations, we demonstrate that this approach enables the design of low-complexity, yet power-efficient receivers for realistic IR-UWB scenarios.


This article presents the design of a networked system for joint compression, rate control and error correction of video over resource-constrained embedded devices based on the theory of compressed sensing. The objective of this work is to design a cross-layer system that jointly controls the video encoding rate, the transmission rate, and the channel coding rate to maximize the received video quality. First, compressed sensing based video encoding for transmission over wireless multimedia sensor networks (WMSNs) is studied. It is shown that compressed sensing can overcome many of the current problems of video over WMSNs, primarily encoder complexity and low resiliency to channel errors. A rate controller is then developed with the objective of maintaining fairness among video streams while maximizing the received video quality. It is shown that the rate of compressed sensed video can be predictably controlled by varying only the compressed sensing sampling rate. It is then shown that the developed rate controller can be interpreted as the iterative solution to a convex optimization problem representing the optimization of the rate allocation across the network. The error resiliency properties of compressed sensed images and videos are then studied, and an optimal error detection and correction scheme is presented for video transmission over lossy channels. Finally, the entire system is evaluated through simulation and testbed experiments. The rate controller is shown to outperform existing TCP-friendly rate control schemes in terms of both fairness and received video quality. Testbed results also show that the rates converge to stable values in real channels.

Matching pursuits are a family of greedy algorithms widely used in signal processing to solve sparse approximation and recovery problems. They rely on an atom selection step that requires the calculation of numerous projections, which can be computationally costly for large dictionaries and limits their competitiveness in coding applications. Existing bypassing strategies are based on dictionary subsampling and local optimization. We propose to use a nonadaptive random sequence of subdictionaries in the decomposition process, thus browsing a larger dictionary space in a probabilistic fashion with no additional projection cost or parameter estimation. We present theoretical justifications based on order statistics and an experimental evaluation of the behavior for sparse approximation and sparse recovery problems. An application to audio signal compression with multiscale time-frequency dictionaries is presented, along with a discussion of the algorithm's complexity and practical implementations.
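The core idea, drawing a fresh random subdictionary at every iteration and picking the best atom within it, fits in a few lines. The sketch below assumes unit-norm atoms and is not the authors' implementation:

import numpy as np

def mp_random_subdicts(D, y, n_iter, sub_size, rng):
    # Matching Pursuit that restricts each atom selection to a random subdictionary.
    residual = y.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        idx = rng.choice(D.shape[1], size=sub_size, replace=False)   # fresh random subdictionary
        corr = D[:, idx].T @ residual
        j_loc = int(np.argmax(np.abs(corr)))
        j, c = idx[j_loc], corr[j_loc]
        coeffs[j] += c                     # standard MP update (atoms assumed unit-norm)
        residual -= c * D[:, j]
    return coeffs, residual

rng = np.random.default_rng(4)
D = rng.standard_normal((64, 512))
D /= np.linalg.norm(D, axis=0)             # normalize the atoms
y = D[:, :5] @ rng.standard_normal(5)
coeffs, residual = mp_random_subdicts(D, y, n_iter=50, sub_size=64, rng=rng)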


Sequential Lasso for feature selection with ultra-high dimensional feature space by Shan Luo, Zehua Chen. The abstract reads:
We propose a novel approach, Sequential Lasso, for feature selection in linear regression models with ultra-high dimensional feature spaces. We investigate in this article the asymptotic properties of Sequential Lasso and establish its selection consistency. Like other sequential methods, the implementation of Sequential Lasso is not limited by the dimensionality of the feature space. It has advantages over other sequential methods. The simulation studies comparing Sequential Lasso with other sequential methods are reported.


Differential privacy provides the first theoretical foundation with a provable privacy guarantee against adversaries with arbitrary prior knowledge. The main idea to achieve differential privacy is to inject random noise into statistical query results. Besides correctness, the most important goal in the design of a differentially private mechanism is to reduce the effect of the random noise, ensuring that the noisy results remain useful.
This paper proposes the \emph{compressive mechanism}, a novel solution based on the state-of-the-art compression technique called \emph{compressive sensing}. Compressive sensing is a decent theoretical tool for compact synopsis construction, using random projections. In this paper, we show that the amount of noise is significantly reduced from $O(\sqrt{n})$ to $O(\log(n))$ when the noise insertion procedure is carried out on the synopsis samples instead of the original database. As an extension, we also apply the proposed compressive mechanism to solve the problem of continual release of statistical results. Extensive experiments using real datasets justify our accuracy claims.
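A toy version of the idea, for intuition only: apply the plain Laplace mechanism to a random synopsis of a sparse histogram instead of to the histogram itself. The sizes below are made up, and the paper's compressive mechanism and its O(log n) error analysis involve quite a bit more than this:

import numpy as np

rng = np.random.default_rng(5)
n, m, eps = 1024, 128, 1.0

# Sparse histogram: a handful of heavily-hit bins.
x = np.zeros(n)
x[rng.choice(n, 10, replace=False)] = rng.integers(50, 200, 10)

Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)     # random projection (the synopsis)
sensitivity = np.abs(Phi).sum(axis=0).max()                 # l1 sensitivity of Phi @ x to a one-count change
y = Phi @ x + rng.laplace(scale=sensitivity / eps, size=m)  # Laplace mechanism on the synopsis

# Any sparse solver can now be run on (Phi, y) to produce a private estimate of x;
# the point is that the noise is injected into the m-dimensional synopsis, not into all n bins.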

Measurement Design for Detecting Sparse Signals by Ramin Zahedi, Ali Pezeshki, Edwin K. P. Chong. The abstract reads:
We consider the problem of testing for the presence (or detection) of an unknown sparse signal in additive white noise. Given a fixed measurement budget, much smaller than the dimension of the signal, we consider the general problem of designing compressive measurements to maximize the measurement signal-to-noise ratio (SNR), as increasing SNR improves the detection performance in a large class of detectors. We use a lexicographic optimization approach, where the optimal measurement design for sparsity level $k$ is sought only among the set of measurement matrices that satisfy the optimality conditions for sparsity level $k-1$. We consider optimizing two different SNR criteria, namely a worst-case SNR measure, over all possible realizations of a $k$-sparse signal, and an average SNR measure with respect to a uniform distribution on the locations of the up to $k$ nonzero entries in the signal. We establish connections between these two criteria and certain classes of tight frames. We constrain our measurement matrices to the class of tight frames to avoid coloring the noise covariance matrix. For the worst-case problem, we show that the optimal measurement matrix is a Grassmannian line packing for most---and a uniform tight frame for all---sparse signals. For the average SNR problem, we prove that the optimal measurement matrix is a uniform tight frame with minimum sum-coherence for most---and a tight frame for all---sparse signals.
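A one-line sanity check on the tight-frame constraint mentioned above: if $\Phi$ is a tight frame then $\Phi\Phi^T = c I$ for some constant $c$ (for instance $c = n/m$ under a common normalization), so white noise $w$ with covariance $\sigma^2 I$ is mapped to compressed noise $\Phi w$ with covariance $E[\Phi w w^T \Phi^T] = \sigma^2 \Phi\Phi^T = c\,\sigma^2 I$. The measurement noise thus stays white, no whitening filter is needed, and the SNR criteria can be compared on an equal footing.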

Uncovering Evolutionary Ages of Nodes in Complex Networks by Zhu Guimei, Yang Huijie, Yang Rui, Ren Jie, Li Baowen, Lai Ying-Cheng. The abstract reads:
In a complex network, different groups of nodes may have existed for different amounts of time. Detecting the evolutionary history of a network is thus of great importance. We present a general method based on spectral analysis to address this fundamental question in network science. In particular, we argue and demonstrate, using model and real-world networks, the existence of a positive correlation between the magnitudes of eigenvalues and node ages. In situations where the network topology is unknown but short time series measured from the nodes are available, we suggest uncovering the network topology at the present (or any given time of interest) by using compressive sensing and then performing the spectral analysis. Knowledge of the ages of various groups of nodes can provide significant insights into the evolutionary process underpinning the network.

Tail estimates for norms of sums of log-concave random vectors by Radosław Adamczak, Rafał Latała, Alexander E. Litvak, Alain Pajor, Nicole Tomczak-Jaegermann. The abstract reads:
We establish new tail estimates for order statistics and for the Euclidean norms of projections of an isotropic log-concave random vector. More generally, we prove tail estimates for the norms of projections of sums of independent log-concave random vectors, and uniform versions of these in the form of tail estimates for operator norms of matrices and their sub-matrices in the setting of a log-concave ensemble. This is used to study a quantity $A_{k,m}$ that controls uniformly the operator norm of the sub-matrices with $k$ rows and $m$ columns of a matrix $A$ with independent isotropic log-concave random rows. We apply our tail estimates of $A_{k,m}$ to the study of Restricted Isometry Property that plays a major role in the Compressive Sensing theory.

In this paper we develop a randomized block-coordinate descent method for minimizing the sum of a smooth and a simple nonsmooth block-separable convex function and prove that it obtains an $\epsilon$-accurate solution with probability at least $1-\rho$ in at most $O(\tfrac{n}{\epsilon} \log \tfrac{1}{\rho})$ iterations, where $n$ is the number of blocks. For strongly convex functions the method converges linearly. This extends recent results of Nesterov [Efficiency of coordinate descent methods on huge-scale optimization problems, CORE Discussion Paper #2010/2], which cover the smooth case, to composite minimization, while at the same time improving the complexity by a factor of 4 and removing $\epsilon$ from the logarithmic term. More importantly, in contrast with the aforementioned work, in which the author achieves the results by applying the method to a regularized version of the objective function with an unknown scaling factor, we show that this is not necessary, thus achieving true iteration complexity bounds. In the smooth case we also allow for arbitrary probability vectors and non-Euclidean norms. Finally, we demonstrate numerically that the algorithm is able to solve huge-scale $\ell_1$-regularized least squares and support vector machine problems with a billion variables.
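For readers new to this line of work, here is a bare-bones single-coordinate instance of randomized proximal coordinate descent applied to l1-regularized least squares. It is only a sketch of the generic scheme, not the paper's block method, probability vectors or complexity analysis:

import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def rand_cd_lasso(A, b, lam, n_iter, rng):
    # Randomized proximal coordinate descent for 0.5*||Ax - b||^2 + lam*||x||_1.
    n = A.shape[1]
    L = (A ** 2).sum(axis=0)              # per-coordinate Lipschitz constants (columns assumed nonzero)
    x = np.zeros(n)
    r = A @ x - b                         # running residual
    for _ in range(n_iter):
        i = rng.integers(n)               # pick a coordinate uniformly at random
        g = A[:, i] @ r                   # partial gradient
        x_new = soft_threshold(x[i] - g / L[i], lam / L[i])
        r += A[:, i] * (x_new - x[i])     # cheap residual update
        x[i] = x_new
    return x

rng = np.random.default_rng(6)
A = rng.standard_normal((200, 1000))
x_true = np.zeros(1000)
x_true[rng.choice(1000, 20, replace=False)] = rng.standard_normal(20)
x_hat = rand_cd_lasso(A, A @ x_true, lam=0.1, n_iter=100000, rng=rng)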

Recently, it has been observed that terminated low-density-parity-check (LDPC) convolutional codes (or spatially-coupled codes) appear to approach capacity universally across the class of binary memoryless channels. This is facilitated by the "threshold saturation" effect whereby the belief-propagation (BP) threshold of the spatially-coupled ensemble is boosted to the maximum a-posteriori (MAP) threshold of the underlying constituent ensemble.
In this paper, we consider the universality of spatially-coupled codes over intersymbol-interference (ISI) channels under joint iterative decoding. More specifically, we empirically show that threshold saturation also occurs for the considered problem. This can be observed by first identifying the EXIT curve for erasure noise and the GEXIT curve for general noise that naturally obey the general area theorem. From these curves, the corresponding MAP and the BP thresholds are then numerically obtained. With the fact that regular LDPC codes can achieve the symmetric information rate (SIR) under MAP decoding, spatially-coupled codes with joint iterative decoding can universally approach the SIR of ISI channels. For the dicode erasure channel, Kudekar and Kasai recently reported very similar results based on EXIT-like curves [1].

The next paper points to the fact that in forward problems, one generally solves equations on a grid, which leads to sensing matrices that are difficult to fit into the traditional RIP/NSP... framework. While the authors of the next paper go about solving this in one particular way, it reminds me that we ought to do more digging into the Boogie-Woogie grid concept. The paper is Coherence-Pattern Guided Compressive Sensing with Unresolved Grids by Albert Fannjiang and Wenjing Liao. The abstract reads:
Highly coherent sensing matrices arise in discretization of continuum imaging problems such as radar and medical imaging when the grid spacing is below the Rayleigh threshold. Algorithms based on techniques of band exclusion (BE) and local optimization (LO) are proposed to deal with such coherent sensing matrices. These techniques are embedded in the existing compressed sensing algorithms such as Orthogonal Matching Pursuit (OMP), Subspace Pursuit (SP), Iterative Hard Thresholding (IHT), Basis Pursuit (BP) and Lasso, and result in the modified algorithms BLOOMP, BLOSP, BLOIHT, BP-BLOT and Lasso-BLOT, respectively. Under appropriate conditions, it is proved that BLOOMP can reconstruct sparse, widely separated objects up to one Rayleigh length in the Bottleneck distance {\em independent} of the grid spacing. One of the most distinguishing attributes of BLOOMP is its capability of dealing with large dynamic ranges. The BLO-based algorithms are systematically tested with respect to four performance metrics: dynamic range, noise stability, sparsity and resolution. With respect to dynamic range and noise stability, BLOOMP is the best performer. With respect to sparsity, BLOOMP is the best performer for high dynamic range while for dynamic range near unity BP-BLOT and Lasso-BLOT with the optimized regularization parameter have the best performance. In the noiseless case, BP-BLOT has the highest resolving power up to certain dynamic range. The algorithms BLOSP and BLOIHT are good alternatives to BLOOMP and BP/Lasso-BLOT: they are faster than both BLOOMP and BP/Lasso-BLOT and share, to a lesser degree, BLOOMP's amazing attribute with respect to dynamic range. Detailed comparisons with existing algorithms such as Spectral Iterative Hard Thresholding (SIHT) and the frame-adapted BP are given.
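The band-exclusion idea itself is easy to prototype: run OMP but, after every selection, lock out the atoms lying within a coherence band of the selected index. The sketch below implements that BE step only (BLOOMP additionally applies local optimization within the bands), and the band radius is a made-up parameter standing in for the actual coherence band:

import numpy as np

def be_omp(A, y, k, radius):
    # OMP with band exclusion: never pick two atoms within `radius` grid points of each other.
    m, n = A.shape
    residual = y.copy()
    support, excluded = [], np.zeros(n, dtype=bool)
    for _ in range(k):
        corr = np.abs(A.conj().T @ residual)
        corr[excluded] = 0.0                              # band exclusion
        j = int(np.argmax(corr))
        support.append(j)
        excluded[max(0, j - radius):min(n, j + radius + 1)] = True
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    return support, coef

rng = np.random.default_rng(7)
t = np.arange(128)
grid = np.arange(512) / 512.0                             # frequency grid 4x finer than 1/128 (unresolved)
A = np.exp(-2j * np.pi * np.outer(t, grid)) / np.sqrt(128)
x = np.zeros(512); x[[40, 200, 420]] = [1.0, 5.0, 0.5]    # well-separated spikes, large dynamic range
support, coef = be_omp(A, A @ x, k=3, radius=8)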


Scattering Function Estimation for Overspread Radar Targets by Onur Oktay, Götz Pfander, Pavel Zheltov. The abstract reads:
In many radar scenarios, the radar target or the medium is assumed to possess randomly varying parts. The properties of a target are described by a random process known as the spreading function. Its second-order statistics under the WSSUS assumption are given by the scattering function. Recent developments in operator identification theory suggest a channel sounding procedure that allows one to determine the spreading function given complete statistical knowledge of the operator echo. We show that it is theoretically possible to identify the scattering function of an overspread target from a single received echo, and we suggest an estimator for the scattering function.


Adaptive Bayesian Quantum Tomography by Ferenc Huszár, Neil M. T. Houlsby. The abstract reads:
In this letter we revisit the problem of optimal design of quantum tomographic experiments. In contrast to previous approaches where an optimal set of measurements is decided in advance and then kept unchanged during the experiment, we allow for measurements to be adaptively re-optimised depending on data collected so far. We develop an adaptive statistical framework based on Bayesian inference and Shannon's information, and demonstrate a ten-fold reduction in the total number of measurements required as compared to non-adaptive methods, including mutually unbiased bases.

The following paper mentions compressed sensing:

Optimal ambiguity functions and Weil's exponential sum bound by John J. Benedetto, Robert L. Benedetto, Joseph T. Woodworth

Here is also a term paper, Compressed Sensing and its Applications in UWB Communication Systems, by Ankit Singh Rawat and Kartik Venkat.

The CODING, COMPLEXITY AND SPARSITY WORKSHOP has a list of abstracts from some of its speakers.

Finally, here is a talk at the University of South Carolina:
"Compressed Sensing and Approximation of the Partial Sum of the Dirichlet Series" by Dr. Boris Kashin, Russian Academy of Sciences.
Tuesday, August 9
2:30 - 3:30 PM
LeConte 312
In this talk, I'll show how techniques from approximation theory (width estimates) can be used in the study of Dirichlet polynomials. We obtain a result on the uniform approximation of the partial sum of a Dirichlet series by a shorter sum. The theorem I will present is similar to some results from number theory.


Credit NASA, ISS028-E-018199 (21 July 2011) --- This unprecedented view of the space shuttle Atlantis, appearing like a bean sprout against clouds and city lights, on its way home, was photographed by the Expedition 28 crew of the International Space Station. Airglow over Earth can be seen in the background.
