Friday, July 08, 2011

The CLASH and ALPS reconstruction solvers

Volkan Cevher just sent me the following:

Hi Igor,

I hope things are going well.

I wanted to let you know of our new work, the CLASH operator:


CLASH solves the Lasso problem *with hard thresholding.* So, it combines the geometrical properties of the ell1-ball with our model-based CS approach. The recovery results are quite encouraging. ...

The problem CLASH solves is quite similar to the problem Sina Jafarpour, Rob Schapire, and I are solving via game-theoretic arguments. So, Sina and I will soon provide a comparison of the GAME algorithm and CLASH on some real-world problems involving face recognition.

By the way, I would appreciate it if you could re-feature our ALPS codes (http://lions.epfl.ch/ALPS). We came up with some nice theory for step-size selection etc., which makes the algorithms quite fast without sacrificing stability. My favorites are 1-ALPS(0) or 1-ALPS(2).

best,
Volkan


Volkan also let me know that the ICML slides corresponding to the CLASH algorithm will be put on the site in the near future. In the meantime, the description of CLASH reads:

CLASH: Combinatorial Selection and Least Absolute Shrinkage Algorithm 
CLASH (Combinatorial selection and Least Absolute SHrinkage) is a new sparse signal approximation algorithm that leverages prior information beyond sparsity to obtain approximate solutions to the Lasso problem. The algorithm is presented in "Combinatorial Selection and Least Absolute Shrinkage via the CLASH Algorithm", Technical Report, by Anastasios Kyrillidis and Volkan Cevher.

The CLASH algorithm alters the selection process of the Lasso algorithm by exploiting additional signal structure, dubbed a combinatorial sparsity model, and introduces approximate combinatorial selection with provable convergence guarantees.

More information can be found here. The solver can be downloaded here.
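To make the combination of constraints concrete, here is a minimal NumPy sketch of a projected-gradient loop that enforces both plain K-sparsity (via hard thresholding) and an ell1-ball constraint of radius tau. This is only a toy illustration of the idea described above, not the CLASH algorithm itself: the function names, the fixed step size, and the plain K-sparse model are my own illustrative choices (the combinatorial sparsity models CLASH actually targets can be much richer).

import numpy as np

def project_l1_ball(v, tau):
    """Euclidean projection of v onto the ell1-ball of radius tau (Duchi et al.-style)."""
    if np.sum(np.abs(v)) <= tau:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                 # magnitudes, sorted in descending order
    cssv = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (cssv - tau))[0][-1]
    theta = (cssv[rho] - tau) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries of v and zero out the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-k:]
    out[keep] = v[keep]
    return out

def toy_clash_like(A, u, k, tau, iters=100):
    """Toy projected-gradient loop enforcing BOTH k-sparsity (hard thresholding)
    and an ell1-ball constraint of radius tau.  Conceptual illustration only --
    NOT the actual CLASH algorithm."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # conservative fixed step size
    for _ in range(iters):
        grad = A.T @ (A @ x - u)                 # gradient of 0.5*||u - A x||^2
        x = project_l1_ball(hard_threshold(x - step * grad, k), tau)
    return x

Note that the ell1-ball projection only shrinks magnitudes, so each iterate stays K-sparse as well as inside the ell1-ball; in the model-based CS setting the combinatorial selection can encode richer structure than plain K-sparsity (block- or tree-structured supports, for instance).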

With regard to the ALPS solver, his favorite algorithms are mentioned on the description page here:

Description
ALPS (ALgebraic PursuitS) is a MATLAB implementation of a series of accelerated Iterative Hard Thresholding (IHT) methods for sparse signal reconstruction from dimensionality-reducing linear measurements.

ALPS includes three accelerated IHT methods designed to solve the following sparse linear regression problem:

minimize_x || u - A x ||_2   subject to   || x ||_0 <= K,

where u is the M-dimensional, real-valued observation vector, A is the M x N real measurement matrix (M < N), and x is the K-sparse, N-dimensional signal vector.
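As a concrete toy instance of this measurement model in NumPy (the dimensions M, N, K and the Gaussian measurement matrix below are illustrative assumptions of mine, not values prescribed by ALPS):

import numpy as np

rng = np.random.default_rng(0)
M, N, K = 128, 512, 10                           # measurements, ambient dimension, sparsity

x_true = np.zeros(N)                             # K-sparse signal with a random support
support = rng.choice(N, size=K, replace=False)
x_true[support] = rng.standard_normal(K)

A = rng.standard_normal((M, N)) / np.sqrt(M)     # normalized random measurement matrix
u = A @ x_true                                   # noiseless linear observations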


Hard thresholding methods for sparse recovery
0-ALPS(#) is a memoryless IHT algorithm that uses an adaptive step-size selection rule (a simplified sketch of one such iteration appears after this list).
1-ALPS(#) adds memory for acceleration on top of the adaptive step-size selection: each iteration combines past estimates through a momentum term that is computed automatically.
infinity-ALPS(#) uses a weighted sum of all the previously computed gradients during each iteration. The gradient weights are automatically calculated.
For all these schemes, we follow the naming convention [0,1,infinity]-ALPS(#), as described in "On Accelerated Hard Thresholding Methods for Sparse Approximation", where (#) is the binary number generated by the following options: SolveNewtonb, GradientDescentx, and SolveNewtonx. Please see the paper for their description.
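Below is a minimal sketch of a memoryless IHT iteration with a normalized, adaptive step size, in the spirit of the 0-ALPS(#) description above. It is not the ALPS code itself, and the particular step-size rule shown (computed on a K-sparse restriction of the gradient) is a common choice I am assuming for illustration, not necessarily the rule used in the package.

import numpy as np

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries of v and zero out the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-k:]
    out[keep] = v[keep]
    return out

def iht_adaptive(A, u, K, iters=200):
    """Simplified memoryless IHT with a normalized (adaptive) step size.
    Illustration only -- not the ALPS implementation."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (u - A @ x)                    # negative gradient of 0.5*||u - A x||^2
        # Step size from a K-sparse restriction of the gradient: the current
        # support if there is one, otherwise the K largest gradient entries.
        S = np.flatnonzero(x) if np.any(x) else np.argsort(np.abs(g))[-K:]
        gS = np.zeros_like(g)
        gS[S] = g[S]
        denom = np.linalg.norm(A @ gS) ** 2
        mu = np.linalg.norm(gS) ** 2 / denom if denom > 0 else 1.0
        x = hard_threshold(x + mu * g, K)        # gradient step, then hard thresholding
    return x

With the toy (A, u, K, x_true) generated above, a quick check would be x_hat = iht_adaptive(A, u, K) followed by np.linalg.norm(x_hat - x_true).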



Image Credit: NASA/Goddard Space Flight Center/Arizona State University, Sunrise on the Moon. On June 10, 2011, NASA's Lunar Reconnaissance Orbiter angled its orbit 65° to the west, allowing the spacecraft's cameras to capture a dramatic sunrise view of the moon's Tycho crater.
