Tuesday, January 10, 2012

Implementation of Mirror Prox: ℓ1 Minimization via Randomized First Order Algorithm

Anatoli Juditsky, Fatma Kilinc-Karzan, and Arkadi Nemirovski let me know that their MATLAB first-order software for solving $\min_x \{\|x\|_1 : \|Ax - b\|_p \le \delta\}$ with $p = 2$ or $p = \infty$ is available here. The attendant manual for the Mirror Prox Solver is here. They mention that all comments are highly welcome. As a reminder, one of the attendant papers is:

In this paper we propose randomized first-order algorithms for solving bilinear saddle point problems. Our developments are motivated by the need for sublinear time algorithms to solve large-scale parametric bilinear saddle point problems where cheap online assessment of solution quality is crucial. We present the theoretical efficiency estimates of our algorithms and discuss a number of applications, primarily to the problem of $\ell_1$ minimization arising in sparsity-oriented Signal Processing. We demonstrate, both theoretically and by numerical examples, that when seeking medium-accuracy solutions of large-scale $\ell_1$ minimization problems, our randomized algorithms outperform significantly (and progressively as the sizes of the problem grow) the state-of-the-art deterministic methods.
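As a side note, for readers who want to get a feel for the problem before digging into the manual, here is a minimal MATLAB sketch. It is mine, not part of the released package, and it does not use the package's interface (that is documented in the manual); it simply builds a toy instance of the $p = 2$ problem above and solves it with CVX as a sanity check:

% Toy instance of  min_x ||x||_1  s.t.  ||A*x - b||_2 <= delta
n = 2000; m = 500; k = 20;            % signal length, number of measurements, sparsity
A = randn(m, n) / sqrt(m);            % Gaussian sensing matrix
idx = randperm(n);
x0 = zeros(n, 1);
x0(idx(1:k)) = randn(k, 1);           % k-sparse ground truth
sigma = 0.01;
b = A * x0 + sigma * randn(m, 1);     % noisy measurements
delta = sigma * sqrt(m);              % crude estimate of the noise level

% Reference solve with CVX (a generic modeling tool, not the Mirror Prox solver)
cvx_begin quiet
    variable x(n)
    minimize( norm(x, 1) )
    subject to
        norm(A * x - b, 2) <= delta
cvx_end

fprintf('relative recovery error: %.3e\n', norm(x - x0) / norm(x0));

On a toy instance like this a generic solver is perfectly adequate; the regime the Mirror Prox package is aimed at is the one where $A$ is far too large for that, or is only accessible through fast matrix-vector products.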

From the paper:

"...In our opinion, the preliminary numerical results we have reported suggest that “acceleration via randomization” possesses a significant practical potential when solving extremely large-scale convex programs of appropriate structure..."
Arkadi reminded me that the current package featured today 

"... implements deterministic Mirror Prox algorithm. The reference to the paper on randomized minimization is related to certain ``upper-level'' routine (the technique for solving generalized saddle point problems), common for both deterministic and randomized Mirror Prox, and presented in the paper on randomization. In this release, the ``working horse'' -- the Mirror Prox solver for saddle point subproblems -- is deterministic, which is better for ``medium'' problems (sizes of matrices in the range of few thousands) and for problems with much larger matrices allowing for reasonably fast matrix-vector multiplications, as is the case when the matrix is sparse, or represents DFT, or convolution, etc. The manual explains exactly what is inside. While we do have randomized version of the solver, its release, provided it is needed, will take some time..."

The algorithm will be added to the compressive sensing reconstruction page. Thanks Anatoli, Fatma, and Arkadi.

Credit: NASA, GOES 15 view of the Sun.

Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
