Wednesday, April 29, 2015

Complete Dictionary Recovery over the Sphere - implementation -

Ju just sent me the following:

Dear Igor,

Would you mention our new paper on dictionary learning on your blog?

http://arxiv.org/abs/1504.06785

We show that one can recover complete dictionaries with efficient algorithms when the coefficient matrix has up to linear sparsity (under a suitable probability model).

Our earlier work

http://arxiv.org/abs/1412.4659

is also relevant, in the sense that in both papers the core technical problem we tackle is how to retrieve the sparsest vector(s) (directions) from a linear subspace.

Both papers demonstrate the power of nonconvex relaxation and optimization.


Thanks, 
Absolutely Ju, especially if it includes implementations !

Complete Dictionary Recovery over the Sphere by Ju Sun, Qing Qu, John Wright
We consider the problem of recovering a complete (i.e., square and invertible) matrix A0, from Y ∈ R^(n×p) with Y = A0 X0, provided X0 is sufficiently sparse. This recovery problem is central to the theoretical understanding of dictionary learning, which seeks a sparse representation for a collection of input signals, and finds numerous applications in modern signal processing and machine learning. We give the first efficient algorithm that provably recovers A0 when X0 has O(n) nonzeros per column, under a suitable probability model for X0. In contrast, prior results based on efficient algorithms provide recovery guarantees only when X0 has O(√n) nonzeros per column.
Our algorithmic pipeline centers around solving a certain nonconvex optimization problem with a spherical constraint, and hence is naturally phrased in the language of manifold optimization. To show this apparently hard problem is tractable, we first provide a geometric characterization of the high-dimensional objective landscape, which shows that with high probability there are no "spurious" local minima. This particular geometric structure allows us to design a Riemannian trust region algorithm over the sphere that provably converges to one local minimizer with an arbitrary initialization, despite the presence of saddle points. The geometric approach we develop here may also shed light on other problems arising from nonconvex recovery of structured signals.
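For readers who want to play with this setting, here is a minimal sketch (mine, not the authors' code) of the generative model in the first paragraph of the abstract: a complete dictionary A0 and a Bernoulli-Gaussian coefficient matrix X0 with, in expectation, a linear number theta*n of nonzeros per column. All parameter values are illustrative.

```python
# Toy instance of Y = A0 X0 from the abstract (a sketch, not the authors' code).
import numpy as np

rng = np.random.default_rng(0)

n, p = 50, 5000   # dictionary size and number of samples (illustrative values)
theta = 0.2       # expected fraction of nonzeros per column: linear, O(n), sparsity

# A generic square Gaussian matrix is invertible with probability 1, hence "complete".
A0 = rng.standard_normal((n, n))
# Bernoulli-Gaussian coefficients: each entry is nonzero with probability theta.
X0 = rng.standard_normal((n, p)) * (rng.random((n, p)) <= theta)
Y = A0 @ X0       # the observed data
```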
The implementation is here: https://github.com/sunju/dl_focm
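The authors' MATLAB code above implements the actual pipeline. Purely as an illustration of the geometry described in the second paragraph of the abstract, here is a toy first-order sketch (my simplification, not the algorithm in dl_focm): after the preconditioning step used in the paper, one seeks a direction q on the sphere such that q^T Yhat is sparse, by minimizing a smooth log-cosh surrogate of the l1 norm. The paper's solver is a second-order Riemannian trust-region method; the plain Riemannian gradient descent below merely shows what such an iteration on the sphere looks like.

```python
# Sketch of the sphere-constrained nonconvex problem (not the authors' solver):
#   min_{||q||_2 = 1}  f(q) = (1/p) * sum_k  mu * log cosh(q^T yhat_k / mu)
import numpy as np

def precondition(Y, theta):
    # Ybar = ((1/(p*theta)) * Y Y^T)^(-1/2) * Y, the preconditioning from the
    # paper, which makes the effective dictionary close to orthogonal.
    p = Y.shape[1]
    w, V = np.linalg.eigh(Y @ Y.T / (p * theta))
    return (V * w ** -0.5) @ V.T @ Y

def sphere_descent(Yhat, mu=1e-2, step=1e-1, iters=500, seed=0):
    n, p = Yhat.shape
    q = np.random.default_rng(seed).standard_normal(n)
    q /= np.linalg.norm(q)                           # random start on the sphere S^{n-1}
    for _ in range(iters):
        egrad = Yhat @ np.tanh(Yhat.T @ q / mu) / p  # Euclidean gradient of f
        rgrad = egrad - (q @ egrad) * q              # project onto the tangent space at q
        q = q - step * rgrad
        q /= np.linalg.norm(q)                       # retract back onto the sphere
    return q

# With Y and theta from the sketch above:
#   q = sphere_descent(precondition(Y, theta))
# q^T Yhat should then be close to one (scaled, signed) row of X0; the paper's
# full pipeline adds rounding and further steps to recover all of A0.
```

Because of the "no spurious local minima" geometry the abstract mentions, even this crude first-order scheme tends to land near one of the target directions from a random start, although the convergence guarantee in the paper is proved for the trust-region method.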


 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
