Wednesday, November 16, 2016

Manopt 3.0 released - implementation -


Nicolas just sent me the following:
Dear Igor,

Bamdev (cc) and I just released Manopt 3.0, our Matlab toolbox for optimization on manifolds.

We would be delighted if you could announce this major release on your blog once again.

Manopt is a toolbox for optimization on manifolds, with major applications in machine learning and computer vision (low-rank constraints, orthogonal matrices, ...). Of course, Manopt can also optimize over linear spaces (and it's quite good at it).

This is non-convex optimization. Yet, more and more papers show that certain classes of non-convex problems on manifolds can be solved to global optimality with local algorithms such as the ones implemented in Manopt. These tools are also routinely used to refine solutions obtained by convex or spectral relaxations, with excellent results.

The toolbox is user-friendly, requiring little knowledge about manifolds to get started. See our tutorial and the many examples in the release.

Best,
Nicolas
 
Sure, Nicolas! As a side note, Manopt even has a tag on Nuit Blanche: manopt.

 Here are the changes from Manopt 2.0:
 
  • Manopt 3.0, packaged November 12, 2016.
    • Code moved to GitHub! Now accepting pull requests, which should accelerate the distribution of patches.
    • Bugs caught
      • Logic bug in linesearch: lsmem handling corrected thanks to Wen Huang. The default line-search algorithm for steepest descent should now be much faster.
      • Logic bug in getGradient when using problem.grad with a different number of inputs compared to problem.cost.
      • Corrected the logic in the plotting step of the example low_rank_dist_completion.
      • obliquefactory, in transposed mode, had an incorrect M.log.
    • Modifications to core engine
      • Added the capability to obtain a partial gradient (Euclidean or Riemannian) of a cost function by specifying problem.partialgrad or problem.partialegrad together with problem.ncostterms. This is an important step toward simplifying the future addition of stochastic gradient methods. Typical use case: if problem.cost is expressed as a sum of problem.ncostterms terms, then problem.partialgrad accepts a point x and an index set I, and computes and returns the gradient of only the terms indexed by I (see the first sketch after this list).
      • Added the possibility to define problem.approxgrad, to provide an approximation of the gradient. This can be populated with a generic gradient approximation based on finite differences via approxgradientFD. Solvers fall back on this by default if they need a gradient and none is given. The feature is slow but may be useful for prototyping: Manopt generates an orthonormal basis of the tangent space and computes a finite-difference approximation of the directional derivative along each basis vector to assemble an approximate gradient (see the next item, the new example thomson_problem, and the second sketch after this list).
      • getGradient now knows how to compute the gradient if the directional derivatives are accessible. This involves generating an orthonormal basis of the tangent space at the current point, evaluating the directional derivative along each basis vector, and taking the appropriate linear combination. This is very slow, especially for high-dimensional manifolds.
    • New tools (several of these are exercised in the third sketch after this list)
      • lincomb for a generic way of computing a linear combination of an arbitrary number of tangent vectors.
      • grammatrix to compute the Gram matrix of a collection of tangent vectors.
      • orthogonalize to orthogonalize a basis of tangent vectors.
      • tangentorthobasis to obtain a random orthonormal basis of a tangent space, generically.
      • smallestinconvexhull to compute the smallest tangent vector in the convex hull of a given collection of tangent vectors.
      • hessianmatrix to get a matrix representing the Hessian at a point in an orthonormal tangent basis.
      • checkretraction, for manifolds which have a correct exponential implemented, verifies the order of agreement between the retraction and the exponential, to determine numerically whether the retraction is first- or second-order.
    • New examples
      • elliptope_SDP solves SDPs over positive semidefinite matrices with unit diagonal. For many problems of this kind, it should run faster than the Max-Cut example (see the last sketch after this list).
      • elliptope_SDP_complex, the same as above for complex matrices. This solves the SDPs that appear in PhaseCut and in phase synchronization, for example.
      • thomson_problem illustrates the new features that make it possible to omit the gradient of the cost (slow, but good for prototyping).
    • New geometries
      • skewsymmetricfactory for skew-symmetric matrices (Euclidean geometry).
      • obliquecomplexfactory, to work with complex matrices whose columns (or rows) all have unit norm.
    • Modifications to previous behavior
      • symfixedrankYYcomplexfactory now has a Riemannian metric matching that of euclideanfactory (it was scaled down by 2 compared to previous Manopt versions). This makes it easier to switch between those two geometries. Relevant changes propagated to radio_interferometric_calibration.
      • hessianextreme now returns the info structure returned by the internal solver call. The helper tool tangentspherefactory now incorporates extra projections to ensure the vector returned by hessianextreme is indeed a tangent vector (the former version could suffer from numerical drift).
      • At the end of generalized_eigenvalue_computation, added a rotation of Xsol to match the definition of generalized eigenvectors (the eigenvalues were fine).
    • Numerous minor improvements; highlights:
      • rotationsfactory now has a function M.retr2 which is a second-order retraction.
      • spherefactory and related sphere geometries now have a distance function M.dist which is orders of magnitude more accurate for close-by points.
      • neldermead now respects options.verbosity values below 2.
      • plotprofile / surfprofile now have mostly optional inputs, making them easier to call for a quick glimpse at the cost function.
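
To make some of these changes concrete, here are a few sketches of my own (they are not from the release notes, so treat them as illustrations rather than reference code). First, the partial gradient interface: a minimal, hypothetical least-squares setup where the cost is a sum of m terms and problem.partialegrad returns the Euclidean gradient of only the terms indexed by I. The data A and b are synthetic, made up for illustration.

    % Hypothetical least-squares cost on the sphere, written as a sum of m terms.
    clear problem;
    n = 100; m = 500;
    A = randn(m, n); b = randn(m, 1);            % synthetic data
    problem.M = spherefactory(n);                % unit sphere in R^n
    problem.cost  = @(x) 0.5*norm(A*x - b)^2;    % sum of m squared residuals
    problem.egrad = @(x) A'*(A*x - b);           % full Euclidean gradient
    problem.ncostterms   = m;                    % number of terms in the sum
    problem.partialegrad = @(x, I) A(I, :)'*(A(I, :)*x - b(I));  % gradient of the terms in I only
    x = trustregions(problem);                   % classical solvers still use the full gradient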
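
Second, the gradient-free prototyping path: only the cost is specified, and per the release notes the solver falls back on the finite-difference gradient approximation (with a warning). Expect this to be slow.

    % Rayleigh quotient on the sphere, with no gradient provided on purpose.
    clear problem;
    n = 10;
    A = randn(n); A = (A + A')/2;      % synthetic symmetric matrix
    problem.M = spherefactory(n);
    problem.cost = @(x) x'*(A*x);      % minimizer: eigenvector of the smallest eigenvalue
    x = steepestdescent(problem);      % uses the finite-difference approximation under the hood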
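
Third, the new tangent-space tools. The calls below follow the tool names from the release; the exact argument orders are my best reading, so double-check each tool's help text before relying on them.

    % Exercise the new tangent-space tools on the sphere.
    M = spherefactory(5);
    x = M.rand();                             % random point on the sphere
    B = tangentorthobasis(M, x, M.dim());     % random orthonormal tangent basis
    G = grammatrix(M, x, B);                  % should be close to eye(M.dim())
    v = lincomb(M, x, B, randn(M.dim(), 1));  % tangent vector from basis coefficients
    checkretraction(M);                       % the sphere has an exact exponential to compare against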
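
Last, the kind of problem the new elliptope_SDP example targets: minimize trace(C*X) over positive semidefinite X with unit diagonal, through the low-rank parameterization X = Y*Y' where the rows of Y have unit norm. This sketch uses the existing elliptopefactory geometry with a synthetic cost matrix; the actual example in the release may differ in its details.

    % Low-rank SDP sketch: min trace(C*Y*Y') s.t. the rows of Y have unit norm.
    clear problem;
    n = 50; p = 5;
    C = randn(n); C = (C + C')/2;           % synthetic symmetric cost matrix
    problem.M = elliptopefactory(n, p);     % Y in R^(n x p) with diag(Y*Y') = ones(n, 1)
    problem.cost  = @(Y) trace(Y'*(C*Y));   % equals trace(C*Y*Y')
    problem.egrad = @(Y) 2*C*Y;             % Euclidean gradient of the cost
    Y = trustregions(problem);              % X = Y*Y' approximates the SDP solution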
 
 
 