Monday, February 27, 2017

A Random Matrix Approach to Neural Networks

A Random Matrix Approach to Neural Networks by Cosme Louart, Zhenyu Liao, Romain Couillet

This article studies the Gram random matrix model $G = \frac{1}{T}\Sigma^T\Sigma$, $\Sigma = \sigma(WX)$, classically found in random neural networks, where $X = [x_1, \ldots, x_T] \in \mathbb{R}^{p \times T}$ is a (data) matrix of bounded norm, $W \in \mathbb{R}^{n \times p}$ is a matrix of independent zero-mean unit-variance entries, and $\sigma : \mathbb{R} \to \mathbb{R}$ is a Lipschitz continuous (activation) function, $\sigma(WX)$ being understood entry-wise. We prove that, as $n, p, T$ grow large at the same rate, the resolvent $Q = (G + \gamma I_T)^{-1}$, for $\gamma > 0$, behaves similarly to what is met in sample covariance matrix models, involving notably the moment $\Phi = \frac{T}{n}E[G]$, which in passing provides a deterministic equivalent for the empirical spectral measure of $G$. This result, established by means of concentration of measure arguments, enables the estimation of the asymptotic performance of single-layer random neural networks. This in turn provides practical insights into the mechanisms at play in random neural networks, entailing several unexpected consequences, as well as a fast practical means to tune the network hyperparameters.


Reproducibility: The Python 3 code used to produce the results of Section 4 is available at https://github.com/Zhenyu-LIAO/RMT4ELM
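
For readers who want to play with the model before diving into the repository above, here is a minimal numerical sketch of $G$ and its resolvent in NumPy. The sizes, normalizations and the ReLU choice for $\sigma$ are my assumptions for illustration, not the paper's experimental setup:

```python
import numpy as np

# Toy instance of the Gram model G = (1/T) Sigma^T Sigma, Sigma = sigma(W X).
# n, p, T and the ReLU activation are arbitrary choices for this sketch.
n, p, T = 512, 256, 1024
gamma = 0.1

rng = np.random.default_rng(0)
X = rng.standard_normal((p, T)) / np.sqrt(p)   # data matrix of bounded norm
W = rng.standard_normal((n, p))                # i.i.d. zero-mean, unit-variance entries
Sigma = np.maximum(W @ X, 0.0)                 # sigma(WX), entry-wise (ReLU here)

G = (Sigma.T @ Sigma) / T                      # T x T Gram matrix
Q = np.linalg.inv(G + gamma * np.eye(T))       # resolvent Q = (G + gamma I_T)^(-1)

# Empirical spectral measure of G, whose deterministic equivalent the paper
# characterizes through the moment Phi = (T/n) E[G].
eigs = np.linalg.eigvalsh(G)
print("trace(Q)/T =", np.trace(Q) / T, "| top eigenvalue of G =", eigs[-1])
```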


Saturday, February 25, 2017

Saturday Morning Videos: Interactive Learning workshop, Simons Institute, Berkeley



The Simons Institute at Berkeley sponsored a workshop on Interactive Learning the week before last. The videos are in. Thank you to the organizers Nina Balcan, Emma Brunskill, Robert Nowak, and Andrea Thomaz. Here is the introduction to the workshop:
Interactive learning is a modern machine learning paradigm of significant practical and  theoretical interest, where the algorithm and the domain expert engage in a two-way dialog to facilitate more accurate learning from less data compared to the classical approach of passively observing labeled data. This workshop will explore several topics related to interactive learning broadly defined, including active learning, in which the learner chooses which examples it wants labeled; explanation-based learning, in which the human doesn't merely tell the machine whether its predictions are right or wrong, but provides reasons in a form that is meaningful to both parties; crowdsourcing, in which labels and other information are solicited from a gallery of amateurs; teaching and learning from demonstrations, in which a party that knows the concept being learned provides helpful examples or demonstrations; and connections and applications to recommender systems, automated tutoring and robotics. Key questions we will explore include what are the right learning models in each case, what are the demands on the learner and the human interlocutor, and what kinds of concepts and other structures can be learned. A main goal of the workshop is to foster connections between theory/algorithms and practice/applications.
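As a toy illustration of the active learning setting mentioned above, here is an uncertainty-sampling loop with a logistic model. The synthetic data, the scikit-learn pipeline and all parameters are my choices for the example, not anything from the workshop:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Minimal uncertainty-sampling loop: the learner repeatedly asks the "expert"
# (here, the hidden ground-truth labels) for the unlabeled point it is least
# sure about, instead of passively receiving a random labeled sample.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)            # hidden concept = the expert

labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
for _ in range(20):
    clf = LogisticRegression().fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X)[:, 1]
    pool = [i for i in range(len(X)) if i not in labeled]
    labeled.append(min(pool, key=lambda i: abs(proba[i] - 0.5)))  # query it

clf = LogisticRegression().fit(X[labeled], y[labeled])
print("accuracy with", len(labeled), "labels:", clf.score(X, y))
```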
Videos:

Friday, February 24, 2017

The ICLR2017 program is out



ICLR 2017 just released its program (the open review site for the Workshop track is open here)
Monday April 24, 2017
Morning Session

8.45 - 9.00 Opening Remarks
9.00 - 9.40 Invited talk 1: Eero Simoncelli
9.40 - 10.00 Contributed talk 1: End-to-end Optimized Image Compression
10.00 - 10.20 Contributed talk 2: Amortised MAP Inference for Image Super-resolution
10.20 - 10.30 Coffee Break
10.30 - 12.30 Poster Session 1
12.30 - 14.30 Lunch provided by ICLR
Afternoon Session

14.30 - 15.10 Invited talk 2: Benjamin Recht
15.10 - 15.30 Contributed Talk 3: Understanding deep learning requires rethinking generalization - BEST PAPER AWARD
16.10 - 16.30 Coffee Break
16.30 - 18.30 Poster Session 2
Tuesday April 25, 2017
Morning Session

9.00 - 9.40 Invited talk 1: Chloe Azencott
9.40 - 10.00 Contributed talk 1: Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data - BEST PAPER AWARD
10.00 - 10.20 Contributed talk 2: Learning Graphical State Transitions
10.20 - 10.30 Coffee Break
10.30 - 12.30 Poster Session 1
12.30 - 14.30 Lunch provided by ICLR
Afternoon Session

14.30 - 15.10 Invited talk 2: Riccardo Zecchina
15.10 - 15.30 Contributed Talk 3: Learning to Act by Predicting the Future
16.10 - 16.30 Coffee Break
16.30 - 18.30 Poster Session 2
19.00 - 21.00 Gala dinner offered by ICLR
Wednesday April 26, 2017
Morning Session

9.00 - 9.40 Invited talk 1: Regina Barzilay
9.40 - 10.00 Contributed talk 1: Learning End-to-End Goal-Oriented Dialog
10.00 - 10.30 Coffee Break
10.30 - 12.30 Poster Session 1
12.30 - 14.30 Lunch provided by ICLR
Afternoon Session

14.30 - 15.10 Invited talk 2: Alex Graves
15.10 - 15.30 Contributed Talk 3: Making Neural Programming Architectures Generalize via Recursion - BEST PAPER AWARD
15.50 - 16.10 Contributed Talk 5: Optimization as a Model for Few-Shot Learning
16.10 - 16.30 Coffee Break
16.30 - 18.30 Poster Session 2
Photo credit: BaptisteMPM, own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=37629070



The Rare Eclipse Problem on Tiles: Quantised Embeddings of Disjoint Convex Sets

Here is some analysis of the quantised compressive classification problem.



The Rare Eclipse Problem on Tiles: Quantised Embeddings of Disjoint Convex Sets by Valerio Cambareri, Chunlei Xu, Laurent Jacques

Quantised random embeddings are an efficient dimensionality reduction technique which preserves the distances of low-complexity signals up to some controllable additive and multiplicative distortions. In this work, we instead focus on verifying when this technique preserves the separability of two disjoint closed convex sets, i.e., in a quantised view of the "rare eclipse problem" introduced by Bandeira et al. in 2014. This separability would ensure exact classification of signals in such sets from the signatures output by this non-linear dimensionality reduction. We here present a result relating the embedding's dimension, its quantiser resolution and the sets' separation, as well as some numerically testable conditions to illustrate it. Experimental evidence is then provided in the special case of two ℓ2-balls, tracing the phase transition curves that ensure these sets' separability in the embedded domain.  
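As a toy numerical companion to the abstract, here is a check of separability under a dithered, uniformly quantised random embedding. The quantiser form, the dimensions and the ball geometry below are my assumptions for illustration, not necessarily the paper's exact setup:

```python
import numpy as np

# Toy look at separability under a dithered, uniformly quantised random
# embedding x -> Delta * floor((A x + u) / Delta).
rng = np.random.default_rng(0)
n, m, delta = 50, 12, 0.5                      # ambient dim, embedding dim, resolution

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
u = rng.uniform(0, delta, size=m)              # dither, drawn once

def embed(x):
    return tuple(delta * np.floor((A @ x + u) / delta))

def sample_ball(center, radius, k):
    v = rng.standard_normal((k, n))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    r = radius * rng.uniform(0, 1, size=(k, 1)) ** (1.0 / n)
    return center + r * v

c2 = np.zeros(n); c2[0] = 4.0                  # centers of two disjoint l2-balls
S1 = {embed(x) for x in sample_ball(np.zeros(n), 1.0, 300)}
S2 = {embed(x) for x in sample_ball(c2, 1.0, 300)}

# If no quantised signature is shared, these samples remain exactly
# classifiable from their embeddings alone: an empirical proxy for separability.
print("shared signatures between the two balls:", len(S1 & S2))
```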

Thursday, February 23, 2017

Automatic Parameter Tuning for Image Denoising with Learned Sparsifying Transforms

A first step toward automating dictionary learning! I can see some potential that, slowly but surely, Luke will come over to the other side of the Deep Learning Force :-)



Automatic Parameter Tuning for Image Denoising with Learned Sparsifying Transforms by Luke Pfister and Yoram Bresler

Data-driven and learning-based sparse signal models outperform analytical models (e.g., wavelets) for image denoising, but require careful parameter tuning to reach peak performance. In this work, we provide a solution to the problem of parameter tuning for image denoising with transform sparsity regularization. We show that by viewing a learned sparsifying transform as a filter bank we can utilize the SURE-LET denoising algorithm to automatically tune parameters for an image denoising task. Numerical experiments show that combining SURE-LET with a learned sparsifying transform provides the best of both worlds. Our approach requires no parameter tuning for image denoising, yet outperforms SURE-LET with analytic transforms and matches the performance of transform learning denoising with hand-tuned parameters.
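
To give a feel for the tuning principle at work, here is plain SURE-driven soft-thresholding on a toy sparse signal. This is only an illustration of choosing a denoising parameter by minimizing Stein's Unbiased Risk Estimate, not the authors' SURE-LET algorithm with learned transforms:

```python
import numpy as np

# Toy SURE-based parameter tuning: choose a soft-threshold t for denoising
# coefficients y = mu + noise (noise ~ N(0, sigma^2), orthonormal transform
# assumed) by minimizing Stein's Unbiased Risk Estimate.
rng = np.random.default_rng(0)
sigma = 0.3
mu = np.concatenate([2 * rng.standard_normal(50), np.zeros(950)])  # sparse signal
y = mu + sigma * rng.standard_normal(mu.size)

def sure(t):
    # SURE(t) = N*sigma^2 - 2*sigma^2*#{|y_i| <= t} + sum(min(y_i^2, t^2))
    return (y.size * sigma**2
            - 2 * sigma**2 * np.sum(np.abs(y) <= t)
            + np.sum(np.minimum(y**2, t**2)))

ts = np.linspace(0, 3 * sigma, 200)
t_star = ts[np.argmin([sure(t) for t in ts])]
mu_hat = np.sign(y) * np.maximum(np.abs(y) - t_star, 0.0)          # soft threshold

print(f"SURE-chosen t = {t_star:.3f}, "
      f"true risk = {np.mean((mu_hat - mu)**2):.4f}")
```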

Incidentally, I just noticed the Transform Learning page, which aims to provide Sparse Representations at Scale.

Tuesday, February 21, 2017

Tonight: Paris Machine Learning #6 season 4, Symbolic AI, Recommendations & Naïve Bayes

The video of the stream is here:



 We will be hosted and sponsored by Societe Generale.
The program (slides will be coming up soon):
Franck Bardol, Igor Carron, What's happening.

Fabrice Popineau, Symbolic computation: where does it fit in today's Artificial Intelligence?
Symbolic computation was still a very popular way to build AI (Artificial Intelligence) agents two decades ago. But since the advent of statistical approaches to AI and the amazing success of deep learning, symbolic computation seems to have fallen into oblivion. We will see that symbolic computation can still be of great help in various situations. We will also look at some promising work on hybrid AI architectures.

Mehdi Sakji, Building a content recommendation system
As part of the redesign of the Davidson Consulting extranet site, several new components will be added, including personalized content recommendation (blogs, forums, articles, training courses, etc.) for consultants. This is a problem at the heart of Machine Learning, drawing on text mining techniques and automated classification, clustering and information retrieval algorithms, all structured around a suitably chosen data warehouse. This presentation covers the data architecture, the application architecture, the various algorithms and techniques used, and the current state of progress of the work.
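
As a toy illustration of the kind of content-based pipeline Mehdi describes: represent items with TF-IDF and rank them by cosine similarity to a consultant's reading history. The documents, the representation and the scikit-learn tooling are my choices for the example, not necessarily what Davidson Consulting uses:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Minimal content-based recommendation: TF-IDF item vectors, ranked by
# cosine similarity to a profile built from a user's reading history.
items = [
    "introduction to machine learning and clustering",
    "forum thread on spreadsheet shortcuts",
    "training course on text mining and information retrieval",
    "blog post about project management",
]
history = ["notes on text mining techniques for classification"]

vec = TfidfVectorizer()
item_vecs = vec.fit_transform(items)           # one row per candidate item
profile = vec.transform(history)               # the user's interest profile

scores = cosine_similarity(profile, item_vecs).ravel()
for rank, i in enumerate(scores.argsort()[::-1], 1):
    print(rank, f"{scores[i]:.2f}", items[i])
```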

Sylvain Ferrandiz, Soyons naïfs, mais pas idiots (Let's be naive, but not stupid)
How can the naive Bayes assumption be put to good use? A few possible answers in 15 slides.
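
For readers unfamiliar with the naive assumption itself, here is a from-scratch Gaussian naive Bayes sketch showing what it buys: the class likelihood factorizes across features, so training reduces to per-class, per-feature means and variances. The toy data and modeling choices are mine, unrelated to Sylvain's slides:

```python
import numpy as np

# Minimal Gaussian naive Bayes on two synthetic classes.
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(100, 2))
X1 = rng.normal(2.0, 1.0, size=(100, 2))
X, y = np.vstack([X0, X1]), np.array([0] * 100 + [1] * 100)

classes = np.unique(y)
prior = np.array([np.mean(y == c) for c in classes])
mean = np.array([X[y == c].mean(axis=0) for c in classes])   # per-class, per-feature
var = np.array([X[y == c].var(axis=0) for c in classes])

def predict(x):
    # log p(c | x) up to a constant = log prior + sum of per-feature log pdfs
    logp = (np.log(prior)
            - 0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean)**2 / var, axis=1))
    return classes[np.argmax(logp)]

print("train accuracy:", np.mean([predict(x) == c for x, c in zip(X, y)]))
```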

