Friday, February 24, 2017

The ICLR2017 program is out



ICLR 2017 just released its program (the open review site for the workshop track is open here).
Monday April 24, 2017
Morning Session

8.45 - 9.00 Opening Remarks
9.00 - 9.40 Invited talk 1: Eero Simoncelli
9.40 - 10.00 Contributed talk 1: End-to-end Optimized Image Compression
10.00 - 10.20 Contributed talk 2: Amortised MAP Inference for Image Super-resolution
10.20 - 10.30 Coffee Break
10.30 - 12.30 Poster Session 1
12.30 - 14.30 Lunch provided by ICLR
Afternoon Session

14.30 - 15.10 Invited talk 2: Benjamin Recht
15.10 - 15.30 Contributed Talk 3: Understanding deep learning requires rethinking generalization - BEST PAPER AWARD
16.10 - 16.30 Coffee Break
16.30 - 18.30 Poster Session 2
Tuesday April 25, 2017
Morning Session

9.00 - 9.40 Invited talk 1: Chloe Azencott
9.40 - 10.00 Contributed talk 1: Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data - BEST PAPER AWARD
10.00 - 10.20 Contributed talk 2: Learning Graphical State Transitions
10.20 - 10.30 Coffee Break
10.30 - 12.30 Poster Session 1
12.30 - 14.30 Lunch provided by ICLR
Afternoon Session

14.30 - 15.10 Invited talk 2: Riccardo Zecchina
15.10 - 15.30 Contributed Talk 3: Learning to Act by Predicting the Future
16.10 - 16.30 Coffee Break
16.30 - 18.30 Poster Session 2
19.00 - 21.00 Gala dinner offered by ICLR
Wednesday April 26, 2017
Morning Session

9.00 - 9.40 Invited talk 1: Regina Barzilay
9.40 - 10.00 Contributed talk 1: Learning End-to-End Goal-Oriented Dialog
10.00 - 10.30 Coffee Break
10.30 - 12.30 Poster Session 1
12.30 - 14.30 Lunch provided by ICLR
Afternoon Session

14.30 - 15.10 Invited talk 2: Alex Graves
15.10 - 15.30 Contributed Talk 3: Making Neural Programming Architectures Generalize via Recursion - BEST PAPER AWARD
15.50 - 16.10 Contributed Talk 5: Optimization as a Model for Few-Shot Learning
16.10 - 16.30 Coffee Break
16.30 - 18.30 Poster Session 2






Photo credit: By BaptisteMPM, own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=37629070


Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

The Rare Eclipse Problem on Tiles: Quantised Embeddings of Disjoint Convex Sets

  Here is some analysis for the quantised compressive classification problem.



The Rare Eclipse Problem on Tiles: Quantised Embeddings of Disjoint Convex Sets by Valerio Cambareri, Chunlei Xu, Laurent Jacques

Quantised random embeddings are an efficient dimensionality reduction technique which preserves the distances of low-complexity signals up to some controllable additive and multiplicative distortions. In this work, we instead focus on verifying when this technique preserves the separability of two disjoint closed convex sets, i.e., in a quantised view of the "rare eclipse problem" introduced by Bandeira et al. in 2014. This separability would ensure exact classification of signals in such sets from the signatures output by this non-linear dimensionality reduction. We here present a result relating the embedding's dimension, its quantiser resolution and the sets' separation, as well as some numerically testable conditions to illustrate it. Experimental evidence is then provided in the special case of two ℓ2-balls, tracing the phase transition curves that ensure these sets' separability in the embedded domain.  
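To make the setup concrete, here is a small numerical sketch of the experiment described in the abstract. This is not the authors' code: the dimensions, the quantiser resolution, and the choice of a dithered uniform quantiser q(x) = δ⌊(Ax + u)/δ⌋ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 50, 25          # ambient and embedding dimensions (illustrative values)
delta = 0.5            # quantiser resolution

# Two disjoint l2-balls: centres 4 apart, radius 1 each (separation 2).
c1, c2 = np.zeros(n), np.zeros(n)
c2[0] = 4.0

def sample_ball(center, radius, k):
    """Draw k points uniformly at random from an l2-ball."""
    d = rng.standard_normal((k, center.size))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    r = radius * rng.random(k) ** (1.0 / center.size)
    return center + d * r[:, None]

A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian random projection
u = rng.uniform(0, delta, m)                   # dither

def quantised_embedding(X):
    # Dithered uniform quantisation of the random projection.
    return delta * np.floor((X @ A.T + u) / delta)

Y1 = quantised_embedding(sample_ball(c1, 1.0, 200))
Y2 = quantised_embedding(sample_ball(c2, 1.0, 200))

# Empirical separability check: smallest distance between the embedded clouds.
gap = np.min(np.linalg.norm(Y1[:, None, :] - Y2[None, :, :], axis=-1))
print(f"min inter-set distance in the embedded domain: {gap:.3f}")
```

Sweeping m and delta in this sketch is the kind of experiment that traces the phase-transition curves the abstract mentions.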
 
 
 
 
 
 
 

Thursday, February 23, 2017

Automatic Parameter Tuning for Image Denoising with Learned Sparsifying Transforms

A first step toward automating dictionary learning! I can see some potential that, slowly but surely, Luke will come to the other side of the Deep Learning Force :-)



Automatic Parameter Tuning for Image Denoising with Learned Sparsifying Transforms by Luke Pfister and Yoram Bresler

Data-driven and learning-based sparse signal models outperform analytical models (e.g., wavelets) for image denoising, but require careful parameter tuning to reach peak performance. In this work, we provide a solution to the problem of parameter tuning for image denoising with transform sparsity regularization. We show that by viewing a learned sparsifying transform as a filter bank we can utilize the SURE-LET denoising algorithm to automatically tune parameters for an image denoising task. Numerical experiments show that combining SURE-LET with a learned sparsifying transform provides the best of both worlds. Our approach requires no parameter tuning for image denoising, yet outperforms SURE-LET with analytic transforms and matches the performance of transform learning denoising with hand-tuned parameters.
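For intuition, here is a hedged 1-D sketch of SURE-based parameter tuning in a transform-sparsity setting. An orthonormal DCT stands in for the learned sparsifying transform, and Stein's unbiased risk estimate selects the soft-threshold automatically; this is a toy illustration, not the paper's SURE-LET implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma = 256, 0.5  # signal length and known noise standard deviation

# Orthonormal DCT-II matrix: a simple stand-in for a learned sparsifying transform.
k = np.arange(N)[:, None]
j = np.arange(N)[None, :]
T = np.sqrt(2.0 / N) * np.cos(np.pi * (j + 0.5) * k / N)
T[0] /= np.sqrt(2.0)

t_axis = np.arange(N)
x = np.sin(2 * np.pi * t_axis / N) + 0.5 * np.sin(6 * np.pi * t_axis / N)
y = x + sigma * rng.standard_normal(N)

c = T @ y  # transform coefficients of the noisy signal

def sure(t):
    # Stein's unbiased risk estimate for soft-thresholding in an orthonormal basis.
    return (np.minimum(c**2, t**2).sum()
            + 2 * sigma**2 * np.count_nonzero(np.abs(c) > t)
            - N * sigma**2)

# Automatic tuning: pick the threshold that minimises the estimated risk.
ts = np.linspace(0.01, 3 * sigma, 100)
t_star = ts[np.argmin([sure(t) for t in ts])]

c_hat = np.sign(c) * np.maximum(np.abs(c) - t_star, 0.0)  # soft threshold
x_hat = T.T @ c_hat                                       # inverse transform

print(f"t* = {t_star:.3f}, noisy MSE = {np.mean((y - x)**2):.4f}, "
      f"denoised MSE = {np.mean((x_hat - x)**2):.4f}")
```

No hand-tuning is needed: the threshold is read off the SURE curve, which is the spirit of the paper's approach.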

Incidentally, I just noticed the Transform Learning page, aiming to provide Sparse Representations at Scale. 
 

Tuesday, February 21, 2017

Ce soir: Paris Machine Learning #6 season 4, Symbolic AI, Recommendations & Naïve Bayes

The video of the streaming is here:



 We will be hosted and sponsored by Societe Generale.
The program (slides will be coming up soon):
Franck Bardol, Igor Carron, What's happening.

Fabrice Popineau, Symbolic computation: where does it fit in today's Artificial Intelligence?
Symbolic computation was a very popular way of building AI (Artificial Intelligence) agents as recently as two decades ago. But since the advent of statistical approaches to AI and the amazing success of deep learning, symbolic computation seems to have fallen into oblivion. We will see that symbolic computation can still be of great help in various situations. We will also look at some promising work on hybrid AI architectures.

Mehdi Sakji, Building a content recommendation system
As part of the redesign of Davidson Consulting's extranet site, several new components will be added, including personalized content recommendation (blogs, forums, articles, training courses, etc.) for consultants. This is a problem at the heart of Machine Learning, calling on text mining techniques and automated algorithms for classification, clustering and information retrieval, all structured around a suitably chosen data warehouse. In this presentation, we cover the data architecture, the application architecture, the various algorithms and techniques used, and the current state of progress of the work.
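As a toy illustration of the content-recommendation pipeline the talk describes (text features plus similarity ranking), here is a minimal TF-IDF recommender. The corpus, document names and query below are made up for illustration.

```python
import math
from collections import Counter

# Toy corpus standing in for blog posts / training pages (illustrative only).
docs = {
    "intro-ml":  "machine learning basics models training data",
    "deep-nets": "deep learning neural networks training gpu",
    "cooking":   "recipes kitchen cooking pasta sauce",
}
query = "neural networks and deep learning"  # content the user just read

def tfidf(tokens, df, n_docs):
    # Term frequency times inverse document frequency, as a sparse dict.
    tf = Counter(tokens)
    return {w: c / len(tokens) * math.log(n_docs / df[w])
            for w, c in tf.items() if w in df}

tokenised = {name: text.split() for name, text in docs.items()}
df = Counter(w for toks in tokenised.values() for w in set(toks))
vecs = {name: tfidf(toks, df, len(docs)) for name, toks in tokenised.items()}
q = tfidf([w for w in query.split() if w in df], df, len(docs))

def cosine(a, b):
    num = sum(a[w] * b.get(w, 0.0) for w in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values()))) or 1.0
    return num / den

# Recommend documents most similar to what the user read, best match first.
ranking = sorted(docs, key=lambda name: cosine(q, vecs[name]), reverse=True)
print(ranking)
```

A production system would of course add the data-warehouse, classification and clustering layers the talk mentions; this only shows the similarity-ranking core.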

Sylvain Ferrandiz, Let's be naive, but not stupid
How can the naive Bayes assumption be put to good use? A few possible answers in 15 slides.
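The naive Bayes assumption in question, that features are conditionally independent given the class so the joint likelihood factorises into per-word terms, fits in a few lines. The tiny labelled corpus below is invented for illustration.

```python
import math
from collections import Counter, defaultdict

# Tiny labelled corpus (made up for illustration).
train = [
    ("spam", "win money now"),
    ("spam", "free money offer"),
    ("ham",  "meeting schedule today"),
    ("ham",  "project meeting notes"),
]

# Count word occurrences per class and class frequencies.
word_counts = defaultdict(Counter)
class_counts = Counter()
for label, text in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def log_posterior(text, label):
    # Naive assumption: words are independent given the class, so the
    # log-likelihood is a sum of per-word log-probabilities.
    lp = math.log(class_counts[label] / sum(class_counts.values()))
    total = sum(word_counts[label].values())
    for w in text.split():
        # Laplace smoothing so unseen words do not zero out the product.
        lp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
    return lp

msg = "free money today"
pred = max(class_counts, key=lambda c: log_posterior(msg, c))
print(pred)
```

Naive, certainly, but with smoothing and log-space arithmetic it is a surprisingly strong baseline, which is presumably the talk's point.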






 

Monday, February 20, 2017

Videos: #NIPS2016 Workshop on Adversarial Training

Woohoo, David just made available the videos of the #NIPS2016 Workshop on Adversarial Training.








Thesis: Sparse Grids for Big Data: Exploiting Parsimony for Large-Scale Learning by Valeriy Khakhutskyy

Congratulations Dr. Khakhutskyy !
 


Sparse Grids for Big Data: Exploiting Parsimony for Large-Scale Learning by Valeriy Khakhutskyy
High-dimensional data analysis becomes ubiquitous in both science and industry. An important tool for data analysis is supervised learning with non-parametric models, which estimates the dependency between target and input variables without imposing explicit assumptions on the data. This generality, however, comes at a price of computational costs that grow exponentially with the dimensionality of the input. In general, nonparametric models cannot evade this curse of dimensionality unless the problem exhibits certain properties. Hence, to facilitate large-scale supervised learning, this thesis focuses on two such properties: the existence of a low-dimensional manifold in the data and the discounting importance of high-order interactions between input variables. Often a problem would exhibit both these properties to a certain degree. To identify and exploit these properties, this work extends the notion of parsimony for hierarchical sparse grid models. It develops learning algorithms that simultaneously optimise the model parameters and the model structure to befit the problem at hand.
The new algorithms for adaptive sparse grids increase the range of computationally feasible supervised learning problems. They decrease the computation costs for training sparse grid models and the memory footprint of the resulting models. Hence, the algorithms can be used for classification and regression on high-dimensional data. Furthermore, they improve the interpretability of the sparse grid model and are suitable for learning the structure of the underlying data distribution.
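The curse of dimensionality the abstract refers to can be made concrete by counting grid points. The sketch below uses the standard regular sparse grid construction (level-sum bound |l|₁ ≤ n + d − 1, interior points only), which is an assumption about the thesis's exact setup rather than its code.

```python
from itertools import product

def full_grid_points(n, d):
    """Interior points of a full grid with mesh width 2**-n in each of d dims."""
    return (2**n - 1) ** d

def sparse_grid_points(n, d):
    """Interior points of a regular sparse grid of level n in d dimensions:
    sum, over level multi-indices l with |l|_1 <= n + d - 1, of the size of
    each hierarchical increment, prod(2**(l_i - 1))."""
    total = 0
    for l in product(range(1, n + 1), repeat=d):
        if sum(l) <= n + d - 1:
            pts = 1
            for li in l:
                pts *= 2 ** (li - 1)
            total += pts
    return total

# Full grids blow up exponentially in d; sparse grids grow far more slowly.
for d in (2, 4, 8):
    print(d, full_grid_points(5, d), sparse_grid_points(5, d))
```

In one dimension the two counts coincide (2ⁿ − 1 points); already at d = 2, level 5, the full grid needs 961 interior points versus 129 for the sparse grid, and the gap widens dramatically with d.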

 
 
 

Friday, February 17, 2017

ICLR 2017 workshop track open review



The list of accepted and rejected papers, as well as papers invited to the ICLR 2017 workshop track, is now here.

The submission deadline for the ICLR workshop track is today at 5 PM EST. The current stack of submissions is as follows (entries marked "[invited from the conference track]" were conference submissions invited to the workshop track):
Online Multi-Task Learning Using Biased Sampling, by Sahil Sharma, Balaraman Ravindran (17 Feb 2017)
Adapting Distance Kernel to Domain adaptation for sentimental analysis, by Saerom Park, Jaewook Lee, Woojin Lee (17 Feb 2017)
On Improving the Numerical Stability of Winograd Convolutions, by Kevin Vincent, Kevin Stephano, Michael Frumkin, Boris Ginsburg, Julien Demouth (17 Feb 2017)
Fast Generation for Convolutional Autoregressive Models, by Prajit Ramachandran, Tom Le Paine, Pooya Khorrami, Mohammad Babaeizadeh, Shiyu Chang, Yang Zhang, Mark A. Hasegawa-Johnson, Roy H. Campbell, Thomas S. Huang (17 Feb 2017)
Online Structure Learning for Sum-Product Networks with Gaussian Leaves, by Wilson Hsu, Agastya Kalra, Pascal Poupart (17 Feb 2017)
Similarity preserving compressions of high dimensional sparse data, by Raghav Kulkarni, Rameshwar Pratap (17 Feb 2017)
Towards an Automatic Turing Test: Learning to Evaluate Dialogue Responses, by Ryan Lowe, Michael Noseworthy, Iulian V. Serban, Nicholas Angelard-Gontier, Yoshua Bengio, Joelle Pineau (17 Feb 2017)
A Theoretical Framework for Robustness of (Deep) Classifiers against Adversarial Samples, by Beilun Wang, Ji Gao, Yanjun Qi (17 Feb 2017)
Factorization tricks for LSTM networks, by Oleksii Kuchaiev, Boris Ginsburg (17 Feb 2017)
Exploring LOTS in Deep Neural Networks, by Andras Rozsa, Manuel Gunther, Terrance E. Boult (17 Feb 2017)
Shake-Shake regularization of 3-branch residual networks, by Xavier Gastaldi (17 Feb 2017)
Trace Norm Regularised Deep Multi-Task Learning, by Yongxin Yang, Timothy M. Hospedales (17 Feb 2017)
Deep Learning with Sets and Point Clouds, by Siamak Ravanbakhsh, Jeff Schneider, Barnabas Poczos (17 Feb 2017)
Incremental Learning with Pre-trained Convolutional Neural Networks and Binary Associative Memories, by Ghouthi Boukli Hacene, Vincent Gripon, Nicolas Farrugia, Mattieu Arzel, Michel Jezequel (17 Feb 2017)
Dataset Augmentation in Feature Space, by Terrance DeVries, Graham W. Taylor (17 Feb 2017) [invited from the conference track]
Evaluating Dimensionality Reduction of 2D Histogram Data from Truck On-board Sensors, by Evaldas Vaiciukynas, Matej Ulicny, Sepideh Pashami, Slawomir Nowaczyk (17 Feb 2017)
Neurogenesis-Inspired Dictionary Learning: Online Model Adaption in a Changing World, by Sahil Garg, Irina Rish, Guillermo Cecchi, Aurelie Lozano (16 Feb 2017)
Class-based Prediction Errors to Categorize Text with Out-of-vocabulary Words, by Joan Serrà, Ilias Leontiadis, Dimitris Spathis, Gianluca Stringhini, Jeremy Blackburn (16 Feb 2017)
Delving Into Adversarial Attacks on Deep Policies, by Jernej Kos, Dawn Song (16 Feb 2017)
Annealed Generative Adversarial Networks, by Arash Mehrjou, Saeed Saremi (16 Feb 2017)
Learning a Metric for Relational Data, by Jiajun Pan, Hoel Le Capitaine, Philippe Leray (16 Feb 2017)
The High-Dimensional Geometry of Binary Neural Networks, by Alexander G. Anderson, Cory P. Berg (16 Feb 2017)
Discovering objects and their relations from entangled scene representations, by D. Raposo, A. Santoro, D.G.T. Barrett, R. Pascanu, T. Lillicrap, P. Battaglia (16 Feb 2017) [invited from the conference track]
Multiplicative LSTM for sequence modelling, by Ben Krause, Iain Murray, Steve Renals, Liang Lu (16 Feb 2017) [invited from the conference track]
Learning to Discover Sparse Graphical Models, by Eugene Belilovsky, Kyle Kastner, Gael Varoquaux, Matthew B. Blaschko (16 Feb 2017) [invited from the conference track]
A Differentiable Physics Engine for Deep Learning in Robotics, by Jonas Degrave, Michiel Hermans, Joni Dambre, Francis wyffels (16 Feb 2017) [invited from the conference track]
Revisiting Batch Normalization For Practical Domain Adaptation, by Yanghao Li, Naiyan Wang, Jianping Shi, Jiaying Liu, Xiaodi Hou (16 Feb 2017)
Coupling Distributed and Symbolic Execution for Natural Language Queries, by Lili Mou, Zhengdong Lu, Hang Li, Zhi Jin (16 Feb 2017)
Transferring Knowledge to Smaller Network with Class-Distance Loss, by Seung Wook Kim, Hyo-Eun Kim (16 Feb 2017)
Automated Generation of Multilingual Clusters for the Evaluation of Distributed Representations, by Philip Blair, Yuval Merhav, Joel Barry (16 Feb 2017) [invited from the conference track]
Style Transfer Generative Adversarial Networks: Learning to Play Chess Differently, by Muthuraman Chidambaram, Yanjun Qi (16 Feb 2017)
Infinite Dimensional Word Embeddings, by Eric Nalisnick, Sachin Ravi (15 Feb 2017)
Regularizing Neural Networks by Penalizing Confident Output Distributions, by Gabriel Pereyra, George Tucker, Jan Chorowski, Lukasz Kaiser, Geoffrey Hinton (15 Feb 2017)
Precise Recovery of Latent Vectors from Generative Adversarial Networks, by Zachary C. Lipton, Subarna Tripathi (15 Feb 2017)
Non-Associative Learning Representation in the Nervous System of the Nematode Caenorhabditis elegans, by Ramin M. Hasani, Magdalena Fuchs, Victoria Beneder, Radu Grosu (15 Feb 2017)
Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization, by Xun Huang, Serge Belongie (15 Feb 2017)
Recurrent Normalization Propagation, by César Laurent, Nicolas Ballas, Pascal Vincent (15 Feb 2017) [invited from the conference track]
Deep Adversarial Gaussian Mixture Auto-Encoder for Clustering, by Warith Harchaoui, Pierre-Alexandre Mattei, Charles Bouveyron (15 Feb 2017)
Adversarial Examples for Semantic Image Segmentation, by Volker Fischer, Mummadi Chaithanya Kumar, Jan Hendrik Metzen, Thomas Brox (15 Feb 2017)
RenderGAN: Generating Realistic Labeled Data, by Leon Sixt, Benjamin Wild, Tim Landgraf (15 Feb 2017) [invited from the conference track]
Tuning Recurrent Neural Networks with Reinforcement Learning, by Natasha Jaques, Shixiang Gu, Richard E. Turner, Douglas Eck (14 Feb 2017) [invited from the conference track]
Generalization to new compositions of known entities in image understanding, by Yuval Atzmon, Jonathan Berant, Amir Globerson, Vahid Kazemi, Gal Chechik (14 Feb 2017)
Adaptive Feature Abstraction for Translating Video to Language, by Yunchen Pu, Martin Renqiang Min, Zhe Gan, Lawrence Carin (14 Feb 2017) [invited from the conference track]
Generalizable Features From Unsupervised Learning, by Mehdi Mirza, Aaron Courville, Yoshua Bengio (14 Feb 2017) [invited from the conference track]
Neural Style Representations of Fine Art, by Jeremiah Johnson (14 Feb 2017)
Exploring loss function topology with cyclical learning rates, by Leslie N. Smith, Nicholay Topin (14 Feb 2017)
Compact Embedding of Binary-coded Inputs and Outputs using Bloom Filters, by Joan Serrà, Alexandros Karatzoglou (13 Feb 2017)
Perception Updating Networks: On architectural constraints for interpretable video generative models (decoupled "what" and "where" variational statistical framework and equivalent multi-stream network; 12 Feb 2017) [invited from the conference track]
Semi-supervised deep learning by metric embedding, by Elad Hoffer, Nir Ailon (11 Feb 2017) [invited from the conference track]
REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models, by George Tucker, Andriy Mnih, Chris J. Maddison, Jascha Sohl-Dickstein (11 Feb 2017)
Adversarial examples in the physical world, by Alexey Kurakin, Ian J. Goodfellow, Samy Bengio (11 Feb 2017) [invited from the conference track]
Variational Reference Priors, by Eric Nalisnick, Padhraic Smyth (9 Feb 2017)
Development of JavaScript-based deep learning platform and application to distributed training, by Masatoshi Hidaka, Ken Miura, Tatsuya Harada (9 Feb 2017) [invited from the conference track]
Song From PI: A Musically Plausible Network for Pop Music Generation, by Hang Chu, Raquel Urtasun, Sanja Fidler (8 Feb 2017) [invited from the conference track]
Gated Multimodal Units for Information Fusion, by John Arevalo, Thamar Solorio, Manuel Montes-y-Gómez, Fabio A. González (8 Feb 2017) [invited from the conference track]
Adjusting for Dropout Variance in Batch Normalization and Weight Initialization, by Dan Hendrycks, Kevin Gimpel (7 Feb 2017)
Methods for Detecting Adversarial Images and a Colorful Saliency Map, by Dan Hendrycks, Kevin Gimpel (7 Feb 2017)
Charged Point Normalization: An Efficient Solution to the Saddle Point Problem, by Armen Aghajanyan (7 Feb 2017) [invited from the conference track]
Compositional Kernel Machines, by Robert Gens, Pedro Domingos (7 Feb 2017) [invited from the conference track]
DL-gleaning: An approach for Improving inference speed and accuracy, by HyunYong Lee and Byung-Tak Lee (6 Feb 2017)
Towards "AlphaChem": Chemical Synthesis Planning with Tree Search and Deep Neural Network Policies, by Marwin Segler, Mike Preuß, Mark P. Waller (2 Feb 2017)
CommAI: Evaluating the first steps towards a useful general AI, by Marco Baroni, Armand Joulin, Allan Jabri, Germàn Kruszewski, Angeliki Lazaridou, Klemen Simonic, Tomas Mikolov (31 Jan 2017)
Summarized Behavioral Prediction, by Shih-Chieh Su (20 Jan 2017)



