So it looks like there will be more than 5700 people attending NIPS this year in Barcelona.

@IgorCarron @cmarschner my figure is already out of date ...— Neil Lawrence (@lawrennd) December 3, 2016

I should be one of them starting Thursday. If you are on your way to NIPS, you may want to:

- read Neil Lawrence's "Don't Panic: Deep Learning will be Mostly Harmless" blog post.
- schedule your way around NIPS with the NIPS2016 scheduler
- check the Twitter hashtag #NIPS2016 to see who's coming
- read the Salon des Refusés papers (the papers rejected from NIPS 2016 are listed below). It's a shame there isn't a similar list of rejected workshops, because I know of at least one.

- if you are attending the 13th European Workshop on Reinforcement Learning (EWRL 2016), here is the program. All the accepted papers are here. The hashtag seems to be #EWRL2016
- make sure you have already downloaded the 170-page conference book and the 70-page workshop book, since downloading them once inside the convention center would put a strain on the conference wifi. The pre-proceedings, which link to some of the papers, are here.

Clustering with a Reject Option: Interactive Clustering as Bayesian Prior Elicitation

Grad-CAM: Why did you say that? Visual Explanations from Deep Networks via Gradient-based Localization

Differential response of the retinal neural code with respect to the sparseness of natural images

On Enumerating Stable Configurations of Cellular Automata with the MAJORITY update rule

Word2Vec is a special case of Kernel Correspondence Analysis and Kernels for Natural Language Processing

Neural Sampling by Irregular Gating Inhibition of Spiking Neurons and Attractor Networks

Node-Adapt, Path-Adapt and Tree-Adapt: Model-Transfer Domain Adaptation for Random Forest

Exploring and measuring non-linear correlations: Copulas, Lightspeed Transportation and Clustering

Quantifying the probable approximation error of probabilistic inference programs

A performance-based approach to design the stimulus presentation paradigm for the P300-based BCI

On Minimal Accuracy Algorithm Selection in Computer Vision and Intelligent Systems

Differential Covariance: A New Class of Methods to Estimate Sparse Connectivity from Neural Recordings

Deep Variational Bayes Filters: Unsupervised Learning of State Space Models from Raw Data

Unsupervised Learning of Word-Sequence Representations from Scratch via Convolutional Tensor Decomposition

The Shallow End: Empowering Shallower Deep-Convolutional Networks through Auxiliary Outputs

One Class Splitting Criteria for Random Forests with Application to Anomaly Detection

Predictive Coding for Dynamic Vision: Development of Functional Hierarchy in a Multiple Spatio-Temporal Scales RNN Model

Dataflow matrix machines as programmable, dynamically expandable, self-referential generalized recurrent neural networks

Scan, Attend and Read: End-to-End Handwritten Paragraph Recognition with MDLSTM Attention

Convergence Rate Analysis of a Stochastic Trust Region Method for Nonconvex Optimization

Kernel regression, minimax rates and effective dimensionality: beyond the regular case

Improved Multi-Class Cost-Sensitive Boosting via Estimation of the Minimum-Risk Class

### Accepted Papers

- *Non-Deterministic Policy Improvement Stabilizes Approximated Reinforcement Learning.* Wendelin Böhmer, Rong Guo and Klaus Obermayer
- *Batch policy iteration algorithms for continuous domains.* Bilal Piot, Matthieu Geist and Olivier Pietquin
- *Corrupt Bandits.* Pratik Gajane, Tanguy Urvoy and Emilie Kaufmann
- *Exploration Potential.* Jan Leike
- *Toward a data efficient neural actor-critic.* Matthieu Zimmer, Yann Boniface and Alain Dutech
- *Bayesian Optimal Policies for Asynchronous Bandits with Known Trends.* Mohammed Amine Alaoui, Tanguy Urvoy and Fabrice Clérot
- *Online Linear Programming with Unobserved Constraints.* Wenzhuo Yang, Shie Mannor and Huan Xu
- *Consistent On-Line Off-Policy Evaluation.* Assaf Hallak and Shie Mannor
- *Automatic Representation for Life-Time Value Recommender Systems.* Assaf Hallak, Elad Yom-Tov and Yishay Mansour
- *A Lower Bound for Multi-Armed Bandits with Expert Advice.* Yevgeny Seldin and Gábor Lugosi
- *Principled Option Learning in Markov Decision Processes.* Roy Fox, Michal Moshkovitz and Naftali Tishby
- *Accelerated Gradient Temporal Difference Learning.* Yangchen Pan, Adam White and Martha White
- *Memory Lens: How Much Memory Does an Agent Use?* Christoph Dann, Katja Hofmann and Sebastian Nowozin
- *Spatio-Temporal Abstractions in Reinforcement Learning Through Neural Encoding.* Nir Baram, Tom Zahavy and Shie Mannor
- *Iterative Hierarchical Optimization for Misspecified Problems.* Daniel J. Mankowitz, Timothy Mann and Shie Mannor
- *Situational Awareness by Risk-Conscious Skills.* Daniel J. Mankowitz, Aviv Tamar and Shie Mannor
- *A Deep Hierarchical Approach to Lifelong Learning in Minecraft.* Chen Tessler, Shahar Givony, Daniel J. Mankowitz, Tom Zahavy and Shie Mannor
- *Deep Reinforcement Learning Solutions for Energy Microgrids Management.* Vincent Francois-Lavet, David Taralla, Damien Ernst and Raphael Fonteneau
- *Exploration–Exploitation in MDPs with Options.* Ronan Fruit and Alessandro Lazaric
- *Using Policy Gradients to Account for Changes in Behavior Policies under Off-policy Control.* Lucas Lehnert and Doina Precup
- *Linear Thompson Sampling Revisited.* Marc Abeille and Alessandro Lazaric
- *Magical Policy Search: Data Efficient Reinforcement Learning with Guarantees of Global Optimality.* Philip Thomas and Emma Brunskill
- *Robust Kalman Temporal Difference.* Shirli Di-Castro Shashua and Shie Mannor
- *Approximations of the Restless Bandit Problem.* Steffen Grunewalder and Azadeh Khaleghi
- *Decoding multitask DQN in the world of Minecraft.* Lydia Liu, Urun Dogan and Katja Hofmann
- *Why is Posterior Sampling Better than Optimism for Reinforcement Learning?* Ian Osband and Benjamin Van Roy
- *Value-Aware Loss Function for Model Learning in Reinforcement Learning.* Amir-Massoud Farahmand, Andre Barreto and Daniel Nikovski

**Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there!**

Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle, and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.