Sunday, September 25, 2016

Sunday Morning Video: Bay Area Deep Learning School Day 2 Live streaming

The Bay Area Deep Learning School Day 2 starts streaming in 30 minutes, at 9:00AM PST / 12:00PM EST / 5:00PM London time / 6:00PM Paris time, and it's all here. The whole schedule is here. Yesterday's video of day 1 has already garnered 16,000 views.



Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Sunday Morning Videos: HORSE2016, On “Horses” and “Potemkin Villages” in Applied Machine Learning

 
While waiting for day 2 of the Bay Area Deep Learning School in three hours, here are the ten videos of the presentations made at the HORSE2016 workshop (On “Horses” and “Potemkin Villages” in Applied Machine Learning) organized by Bob Sturm. Bob presented on that theme at the 9th meetup of Season 3 of the Paris Machine Learning meetup. With this workshop, the field and its attendant issues are becoming more visible, which is outstanding, as they have a bearing on algorithmic bias and explainability. To make the video-watching more targeted, Bob even included commentary with the embedded videos in his blog post. Here is the beginning of the entry; you should go read the whole thing:
 
 
September 19, 2016 saw the successful premiere edition of HORSE2016: On “Horses” and “Potemkin Villages” in Applied Machine Learning. I have now uploaded videos to the HORSE2016 YouTube channel, and posted slides to the HORSE2016 webpage. I embed the videos below with some commentary.
HORSE2016 had 10 speakers expound on a variety of interesting topics, and about 60 people in the audience. I am extremely pleased that the audience included several people from outside academia, including industry, government employees and artists. This shows how many have recognised the extent to which machine learning and artificial intelligence are impacting our daily lives. The issues explored at HORSE2016 are essential to ensuring this impact remains beneficial and not detrimental.
Here is my introductory presentation, “On Horse Taxonomy and Taxidermy”. This talk is all about “horses” in applied machine learning: what are they? Why is this important and relevant today? Why the metaphor, and why is it appropriate? I present an example “horse,” uncovered using an intervention experiment and a generation experiment. Finally, I discuss what a researcher should do if someone demonstrates their system is a “horse”.
 
 
 

Saturday, September 24, 2016

Saturday Morning Video: Bay Area Deep Learning School Day 1 Live streaming


The Bay Area Deep Learning School has already started streaming and it's all here. The whole schedule is here.


Saturday

9:00-10:00: Foundations of Deep Learning (Hugo Larochelle)

10:15-11:45: Deep Learning for Computer Vision (Andrej Karpathy)

I will cover the design of convolutional neural network (ConvNet) architectures for image understanding, the history of state-of-the-art models on the ImageNet Large Scale Visual Recognition Challenge, and some of the most recent patterns of development in this area. I will also talk about ConvNet architectures in the context of related visual recognition tasks such as object detection, segmentation, and video processing.
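To make the ConvNet vocabulary above concrete, here is a minimal sketch (mine, not from the talk) of one convolution + ReLU + max-pool stage in NumPy; all shapes and values are illustrative assumptions:

import numpy as np

def conv2d_valid(x, w):
    """'Valid' 2-D convolution (cross-correlation, as in most deep learning libraries)."""
    H, W = x.shape
    kH, kW = w.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kH, j:j + kW] * w)
    return out

def relu(x):
    return np.maximum(x, 0.0)             # elementwise nonlinearity

def max_pool2x2(x):
    H, W = x.shape
    x = x[:H - H % 2, :W - W % 2]         # crop to an even size
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

image = np.random.randn(28, 28)           # one grayscale input image
filt = np.random.randn(5, 5)              # one learned filter (here random)
feature_map = max_pool2x2(relu(conv2d_valid(image, filt)))
print(feature_map.shape)                  # (12, 12)

Real architectures stack many such stages, with many filters per stage, before a final classifier.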

12:45-2:15: Deep Learning for NLP (Richard Socher)

I will describe the foundations of deep learning for natural language processing: word vectors, recurrent neural networks, and tasks and models influenced by linguistics. I will end with some recent models that put all these basic Lego blocks together into a very powerful deep architecture called a dynamic memory network.
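To make two of those building blocks concrete, here is a minimal sketch (mine, not Socher's) of word vectors fed through a vanilla recurrent network; all sizes are illustrative assumptions:

import numpy as np

vocab, d_emb, d_hid = 1000, 50, 100
rng = np.random.RandomState(0)
E  = rng.randn(vocab, d_emb) * 0.01   # word-vector (embedding) matrix
Wx = rng.randn(d_hid, d_emb) * 0.01   # input-to-hidden weights
Wh = rng.randn(d_hid, d_hid) * 0.01   # hidden-to-hidden weights
b  = np.zeros(d_hid)

def rnn_encode(word_ids):
    """h_t = tanh(Wx x_t + Wh h_{t-1} + b); return the final hidden state."""
    h = np.zeros(d_hid)
    for t in word_ids:
        x = E[t]                      # look up the word vector for token t
        h = np.tanh(Wx @ x + Wh @ h + b)
    return h

sentence = [12, 7, 104, 42]           # token ids for a (hypothetical) sentence
print(rnn_encode(sentence).shape)     # (100,)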


2:45-3:45: TensorFlow Tutorial (Sherry Moore)
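For a flavor of what such a tutorial covers, here is a minimal sketch of the graph-then-session workflow of the early TensorFlow API; exact function names shifted across early versions, so treat this as an assumption-laden sketch rather than the tutorial's actual code:

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 3])    # symbolic input, fed at run time
W = tf.Variable(tf.zeros([3, 1]))                  # parameters live in the graph
b = tf.Variable(tf.zeros([1]))
y = tf.matmul(x, W) + b                            # linear model, defined symbolically

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())    # initialize W and b
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))   # [[0.]]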


4:00-5:30: Foundations of Deep Unsupervised Learning (Ruslan Salakhutdinov)

Building intelligent systems that are capable of extracting meaningful representations from high-dimensional data lies at the core of solving many artificial intelligence tasks, including visual object recognition, information retrieval, speech perception, and language understanding. In this tutorial I will discuss the mathematical basics of many popular unsupervised models, including Sparse Coding, Autoencoders, Restricted Boltzmann Machines (RBMs), Deep Boltzmann Machines (DBMs), and Variational Autoencoders (VAEs). I will further demonstrate that these models are capable of extracting useful hierarchical representations from high-dimensional data, with applications in visual object recognition, information retrieval, and natural language processing. Finally, time permitting, I will briefly discuss models that can generate natural language descriptions (captions) of images, as well as generate images from captions using an attention mechanism.
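As a concrete taste of the simplest model in that list, here is a minimal autoencoder sketch (mine, not the tutorial's) taking manual gradient steps on the squared reconstruction error; sizes and learning rate are illustrative assumptions:

import numpy as np

rng = np.random.RandomState(0)
d_in, d_code, lr = 20, 5, 0.1
W1 = rng.randn(d_code, d_in) * 0.1        # encoder weights
W2 = rng.randn(d_in, d_code) * 0.1        # decoder weights
x = rng.randn(d_in)                       # one training example

for step in range(200):
    h = np.tanh(W1 @ x)                   # encode into a low-dimensional code
    x_hat = W2 @ h                        # decode back to input space
    err = x_hat - x                       # gradient of 0.5 * ||x_hat - x||^2 w.r.t. x_hat
    grad_W2 = np.outer(err, h)
    grad_h = W2.T @ err
    grad_W1 = np.outer(grad_h * (1 - h ** 2), x)   # backprop through tanh
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

print(0.5 * np.sum((W2 @ np.tanh(W1 @ x) - x) ** 2))   # reconstruction error has shrunk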








6:00-7:00: Nuts and bolts of applying deep learning (Andrew Ng)

Sunday






9:00-10:30: Policy Gradients and Q-Learning: Rise to Power, Rivalry, and Reunification (John Schulman)

I'll start by providing an overview of the state of the art in deep reinforcement learning, including recent applications to video games (e.g., Atari), board games (AlphaGo) and simulated robotics. Then I'll give a tutorial introduction to the two methods that lie at the core of these results: policy gradients and Q-learning. Finally, I'll present a new analysis that shows the close similarity between these two methods. A theme of the talk will be to not only ask "what works?", but also "when does it work?" and "why does it work?"; and to find the kind of answers that are actionable for tuning one's implementation and designing better algorithms.
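To make the Q-learning half of the talk concrete, here is a minimal tabular sketch (mine, not Schulman's) of the update Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)), run on a toy chain MDP of my own invention:

import numpy as np

n_states, n_actions = 5, 2                 # toy chain; actions: 0 = left, 1 = right
alpha, gamma, eps = 0.1, 0.95, 0.1
Q = np.ones((n_states, n_actions))         # optimistic initialization encourages exploration
rng = np.random.RandomState(0)

def step(s, a):
    """Deterministic chain: taking 'right' in the last state pays 1 and resets."""
    if s == n_states - 1 and a == 1:
        return 0, 1.0
    return (s + 1 if a == 1 else max(s - 1, 0)), 0.0

s = 0
for _ in range(5000):
    a = rng.randint(n_actions) if rng.rand() < eps else int(np.argmax(Q[s]))
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])   # bootstrap off the greedy next action
    s = s2

print(np.argmax(Q, axis=1))                # learned greedy policy; should be all 1s ("go right")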






10:45-11:45: Theano Tutorial (Pascal Lamblin)

Theano is a Python library that allows one to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently, on CPU or GPU. Since its introduction, Theano has been one of the most popular frameworks in the machine learning community, and multiple frameworks for deep learning have been built on top of it (Lasagne, Keras, Blocks, ...). This tutorial will focus first on the concepts behind Theano and how to build and evaluate simple expressions, and then we will see how more complex models can be defined and trained.
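For a flavor of the "build and evaluate simple expressions" part, here is a minimal sketch of the core Theano workflow (symbolic variables, symbolic gradients, compiled functions); the expression itself is an illustrative assumption:

import theano
import theano.tensor as T

x = T.dvector('x')                  # symbolic vector of doubles
y = T.sum(x ** 2)                   # symbolic scalar expression
gy = T.grad(y, x)                   # symbolic gradient: 2 * x

f = theano.function([x], [y, gy])   # compile the graph (for CPU or GPU)
print(f([1.0, 2.0, 3.0]))           # [14.0, [2. 4. 6.]]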

12:45-2:15: Deep Learning for Speech (Adam Coates)

Traditional speech recognition systems are built from numerous modules, each requiring its own challenging engineering. With deep learning it is now possible to create neural networks that perform most of the tasks of a traditional engine "end to end", dramatically simplifying the development of new speech systems and opening a path to human-level performance. In this tutorial, we will walk through the steps for constructing one type of end-to-end system, similar to Baidu's "Deep Speech" model. We will put all of the pieces together to form a "scale model" of a state-of-the-art speech system: small-scale versions of the neural networks now powering production speech engines.
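One small piece of such an end-to-end pipeline can be sketched directly: greedy decoding of per-frame character probabilities under CTC, the loss used by Deep Speech-style models (collapse repeats, then drop blanks). The alphabet and probabilities below are illustrative assumptions, not Baidu's:

import numpy as np

alphabet = ['_', 'a', 'c', 't']            # index 0 is the CTC blank '_'

def ctc_greedy_decode(probs):
    """probs: (T, len(alphabet)) per-frame distributions from the acoustic network."""
    best = probs.argmax(axis=1)            # most likely symbol in each frame
    out, prev = [], None
    for k in best:
        if k != prev and k != 0:           # collapse repeats, then drop blanks
            out.append(alphabet[k])
        prev = k
    return ''.join(out)

probs = np.zeros((8, 4))
for t, k in enumerate([2, 2, 0, 1, 1, 0, 3, 3]):   # frames spell c c _ a a _ t t
    probs[t, k] = 1.0
print(ctc_greedy_decode(probs))            # cat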

2:45-3:45: Torch Tutorial (Alex Wiltschko)

Torch is an open platform for scientific computing in the Lua language, with a focus on machine learning, in particular deep learning. Torch is distinguished from other array libraries by having first-class support for GPU computation and a clear, interactive, and imperative style. Further, through the "NN" library, Torch has broad support for building and training neural networks by composing primitive blocks or layers together in compute graphs. Torch, although benefiting from extensive industry support, is a community-owned and community-developed ecosystem. All neural net libraries, including Torch NN, TensorFlow and Theano, rely on automatic differentiation (AD) to manage the computation of gradients of complex compositions of functions. I will present some general background on automatic differentiation (AD), which is the fundamental abstraction of gradient-based optimization, and demonstrate Twitter's flexible implementation of AD in the library torch-autograd.
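To illustrate the AD abstraction the talk centers on, here is a minimal forward-mode sketch using dual numbers; torch-autograd implements the reverse-mode counterpart, but the core idea of propagating exact derivatives through overloaded operations is the same (the example function is an illustrative assumption):

class Dual:
    """A number carrying its value and its derivative with respect to one input."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)   # product rule
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1       # f'(x) = 6x + 2

x = Dual(4.0, 1.0)                     # seed the derivative dx/dx = 1
y = f(x)
print(y.val, y.dot)                    # 57.0 26.0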

4:00-5:30: Sequence to Sequence Learning for NLP and Speech (Quoc Le)

I will first present the foundations of sequence-to-sequence (seq2seq) learning and attention models, and their applications in machine translation and speech recognition. Then I will discuss attention with pointers and functions. Finally I will describe how reinforcement learning can play a role in seq2seq and attention models.
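Here is a minimal sketch (mine, not Quoc's) of the attention block inside a seq2seq decoder: score each encoder state against the current decoder state, softmax the scores, and return the weighted average as the context vector; all sizes are illustrative assumptions:

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attend(encoder_states, decoder_state):
    """Dot-product attention: encoder_states is (T, d), decoder_state is (d,)."""
    scores = encoder_states @ decoder_state    # one score per source position
    weights = softmax(scores)                  # normalized attention weights
    context = weights @ encoder_states         # weighted average, shape (d,)
    return context, weights

rng = np.random.RandomState(0)
H = rng.randn(6, 8)                            # six encoder hidden states of size 8
s = rng.randn(8)                               # current decoder state
context, w = attend(H, s)
print(w.round(2), context.shape)               # weights sum to 1; context is (8,)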

6:00-7:00: Foundations and Challenges of Deep Learning (Yoshua Bengio)

Why is deep learning working as well as it does? What are some big challenges that remain ahead? This talk will first survey some key factors in the success of deep learning. First, in the context of the no-free-lunch theorem, we will discuss the expressive power of deep networks to capture abstract distributed representations. Second, we will discuss our surprising ability to actually optimize the parameters of neural networks in spite of their non-convexity. We will then consider a few challenges ahead, including the core representation question of disentangling the underlying explanatory factors of variation, especially with unsupervised learning; why this is important for bringing reinforcement learning to the next level; and optimization questions that remain challenging, such as the learning of long-term dependencies, understanding the optimization landscape of deep networks, and how learning in brains remains a mystery worth attacking from the deep learning perspective.


Friday, September 23, 2016

It's Friday, it's Hamming's time: Call for deep learning research problems from your "problem surplus" stack


François Chollet, the creator of Keras, tweeted the following interesting proposition:

Are you a researcher? Then you probably have a "problem surplus": a list of interesting and important research problems that you don't have time to work on yourself. What if you could outsource some of these problems to distributed teams of motivated students and independent researchers looking to build experience in deep learning? You just have to submit the description of your problem, some pointers as to how to get started, and provide lightweight supervision along the way (occasionally answer questions, provide feedback, suggest experiments to try...).
What you get out of this:
  • Innovative solutions to research problems that matter to you.
  • Full credits for the value you provide along the research process.
  • New contacts among bright people outside of your usual circles.
  • A fun experience.
We are looking for both deep learning research problems, and problems from other fields that could be solved using deep learning.
Note that the information you submit here may be made public (except for your contact information). We will create a website listing the problems submitted, where people will be able to self-organize into teams dedicated to specific problems. You will be in contact with the people working on your problem via a mailing list. The research process will take place in the open, with communications being publicly available and code being released on GitHub.

 Here are some problems:

Enhanced NAVCAM image of Comet 67P/C-G taken on 18 September 2016, 12.1 km from the nucleus centre. The scale is 1.0 m/pixel and the image measures about 1.1 km across. Credits: ESA/Rosetta/NAVCAM – CC BY-SA IGO 3.0

CSjobs: Lecturers (Assistant Professors) in: Machine Learning & Computer Vision; Robot Vision & Autonomous Systems

Mark just sent me the following:

Dear Igor, I thought that the Lecturer (Assistant Professor) jobs below may be of interest to Nuit Blanche readers. Best wishes, Mark

Here is the announcement:
----
Lecturers (Assistant Professors) in: Machine Learning & Computer Vision; Robot Vision & Autonomous Systems

Centre for Vision, Speech and Signal Processing (CVSSP)
University of Surrey, UK

Salary: GBP 39,324 to 46,924 per annum

Closing Date: Monday 31 October 2016

https://jobs.surrey.ac.uk/070216

The University offers a unique opportunity for two individuals with outstanding research and leadership records to join the Centre for Vision, Speech and Signal Processing (CVSSP).

The successful candidates are expected to build research project portfolios that complement existing CVSSP strengths. The centre seeks to appoint two individuals with an excellent research track record and international profile to lead future growth of research activities in one or more of the following areas:

 * Machine Learning & Pattern Recognition
 * Computer Vision
 * Robot Vision & Autonomous Systems
 * Intelligent Sensing and Sensor Networks
 * Audio-Visual Signal and Media Processing
 * Big Visual Data Understanding
 * Machine Intelligence

We now seek individuals with strong research track records and leadership potential who can develop the existing activities of CVSSP and exploit the synergies that exist within the centre, across the University, and regionally with UK industry. You will possess proven management and leadership qualities, demonstrating achievements in scholarship and research at a national and international level, and will have experience of teaching within HE.

CVSSP is one of the primary centres for computer vision & audio-visual signal processing in Europe, with over 120 researchers, a grant portfolio of £18M, and a track record of pioneering research leading to technology transfer in collaboration with UK industry. CVSSP forms part of the Department of Electronic Engineering, recognised as a top department for both teaching and research: surrey.ac.uk/ee.

For an informal discussion, please contact Professor Adrian Hilton, Director of CVSSP (a.hilton@surrey.ac.uk). Further details of CVSSP: www.surrey.ac.uk/cvssp

Interviews are expected to take place in the week commencing 21st November 2016.

Further details: https://jobs.surrey.ac.uk/070216

We can offer a generous remuneration package, which includes relocation assistance where appropriate, an attractive research environment, the latest teaching facilities, and access to a variety of staff development opportunities.

We acknowledge, understand and embrace diversity.

--
Prof Mark D Plumbley
Professor of Signal Processing
Centre for Vision, Speech and Signal Processing (CVSSP)
University of Surrey
Guildford, Surrey, GU2 7XH, UK
Email: m.plumbley@surrey.ac.uk






Thursday, September 22, 2016

Hamiltonian Monte Carlo Acceleration Using Surrogate Functions with Random Bases

We just had a meetup on Stan where Eric mentioned Michael Betancourt's video presentation on Hamiltonian Monte Carlo (see below), and I note that random features expansions can speed up some of these computations:


Hamiltonian Monte Carlo Acceleration Using Surrogate Functions with Random Bases by Cheng Zhang, Babak Shahbaba, Hongkai Zhao

For big data analysis, the high computational cost of Bayesian methods often limits their applications in practice. In recent years, there have been many attempts to improve the computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo (MCMC) method, namely, Hamiltonian Monte Carlo (HMC). The key idea is to explore and exploit the structure and regularity in parameter space for the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
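To make the object being accelerated concrete, here is a minimal sketch of plain HMC on a one-dimensional standard normal target; the leapfrog integrator below is exactly where a cheap surrogate gradient of the log-target would be substituted. Step size and trajectory length are illustrative assumptions:

import numpy as np

def grad_neg_log_p(q):
    return q                          # target: standard normal, so -log p(q) = q**2 / 2

def hmc_step(q, eps=0.1, L=20, rng=np.random):
    p = rng.randn()                   # resample the momentum
    q0, p0 = q, p
    p -= 0.5 * eps * grad_neg_log_p(q)             # leapfrog: half step in momentum
    for _ in range(L - 1):
        q += eps * p                               # full step in position
        p -= eps * grad_neg_log_p(q)               # full step in momentum
    q += eps * p
    p -= 0.5 * eps * grad_neg_log_p(q)
    dH = (q ** 2 + p ** 2) / 2 - (q0 ** 2 + p0 ** 2) / 2   # change in the Hamiltonian
    return q if np.log(rng.rand()) < -dH else q0   # Metropolis accept/reject

samples, q = [], 0.0
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
print(np.mean(samples), np.std(samples))           # approximately 0 and 1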



The Geometric Foundations of Hamiltonian Monte Carlo
M. J. Betancourt, Simon Byrne, Samuel Livingstone, Mark Girolami

Although Hamiltonian Monte Carlo has proven an empirical success, the lack of a rigorous theoretical understanding of the algorithm has in many ways impeded both principled developments of the method and use of the algorithm in practice. In this paper we develop the formal foundations of the algorithm through the construction of measures on smooth manifolds, and demonstrate how the theory naturally identifies efficient implementations and motivates promising generalizations.



Wednesday, September 21, 2016

Paris Machine Learning Meetup, Hors série #2: Scalable Machine Learning with H2O


Tonight, we will have Paris Machine Learning Meetup, Hors série #2, on Scalable Machine Learning with H2O. Jiqiong Qiu is a co-organizer of this meetup with Franck Bardol and Igor Carron. This "Hors Série 2" will focus on a few presentations of H2O (H2O.ai) by Jo-fai Chow and Jakub Háva. The meetup will be hosted and sponsored by Murex.

If you want to try H2O, feel free to follow this installation guide. We provide not only the standalone installation guide but also a Dockerfile with H2O + TensorFlow + Spark.

All the presentations are at: http://bit.ly/h2o_paris_1


1. Introduction to Machine Learning with H2O (Joe - 30 mins)
In this talk, I will walk you through our company (H2O.ai), our open-source machine learning platform (H2O) and use cases from some of our users. This will be useful for attendees who are not familiar with H2O.
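For readers who want a feel for the platform before the talk, here is a minimal sketch of the H2O Python workflow; the file name and column names are illustrative assumptions, and the label is assumed binary:

import h2o
from h2o.estimators.gbm import H2OGradientBoostingEstimator

h2o.init()                                   # start (or connect to) a local H2O cluster
frame = h2o.import_file("train.csv")         # hypothetical CSV; becomes a distributed H2OFrame
frame["label"] = frame["label"].asfactor()   # treat the (assumed binary) target as categorical

features = [c for c in frame.columns if c != "label"]
model = H2OGradientBoostingEstimator(ntrees=50)
model.train(x=features, y="label", training_frame=frame)
print(model.auc())                           # training AUC for the binomial model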

2. Project “Deep Water” (H2O integration with TensorFlow and other deep learning libraries) (Joe - 30 mins)
The “Deep Water” project is about integrating our H2O platform with other open-source deep learning libraries, such as TensorFlow from Google. We are also working on a GPU implementation for H2O. In this talk, I will discuss the motivation and potential benefits of our recent project named “Deep Water”.

3. Sparkling Water 2.0 (Jakub - 45 mins)
Sparkling Water integrates the H2O open-source distributed machine learning platform with the capabilities of Apache Spark. It allows users to leverage H2O’s machine learning algorithms within Apache Spark applications via Scala, Python, R or H2O’s Flow GUI, which makes Sparkling Water a great enterprise solution. Sparkling Water 2.0 was built to coincide with the release of Apache Spark 2.0 and introduces several new features. These include the ability to use H2O frames as an Apache Spark SQL datasource, transparent integration into Apache Spark machine learning pipelines, the power to use Apache Spark algorithms via the Flow GUI, and easier deployment of Sparkling Water in a Python environment. In this talk we will introduce the basic architecture of Sparkling Water and provide an overview of the new features available in Sparkling Water 2.0. The talk will also include a live demo showing how to integrate H2O algorithms into Apache Spark pipelines – no terminal needed!

About Joe: https://uk.linkedin.com/in/jofaichow
Jo-fai (or Joe) is a data scientist at H2O.ai. Before joining H2O, he was in the business intelligence team at Virgin Media, where he developed data products to enable quick and smart business decisions. He also worked (part-time) for Domino Data Lab as a data science evangelist, promoting products via blogging and giving talks at meetups. Joe has a background in water engineering. Before his data science journey, he was an EngD researcher at the STREAM Industrial Doctorate Centre working on machine learning techniques for drainage design optimization. Prior to that, he was an asset management consultant specializing in data mining and constrained optimization for the utilities sector in the UK and abroad. He also holds an MSc in Environmental Management and a BEng in Civil Engineering.

About Jakub: https://cz.linkedin.com/in/havaj
Jakub (or “Kuba”) finished his bachelor's degree in computer science at Charles University in Prague, and is currently finishing his master's in software engineering as well. For his bachelor's thesis, Kuba wrote a small platform for distributed computing of tasks of any type. In his current master's studies, he is developing a cluster monitoring tool for JVM-based languages, which should make debugging and reasoning about the performance of distributed systems easier using a concept called distributed stack traces. At H2O, Kuba mostly works on the Sparkling Water project.

       




Tuesday, September 20, 2016

Paris Machine Learning Meetup, Hors série #1: Introduction to Bayesian Inference with Stan and R


Dataiku is co-organizing this first Hors série Paris Machine Learning meetup of the season. To register for the event, you need to go there. Streaming video is here and will start at around 7:00PM Paris time.

       

Presentation slides should be available before the meetup:

Eric Novik, Introduction to Bayesian Inference with Stan and R
Stan is a modern, high-performance probabilistic programming language that interfaces with R, Python, Julia, Matlab, and Stata. Eric Novik, one of the organizers of the NYC Bayesian Data Analysis meetup and a founder of Stan Group, will be in town to give a brief talk on Stan.
We will build up a simple Stan model from scratch to demonstrate various parts of a Stan program, and will also present a multi-level, overdispersed Poisson model that can be used for pricing products in a retail setting. We fit the latter model with the rstanarm package, which can be thought of as a fully Bayesian equivalent to lme4.
After the talk, Eric and Stan core developers Daniel Lee and Michael Betancourt will answer questions from the audience and offer some thoughts on the future of statistical inference and Bayesian computing.
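To give a concrete feel for what a Stan program looks like, here is a minimal sketch using PyStan: a plain Poisson regression with a log link (the multi-level, overdispersed version discussed in the talk adds group-level terms). The simulated data and priors are illustrative assumptions:

import numpy as np
import pystan

model_code = """
data {
  int<lower=0> N;
  vector[N] x;
  int<lower=0> y[N];
}
parameters {
  real alpha;
  real beta;
}
model {
  alpha ~ normal(0, 5);
  beta ~ normal(0, 5);
  y ~ poisson_log(alpha + beta * x);   // log link: E[y] = exp(alpha + beta * x)
}
"""

rng = np.random.RandomState(0)
N = 100
x = rng.randn(N)
y = rng.poisson(np.exp(0.5 + 0.8 * x))            # simulate from the assumed model

fit = pystan.StanModel(model_code=model_code).sampling(
    data={'N': N, 'x': x, 'y': y}, iter=1000, chains=4)
print(fit)                                        # posterior summaries for alpha and beta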
       
Eric Kramer, DSS + Stan



       

CSjobs: Two Ph.D. positions at TU Wien (Austria) and Aalto University (Finland)

Alex just sent me the following:

Hi Igor,

We have two Ph.D. positions on the application of Compressed Sensing to Complex Networks available at TU Vienna and Aalto University. It would be great if you could circulate the ad on your blog.

Thanks, Alex


Sure Alex!

-----

The Institute of Telecommunications at TU Wien (Vienna, Austria) has two openings for full-time Ph.D. positions to conduct cutting-edge research in the general area of signal processing for big data, in close collaboration with the Machine Learning for Big Data Group (http://research.cs.aalto.fi/MLBigDat) at Aalto University (Finland).

In the context of the WWTF project Co3-iGraB, we develop foundations for extracting useful information from massive data sets. The key idea is to tackle big data problems by a combination of graph signal processing for complex networks and compressed sensing. The project is anticipated to lead to novel and versatile inference tools that are applicable to numerous real-world problems.

DESIRED SKILLS AND EXPERIENCE: We are looking for talented and passionate researchers to work on problems in statistical/sparse graph signal processing, graph learning, and distributed/non-convex optimization. Interested candidates should hold a master's degree (or equivalent) in Electrical Engineering, Mathematics, Computer Science, or a related discipline, and show a strong background in mathematics and analytical thinking. Expertise in signal processing, statistics, optimization, information theory, or distributed computing is advantageous. Excellent written and oral communication skills in English are a must.

Applications including a cover letter, a detailed curriculum vitae, course transcripts, and the names of two referees should be sent by email to Prof. Gerald Matz (gerald.matz@tuwien.ac.at) with cc to Prof. Alex Jung (alexander.jung@aalto.fi) no later than Sept. 30, 2016.

ABOUT THE EMPLOYER: The selected candidates will be offered a three-year Ph.D. contract with a competitive salary and will have the opportunity to be part of a dynamic and internationally recognized team in an inspiring research environment. Multiple long-term stays with our collaboration partner at Aalto University (Finland) are envisaged. TU Wien is a leading academic institution, located in Austria's capital Vienna, offering superior quality of living in the heart of Europe.

More information can be found here.




Monday, September 19, 2016

CfP: SPARS 2017 Lisbon, Portugal, June 5-8, 2017


The SPARS 2017 workshop will take place in Lisbon, and the organizers have issued a call for papers:

The Signal Processing with Adaptive Sparse Structured Representations (SPARS) workshop aims at bringing together people from statistics, engineering, mathematics, and computer science, fostering the exchange and dissemination of new ideas and results, both applied and theoretical, in the general area of sparsity-related techniques and computational methods, for high-dimensional data analysis, signal processing, and related applications.
SPARS 2017 will be held at Instituto Superior Técnico (IST), the engineering school of the University of Lisbon, on June 5-8, 2017.

In addition to 8 plenary lectures, the workshop will feature a single-track format with approximately 30 standard (20 min) talks, and 3 poster/demo sessions.

Main topics:

• Sparse coding and representations, and dictionary learning.
• Sparse and low-rank approximation algorithms.
• Compressive sensing and learning.
• Dimensionality reduction and feature extraction.
• Sparsity measures in approximation theory, information theory, and statistics.
• Low-complexity/low-dimensional regularization.
• Statistical and Bayesian models and algorithms for sparsity.
• Sparse network theory and analysis.
• Sparsity and low-rank regularization.
• Applications.

Invited speakers:
• Yoram Bresler, Beckman Institute, University of Illinois, USA.
• Volkan Cevher, École Polytechnique Fédérale de Lausanne, Switzerland.
• Jalal Fadili, École Nationale Supérieure d'Ingénieurs de Caen, France.
• Anders Hansen, University of Cambridge, UK.
• Gitta Kutyniok, Technische Universität Berlin, Germany.
• Philip Schniter, Ohio State University, USA.
• Eero Simoncelli, Howard Hughes Medical Institute, New York University, USA.
• Rebecca Willett, University of Wisconsin, USA.

A SpaRTan/MacSeNet Summer School on Sparse Representations will be organized during the week before SPARS 2017. More information will follow shortly.






