Saturday, May 30, 2015

Nuit Blanche in Review ( May 2015 )

What happened since the Nuit Blanche in Review ( April 2015 )? For one, New Horizons is getting closer to the planet Pluto (yes, it's my blog, I can call whatever body in the solar system a planet). I started a Facebook page for Nuit Blanche. We also had 12 implementations made available by their respective authors. Nuit Blanche reached 4 million page views, saw the great convergence in action, wondered about the next steps after commodity sequencing, contemplated some blue skies, hoped that compressive sensing could solve one of the most important problems of our times, got some feedback from readers, had an interesting machine learning meetup in Paris, and saw compressive sensing at work. We also had some in-depth studies, job offers, a book announcement and some videos.

Implementations:

  1. Four million page views: a million here, a million there and soon enough we're talking real readership... 
  2. The Great Convergence: Deep Learning, Compressive Sensing, Advanced Matrix Factorization
  3. The Great Convergence: FlowNet: Learning Optical Flow with Convolutional Networks  
  4. Hamming's time: The Important Things after Commodity Sequencing 
  5. Blue Skies: Foundational principles for large scale inference, Stochastic Simulation and Optimization Methods in Signal Processing, Streaming and Online Data Mining, Kernel Models and more.
  6. On Uncertainty Quantification of Lithium-ion Batteries
  7. Reader's Digest: Gaussian vs Structured Projections, Improving A2I implementations, Kaczmarz method, Xampling the Future
  8. Low-Cost Compressive Sensing for Color Video and Depth
  9. Tonight! Paris Machine Learning Meetup #9, Season 2: ML @Quora and @Airbus and in HFT, Tax, APIs war 
  10. Compressive Sensing at work: Three-dimensional coordinates of individual atoms in materials, Measurement of a 16.8 Million-Dimensional Entangled Probability Distribution, WiFi Fingerprinting in Indoor Environment, Airborne gravimetry data reconstruction
In depth

Job:
Video
credit photo: NASA
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Friday, May 29, 2015

On Uncertainty Quantification of Lithium-ion Batteries

Forget determining the exact location of atoms or people, or computing unimaginably large quantum mechanical probability distributions: what if compressive sensing enabled us to figure out how to extend the lifetime of our smartphone batteries? This is what this uncertainty quantification study using compressive sensing tries to do.

Figure from this tweet.


In this work, a stochastic, physics-based model for Lithium-ion batteries (LIBs) is presented in order to study the effects of model uncertainties on the cell capacity, voltage, and concentrations. To this end, the proposed uncertainty quantification (UQ) approach, based on sparse polynomial chaos expansions, relies on a small number of battery simulations. Within this UQ framework, the identification of most important uncertainty sources is achieved by performing a global sensitivity analysis via computing the so-called Sobol' indices. Such information aids in designing more efficient and targeted quality control procedures, which consequently may result in reducing the LIB production cost. An LiC6/LiCoO2 cell with 19 uncertain parameters discharged at 0.25C, 1C and 4C rates is considered to study the performance and accuracy of the proposed UQ approach. The results suggest that, for the considered cell, the battery discharge rate is a key factor affecting not only the performance variability of the cell, but also the determination of most important random inputs.
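To make the compressive sensing connection concrete, here is a minimal, hypothetical sketch (not the authors' code) of how first-order Sobol' indices drop out of a polynomial chaos expansion once its coefficients are fitted. The toy model, sample sizes and names below are all made up; with only a handful of battery simulations available, the plain least-squares fit would typically be replaced by a sparsity-promoting (l1) solver, which is where sparse PCE meets compressive sensing.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# Orthonormal Legendre polynomials for inputs uniform on [-1, 1]
def legendre(n, x):
    if n == 0:
        return np.ones_like(x)
    if n == 1:
        return np.sqrt(3.0) * x
    return np.sqrt(5.0) * 0.5 * (3.0 * x**2 - 1.0)   # degree 2 is enough for this toy

# Toy "simulator" standing in for the battery model (2 uncertain inputs)
def model(x):
    return 1.0 + 2.0 * x[:, 0] + 0.5 * x[:, 1] + 1.5 * x[:, 0]**2

# Total-degree-2 multi-indices over 2 variables
alphas = [a for a in product(range(3), repeat=2) if sum(a) <= 2]

# Experimental design: a small number of model runs
X = rng.uniform(-1.0, 1.0, size=(40, 2))
y = model(X)

# Regression matrix of basis evaluations; with very few runs, replace
# np.linalg.lstsq with an l1 solver to obtain a *sparse* PCE
Psi = np.column_stack([legendre(a[0], X[:, 0]) * legendre(a[1], X[:, 1]) for a in alphas])
c, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# Sobol' indices read off the PCE coefficients (orthonormal basis)
var_total = sum(ci**2 for ci, a in zip(c, alphas) if sum(a) > 0)
for i in range(2):
    si = sum(ci**2 for ci, a in zip(c, alphas) if a[i] > 0 and sum(a) == a[i]) / var_total
    print(f"first-order Sobol index S{i+1} ~ {si:.3f}")
```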
 
 
 

Reader's Digest: Gaussian vs Structured Projections, Improving A2I implementations, Kaczmarz method, Xampling the Future

Following up on this recent blog entry entitled Compressed Nonnegative Matrix Factorization is Fast and Accurate, I went ahead and asked the authors Mariano Tepper and Guillermo Sapiro a dumb question:

Dear Mariano and Guillermo,

....I recently featured your latest interesting preprint on "Compressed Nonnegative Matrix Factorization is Fast and Accurate".

I noted that you made a specific statement that Gaussian projections were not as accurate as the structured projections you suggest in the paper. I was wondering the following: can a similar accuracy be obtained by Gaussian random projections provided there are more projections (compared to the structured ones you have devised)? I realize it sounds like a silly question, but I am wondering if, during your different runs, you had observed that Gaussian projections could indeed reach the accuracy of your structured projections with a reasonable additional set of projections (the key part of the question obviously resides in "reasonable"). Thanks in advance for your time.
Cheers,
Igor.

Mariano was kind enough to answer:


Hi Igor,

Thanks for featuring our paper! ...

Your intuition is right, adding more random projections (i.e., using less compression) leads to improved results. We have done many experiments regarding this particular point. Let me summarize the two key points from our observations:
  • In general, MANY more random projections need to be added, not just a few.
  • The amount of compression varies significantly with the data being used (some data are harder to compress). This variability is very much attenuated when using compression techniques that exploit the data structure.

I attach a simple plot showing, for a synthetic dataset, how the results vary with the compression level (the compression level is the number of projections). In this case, the increase in performance has a quite weak dependency on the number of projections. These types of experiments were left out of our journal submission because of space constraints.

Hope this clarifies a bit more your comment.
 
Best,

--
Mariano
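For readers who want to see the qualitative effect Mariano describes, here is a small, hypothetical numpy experiment (not the compressed NMF code from the paper): it measures how well a Gaussian sketch of growing size captures the column space of an approximately low-rank nonnegative matrix. The error does decrease as projections are added, but only gradually, which is the "many more projections needed" behaviour mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 500, 400, 10
# nonnegative data: rank-10 structure plus a small dense perturbation
X = rng.random((m, r)) @ rng.random((r, n)) + 0.1 * rng.random((m, n))

for k in (5, 10, 20, 40, 80):                    # number of Gaussian projections
    Omega = rng.standard_normal((n, k))
    Q, _ = np.linalg.qr(X @ Omega)               # orthonormal basis of the sketched column space
    rel_err = np.linalg.norm(X - Q @ (Q.T @ X)) / np.linalg.norm(X)
    print(f"k = {k:3d}   relative approximation error = {rel_err:.3f}")
```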
 
In another interaction, Edmar Candeia Gurjão mentioned:

Hi Igor,


We have just published a paper at the 2015 IEEE International Instrumentation and Measurement Technology Conference (I2MTC) entitled "Using Synchronism Pulse to Improve A2I Implementations" by V. L. Reis, E. C. Gurjão and R. C. S. Freire. It presents our first contribution towards a reconfigurable Analog to Information converter, and I am sending the poster we presented at the conference; could you publish it on Nuit Blanche? Thanks,


Edmar
Sure Edmar ! The poster is here.

In the comment section of the blog entry on "Faster Randomized Kaczmarz", Surya wondered:

I am curious (and at the same time trying to check) if the Kaczmarz method is used as an alternative to methods like conjugate gradient or LSQR when solving sparse least squares problems.
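For context, here is a minimal, generic sketch of the randomized Kaczmarz iteration (rows sampled with probability proportional to their squared norms) on a consistent overdetermined system; the sizes below are made up. It is a row-action method, so each step touches a single equation, whereas CG and LSQR need full matrix-vector products, which is the main reason it is considered for very large or streaming problems.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=20000, seed=0):
    # Project the current iterate onto the hyperplane of one randomly chosen equation
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.einsum('ij,ij->i', A, A)
    probs = row_norms / row_norms.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((2000, 100))
x_true = rng.standard_normal(100)
b = A @ x_true                                   # consistent system
x = randomized_kaczmarz(A, b)
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```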
Finally, we should hear more about it later, but I received an announcement for the SAMPL 2015 workshop entitled "Xampling the Future", taking place on June 22nd at the Technion. I am sure more information will show up on Yonina's site very soon!
 

Low-Cost Compressive Sensing for Color Video and Depth

As usual on some subjects, I am a little late, but this is very nice: use the varying shape of a liquid lens and a moving coded aperture to produce different measurements in a CS system.





Low-Cost Compressive Sensing for Color Video and Depth by Xin Yuan, Patrick Llull, Xuejun Liao, Jianbo Yang, Guillermo Sapiro, David J. Brady, Lawrence Carin

A simple and inexpensive (low-power and low-bandwidth) modification is made to a conventional off-the-shelf color video camera, from which we recover multiple color frames for each of the original measured frames, and each of the recovered frames can be focused at a different depth. The recovery of multiple frames for each measured frame is made possible via high-speed coding, manifested via translation of a single coded aperture; the inexpensive translation is constituted by mounting the binary code on a piezoelectric device. To simultaneously recover depth information, a liquid lens is modulated at high speed, via a variable voltage. Consequently, during the aforementioned coding process, the liquid lens allows the camera to sweep the focus through multiple depths. In addition to designing and implementing the camera, fast recovery is achieved by an anytime algorithm exploiting the group-sparsity of wavelet/DCT coefficients.
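As a rough illustration of the forward model (a hypothetical toy, not the authors' hardware or solver): one coded snapshot is the sum of several high-speed sub-frames, each modulated by a translated copy of a single binary code. Recovering the sub-frames, and the per-frame focus/depth information, from that single measurement is the compressive inverse problem the paper solves with a group-sparse wavelet/DCT prior.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, T = 64, 64, 8                              # frame size and number of high-speed sub-frames
video = rng.random((T, H, W))                    # stand-in for the unknown high-speed scene
code = (rng.random((H, W)) > 0.5).astype(float)  # one binary coded aperture
masks = np.stack([np.roll(code, t, axis=1) for t in range(T)])   # code translated during exposure
snapshot = (masks * video).sum(axis=0)           # the single coded frame the camera records
print(snapshot.shape)                            # (64, 64): T sub-frames compressed into one
```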
 
 
 

CSJob : 3 postdoc positions 'Compressed Sensing for Quantitative MR imaging', Edinburgh

Yves just sent me the following:


Dear Igor


May I ask you to post a job ad on Nuit Blanche?


Mike Davies, Ian Marshall and myself currently have 3 postdoc positions available in Edinburgh on 'Compressed Sensing for Quantitative MR imaging’.


These openings come in the context of a multidisciplinary project between the Edinburgh Joint Research Institute in Signal and Image Processing (JRI-SIP, Davies & Wiaux), and the Edinburgh Brain Research Imaging Centre (BRIC, Marshall).

Details on the positions and application procedures can be found at www.ed.ac.uk/jobs and basp.eps.hw.ac.uk .

Thanks a lot in advance



Best regards
___________________________________
Dr Yves Wiaux, Assoc. Prof., BASP Director
Institute of Sensors, Signals & Systems
School of Engineering & Physical Sciences
Heriot-Watt University, Edinburgh
basp.eps.hw.ac.uk
 
 
 

Thursday, May 28, 2015

Compressive Sensing at work: Three-dimensional coordinates of individual atoms in materials, Measurement of a 16.8 Million-Dimensional Entangled Probability Distribution, WiFi Fingerprinting in Indoor Environment, Airborne gravimetry data reconstruction

Every once in a while, people ask how compressive sensing is used. Here are four different ways. Most of them use the sparsity-seeking-solver side of things, while others take full advantage of the multiplexing ability of the measurement matrices. Without further ado:


Three-dimensional coordinates of individual atoms in materials revealed by electron tomography  by Rui Xu, Chien-Chun Chen, Li Wu, M. C. Scott, W. Theis, Colin Ophus, Matthias Bartels, Yongsoo Yang, Hadi Ramezani-Dakhel, Michael R. Sawaya, Hendrik Heinz, Laurence D. Marks, Peter Ercius, Jianwei Miao
Crystallography, the primary method for determining the three-dimensional (3D) atomic positions in crystals, has been fundamental to the development of many fields of science. However, the atomic positions obtained from crystallography represent a global average of many unit cells in a crystal. Here, we report, for the first time, the determination of the 3D coordinates of thousands of individual atoms and a point defect in a material by electron tomography with a precision of ~19 picometers, where the crystallinity of the material is not assumed. From the coordinates of these individual atoms, we measure the atomic displacement field and the full strain tensor with a 3D resolution of ~1nm^3 and a precision of ~10^-3, which are further verified by density functional theory calculations and molecular dynamics simulations. The ability to precisely localize the 3D coordinates of individual atoms in materials without assuming crystallinity is expected to find important applications in materials science, nanoscience, physics and chemistry.



Millimeter Wave Beamforming Based on WiFi Fingerprinting in Indoor Environment by Ehab Mahmoud Mohamed, Kei Sakaguchi, Seiichi Sampei
(Submitted on 21 May 2015)
Millimeter Wave (mm-w), especially the 60 GHz band, has been receiving much attention as a key enabler for the 5G cellular networks. Beamforming (BF) is tremendously used with mm-w transmissions to enhance the link quality and overcome the channel impairments. The current mm-w BF mechanism, proposed by the IEEE 802.11ad standard, is mainly based on exhaustive searching the best transmit (TX) and receive (RX) antenna beams. This BF mechanism requires a very high setup time, which makes it difficult to coordinate a multiple number of mm-w Access Points (APs) in mobile channel conditions as a 5G requirement. In this paper, we propose a mm-w BF mechanism, which enables a mm-w AP to estimate the best beam to communicate with a User Equipment (UE) using statistical learning. In this scheme, the fingerprints of the UE WiFi signal and mm-w best beam identification (ID) are collected in an offline phase on a grid of arbitrary learning points (LPs) in target environments. Therefore, by just comparing the current UE WiFi signal with the pre-stored UE WiFi fingerprints, the mm-w AP can immediately estimate the best beam to communicate with the UE at its current position. The proposed mm-w BF can estimate the best beam, using a very small setup time, with a comparable performance to the exhaustive search BF.
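Stripped of the radio details, the online phase is a fingerprint lookup. Here is a hypothetical toy sketch (all sizes and names below are invented): offline, each learning point stores a WiFi RSSI vector together with the best mm-wave beam ID found there; online, the AP matches the user's current WiFi signature to the nearest stored fingerprint and reuses its beam.

```python
import numpy as np

rng = np.random.default_rng(0)
n_lp, n_ap, n_beams = 50, 6, 32                  # learning points, WiFi APs heard, mm-wave beams
fingerprints = rng.random((n_lp, n_ap))          # offline: WiFi RSSI vector at each learning point
best_beam = rng.integers(0, n_beams, n_lp)       # offline: best beam ID measured at each point

query = fingerprints[17] + 0.01 * rng.standard_normal(n_ap)   # online: noisy WiFi signature of the UE
nearest = int(np.argmin(np.linalg.norm(fingerprints - query, axis=1)))
print("estimated best beam:", best_beam[nearest], "(true:", best_beam[17], ")")
```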

We demonstrate how to implement extremely high-dimensional compressive imaging on a bi-photon probability distribution. When computationally reconstructing the two-party system, compressive imaging requires a sensing matrix that may drastically exceed practical limits for conventional computers. These limitations are virtually eliminated via fast-Hadamard transform Kronecker-based compressive sensing. We list, in detail, the operations necessary to implement this method and provide an experimental demonstration in which we measure and reconstruct a 16.8 million-dimensional bi-photon probability distribution. Instead of requiring over a year to raster scan or over 2 terabytes of computer memory to perform a reconstruction, we performed the experiment in approximately twenty hours, required between 8 and 32 gigabytes of computer memory, and reconstructed the full distribution in approximately twenty minutes.
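The enabling trick in the abstract is that the Hadamard sensing operator is never stored: it is applied with a fast transform. Here is a minimal, generic sketch of a fast Walsh-Hadamard transform and of using a random subset of its outputs as compressive measurements (a toy illustration only, not the Kronecker-structured code used in the paper).

```python
import numpy as np

def fwht(x):
    # Fast Walsh-Hadamard transform; len(x) must be a power of two, runs in O(n log n)
    x = x.astype(float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a, b = x[i:i + h].copy(), x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x

rng = np.random.default_rng(0)
n, m = 2**14, 512
signal = np.zeros(n)
signal[rng.choice(n, 20, replace=False)] = rng.standard_normal(20)   # sparse scene
rows = rng.choice(n, m, replace=False)           # random subset of Hadamard rows
measurements = fwht(signal)[rows]                # m compressive measurements, no n x n matrix stored
print(measurements.shape)
```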

Airborne gravimetry data sparse reconstruction via L1-norm convex quadratic programming by Ya-Peng Yang, Mei-Ping Wu, Gang Tang
In practice, airborne gravimetry is a sub-Nyquist sampling method because of the restrictions imposed by national boundaries, financial cost, and database size. In this study, we analyze the sparsity of airborne gravimetry data by using the discrete Fourier transform and propose a reconstruction method based on the theory of compressed sensing for large-scale gravity anomaly data. Consequently, the reconstruction of the gravity anomaly data is transformed into an L1-norm convex quadratic programming problem. We combine the preconditioned conjugate gradient algorithm (PCG) and the improved interior-point method (IPM) to solve the convex quadratic programming problem. Furthermore, a flight test was carried out with the homegrown strapdown airborne gravimeter SGA-WZ. Subsequently, we reconstructed the gravity anomaly data of the flight test, and then, we compared the proposed method with the linear interpolation method, which is commonly used in airborne gravimetry. The test results show that the PCG-IPM algorithm can be used to reconstruct large-scale gravity anomaly data with higher accuracy and more effectiveness than the linear interpolation method.
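The paper attacks the resulting L1 problem with a preconditioned conjugate gradient / interior-point (PCG-IPM) solver. As a generic stand-in, and purely as a sketch of the same "sub-Nyquist samples + sparsity in the Fourier domain + L1 recovery" structure (this is not the authors' solver, and the signal below is a made-up toy), here is a plain ISTA iteration:

```python
import numpy as np

def ista_partial_dft(y, idx, n, lam=0.02, iters=500):
    # min_c 0.5 * || (IDFT c)[idx] - y ||^2 + lam * ||c||_1   (unitary DFT, step size 1)
    c = np.zeros(n, dtype=complex)
    for _ in range(iters):
        x = np.fft.ifft(c, norm="ortho")
        r = np.zeros(n, dtype=complex)
        r[idx] = y - x[idx]                        # residual on the observed samples only
        c = c + np.fft.fft(r, norm="ortho")        # gradient step (adjoint of the forward operator)
        mag = np.maximum(np.abs(c), 1e-12)
        c = np.where(mag > lam, (1.0 - lam / mag) * c, 0.0)   # complex soft-thresholding
    return np.fft.ifft(c, norm="ortho").real

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)
x_true = np.cos(2 * np.pi * 50 * t / n) + 0.5 * np.sin(2 * np.pi * 120 * t / n)   # sparse spectrum
idx = np.sort(rng.choice(n, 256, replace=False))   # sub-Nyquist, irregular sampling
x_hat = ista_partial_dft(x_true[idx], idx, n)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```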
 
 

Wednesday, May 27, 2015

Robust Rotation Synchronization via Low-rank and Sparse Matrix Decomposition

Another clue that GoDec does well:

 
Robust Rotation Synchronization via Low-rank and Sparse Matrix Decomposition by Federica Arrigoni, Andrea Fusiello, Beatrice Rossi, Pasqualina Fragneto

This paper deals with the rotation synchronization problem, which arises in global registration of 3D point-sets and in structure from motion. The problem is formulated in an unprecedented way as a "low-rank and sparse" matrix decomposition that handles both outliers and missing data. A minimization strategy, dubbed R-GoDec, is also proposed and evaluated experimentally against state-of-the-art algorithms on simulated and real data. The results show that R-GoDec is the fastest among the robust algorithms.  
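For readers who have not met GoDec before, the underlying decomposition alternates a rank-constrained update with a cardinality-constrained update. Below is a minimal, generic sketch of that idea on synthetic data (plain truncated SVDs instead of GoDec's bilateral random projections, and nothing specific to the rotation-synchronization formulation of R-GoDec).

```python
import numpy as np

def lowrank_plus_sparse(X, rank, card, iters=50):
    # Alternate: L <- best rank-`rank` approximation of X - S,
    #            S <- keep only the `card` largest-magnitude entries of X - L
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        R = X - L
        thr = np.partition(np.abs(R).ravel(), -card)[-card]
        S = np.where(np.abs(R) >= thr, R, 0.0)
    return L, S

rng = np.random.default_rng(0)
L0 = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 80))   # low-rank part
mask = rng.random((100, 80)) < 0.05                                  # 5% gross corruptions
S0 = np.where(mask, 10.0 * rng.standard_normal((100, 80)), 0.0)
L_hat, S_hat = lowrank_plus_sparse(L0 + S0, rank=3, card=int(mask.sum()))
print(np.linalg.norm(L_hat - L0) / np.linalg.norm(L0))               # should be small
```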
 
 
 

Low-Rank Matrix Recovery from Row-and-Column Affine Measurements - implementation -


Or just sent me the following:
Hi Igor,


Hope all is well,


The following manuscript on low-rank matrix recovery by my student Avishai Wagner and me might be of interest to your blog's readers:
http://arxiv.org/abs/1505.06292


Best,

Or
 Thanks Or !

Low-Rank Matrix Recovery from Row-and-Column Affine Measurements by Avishai Wagner, Or Zuk

We propose and study a row-and-column affine measurement scheme for low-rank matrix recovery. Each measurement is a linear combination of elements in one row or one column of a matrix X. This setting arises naturally in applications from different domains. However, current algorithms developed for standard matrix recovery problems do not perform well in our case, hence the need for developing new algorithms and theory for our problem. We propose a simple algorithm for the problem based on Singular Value Decomposition (SVD) and least-squares (LS), which we term \alg. We prove that (a simplified version of) our algorithm can recover X exactly with the minimum possible number of measurements in the noiseless case. In the general noisy case, we prove performance guarantees on the reconstruction accuracy under the Frobenius norm. In simulations, our row-and-column design and \alg algorithm show improved speed, and comparable and in some cases better accuracy compared to standard measurements designs and algorithms. Our theoretical and experimental results suggest that the proposed row-and-column affine measurements scheme, together with our recovery algorithm, may provide a powerful framework for affine matrix reconstruction.
An implementation of the algorithm is on Avishai's GitHub site.
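To get a feel for the measurement scheme, here is a hypothetical back-of-the-envelope version of the noiseless setting (a simplified sketch based on the abstract, not the authors' implementation): the row measurements reveal the column space of X, and the column measurements are then inverted by least squares inside that subspace.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r, k = 60, 50, 4, 8                 # X is m x n with rank r; k >= r measurements per side
X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

R = rng.standard_normal((n, k))           # each measurement mixes the entries of one row of X
C = rng.standard_normal((m, k))           # each measurement mixes the entries of one column of X
Yrow = X @ R                              # row measurements (m x k)
Ycol = C.T @ X                            # column measurements (k x n)

U, _, _ = np.linalg.svd(Yrow, full_matrices=False)
Q = U[:, :r]                                                  # column space of X, from the row side
X_hat = Q @ np.linalg.lstsq(C.T @ Q, Ycol, rcond=None)[0]     # coefficients from the column side
print(np.linalg.norm(X_hat - X) / np.linalg.norm(X))          # ~ 0 in this noiseless case
```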


 
 

Tuesday, May 26, 2015

Self-Dictionary Sparse Regression for Hyperspectral Unmixing: Greedy Pursuit and Pure Pixel Search are Related - implementation -

MMV as a way to perform unmixing in hyperspectral imaging:


Self-Dictionary Sparse Regression for Hyperspectral Unmixing: Greedy Pursuit and Pure Pixel Search are Related by Xiao Fu, Wing-Kin Ma, Tsung-Han Chan, José M. Bioucas-Dias

This paper considers a recently emerged hyperspectral unmixing formulation based on sparse regression of a self-dictionary multiple measurement vector (SD-MMV) model, wherein the measured hyperspectral pixels are used as the dictionary. Operating under the pure pixel assumption, this SD-MMV formalism is special in that it allows simultaneous identification of the endmember spectral signatures and the number of endmembers. Previous SD-MMV studies mainly focus on convex relaxations. In this study, we explore the alternative of greedy pursuit, which generally provides efficient and simple algorithms. In particular, we design a greedy SD-MMV algorithm using simultaneous orthogonal matching pursuit. Intriguingly, the proposed greedy algorithm is shown to be closely related to some existing pure pixel search algorithms, especially, the successive projection algorithm (SPA). Thus, a link between SD-MMV and pure pixel search is revealed. We then perform exact recovery analyses, and prove that the proposed greedy algorithm is robust to noise---including its identification of the (unknown) number of endmembers---under a sufficiently low noise level. The identification performance of the proposed greedy algorithm is demonstrated through both synthetic and real-data experiments.  
 
An implementation is on Tsung-Han Chan's code page.
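The punchline of the paper is the link between greedy SD-MMV pursuit and pure pixel search. As a rough point of reference, here is a minimal, generic sketch of the successive projection algorithm (SPA), the pure pixel search method the greedy algorithm is shown to be related to, on a made-up mixing example satisfying the pure pixel assumption (a toy illustration, not the authors' code).

```python
import numpy as np

def spa(Y, k):
    # Successive Projection Algorithm: greedily pick the column of largest norm,
    # then project all columns onto the orthogonal complement of the picks
    R = Y.astype(float).copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))
        support.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(u, u @ R)                 # deflate the selected direction
    return support

rng = np.random.default_rng(0)
E = rng.random((50, 3))                            # endmember spectra
A = rng.dirichlet(np.ones(3), size=200).T          # abundances, columns on the simplex
A[:, :3] = np.eye(3)                               # pure pixels at indices 0, 1, 2
Y = E @ A
print(sorted(spa(Y, 3)))                           # recovers the pure pixel indices [0, 1, 2]
```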
 

Randomized Robust Subspace Recovery for High Dimensional Data Matrices

I think this is a first ! A phase transition is found for a randomized algorithm. Welcome to the new mapmakers. A map to be added shortly to the Advanced Matrix Factorization page.



Randomized Robust Subspace Recovery for High Dimensional Data Matrices by Mostafa Rahmani, George Atia

Principal Component Analysis (PCA) is a fundamental mathematical tool with broad applicability in numerous scientific areas. In this paper, a randomized PCA approach that is robust to the presence of outliers and whose complexity is independent of the dimension of the given data matrix is proposed. The proposed approach is a two-step algorithm. First, the given data matrix is turned into a small random matrix. Second, the columns subspace of the low rank matrix is learned and the outlying columns are located. The low-dimensional geometry of the low rank matrix is exploited to substantially reduce the complexity of the algorithm. A small random subset of the columns of the given data matrix is selected, then the selected data is projected into a random low-dimensional subspace. The subspace learning algorithm works with this compressed small size data. Two ideas for robust subspace learning are proposed to work under different model assumptions. The first idea is based on the linear dependence between the columns of the low rank matrix, and the second idea is based on the independence between the columns subspace of the low rank matrix and the subspace of the outlying columns. The proposed subspace learning approach has a closed-form expression and the outlier detector is a simple subspace projection operation. We derive sufficient conditions for the proposed method to extract the true subspace and identify the outlying data. These conditions are less stringent than those for existing methods. In particular, a remarkable portion of the given data is allowed to be outlier data.
 
 

Monday, May 25, 2015

Book: Dictionary Learning in Visual Computing

I received the following from Qiang recently:
 
Hi, Igor,


This is Qiang Zhang, a Ph.D. from Arizona State University and now a staff research scientist at Samsung Electronics.


I have been reading your Nuit Blanche since the very beginning of my Ph.D. life in 2009. It has given me a lot of help in my study and research in sparse learning.


I'd like to introduce my new book "Dictionary Learning in Visual Computing", which describes the recent advances (2008~2014) in dictionary learning, specialized in computer vision. The book covers both algorithms and applications, including a detailed example of how to use dictionary learning in face recognition tasks.


The abstract: The last few years have witnessed fast development on dictionary learning approaches for a set of visual computing tasks, largely due to their utilization in developing new techniques based on sparse representation. Compared with conventional techniques employing manually defined dictionaries, such as Fourier Transform and Wavelet Transform, dictionary learning aims at obtaining a dictionary adaptively from the data so as to support optimal sparse representation of the data. In contrast to conventional clustering algorithms like K-means, where a data point is associated with only one cluster center, in a dictionary-based representation, a data point can be associated with a small set of dictionary atoms. Thus, dictionary learning provides a more flexible representation of data and may have the potential to capture more relevant features from the original feature space of the data. One of the early algorithms for dictionary learning is K-SVD. In recent years, many variations/extensions of K-SVD and other new algorithms have been proposed, with some aiming at adding discriminative capability to the dictionary, and some attempting to model the relationship of multiple dictionaries. One prominent application of dictionary learning is in the general field of visual computing, where long-standing challenges have seen promising new solutions based on sparse representation with learned dictionaries. With a timely review of recent advances of dictionary learning in visual computing, covering the most recent literature with an emphasis on papers after 2008, this book provides a systematic presentation of the general methodologies, specific algorithms, and examples of applications for those who wish to have a quick start on this subject.


The link to the publisher is http://www.morganclaypool.com/doi/abs/10.2200/S00640ED1V01Y201504IVM018


Thanks!

======================================
Qiang(Charles) Zhang
CSE, Arizona State University
Home: http://www.public.asu.edu/~qzhang53
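For readers who want a hands-on starting point before opening the book, here is a hypothetical toy example of the kind of dictionary learning the abstract describes, using scikit-learn's off-the-shelf learner on synthetic signals built as sparse combinations of a hidden dictionary (a generic sketch, not code from the book).

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
# synthetic signals: sparse combinations of 50 hidden atoms in a 20-dimensional space
D_true = rng.standard_normal((50, 20))
codes_true = np.where(rng.random((1000, 50)) < 0.05, rng.standard_normal((1000, 50)), 0.0)
X = codes_true @ D_true + 0.01 * rng.standard_normal((1000, 20))

# learn a dictionary adapted to the data (the K-SVD goal, solved here by online updates)
dl = MiniBatchDictionaryLearning(n_components=50, alpha=0.1, random_state=0)
codes = dl.fit_transform(X)                      # sparse codes, one row per signal
print(dl.components_.shape)                      # learned dictionary: 50 atoms x 20 dimensions
print(np.mean(codes != 0))                       # average fraction of active atoms per signal
```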
 
 
 

Compressed Nonnegative Matrix Factorization is Fast and Accurate - implementation -


Compressed Nonnegative Matrix Factorization is Fast and Accurate by Mariano Tepper, Guillermo Sapiro

Nonnegative matrix factorization (NMF) has an established reputation as a useful data analysis technique in numerous applications. However, its usage in practical situations is undergoing challenges in recent years. The fundamental factor to this is the increasingly growing size of the datasets available and needed in the information sciences. To address this, in this work we propose to use structured random compression, that is, random projections that exploit the data structure, for two NMF variants: classical and separable. In separable NMF (SNMF) the left factors are a subset of the columns of the input matrix. We present suitable formulations for each problem, dealing with different representative algorithms within each one. We show that the resulting compressed techniques are faster than their uncompressed variants, vastly reduce memory demands, and do not encompass any significant deterioration in performance. The proposed structured random projections for SNMF allow to deal with arbitrarily shaped large matrices, beyond the standard limit of tall-and-skinny matrices, granting access to very efficient computations in this general setting. We accompany the algorithmic presentation with theoretical foundations and numerous and diverse examples, showing the suitability of the proposed approaches.
of note:

It is well studied that Gaussian projection preserves the ℓ2 norm [e.g., 14, and references therein]. However, our extensive experiments show that structured random compression achieves better performance than Gaussian compression. Intuitively, Gaussian compression is a general data-agnostic tool, whereas structured compression uses information from the matrix (an analogue of training). Theoretical research is needed to fully justify this performance gap.

In particular, it seems quite obvious that Gaussian projections do not reach the same results, but then again, it may be because not enough Gaussian projections were used. Anyway, an implementation is on Mariano Tepper's code page: http://www.marianotepper.com.ar/research/cnmf

 

Saturday, May 23, 2015

Saturday Morning Videos: Slides and Videos from ICLR 2015

From the conference schedule
 
0900 0940 keynote Antoine Bordes (Facebook), Artificial Tasks for Artificial Intelligence (slides) Video1 Video2
0940 1000 oral Word Representations via Gaussian Embedding by Luke Vilnis and Andrew McCallum (UMass Amherst) (slides) Video
1000 1020 oral Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN) by Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, Alan Yuille (Baidu and UCLA) (slides) Video
1020 1050 coffee break

1050 1130 keynote David Silver (Google DeepMind), Deep Reinforcement Learning (slides) Video1 Video2
1130 1150 oral Deep Structured Output Learning for Unconstrained Text Recognition by Max Jaderberg, Karen Simonyan, Andrea Vedaldi, Andrew Zisserman (Oxford University and Google DeepMind) (slides) Video
1150 1210 oral Very Deep Convolutional Networks for Large-Scale Image Recognition by Karen Simonyan, Andrew Zisserman (Oxford) (slides) Video
1210 1230 oral Fast Convolutional Nets With fbfft: A GPU Performance Evaluation by Nicolas Vasilache, Jeff Johnson, Michael Mathieu, Soumith Chintala, Serkan Piantino, Yann LeCun (Facebook AI Research) (slides) Video
1230 1400 lunch On your own

1400 1700 posters Workshop Poster Session 1 – The Pavilion

1730 1900 dinner South Poolside – Sponsored by Google



May 8 0730 0900 breakfast South Poolside – Sponsored by Facebook

0900 1230 Oral Session – International Ballroom

0900 0940 keynote Terrence Sejnowski (Salk Institute), Beyond Representation Learning Video1 Video2
0940 1000 oral Reweighted Wake-Sleep (slides) Video
1000 1020 oral The local low-dimensionality of natural images (slides) Video
1020 1050 coffee break

1050 1130 keynote Percy Liang (Stanford), Learning Latent Programs for Question Answering (slides) Video1 Video2
1130 1150 oral Memory Networks (slides) Video
1150 1210 oral Object detectors emerge in Deep Scene CNNs (slides) Video
1210 1230 oral Qualitatively characterizing neural network optimization problems (slides) Video
1230 1400 lunch On your own

1400 1700 posters Workshop Poster Session 2 – The Pavilion

1730 1900 dinner South Poolside – Sponsored by IBM Watson



May 9 0730 0900 breakfast South Poolside – Sponsored by Qualcomm

0900 0940 keynote Hal Daumé III (U. Maryland), Algorithms that Learn to Think on their Feet (slides) Video
0940 1000 oral Neural Machine Translation by Jointly Learning to Align and Translate (slides) Video
1000 1030 coffee break


1030 1330 posters Conference Poster Session – The Pavilion (AISTATS attendees are invited to this poster session)

1330 1700 lunch and break On your own

1700 1800 ICLR/AISTATS Oral Session – International Ballroom

1700 1800 keynote Pierre Baldi (UC Irvine), The Ebb and Flow of Deep Learning: a Theory of Local Learning Video
1800 2000 ICLR/AISTATS reception Fresco's (near the pool)

 
 

Conference Oral Presentations

May 9 Conference Poster Session

Board Presentation
2 FitNets: Hints for Thin Deep Nets, Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio
3 Techniques for Learning Binary Stochastic Feedforward Neural Networks, Tapani Raiko, Mathias Berglund, Guillaume Alain, and Laurent Dinh
4 Reweighted Wake-Sleep, Jorg Bornschein and Yoshua Bengio
5 Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs, Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan Yuille
7 Multiple Object Recognition with Visual Attention, Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu
8 Deep Narrow Boltzmann Machines are Universal Approximators, Guido Montufar
9 Transformation Properties of Learned Visual Representations, Taco Cohen and Max Welling
10 Joint RNN-Based Greedy Parsing and Word Composition, Joël Legrand and Ronan Collobert
11 Adam: A Method for Stochastic Optimization, Jimmy Ba and Diederik Kingma
13 Neural Machine Translation by Jointly Learning to Align and Translate, Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio
15 Scheduled denoising autoencoders, Krzysztof Geras and Charles Sutton
16 Embedding Entities and Relations for Learning and Inference in Knowledge Bases, Bishan Yang, Scott Yih, Xiaodong He, Jianfeng Gao, and Li Deng
18 The local low-dimensionality of natural images, Olivier Henaff, Johannes Balle, Neil Rabinowitz, and Eero Simoncelli
20 Explaining and Harnessing Adversarial Examples, Ian Goodfellow, Jon Shlens, and Christian Szegedy
22 Modeling Compositionality with Multiplicative Recurrent Neural Networks, Ozan Irsoy and Claire Cardie
24 Very Deep Convolutional Networks for Large-Scale Image Recognition, Karen Simonyan and Andrew Zisserman
25 Speeding-up Convolutional Neural Networks Using Fine-tuned CP-Decomposition, Vadim Lebedev, Yaroslav Ganin, Victor Lempitsky, Maksim Rakhuba, and Ivan Oseledets
27 Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN), Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, and Alan Yuille
28 Deep Structured Output Learning for Unconstrained Text Recognition, Max Jaderberg, Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman
30 Zero-bias autoencoders and the benefits of co-adapting features, Kishore Konda, Roland Memisevic, and David Krueger
31 Automatic Discovery and Optimization of Parts for Image Classification, Sobhan Naderi Parizi, Andrea Vedaldi, Andrew Zisserman, and Pedro Felzenszwalb
33 Understanding Locally Competitive Networks, Rupesh Srivastava, Jonathan Masci, Faustino Gomez, and Juergen Schmidhuber
35 Leveraging Monolingual Data for Crosslingual Compositional Word Representations, Hubert Soyer, Pontus Stenetorp, and Akiko Aizawa
36 Move Evaluation in Go Using Deep Convolutional Neural Networks, Chris Maddison, Aja Huang, Ilya Sutskever, and David Silver
38 Fast Convolutional Nets With fbfft: A GPU Performance Evaluation, Nicolas Vasilache, Jeff Johnson, Michael Mathieu, Soumith Chintala, Serkan Piantino, and Yann LeCun
40 Word Representations via Gaussian Embedding, Luke Vilnis and Andrew McCallum
41 Qualitatively characterizing neural network optimization problems, Ian Goodfellow and Oriol Vinyals
42 Memory Networks, Jason Weston, Sumit Chopra, and Antoine Bordes
43 Generative Modeling of Convolutional Neural Networks, Jifeng Dai, Yang Lu, and Ying-Nian Wu
44 A Unified Perspective on Multi-Domain and Multi-Task Learning, Yongxin Yang and Timothy Hospedales
45 Object detectors emerge in Deep Scene CNNs, Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba

May 7 Workshop Poster Session

Board Presentation
2 Learning Non-deterministic Representations with Energy-based Ensembles, Maruan Al-Shedivat, Emre Neftci, and Gert Cauwenberghs
3 Diverse Embedding Neural Network Language Models, Kartik Audhkhasi, Abhinav Sethy, and Bhuvana Ramabhadran
4 Hot Swapping for Online Adaptation of Optimization Hyperparameters, Kevin Bache, Dennis Decoste, and Padhraic Smyth
5 Representation Learning for cold-start recommendation, Gabriella Contardo, Ludovic Denoyer, and Thierry Artieres
6 Training Convolutional Networks with Noisy Labels, Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev, and Rob Fergus
7 Striving for Simplicity: The All Convolutional Net, Alexey Dosovitskiy, Jost Tobias Springenberg, Thomas Brox, and Martin Riedmiller
8 Learning linearly separable features for speech recognition using convolutional neural networks, Dimitri Palaz, Mathew Magimai Doss, and Ronan Collobert
9 Training Deep Neural Networks on Noisy Labels with Bootstrapping, Scott Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru Erhan, and Andrew Rabinovich
10 On the Stability of Deep Networks, Raja Giryes, Guillermo Sapiro, and Alex Bronstein
11 Audio source separation with Discriminative Scattering Networks , Joan Bruna, Yann LeCun, and Pablo Sprechmann
13 Simple Image Description Generator via a Linear Phrase-Based Model, Pedro Pinheiro, Rémi Lebret, and Ronan Collobert
15 Stochastic Descent Analysis of Representation Learning Algorithms, Richard Golden
16 On Distinguishability Criteria for Estimating Generative Models, Ian Goodfellow
18 Embedding Word Similarity with Neural Machine Translation, Felix Hill, Kyunghyun Cho, Sebastien Jean, Coline Devin, and Yoshua Bengio
20 Deep metric learning using Triplet network, Elad Hoffer and Nir Ailon
22 Understanding Minimum Probability Flow for RBMs Under Various Kinds of Dynamics, Daniel Jiwoong Im, Ethan Buchman, and Graham Taylor
23 A Group Theoretic Perspective on Unsupervised Deep Learning, Arnab Paul and Suresh Venkatasubramanian
24 Learning Longer Memory in Recurrent Neural Networks, Tomas Mikolov, Armand Joulin, Sumit Chopra, Michael Mathieu, and Marc'Aurelio Ranzato
25 Inducing Semantic Representation from Text by Jointly Predicting and Factorizing Relations, Ivan Titov and Ehsan Khoddam
27 NICE: Non-linear Independent Components Estimation, Laurent Dinh, David Krueger, and Yoshua Bengio
28 Discovering Hidden Factors of Variation in Deep Networks, Brian Cheung, Jesse Livezey, Arjun Bansal, and Bruno Olshausen
29 Tailoring Word Embeddings for Bilexical Predictions: An Experimental Comparison, Pranava Swaroop Madhyastha, Xavier Carreras, and Ariadna Quattoni
30 On Learning Vector Representations in Hierarchical Label Spaces, Jinseok Nam and Johannes Fürnkranz
31 In Search of the Real Inductive Bias: On the Role of Implicit Regularization in Deep Learning, Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro
33 Algorithmic Robustness for Semi-Supervised (ϵ, γ, τ)-Good Metric Learning, Maria-Irina Nicolae, Marc Sebban, Amaury Habrard, Éric Gaussier, and Massih-Reza Amini
35 Real-World Font Recognition Using Deep Network and Domain Adaptation, Zhangyang Wang, Jianchao Yang, Hailin Jin, Eli Shechtman, Aseem Agarwala, Jon Brandt, and Thomas Huang
36 Score Function Features for Discriminative Learning, Majid Janzamin, Hanie Sedghi, and Anima Anandkumar
38 Parallel training of DNNs with Natural Gradient and Parameter Averaging, Daniel Povey, Xiaohui Zhang, and Sanjeev Khudanpur
40 A Generative Model for Deep Convolutional Learning, Yunchen Pu, Xin Yuan, and Lawrence Carin
41 Random Forests Can Hash, Qiang Qiu, Guillermo Sapiro, and Alex Bronstein
42 Provable Methods for Training Neural Networks with Sparse Connectivity, Hanie Sedghi, and Anima Anandkumar
43 Visual Scene Representations: sufficiency, minimality, invariance and approximation with deep convolutional networks, Stefano Soatto and Alessandro Chiuso
44 Deep learning with Elastic Averaging SGD, Sixin Zhang, Anna Choromanska, and Yann LeCun
45 Example Selection For Dictionary Learning, Tomoki Tsuchida and Garrison Cottrell
46 Permutohedral Lattice CNNs, Martin Kiefel, Varun Jampani, and Peter Gehler
47 Unsupervised Domain Adaptation with Feature Embeddings, Yi Yang and Jacob Eisenstein
49 Weakly Supervised Multi-embeddings Learning of Acoustic Models, Gabriel Synnaeve and Emmanuel Dupoux

May 8 Workshop Poster Session

Board Presentation
2 Learning Activation Functions to Improve Deep Neural Networks, Forest Agostinelli, Matthew Hoffman, Peter Sadowski, and Pierre Baldi
3 Restricted Boltzmann Machine for Classification with Hierarchical Correlated Prior, Gang Chen and Sargur Srihari
4 Learning Deep Structured Models, Liang-Chieh Chen, Alexander Schwing, Alan Yuille, and Raquel Urtasun
5 N-gram-Based Low-Dimensional Representation for Document Classification, Rémi Lebret and Ronan Collobert
6 Low precision arithmetic for deep learning, Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David
7 Theano-based Large-Scale Visual Recognition with Multiple GPUs, Weiguang Ding, Ruoyan Wang, Fei Mao, and Graham Taylor
8 Improving zero-shot learning by mitigating the hubness problem, Georgiana Dinu and Marco Baroni
9 Incorporating Both Distributional and Relational Semantics in Word Representations, Daniel Fried and Kevin Duh
10 Variational Recurrent Auto-Encoders, Otto Fabius and Joost van Amersfoort
11 Learning Compact Convolutional Neural Networks with Nested Dropout, Chelsea Finn, Lisa Anne Hendricks, and Trevor Darrell
13 Compact Part-Based Image Representations: Extremal Competition and Overgeneralization, Marc Goessling and Yali Amit
15 Unsupervised Feature Learning from Temporal Data, Ross Goroshin, Joan Bruna, Jonathan Tompson, David Eigen, and Yann LeCun
16 Classifier with Hierarchical Topographical Maps as Internal Representation, Pitoyo Hartono, Paul Hollensen, and Thomas Trappenberg
18 Entity-Augmented Distributional Semantics for Discourse Relations, Yangfeng Ji and Jacob Eisenstein
20 Flattened Convolutional Neural Networks for Feedforward Acceleration, Jonghoon Jin, Aysegul Dundar, and Eugenio Culurciello
22 Gradual Training Method for Denoising Auto Encoders, Alexander Kalmanovich and Gal Chechik
23 Deep Gaze I: Boosting Saliency Prediction with Feature Maps Trained on ImageNet, Matthias Kümmerer, Lucas Theis, and Matthias Bethge
24 Difference Target Propagation, Dong-Hyun Lee, Saizheng Zhang, Asja Fischer, Antoine Biard, and Yoshua Bengio
25 Predictive encoding of contextual relationships for perceptual inference, interpolation and prediction, Mingmin Zhao, Chengxu Zhuang, Yizhou Wang, and Tai Sing Lee
27 Purine: A Bi-Graph based deep learning framework, Min Lin, Shuo Li, Xuan Luo, and Shuicheng Yan
28 Pixel-wise Deep Learning for Contour Detection, Jyh-Jing Hwang and Tyng-Luh Liu
29 Ensemble of Generative and Discriminative Techniques for Sentiment Analysis of Movie Reviews, Grégoire Mesnil, Tomas Mikolov, Marc'Aurelio Ranzato, and Yoshua Bengio
30 Fast Label Embeddings for Extremely Large Output Spaces, Paul Mineiro and Nikos Karampatziakis
31 An Analysis of Unsupervised Pre-training in Light of Recent Advances, Tom Paine, Pooya Khorrami, Wei Han, and Thomas Huang
33 Fully Convolutional Multi-Class Multiple Instance Learning, Deepak Pathak, Evan Shelhamer, Jonathan Long, and Trevor Darrell
35 What Do Deep CNNs Learn About Objects?, Xingchao Peng, Baochen Sun, Karim Ali, and Kate Saenko
36 Representation using the Weyl Transform, Qiang Qiu, Andrew Thompson, Robert Calderbank, and Guillermo Sapiro
38 Denoising autoencoder with modulated lateral connections learns invariant representations of natural images, Antti Rasmus, Harri Valpola, and Tapani Raiko
40 Towards Deep Neural Network Architectures Robust to Adversarial Examples, Shixiang Gu and Luca Rigazio
41 Explorations on high dimensional landscapes, Levent Sagun, Ugur Guney, and Yann LeCun
42 Generative Class-conditional Autoencoders, Jan Rudy and Graham Taylor
43 Attention for Fine-Grained Categorization, Pierre Sermanet, Andrea Frome, and Esteban Real
44 A Baseline for Visual Instance Retrieval with Deep Convolutional Networks, Ali Sharif Razavian, Josephine Sullivan, Atsuto Maki, and Stefan Carlsson
45 Visual Scene Representation: Scaling and Occlusion, Stefano Soatto, Jingming Dong, and Nikolaos Karianakis
46 Deep networks with large output spaces, Sudheendra Vijayanarasimhan, Jon Shlens, Jay Yagnik, and Rajat Monga
47 Efficient Exact Gradient Update for training Deep Networks with Very Large Sparse Targets, Pascal Vincent
49 Self-informed neural network structure learning, David Warde-Farley, Andrew Rabinovich, and Dragomir Anguelov
 
 
 
 
 
 
