Monday, October 19, 2015

Neural Networks with Few Multiplications / BinaryConnect - implementation -



The Reddit discussion is heavy on the hardware side, mostly because having fewer multiplications makes FPGAs a reasonable target (multipliers on an FPGA cost both area and time). The fascinating part of this approach is the quantization in both the weights and the backpropagation scheme. Without further ado: Neural Networks with Few Multiplications by Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio

For most deep learning algorithms training is notoriously time consuming. Since most of the computation in training neural networks is typically spent on floating point multiplications, we investigate an approach to training that eliminates the need for most of these. Our method consists of two parts: First we stochastically binarize weights to convert multiplications involved in computing hidden states to sign changes. Second, while back-propagating error derivatives, in addition to binarizing the weights, we quantize the representations at each layer to convert the remaining multiplications into binary shifts. Experimental results across 3 popular datasets (MNIST, CIFAR10, SVHN) show that this approach not only does not hurt classification performance but can result in even better performance than standard stochastic gradient descent training, paving the way to fast, hardware-friendly training of neural networks.
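As a rough illustration of the two ingredients described in the abstract, here is a minimal NumPy sketch. It is not the authors' code; the function names, the hard-sigmoid sampling probability, and the power-of-two rounding are assumptions based on the BinaryConnect line of work. Weights are stochastically binarized to {-1, +1}, so computing hidden states reduces to sign changes and additions, and remaining factors can be rounded to the nearest power of two so that the corresponding multiplications become binary shifts.

```python
import numpy as np

def hard_sigmoid(x):
    # Clip (x + 1) / 2 to [0, 1]; used as the probability of drawing +1.
    return np.clip((x + 1.0) / 2.0, 0.0, 1.0)

def stochastic_binarize(W, rng=np.random):
    # Stochastically binarize real-valued weights to {-1, +1}:
    # P(w_b = +1) = hard_sigmoid(w), P(w_b = -1) = 1 - P(w_b = +1).
    p = hard_sigmoid(W)
    return np.where(rng.uniform(size=W.shape) < p, 1.0, -1.0)

def quantize_to_power_of_two(x, eps=1e-8):
    # Round each magnitude to the nearest power of two, so multiplying
    # by it can be replaced with a binary shift in hardware.
    sign = np.sign(x)
    exponent = np.round(np.log2(np.abs(x) + eps))
    return sign * 2.0 ** exponent

# Toy forward pass: with binary weights, the matrix product involves
# only sign flips and additions, no floating point multiplications.
W = np.random.randn(4, 3) * 0.1
Wb = stochastic_binarize(W)   # entries in {-1, +1}
x = np.random.randn(3)
h = Wb @ x
```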

From the conclusion:

Directions for future work include exploring actual implementations of this approach (for example, using FPGA), seeking more efficient ways of binarization, and the extension to recurrent neural networks.


The implementation for BinaryConnect, which is revisited in this preprint, can be found at: https://github.com/AnonymousWombat/BinaryConnect

BinaryConnect was mentioned in this reference: Courbariaux, M., Bengio, Y., and David, J.-P. (2015). BinaryConnect: Training deep neural networks with binary weights during propagations. But like Yoav Goldberg, I cannot find it on the interwebs.
 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

