Soft-to-hard vector quantization for end-to-end learning compressible representations

Authors

Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, and Luc Van Gool

Reference

Neural Information Processing Systems (NIPS), pp. 1141–1151, Dec. 2017.


Abstract

In this work we present a new approach to learning compressible representations in deep architectures with an end-to-end training strategy. Our method is based on a soft relaxation of quantization and entropy, which we anneal to their discrete counterparts throughout training. We showcase this method for two challenging applications: image compression and neural network compression. While these tasks have typically been approached with different methods, our soft-to-hard quantization approach gives state-of-the-art results for both.
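
To make the annealing idea concrete, here is a minimal NumPy sketch of soft scalar quantization: inputs are softly assigned to a set of centers via a softmax over negative squared distances, and as an annealing parameter sigma grows, the assignment hardens toward nearest-center quantization. The function name, the example centers, and the sigma schedule are illustrative assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def soft_quantize(z, centers, sigma):
    """Softly assign each value in z to the given quantization centers.

    As sigma grows, the softmax sharpens and the output approaches hard
    nearest-center quantization (the soft-to-hard annealing described in
    the abstract).
    """
    # Squared distances between every input and every center: shape (n, m).
    d2 = (z[:, None] - centers[None, :]) ** 2
    # Softmax over centers; subtract the row max for numerical stability.
    logits = -sigma * d2
    logits -= logits.max(axis=1, keepdims=True)
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)   # soft assignment weights
    return w @ centers                  # soft-quantized values

z = np.array([0.1, 0.45, 0.9])
centers = np.array([0.0, 0.5, 1.0])     # hypothetical codebook
for sigma in (1.0, 10.0, 1000.0):
    print(sigma, soft_quantize(z, centers, sigma))
# For large sigma the output snaps to the nearest center: [0.0, 0.5, 1.0].
```

The point of the soft version is that it is differentiable, so the representation can be trained end-to-end through the quantizer; annealing sigma during training moves the operation toward the discrete quantization that is ultimately used for compression.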


Copyright Notice: © 2017 E. Agustsson, F. Mentzer, M. Tschannen, L. Cavigelli, R. Timofte, L. Benini, and L. Van Gool.

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.