High-dimensional distribution generation through deep neural networks

Authors

Dmytro Perekrestenko, Léandre Eberhard, and Helmut Bölcskei

Reference

Partial Differential Equations and Applications, Springer, invited paper, Vol. 2, Article No. 64, Sept. 2021.

DOI: 10.1007/s42985-021-00115-6


Abstract

We show that every d-dimensional probability distribution of bounded support can be generated through deep ReLU networks out of a 1-dimensional uniform input distribution. What is more, this is possible without incurring a cost, in terms of approximation error measured in Wasserstein distance, relative to generating the d-dimensional target distribution from d independent random variables. This is enabled by a vast generalization of the space-filling approach discovered in [2]. The construction we propose highlights the importance of network depth in driving the Wasserstein distance between the target distribution and its neural network approximation to zero. Finally, we find that, for histogram target distributions, the number of bits needed to encode the corresponding generative network equals the fundamental limit for encoding probability distributions, as dictated by quantization theory.
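To make the space-filling idea concrete, here is a minimal NumPy sketch. It is not the paper's exact construction: it assumes a simple zigzag curve in the unit square, realized by composing tent maps (each a one-hidden-layer ReLU computation), and the depth value chosen is purely illustrative. One-dimensional uniform input is pushed through the composition so that the output traces a curve that fills [0,1]^2 ever more densely as depth grows.

    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def tent(x):
        # One hidden ReLU layer realizing the tent map on [0, 1]:
        # t(x) = 2x for x in [0, 1/2] and t(x) = 2 - 2x for x in [1/2, 1].
        return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

    def sawtooth(x, depth):
        # Composing the tent map `depth` times yields a triangle wave with
        # 2**(depth - 1) teeth; this is where network depth enters.
        for _ in range(depth):
            x = tent(x)
        return x

    # Push 1-D uniform input through the network: the output lies on a
    # zigzag curve in [0, 1]^2 whose teeth have width 2**-(depth - 1), so
    # every point of the square is within roughly 2**-depth of the curve.
    rng = np.random.default_rng(0)
    u = rng.uniform(size=100_000)   # 1-dimensional uniform input
    depth = 6                       # illustrative depth choice
    samples = np.column_stack([u, sawtooth(u, depth)])

Each additional tent-map layer doubles the number of teeth, so the Wasserstein distance between the generated samples and the uniform distribution on the square decays exponentially with depth. This mirrors, in a toy two-dimensional setting, the role the abstract attributes to network depth.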

Keywords

Deep learning, neural networks, generative networks, space-filling curves, quantization, approximation theory

Comments

Relative to the published version, Figures 2 and 6 were updated to provide more illustrative examples.



Copyright Notice: © 2021 D. Perekrestenko, L. Eberhard, and H. Bölcskei.

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.