Optimal approximation with sparsely connected deep neural networks


Helmut Bölcskei, Philipp Grohs, Gitta Kutyniok, and Philipp Petersen


SIAM Journal on Mathematics of Data Science, Vol. 1, No. 1, pp. 8–45, 2019.



We derive fundamental lower bounds on the connectivity and the memory requirements of deep neural networks guaranteeing uniform approximation rates for arbitrary function classes in L2(R^d). In other words, we establish a connection between the complexity of a function class and the complexity of deep neural networks approximating functions from this class to within a prescribed accuracy. Additionally, we prove that our lower bounds are achievable for a broad family of function classes. Specifically, all function classes that are optimally approximated by a general class of representation systems—so-called affine systems—can be approximated by deep neural networks with minimal connectivity and memory requirements. Affine systems encompass a wealth of representation systems from applied harmonic analysis such as wavelets, ridgelets, curvelets, shearlets, α-shearlets, and more generally α-molecules. Our central result elucidates a remarkable universality property of neural networks and shows that they achieve the optimum approximation properties of all affine systems combined. As a specific example, we consider the class of α^(-1)-cartoon-like functions, which is approximated optimally by α-shearlets. We also explain how our results can be extended to the approximation of functions on low-dimensional immersed manifolds. Finally, we present numerical experiments demonstrating that the standard stochastic gradient descent algorithm yields deep neural networks with close-to-optimal approximation rates. Moreover, these results indicate that stochastic gradient descent can learn approximations that are sparse in the representation systems optimally sparsifying the function class the network is trained on.


Keywords: Neural networks, function approximation, optimal sparse approximation, sparse connectivity, wavelets, shearlets


Corrections:
1. Condition 5.13 does not, in general, follow from the stated assumptions; a mild non-degeneracy condition is additionally required. We refer to Section VIII of https://www.mins.ee.ethz.ch/pubs/p/deep-it-2019, where this is made explicit.
2. The superfluous "and a bivariate polynomial \pi" in the statement of Theorem 5.4 has been removed relative to the published version.



Copyright Notice: © 2019 H. Bölcskei, P. Grohs, G. Kutyniok, and P. Petersen.

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.