Neural network identifiability for a family of sigmoidal nonlinearities


Verner Vlačić and Helmut Bölcskei


Constructive Approximation, May 2021 (invited paper).




This paper addresses the following question of neural network identifiability: Does the input-output map realized by a feed-forward neural network with respect to a given nonlinearity uniquely specify the network architecture, weights, and biases? Existing literature on the subject [1], [2], [3] suggests that the answer should be yes, up to certain symmetries induced by the nonlinearity, and provided the networks under consideration satisfy certain “genericity conditions”. The results in [1] and [2] apply to networks with a single hidden layer and in [3] the networks need to be fully connected. In an effort to answer the identifiability question in greater generality, we derive necessary genericity conditions for the identifiability of neural networks of arbitrary depth and connectivity with an arbitrary nonlinearity. Moreover, we construct a family of nonlinearities for which these genericity conditions are minimal, i.e., both necessary and sufficient. This family is large enough to approximate many commonly encountered nonlinearities to within arbitrary precision in the uniform norm.
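To make the "symmetries induced by the nonlinearity" concrete, consider the odd nonlinearity tanh (an illustrative example, not taken from the paper): because tanh(-t) = -tanh(t), simultaneously negating a hidden neuron's input weight, bias, and output weight leaves the input-output map of a single-hidden-layer network unchanged, as does permuting the hidden neurons. The sketch below verifies both symmetries numerically for a toy network with hypothetical, arbitrarily chosen weights.

```python
import math

def net(x, params, out_bias=0.0):
    # Single-hidden-layer network: f(x) = sum_i v_i * tanh(a_i * x + b_i) + c,
    # with params a list of (a_i, b_i, v_i) triples.
    return sum(v * math.tanh(a * x + b) for (a, b, v) in params) + out_bias

# A two-neuron network with arbitrarily chosen weights (hypothetical example).
p1 = [(1.5, 0.3, 2.0), (-0.7, 1.1, -0.5)]

# Symmetry 1: since tanh is odd, negating (a_i, b_i, v_i) of the first neuron
# gives -v * tanh(-(a*x + b)) = v * tanh(a*x + b), so the map is unchanged.
p2 = [(-1.5, -0.3, -2.0), (-0.7, 1.1, -0.5)]

# Symmetry 2: permuting the hidden neurons also leaves the map unchanged.
p3 = [(-0.7, 1.1, -0.5), (1.5, 0.3, 2.0)]

for x in (-2.0, 0.0, 0.4, 3.7):
    assert abs(net(x, p1) - net(x, p2)) < 1e-12
    assert abs(net(x, p1) - net(x, p3)) < 1e-12
```

Identifiability "up to symmetries" then asks whether these are the *only* ways two networks can realize the same map; the paper's genericity conditions rule out degenerate configurations (e.g., duplicated or zero-weight neurons) under which further coincidences become possible.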


Keywords: Deep neural networks, identifiability, sigmoidal nonlinearities



Copyright Notice: © 2021 V. Vlačić and H. Bölcskei.

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.