A quantifier-reversal approximation paradigm for recurrent neural networks
Authors
Clemens Hutter, Valentin Abadie, and Helmut Bölcskei
Reference
Neural Networks, Nov. 2025, submitted.
Abstract
Classical neural network approximation results take the form: for every function f and every error tolerance ϵ > 0, one constructs a neural network whose architecture and weights depend on ϵ. This paper introduces a fundamentally different approximation paradigm that reverses this quantifier order. For each target function f, we construct a single recurrent neural network (RNN) with fixed topology and fixed weights that approximates f to within any prescribed tolerance ϵ > 0 when run for sufficiently many time steps. The key mechanism enabling this quantifier reversal is temporal computation combined with weight sharing: rather than increasing network depth, the approximation error is reduced solely by running the RNN longer. This yields exponentially decaying approximation error as a function of runtime while requiring storage of only a small, fixed set of weights. Such architectures are appealing for hardware implementations where memory is scarce and runtime is comparatively inexpensive. To initiate the systematic development of this novel approximation paradigm, we focus on univariate polynomials. Our RNN constructions emulate the structural calculus underlying deep feedforward ReLU network approximation theory: parallelization, linear combinations, affine transformations, and, most importantly, a clocked mechanism that realizes function composition within a single recurrent architecture. The resulting RNNs have size independent of the error tolerance ϵ and hidden-state dimension linear in the degree of the polynomial.
Keywords
Recurrent neural networks, approximation theory, quantifier reversal
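To make the quantifier reversal concrete, the following is a minimal NumPy sketch (not taken from the paper) of the classical fixed-map construction the abstract alludes to: iterating the hat function yields the sawtooth series for x² (Yarotsky, 2017), and the approximation error decays exponentially in the number of iterations T while the update rule, i.e., the stored "weights", never changes. The names hat and fixed_map_square are illustrative; the paper's actual RNNs realize the composition with ReLU units and a clocked mechanism, and the multiplication c * s below is written in plain Python purely for exposition.

```python
import numpy as np

def hat(x):
    # Hat function g(x) = 2*min(x, 1-x) on [0, 1]; as a ReLU circuit:
    # g(x) = 2*relu(x) - 4*relu(x - 0.5).
    return 2 * np.minimum(x, 1 - x)

def fixed_map_square(x, T):
    # Approximate x**2 on [0, 1] by iterating one FIXED update rule
    # for T steps (the sawtooth construction of Yarotsky, 2017).
    # State: s = g^(t)(x)  (t-fold composition of the hat function),
    #        c = 4**(-t)   (scale, updated by the fixed weight 1/4),
    #        y = x - sum_{k<=t} c_k * s_k  (running approximation).
    # Since 0 <= s <= 1, the discarded tail is a geometric series,
    # so |y_T - x**2| <= sum_{k>T} 4**(-k) = 4**(-T) / 3.
    s, c, y = x, 1.0, x
    for _ in range(T):
        s = hat(s)      # one more composition: s becomes g^(t+1)(x)
        c = c / 4.0     # fixed-weight scale update
        y = y - c * s   # subtract the next term of the series
    return y

# The worst-case error shrinks by a factor of 4 per extra time step:
xs = np.linspace(0.0, 1.0, 1001)
for T in (2, 4, 8):
    print(T, np.max(np.abs(fixed_map_square(xs, T) - xs**2)))
```

Running longer is the only knob here: T plays the role of runtime, and the stored parameters (the ReLU weights of g and the factor 1/4) are independent of the target accuracy ϵ, mirroring the quantifier order the paper establishes for its RNN constructions.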
Copyright Notice: © 2025 C. Hutter, V. Abadie, and H. Bölcskei.
This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.