
1999 | OriginalPaper | Chapter

Space and Precision

Author : Hava T. Siegelmann

Published in: Neural Networks and Analog Computation

Publisher: Birkhäuser Boston


So far, we have considered neural networks with two types of resource constraints: time, and the Kolmogorov complexity of the weights. Here, we consider rational-weight neural networks in which a bound is set on the precision available for the neurons. The issue of precision comes up when simulating a neural network on a digital computer. Any implementation of real arithmetic in hardware will handle “reals” of limited precision, seldom larger than 64 bits. When more precision is necessary, one must resort to a software implementation of real arithmetic (sometimes provided by the compiler), and even in this case a physical limitation on the length of the mantissa of each state of a neural network under simulation is imposed by the amount of available memory. This observation suggests that some connection can be established between the space requirements needed to solve a problem and the precision required by the activations of the neural networks that solve it.
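To make the precision bound concrete, here is a minimal sketch of what such a limited-precision simulation might look like (the function names, the saturated-linear activation, and the truncate-to-p-fractional-bits scheme are illustrative assumptions, not taken from the chapter): each synchronous update of a rational-weight network is followed by truncating every activation to p binary digits after the point.

```python
from fractions import Fraction

def sigma(x):
    # Saturated-linear activation: clamp the affine sum to [0, 1].
    return Fraction(0) if x < 0 else Fraction(1) if x > 1 else x

def truncate(x, p):
    # Keep only p fractional bits of a rational x in [0, 1]
    # (illustrative precision bound, not the book's exact scheme).
    return Fraction(int(x * 2**p), 2**p)

def step(weights, biases, state, inputs, p):
    # One synchronous network update with p-bit activations.
    new_state = []
    for i, row in enumerate(weights):
        total = biases[i] + sum(w * s for w, s in zip(row, state + inputs))
        new_state.append(truncate(sigma(total), p))
    return new_state

# Tiny 1-neuron example: repeatedly halving the state sheds one bit
# per step, so with p = 3 fractional bits it reaches 0 after a few steps.
w = [[Fraction(1, 2)]]
b = [Fraction(0)]
s = [Fraction(1)]
for _ in range(5):
    s = step(w, b, s, [], p=3)
# → s == [Fraction(0)]
```

Because truncation caps every activation at p fractional bits, the whole network state fits in O(Np) bits for N neurons, which is exactly the kind of memory bound that links precision to the space complexity of the problem being solved.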

Metadata
Title
Space and Precision
Author
Hava T. Siegelmann
Copyright Year
1999
Publisher
Birkhäuser Boston
DOI
https://doi.org/10.1007/978-1-4612-0707-8_6