1999 | OriginalPaper | Chapter
Space and Precision
Author : Hava T. Siegelmann
Published in: Neural Networks and Analog Computation
Publisher: Birkhäuser Boston
Included in: Professional Book Archive
So far, we have considered neural networks with two types of resource constraints: time, and the Kolmogorov complexity of the weights. Here, we consider rational-weight neural networks in which a bound is set on the precision available for the neurons. The issue of precision comes up when simulating a neural network on a digital computer. Any implementation of real arithmetic in hardware will handle “reals” of limited precision, seldom larger than 64 bits. When more precision is necessary, one must resort to a software implementation of real arithmetic (sometimes provided by the compiler), and even in this case a physical limitation on the length of the mantissa of each state of a neural network under simulation is imposed by the amount of available memory. This observation suggests that some connection can be established between the space requirements needed to solve a problem and the precision required by the activations of the neural networks that solve it.
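To make the precision bound concrete, the following sketch simulates one synchronous update of a rational-weight network whose neuron states are truncated to p bits after the binary point. The saturated-linear activation and the exact-rational arithmetic follow the standard analog network model; the function names (`sigma`, `truncate`, `step`) and the truncation-toward-zero convention are illustrative assumptions, not the book's notation.

```python
from fractions import Fraction

def sigma(x):
    # saturated-linear activation: clamp the net input to [0, 1]
    return max(Fraction(0), min(Fraction(1), x))

def truncate(x, p):
    # keep only p bits after the binary point (hypothetical precision bound);
    # int() floors toward zero, which is exact truncation for nonnegative x
    scaled = int(x * 2**p)
    return Fraction(scaled, 2**p)

def step(states, weights, biases, p):
    # one synchronous update of all neurons; each new state is computed
    # with exact rational arithmetic, then cut back to p bits of precision
    new_states = []
    for w_row, b in zip(weights, biases):
        net = b + sum(w * x for w, x in zip(w_row, states))
        new_states.append(truncate(sigma(net), p))
    return new_states
```

With p bits of precision per neuron, each state is a multiple of 2^(-p), so a network of N neurons ranges over at most (2^p + 1)^N configurations — the counting that links activation precision to the memory a digital simulation must spend per neuron.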