1993 | OriginalPaper | Chapter
Real-data-based Car-following with Adaptive Neural Control
Authors : Walter Weber, Jost Bernasch
Published in: Artificial Neural Nets and Genetic Algorithms
Publisher: Springer Vienna
Included in: Professional Book Archive
Our goal is to train a neural network to control the longitudinal dynamics of a real car. The inputs to the neural controller are the car's own speed and acceleration, and the distance and change of distance to the car ahead. The network's outputs are control signals for throttle and brake. The acceleration of our own car and the distance to the car ahead are computed in discrete time steps, so all networks were trained on discrete-time data.

We performed experiments with several architectures, including a Jordan net, an Elman net, a fully recurrent network, and a recurrent network with a slightly different architecture (also using ‘normal’ hidden units without recurrent connections), to analyze the behaviour and efficiency of these network types. We used standard backpropagation to train the Jordan- and Elman-style networks and the backpropagation-through-time algorithm to train the recurrent nets.

Currently, we are working on a two-level predictor hierarchy in which a higher-level network is intended to solve problems that the lower-level net cannot. This is achieved by restricting the high-level network's input to those points in time at which the low-level net had difficulty producing a matching output.

The most promising results were achieved with a variant of backpropagation through time in which the look-back is restricted to a fixed number of time steps. We call this backpropagation through truncated time.

Throughout our experiments we observed that the way data is encoded has an important influence on a network's performance. Several runs confirmed that this has a stronger impact than any of the well-known backpropagation parameters (e.g. learning rate, momentum). Both backpropagation through truncated time and our way of encoding data lead to far better results than conventional approaches.
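The idea behind "backpropagation through truncated time" can be illustrated with a minimal sketch: a small recurrent network over the four input signals (speed, acceleration, distance, change of distance) whose error gradient is propagated back at most a fixed number of time steps. The layer sizes, truncation window, learning rate, and toy training data below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes: 4 inputs (speed, accel, distance, d(distance)),
# 2 outputs (throttle, brake); hidden size and window are arbitrary.
N_IN, N_HID, N_OUT = 4, 8, 2
TRUNC = 5          # gradient flows back at most TRUNC time steps
LR = 0.01

W_in = rng.normal(0, 0.1, (N_HID, N_IN))
W_rec = rng.normal(0, 0.1, (N_HID, N_HID))
W_out = rng.normal(0, 0.1, (N_OUT, N_HID))

def forward(xs):
    """Run the recurrent net over a sequence; return hidden states and outputs."""
    h = np.zeros(N_HID)
    hs, ys = [h], []
    for x in xs:
        h = np.tanh(W_in @ x + W_rec @ h)
        hs.append(h)
        ys.append(W_out @ h)
    return hs, ys

def truncated_bptt_step(xs, targets):
    """One gradient-descent pass with the look-back limited to TRUNC steps."""
    global W_in, W_rec, W_out
    hs, ys = forward(xs)
    dW_in, dW_rec, dW_out = (np.zeros_like(W) for W in (W_in, W_rec, W_out))
    for t in range(len(xs)):
        err = ys[t] - targets[t]              # squared-error gradient at step t
        dW_out += np.outer(err, hs[t + 1])
        dh = W_out.T @ err
        # Propagate the error back through at most TRUNC earlier steps.
        for k in range(t, max(t - TRUNC, -1), -1):
            dz = dh * (1.0 - hs[k + 1] ** 2)  # derivative of tanh
            dW_in += np.outer(dz, xs[k])
            dW_rec += np.outer(dz, hs[k])
            dh = W_rec.T @ dz
    W_in -= LR * dW_in
    W_rec -= LR * dW_rec
    W_out -= LR * dW_out
    return sum(float(np.sum((y - tgt) ** 2)) for y, tgt in zip(ys, targets))

# Toy discrete-time sequence standing in for measured driving data.
xs = [rng.normal(size=N_IN) for _ in range(20)]
targets = [0.1 * rng.normal(size=N_OUT) for _ in range(20)]

losses = [truncated_bptt_step(xs, targets) for _ in range(50)]
print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Setting `TRUNC = 1` would reduce this to training only on immediate transitions, while a window as long as the sequence recovers full backpropagation through time; the truncated variant trades some long-range credit assignment for a bounded, cheaper backward pass.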