
Neurocomputing

Volume 20, Issues 1–3, 31 August 1998, Pages 173-188

Training wavelet networks for nonlinear dynamic input–output modeling

https://doi.org/10.1016/S0925-2312(98)00010-1

Abstract

In the framework of nonlinear process modeling, we propose training algorithms for feedback wavelet networks used as nonlinear dynamic models. An original initialization procedure is presented that takes the locality of the wavelet functions into account. Results obtained for the modeling of several processes are presented; a comparison with networks of neurons with sigmoidal functions is performed.

Introduction

During the past few years, the nonlinear dynamic modeling of processes by neural networks has been extensively studied. Both input–output [7], [8] and state-space [5], [14] models were investigated. In standard neural networks, the nonlinearities are approximated by superposition of sigmoidal functions. These networks are universal approximators [2] and have been shown to be parsimonious [3].

Wavelets are alternative universal approximators; wavelet networks have been investigated in [17] in the framework of static modeling; in the present paper, we propose a training algorithm for feedback wavelet networks used as nonlinear dynamic models of processes. We first present the wavelets that we use and their properties. In Section 3, feedforward wavelet networks for static modeling are presented. In Section 4, the training systems and algorithms for dynamic input–output modeling with wavelet networks, making use of the results of Section 3, are described. For illustration purposes, the modeling of several processes by wavelet networks and by neural networks with sigmoidal functions is presented in Section 5.


From orthogonal wavelet decomposition to wavelet networks

The theory of wavelets was first proposed in the field of multiresolution analysis; among others, it has been applied to image and signal processing [6]. A family of wavelets is constructed by translations and dilations performed on a single fixed function called the mother wavelet. A wavelet φj is derived from its mother wavelet φ by φj(x) = φ((x − mj)/dj), where its translation factor mj and its dilation factor dj are real numbers (dj > 0). We are concerned with modeling problems, i.e. with the fitting of
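The translation–dilation construction above can be sketched in a few lines. The particular mother wavelet used here (the first derivative of a Gaussian) is an illustrative assumption; the paper's actual choice is described in its Section 2.

```python
import numpy as np

def mother_wavelet(z):
    # Assumed mother wavelet for illustration: the first derivative of a
    # Gaussian, a common localized choice. Not necessarily the paper's.
    return -z * np.exp(-0.5 * z ** 2)

def wavelet(x, m, d):
    # Family member obtained by translating (m) and dilating (d > 0)
    # the mother wavelet: phi_j(x) = phi((x - m_j) / d_j).
    return mother_wavelet((x - m) / d)

# Two members of the family differ only in translation and dilation.
x = np.linspace(-5.0, 5.0, 201)
w1 = wavelet(x, m=0.0, d=1.0)   # centered at 0, unit width
w2 = wavelet(x, m=1.0, d=2.0)   # shifted to 1, twice as wide
```

Because the mother wavelet is localized, each family member is localized around its translation factor mj, with an effective support scaled by dj.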

Static modeling using feedforward wavelet networks

Static modeling with wavelet networks has been investigated by other authors in Ref. [17]. In order to make the paper self-contained, we devote the present section to introducing notations and to recalling basic equations which will be used in Section 4 for dynamic modeling.

We consider a process with Ni inputs and a scalar output yp. Steady-state measurements of the inputs and outputs of the process build up a training set of N examples (xn,ynp), xn=[xn1,…,xnNi]T being the input vector for
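A static wavelet-network model of such a process can be sketched as follows. The multidimensional wavelets built as products of 1-D wavelets, the affine part, and all parameter names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def mother_wavelet(z):
    # Assumed mother wavelet (first derivative of a Gaussian); the
    # paper's actual choice is given in its Section 2.
    return -z * np.exp(-0.5 * z ** 2)

def wavenet_output(x, m, d, c, a, b):
    """Output of a feedforward wavelet network for one input vector x.

    Each of the Nw multidimensional wavelets is built as a product of
    1-D wavelets (a frequent construction), and an affine part is added.
    Shapes: x (Ni,), m and d (Nw, Ni), c (Nw,), a (Ni,), b scalar.
    """
    z = (x[None, :] - m) / d                   # (Nw, Ni) shifted, scaled inputs
    psi = np.prod(mother_wavelet(z), axis=1)   # (Nw,) multidimensional wavelets
    return c @ psi + a @ x + b                 # weighted wavelets + affine part
```

Training then amounts to fitting the translations m, dilations d, and weights c, a, b so that the network output matches the measured outputs over the N examples.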

Dynamic modeling using wavelet networks

We propose to extend the use of wavelet networks to the dynamic modeling of single-input–single-output (SISO) processes. The training set consists of two sequences of length N: the input sequence {u(n)} and the measured process output {yp(n)}. As in the static case, the aim is to approximate f by a wavelet network.

Depending on the assumptions about the noise, either feedforward or feedback predictors may be required [9]. For example, if it is assumed that the noise acting on the process is
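The distinction between the two predictor types can be sketched generically. The model f, the regressor orders and the initialization with measured values are illustrative assumptions; the paper's exact predictor equations are its Eqs. (18) and (20).

```python
import numpy as np

def predict_sequence(f, u, yp, n_y=2, n_u=2, feedback=False):
    """Run a one-step-ahead predictor over an input sequence u and a
    measured output sequence yp.

    feedback=False: the regressor contains past *measured* outputs
    (feedforward, series-parallel predictor).
    feedback=True: the regressor contains the model's own past
    predictions (feedback, parallel predictor).
    f maps a regressor vector to a scalar prediction.
    """
    N = len(u)
    y = np.zeros(N)
    start = max(n_y, n_u)
    y[:start] = yp[:start]            # initialize with measured values
    for n in range(start, N):
        past_y = y[n - n_y:n] if feedback else yp[n - n_y:n]
        reg = np.concatenate([past_y, u[n - n_u:n]])
        y[n] = f(reg)
    return y
```

With a feedback predictor, prediction errors are fed back into the regressor, which is why training it requires the backpropagation-through-time-style schemes described in Section 4.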

Simulation results

In this section, we use the above algorithms to train input–output wavelet networks on data gathered from simulated and from real processes, and the algorithms presented in Ref. [8] to train input–output neural networks with one hidden layer of sigmoidal neurons on the same data.

The wavelet networks are input–output models as defined by Eq. (18) or Eq. (20), where the unknown function f is approximated by wavelet networks whose mother wavelet is described in Section 2.

Conclusion

In this paper, we extend the use of wavelet networks for function approximation to the dynamic nonlinear input–output modeling of processes. We show how to train such networks by a classic minimization of a cost function through second-order gradient descent implemented in a backpropagation scheme, with appropriate initialization of the translation and dilation parameters. The training procedure is illustrated on the modeling of simulated and real processes. A comparison with classic sigmoidal
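The initialization idea emphasized above (because wavelets are localized, translations and dilations must be chosen so that each wavelet initially covers part of the input domain) can be sketched as follows. The exact procedure of the paper differs; the uniform spacing and the 20%-of-domain dilation are illustrative assumptions.

```python
import numpy as np

def init_wavelet_params(x_min, x_max, n_wavelets):
    """Illustrative locality-aware initialization for a 1-D input domain
    [x_min, x_max]: spread the translation factors over the domain and
    give every wavelet a non-degenerate dilation, so that no wavelet
    starts out 'seeing' no data at all."""
    m = np.linspace(x_min, x_max, n_wavelets)        # centers cover the domain
    d = np.full(n_wavelets, 0.2 * (x_max - x_min))   # widths: fraction of domain
    return m, d
```

A random initialization, by contrast, may place some wavelets where their output (and hence their gradient) is negligibly small over the whole training set, effectively removing them from training.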

References (18)

  • M. Cannon et al., Space-frequency localized basis function networks for nonlinear system estimation and control, Neurocomputing (1995)
  • G. Cybenko, Approximation by superpositions of a sigmoidal function, Math. Control Signals Systems (1989)
  • K. Hornik et al., Degree of approximation results for feedforward networks approximating unknown mappings and their derivatives, Neural Comput. (1994)
  • M.I. Jordan, The learning of representations for sequential performance, Doctoral Dissertation (1985)
  • A.U. Levin, Neural networks in dynamical systems: a system theoretic approach, PhD Thesis (1992)
  • S. Mallat, A theory for multiresolution signal decomposition: the wavelet transform, IEEE Trans. Pattern Anal. Machine Intell. (1989)
  • K.S. Narendra et al., Identification and control of dynamical systems using neural networks, IEEE Trans. Neural Networks (1990)
  • O. Nerrand et al., Neural networks and non-linear adaptive filtering: unifying concepts and new algorithms, Neural Comput. (1993)
  • O. Nerrand et al., Training recurrent neural networks: why and how? An illustration in process modeling, IEEE Trans. Neural Networks (1994)

Yacine Oussar graduated from Ecole Nationale Polytechnique, Alger, in 1993, with an Engineer degree in Automatic Control, and received the DEA in Robotics from Université Pierre et Marie Curie, Paris, 1994. His doctoral thesis is on the nonlinear modeling of processes using neural and wavelet networks.

Isabelle Rivals graduated from ESPCI in 1991; in 1995, she received the Doctorat de l'Université Pierre et Marie Curie, Paris, on the modeling and control of a four-wheel-drive vehicle using neural networks. She is Maître de Conférences at ESPCI, where her research activities are devoted to the application of neural networks to nonlinear process modeling and control.

Léon Personnaz received the Doctorat ès Sciences from Université Pierre et Marie Curie, Paris, in 1986, on the design and applications of feedback neural networks. He is Maître de Conférences at ESPCI, and he is the leader of a research group on applications of neural networks to automatic classification, and to nonlinear process modeling and control.

Gérard Dreyfus received the Doctorat ès Sciences from Université Pierre et Marie Curie, Paris, in 1976. Since 1982, he has been Professor of Electronics at ESPCI, and head of the electronics research department, whose activities are devoted to neural networks (ranging from neurobiological modeling to industrial applications), and to cellular automata.
