
01-08-2007 | Theoretical Advances

Periodicity and stability issues of a chaotic pattern recognition neural network

Authors: Dragos Calitoiu, B. John Oommen, Doron Nussbaum

Published in: Pattern Analysis and Applications | Issue 3/2007


Abstract

Traditional pattern recognition (PR) systems work with the model that the object to be recognized is characterized by a set of features, which are treated as the inputs. In this paper, we propose a new model for PR, namely one that involves chaotic neural networks (CNNs). To achieve this, we enhance the basic model proposed by Adachi (Neural Netw 10:83–98, 1997), referred to as Adachi’s Neural Network (AdNN), which though dynamic, is not chaotic. We demonstrate that by decreasing the multiplicity of the eigenvalues of the AdNN’s control system, we can effectively drive the system into chaos. We prove this result here by eigenvalue computations and the evaluation of the Lyapunov exponent. With this premise, we then show that such a Modified AdNN (M-AdNN) has the desirable property that it recognizes various input patterns. The way that this PR is achieved is by the system essentially sympathetically “resonating” with a finite periodicity whenever these samples (or their reasonable resemblances) are presented. In this paper, we analyze the M-AdNN for its periodicity, stability and the length of the transient phase of the retrieval process. The M-AdNN has been tested for Adachi’s dataset and for a real-life PR problem involving numerals. We believe that this research also opens a host of new research avenues.


Footnotes
1
Unfortunately, if the external excitation forces the brain out of chaos completely, it can lead to an epileptic seizure, and a future goal of this research is to see how these episodes can be anticipated, remedied and/or prevented. Some initial results of how this can be achieved are currently available.
 
2
They did this in an input-specific manner as follows. Suppose that x_i^s is the value of the ith feature (i.e., the pixel, in the domain example) of the sth pattern. Rather than feeding x_i^s directly to the network, they added a bias of '2' if the pixel value was '0', and a bias of '8' if the pixel value was '1', and fed the resulting value as the input a_i to the network. The intention was to artificially create a greater (scaled) disparity between the values of '0' and '1'. Adachi et al. also tried to explain the significance of these biases.
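A minimal sketch of this biasing step (the helper name is ours; a literal reading of "adding" the bias yields inputs in {2, 9}, while reading the bias as the resulting level would give {2, 8} instead):

```python
import numpy as np

def bias_inputs(pattern):
    """Bias a binary pattern before feeding it to the network,
    per the footnote: add 2 where the pixel is 0 and 8 where it is 1."""
    pattern = np.asarray(pattern)
    return np.where(pattern == 0, pattern + 2, pattern + 8)

# A 0/1 pattern becomes a vector whose two levels are far apart,
# artificially scaling the disparity between '0' and '1' pixels.
a = bias_inputs([0, 1, 1, 0])  # array([2, 9, 9, 2])
```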
 
3
It is well known that the Adachi model (see Kawakami (ed) Bifurcation phenomena in nonlinear systems and theory of dynamical systems. World Scientific, Singapore, pp 143–161, 1990) has a biological basis. In that work, the authors showed that the output of each neuron depends on its own history, through the information stored in the states. In our model, by contrast, the output of each neuron depends on the historical state of one designated neuron, and its connection with its own "past" is achieved through an interaction between the states. Such a modification makes it possible to switch from chaos to periodicity, although the biological rationale is not yet fully understood. However, its role in controlling epilepsy seems to have been resolved [3].
 
4
It is also possible to relate η(t + 1) and ζ(t + 1) to η_K(t + 1) and ζ_K(t + 1) for any fixed K. However, we would like to highlight that the uniqueness of our model consists of "binding" these global state values to the specific values of any one state variable (as opposed to binding each ζ_i(t + 1) to ζ_i(t) itself, as the AdNN does). Theoretically, it seems clear that the specific value of K, which identifies this "binding" neuron, is of no significance. However, the question of whether it will give us any added PR capability remains open.
 
5
The term “sympathetic resonance” is used for lack of a better one. Quite simply, all we require is that the trained pattern periodically surfaces as the output of the CNN.
 
6
We submit here (as a footnote) a few introductory sentences regarding such an analysis. The analysis of a general nonlinear system has two steps. The first consists of computing the steady-state points as determined by the testing pattern. The second consists of analyzing the stability of each steady-state point. The stability analysis generally involves approximating the dynamics of the nonlinear system by a linear system (for example, by computing the Jacobian). This approximation is valid in a neighbourhood of the quiescent steady states (or attracting manifold), where the system can be decomposed using a Taylor series and the higher-order terms neglected.
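For concreteness (a standard textbook expansion, not reproduced from the paper): if the dynamics are \(\mathbf{x}(t+1)=F(\mathbf{x}(t))\) with steady state \(\mathbf{x}^{*}=F(\mathbf{x}^{*})\), then

\[
\mathbf{x}(t+1) = F(\mathbf{x}^{*}) + J(\mathbf{x}^{*})\,\bigl(\mathbf{x}(t)-\mathbf{x}^{*}\bigr) + O\bigl(\|\mathbf{x}(t)-\mathbf{x}^{*}\|^{2}\bigr),
\qquad
J_{ij} = \left.\frac{\partial F_i}{\partial x_j}\right|_{\mathbf{x}=\mathbf{x}^{*}},
\]

and, in this discrete-time setting, the steady state is locally stable when every eigenvalue of J lies strictly inside the unit circle.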
 
7
Observe that J_{ij}^3(t) has the value zero; this result is obtained as follows: \(\frac{\partial \zeta_i(t+1)}{\partial \eta_j(t)} = \frac{\partial (k_r \zeta_i(t) - \alpha x_i(t) + a_i)}{\partial \eta_j(t)} = 0\).
 
8
It turns out that the AdNN is actually not operating as a CNN, which further strengthens the claim presented here. Rather, it slowly converges towards stable but periodic orbits. Adachi actually mentions this indirectly in his paper when he states that the "chaos" of his model depends on the parameters of the network. This can be seen in Fig. 8 and Table 2 of Ref. [1]. When k_r = 0.95, λ_max was computed to be 0.00028, but this is an approximation; the true value of λ_max turns out to be −0.051293.
 
9
As opposed to the AdNN, the M-AdNN has an additive positive term in the computation of the Lyapunov exponents, namely \(\lambda_{2N-1} = \frac{1}{2}\ln N + \ln(k_f)\) and \(\lambda_{2N} = \frac{1}{2}\ln N + \ln(k_r)\), granting us the flexibility of forcing the system to be chaotic. Notice that since all but two of the eigenvalues are zero, the system converges rapidly to a fixed value along all the directions corresponding to the zero eigenvalues. Along the directions of the other two eigenvalues, the behaviour is chaotic.
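A quick numeric check of the two exponents under the formula as quoted above (the parameter values below are purely illustrative, not taken from the paper):

```python
import math

def m_adnn_exponents(N, k_f, k_r):
    """The two non-trivial Lyapunov exponents quoted in the footnote:
    lambda = (1/2) ln N + ln k, for k in {k_f, k_r}."""
    return (0.5 * math.log(N) + math.log(k_f),
            0.5 * math.log(N) + math.log(k_r))

# For a 100-neuron network, (1/2) ln 100 ~= 2.303, so even strongly
# damped k_f, k_r (< 1, hence ln k < 0) can leave both exponents
# positive -- the signature of chaos.
lf, lr = m_adnn_exponents(N=100, k_f=0.2, k_r=0.9)
print(lf, lr)  # ~0.693 and ~2.197, both > 0
```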
 
10
The periodicity of the M-AdNN has nothing to do with the identity of the classes. We believe that the brain possesses this same property (for otherwise, it would resonate differently for every single pattern that it is trained with, and the set of patterns which we can recognize is truly "infinitely large"). But we are currently investigating how we can set the parameters of the M-AdNN (k_r and k_f) to be class-dependent.
 
11
How the brain is able to record and recognize such a periodic behaviour amidst chaos is yet unexplained [5].
 
12
The unique characteristic of such a PR system is that each trained pattern has a unique attractor. When the testing pattern is fed into the system, the system converges to the attractor that characterizes it best. The difficult question, for which we welcome suggestions, is that of knowing which attractor it falls on. As it stands now, we have used a simplistic "compare against all" strategy, but even here, we believe that hierarchical strategies that make use of syntactic clustering could enhance the search. Besides, such a comparison needs to be invoked only if a periodicity is observed, for example, by a frequency-domain analysis. This is an extremely interesting area of research which we are currently pursuing.
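A toy illustration of such a frequency-domain check (the helper name, the threshold, and the choice of signal are our own assumptions, not the paper's procedure):

```python
import numpy as np

def looks_periodic(signal, threshold=10.0):
    """Crude periodicity flag: a strong narrow peak in the power
    spectrum, relative to the median power, suggests the network has
    settled into a periodic orbit rather than remaining chaotic."""
    s = np.asarray(signal, dtype=float) - np.mean(signal)
    power = np.abs(np.fft.rfft(s)) ** 2
    power = power[1:]  # drop the DC bin
    return power.max() > threshold * np.median(power)

# e.g. feed in the per-step Hamming distance between the network's
# binary output and a stored pattern; a period-15 resonance shows up
# as a sharp spectral line, and only then need the "compare against
# all" step be invoked.
```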
 
13
It would have been good if the periodicity were uniquely linked to the pattern classes, but, unfortunately, this is not the case. Rather, the system possesses the characteristic that it switches from being chaotic to being periodic whenever a noisy version of one of the trained patterns is received. So it would be more appropriate to say that this switching phenomenon occurs with 100% accuracy. We have taken the liberty of referring to this as "100% PR" accuracy.
 
14
We would like to emphasize that the pattern, from a PR perspective, is not a "shape". Rather, it is a 100-dimensional vector, every component of which can be modified. Thus, since the noise is bit-wise, the current PR exercise may not work if the "images" are subjected to translation/rotation operations. To be able to also handle translations in an image-processing application, we propose to first preprocess the patterns and extract features; these features would then constitute the bit-wise array that represents the pattern vector.
 
15
In the case of Pattern 3, the first periodic pattern was visible at time index 19. But the periodicity was observable only after time index 39.
 
16
The periodicity (7, 15) means that we encounter a "double cycle". Thus, after the transient phase, the training pattern occurs at times 7, 22, 29, 44, etc., the gaps alternating between 15 and 7 (7, 7 + 15 = 22, 22 + 7 = 29, 29 + 15 = 44). This is because we have a figure-eight-shaped limit cycle, with the smaller loop of the '8' having a periodicity of 7 and the larger loop a periodicity of 15.
 
17
The training, which is a "one-shot" assignment, initializes each w_{ij} to zero and then, for each of the training patterns {X^s = [x_1^s ... x_N^s]}, accumulates w_{ij} ← w_{ij} + x_i^s x_j^s.
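In code, this one-shot rule amounts to summing outer products over the training set (a minimal sketch; any normalization of the weights, which the footnote does not mention, is omitted):

```python
import numpy as np

def train_weights(patterns):
    """One-shot Hebbian assignment from the footnote:
    w_ij = sum over patterns s of x_i^s * x_j^s."""
    patterns = np.asarray(patterns, dtype=float)  # shape (S, N)
    N = patterns.shape[1]
    W = np.zeros((N, N))
    for x in patterns:
        W += np.outer(x, x)  # w_ij += x_i^s * x_j^s
    return W
```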
 
Literature
1. Adachi M, Aihara K (1997) Associative dynamics in a chaotic neural network. Neural Netw 10:83–98
2. Albers DJ, Sprott JC, Dechert WD (1998) Routes to chaos in neural networks with random weights. Int J Bifurcat Chaos 8:1463–1478
3. Calitoiu D, Oommen BJ, Nussbaum D (2005) Modeling inaccurate perception: desynchronization issues of a chaotic pattern recognition neural network. In: Proceedings of the 14th Scandinavian conference on image analysis, Joensuu, Finland, June 19–22, 2005, pp 821–830
4. Fausett L (1994) Fundamentals of neural networks. Prentice Hall, Englewood Cliffs
5. Freeman WJ (1992) Tutorial in neurobiology: from single neurons to brain chaos. Int J Bifurcat Chaos 2:451–482
6. Friedman M, Kandel A (1999) Introduction to pattern recognition: statistical, structural, neural and fuzzy logic approaches. World Scientific, Singapore
7. Fukunaga K (1990) Introduction to statistical pattern recognition. Academic, New York
8.
9. Makishima M, Shimizu T (1998) Wandering motion and co-operative phenomena in a chaotic neural network. Int J Bifurcat Chaos 8:891–898
10. Ripley B (1996) Pattern recognition and neural networks. Cambridge University Press, Cambridge
11. Rosenstein MT, Collins JJ, De Luca CJ (1993) A practical method for calculating largest Lyapunov exponents from small data sets. Physica D 65:117–134
12. Schurmann J (1996) Pattern classification: a unified view of statistical and neural approaches. Wiley, New York
13. Shuai JW, Chen ZX, Liu RT, Wu BX (1997) Maximum hyperchaos in chaotic nonmonotonic neuronal networks. Phys Rev E 56:890–893
14. Sinha S (1996) Controlled transition from chaos to periodic oscillations in a neural network model. Physica A 224:433–446
15. Theodoridis S, Koutroumbas K (1999) Pattern recognition. Academic, New York
Metadata
Title
Periodicity and stability issues of a chaotic pattern recognition neural network
Authors
Dragos Calitoiu
B. John Oommen
Doron Nussbaum
Publication date
01-08-2007
Publisher
Springer-Verlag
Published in
Pattern Analysis and Applications / Issue 3/2007
Print ISSN: 1433-7541
Electronic ISSN: 1433-755X
DOI
https://doi.org/10.1007/s10044-007-0060-3
