
2011 | OriginalPaper | Chapter

22. Classical Reconditioning: Doing What Happens Naturally

Author: Derek Partridge

Published in: The Seductive Computer

Publisher: Springer London


Abstract

The organic world is full of stable, reliable complex systems. IT systems’ technologists may therefore find some useful structures and processes in the world of biology. Evolution, crudely viewed as a process involving time, random changes and selection, has found application in complex optimization. Neural computing (originally inspired by the impressive computational properties of brains) has been developed in a number of different ways. Network programs, hence network programming, appear to circumvent some fundamental weaknesses of conventional programs. But network programming has its own weaknesses and cannot, as yet, be used for large-scale, multifunctional IT systems, although network programs could be elements of such systems. Network programming may offer the software engineer a new paradigm and so require fundamental shifts in thinking about applications and management of computational technology.


Footnotes
1
Not surprisingly perhaps, attempts have been made to simulate biological evolution as a method of developing intelligent programs. Although denied the possibility of letting their ‘evolutionary’ systems evolve for millions of years, the enthusiasts hoped that the very high speed of computers might permit them to show progress over much shorter periods of time. In terms of evolving useful IT systems, or even useful elements of IT systems, the results have never been encouraging.
The next step was a further possible speed-up: circumventing the seemingly very wasteful step of purely random change by restricting the randomness to ‘possibly useful’ changes. What has emerged is a vigorous subfield clustered around the strategies of Genetic Algorithms and Simulated Evolution: program development based on ‘possibly useful’ changes that are assessed and either discarded or kept for subsequent modification in a population (i.e. a collection of similar copies of the software under development) of samples. This evolution-like approach to program development has not come close to producing good IT systems, but it has found valuable application in optimisation technology – i.e. given a program that performs some unitary task adequately, we can usually ‘evolve’ it to get a better performance.
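To make the shape of this mutate-and-select loop concrete, here is a minimal sketch (not drawn from the book) of the kind of evolution-like optimisation described above; the fitness function and all numerical settings are illustrative assumptions standing in for "how well does the tuned program perform?".

```python
# Minimal sketch of an evolution-like optimisation loop: a population of
# candidate parameter settings is repeatedly mutated ('possibly useful'
# random changes), assessed, and either kept or discarded on fitness.
import random

POP_SIZE, GENERATIONS, MUTATION_SD = 20, 100, 0.1

def fitness(candidate):
    # Hypothetical stand-in for measuring the tuned program's performance.
    return -sum((x - 0.5) ** 2 for x in candidate)

def mutate(candidate):
    # A 'possibly useful' change: perturb each parameter slightly.
    return [x + random.gauss(0, MUTATION_SD) for x in candidate]

population = [[random.random() for _ in range(5)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # Keep the better half, refill the population with mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

print("best fitness:", fitness(max(population, key=fitness)))
```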
 
2
The American palaeontologist, Stephen Jay Gould, provided this example. Full details can be found in The Panda’s Thumb (Norton, 1980), a collection of his fascinating and informative essays.
 
3
It was a lack of appreciation for such biological-system interdependencies that scuppered the more ambitious applications of Expert Systems Technology (a Chapter 19 topic) in the last decades of the last millennium. A mass of simple rules and facts can reproduce the human expertise needed to decide on credit-card applications, but it falls dismally short when used in an attempt to replicate the diagnostic expertise of a medical practitioner.
 
4
I am assuming here that Richard Dawkins is right. But for those readers who are perplexed by the notion that the propagation of genetic material, and not organisms, is the driving force behind evolution, I recommend that you peruse Dawkins’ compelling and eminently readable book, The Selfish Gene (Oxford University Press, 1976). And for those who are not bothered either way, but like short, well-written, informative books touching on the secrets of life, this is also the book for you.
 
5
I originally introduced this idea as Non-Programmed Computation in an article of that name published in 2000 in the online version of the leading US Computer Science journal called Communications of the ACM, vol. 43, issue 11es.
 
6
Of course, various specialists know an awful lot about how the brain works. For example, neural pulses are quite well understood in terms of molecules and atoms. The image processing capabilities of single neurons in the optic tract (an extension of the brain) are well understood. The behaviour of neurons en masse, via EEG recordings, is also somewhat understood. However, none of this specialist knowledge is the sort of knowledge that is needed to model brain activity as a computational process, and hence not the sort of knowledge that is needed in order to build ‘real’ neural computers – i.e. machines that compute in ‘much the same way’ as the brain does. We ran into this problem some years ago: see V. S. Johnston, D. Partridge and P. D. Lopez, “A neural theory of cognitive development”, Journal of Theoretical Biology, vol. 100, pp. 485–509, 1983.
 
7
There are significant exceptions: for example, in some systems engineering, such as reliable data transmission over an unreliable channel, designed-in redundancy can be the norm.
 
8
There is now a wealth of introductions to this blossoming subfield of computer technology. Under the simple banner of, say, connectionism, there are very many fundamentally different schemes. Once again, my New Guide to AI (Intellect, 1991) springs to mind as an introduction that is comprehensive yet not unduly burdened with unavoidable technicalities.
 
9
One final time I’d like to note that I’m trying to avoid the misleading labelling that dominates this area: ‘training’ (a straightforward feedback procedure for function optimization) is often called ‘learning’, which then licenses extravagant leaps into the complexities of human learning. ‘Neural networks’ similarly seem to induce unwarranted speculation based on the computational wonders of real neural networks, i.e. our brains. In reference to ‘training’ procedures, Johnson’s Emergence, for example, states that in “A few decades from now [2001]” development of this technology will “begin to convincingly simulate the human capacity for open-ended learning” (p. 208). Classic hopeware, based on a failure to appreciate the yawning gulf between our knowledge of how to optimize functions automatically and our ignorance of the mechanisms underlying “the human capacity for open-ended learning” (if that is indeed what we possess). Incidentally, “a few decades” have already passed since one of the most powerful of these optimization technologies (the Multilayer Perceptron) came into wide use (see note 8), and no development towards open-ended learning has appeared.
 
10
The NETtalk system was devised by Terry Sejnowski and Charles Rosenberg at Johns Hopkins University in the USA in 1986. Details of the system have been reprinted in many books, including the one referenced in note 8.
 
11
There are many possible ‘squash’ functions; the S-curve is commonly used. It is sketched out below. Any number input to this curve (via the horizontal axis; the input value 0.65 is illustrated) will be ‘squashed’ to an output value between 0 and 1 (via the vertical axis, where the output value of 0.75 is illustrated for the input value of 0.65). Notice that the horizontal (input) axis can take all decimal values from minus infinity to plus infinity (although only −3.0 to 3.0 are illustrated). The vertical (output) axis is illustrated in full – the minimum output is 0 and the maximum output is 1.0. Only inputs between about −1.0 and 1.0 will give outputs that are not close to 0 or 1.0. All other input values, however large positive or large negative, will generate output values close to 1.0 or 0, respectively.
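The original sketch is not reproduced here. As a hedged reconstruction, one common choice of such an S-curve is the logistic function

    f(x) = 1 / (1 + e^(−kx)),    which gives 0 < f(x) < 1 for every real input x,

where the gain k sets how steep the middle of the curve is. With the standard k = 1, f(0.65) ≈ 0.66, so the illustrated pair (input 0.65, output 0.75) suggests a somewhat steeper curve (roughly k ≈ 1.7); the footnote itself does not give the exact formula used.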
 
12
There is an obvious objection to my claim that NETtalk exemplifies a fundamental change in programming technology. Neural network systems are, after all, constructed using conventional programming languages and are executed on conventional digital computers. But in just the same way all natural systems are built from the discrete units we call atoms, and yet they can be analogue systems. It is a question of levels: in both cases (computational neural networks and natural analogue systems) the individual discrete components have no readily interpretable meaning at the system level.
 
13
For this particular type of network, a Multilayer Perceptron (or MLP, by far the most widely used NC technology), the training algorithm is (some version of) the Backpropagation of Error (the BP, or backprop) algorithm.
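As a concrete illustration of this training regime, here is a minimal sketch of an MLP trained by backpropagation in Python/NumPy. It is not the NETtalk system: the two-input XOR task, the layer sizes, the learning rate and the iteration count are illustrative assumptions only.

```python
# Minimal Multilayer Perceptron trained with backpropagation of error.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # training inputs
T = np.array([[0], [1], [1], [0]], dtype=float)               # target outputs (XOR)

W1 = rng.normal(0, 1, (2, 4))   # input -> hidden weights
W2 = rng.normal(0, 1, (4, 1))   # hidden -> output weights
LEARNING_RATE = 0.5

def squash(x):
    # The S-shaped 'squash' function of note 11 (here the standard logistic).
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(10000):
    # Forward pass: propagate inputs through the hidden units to the output.
    hidden = squash(X @ W1)
    output = squash(hidden @ W2)

    # Backward pass: propagate the output error back and adjust the weights.
    out_delta = (T - output) * output * (1 - output)
    hid_delta = (out_delta @ W2.T) * hidden * (1 - hidden)
    W2 += LEARNING_RATE * hidden.T @ out_delta
    W1 += LEARNING_RATE * X.T @ hid_delta

# After training, the outputs should approach the targets [0, 1, 1, 0].
print(np.round(squash(squash(X @ W1) @ W2), 2))
```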
 
14
Computer Scientists might object that building programs by piecing together tiny elements of instruction is exactly the problem: the proper way to generate programs is top-down, in the sense that individual instructions are refined, in functionally modular clumps, out of elements derived from a formal specification, via design abstractions. They do have a point: programming by selecting instructions and fitting them together one by one is a poor general strategy for the construction (or fixing) of reliable programs. Ultimately, of course, all such tasks will boil down to microengineering the final outcome. So the temptation is to bypass the foreplay with plans and design modules, and get straight to the instruction details where all solutions reside.
Unsurprisingly then, the majority of programmer time is spent working on programs in this way rather than by eliciting the fine detail from high-level abstractions, i.e. the bulk of IT-systems work is microengineering them. Adding to the problem, top-down program-development schemes (especially the thoroughly formal ones) only work to a certain degree and in certain situations. I’m sure that there could, and should, be more top-down programming, but I’m not at all sure that it is realistic to expect it ever to usurp the majority role from microengineering, especially when we recognize that most commercial programming activity is the fixing and altering of other people’s large, undocumented (poorly or misleadingly documented, it makes little difference) piles of instructions, i.e. grappling with legacy software systems whose only guaranteed ‘description’ is the program instructions themselves.
 
15
Steven Johnson’s Emergence (Penguin, 2001), although addressing many interesting phenomena, is guilty in this respect when referring to general software development. “In the short term, though, emergent software promises to transform the way we think about creating code” (p. 173). In which case, Johnson asks: “Who is driving here, human or machine? Programmer or user?” (p. 174). The answer, as we know, is neither; it’s the program that drives the behaviour, irrespective of what the user and, perhaps more pertinently, the programmer believes about the programmed components he constructed. The ‘promise’ may well be realized for game playing and for human–computer interaction phenomena, but not for IT systems that must exhibit pre-specified, complex behaviours. This single-function optimization technology, such as the MLP example in the chapter, has been well understood for decades, and this understanding includes all the attendant technical problems and limitations.
 
16
Use of multiple systems, on the principle that a variety of answers (one from each system) can deliver (e.g. by simple majority vote) a more reliable result than any single system, has developed into the active subfield of Multi-Classifier Systems (MCS) with its own annual conference and numerous books, published by Springer.
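The combination rule itself is trivially simple. A minimal sketch might look like this (the three ‘member systems’ and their answers are hypothetical stand-ins, not an example from the MCS literature):

```python
# Minimal sketch of the multi-classifier idea: each member system gives an
# answer, and the simple majority vote is taken as the combined result.
from collections import Counter

def majority_vote(answers):
    # Return the most common answer across the member systems.
    return Counter(answers).most_common(1)[0][0]

# Suppose three independently built systems classify the same input:
answers = ["cat", "dog", "cat"]
print(majority_vote(answers))   # -> "cat": two of the three systems agree
```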
 
17
There is a blossoming subfield of ‘autonomous intelligent agent technology’ but (like so many of the good ideas that trespass into hopeware), all of the proposed systems are largely future events.
 
18
One readable, but fanciful, exposition of this notion is The Society of Mind (Simon & Schuster, 1986) by a pioneer of AI, Marvin Minsky. For his PhD work, Jonathon Rowe attempted to transform The Society of Mind philosophy into a concrete IT system, one that displayed creativity in the context of game playing. His project clearly revealed both the strengths and the weaknesses of this general idea. The full project, together with a survey of the notion of creativity in IT systems, can be found in a small book – Computers and Creativity by D. Partridge and J. Rowe, published by Ablex, NY and then Intellect, Bristol, in 1994.
In general, forays into the possibilities for so-called emergent-behaviour programming were inaccessible to the non-specialist until Steven Johnson’s Emergence was published in 2001, but it does use a somewhat fragile definition of the term: “The movement from low-level rules to higher-level sophistication is what we call emergence.” (p. 18) Johnson’s explanations of this rather open ‘definition’ oscillate between the obvious (new behaviours emerge from an interaction of primitive ones) and the fanciful (“systems built with a conscious understanding of what emergence is”, p. 21). There is much mention of “laws of emergence” but, curiously, no laws are explicitly given.
My view is that with all programs we are grappling with emergent behaviours which are no small part of the reason why large IT systems will always grow beyond comprehensibility. If one wishes to impose a more restrictive view of emergence, then one has to draw necessarily arbitrary boundaries within the multidimensional continuum of the universe of programs. I can see no justification for this.
 
Metadata
Title
Classical Reconditioning: Doing What Happens Naturally
Author
Derek Partridge
Copyright Year
2011
Publisher
Springer London
DOI
https://doi.org/10.1007/978-1-84996-498-2_22
