
2016 | Book

Computational Intelligence

A Methodological Introduction

Authors: Rudolf Kruse, Christian Borgelt, Christian Braune, Sanaz Mostaghim, Matthias Steinbrecher

Publisher: Springer London

Book series: Texts in Computer Science


About this book

This textbook provides a clear and logical introduction to the field, covering the fundamental concepts, algorithms and practical implementations behind efforts to develop systems that exhibit intelligent behavior in complex environments. This enhanced second edition has been fully revised and expanded with new content on swarm intelligence, deep learning, fuzzy data analysis, and discrete decision graphs. Features: provides supplementary material at an associated website; contains numerous classroom-tested examples and definitions throughout the text; presents useful insights into all that is necessary for the successful application of computational intelligence methods; explains the theoretical background underpinning proposed solutions to common problems; discusses in great detail the classical areas of artificial neural networks, fuzzy systems and evolutionary algorithms; reviews the latest developments in the field, covering such topics as ant colony optimization and probabilistic graphical models.

Table of Contents

Frontmatter
Chapter 1. Introduction to Computational Intelligence
Abstract
Complex problem settings in widely differing technical, commercial, and financial fields evoke an increasing need for computer applications that must show “intelligent behavior.” These applications are desired to support decision-making, to control processes, to recognize and interpret patterns, or to maneuver vehicles or robots autonomously in unknown environments. Novel approaches, methods, tools and programming environments have been developed to accomplish such tasks.
Rudolf Kruse, Christian Borgelt, Christian Braune, Sanaz Mostaghim, Matthias Steinbrecher

Neural Networks

Frontmatter
Chapter 2. Introduction to Neural Networks
Abstract
(Artificial) neural networks are information processing systems whose structure and operation principles are inspired by the nervous system and the brain of animals and humans. They consist of a large number of fairly simple units, the so-called neurons, which work in parallel. These neurons communicate with each other by sending information in the form of activation signals along directed connections.
Rudolf Kruse, Christian Borgelt, Christian Braune, Sanaz Mostaghim, Matthias Steinbrecher
Chapter 3. Threshold Logic Units
Abstract
The description of biological neural networks in the preceding chapter makes it natural to model neurons as threshold logic units: if a neuron receives enough excitatory input that is not compensated by equally strong inhibitory input, it becomes active and sends a signal to other neurons. Such a model was already examined very early and in much detail by McCulloch and Pitts (1943) (Bull. Math. Biophys. 5:115–133, 1943). As a consequence, threshold logic units are also known as McCulloch–Pitts neurons. Another name commonly used for a threshold logic unit is perceptron, even though the processing units that Rosenblatt (1958) (Psychol. Rev. 65:386–408, 1958) and Rosenblatt (1962) (Principles of Neurodynamics, 1962) called “perceptrons” are actually somewhat more complex than simple threshold logic units.
Rudolf Kruse, Christian Borgelt, Christian Braune, Sanaz Mostaghim, Matthias Steinbrecher
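The chapter's central model can be sketched in a few lines: a threshold logic unit fires exactly when its weighted input sum reaches its threshold. The weights and threshold below are illustrative values chosen to realize logical AND, not an example from the book.

```python
# Minimal sketch of a threshold logic unit (McCulloch-Pitts neuron):
# the unit outputs 1 iff the weighted input sum reaches the threshold.
def threshold_logic_unit(inputs, weights, threshold):
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

# With weights (1, 1) and threshold 2 the unit computes logical AND.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, threshold_logic_unit((x1, x2), (1, 1), 2))
```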
Chapter 4. General Neural Networks
Abstract
In this chapter we introduce a general model of (artificial) neural networks that captures (more or less) all of the special forms that we consider in the following chapters. We start by defining the structure of an (artificial) neural network, then describe its operation in general terms, and finally turn to its training.
Rudolf Kruse, Christian Borgelt, Christian Braune, Sanaz Mostaghim, Matthias Steinbrecher
Chapter 5. Multilayer Perceptrons
Abstract
Having described the structure, the operation, and the training of (artificial) neural networks in a general fashion in the preceding chapter, we turn in this and the subsequent chapters to specific forms of (artificial) neural networks. We start with the best-known and most widely used form, the so-called multilayer perceptron (MLP), which is closely related to the networks of threshold logic units we studied in Chap. 3.
Rudolf Kruse, Christian Borgelt, Christian Braune, Sanaz Mostaghim, Matthias Steinbrecher
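The forward computation of a multilayer perceptron can be sketched as layered weighted sums passed through a logistic activation function. The hand-picked weights below are a hypothetical example (not from the book) that approximately realizes XOR with one hidden layer, illustrating why MLPs overcome the linear-separability limits of single threshold logic units.

```python
import math

def logistic(t):
    return 1.0 / (1.0 + math.exp(-t))

def layer(inputs, weights, biases):
    # One fully connected layer: weighted sum minus bias, then logistic.
    return [logistic(sum(w * x for w, x in zip(ws, inputs)) - b)
            for ws, b in zip(weights, biases)]

# Hypothetical hand-picked weights: hidden units act as OR and NAND,
# the output unit as AND, which together give XOR.
hidden_w, hidden_b = [[20, 20], [-20, -20]], [10, -30]
out_w, out_b = [[20, 20]], [30]

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    y = layer(layer(x, hidden_w, hidden_b), out_w, out_b)
    print(x, round(y[0]))
```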
Chapter 6. Radial Basis Function Networks
Abstract
Like multilayer perceptrons, radial basis function networks are feedforward neural networks with a strictly layered structure.
Rudolf Kruse, Christian Borgelt, Christian Braune, Sanaz Mostaghim, Matthias Steinbrecher
Chapter 7. Self-organizing Maps
Abstract
Self-organizing maps are closely related to radial basis function networks. They can be seen as radial basis function networks without an output layer, or, rather, the hidden layer of a radial basis function network is already the output layer of a self-organizing map. This output layer also has an internal structure since the neurons are arranged in a grid. The neighborhood relationships resulting from this grid are exploited in the training process in order to determine a topology preserving map.
Rudolf Kruse, Christian Borgelt, Christian Braune, Sanaz Mostaghim, Matthias Steinbrecher
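The training process described above, in which grid neighbors of the winning neuron are dragged along toward the input, can be sketched as follows. Grid size, learning rate, and neighborhood width are illustrative choices, not values from the book.

```python
import math, random

random.seed(0)

# Sketch of self-organizing map training on a 3x3 neuron grid with
# two-dimensional reference vectors (all parameters illustrative).
grid = [(i, j) for i in range(3) for j in range(3)]
weights = {p: [random.random(), random.random()] for p in grid}

def train_step(x, eta=0.5, sigma=1.0):
    # 1. Winner: the neuron whose reference vector is closest to x.
    winner = min(grid, key=lambda p: sum((w - xi) ** 2
                                         for w, xi in zip(weights[p], x)))
    # 2. Move the winner and its grid neighbors toward x; the Gaussian
    #    neighborhood function decays with grid distance to the winner,
    #    which is what yields a topology preserving map.
    for p in grid:
        d2 = (p[0] - winner[0]) ** 2 + (p[1] - winner[1]) ** 2
        h = math.exp(-d2 / (2 * sigma ** 2))
        weights[p] = [w + eta * h * (xi - w) for w, xi in zip(weights[p], x)]

for x in ([0.0, 0.0], [1.0, 1.0], [0.0, 1.0], [1.0, 0.0]):
    train_step(x)
```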
Chapter 8. Hopfield Networks
Abstract
In the preceding Chaps. 5 to 7 we studied so-called feedforward networks, that is, networks with an acyclic graph (no directed cycles). In this and the next chapter, however, we turn to so-called recurrent networks, that is, networks whose graph may contain (directed) cycles.
Rudolf Kruse, Christian Borgelt, Christian Braune, Sanaz Mostaghim, Matthias Steinbrecher
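A small sketch of the classic use of such a recurrent network as associative memory: one bipolar pattern is stored with the Hebbian rule, and asynchronous updates recover it from a corrupted input. Pattern and sizes are illustrative.

```python
# Sketch of a Hopfield network: Hebbian storage of one bipolar pattern
# and asynchronous recall from a noisy version of it.
pattern = [1, 1, -1, -1, 1, -1]
n = len(pattern)

# Hebbian rule: w_ij = x_i * x_j for i != j, zero diagonal.
W = [[pattern[i] * pattern[j] if i != j else 0 for j in range(n)]
     for i in range(n)]

def recall(state, sweeps=5):
    state = list(state)
    for _ in range(sweeps):
        for i in range(n):  # asynchronous update, one neuron at a time
            s = sum(W[i][j] * state[j] for j in range(n))
            state[i] = 1 if s >= 0 else -1
    return state

noisy = [-1, 1, -1, -1, 1, -1]   # first component flipped
print(recall(noisy))              # -> [1, 1, -1, -1, 1, -1]
```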
Chapter 9. Recurrent Networks
Abstract
The Hopfield networks that we discussed in the preceding chapter are special recurrent networks, which have a very constrained structure. In this chapter, however, we lift all restrictions and consider recurrent networks without any constraints. Such general recurrent networks are well suited to represent differential equations and to solve them (approximately) in a numerical fashion. If the type of differential equation is known that describes a given system, but the values of the parameters appearing in it are unknown, one may also try to train a suitable recurrent network with the help of example patterns in order to determine the system parameters.
Rudolf Kruse, Christian Borgelt, Christian Braune, Sanaz Mostaghim, Matthias Steinbrecher
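The connection between recurrent networks and differential equations mentioned above rests on discretization: an Euler step turns a differential equation into a recurrent update of the network state. A minimal sketch for the illustrative equation dx/dt = -k·x:

```python
# Euler discretization of dx/dt = -k * x yields the recurrent update
# x(t + dt) = x(t) + dt * (-k * x(t)), i.e. the state feeds back into
# itself -- exactly the structure of a recurrent network (values illustrative).
k, dt = 1.0, 0.1
x = 1.0
trajectory = [x]
for _ in range(50):
    x = x + dt * (-k * x)     # one recurrent step
    trajectory.append(x)
print(round(trajectory[-1], 4))   # approximates exp(-k * t) at t = 5
```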
Chapter 10. Mathematical Remarks for Neural Networks
Abstract
The following sections treat mathematical topics that were presupposed in the text (Sect. 10.1 on straight line equations and Sect. 10.2 on regression), as well as side remarks that would have disturbed the flow of the exposition (Sect. 10.3 on activation transformation in a Hopfield network).
Rudolf Kruse, Christian Borgelt, Christian Braune, Sanaz Mostaghim, Matthias Steinbrecher

Evolutionary Algorithms

Frontmatter
Chapter 11. Introduction to Evolutionary Algorithms
Abstract
Evolutionary algorithms comprise a class of optimization techniques that imitate principles of biological evolution.
Rudolf Kruse, Christian Borgelt, Christian Braune, Sanaz Mostaghim, Matthias Steinbrecher
Chapter 12. Elements of Evolutionary Algorithms
Abstract
Evolutionary algorithms are not fixed procedures, but contain several elements that must be adapted to the optimization problem to be solved.
Rudolf Kruse, Christian Borgelt, Christian Braune, Sanaz Mostaghim, Matthias Steinbrecher
Chapter 13. Fundamental Evolutionary Algorithms
Abstract
The preceding chapter presented all relevant elements of evolutionary algorithms, namely guidelines of how to choose an encoding for the solution candidates, procedures how to select individuals based on their fitness, and genetic operators with which modified solution candidates can be obtained.
Rudolf Kruse, Christian Borgelt, Christian Braune, Sanaz Mostaghim, Matthias Steinbrecher
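The three elements named above (encoding, fitness-based selection, and genetic operators) can be combined into a minimal genetic algorithm. The sketch below uses the one-max problem (maximize the number of 1-bits) with illustrative population size, rates, and tournament selection; these are our own choices, not the book's.

```python
import random

random.seed(1)

# Sketch of a basic genetic algorithm on the one-max problem.
LENGTH, POP, GENS = 20, 30, 60

def fitness(ind):
    return sum(ind)              # number of 1-bits

def tournament(pop):
    # Fitness-based selection: the better of two random individuals wins.
    a, b = random.choice(pop), random.choice(pop)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    cut = random.randrange(1, LENGTH)          # one-point crossover
    return p1[:cut] + p2[cut:]

def mutate(ind, rate=0.02):
    return [1 - g if random.random() < rate else g for g in ind]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP)]

print(max(fitness(ind) for ind in pop))   # close to LENGTH after 60 generations
```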
Chapter 14. Computational Swarm Intelligence
Abstract
Swarm Intelligence (SI) is about the collective behavior of a population of individuals. The main property of such populations is that all individuals follow the same simple rules, from which the global collective behavior cannot be predicted. Moreover, the individuals can only communicate within their local neighborhoods. The outcome of this local interaction defines the collective behavior, which is unknown to the single individuals. The field of Computational Swarm Intelligence contains several approaches for dealing with optimization problems. Particle Swarm Optimization (PSO) (Kennedy and Eberhart, Particle Swarm Optimization, 1995) and Ant Colony Optimization (ACO) (Dorigo and Stützle, Ant Colony Optimization, 2004) are addressed in this chapter. After the introduction, we explain the basic principles of computational swarm intelligence for PSO in Sect. 14.2, upon which the following Sects. 14.3 to 14.5 are built. The second part of the chapter, Sect. 14.6, is about the Ant Colony Optimization method.
Rudolf Kruse, Christian Borgelt, Christian Braune, Sanaz Mostaghim, Matthias Steinbrecher
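The PSO principle of combining personal and global best positions can be sketched on a simple minimization task. The velocity update below follows the standard PSO scheme; the inertia and acceleration coefficients are common illustrative choices, not values prescribed by the book.

```python
import random

random.seed(0)

# Sketch of particle swarm optimization minimizing f(x, y) = x^2 + y^2.
def f(p):
    return p[0] ** 2 + p[1] ** 2

DIM, N, STEPS = 2, 15, 100
w, c1, c2 = 0.7, 1.5, 1.5           # inertia and acceleration coefficients

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
vel = [[0.0] * DIM for _ in range(N)]
best = [p[:] for p in pos]           # personal best positions
gbest = min(best, key=f)[:]          # global best position

for _ in range(STEPS):
    for i in range(N):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            # Velocity: inertia + pull toward personal and global bests.
            vel[i][d] = (w * vel[i][d]
                         + c1 * r1 * (best[i][d] - pos[i][d])
                         + c2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if f(pos[i]) < f(best[i]):
            best[i] = pos[i][:]
            if f(best[i]) < f(gbest):
                gbest = best[i][:]

print(f(gbest))   # very close to 0, the global minimum
```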

Fuzzy Systems

Frontmatter
Chapter 15. Introduction to Fuzzy Sets and Fuzzy Logic
Abstract
Many propositions about the real world are not either true or false, rendering classical logic inadequate for reasoning with such propositions.
Rudolf Kruse, Christian Borgelt, Christian Braune, Sanaz Mostaghim, Matthias Steinbrecher
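The core idea of gradual truth values can be sketched with discrete fuzzy sets and the standard operations (minimum as intersection, maximum as union, 1 − μ as complement). The membership degrees below are illustrative, not from the book.

```python
# Sketch of fuzzy sets as membership functions over a discrete universe
# (ages), with the standard fuzzy set operations.
young = {10: 1.0, 25: 0.8, 40: 0.3, 60: 0.0}
tall  = {10: 0.1, 25: 0.6, 40: 0.9, 60: 0.7}

young_and_tall = {x: min(young[x], tall[x]) for x in young}   # intersection
young_or_tall  = {x: max(young[x], tall[x]) for x in young}   # union
not_young      = {x: 1.0 - young[x] for x in young}           # complement

print(young_and_tall[25])   # 0.6
```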
Chapter 16. The Extension Principle
Abstract
In Sect. 15.6 we have discussed how set theoretic operations like intersection, union, and complement can be generalized to fuzzy sets. This section is devoted to the issue of extending the concept of mappings or functions to fuzzy sets. These ideas allow us to define operations like addition, subtraction, multiplication, division, or taking squares as well as set theoretic concepts like the composition of relations for fuzzy sets.
Rudolf Kruse, Christian Borgelt, Christian Braune, Sanaz Mostaghim, Matthias Steinbrecher
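For discrete fuzzy sets, the extension principle for a binary operation such as addition reduces to a max over min of memberships. A minimal sketch with illustrative fuzzy numbers "about 2" and "about 3":

```python
# Extension principle for addition of two discrete fuzzy sets:
# mu_C(z) = max over all x + y = z of min(mu_A(x), mu_B(y)).
def extend_add(A, B):
    C = {}
    for x, ma in A.items():
        for y, mb in B.items():
            z = x + y
            C[z] = max(C.get(z, 0.0), min(ma, mb))
    return C

# Illustrative fuzzy numbers on a discrete universe.
about2 = {1: 0.5, 2: 1.0, 3: 0.5}
about3 = {2: 0.5, 3: 1.0, 4: 0.5}
print(extend_add(about2, about3))   # membership peaks at 5 ("about 5")
```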
Chapter 17. Fuzzy Relations
Abstract
Relations can be used to model dependencies, correlations, or connections between variables, quantities, or attributes. Technically speaking, a (binary) relation over the universes of discourse X and Y is a subset R of the Cartesian product \(X \times Y\) of X and Y. The pairs \((x,y) \in X \times Y\) belonging to the relation R are linked by a connection described by the relation R. Therefore, a common notation for \((x,y) \in R\) is also xRy.
Rudolf Kruse, Christian Borgelt, Christian Braune, Sanaz Mostaghim, Matthias Steinbrecher
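For fuzzy relations over finite universes, the key operation of chaining two relations is the max-min composition. The relations R and S below are illustrative examples of our own making:

```python
# Max-min composition of fuzzy relations R (over X x Y) and S (over Y x Z):
# mu_{R o S}(x, z) = max over y of min(mu_R(x, y), mu_S(y, z)).
def max_min_compose(R, S):
    X = {x for x, _ in R}
    Y = {y for _, y in R}
    Z = {z for _, z in S}
    return {(x, z): max(min(R[(x, y)], S[(y, z)]) for y in Y)
            for x in X for z in Z}

R = {('a', 1): 0.8, ('a', 2): 0.3, ('b', 1): 0.1, ('b', 2): 1.0}
S = {(1, 'p'): 0.6, (2, 'p'): 0.9}
print(max_min_compose(R, S))
```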
Chapter 18. Similarity Relations
Abstract
In this chapter, we discuss a special type of fuzzy relations called similarity relations.
Rudolf Kruse, Christian Borgelt, Christian Braune, Sanaz Mostaghim, Matthias Steinbrecher
Chapter 19. Fuzzy Control
Abstract
The biggest success of fuzzy systems in the field of industrial and commercial applications has been achieved with fuzzy controllers. Fuzzy control is a way of defining a nonlinear table-based controller whose nonlinear transition function can be defined without specifying every single entry of the table individually. Fuzzy control does not result from classical control engineering approaches. In fact, its roots can be found in the area of rule-based systems.
Rudolf Kruse, Christian Borgelt, Christian Braune, Sanaz Mostaghim, Matthias Steinbrecher
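The rule-based character of such a controller can be sketched with a tiny Mamdani-style example: one input, two rules, triangular membership functions, and weighted-average defuzzification. Rule base and membership functions are illustrative inventions, not from the book.

```python
# Sketch of a tiny Mamdani-style fuzzy controller with one input (error).
def tri(x, a, b, c):
    # Triangular membership function with support [a, c] and peak at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def control(error):
    # Rule 1: if error is negative then output is low  (value -1).
    # Rule 2: if error is positive then output is high (value +1).
    w_neg = tri(error, -2.0, -1.0, 0.0)
    w_pos = tri(error, 0.0, 1.0, 2.0)
    if w_neg + w_pos == 0.0:
        return 0.0
    # Defuzzification as the weighted average of the rule outputs.
    return (w_neg * -1.0 + w_pos * 1.0) / (w_neg + w_pos)

print(control(0.5))    # 1.0 -- only the "positive" rule fires
```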
Chapter 20. Fuzzy Data Analysis
Abstract
So far, we have considered fuzzy methods for modeling purposes, for which it is beneficial to incorporate vague concepts. As a consequence, the created (fuzzy) models are designed by domain experts and thus result from a purely knowledge-driven approach. However, fuzzy models may also be derived (automatically) from data if a sufficient amount of suitable data is available (data-driven approach).
Rudolf Kruse, Christian Borgelt, Christian Braune, Sanaz Mostaghim, Matthias Steinbrecher

Bayes and Markov Networks

Frontmatter
Chapter 21. Introduction to Bayes Networks
Abstract
Relational database systems are amongst the most widespread data management systems in today’s businesses.
Rudolf Kruse, Christian Borgelt, Christian Braune, Sanaz Mostaghim, Matthias Steinbrecher
Chapter 22. Elements of Probability and Graph Theory
Abstract
This chapter introduces required theoretical concepts for the definition of Bayes and Markov networks. After important elements of probability theory—especially (conditional) independences—are discussed, we present relevant graph-theoretic notions with emphasis on so-called separation criteria. These criteria will later allow us to capture probabilistic independences with an undirected or directed graph.
Rudolf Kruse, Christian Borgelt, Christian Braune, Sanaz Mostaghim, Matthias Steinbrecher
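How conditional independences let a graph structure compress a joint distribution can be sketched on a tiny chain network. The probabilities below are illustrative numbers, not from the book.

```python
# For the chain A -> B -> C, the joint distribution factorizes as
# P(a, b, c) = P(a) * P(b | a) * P(c | b), because C is conditionally
# independent of A given B (probabilities illustrative).
P_A = {0: 0.6, 1: 0.4}
P_B_given_A = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}
P_C_given_B = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.5, 1: 0.5}}

def joint(a, b, c):
    return P_A[a] * P_B_given_A[a][b] * P_C_given_B[b][c]

# The factored distribution is properly normalized: entries sum to 1.
total = sum(joint(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1))
print(round(total, 6))
```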
Chapter 23. Decompositions
Abstract
The objective of this chapter is to connect the concepts of conditional independence with the separation in graphs.
Rudolf Kruse, Christian Borgelt, Christian Braune, Sanaz Mostaghim, Matthias Steinbrecher
Chapter 24. Evidence Propagation
Abstract
After having discussed efficient representations for expert and domain knowledge, we intend to exploit them to draw inferences when new information (evidence) becomes known. The objective is to propagate the evidence through the underlying network to reach all relevant attributes. Obviously, the graph structure will play an important role.
Rudolf Kruse, Christian Borgelt, Christian Braune, Sanaz Mostaghim, Matthias Steinbrecher
Chapter 25. Learning Graphical Models
Abstract
We will now address the third question from Chap. 21, namely how graphical models can be learned from given data. Until now, we were given the graphical structure. Now, we will introduce heuristics that allow us to induce these structures.
Rudolf Kruse, Christian Borgelt, Christian Braune, Sanaz Mostaghim, Matthias Steinbrecher
Chapter 26. Belief Revision
Abstract
In Chap. 23 we discussed graphical models as tools for structuring uncertain knowledge about high-dimensional domains. More precisely, we introduced Bayes networks, which are based on directed graphs and conditional distributions, and Markov networks, which refer to undirected graphs and marginal distributions or factor potentials. Further, we presented an evidence propagation method in Chap. 24 with which it is possible to condition the probability distribution represented by a graphical model on given evidence, i.e., on observed values for some of the variables, a reasoning process that is also called focusing.
Rudolf Kruse, Christian Borgelt, Christian Braune, Sanaz Mostaghim, Matthias Steinbrecher
Chapter 27. Decision Graphs
Abstract
In this chapter we will introduce the concepts of decision graphs (also referred to as influence diagrams) that are extensions to Bayes networks targeted for sequential decision making with dedicated focus on evaluating strategies and expected utilities.
Rudolf Kruse, Christian Borgelt, Christian Braune, Sanaz Mostaghim, Matthias Steinbrecher
Backmatter
Metadata
Title
Computational Intelligence
Authors
Rudolf Kruse
Christian Borgelt
Christian Braune
Sanaz Mostaghim
Matthias Steinbrecher
Copyright year
2016
Publisher
Springer London
Electronic ISBN
978-1-4471-7296-3
Print ISBN
978-1-4471-7294-9
DOI
https://doi.org/10.1007/978-1-4471-7296-3