
2009 | Book

Computational Intelligence

Collaboration, Fusion and Emergence

Edited by: Christine L. Mumford, Lakhmi C. Jain

Publisher: Springer Berlin Heidelberg

Book series: Intelligent Systems Reference Library


About this book

This book is about synergy in computational intelligence (CI). It is a collection of chapters that covers a rich and diverse variety of computer-based techniques, all involving some aspect of computational intelligence, but each one taking a somewhat pragmatic view. Many complex problems in the real world require the application of some form of what we loosely call “intelligence” for their solution. Few can be solved by the naive application of a single technique, however good it is. Authors in this collection recognize the limitations of individual paradigms, and propose some practical and novel ways in which different CI techniques can be combined with each other, or with more traditional computational techniques, to produce powerful problem-solving environments which exhibit synergy, i.e., systems in which the whole is greater than the sum of the parts.

Computational intelligence is a relatively new term, and there is some disagreement as to its precise definition. Some practitioners limit its scope to schemes involving evolutionary algorithms, neural networks, fuzzy logic, or hybrids of these. For others, the definition is a little more flexible, and will include paradigms such as Bayesian belief networks, multi-agent systems, case-based reasoning and so on. Generally, the term has a similar meaning to the well-known phrase “Artificial Intelligence” (AI), although CI is perceived more as a “bottom up” approach from which intelligent behaviour can emerge, whereas AI tends to be studied from the “top down”, and derive from pondering upon the “meaning of intelligence”. (These and other key issues will be discussed in more detail in Chapter 1.)

Table of Contents

Frontmatter

Introduction

Frontmatter
Synergy in Computational Intelligence
Abstract
This chapter introduces the book. It begins with a historical perspective on Computational Intelligence (CI), and discusses its relationship with the longer established term “Artificial Intelligence” (AI). The chapter then gives a brief overview of the main CI techniques, and concludes with short summaries of all the chapters in the book.
Christine L. Mumford
Computational Intelligence: The Legacy of Alan Turing and John von Neumann
Abstract
In this chapter fundamental problems of collaborative computational intelligence are discussed. The problems are distilled from the seminal research of Alan Turing and John von Neumann. For Turing the creation of machines with human-like intelligence was only a question of programming time. In his research he identified the most relevant problems concerning evolutionary computation, learning, and structure of an artificial brain. Many problems are still unsolved, especially efficient higher learning methods which Turing called initiative. Von Neumann was more cautious. He doubted that human-like intelligent behavior could be described unambiguously in finite time and finite space. Von Neumann focused on self-reproducing automata to create more complex systems out of simpler ones. An early proposal from John Holland is analyzed. It centers on adaptability and population of programs. The early research of Newell, Shaw, and Simon is discussed. They use the logical calculus to discover proofs in logic. Only a few recent research projects have the broad perspectives and the ambitious goals of Turing and von Neumann. As examples the projects Cyc, Cog, and JANUS are discussed.
Heinz Mühlenbein

Fusing Evolutionary Algorithms and Fuzzy Logic

Frontmatter
Multiobjective Evolutionary Algorithms for Electric Power Dispatch Problem
Abstract
The potential of Multiobjective Evolutionary Algorithms (MOEA) for solving a real-world power system multiobjective nonlinear optimization problem is comprehensively presented and discussed. In this work, the Non-dominated Sorting Genetic Algorithm (NSGA), Niched Pareto Genetic Algorithm (NPGA), and Strength Pareto Evolutionary Algorithm (SPEA) have been developed and successfully applied to the Environmental/Economic electric power Dispatch (EED) problem. These multiobjective evolutionary algorithms have been individually examined and applied to a standard test system. A hierarchical clustering algorithm is imposed to provide the power system operator with a representative and manageable Pareto set. Moreover, a fuzzy set theory based approach is developed to extract one of the Pareto-optimal solutions as the best compromise solution. Several optimization runs have been carried out on different cases of problem complexity. The results of the MOEA have been compared to those reported in the literature. The results confirm the potential and effectiveness of MOEA compared to the traditional multiobjective optimization techniques. In addition, the performance of MOEA has been assessed and evaluated using different measures of diversity, distribution, and quality of the obtained non-dominated solutions.
Mohammad A. Abido
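In the MOEA literature, this kind of best-compromise extraction is commonly done with linear fuzzy membership functions over each objective, normalized across the Pareto set. The following is a minimal sketch under that assumption; the function name and the two-objective example data are illustrative, not taken from the chapter:

```python
def best_compromise(pareto_set):
    """Pick a compromise solution from a Pareto set (all objectives minimized).

    pareto_set: list of objective vectors (lists of floats).
    Returns the index of the solution whose normalized aggregate fuzzy
    membership is highest.
    """
    n_obj = len(pareto_set[0])
    f_min = [min(s[k] for s in pareto_set) for k in range(n_obj)]
    f_max = [max(s[k] for s in pareto_set) for k in range(n_obj)]

    # Linear membership: 1 at the best observed value of objective k,
    # 0 at the worst, linear in between.
    def mu(s, k):
        if f_max[k] == f_min[k]:
            return 1.0
        return (f_max[k] - s[k]) / (f_max[k] - f_min[k])

    scores = [sum(mu(s, k) for k in range(n_obj)) for s in pareto_set]
    total = sum(scores)
    normalized = [sc / total for sc in scores]
    return max(range(len(pareto_set)), key=lambda i: normalized[i])
```

On a toy two-objective front such as `[[1.0, 9.0], [4.0, 4.0], [9.0, 1.0]]`, the middle, balanced solution scores highest, which matches the intuition of a “best compromise” between conflicting objectives.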
Fuzzy Evolutionary Algorithms and Genetic Fuzzy Systems: A Positive Collaboration between Evolutionary Algorithms and Fuzzy Systems
Abstract
There are two possible ways for integrating fuzzy logic and evolutionary algorithms. The first one involves the application of evolutionary algorithms for solving optimization and search problems related to fuzzy systems, obtaining genetic fuzzy systems. The second one concerns the use of fuzzy tools and fuzzy logic-based techniques for modelling different evolutionary algorithm components and adapting evolutionary algorithm control parameters, with the goal of improving performance. The evolutionary algorithms resulting from this integration are called fuzzy evolutionary algorithms. In this chapter, we briefly introduce genetic fuzzy systems and fuzzy evolutionary algorithms, giving a short state of the art, and sketch our vision of some hot current trends and prospects. In essence, we paint a complete picture of these two lines of research with the aim of showing the benefits derived from the synergy between evolutionary algorithms and fuzzy logic.
F. Herrera, M. Lozano
Multiobjective Genetic Fuzzy Systems
Abstract
In the design of fuzzy rule-based systems, we have two conflicting goals: One is accuracy maximization, and the other is complexity minimization (i.e., interpretability maximization). There exists a tradeoff relation between these two goals. That is, we cannot simultaneously achieve accuracy maximization and complexity minimization. Various approaches have been proposed to find accurate and interpretable fuzzy rule-based systems. In some approaches, these two goals are integrated into a single objective function which can be optimized by standard single-objective optimization techniques. In other approaches, accuracy maximization and complexity minimization are handled as different objectives in the framework of multiobjective optimization. Recently, multiobjective genetic algorithms have been used to search for a large number of non-dominated fuzzy rule-based systems along the accuracy-complexity tradeoff surface in some studies. These studies are often referred to as multiobjective genetic fuzzy systems. In this chapter, we first briefly explain the concept of accuracy-complexity tradeoff in the design of fuzzy rule-based systems. Next we explain various studies in multiobjective genetic fuzzy systems. Two basic ideas are explained in detail through computational experiments. Then we review a wide range of studies related to multiobjective genetic fuzzy systems. Finally we point out future research directions.
Hisao Ishibuchi, Yusuke Nojima
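The accuracy-complexity tradeoff surface described in this abstract is the set of non-dominated fuzzy rule-based systems. The core operation can be sketched as a standard Pareto-dominance filter; the two objectives used in the example (classification error and rule count, both minimized) are illustrative choices, not taken from the chapter:

```python
def dominates(a, b):
    """True if objective vector a dominates b (all objectives minimized):
    a is no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated(candidates):
    """Filter a list of objective vectors down to the Pareto front,
    preserving input order."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates)]
```

For example, with candidates given as `(error, number_of_rules)` pairs, a system that is both less accurate and more complex than another is discarded, leaving only the tradeoff surface for the user (or a multiobjective GA) to explore.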

Adaptive Solution Schemes

Frontmatter
Exploring Hyper-heuristic Methodologies with Genetic Programming
Abstract
Hyper-heuristics represent a novel search methodology that is motivated by the goal of automating the process of selecting or combining simpler heuristics in order to solve hard computational search problems. An extension of the original hyper-heuristic idea is to generate new heuristics which are not currently known. These approaches operate on a search space of heuristics rather than directly on a search space of solutions to the underlying problem which is the case with most meta-heuristics implementations. In the majority of hyper-heuristic studies so far, a framework is provided with a set of human designed heuristics, taken from the literature, and with good measures of performance in practice. A less well studied approach aims to generate new heuristics from a set of potential heuristic components. The purpose of this chapter is to discuss this class of hyper-heuristics, in which Genetic Programming is the most widely used methodology. A detailed discussion is presented including the steps needed to apply this technique, some representative case studies, a literature review of related work, and a discussion of relevant issues. Our aim is to convey the exciting potential of this innovative approach for automating the heuristic design process.
Edmund K. Burke, Mathew R. Hyde, Graham Kendall, Gabriela Ochoa, Ender Ozcan, John R. Woodward
Adaptive Constraint Satisfaction: The Quickest First Principle
Abstract
The choice of a particular algorithm for solving a given class of constraint satisfaction problems is often confused by exceptional behaviour of algorithms. One method of reducing the impact of this exceptional behaviour is to adopt an adaptive philosophy to constraint satisfaction problem solving. In this report we describe one such adaptive algorithm, based on the principle of chaining. It is designed to avoid the phenomenon of exceptionally hard problem instances. Our algorithm shows how the speed of more naïve algorithms can be utilised, safe in the knowledge that the exceptional behaviour can be bounded. Our work clearly demonstrates the potential benefits of the adaptive approach and opens a new front of research for the constraint satisfaction community.
James E. Borrett, Edward P. K. Tsang

Multi-agent Systems

Frontmatter
Collaborative Computational Intelligence in Economics
Abstract
In this chapter, we review the use of the idea of collaborative computational intelligence in economics. We examine two kinds of collaboration: first, collaboration within the realm of computational intelligence, and, second, collaboration beyond it. These two forms of collaboration have had a significant impact upon the current state of economics. First, they enhance and enrich the heterogeneous-agent research paradigm in economics, alternatively known as agent-based economics. Second, they help integrate the use of human agents and software agents in various forms, which in turn has tied together agent-based economics and experimental economics. The marriage of the two points toward the future of economic research. Third, various hybridizations of the CI tools facilitate the development of more comprehensive treatments of the economic and financial uncertainties in terms of both their quantitative and qualitative aspects.
Shu-Heng Chen
IMMUNE: A Collaborating Environment for Complex System Design
Abstract
Engineering complex systems is a testing paradigm for engineers of this century. Integration of complex systems design is accomplished through innovation, and autonomy of design agents has been recognized as the main contributor to novelty and innovation of solutions. Unfortunately, agents’ autonomy can make the design environment chaotic and inefficient. To address this dilemma of distributed versus central control in complex system design, decision support systems that enable robust collaboration amongst many design agents from different disciplines are required. The particular characteristics of such decision support systems must include immunity to the catastrophic failures and sudden collapse that are usually observed in complex systems. This chapter lays the conceptual framework for IMMUNE as a robust collaborating design environment. In this environment the complexity arising from autonomous collaborations is sensed and monitored by a central unit. The collaboration complexity, which is the collective problem solving capability of the design system, is compared to the complexity of the problem estimated from simulation-based techniques. In this regard, IMMUNE is an artificial immune system that balances the complexity of the environment and thereby increases the possibility of achieving innovative and integral solutions to complex design problems. Agents in IMMUNE are adaptive and can change their negotiation strategy, thereby contributing to the overall capability of the design system to maintain its problem solving complexity.
Mahmoud Efatmaneshnik, Carl Reidsema
Bayesian Learning for Cooperation in Multi-Agent Systems
Abstract
Multi-agent systems draw together a number of significant trends in modern technology: ubiquity, decentralisation, openness, dynamism and uncertainty. As work in these fields develops, such systems face increasing challenges. Two particular challenges are decision making in uncertain and partially-observable environments, and coordination with other agents in such environments. Although uncertainty and coordination have been tackled as separate problems, formal models for an integrated approach are typically restricted to simple classes of problem and are not scalable to problems with many agents and millions of states. We improve on these approaches by extending a principled Bayesian model into more challenging domains, using heuristics and exploiting domain knowledge in order to make approximate solutions tractable. We show the effectiveness of our approach applied to an ambulance coordination problem inspired by the Robocup Rescue system.
Mair Allen-Williams, Nicholas R. Jennings
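The “principled Bayesian model” mentioned in the abstract rests on standard belief updating over a hidden state given noisy observations. A minimal discrete sketch of one such update step follows; the state names, observation, and likelihood values are invented purely for illustration:

```python
def bayes_update(prior, likelihood, observation):
    """One Bayesian belief update over a discrete hidden state.

    prior:       dict mapping state -> prior probability.
    likelihood:  dict mapping (state, observation) -> P(observation | state).
    Returns the normalized posterior as a dict mapping state -> probability.
    """
    # Unnormalized posterior: prior weighted by observation likelihood.
    unnorm = {s: prior[s] * likelihood[(s, observation)] for s in prior}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}
```

In a coordination setting an agent would run such an update each time step over, say, the hidden status of a teammate or a disaster site, then act on the resulting belief; the chapter's contribution lies in making this tractable at scale with heuristics and domain knowledge.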
Collaborative Agents for Complex Problems Solving
Abstract
Multi-Agent Systems (MAS) are particularly well suited to complex problem solving, whether the MAS comprises cooperative or competitive (self-interested) agents. In this context we discuss both dynamic team formation among the former, as well as partner selection strategies with the latter type of agent. One-shot, long-term, and (fuzzy-based) flexible formation strategies are compared and contrasted, and experiments described which compare these strategies along dimensions of Agent Search Time and Award Distribution Situation. We find that the flexible formation strategy is best suited to self-interested agents in open, dynamic environments. Agent negotiation among competitive agents is also discussed, in the context of collaborative problem solving. We present a modification to Zhang’s Dual Concern Model which enables agents to make reasonable estimates of potential partner behavior during negotiation. Lastly, we introduce a Quadratic Regression approach to partner behavior analysis/estimation, which overcomes some of the limitations of Machine Learning-based approaches.
Minjie Zhang, Quan Bai, Fenghui Ren, John Fulcher

Computer Vision

Frontmatter
Predicting Trait Impressions of Faces Using Classifier Ensembles
Abstract
In the experiments presented in this chapter, single classifier systems and ensembles are trained to detect the social meanings people perceive in facial morphology. Exploring machine models of people’s impressions of faces has value in the fields of social psychology and human-computer interaction. Our first concern in designing this study was developing a sound ground truth for this problem domain. We accomplished this by collecting a large number of faces that exhibited strong human consensus in a comprehensive set of trait categories. Several single classifier systems and ensemble systems composed of Levenberg-Marquardt neural networks using different methods of collaboration were then trained to match the human perception of the faces in the six trait dimensions of intelligence, maturity, warmth, sociality, dominance, and trustworthiness. Our results show that machine learning methods employing ensembles are as capable as most individual human beings are in their ability to predict the social impressions certain faces make on the average human observer. Single classifier systems did not match human performance as well as the ensembles did. Included in this chapter is a tutorial, suitable for the novice, on the single classifier systems and collaborative methods used in the experiments reported in the study.
Sheryl Brahnam, Loris Nanni
The Analysis of Crowd Dynamics: From Observations to Modelling
Abstract
The crowd is a familiar phenomenon studied in a variety of research disciplines including sociology, civil engineering and physics. Over the last two decades computer vision has become increasingly interested in studying crowds and their dynamics, both because the phenomenon is of great scientific interest and offers new computational challenges, and because of a rapid increase in video surveillance technology deployed in public and private spaces. In this chapter computer vision techniques, combined with statistical methods and neural networks, are used to automatically observe, measure and learn crowd dynamics. The problem is studied in order to offer methods to measure crowd dynamics and model the complex movements of a crowd. Refined matching of local descriptors is used to measure crowd motion, while statistical analysis and a kind of neural network, the self-organizing map, are employed to learn models of crowd dynamics.
B. Zhan, P. Remagnino, D. N. Monekosso, S. Velastin

Communications for CI Systems

Frontmatter
Computational Intelligence for the Collaborative Identification of Distributed Systems
Abstract
In this chapter, on the basis of a rigorous mathematical formulation, a new algorithm for the identification of distributed systems by large scale collaborative sensor networks is suggested. The algorithm extends a KLT-based identification approach to a decentralized setting, using the distributed Karhunen-Loève transform (DKLT) recently proposed by Gastpar et al. The proposed approach permits an arbitrarily accurate identification since it exploits both the asymptotic properties of convergence of DKLT and the universal approximation capabilities of radial basis function neural networks. The effectiveness of the proposed approach is directly related to the reduction of total distortion in the compression performed by the single nodes of the sensor network, to the identification accuracy, as well as to the low computational complexity of the fusion algorithm performed by the fusion center to regulate the intelligent cooperation of the nodes. Some identification experiments, carried out on systems whose behavior is described by partial differential equations in 2-D domains with random excitations, confirm the validity of this approach. It is worth noting the generality of the algorithm, which can be applied in a wide range of applications without limitations on the type of physical phenomena, boundary conditions, sensor network used, and number of its nodes.
Giorgio Biagetti, Paolo Crippa, Francesco Gianfelici, Claudio Turchetti
Collaboration at the Basis of Sharing Focused Information: The Opportunistic Networks
Abstract
There is no doubt that the sharing of information lies at the basis of any collaborative framework. While this is the keen contrivance of social computation paradigms such as ant colonies and neural networks, it also represented the Achilles’ heel of many parallel computation protocols of the eighties. In addition to the computational overhead due to the transfer of information in these protocols, a modern drawback is constituted by intrusions in the communication channels, e.g. spamming in e-mails, injection of malicious programming code, or in general attacks on the data communication. While swarm intelligence and connectionist paradigms overcome these drawbacks with fault-tolerant broadcasting of data (any agent has massive access to any message reaching it), in this chapter we discuss, within the paradigm of opportunistic networks, an automatically selective communication protocol particularly suited to setting up robust collaboration within a very local community of agents. Like medieval monks who escaped world chaos and violence by taking refuge in small and protected communities, modern people may escape the information avalanche by forming virtual communities that do not relinquish most ICT (Information and Communication Technology) benefits. A communication middleware to obtain this result is represented by opportunistic networks.
Bruno Apolloni, Guglielmo Apolloni, Simone Bassis, Gian Luca Galliani, Gianpaolo Rossi

Artificial Immune Systems

Frontmatter
Exploiting Collaborations in the Immune System: The Future of Artificial Immune Systems
Abstract
Despite a steady increase in the application of algorithms inspired by the natural immune system to a variety of domains over the previous decade, we argue that the field of Artificial Immune Systems has yet to achieve its full potential. We suggest that two factors contribute to this; firstly, that the metaphor has been applied to insufficiently complex domains, and secondly, that isolated mechanisms that occur in the immune system have been used naïvely and out of context. We outline the properties of domains which may benefit from an immune approach and then describe a number of immune mechanisms and perspectives that are ripe for exploration from a computational perspective. In each of these mechanisms collaboration plays a key role. The concepts are illustrated using two exemplars of practical applications of selected mechanisms from the domains of machine learning and wireless sensor networks. The article suggests that exploiting the collaborations that occur between actors and signals in the immune system will lead to a new generation of engineered systems that are fit for purpose in the same way as their biological counterparts.
Emma Hart, Chris McEwan, Despina Davoudani

Parallel Evolutionary Algorithms

Frontmatter
Evolutionary Computation: Centralized, Parallel or Collaborative
Abstract
This chapter discusses the nature and the importance of spatial interactions in evolutionary computation. The current state of evolution theories is discussed. An interaction model is investigated which we have called Darwin’s continent-island cycle conjecture. Darwin argued that such a cycle is the most efficient for successful evolution. This bold conjecture has not yet been noticed in science. We confirm Darwin’s conjecture using an evolutionary game based on the iterated prisoner’s dilemma. A different interaction scheme, called the stepping-stone model is used by the Parallel Genetic Algorithm PGA. The PGA is used to solve combinatorial optimization problems. Then the Breeder Genetic Algorithm BGA used for global optimization of continuous functions is described. The BGA uses competition between subpopulations applying different strategies. This kind of interaction is found in ecological systems.
Heinz Mühlenbein

CI for Clustering and Classification

Frontmatter
Fuzzy Clustering of Likelihood Curves for Finding Interesting Patterns in Expression Profiles
Abstract
Peptides derived from proteins are routinely analysed in so-called bottom-up proteome studies to determine the amounts of corresponding proteins. Such studies easily sequence and analyse thousands of peptides per hour by the combination of liquid chromatography and mass spectrometry instruments (LC-MS). However, quantified peptides belonging to the same protein do not necessarily exhibit the same regulatory information in all cases. Several causes can produce these regulatory inconsistencies at the peptide level. Quantitative data might be simply influenced by specific properties of the analytical procedure. However, it can also indicate meaningful biological processes such as the post-translational modification (PTM) of amino acids regulated in individual protein regions. This article describes a fuzzy clustering approach allowing the automatic detection of regulatory peptide clusters within individual proteins. The approach utilises likelihood curves to summarise the regulatory information of each peptide, based on a noise model of the used analytical workflow. The shape of these curves directly correlates with both the regulatory information and the underlying data quality, serving as a representative starting point for fuzzy clustering of peptide data assigned to one protein.
Claudia Hundertmark, Lothar Jänsch, Frank Klawonn
A Hybrid Rule-Induction/Likelihood-Ratio Based Approach for Predicting Protein-Protein Interactions
Abstract
We propose a new hybrid data mining method for predicting protein-protein interactions combining Likelihood-Ratio with rule induction algorithms. In essence, the new method consists of using a rule induction algorithm to discover rules representing partitions of the data, and then the discovered rules are interpreted as “bins” which are used to compute likelihood ratios. This new method is applied to the prediction of protein-protein interactions in the Saccharomyces Cerevisiae genome, using predictive genomic features in an integrated scheme. The results show that the new hybrid method outperforms a pure likelihood ratio based approach.
Mudassar Iqbal, Alex A. Freitas, Colin G. Johnson
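The rules-as-bins idea described above can be sketched simply: each discovered rule partitions the examples it covers, and the positive/negative counts within that partition yield a likelihood ratio. The following is a hedged sketch with add-one smoothing; the bin counts and the smoothing choice are illustrative assumptions, not details from the chapter:

```python
def likelihood_ratios(bins):
    """Compute a likelihood ratio per 'bin' (rule partition).

    bins: dict mapping bin id -> (n_pos, n_neg), the counts of positive
          (e.g. interacting) and negative examples the rule covers.
    Returns dict of bin id -> LR = P(bin | pos) / P(bin | neg),
    with add-one smoothing to avoid division by zero.
    """
    total_pos = sum(p for p, _ in bins.values())
    total_neg = sum(n for _, n in bins.values())
    k = len(bins)  # number of bins, for the smoothing denominator
    return {
        b: ((p + 1) / (total_pos + k)) / ((n + 1) / (total_neg + k))
        for b, (p, n) in bins.items()
    }
```

A ratio well above 1 marks a rule whose coverage is evidence for interaction, below 1 evidence against; per-feature ratios can then be multiplied (under an independence assumption) into a combined score for a protein pair.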
Improvements in Flock-Based Collaborative Clustering Algorithms
Abstract
Inspiration from nature has driven many creative solutions to challenging real life problems. Many optimization methods, in particular clustering algorithms, have been inspired by such natural phenomena as neural systems and networks, natural evolution, the immune system, and lately swarms and colonies. In this paper, we make a brief survey of swarm intelligence clustering algorithms and focus on the flock-of-agents-based clustering and data visualization algorithm (FClust). A few limitations of FClust are then discussed with proposed improvements. We thus propose the FClust-annealing algorithm that decreases the number of iterations needed to converge and improves the quality of resulting clusters. We also propose a (K-means + FClust) hybrid algorithm which decreases the complexity of FClust from quadratic to linear, with further improvements in the cluster quality. Experiments on both artificial and real data illustrate the workings of FClust and the advantages of our proposed variants.
Esin Saka, Olfa Nasraoui
Combining Statistics and Case-Based Reasoning for Medical Research
Abstract
In medicine many exceptions occur. In medical practice, and in knowledge-based systems too, it is necessary to consider them and to deal with them appropriately. In medical studies and in research, exceptions should be explained. We present a system, called ISOR, that helps to explain cases that do not fit a theoretical hypothesis. Starting points are situations where neither a well-developed theory nor reliable knowledge nor, at the beginning, a case base is available. So, instead of theoretical knowledge and intelligent experience, just some theoretical hypothesis and a set of measurements are given. In this chapter, we focus on the application of the ISOR system to the hypothesis that a specific exercise program improves the physical condition of dialysis patients. Additionally, for this application a method to restore missing data is presented.
Rainer Schmidt, Olga Vorobieva
Collaborative and Experience-Consistent Schemes of System Modelling in Computational Intelligence
Abstract
Computational Intelligence (CI) is commonly regarded as a synergistic framework within which one can analyze and design (synthesize) intelligent systems. The methodology of CI has been firmly established through the unified and highly collaborative usage of the underlying information technologies of fuzzy sets (and granular computing, in general), neural networks, and evolutionary optimization. It is the collaboration which makes the CI models highly versatile, computationally attractive and very much user-oriented. While this facet of functional collaboration between these three key information technologies has been broadly recognized and acknowledged within the research community, there is also another setting where the collaboration aspects start to play an important role. They are inherently associated with the nature of intelligent systems that quite often become distributed and whose interactions come with a suite of various mechanisms of collaboration. In this study, we focus on collaborative Computational Intelligence which dwells upon numerous forms of collaborative linkages in distributed intelligent systems. In the context of intelligent systems we are usually faced with various sources of data in terms of their quality, granularity and origin. We may encounter large quantities of numeric data coming from noisy sensors, linguistic findings conveyed by rules and associations, and perceptions offered by human observers. Given the enormous diversity of the sources of data and knowledge, the ensuing quality of data deserves careful attention. Knowledge reuse and knowledge sharing have been playing a vital role in knowledge management, a role which has become amplified over time as we encounter information systems of increasing complexity and of a distributed nature. Collaborative CI is aimed at the effective exchange of locally available knowledge. The exchange is commonly accomplished by interacting at the level of information granules rather than numeric quantities. As a detailed case study we discuss experience-consistent modeling of CI constructs and raise an issue of knowledge reuse in the setting of constructs of Computational Intelligence. One could note that the knowledge-based component (viz. previously built CI constructs) can serve as a certain form of the regularization mechanism encountered quite often in various modeling platforms. The optimization procedure applied there helps us strike a sound balance between the data-driven and knowledge-driven evidence. We introduce a general scheme of optimization and show an effective way of reusing knowledge. In the sequel, we demonstrate the development details with regard to fuzzy rule-based models and neural networks.
Witold Pedrycz
Backmatter
Metadata
Title
Computational Intelligence
Edited by
Christine L. Mumford
Lakhmi C. Jain
Copyright Year
2009
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-01799-5
Print ISBN
978-3-642-01798-8
DOI
https://doi.org/10.1007/978-3-642-01799-5