
2007 | Book

Challenges for Computational Intelligence

Edited by: Prof. Włodzisław Duch, Prof. Jacek Mańdziuk

Publisher: Springer Berlin Heidelberg

Book series: Studies in Computational Intelligence


About this book

In the year 1900, at the International Congress of Mathematicians in Paris, David Hilbert delivered what is now considered the most important talk ever given in the history of mathematics, proposing 23 major problems worth working on in the future. One hundred years later the impact of this talk is still strong: some problems have been solved, new problems have been added, but the direction once set - identify the most important problems and focus on them - remains valid.

Computational Intelligence (CI) is used as a name to cover many existing branches of science, with artificial neural networks, fuzzy systems and evolutionary computation forming its core. In recent years CI has been extended by adding many other subdisciplines, and it became quite obvious that this new field also requires a series of challenging problems that will give it a sense of direction. Without clear goals and yardsticks to measure progress along the way, many research efforts are wasted.

This book, written by top experts in CI, provides such clear directions and the much-needed focus on the most important and challenging research issues, showing a roadmap for achieving ambitious goals.

Table of Contents

Frontmatter
What Is Computational Intelligence and Where Is It Going?
Summary
What is Computational Intelligence (CI) and what are its relations with Artificial Intelligence (AI)? A brief survey of the scope of CI journals and books with “computational intelligence” in their title shows that at present it is an umbrella for three core technologies (neural, fuzzy and evolutionary), their applications, and selected fashionable pattern recognition methods. At present CI has no comprehensive foundations and is more a bag of tricks than a solid branch of science. A change of focus from methods to challenging problems is advocated, with CI defined as a part of computer and engineering sciences devoted to the solution of non-algorithmizable problems. In this view AI is a part of CI focused on problems related to higher cognitive functions, while the rest of the CI community works on problems related to perception and control, or lower cognitive functions. Grand challenges on both sides of this spectrum are addressed.
Włodzisław Duch
New Millennium AI and the Convergence of History
Summary
Artificial Intelligence (AI) has recently become a real formal science: the new millennium brought the first mathematically sound, asymptotically optimal, universal problem solvers, providing a new, rigorous foundation for the previously largely heuristic field of General AI and embedded agents. At the same time there has been rapid progress in practical methods for learning true sequence-processing programs, as opposed to traditional methods limited to stationary pattern association. Here we will briefly review some of the new results, and speculate about future developments, pointing out that the time intervals between the most notable events in over 40,000 years or 2^9 lifetimes of human history have shrunk exponentially, apparently converging to zero within the next few decades. Or is this impression just a by-product of the way humans allocate memory space to past events?
Jürgen Schmidhuber
The Challenges of Building Computational Cognitive Architectures
Summary
Work in the area of computational cognitive modeling explores the essence of cognition by developing a detailed understanding of it through the specification of computational models. In this enterprise, a cognitive architecture is a domain-generic computational cognitive model that may be used for a broad, multiple-domain analysis of cognition. It embodies generic descriptions of cognition in computer algorithms and programs. Building cognitive architectures is a difficult task and a serious challenge to the fields of cognitive science, artificial intelligence, and computational intelligence. In this article, issues and challenges in developing cognitive architectures are discussed, examples of cognitive architectures are given, and future directions are outlined.
Ron Sun
Programming a Parallel Computer: The Ersatz Brain Project
Summary
There is a complex relationship between the architecture of a computer, the software it needs to run, and the tasks it performs. The most difficult aspect of building a brain-like computer may not be in its construction, but in its use: How can it be programmed? What can it do well? What does it do poorly? In the history of computers, software development has proved far more difficult and far slower than straightforward hardware development. There is no reason to expect a brain-like computer to be any different. This chapter speculates about its basic design, provides examples of “programming”, and suggests how intermediate-level structures could arise in a sparsely connected, massively parallel, brain-like computer using sparse data representations.
James A. Anderson, Paul Allopenna, Gerald S. Guralnik, David Sheinberg, John A. Santini Jr., Socrates Dimitriadis, Benjamin B. Machta, Brian T. Merritt
The Human Brain as a Hierarchical Intelligent Control System
Summary
An approach to intelligence and reasoning is developed for the brain. The need for such an approach to Computational Intelligence is argued for on ethical grounds. The paper then considers various components of information processing in the brain, choosing attention, memory and reward as key components (language cannot be handled in the space available). How these could be used to achieve cognitive faculties and ultimately reasoning is then discussed, and the paper concludes with a brief analysis of reasoning tasks, including the amazing powers of Betty the Crow.
JG Taylor
Artificial Brain and OfficeMate TR based on Brain Information Processing Mechanism
Summary
The Korean Brain Neuroinformatics Research Program has dual goals: to understand the information processing mechanisms in the brain and to develop intelligent machines based on those mechanisms. The basic form of the intelligent machine is called the Artificial Brain, which is capable of conducting essential human functions such as vision, audition, inference, and emergent behavior. Through proactive learning from humans and the environment, the Artificial Brain may develop itself into a more sophisticated entity. The OfficeMate will be the first demonstration of these intelligent entities and will help human office workers with scheduling, telephone reception, document preparation, etc. The research scopes for the Artificial Brain and OfficeMate are presented with some recent results.
Soo-Young Lee
Natural Intelligence and Artificial Intelligence: Bridging the Gap between Neurons and Neuro-Imaging to Understand Intelligent Behaviour
Summary
The brain has long been a source of inspiration for artificial intelligence. With the advance of modern neuro-imaging techniques we have the opportunity to peek into the active brain in normal human subjects and to measure its activity. At present, there is a large gap in knowledge linking results about neuronal architecture, the activity of single neurons, neuro-imaging studies and human cognitive performance. Bridging this gap is necessary before we can understand the neuronal encoding of human cognition and consciousness, and it opens the possibility of Brain-Computer Interfaces (BCI). BCI applications aim to interpret neuronal activity in terms of action or intention for action and to use these signals to control external devices, for example to restore motor function after paralysis in stroke patients. Before we are able to use neuronal activity for BCI applications in an efficient and reliable way, advanced pattern recognition algorithms have to be developed to classify the noisy signals from the brain. The main challenge for the future will be to understand neuronal information processing to such an extent that we can interpret neuronal activity reliably in terms of the cognitive activity of human subjects. This will provide insight into the cognitive abilities of humans and will help to bridge the gap between natural and artificial intelligence.
Stan Gielen
Computational Scene Analysis
Summary
A remarkable achievement of the perceptual system is its scene analysis capability, which involves two basic perceptual processes: the segmentation of a scene into a set of coherent patterns (objects) and the recognition of memorized ones. Although the perceptual system performs scene analysis with apparent ease, computational scene analysis remains a tremendous challenge as foreseen by Frank Rosenblatt. This chapter discusses scene analysis in the field of computational intelligence, particularly visual and auditory scene analysis. The chapter first addresses the question of the goal of computational scene analysis. A main reason why scene analysis is difficult in computational intelligence is the binding problem, which refers to how a collection of features comprising an object in a scene is represented in a neural network. In this context, temporal correlation theory is introduced as a biologically plausible representation for addressing the binding problem. The LEGION network lays a computational foundation for oscillatory correlation, which is a special form of temporal correlation. Recent results on visual and auditory scene analysis are described in the oscillatory correlation framework, with emphasis on real-world scenes. Also discussed are the issues of attention, feature-based versus model-based analysis, and representation versus learning. Finally, the chapter points out that the time dimension and David Marr's framework for understanding perception are essential for computational scene analysis.
DeLiang Wang
Brain-, Gene-, and Quantum Inspired Computational Intelligence: Challenges and Opportunities
Summary
This chapter discusses opportunities and challenges for the creation of methods of computational intelligence (CI), and more specifically artificial neural networks (ANN), inspired by principles at different levels of information processing in the brain: cognitive, neuronal, genetic, and quantum, and mainly the issues related to the integration of these principles into more powerful and accurate CI methods. It is demonstrated how some of these methods can be applied to model biological processes and to improve our understanding in the subject area, while others are generic CI methods applicable to challenging generic AI problems. The chapter first offers a brief presentation of some principles of information processing at different levels of the brain, and then presents brain-inspired, gene-inspired and quantum-inspired CI. The main contribution of the chapter, though, is the introduction of methods inspired by the integration of principles from several levels of information processing, namely: (1) a computational neurogenetic model that combines in one model gene information related to spiking neuronal activities; (2) a general framework of a quantum spiking neural network model; (3) a general framework of a quantum computational neuro-genetic model. Many open questions and challenges are discussed, along with directions for further research.
Nikola Kasabov
The Science of Pattern Recognition. Achievements and Perspectives
Summary
Automatic pattern recognition is usually considered as an engineering area which focuses on the development and evaluation of systems that imitate or assist humans in their ability to recognize patterns. It may, however, also be considered as a science that studies the faculty of human beings (and possibly other biological systems) to discover, distinguish and characterize patterns in their environment and, accordingly, identify new observations. The engineering approach to pattern recognition is in this view an attempt to build systems that simulate this phenomenon. In doing so, scientific understanding is gained of what is needed in order to recognize patterns in general.
Robert P. W. Duin, Elżbieta Pękalska
Towards Comprehensive Foundations of Computational Intelligence
Summary
Although computational intelligence (CI) covers a vast variety of different methods, it still lacks an integrative theory. Several proposals for CI foundations are discussed: computing and cognition as compression, meta-learning as search in the space of data models, (dis)similarity based methods providing a framework for such meta-learning, and a more general approach based on chains of transformations. Many useful transformations that extract information from features are discussed. Heterogeneous adaptive systems are presented as a particular example of transformation-based systems, and the goal of learning is redefined to facilitate the creation of simpler data models. The need to understand data structures leads to techniques for logical and prototype-based rule extraction, and to the generation of multiple alternative models, while the need to increase the predictive power of adaptive models leads to committees of competent models. Learning from partial observations is a natural extension towards reasoning based on perceptions, and an approach to intuitive solving of such problems is presented. Throughout the paper neurocognitive inspirations are frequently used and are especially important in modeling the higher cognitive functions. Promising directions such as liquid and laminar computing are identified and many open problems are presented.
Włodzisław Duch
Knowledge-Based Clustering in Computational Intelligence
Summary
Clustering is commonly regarded as a synonym of unsupervised learning aimed at the discovery of structure in high-dimensional data. With the evident plethora of existing algorithms, the area offers an outstanding diversity of possible approaches along with their underlying features and potential applications. With the inclusion of fuzzy sets, fuzzy clustering became an integral component of Computational Intelligence (CI) and is now broadly exploited in fuzzy modeling, fuzzy control, pattern recognition, and exploratory data analysis. Many pursuits of CI are human-centric in the sense that they are either initiated or driven by some domain knowledge, or the results generated by the CI constructs are made easily interpretable. In this sense, to follow the tendency of human-centricity so profoundly visible in the CI domain, the very concept of fuzzy clustering needs to be carefully revisited. We propose a certain paradigm shift that brings us to the idea of knowledge-based clustering, in which the development of information granules (fuzzy sets) is governed by the use of data as well as domain knowledge supplied through interaction with developers, users and experts. In this study, we elaborate on the concepts and algorithms of knowledge-based clustering by considering the well-known scheme of Fuzzy C-Means (FCM) and viewing it as an operational model through which a number of essential developments can be easily explained. The fundamental concepts discussed here involve clustering with domain knowledge articulated through partial supervision and proximity-based knowledge hints.
Witold Pedrycz
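For readers unfamiliar with the operational model the summary above refers to, the following is a minimal sketch of the standard (baseline) Fuzzy C-Means algorithm in Python. It is illustrative only and does not implement the knowledge-based extensions the chapter proposes; those would typically enter as additional terms in the objective that the two alternating update steps below minimize.

```python
# Illustrative sketch only: baseline Fuzzy C-Means with the usual alternating
# prototype/membership updates. The knowledge-based extensions the chapter
# proposes (partial supervision, proximity hints) are NOT implemented here.
import numpy as np

def fuzzy_c_means(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """X: (n_samples, n_features) data, c: number of clusters, m > 1: fuzzifier."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))                     # random fuzzy partition matrix
    U /= U.sum(axis=1, keepdims=True)          # memberships of each sample sum to 1
    for _ in range(max_iter):
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]                 # prototype update
        D = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2)
        D = np.fmax(D, 1e-12)                                   # guard against zero distances
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        U_new = 1.0 / ((D[:, :, None] / D[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
        if np.abs(U_new - U).max() < tol:
            return U_new, V
        U = U_new
    return U, V

# Toy usage: two Gaussian blobs, two clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(2.0, 0.3, (50, 2))])
U, V = fuzzy_c_means(X, c=2)
print("prototypes:\n", V.round(2))
```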
Generalization in Learning from Examples
Summary
The capability of generalization in learning from examples can be modeled using regularization, which has been developed as a tool for improving the stability of solutions of inverse problems. The theory of inverse problems has been developed to solve various tasks in applied science such as acoustics, geophysics and computerized tomography. Such problems are typically described by integral operators. It is shown that learning from examples can be reformulated as an inverse problem defined by an evaluation operator. This reformulation allows one to characterize optimal solutions of learning tasks and to design learning algorithms based on numerical solutions of systems of linear equations.
Věra Kůrková
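As a concrete reference point for the inverse-problem view in the summary above, here is a standard formulation in generic notation (a sketch; it may differ from the chapter's own notation and assumptions). The evaluation operator takes a hypothesis to its values at the sample points, and Tikhonov regularization stabilizes the resulting ill-posed problem.

```latex
% Generic sketch: learning from examples as a regularized inverse problem.
% Evaluation operator on a reproducing kernel Hilbert space H_K:
%   L_u f = (f(x_1), ..., f(x_m)),  so learning asks for f with L_u f close to y.
\[
  f_\gamma \;=\; \arg\min_{f \in \mathcal{H}_K}
    \frac{1}{m}\,\lVert L_u f - y \rVert^2 \;+\; \gamma\,\lVert f \rVert_K^2,
  \qquad \gamma > 0 .
\]
% By the representer theorem the minimizer is a kernel expansion
%   f_\gamma = \sum_{i=1}^{m} c_i\, K(x_i, \cdot),
% whose coefficient vector c solves the linear system
%   (\mathbf{K} + \gamma m\, I)\, c = y ,
% where \mathbf{K} is the Gram matrix with entries K(x_i, x_j), i.e. the
% "systems of linear equations" mentioned in the summary.
```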
A Trend on Regularization and Model Selection in Statistical Learning: A Bayesian Ying Yang Learning Perspective
Summary
In this chapter, advances in regularization and model selection in statistical learning are summarized, and a trend is discussed from a Bayesian Ying Yang learning perspective. After briefly introducing the Bayesian Ying-Yang system and best harmony learning, not only are its advantages of automatic model selection and of integrating regularization with model selection addressed, but its differences from and relations to several existing typical learning methods are also discussed and elaborated. Taking the tasks of Gaussian mixtures, local subspaces and local factor analysis as examples, not only are detailed model selection criteria given, but a general learning procedure is also provided, which unifies the adaptive algorithms featuring automatic model selection for these tasks. Finally, a trend in studies on model selection (i.e., automatic model selection during parametric learning) is further elaborated. Moreover, several theoretical issues for large sample sizes and a number of challenges for small sample sizes are presented.
Lei Xu
Computational Intelligence in Mind Games
Summary
The chapter considers recent achievements and perspectives of Computational Intelligence (CI) applied to mind games. Several notable examples of unguided, autonomous CI learning systems are presented and discussed. Based on the advantages and limitations of existing approaches, a list of challenging issues and open problems in the area of intelligent game playing is proposed and motivated.
It is generally concluded in the paper that the ultimate goal of CI in mind game research is the ability to mimic the human approach to game playing in all its major aspects, including learning methods (learning from scratch, multitask learning, unsupervised learning, pattern-based knowledge acquisition) as well as reasoning and decision making (efficient position estimation, abstraction and generalization of game features, autonomous development of evaluation functions, effective preordering of moves and selective, contextual search).
Jacek Mańdziuk
Computer Go: A Grand Challenge to AI
Summary
The oriental game of Go remains among the most tantalizing unconquered challenges in artificial intelligence after IBM's DEEP BLUE beat the world Chess champion in 1997. Its high branching factor rules out the conventional tree-search approach, and long-range spatiotemporal interactions make position evaluation extremely difficult. Thus, Go attracts researchers from diverse fields who are attempting to understand how computers can represent human playing and win the game against humans. Numerous publications already exist on this topic, with different motivations and a variety of application contexts. This chapter surveys methods and related work in computer Go published from 1970 until now, and offers a basic overview for future study. We also present our attempts and simulation results in building a game engine that uses no built-in game knowledge, based on a novel hybrid evolutionary computation algorithm, for the game of Capture Go.
Xindi Cai, Donald C. Wunsch II
Noisy Chaotic Neural Networks for Combinatorial Optimization
Summary
In this chapter, we review the virtues and limitations of the Hopfield neural network for tackling NP-hard combinatorial optimization problems (COPs). We then discuss two new neural network models based on the noisy chaotic neural network and apply the two methods to two different NP-hard COPs in communication networks. The simulation results show that our methods are superior to previous methods in solution quality. We also point out several future challenges and possible directions in this domain.
Lipo Wang, Haixiang Shi
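To make the mapping concrete, here is a minimal, illustrative sketch (not the chapter's noisy chaotic model) of how a plain discrete Hopfield network can attack one NP-hard COP, Max-Cut: the problem is encoded in the network weights so that the energy minima correspond to good cuts.

```python
# Illustrative sketch only: a plain discrete Hopfield network applied to one
# NP-hard COP (Max-Cut). The noisy chaotic dynamics discussed in the chapter
# are not reproduced here.
import numpy as np

def hopfield_max_cut(w, n_sweeps=50, seed=0):
    """w: symmetric non-negative (n, n) weight matrix with zero diagonal."""
    rng = np.random.default_rng(seed)
    n = w.shape[0]
    # Maximizing the cut sum_{i<j} w_ij (1 - s_i s_j) / 2 over s_i in {-1, +1}
    # equals minimizing the Hopfield energy E = -1/2 s^T W s with W = -w.
    W = -w
    s = rng.choice([-1.0, 1.0], size=n)        # random initial state
    for _ in range(n_sweeps):
        changed = False
        for i in rng.permutation(n):           # asynchronous neuron updates
            new_si = 1.0 if W[i] @ s >= 0 else -1.0
            if new_si != s[i]:
                s[i], changed = new_si, True
        if not changed:                        # fixed point = local energy minimum
            break
    cut = 0.25 * np.sum(w * (1.0 - np.outer(s, s)))
    return s, cut

# Toy usage: a random weighted graph on 10 nodes.
rng = np.random.default_rng(1)
w = rng.random((10, 10)); w = np.triu(w, 1); w = w + w.T
s, cut = hopfield_max_cut(w)
print("partition:", s.astype(int), "cut weight:", round(cut, 3))
```

Broadly speaking, noisy and chaotic variants such as those the chapter discusses replace this deterministic sign update with richer dynamics in order to escape the poor local minima that plain energy descent can get stuck in.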
Metadata
Title
Challenges for Computational Intelligence
Edited by
Prof. Włodzisław Duch
Prof. Jacek Mańdziuk
Copyright year
2007
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-71984-7
Print ISBN
978-3-540-71983-0
DOI
https://doi.org/10.1007/978-3-540-71984-7
