
2009 | Book

Foundations of Computational Intelligence Volume 2

Approximate Reasoning

edited by: Aboul-Ella Hassanien, Ajith Abraham, Francisco Herrera

Publisher: Springer Berlin Heidelberg

Book series: Studies in Computational Intelligence


About this book

Foundations of Computational Intelligence Volume 2: Approximate Reasoning: Theoretical Foundations and Applications

Human reasoning usually is very approximate and involves various types of uncertainties. Approximate reasoning is the computational modelling of any part of the process used by humans to reason about natural phenomena or to solve real-world problems. The scope of this book includes fuzzy sets, Dempster-Shafer theory, multi-valued logic, probability, random sets, rough sets, near sets, and hybrid intelligent systems. Besides research articles and expository papers on the theory and algorithms of approximate reasoning, papers on numerical experiments and real-world applications were also encouraged. This volume comprises 12 chapters, including an overview chapter providing an up-to-date account of the state of the art in applying Computational Intelligence techniques to approximate reasoning. The volume is divided into two parts:

Part I: Approximate Reasoning – Theoretical Foundations
Part II: Approximate Reasoning – Success Stories and Real World Applications

Part I, on theoretical foundations, contains four chapters that describe several approaches to fuzzy and paraconsistent annotated logic approximate reasoning. Chapter 1, “Fuzzy Sets, Near Sets, and Rough Sets for Your Computational Intelligence Toolbox” by Peters, considers how a user might utilize fuzzy sets, near sets, and rough sets, taken separately or taken together in hybridizations, as part of a computational intelligence toolbox. Chapter 2 turns to multi-criteria decision making, where it is necessary to aggregate (combine) utility values corresponding to several criteria (parameters).

Table of contents

Frontmatter

Approximate Reasoning - Theoretical Foundations and Applications

Frontmatter

Approximate Reasoning - Theoretical Foundations

Fuzzy Sets, Near Sets, and Rough Sets for Your Computational Intelligence Toolbox
Summary
This chapter considers how one might utilize fuzzy sets, near sets, and rough sets, taken separately or taken together in hybridizations, as part of a computational intelligence toolbox. These technologies offer set-theoretic approaches to solving many types of problems where the discovery of similar perceptual granules and clusters of perceptual objects is important. Perceptual information systems (or, more concisely, perceptual systems) provide stepping stones leading to nearness relations and properties of near sets. This work has been motivated by an interest in finding a solution to the problem of discovering perceptual granules that are, in some sense, near each other. Fuzzy sets result from the introduction of a membership function that generalizes the traditional characteristic function. Near set theory provides a formal basis for the observation, comparison, and classification of perceptual granules. Near sets result from the introduction of a description-based approach to perceptual objects and a generalization of the traditional rough set approach to granulation that is independent of the notion of the boundary of a set approximation. Near set theory gains strength from rough set theory, starting with extensions of the traditional indiscernibility relation. This chapter has been written to establish a context for three forms of sets that are now part of the computational intelligence umbrella. By way of introduction to near sets, this chapter considers various nearness relations that define partitions of sets of perceptual objects that are near each other. Every perceptual granule is represented by a set of perceptual objects that have their origin in the physical world. Objects that have the same appearance, i.e., objects with matching descriptions, are considered perceptually near each other. Pixels, pixel windows, and segmentations of digital images are given by way of illustration of sample near sets.
This chapter also briefly considers fuzzy near sets and near fuzzy sets, as well as rough sets that are near sets. The main contribution of this chapter is the introduction of a formal foundation for near sets considered in the context of fuzzy sets and rough sets.
James F. Peters
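The generalization the summary mentions, a fuzzy membership function replacing the crisp characteristic function, can be sketched as follows; the "cool temperatures" set and its breakpoints are invented for illustration, not taken from the chapter:

```python
# Hypothetical example: a crisp characteristic function vs. a fuzzy
# membership function for the set "cool temperatures" (invented breakpoints).

def crisp_cool(t):
    # Traditional characteristic function: t is either in the set or not.
    return 1.0 if t <= 15.0 else 0.0

def fuzzy_cool(t):
    # Triangular membership: full membership up to 10 degrees,
    # decreasing linearly to zero membership at 20 degrees.
    if t <= 10.0:
        return 1.0
    if t >= 20.0:
        return 0.0
    return (20.0 - t) / 10.0

for t in (5.0, 12.0, 18.0, 25.0):
    print(t, crisp_cool(t), fuzzy_cool(t))
```

The crisp function jumps from 1 to 0 at an arbitrary threshold, while the fuzzy membership degrades gradually, which is the essence of the generalization described above.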
Fuzzy without Fuzzy: Why Fuzzy-Related Aggregation Techniques Are Often Better Even in Situations without True Fuzziness
Summary
Fuzzy techniques were originally invented as a methodology that transforms the knowledge of experts, formulated in terms of natural language, into a precise computer-implementable form. There are many successful applications of this methodology to situations in which expert knowledge exists, the best known being fuzzy control.
In some cases, fuzzy methodology is applied even when no expert knowledge exists: instead of trying to approximate the unknown control function by splines, polynomials, or by any other traditional approximation technique, researchers try to approximate it by guessing and tuning the expert rules. Surprisingly, this approximation often works fine, especially in such application areas as control and multi-criteria decision making.
In this chapter, we give a mathematical explanation for this phenomenon.
Hung T. Nguyen, Vladik Kreinovich, François Modave, Martine Ceberio
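The approximation strategy described above, guessing and tuning fuzzy rules instead of fitting splines or polynomials, might be sketched as follows; the triangular sets, the three-rule base, and the target function y = x² are invented for illustration and are not taken from the chapter:

```python
# Hypothetical sketch: zeroth-order Sugeno-style fuzzy rules used as a
# function approximator, with an invented rule base roughly matching y = x^2.

def triangular(x, a, b, c):
    # Membership of x in a triangular fuzzy set with support [a, c], peak b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_approx(x, rules):
    # Each rule is ((a, b, c), output); inference is a weighted average
    # of rule outputs, weighted by membership of x in each antecedent set.
    weights = [triangular(x, a, b, c) for (a, b, c), _ in rules]
    total = sum(weights)
    if total == 0.0:
        return 0.0
    return sum(w * out for w, (_, out) in zip(weights, rules)) / total

# Invented rules whose outputs are samples of y = x^2 at x = 0, 1, 2.
rules = [((-1, 0, 1), 0.0), ((0, 1, 2), 1.0), ((1, 2, 3), 4.0)]
print(fuzzy_approx(1.0, rules), fuzzy_approx(1.5, rules))
```

Between the rule peaks the approximator interpolates, which is why tuning a handful of such rules can mimic an unknown control function even when no expert supplied them.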
Intermediate Degrees Are Needed for the World to Be Cognizable: Towards a New Justification for Fuzzy Logic Ideas
Summary
Most traditional examples of fuzziness come from the analysis of commonsense reasoning. When we reason, we use words from natural language like “young” and “well”. In many practical situations, these words do not have a precise true-or-false meaning; they are fuzzy. One may therefore be left with the impression that fuzziness is a subjective characteristic, caused by the specific way our brains work. However, the fact that we are the result of billions of years of successful adjusting-to-the-environment evolution leads us to conclude that everything about us humans is not accidental. In particular, the way we reason is not accidental; this way must reflect some real-life phenomena – otherwise, this feature of our reasoning would have been useless and would have been abandoned long ago. In other words, the fuzziness in our reasoning must have an objective explanation – in the fuzziness of the real world. In this chapter, we first give examples of objective real-world fuzziness. After these examples, we provide an explanation of this fuzziness – in terms of the cognizability of the world.
Hung T. Nguyen, Vladik Kreinovich, J. Esteban Gamez, François Modave, Olga Kosheleva
Paraconsistent Annotated Logic Program Before-after EVALPSN and Its Application
Summary
We have previously proposed a paraconsistent annotated logic program called EVALPSN. In EVALPSN, an annotation called an extended vector annotation is attached to each literal. In order to deal with the before-after relation between two time intervals, we introduce a new interpretation for extended vector annotations in EVALPSN, named before-after (bf) EVALPSN.
In this chapter, we introduce bf-EVALPSN and its application to real-time process order control and its safety verification, with simple examples. First, the background and an overview of EVALPSN are introduced, and paraconsistent annotated logic, as the formal background of EVALPSN, and EVALPSN itself are recapitulated with simple examples. Then, after bf-EVALPSN is formally defined, we show how to implement and apply it to real-time intelligent process order control and its safety verification, with simple practical examples. Last, unique and useful features of bf-EVALPSN are introduced, and conclusions and remarks are provided.
Kazumi Nakamatsu

Approximate Reasoning - Success Stories and Real World Applications

Frontmatter
A Fuzzy Set Approach to Software Reliability Modeling
Summary
This chapter discusses a fuzzy set approach that extends the notion of software debugging from a 0-1 (perfect/imperfect) crisp approach to one which incorporates fuzzy set ideas. The main objective of this extension is to make current software reliability models more realistic. The theory underlying this approach, and hence its key modeling tool, is the theory of random point processes with fuzzy marks. The relevance of this theory to software debugging arises from the fact that it incorporates the randomness due to the locations of the software faults and the fuzziness bestowed by the imprecision of the debugging effort. Through several examples, we also demonstrate that this theory provides a natural vehicle for investigating the properties and efficacy of fuzzy debugging of software programs and is therefore a contribution to computational intelligence.
P. Zeephongsekul
Computational Methods for Investment Portfolio: The Use of Fuzzy Measures and Constraint Programming for Risk Management
Summary
Computational intelligence techniques are very useful tools for solving problems that involve understanding, modeling, and analysis of large data sets. One of the numerous fields where computational intelligence has found an extremely important role is finance. More precisely, optimization of one’s financial investments, to guarantee a given return at minimal risk, has been tackled using intelligent techniques such as genetic algorithms, rule-based expert systems, neural networks, and support vector machines. Even though these methods provide good and usually fast approximations of the best investment strategy, they suffer from some common drawbacks, including the neglect of the dependence among criteria characterizing investment assets (i.e., return, risk, etc.) and the assumption that all available data are precise and certain. To address these weaknesses, we propose a novel approach involving a utility-based multi-criteria decision-making setting and fuzzy integration over intervals.
Tanja Magoč, François Modave, Martine Ceberio, Vladik Kreinovich
A Bayesian Solution to the Modifiable Areal Unit Problem
Summary
The Modifiable Areal Unit Problem (MAUP) prevails in the analysis of spatially aggregated data and influences pattern recognition. It describes the sensitivity of the measurement of spatial phenomena to the size (the scale problem) and the shape (the aggregation problem) of the mapping unit. The MAUP has received much attention from fields as diverse as statistical physics, image processing, human geography, landscape ecology, and biodiversity conservation. Recently, in the field of spatial ecology, a Bayesian estimation was proposed to grasp how our description of species distribution (described by range size and spatial autocorrelation) changes with the size and the shape of grain. This Bayesian estimation (BYE), called the scaling pattern of occupancy, is derived from the comparison of pair approximation (in the spatial analysis of cellular automata) and join-count statistics (in spatial autocorrelation analysis) and has been tested using various sources of data. This chapter explores how the MAUP can be described and potentially solved by the BYE. Specifically, the scale and the aggregation problems are analyzed using simulated data from an individual-based model. The BYE will thus help to finalize a comprehensive solution to the MAUP.
C. Hui
Fuzzy Logic Control in Communication Networks
Summary
The problem of network congestion control remains a critical issue and a high priority, especially given the increased demand to use the Internet for time/delay-sensitive applications with differing Quality of Service (QoS) requirements (e.g. Voice over IP, video streaming, Peer-to-Peer, interactive games). Despite many years of research effort and the large number of different control schemes proposed, there are still no universally acceptable congestion control solutions. The classical control-system techniques used by various researchers still do not perform sufficiently well to handle the dynamics and nonlinearities of TCP/IP networks, and thus to meet the diverse needs of today’s Internet. Given the need to capture such important attributes of the controlled system, the design of robust, intelligent control methodologies is required. Consequently, a number of researchers are looking at alternative non-analytical control-system design and modeling schemes that have the ability to cope with these difficulties, in order to devise effective, robust congestion control techniques as an alternative (or supplement) to traditional control approaches. These schemes employ fuzzy logic control (a well-known Computational Intelligence technique). In this chapter, we first discuss the difficulty of the congestion control problem and review control approaches currently in use, before motivating the utility of Computational Intelligence based control. Then, through a number of examples, we illustrate congestion control methods based on fuzzy logic control. Finally, some concluding remarks and suggestions for further work are given.
Chrysostomos Chrysostomou, Andreas Pitsillides
Adaptation in Classification Systems
Summary
The persistence and evolution of systems essentially depend on their ability to self-adapt to new situations. As an expression of intelligence, adaptation is a distinguishing quality of any system that is able to learn and to adjust itself in a flexible manner to new environmental conditions. Such ability ensures self-correction over time as new events happen, new input becomes available, or new operational conditions occur. This requires self-monitoring of performance in an ever-changing environment. The relevance of adaptation is established in numerous domains and by versatile real-world applications.
The primary goal of this contribution is to investigate adaptation issues in learning classification systems from different perspectives. Being a scheme of adaptation, lifelong incremental learning will be examined. Special attention will be given to adaptive neural networks, and the most visible incremental learning algorithms (fuzzy ARTMAP, nearest generalized exemplar, growing neural gas, generalized fuzzy min-max neural network, IL based on function decomposition) and their adaptation mechanisms will be discussed. Adaptation can also be incorporated in the combination of such incremental classifiers in different ways, so that adaptive ensemble learners can be obtained too. These issues and others pertaining to drift will be investigated and illustrated by means of a numerical simulation.
Abdelhamid Bouchachia
Music Instrument Estimation in Polyphonic Sound Based on Short-Term Spectrum Match
Summary
Recognition and separation of sounds played by various instruments is very useful in labeling audio files with semantic information. This is a non-trivial task requiring sound analysis, but the results can aid automatic indexing and browsing of music data when searching for melodies played by user-specified instruments. In this chapter, we describe all stages of this process, including sound parameterization, instrument identification, and also separation of layered sounds. Parameterization in our case is based on the power amplitude spectrum, but we also perform comparative experiments with parameterization based mainly on spectrum-related sound attributes, including MFCC, parameters describing the shape of the power spectrum of the sound waveform, and also time-domain-related parameters. Various classification algorithms have been applied, including k-nearest neighbor (KNN), yielding good results. The experiments on polyphonic (polytimbral) recordings and the results discussed in this chapter allow us to draw conclusions regarding the directions of further experiments on this subject, which can be of interest for any user of music audio data sets.
Wenxin Jiang, Alicja Wieczorkowska, Zbigniew W. Raś
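The KNN classification step mentioned in the summary can be sketched as follows; the "spectral" feature vectors and the two instrument labels are invented for illustration, not taken from the chapter's experiments:

```python
import numpy as np

# Hypothetical sketch of k-nearest-neighbour labelling of a short-term
# spectrum; training features and instrument labels are invented toy data.

def knn_predict(train_X, train_y, query, k=3):
    # Euclidean distance between the query spectrum and each training spectrum.
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    labels = [train_y[i] for i in nearest]
    # Majority vote among the k nearest training examples.
    return max(set(labels), key=labels.count)

# Toy spectral profiles: two instruments with distinct energy distributions.
train_X = np.array([[0.9, 0.1, 0.0], [0.8, 0.2, 0.1],
                    [0.1, 0.4, 0.9], [0.0, 0.5, 0.8]])
train_y = ["flute", "flute", "cello", "cello"]
print(knn_predict(train_X, train_y, np.array([0.85, 0.15, 0.05])))  # → flute
```

Real systems would replace these toy vectors with MFCC or power-spectrum-shape features as described above; the voting logic stays the same.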
Ultrasound Biomicroscopy Glaucoma Images Analysis Based on Rough Set and Pulse Coupled Neural Network
Summary
The objective of this chapter is to present a rough set and pulse-coupled neural network scheme for Ultrasound Biomicroscopy glaucoma image analysis. To increase the efficiency of the introduced scheme, an intensity adjustment process is first applied using the Pulse Coupled Neural Network (PCNN) with a median filter. This is followed by applying the PCNN-based segmentation algorithm to detect the boundary of the interior chamber of the eye image. Then, glaucoma clinical parameters are calculated and normalized, followed by a rough set analysis to discover the dependency between the parameters and to generate a set of reducts containing a minimal number of attributes. Finally, a rough confusion matrix is designed to discriminate between normal and glaucomatous eyes. Experimental results show that the introduced scheme is very successful and has high detection accuracy.
El-Sayed A. El-Dahshan, Aboul Ella Hassanien, Amr Radi, Soumya Banerjee
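The rough set analysis mentioned in the summary rests on lower and upper approximations of a decision class under an indiscernibility relation; a minimal sketch, with invented eye records and a hypothetical clinical parameter `p` (none of this data comes from the chapter):

```python
from collections import defaultdict

# Hypothetical sketch of rough-set lower/upper approximation: objects
# indiscernible on the chosen attributes fall into the same block.

def approximations(objects, attrs, target):
    # Group objects into indiscernibility classes by their attribute values.
    blocks = defaultdict(set)
    for obj, values in objects.items():
        key = tuple(values[a] for a in attrs)
        blocks[key].add(obj)
    lower, upper = set(), set()
    for block in blocks.values():
        if block <= target:          # block lies entirely inside the class
            lower |= block
        if block & target:           # block overlaps the class
            upper |= block
    return lower, upper

# Invented records: one clinical parameter and a set of glaucomatous eyes.
records = {"e1": {"p": "low"}, "e2": {"p": "low"},
           "e3": {"p": "high"}, "e4": {"p": "high"}}
glaucoma = {"e1", "e2", "e3"}
low, up = approximations(records, ["p"], glaucoma)
print(sorted(low), sorted(up))  # boundary region = upper minus lower
```

Here `e3` and `e4` are indiscernible on `p` yet differ in diagnosis, so they fall in the boundary region; a reduct search would look for a minimal attribute set that shrinks such boundaries.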
An Overview of Fuzzy C-Means Based Image Clustering Algorithms
Summary
Clustering is an important step in many imaging applications with a variety of image clustering techniques having been introduced in the literature. In this chapter we provide an overview of several fuzzy c-means based image clustering concepts and their applications. In particular, we summarise the conventional fuzzy c-means (FCM) approaches as well as a number of its derivatives that aim at either speeding up the clustering process or at providing improved or more robust clustering performance.
Huiyu Zhou, Gerald Schaefer
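The conventional FCM algorithm summarized above alternates between a membership update and a centre update; a minimal sketch with fuzzifier m = 2, using toy 1-D data and invented initial centres (image clustering would use pixel feature vectors instead):

```python
import numpy as np

# Hypothetical minimal fuzzy c-means sketch; the data and initial centres
# are invented toy values, not from the chapter's image experiments.

def fcm(X, init_centers, m=2.0, iters=30):
    centers = np.asarray(init_centers, dtype=float)
    for _ in range(iters):
        d = np.abs(X[None, :] - centers[:, None]) + 1e-12  # avoid div by 0
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=0)      # memberships sum to 1 per data point
        Um = U ** m
        # Centre update: fuzzy-weighted mean of the data points.
        centers = (Um @ X) / Um.sum(axis=1)
    return centers, U

X = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
centers, U = fcm(X, init_centers=[1.0, 4.0])
print(np.sort(centers))  # centres settle near the two obvious groups
```

The derivatives surveyed in the chapter modify exactly these two updates, e.g. by adding spatial penalty terms or by accelerating the distance computations.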
Backmatter
Metadata
Title
Foundations of Computational Intelligence Volume 2
edited by
Aboul-Ella Hassanien
Ajith Abraham
Francisco Herrera
Copyright year
2009
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-01533-5
Print ISBN
978-3-642-01532-8
DOI
https://doi.org/10.1007/978-3-642-01533-5
