
1998 | Book

Methodology and Tools in Knowledge-Based Systems

11th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems IEA-98-AIE Benicàssim, Castellón, Spain, June 1–4, 1998 Proceedings, Volume I

Edited by: José Mira, Angel Pasqual del Pobil, Moonis Ali

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This two-volume set constitutes the refereed proceedings of the 11th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, IEA/AIE-98, held in Benicàssim, Castellón, Spain, in June 1998. The two volumes present a total of 187 revised full papers selected from 291 submissions. In accordance with the conference theme, the books are devoted to new methodologies, knowledge modeling and hybrid techniques. The papers explore applications from virtually all subareas of AI, including knowledge-based systems, fuzziness and uncertainty, formal reasoning, neural information processing, multiagent systems, perception, robotics, natural language processing, machine learning, supervision and control systems, etc.

Table of Contents

Frontmatter
Knowledge technology: Moving into the next millennium
B. J. Wielinga, A. Th. Schreiber
In search of a common structure underlying a representative set of generic tasks and methods: The hierarchical classification and therapy planning cases study

The CommonKADS methodology for the analysis of human expertise and the development of knowledge-based systems yields a library of generic tasks and problem-solving methods (PSMs), to be used as components of a conceptual model; the corresponding formal and design models must then be produced. Currently, given the conceptual model, no complete description is available that explains how the program code can be obtained, although important efforts have been made to describe such transitions. A related question is how much these descriptions depend on the generic task. Our conjecture in this article is that, in addition to the libraries of generic tasks and PSMs, there is indeed an underlying structure common to most of these tasks, at least for the diagnosis and planning tasks. In these cases, we can describe a constructive method for obtaining the operational code for the task and the associated PSM. That is, if we describe the generic task in a suitable way, in terms of natural language, we can establish relationships between this description and a structure that we can represent by means of hierarchic graphs. Since we can also establish relationships between the program code and such graphs, and both relationships are constant and reversible, we can always use this method to obtain the program code given the generic task and, inversely, to obtain the graph and the knowledge-level model from the code.

J. C. Herrero, J. Mira
Theory of constructible domains for robotics: Why?

The goal of Recursion Manipulation (RM) is to design a calculator that provides formal proofs for a particular type of formulae corresponding to the task of constructing and verifying recursive programs. Recalling first that Gödel's result cannot be used as a mathematically justifiable argument against RM, the paper illustrates the strategic importance of RM in the design of autonomous, self-reprogrammable robots. In particular, building on more technical papers that provide the necessary theoretical support for this one, we illustrate that it is sufficient for roboticians to be concerned with an external characterization of RM. Relying on the Theory of Constructible Domains, the framework of RM is powerful enough to take care of the logical justifications inherent to various forms of induction schemes (i.e., the termination justifications for recursive programs and plans). The paper also illustrates that two, not necessarily compatible, types of efficiency are related to recursive plans.

Marta Fraňová, Monia Kooli
Intelligent systems must be able to make programs automatically for assuring the practicality

This paper discusses an important issue for intelligent systems that solve large-scale problems: how to generate computer programs automatically. Since designing a problem-solving system is itself a kind of problem solving, a new modeling scheme that can deal with multi-level structures is necessary for representing problem solving itself. A large-scale system used repeatedly should be a procedural program, not an exploratory one. A method for translating modeling results into a procedural program is proposed in this paper.

Takumi Aida, Setsuo Ohsuga
Towards a knowledge-level model for concurrent design

This paper describes the development and validation of a knowledge-level model of concurrent design. Concurrent design is characterised by the extent to which multidisciplinary perspectives influence all stages of the product design process. This design philosophy is being increasingly used in industry to reduce costs and improve product quality. We propose an essentially rational model for concurrent design using CommonKADS and report on our validation of the model through studies with designers.

Robin Barker, Anthony Meehan, Ian Tranter
Multiagent AI implementation techniques: A new software engineering trend

We present techniques for the design and implementation of software systems using AI and concurrent software engineering techniques. The stages of conceptualization, design and implementation are defined by AI methods. Software systems are designed through knowledge acquisition, specification, and multiagent implementations. Multi-agent implementations are proposed to facilitate a fault-tolerant software design methodology, applying our recent research that has led to fault-tolerant AI systems. A particular approach to, and an AI formulation of, designing fault-free and fault-tolerant software is presented, based on agent models of computation. Design with objects and a novel multi-kernel approach is presented. A system is defined by many communicating pairs of kernels, each defining a part of the system, as specified by object-level knowledge acquisition. An overview of agent morphisms and algebras is presented and applied to the design.

Cyrus F. Nourani
Information systems integration: some principles and ideas

In this paper we present an approach to modelling a complex information system based on the integration of existing information systems. We emphasise the comparison phase of the integration process, in which similarities and conflicts between the initial systems must be detected. We show how object classes can be compared using behavioural aspects of objects, discuss conflicts that can occur between object classes, and give rules to resolve them.

Nahla Haddar, Faïez Gargouri, Abdelmajid Ben Hamadou, Charles François Ducateau
An emergent paradigm for expert resource management systems

Data structuring mechanisms suited to complex problem solving environments have, in general, only been satisfactory for static processes. The challenge of complexity, for example of process dynamics, in which human managers interact continuously with control systems and environmental changes, has focused our research program on new paradigms in knowledge engineering and their associated requirements for novel data structures to support knowledge modeling. This paper reports our conclusions regarding an emergent paradigm for expert resource management systems and identifies the requisite data structure, which is based on a semantic network abstraction, the "system map". Other examples of emergent paradigms in knowledge engineering can be found in [Garner, 96].

Swamy Kutti, Brian Garner
New modeling method for using large knowledge bases

The objective of this paper is to discuss some aspects of large-scale knowledge bases, especially ways of building knowledge bases from the viewpoint of using them to solve real problems and to assure the generality of knowledge-based systems. A new modeling scheme for representing problems that include strategic decisions is discussed, as well as a way of generating problem-specific problem-solving systems. Many multi-level concepts are introduced, such as a multi-level function structure and its corresponding knowledge structure, multiple meta-level operations, and a multi-strata model to represent problems including human activity. It is shown that the system achieves not only generality but also practicality of problem solving by enabling automatic programming.

Wataru Ikeda, Setsuo Ohsuga
Integration of formal concept analysis in a knowledge-based assistant

Building a knowledge base for an intelligent help system, such as Aran, is an essential and difficult phase. In Aran, a knowledge-based assistant that helps users in the use of UNIX systems, we integrate the traditional manual approach of conceptual information structuring with a complementary one based on Formal Concept Analysis (FCA). FCA allows us to obtain the domain's formal concepts (semi-)automatically and to organise the information around them.

Baltasar Fernandez-Manjon, Antonio Navarro, Juan M. Cigarran, Alfredo Fernandez-Valmayor
Knowledge modeling of program supervision task

This paper presents a knowledge-level analysis of the program supervision task based on two different systems: PEGASE and PULSAR. A knowledge-level analysis of a knowledge-based system reveals the organisation of the knowledge it uses and how it uses this knowledge to solve the task. It is also the key to determining the set of properties that the system assumes about domain knowledge. These aspects have been successfully used as a framework to compare different systems, mostly for knowledge engineering purposes. Our purpose is to use assumptions as the properties that the knowledge base must verify.

Mar Marcos, Sabine Moisan, Angel P. del Pobil
Quadri-dimensional interpretation of syllogistic inferential processes in polyvalent logic, with a view to structuring concepts and assertions for realizing the universal knowledge basis

Modelling syllogistic-inferential processes in polyvalent logic by diachronic syllogistic structures, we realise their quadri-dimensional interpretation, in this paper, by relational-objectual-propertational chains convergent in diachronic spaces. Aristotle considered the definition the motor nerve of syllogistic deduction, the middle term being a definition. Leibniz conceived the definition as the beginning and end of any demonstration, a demonstration being nothing but a chain of definitions. The concept of structure, implying a topological relational approach, designates the necessary relations between the elements of a system, invariant and independent of the elements and therefore formalizable; the structure constitutes an abstract model capable of making the rules governing the transformations rationally intelligible. By structuring the concepts and assertions of scientific theories according to the rules of syllogistic definability and deducibility, systems are obtained which underlie the realization of the Universal Knowledge Basis.

Ion I. Mirita
Complexity of precedence graphs for assembly and task planning

This paper presents a complete and correct method to compute how many plans exist for an assembly or processing task. Planning is an NP-hard problem, so in some situations the application of time-consuming search methods must be avoided. However, until now the computation of the exact number of alternative plans for an arbitrary situation had not been reported. Notice that the complexity of the problem does not depend on the number of involved operations, components or parts; it depends on the topology of the precedences between the operations. With the method presented in this paper, it is easy to decide which search method to use, since we know how many possible plans could exist before applying the search method.

João Rocha, Carlos Ramos, Zita Vale
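Counting the feasible plans for a set of precedence constraints amounts to counting the linear extensions of the precedence graph. The paper's contribution is a method that avoids enumeration; as an independent, hypothetical illustration of what is being counted, a brute-force counter (practical only for small task sets) might look like this:

```python
from itertools import permutations

def count_linear_extensions(ops, precedences):
    """Count the orderings of `ops` that respect every (a, b) precedence
    (a must come before b). Brute force; fine for small task sets."""
    count = 0
    for perm in permutations(ops):
        pos = {op: i for i, op in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in precedences):
            count += 1
    return count

# Four assembly operations: A before B and C; B and C both before D.
ops = ["A", "B", "C", "D"]
precedences = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]
print(count_linear_extensions(ops, precedences))  # 2 valid plans: ABCD, ACBD
```

With no precedences at all, every permutation is a plan, so the count collapses to n! and search-method selection becomes critical, which is exactly the decision the paper's exact count supports.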
A Monte Carlo algorithm for the satisfiability problem

Recently a randomized algorithm based on the Davis-Putnam procedure was designed in [16] for the purpose of solving the satisfiability problem. In this letter another Monte Carlo algorithm, derived from an original algorithm [4], is proposed. The average performance of the algorithm is polynomial, and the probability that the algorithm fails to yield a correct answer for some data is less than ε. Results are compared with those given in [16] and show an interesting performance for our algorithm.

Habiba Drias
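The abstract does not give the algorithm's details. As a generic illustration of the Monte Carlo idea for satisfiability (a hypothetical sketch, not the paper's algorithm), one can sample random truth assignments and report "satisfiable" as soon as one sample succeeds; a negative answer is then only probably correct, which is the bounded-error behaviour the abstract refers to:

```python
import random

def mc_sat(clauses, n_vars, trials=1000, seed=0):
    """Monte Carlo satisfiability test by random sampling.
    Clauses use DIMACS-style literals: a nonzero int k means variable k,
    and -k means variable k negated. Returns True if a satisfying
    assignment is found; a False answer may be wrong with some probability."""
    rng = random.Random(seed)
    for _ in range(trials):
        # Random assignment; index 0 is unused so literals index directly.
        assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]
        if all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# (x1 or x2) and (not x1 or x2) -> satisfiable (take x2 = True)
print(mc_sat([[1, 2], [-1, 2]], 2))  # True
# x1 and (not x1) -> unsatisfiable
print(mc_sat([[1], [-1]], 1))  # False
```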
Applying the propose&revise strategy to the hardware-software partitioning problem

Hardware-software co-design is a new discipline that provides methodologies and tools for the fast development of complex heterogeneous systems by combining hardware and software development cycles. This paper shows how the Propose&Revise PSM can be used to solve the hardware-software partitioning problem, the co-design task that decides on the best implementation of the different components of a system. In particular, a fuzzy expert system, SHAPES, has been developed following the CommonKADS methodology.

M. L. López-Vallejo, C.A. Iglesias, J. C. López
Fuzzy hybrid techniques in modeling

We describe several methods to address the fuzzy rule generation and tuning problem. To generate the rules we use several clustering algorithms; this approach assumes that the data lack structure. To tune the rules we use two different techniques: one is based on gradient descent, and the other on adjusting the rule outputs to reduce the error.

M. Delgado, A. F. Gómez-Skarmeta, J. Gómez Marín-Blázquez, H. Martinez Barberá
Chemical process fault diagnosis using kernel retrofitted fuzzy genetic algorithm based learner (FGAL) with a hidden Markov model

A hybrid generative-discriminative system for real-time process fault diagnosis is introduced. It is based on a symbolic learner (FGAL) retrofitted with Gaussian kernel densities that generate instantaneous class probabilities, which are then used by a hidden Markov model to estimate the most likely state (fault) given the past evidence. The system allows symbolic knowledge extraction and is modular and robust. The diagnostic performance of the developed system is shown on a nonisothermal, cascade-controlled continuously stirred tank reactor (CSTR).

I. Burak Özyurt, Aydin K. Sunol, Lawrence O. Hall
On linguistic approximation with genetic programming

Linguistic approximation is a well-known problem in fuzzy set theory: describing an arbitrary fuzzy set in the most appropriate linguistic terms. It involves a search through the space of linguistic descriptions (labels) generated according to a given grammar and vocabulary. The vocabulary and grammar specify a language structure, i.e. allowable and meaningful combinations of primary linguistic terms, linguistic modifiers and linguistic connectives, as well as the order of their composition. This structure may become quite complex when a larger vocabulary and more complicated grammatical rules are considered. Therefore linguistic approximation may require search techniques that are able to deal effectively with more complex linguistic structures. The paper presents an approach to linguistic approximation based on an evolutionary search technique, genetic programming. The approach is demonstrated with the results of initial experiments.

Ryszard Kowalczyk
A new solution methodology for fuzzy relation equations

In this paper a general methodology for studying and solving fuzzy relation equations based on sup-t composition, where t is any continuous triangular norm, is proposed. To this end the concept of the “solution matrices” is introduced, as a way of representing the process information required for the resolution. Using this concept, the solution existence of a fuzzy relation equation is first examined. When the relation equation has no solution, the reasons for this lack of solvability are found. Otherwise, the solution set is determined.

Spyros G. Tzafestas, Giorgos B. Stamou
An approach to generate membership function by using Kohonen's SOFM nets

This paper provides an approach that determines fuzzy membership degrees based on Kohonen's Self-Organizing Feature Map (SOFM) nets and the Lagrange interpolation function. In this way, we need not depend on a traditional membership function, but only use several typical models. This can be useful in distinguishing and processing some kinds of uncertain systems. The specific algorithm is given in the paper, and finally, by means of data simulation, we obtain objective and reasonable truth-values.

Li Manping, Wang Yuying, Zhang Xiurong
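The abstract combines SOFM prototypes with Lagrange interpolation. The interpolation step alone can be sketched as follows, with hypothetical prototype points standing in for SOFM cluster centres (a sketch of the idea, not the paper's full algorithm):

```python
def lagrange_membership(points):
    """Build a membership function by Lagrange interpolation through a few
    (x, degree) prototype points -- e.g. degrees attached to SOFM cluster
    centres -- instead of assuming a fixed analytic membership form."""
    def mu(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return max(0.0, min(1.0, total))  # clamp to a valid membership degree
    return mu

# Hypothetical prototypes for "comfortable temperature" (degrees Celsius).
mu = lagrange_membership([(10, 0.0), (20, 1.0), (30, 0.0)])
print(mu(20))  # 1.0
print(mu(15))  # 0.75
```

The clamp is needed because a Lagrange polynomial can overshoot the [0, 1] range between prototype points.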
Intelligent policing function for ATM networks

In this work we present an intelligent system, implemented with a hybrid technique, that aims to solve the problem of traffic control in Asynchronous Transfer Mode (ATM) networks. In this system, fuzzy and neural techniques are combined to obtain a policing mechanism simple enough to be cost-effective and with performance substantially better than that of traditional mechanisms. The proposed UPC is very fast, has high selectivity and admits an immediate hardware implementation.

Anna Rita Calisti, Francisco David Trujillo Aguilera, Antonio Díaz Estrella, Francisco Sandoval Hernández
Deriving fuzzy subsethood measures from violations of the implication between elements

The aim of this paper is to present a collection of new measures of subsethood between fuzzy sets. Starting from the relationship between crisp set containment and logical implication, some fuzzy approaches are reviewed. A selection of reasonable fuzzy implication operators is used to define fuzzy measures of inclusion using Kosko's fit-violation strategy. We test these measures against two axiomatics and derive, when possible, measures of fuzzy entropy. Once a subsethood measure between fuzzy sets is defined, other operations such as set equality, similarity, disjointness and complement can be considered. The need for containment measures arises in areas as wide-ranging as approximate reasoning and inference, image processing, and learning.

Francisco Botana
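Kosko's fit-violation strategy, which the abstract takes as a starting point, yields the classic subsethood measure S(A, B) = 1 - sum_i max(0, a_i - b_i) / sum_i a_i, equivalently sum_i min(a_i, b_i) / sum_i a_i. A minimal sketch (the paper's new measures, built from other implication operators, are not shown):

```python
def kosko_subsethood(a, b):
    """Degree to which fuzzy set A is contained in fuzzy set B (Kosko):
    1 minus the normalized sum of fit violations max(0, a_i - b_i)."""
    card_a = sum(a)
    if card_a == 0:
        return 1.0  # the empty set is contained in everything
    violation = sum(max(0.0, ai - bi) for ai, bi in zip(a, b))
    return 1.0 - violation / card_a

A = [0.2, 0.5, 0.8]
B = [0.4, 0.6, 1.0]
print(kosko_subsethood(A, B))  # 1.0: A lies entirely under B
print(kosko_subsethood(B, A))  # 0.75: B is only partly inside A
```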
Object oriented stochastic petri net simulator: A generic kernel for heuristic planning tools

To produce a large number of different products in an efficient way, traditional industrial production architectures have incorporated non-value-adding operations, such as transporting, storing and inspecting, to provide flexibility. These operations should be minimised, if not eliminated, to improve the performance of automated manufacturing systems. Heuristic planning tools seem to offer a good methodology for dealing with the sub-optimal scheduling policies of present flexible manufacturing systems. In this paper an approach to evaluate and improve the results of heuristic planning tools is presented.

Miquel A. Piera, Antonio J. Gambin, Ramon Vilanova
An artificial neural network for flashover prediction. A preliminary study

Trying to estimate the probability of a flashover occurring during a compartment fire is a complex problem as flashovers depend on a large number of factors (for example, room size, air flow etc.). Artificial neural networks appear well suited to problems of this nature as they can be trained to understand the explicit and inexplicit factors that might cause flashover. For this reason, artificial neural networks were investigated as a potential tool for predicting flashovers in a room with known, or estimable, compartment characteristics. In order to deal with uncertainties that can exist in a model's results, a statistical analysis was employed to identify confidence intervals for predicted flashover probabilities. In addition, Monte Carlo simulation of trained artificial neural networks was also employed to deal with uncertainties in initial room characteristic estimates. This paper discusses these analyses and comments on the results that were obtained when artificial neural networks were developed, trained and tested on the data supplied.

Christian W. Dawson, Paul D. Wilson, Alan N. Beard
Fuzzy decision making under uncertainty

When no probabilities are available for the states of nature, decisions are made under uncertainty. When probabilities are unattainable, criteria such as minimax, maximin and minimax regret can be used. With these criteria, a single value is assigned to every strategy and state of nature. Fuzzy numbers are a good tool for the operations research analyst facing uncertainty and subjectivity; here, a triangular fuzzy number is used instead of a single outcome value. Numerical examples are given for every fuzzy decision criterion.

Cengiz Kahraman, Ethem Tolga
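One common way to apply a criterion such as maximin to triangular fuzzy payoffs is to rank the fuzzy numbers by a defuzzified value such as the centroid. This is only one ranking method among several in the literature, and the payoff table below is hypothetical, not taken from the paper:

```python
def centroid(tfn):
    """Defuzzify a triangular fuzzy number (low, mode, high) to its centroid.
    One simple ranking method among several possible ones."""
    low, mode, high = tfn
    return (low + mode + high) / 3.0

def fuzzy_maximin(payoffs):
    """payoffs[strategy][state] is a triangular fuzzy payoff. Pick the
    strategy whose worst-state payoff (ranked by centroid) is largest."""
    best, best_val = None, float("-inf")
    for strategy, row in payoffs.items():
        worst = min(centroid(t) for t in row)
        if worst > best_val:
            best, best_val = strategy, worst
    return best

# Hypothetical payoff table: two strategies under two states of nature.
payoffs = {
    "build_small": [(40, 50, 60), (30, 40, 50)],
    "build_large": [(80, 90, 100), (-20, -10, 0)],
}
print(fuzzy_maximin(payoffs))  # build_small: its worst case (40) beats -10
```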
Improving performance of naive bayes classifier by including hidden variables

A basic task in many applications is classification of observed instances into a predetermined number of categories or classes. Popular classifiers for probabilistic classification are Bayesian network classifiers and particularly the restricted form known as the Naive Bayes classifier. Naive Bayes performs well in many domains but suffers the limitation that its classification performance cannot improve significantly with an increasing sample size. The expressive power of Naive Bayes is inadequate to capture higher order relationships in the data. This paper presents a method for improving predictive performance of the Naive Bayes classifier by augmenting its structure with additional variables learned from the training data. The resulting classifier retains the advantages of simplicity and efficiency and achieves better predictive performance than Naive Bayes. The approach proposed here can be extended to more general Bayesian network classifiers.

Bozena Stewart
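The baseline that the paper augments, the Naive Bayes classifier, can be sketched in a few lines for categorical features with Laplace smoothing (the hidden-variable augmentation itself is not shown; the toy data is hypothetical):

```python
from collections import Counter, defaultdict

class NaiveBayes:
    """Minimal categorical Naive Bayes with Laplace smoothing: the baseline
    classifier whose structure the paper augments with hidden variables."""

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.priors = Counter(y)
        self.n = len(y)
        # counts[class][feature_index][value] -> occurrences
        self.counts = {c: defaultdict(Counter) for c in self.classes}
        for xs, c in zip(X, y):
            for i, v in enumerate(xs):
                self.counts[c][i][v] += 1
        return self

    def predict(self, xs):
        def score(c):
            s = self.priors[c] / self.n
            for i, v in enumerate(xs):
                # Laplace smoothing, assuming binary-valued features here.
                s *= (self.counts[c][i][v] + 1) / (self.priors[c] + 2)
            return s
        return max(self.classes, key=score)

# Toy weather data: (outlook, windy) -> decision.
X = [("sunny", "no"), ("sunny", "yes"), ("rain", "yes"), ("rain", "no")]
y = ["play", "play", "stay", "play"]
clf = NaiveBayes().fit(X, y)
print(clf.predict(("sunny", "no")))  # play
```

The conditional-independence assumption baked into `score` is precisely the limitation the abstract points to: no matter how much data arrives, the model cannot represent higher-order feature interactions without added structure.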
A system handling RCC-8 queries on 2D regions representable in the closure algebra of half-planes

The paper describes an algebraic framework for representing and reasoning about 2D spatial regions. The formalism is based on a Closure Algebra (CA) of half-planes, i.e., a Boolean Algebra augmented with a closure operator. The CA provides a flexible representation for polygonal regions and for expressing topological constraints among such regions. The paper relates these constraints to relations defined in the 1st-order Region Connection Calculus (RCC). This theory allows the definition of a set of eight topological relations (RCC-8) which forms a partition of all possible relations between two regions. We describe an implemented algorithm for determining which of the RCC-8 relations holds between any two regions representable in the CA. One application of such a system is in Geographical Information Systems (GIS), where the data is often represented quantitatively but it would be desirable for queries to be expressed qualitatively in a high-level language such as that of the RCC theory.

Brandon Bennett, Amar Isli, Anthony G. Cohn
Cardinal directions on extended objects for qualitative navigation

With the aim of simulating the human reasoning process about spatial aspects such as orientation, distance and cardinal directions, several qualitative models have been developed in recent years. However, there is no model for representing and reasoning with all these spatial aspects in a uniform way. In the approach presented in this paper, this integration has been accomplished by considering each type of spatial relationship an instance of the Constraint Satisfaction Problem. Constraint Logic Programming over Finite Domains, extended with Constraint Handling Rules, is a programming paradigm which provides a suitable level of abstraction to define an incremental, flexible, efficient (with polynomial cost) and general-purpose Constraint Solver (CS) for each of the spatial aspects to be integrated. Moreover, it also allows the use of points and extended objects as primitives of reasoning. The corresponding CS for cardinal directions is described in this paper. This model has been applied to the development of a Qualitative Navigator Simulator.

M. Teresa Escrig, Francisco Toledo
Definition and study of lineal equations on order of magnitude models

In situations where quantitative data descriptions are lacking, it is important to use equations whose operators and coefficients are qualitative. In addition, the results obtained have to be the qualitative description of the quantitative information that we would obtain by using numerical models. In this paper we define and study qualitative linear equations by using qualitative operators consistent with ℝ in a qualitative model of orders of magnitude.

Núria Agell, Fernando Febles, Núria Piera
A constraint-based approach to assigning system components to tasks

In multi-component systems, individual components must be assigned to the tasks that they are to perform. In many applications, there are several possible task decompositions that could be used to achieve the task, and there are limited resources available throughout the system. We present a technique for making task assignments under these conditions. Constraint satisfaction is used to assign components to particular tasks. The task decomposition is selected using heuristics to suggest a decomposition for which an assignment can be found efficiently. We have applied our technique to the problem of task assignment in systems of underwater robots and instrument platforms working together to collect data in the ocean.

Elise H. Turner, Roy M. Turner
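A minimal constraint-satisfaction sketch of the assignment step might look like the following (hypothetical scenario and component names; the paper's heuristic selection among alternative task decompositions is not shown):

```python
def assign(tasks, components, can_do, assigned=None):
    """Backtracking constraint satisfaction: give each task a distinct
    component able to perform it. `can_do[component]` is the set of tasks
    that component supports. Returns a task->component dict, or None."""
    assigned = assigned or {}
    if len(assigned) == len(tasks):
        return assigned
    task = tasks[len(assigned)]
    for comp in components:
        if comp not in assigned.values() and task in can_do[comp]:
            result = assign(tasks, components, can_do, {**assigned, task: comp})
            if result is not None:
                return result
    return None  # dead end: backtrack

# Hypothetical underwater-survey scenario: three tasks, three platforms.
can_do = {
    "auv1": {"sonar_sweep", "sample"},
    "auv2": {"sonar_sweep"},
    "buoy": {"relay"},
}
tasks = ["sonar_sweep", "sample", "relay"]
print(assign(tasks, list(can_do), can_do))
# {'sonar_sweep': 'auv2', 'sample': 'auv1', 'relay': 'buoy'}
```

Note the backtrack: assigning `auv1` to the sonar sweep first leaves no component for the sampling task, so the solver retracts that choice and tries `auv2`.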
Automatic semiqualitative analysis: Application to a biometallurgical system

The aim of this work is the representation and analysis of semiqualitative models, whose qualitative knowledge is represented by means of qualitative operators and envelope functions. A semiqualitative model is transformed into a family of quantitative models. In this paper the analysis of a model is posed as a constraint satisfaction problem. Constraint satisfaction is an umbrella term for a variety of techniques from Artificial Intelligence and related disciplines; here, attention is focused on interval consistency techniques, by means of which the semiqualitative analysis is carried out automatically. The method is applied to an industrial biometallurgical system in order to show how production capacity can be increased.

R. M. Gasca, J. A. Ortega, M. Toro
Including qualitative knowledge in semiqualitative dynamical systems

A new method to incorporate qualitative knowledge into semiqualitative systems is presented. In these systems, qualitative knowledge may be expressed in the parameters, initial conditions and/or vector fields. Qualitative knowledge is represented by means of intervals, continuous qualitative functions and envelope functions. A dynamical system is defined by differential equations with qualitative knowledge; this definition is transformed into a family of dynamical systems. In this paper the semiqualitative analysis is carried out by means of constraint satisfaction problems, using interval consistency techniques.

J. A. Ortega, R. M. Gasca, M. Toro
A tool to obtain hierarchical qualitative rules from quantitative data

A tool to obtain a classifier system from labelled databases is presented. The result is a hierarchical set of rules dividing the space into n-orthohedra. The hierarchy means that the obtained rules must be applied in a specific order: an example is classified by the i-th rule only if it does not match the conditions of the i−1 preceding rules. A genetic algorithm with real coding is used as the search method. Naturally, the computation time is greater than for other systems such as C4.5, but the tool gives the user flexibility: it is always possible to produce a rule set with a 0% error rate and, from there, to relax the error rate in order to obtain fewer rules. Afterwards, a qualitative approximation is made to obtain a linguistic rule set. Finally, several results are summarized in section 4.

Jesús Aguilar, José Riquelme, Miguel Toro
Hybridization techniques in optical emission spectral analysis

Formal artificial intelligence (AI) tools have been used to produce a hybrid system for optical emission spectral analysis that combines a multilayer perceptron neural network with rule-based system techniques. Although optical emission spectroscopy is extensively used as an in-situ diagnostic for ionised gas plasmas in manufacturing processes, the need to interpret the spectra without prior knowledge or expertise on the user's part has encouraged the use of AI techniques to automate the interpretation process. The hybrid approach presented here combines a modified network architecture with a simple rule base in order to produce explicit models of the identifiable chemical species.

C. S. Ampratwum, P. D. Picton, A. A. Hopgood, A. Browne
KADS qualitative model based motor speed control

The aim of this work is to propose a qualitative modelling method based on two AI approaches: Qualitative Physics and KADS. It is an expertise-data-driven modelling method that, starting from the domain ontology and dynamics, achieves a symbolic formalization that models a dynamical system. The system model is used for a control application (also based on KADS): D.C. motor speed control.

J. Ruiz Gomez
Negligibility relations between real numbers and qualitative labels

This work continues previous studies of a relative orders-of-magnitude model based on the formalization of certain qualitative relations between real quantities and qualitative labels. The mathematical model for negligibility in qualitative reasoning presented here is intended for the not unusual situation in which some real quantitative data are available while other quantities are described only qualitatively. Negligibility relations between real numbers, between numbers and labels, and between labels are defined and studied in detail.

Mónica Sánchez, Francesc Prats, Núria Piera
Qualitative reasoning under uncertain knowledge

In this paper, we focus our attention on the processing of the uncertainty encountered in common-sense reasoning. First, we explore the concept of uncertainty, and then we suggest a new approach which enables a representation of uncertainty using linguistic values. The originality of our approach is that it allows reasoning over the uncertainty interval [Certain, Totally uncertain]. The uncertainty scale that we use here presents some advantages over other scales in the representation and management of uncertainty. The axiomatics of our approach are inspired by Shannon's theory of entropy and built on the substrate of a symbolic many-valued logic.

M. Chachoua, D. Pacholczyk
Qualitative reasoning for admission control in ATM networks

Asynchronous Transfer Mode (ATM) provides transport functionality for the future Broadband Integrated Services Digital Network. ATM networks are connection-oriented, and Connection Admission Control (CAC) is a preventive control procedure responsible for determining whether a connection request is admitted or denied. There is a trade-off between the accuracy and the evaluation time of CAC schemes: the most accurate methods spend excessive time on evaluation, and consequently the request is rejected due to time-out. A possible approach is to evaluate some possible future situations in advance (pre-evaluation); these results are computed beforehand and saved in a table. Between consecutive connection requests, the evaluation system evaluates in advance the future response for each class of traffic. The drawback occurs when two or more connection requests arrive close in time, so that the on-line pre-evaluation scheme has 'Unknown' in the response vector. The objective of this work is to cover this possible lack of information by using Qualitative Reasoning (QR). QR enables calculation with non-numerical descriptions while still producing coherent results.

J. L. Marzo, A. Bueno, R. Fabregat, T. Jove
Using tolerance calculus for reasoning in relative order of magnitude models

Order of magnitude (OM) reasoning is an approach that offers a midway abstraction level between numerical methods and qualitative formalisms. Relative OM models postulate a set of relations and inference rules on orders of magnitude. The main shortcoming of these models is the difficulty of validating the results they produce when applied to reasoning on real-world problems. A widely accepted solution to this deficiency is to extend relative OM reasoning systems with a tolerance calculus. The way this extension is defined is a sensitive problem, affecting the accuracy of the results produced by the system. In this paper we present two ideas which could help to obtain more accurate results. First, we propose a more refined definition of the negligibility relation which is intended to avoid the artificial growth of tolerances. Second, we show that, for many inference rules, one can derive tighter tolerance bounds when additional information is available.

Robert Dollinger, Ioan Alfred Letia
Complexity and cognitive computing

This paper proposes a hybrid expert system to mitigate some of the complexity problems present in the artificial intelligence field, such as the so-called bottleneck of expert systems, i.e., the knowledge elicitation process; the choice of model for the knowledge representation used to code human reasoning; the number of neurons in the hidden layer and the topology used in the connectionist approach; and the difficulty of obtaining an explanation of how the network arrived at a conclusion. To overcome these difficulties, cognitive computing was integrated into the developed system.

Lourdes Mattos Brasil, Fernando Mendes de Azevedo, Jorge Muniz Barreto, Monique Noirhomme-Fraiture
On decision-making in strong hybrid evolutionary algorithms

Any search algorithm is inherently limited if a broad enough range of problems is considered. Hybridization (the use of problem-dependent knowledge) is the mechanism for overcoming these limitations. This work discusses several ways to achieve such hybridization. It is shown that no efficient algorithmic tool exists to guide the design process. Therefore, two methodological heuristics are studied for that purpose: minimization of intra-forma fitness variance and the use of non-homogeneous representations. The effectiveness of these heuristics is supported by empirical results.

Carlos Cotta, José M. Troya
Generalized predictive control using genetic algorithms (GAGPC). An application to control of a non-linear process with model uncertainty

Predictive Control is one of the most powerful techniques in process control, but its application to non-linear processes is challenging. This is basically because the optimization method commonly used limits the kind of functions which can be minimized. The aim of this work is to show how the combination of Genetic Algorithms (GA) and Generalized Predictive Control (GPC), which we call GAGPC, can be applied to non-linear process control. This paper also shows GAGPC's performance when controlling non-linear processes with model uncertainties. Success in this area will open the door to using GAGPC for better control of industrial processes.
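The core idea of GAGPC — replacing the gradient-based minimization of the GPC cost with a GA search over the control horizon — can be illustrated with a minimal sketch. The plant model, cost weights, and GA settings below are illustrative assumptions, not the authors' configuration.

```python
import math
import random

def plant(y, u):
    """Toy nonlinear plant (assumed for illustration only)."""
    return 0.6 * y + 0.4 * math.tanh(u)

def cost(u_seq, y0, ref):
    """GPC-style cost: tracking error plus a small control penalty
    accumulated over the prediction horizon."""
    y, J = y0, 0.0
    for u in u_seq:
        y = plant(y, u)
        J += (ref - y) ** 2 + 0.01 * u ** 2
    return J

def ga_minimize(y0, ref, horizon=5, pop=30, gens=60):
    """Minimize the predictive-control cost with a simple elitist GA:
    keep the best half, refill with Gaussian-mutated copies."""
    random.seed(1)
    P = [[random.uniform(-3, 3) for _ in range(horizon)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda u: cost(u, y0, ref))
        elite = P[: pop // 2]
        P = elite + [[g + random.gauss(0, 0.2) for g in random.choice(elite)]
                     for _ in range(pop - len(elite))]
    return min(P, key=lambda u: cost(u, y0, ref))

u_star = ga_minimize(0.0, 0.5)   # control sequence driving y toward 0.5
```

Because the GA only needs cost evaluations, the plant model and cost function can be arbitrarily non-linear, which is exactly the limitation of the conventional GPC optimizer that the abstract points out.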

Xavier Blasco, Miguel Martinez, Juan Senent, Javier Sanchis
Piping layout wizard: Basic concepts and its potential for pipe route planning

The paper proposes a search method for pipe route planning using a genetic algorithm combined with several heuristics. First, the basic principle of our method is presented through its key ideas, which include the representation of pipe routes for GA operations, spatial potential energy to cover design scenarios, the fitness function, basic GA operations, a coordinate conversion procedure, and a route modification procedure using subgoal setting. In order to apply the method to actual problems and to solve them in a practical manner, the study employs various heuristics, namely the concept of direction, generation of initial individuals using intermediate points, extended two-point crossover, and dynamic selection. These heuristics are also described and their effectiveness in our method is discussed. Then, the paper presents a prototype system, the Piping Layout Wizard, which was developed based on our approach, and discusses the validity of the proposed method.

Teruaki Ito
Integration of constraint programming and evolution programs: Application to channel routing

This paper presents a novel approach to the integration of constraint programming techniques and evolution programs. In this approach the implementation of the genetic operators is based on a constraint solver, and chromosomes are arc-consistent solutions to the problem, represented as arrays of finite integer domains. This method allows constrained optimisation problems over finite integer domains with a large search space to be tackled efficiently. The paper describes the main issues arising in this integration: chromosome representation and evaluation, selection and replacement strategies, and genetic operator design. The implemented system has been applied to the channel routing problem, a particular kind of interconnection routing problem and one of the major tasks in the physical design of very large scale integration circuits.

Alvaro Ruiz-Andino, José J. Ruz
Using a genetic algorithm to select parameters for a neural network that predicts aflatoxin contamination in peanuts

Aflatoxin contamination in peanut crops is a problem of significant health and financial importance, so it would be useful to develop techniques to predict contamination levels prior to harvest. Backpropagation neural networks have been used in the past to model problems of this type; however, the development of such networks poses the complex problem of setting values for architectural features and backpropagation parameters. Genetic algorithms have been used in prior efforts to locate parameters for backpropagation neural networks. This paper describes the development of a genetic algorithm/backpropagation neural network hybrid (GA/BPN) in which a genetic algorithm is used to find architectures and backpropagation parameter values simultaneously for a backpropagation neural network that predicts aflatoxin contamination levels in peanuts based on environmental data.

C. E. Henderson, W. D. Potter, R. W. McClendon, G. Hoogenboom
Interacting with articulated figures within the PROVIS project

The main goal of our work is to allow designers to replace physical mock-ups by virtual ones in order to test the integration and space requirements of satellite components. To this end, PROVIS combines a “desktop VR” interface with an interactive 3D display. This paper describes how PROVIS allows users to interact with flexible or articulated objects, for testing maintenance procedures and component accessibility, thanks to an original use of genetic algorithms.

Hervé Luga, Olivier Balet, Yves Duthen, René Caubet
Computing the Spanish medium electrical line maintenance costs by means of evolution-based learning processes

In this paper, we deal with the problem of computing the maintenance costs of medium-voltage electrical lines in Spanish towns. To do so, we present two data analysis tools based on evolutionary algorithms, the Interval Genetic Algorithm-Programming method to perform symbolic regression and Genetic Fuzzy Rule-Based Systems to design fuzzy models, and use them to solve this problem. The results obtained are compared with other kinds of techniques: classical regression and neural modeling.

Oscar Cordón, Francisco Herrera, Luciano Sánchez
A new dissimilarity measure to improve the GA performance

The performance of Genetic Algorithms hinges on the choice of “eligible parents” for carrying genetic information from one generation to the next. Several methods have been proposed to identify the most promising mating partners for the continuation of the progeny. We propose, in this paper, a measure of dissimilarity between individuals to be considered along with their actual fitnesses. This helps more combinations of chromosomes emerge within the population, so that at least a few are better. The greater the dissimilarity between the parents, the better the chances of producing fit children. After introducing the problem, we draw an analogy from biology to illustrate why the method should work, then proceed with the implementation details and discuss the results. Apparently the philosophy of this paper contradicts some of the views held in connection with niching and speciation, where breeding within the community is preferred. However, the issues involved are different, and this aspect is dealt with in detail elsewhere in the paper.
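The selection scheme described above — scoring a candidate mate by its fitness plus its dissimilarity to the parent already chosen — can be sketched as follows. The Hamming-distance dissimilarity, the blending weight, and the OneMax toy problem are illustrative assumptions; the paper's actual measure may differ.

```python
import random

def hamming(a, b):
    """Dissimilarity as the Hamming distance between bit-string chromosomes."""
    return sum(x != y for x, y in zip(a, b))

def select_mate(first_parent, population, fitness, weight=0.5):
    """Pick the mate maximizing a blend of its own fitness and its
    dissimilarity to the already-chosen first parent."""
    def score(ind):
        return fitness(ind) + weight * hamming(ind, first_parent)
    return max((p for p in population if p is not first_parent), key=score)

# toy example: OneMax (maximize the number of 1s)
random.seed(0)
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(6)]
fit = lambda ind: sum(ind)
p1 = max(pop, key=fit)           # fittest individual as first parent
p2 = select_mate(p1, pop, fit)   # fit *and* dissimilar second parent
```

The `weight` parameter controls the trade-off the abstract discusses: at 0 this degenerates to pure fitness-based selection, while larger values push mating toward genetically distant partners.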

G. Raghavendra Rao, K. Chidananda Gowda
Strplan: A distributed planner for object-centred application domains

This paper presents a novel planner, called Strplan, which is intended to generate and validate parallel plans for structured domain models. Structured domains are here defined as a specialization of the Loosely Sort-Abstracted (LSA) models [13], which incorporates most of the features of object-oriented data models. The operational semantics of Strplan are based on the decomposition of incomplete plans into subplans that are solved separately. The criterion for producing this decomposition is given by the concept of a domain region, which is here envisaged as a grouping of substate class expressions.

Rafael Berlanga Llavori
An intelligent hybrid system for knowledge acquisition

An intelligent hybrid system is proposed. It includes an adaptive human-machine interface and a hybrid case-based reasoning component for knowledge engineering. The adaptive human-machine interface remembers past question-and-answer scenarios between the system and the end users. It employs a KA-related commonsense base to help user interaction and a housekeeping engine to search and aggregate the data relevant to the user's answers. It discriminates the professional level of the user by the fuzziness of the user's answers, and provides different interaction patterns accordingly. The hybrid case-based reasoning component combines case-based reasoning, neural networks, fuzzy theory, induction, and knowledge-based reasoning technology. Hybridizing these techniques properly enhances the robustness of the system, improves the knowledge engineering process, and promotes the quality of the developed knowledge-based systems.

Chien-Chang Hsu, Cheng-Seen Ho
Using neural nets to learn weights of rules for compositional expert systems

The knowledge base of a compositional expert system consists of a set of IF-THEN rules with uncertainties expressed as weights. During a consultation for a particular case, all applicable rules are combined and the weights of goals (final diagnoses or recommendations) are computed. The main problem when eliciting such a knowledge base from an expert is the question of the “correct” weights of the rules. Our idea is to combine the structure of knowledge obtained from the expert with weights learned from data. We choose the topology and initial settings of the neural net (number of neurons, prohibited links) according to the rules obtained from the expert. Then, after training the network, we try to interpret the weights of the connections as the uncertainties of the original rules. The paper shows some experimental results of this approach on a knowledge base for credit risk assessment.
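The mechanism described — a network whose topology mirrors the expert's rules, with prohibited links fixed at zero and the remaining weights fitted to data — can be sketched in miniature. The single-layer shape, the sigmoid combination, and the toy consultation cases are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Each rule "IF condition_i THEN goal" becomes one allowed connection;
# absent rules are prohibited links, enforced with a 0/1 mask.
rules_mask = np.array([1.0, 1.0, 0.0, 1.0])   # rule 3 does not exist
w = np.full(4, 0.5)                            # initial rule weights

def predict(x):
    """Weighted combination of fired rules, squashed to a certainty in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(x * w * rules_mask).sum()))

# gradient descent on toy consultation cases: (fired conditions, target certainty)
cases = [(np.array([1.0, 0.0, 1.0, 1.0]), 0.9),
         (np.array([0.0, 1.0, 1.0, 0.0]), 0.2)]
for _ in range(500):
    for x, t in cases:
        y = predict(x)
        # masked update: prohibited links never receive gradient
        w -= 0.5 * (y - t) * y * (1 - y) * x * rules_mask

learned = w * rules_mask  # interpret as uncertainties of the original rules
```

After training, the surviving connection weights (`learned`) play the role of the rule uncertainties the expert could not state reliably, while the mask guarantees the learned model never invents a rule the expert did not provide.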

Petr Berka, Marek Sláma
Case-, knowledge-, and optimization- based hybrid approach in AI

The hybrid approach described in this paper presents a powerful tool for the development of DSS and ES, as well as Data Mining and Pattern Recognition systems and other AI applications. Its most important advantage is the ability to make an optimal choice from the set of admissible alternatives based on a canonical optimization model which can be synthesized by using such AI methods as case-based, rule-based and analogy-based reasoning. Some applications dedicated to the portfolio selection process, regional ecological-economic control, and Dual Expert Systems are described.

Vladimir I. Donskoy
A contextual model of beliefs for communicating agents

We consider the problem of reasoning about the cognitive state of communicating agents. In order to compute belief revisions resulting from information exchanges, these agents could be ascribed models of explicit mental states. Towards this end, we introduce a model of distributed beliefs based on formalizing the assertion ≪ fact P is believed by agent X in context C ≫. These developments give rise to a meta-logic program embodying the various computational aspects (object class and instance creation, proof system, message interpreter) of a complete agent test bed.

Pierre E. Bonzon
Context-mediated behavior for AI applications

Artificial intelligence (AI) applications are often faulted for their brittleness and slowness. In this paper, we argue that both of these problems can be ameliorated if the AI program is context-sensitive, making use of knowledge about the context it is in to guide its perception, understanding, and action. We describe an approach to this problem, context-mediated behavior (CMB). CMB uses contextual schemas (c-schemas) to explicitly represent contexts. Features of the context are used to find the appropriate c-schemas, whose knowledge then guides all aspects of behavior.

Roy M. Turner
A context-sensitive, iterative approach to diagnostic problem solving

In domains where the knowledge is not represented by strong theories, or where problems are to be solved with incomplete information, problem solving needs to be a context-sensitive process. This paper presents a task-centered methodology that we used when modelling context in a diagnostic process. We identify two aspects of the notion of context as important: its role and its elements. We argue that a systematic investigation of the notion of context needs to be organised along these two aspects. Regarding the role that context plays, we distinguish between two basic issues, related to its role in shifting the focus of attention and in capturing the focus of attention. We discuss a diagnostic problem solving model, referred to as the context-sensitive iterative diagnosis (ConSID) approach, which is particularly appropriate for open and weak domains. We present an implementation of this approach as a hybrid case-based and explanation-based reasoning system.

Pinar Öztürk
A framework for developing intelligent tutoring systems incorporating reusability

The need for effective tutoring and training is mounting, especially in industry and engineering fields, which demand the learning of complex tasks and knowledge. Intelligent tutoring systems are being employed for this purpose, thus creating a need for cost-effective means of developing tutoring systems. We discuss a novel approach to developing an Intelligent Tutoring System shell that can generate tutoring systems for a wide range of domains. Our focus is to develop an ITS shell framework for the class of Generic Task expert systems. We describe the development of an ITS for an existing expert system, which serves as an evaluation test-bed for our approach.

Eman El-Sheikh, Jon Sticklen
The classification and specification of a domain independent agent architecture

The use of multi-agent systems to model and control complex domains is increasing. The application of multi-agent systems to distributed real-world domains motivates the development of domain independent agent architectures to foster the re-use of existing research. A domain independent agent architecture is defined as a set of representations, reasoning mechanisms and algorithms that can be used to define the behavior of an agent. Ideally, a classification method for agent architectures should ease the process of requirements matching between domains and architectures by allowing system designers to understand the relationship between domains and architectures. This paper presents the need for a classification schema and proposes a starting point for future discussion. The Sensible Agent architecture is then introduced within the classification schema. Sensible Agents perceive, process, and respond based on an understanding of both local and system goals. The core capabilities and internal architecture of a Sensible Agent are given as an example of how accurate specifications can help foster the re-use of research.

A. Goel, K. S. Barber
MAVE: A multi-agent architecture for virtual environments

This paper describes a Multi-agent Architecture for Virtual Environments (MAVE). The foundation of this research is the development of a two-tier architecture. The first tier of the architecture is an object oriented physical representation of the virtual environment that is designed to mimic the logical decomposition. The second tier of the architecture is designed to support the need for persistence, real-time interface to external data sources, distribution, and collaboration. MAVE addresses the need for autonomous components that support re-use, access to component level services, and intelligent behaviors.

Jeffrey Coble, Karan Harbison
Agent-based simulation of reactive, pro-active and social animal behaviour

In this paper it is shown how animal behaviour can be simulated in an agent-based manner. Different models are shown for different types of behaviour, varying from purely reactive behaviour to pro-active and social behaviour. The compositional development method for multi-agent systems, DESIRE, and its software environment support the conceptual and detailed design, and the execution, of these models. Experiments reported in the literature on animal behaviour have been simulated for a number of agent models.

Catholijn M. Jonker, Jan Treur
A fuzzy-neural multiagent system for optimisation of a roll-mill application

This article presents an industrial application of a hybrid system: the development of a fuzzy-neural prototype for optimising a roll mill. The prototype has been developed following an agent-oriented methodology called MAS-CommonKADS. The prototype has the original characteristic of being agent-oriented, i.e. each learning technique has been encapsulated in an agent. This multiagent architecture for hybridisation provides flexibility for testing different hybrid configurations. Moreover, the intelligence of the agents allows them to dynamically select the best possible hybrid configuration for applications in which no single best configuration for all situations has been determined.

Carlos A. Iglesias, José C. González, Juan R. Velasco
High-level communication protocol in a distributed multiagent system

In previous multiagent system studies, the agents interact locally (on a single processor). In this paper, we consider the organization of these agents in a physically distributed universe, we outline the management of their interactions, and we propose a way to detect and resolve conflicts. We particularly emphasize the communication aspects, considered as the basis of the cooperation, coordination, and negotiation mechanisms. Agents must deal solely with their respective tasks and leave any communication management to a specialized agent which will be responsible for any problem that may arise. Since the specialized agents need to have a global view of the system state, we define a communication model which takes into account all possible conflicting cases and the established communication models and modes.

Wided Lejouad-Chaari, Fériel Mouria-Beji
Evolving the scale of genetic search

The Genetic Algorithm often has difficulty solving problems in which the scale of important regions in the search space (and thus the type of scale needed for successful search) differs. An algorithm is proposed in which the encoding precision for real-valued chromosomal structures is evolved concurrently with the solution, allowing the Genetic Algorithm to change the scale of its search to suit the current environment. The algorithm is tested on three standard Genetic Algorithm test functions and a cardboard box manufacturing application.

John R. Podlena
Ensembles of neural networks for analogue problems

This paper discusses a technique that uses several different networks working together to find a relationship that approximates analogue training sets. Each network learns a different region of the training space, and all these regions fit together, like the pieces of a jigsaw puzzle, to cover the entire training space. This analogue approach is an extension of a technique previously developed for solving digital problems. The networks can be of any type (e.g. backprop, cascade). Moreover, virtually any other technique can be used in place of the networks: evolved polynomials, DRS, Tabu search, nearest neighbour, and other statistical techniques.

David Philpot, Tim Hendtlass
An evolutionary algorithm with a genetic encoding scheme

This paper explores a variant of an evolutionary algorithm which has no explicit parameter representation, but rather represents parameters across a common “genetic string” in a manner much more akin to the storage of genetic information in DNA. Results for several common test functions are given and compared with the performance of a standard evolutionary algorithm.

Howard Copland, Tim Hendtlass
Generating lookup tables using evolutionary algorithms

This paper describes the use of an evolutionary algorithm to develop lookup tables which consist of an ordered list of regions, each of which encloses training examples of only one category. Compared to a simpler type of lookup table which consists of an unordered list of the training points and their categories, region based tables are smaller and, in general, faster to use. The development of a region based lookup table for the Frey and Slate character recognition problem is described and the size and accuracy achieved are compared with the original Frey and Slate point based lookup table. The reasons why it outperforms the original lookup table are discussed.
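The data structure described above — an ordered list of single-category regions, queried in priority order — can be sketched as follows, together with the training-accuracy measure an evolutionary algorithm would optimize. The axis-aligned box representation and the fitness definition are illustrative assumptions, not necessarily the paper's exact encoding.

```python
# A region-based lookup table: an ordered list of (low, high, label) boxes.
# A point is classified by the FIRST region that encloses it, so earlier
# (more specific) regions shadow later, broader ones.
regions = [
    ((0.0, 0.0), (0.5, 0.5), "A"),   # inner box, checked first
    ((0.0, 0.0), (1.0, 1.0), "B"),   # outer box, fallback category
]

def classify(point, table, default=None):
    """Return the label of the first region enclosing the point."""
    for low, high, label in table:
        if all(l <= p <= h for l, p, h in zip(low, point, high)):
            return label
    return default

def fitness(table, training):
    """Fraction of training examples classified correctly -- the quantity
    an evolutionary algorithm would maximize while also rewarding
    smaller tables."""
    return sum(classify(x, table) == y for x, y in training) / len(training)

label = classify((0.2, 0.3), regions)   # falls in the inner "A" box
```

Compared to a point-based table, the search is over a handful of boxes rather than every stored training point, which is the size and speed advantage the abstract claims.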

Tim Hendtlass
A genetic algorithm for linear feature extraction

A genetic algorithm for the detection and extraction of linear features in a gray-scale image is presented. Conventional techniques for detecting linear features based on template matching and the Hough Transform rely on an exhaustive search of the solution space, thus rendering them computationally intensive, whereas techniques based on heuristic search in a state-space graph are prone to being trapped in a suboptimal solution state. On account of its building-blocks property, the genetic algorithm alleviates the need for exhaustive search, and the stochastic nature of the genetic operators makes it robust to the presence of local optima in the solution space. Experimental results on gray-scale images bring out the advantages of the genetic algorithm in comparison to the template matching-based and Hough Transform-based techniques for linear feature extraction.

Suchendra M. Bhandarkar, Jörg Zeppen, Walter D. Potter
Knowledge representation in a blackboard system for sensor data interpretation

A prototype system has been developed for the automated classification and characterisation of defects in automotive and other components, employing two separate inspection sensors: vision and electromagnetic. The development work for the electromagnetic sensor sub-system was fraught with difficulties. In particular, there was the basic problem of encoding human expertise. The reasoning carried out by the human inspectors is more complex than the experts themselves may suppose, and was not easily encapsulated. A blackboard architecture was used to integrate the different areas of expertise required for each sensor to interpret the results of the inspections. One issue discussed here is the effective use of the blackboard architecture for intelligent data fusion at all levels to improve interpretation.

S. M. C. Peers
Local information processing for decision making in decentralised sensing networks

This paper describes the consequences of local information processing for decision making in decentralised systems of sensor nodes. In decentralised data fusion systems, nodes take decisions based on information acquired locally. The ability of such nodes to fuse or combine information is linked to network organisation. Earlier work demonstrates a problem of inconsistency which arises given cyclic information flow in decentralised systems where nodes combine global information. This work shows how this inconsistency limits the decision-making capabilities of the sensor nodes. The consequences for real-world systems using decentralised processes for decision making in process monitoring, tracking and aviation are discussed.

Simukai W. Utete
A fuzzy linear approach to recognising signal profiles

In other papers we have introduced the Fuzzy Temporal Profile (FTP) model [4], through which we describe the evolution of a particular physical parameter V over time. Following this model, an approximation to the evolution curve by means of sections between a set of significant points (X0, X1, ..., Xn) is defined. Each section is defined by way of imprecise constraints on duration, on increase in value and on slope between the points connected by the section. In a previous paper [5] we presented a matching method which analyses the occurrence of certain signal events associated with a particular temporal instant, given by a fuzzy predicate, and a certain condition on value, given, in the same manner, by a fuzzy predicate. In this paper we go beyond this, tackling the following problem: given an FTP and a signal representing the evolution of a certain system, to find the degree of compatibility between the profile and a piece of this signal, and the most plausible locations within the global evolution.

P. Félix, S. Fraga, R. Marín, S. Barro
Performance of a smart velocity sensor: The impulse retina

In the context of mobile robots, we have developed an artificial neural network that performs a pre-processing of foveal vision: the Retina model. This model is adaptive, and its multi-resolution structure allows it to detect a large range of velocities. The aim of this study is to use the Retina to detect motion and extract the velocity vector from a time sequence of images. From the impulse output signals of the Retina we extract, by time-frequency analysis, the pertinent parameters which encode the motion.

Marilly Emmanuel, Coroyer Christophe, Cachard Olga, Faure Alain
Two methods of linear correlation search for a knowledge based supervised classification

We present an image classification system based on a supervised learning method. The learning phase consists of automatic rule construction: ≪ knowledge acquisition ≫ from training pixels is automatic. The obtained rules are classification rules: their conclusions are hypotheses about membership in a given class. An inference engine uses these rules to classify new pixels. The premises of the production rules are built by searching for linear correlations among the training set elements. In this paper, we present and compare two methods of linear correlation search: the first is carried out over the whole training set without distinction of classes, and the second is an intra-class search. An application to image processing in the medical field is presented, and some experimental results obtained on medical human thigh sections are reported.

Amel Borgi, Jean-Michel Bazin, Herman Akdag
TURBIO: A system for extracting information from restricted-domain texts

The most common way of acquiring information for knowledge based systems is manual. However, the high cost of this approach and the availability of alternative knowledge sources have led to an increasing use of automatic acquisition approaches. In this paper we present TURBIO, a Text-Based Intelligent System (TBIS) that extracts information contained in restricted-domain documents. The system acquires part of its knowledge about the structure of the documents and the way the information is presented (i.e. syntactic-semantic rules) from a training set of them. Then, a database is created by applying these syntactic-semantic rules to extract the information contained in the whole set of documents.

J. Turmo, N. Catala, H. Rodriguez
An approach to ellipsis detection and classification for the Arabic Language

The phenomenon of ellipsis is particularly difficult to treat automatically because it is not usually easy to characterise the elements that can be deleted. In this article, we propose a formal characterisation of the phenomenon of ellipsis and an algorithm, based on a clause parser, for localizing the elliptical parts of a sentence. We then present a number of criteria for classifying elliptical sentences, in preparation for the later resolution phase.

Kais Haddar, Abdelmajid Ben Hamadou
A review of Earley-based parser for TIG

Tree Insertion Grammar (TIG) is a compromise between Context-Free Grammars (CFG) and Tree Adjoining Grammars (TAG), that combines the efficiency of the former with the strong lexicalizing power of the latter. In this paper, we present a plain representation of TIG elementary trees that can be used directly as the input grammar for the original Earley parser without the additional considerations established in the Schabes and Waters Earley-based parser for TIG.

Victor J. Diaz, Vicente Carrillo, Miguel Toro
Generating explanations from electronic circuits

We have developed a prototype system which explains the structure and behavior of electronic circuits to assist electrical engineers. This system analyzes the structure of the given circuit and generates explanations of its operation. In our system, circuits are viewed as sentences, and their elements as words, in a word-order-free language. Knowledge of circuit structures is coded as grammar rules. Using these grammar rules, the given circuit is parsed and explanations are generated from the result of the parse.

Takushi Tanaka
The effect of a dynamical layer in neural network prediction of biomass in a fermentation process

In this paper, computational intelligence is considered as a tool (a software sensor) for state estimation and prediction of biomass concentration in a simulated fermentation process. Two different paradigms of artificial neural network are introduced as possible computational engines. The inclusion of process dynamics is inherent within the second paradigm, as a pre-processing layer. The constructed computational engines ‘infer’ the production of biomass from easily measured on-line variables. First- and second-order non-linear optimisation methods are used to train the neural networks. It is shown that the use of the pre-processing layer, which contains dynamical elements, produces better results and a significant improvement in the convergence rate of the neural networks.

Majeed Soufian, Mustapha Soufian, M. J. Dempsey
State estimation for nonlinear systems using restricted genetic optimization

In this paper we describe a new nonlinear estimator for filtering systems with nonlinear process and observation models, based on optimization with RGO (Restricted Genetic Optimization). Simulation results are used to compare the performance of this method with the EKF (Extended Kalman Filter), IEKF (Iterated Extended Kalman Filter), SNF (Second-order Nonlinear Filter), SIF (Single-stage Iterated Filter) and MSF (Monte-Carlo Simulation Filter) in the presence of different levels of noise.

Santiago Garrido, Luis Moreno, Carlos Balaguer
Identification of a nonlinear industrial process via fuzzy clustering

This paper presents a fuzzy logic approach to complex system modeling that is based on a fuzzy clustering technique. Compared with other modeling methods, the proposed approach has the advantages of simplicity, flexibility, and high accuracy. Further, it is easy to use and may be handled by an automatic procedure. An industrial process example (a heat exchanger) is provided to illustrate the performance of the proposed approach.

B. Moshiri, S. Chaychi Maleki
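
One widely used fuzzy clustering technique is fuzzy c-means; the sketch below is a plain FCM implementation on toy data, offered only to illustrate the class of method, not the paper's exact identification procedure.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means: alternate between weighted center updates
    and membership updates until the partition stabilises."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # fuzzy memberships
    for _ in range(iters):
        W = U ** m                                 # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))             # closer -> higher degree
        U /= U.sum(axis=1, keepdims=True)          # rows sum to one
    return centers, U

# two well-separated operating regions -> centers near 0.1 and 10.1
X = np.array([[0.0], [0.1], [0.2], [10.0], [10.1], [10.2]])
centers, U = fuzzy_c_means(X, c=2)
```

In identification, each cluster typically anchors one local model, with the memberships `U` acting as the fuzzy rule activations.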
Applying computer vision techniques to traffic monitoring tasks

This paper presents a method for tracking and segmenting vehicles in a traffic scene. The approach is based on a frame-to-frame segmentation followed by a tracking process. As opposed to usual segmentation methods, our method feeds tracking information back into the segmentation to improve its results. Several facilities are provided for traffic monitoring, such as surveillance of vehicle trajectories, segmentation of vehicle shapes, measuring the mean velocity of the traffic, counting the vehicles moving on the lanes of a road or motorway, counting the vehicles that stop at a junction, and detecting events such as a vehicle stopping on a road or a possible accident.

Jorge Badenas, Filiberto Pla
A self-diagnosing distributed monitoring system for nuclear power plants

Sensor data fusion and interpretation, together with sensor failure detection, isolation and identification, are extremely important activities for the safety of a nuclear power plant. In particular, they become critical in cases of conflicts among the data. If the monitored system's description model is correct and its components work properly, then incompatibilities among data may only be attributed to temporary deterioration or permanent breakage of one or more sensors. This paper introduces and discusses the conception of a distributed monitoring system able to assign each sensor a statistically evaluated relative degree of reliability, which is especially useful for devices situated in dangerous or hard-to-reach areas inside huge and complex power plants.

Aldo Franco Dragoni, Paolo Giorgini, Maurizio Panti
Overload screening of transmission systems using neural networks

The process of determining whether a power system is in a secure or insecure state is a crucial task which must be addressed on-line in any Energy Management System. In this paper, an Artificial Neural Network capable of accurately identifying the set of harmful contingencies is presented, along with several results obtained from a real-size power network. The proposed approach makes use of classical numerical techniques to compensate the ANN's inputs so that it can deal with topological changes in the power system.

J. Riquelme, A. Gómez, J. L. Martínez, J. A. Peças Lopes
On line industrial diagnosis: An attempt to apply artificial intelligence techniques to process control

Three Knowledge Based Systems (KBSs), performing diagnosis and integrated in a Knowledge Based Supervisory System (KBSS), are presented. The systems work on line in a continuous process factory, and one of them is routinely used in the control room. The paper summarises the conceptual framework that guided the design of the KBSS and then describes the fault identification module of each diagnostician. Especially relevant were the different approaches tried to deal with the dynamic nature of the problem, looking for a good trade-off between expressiveness and simplicity of the final system. Some experimental results, obtained from actual system performance at a beet sugar factory, and major conclusions are included.

Carlos Alonso González, Belarmino Pulido Junquera, Gerardo Acosta
An AI modelling approach to understanding a complex manufacturing process

A manufacturer who wishes to set an improved quality target has the problem of setting the target high enough to make a significant improvement in customer satisfaction but, at the same time, not so high that the cost of rejected products becomes counterproductive. The problem is difficult because of the uncertain relationship between quality level, customer satisfaction and cost. This paper describes an approach to understanding this complex relationship for a manually intensive production line. An AI-based modelling and simulation approach is used to investigate quantitatively the consequences of changing manufacturing parameters. The approach allows production engineers to improve their understanding of the production process by formalising their knowledge in a model that can be tested in a variety of scenarios.

M. A. Rodrigues, L. Bottaci
Identification and defect inspection with ultrasonic techniques in foundry pieces

This work describes an ultrasonic-based inspection and identification system for pieces. Given the well-known versatility and reliability of ultrasonic sensors for working in industrial environments, the system is developed for application to foundry pieces which will later be machined in the automobile industry. Starting from the treatment of the ultrasonic signal reflected by an object and applying artificial intelligence techniques such as neural networks, the different classes of pieces are identified and machinable pieces are discriminated from non-machinable ones. In this field, ultrasonics appears to be a powerful technique for the quality control of processes because of its capacity to recognise and classify pieces. This work forms part of the research tasks of a collaboration project between the Engineering Control Group of the University of Cantabria and FUNIDIMOTOR S.A. (Nissan Group).

A. Lázaro, I. Serrano, J. P. Oria, C. de Miguel
An OOM-KBES approach for fault detection and diagnosis

This paper presents an integrated approach to intelligent building research, using both object-oriented modeling (OOM) and knowledge-based expert system (KBES) methodologies and technologies for fault detection and diagnosis (FDD) of building HVAC systems. The approach consists of five basic steps: 1) establish a model simulating the behavior of the target system using object-oriented design methodologies; 2) identify all types of faults in the target system and extract rules for each process to build the knowledge bases; 3) integrate the knowledge bases into the system model to allow the system to perform the FDD task on itself; 4) build an on-line monitoring system to collect all real-time setpoint data; and 5) make inferences against the knowledge bases based on real-time data and generate reports.

Yunfeng Xiao, Chia Y. Han
Model-based fault simulation: A reduction method for the diagnosis of electrical components

Today's systems, for example motor vehicles, can be characterized by a variety of subsystems and components. For the automatic investigation of all possible breakdowns and their consequences, a simulation of the entire system is indispensable. Without taking particular measures, a large number of individual simulations are necessary in order to reproduce all the electrical fault conditions of the subsystems or components, and the computing times required for this are astronomical. The following article describes a method which makes it possible to restrict the number of simulation runs to a practical number prior to the simulation.

Andreas Heinzelmann, Paula Chammas
A model-based automated diagnosis algorithm

The explosive growth of the Internet and the World Wide Web has made management tasks such as configuration, monitoring, diagnosis, and fault correction much more complicated. An automated management system is badly needed to relieve system managers of those tasks. Some tools use Case-Based Reasoning or Rule-Based Reasoning to pinpoint faults; however, they have difficulties in solving problems that require deep-level knowledge. Although some model-based diagnostic systems have been developed, they are mainly used to tackle problems in other domains. In this paper, a goal-driven, model-based automated diagnosis algorithm is proposed.

Along Lin
Cyclic forecasting with recurrent neural network

General statistical methods such as the Box-Jenkins ARIMA(p,d,q) model have long been applied in forecasting. Statistical methods such as auto-regression have been used as an efficient and accurate way of forecasting in certain applications such as stock-market forecasting. However, when applying an auto-regressive method one still has to monitor the forecasting system and determine whether to adjust the parameters to reduce forecasting errors. A recurrent neural network has been designed to produce the forecasts of auto-regression; the weight-adjusting strategies of the recurrent neural network can then be used to continuously adjust the parameters based on the forecasting errors. We therefore obtain the forecasts efficiently, based on auto-regression, without having to monitor the forecasting system constantly and adjust the parameters manually. This provides a very effective tool for forecasting monthly cyclic trends in imports and exports at a harbor.

Shaun-inn Wu
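
The core idea, auto-regression realised as an adaptive linear network whose weights play the role of AR coefficients and are corrected continuously from the forecasting error, can be sketched as follows. This is a hypothetical LMS-style sketch on a toy cyclic series, not the paper's network.

```python
import numpy as np

def lms_ar_forecast(series, p=2, lr=0.01):
    """One-step-ahead AR(p) forecasting with weights adapted on-line
    from each forecasting error (LMS rule), so no manual re-tuning of
    the AR parameters is needed."""
    w = np.zeros(p)
    preds = []
    for t in range(p, len(series)):
        x = series[t - p:t][::-1]           # last p observations
        y_hat = w @ x                       # one-step-ahead forecast
        preds.append(y_hat)
        w += lr * (series[t] - y_hat) * x   # adapt on the error
    return np.array(preds), w

# toy cyclic series standing in for monthly import/export trends
series = np.sin(0.1 * np.arange(400))
preds, w = lms_ar_forecast(series, p=2, lr=0.1)
```

Because the weights keep adapting, the forecasting error shrinks over time without any operator intervention.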
New algorithms to predict secondary structures of RNA macromolecules

In this paper we present, under the Hypothesis of Linearity of Energy (HLE), new algorithms to solve the problem of predicting, by energy computation, the stable secondary structures of RNA. We present a dynamic programming algorithm to compute the free energies of the stable secondary structures, and a traceback algorithm to predict these structures. Our algorithm for computing the free energies has complexity O(n³) in computing time and O(n²) in memory space, where n is the length of the string. Our prediction algorithm has complexity O(n·log₂(n)) in computing time. Compared to other algorithms under the HLE, ours have the advantage of taking into account the energetic contribution of all the unpaired bases, in addition to that of the paired ones.

Mourad Elloumi
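
The general shape of such a dynamic programme plus traceback is shown below in its simplest form, a Nussinov-style O(n³) recursion that maximises base pairs. This is only illustrative of the fill/traceback scheme; the paper optimises free energy under its HLE model rather than counting pairs.

```python
def rna_fold(seq):
    """Fill an O(n^2) table in O(n^3) time, then trace back one optimal
    (maximum base-pair) secondary structure in dot-bracket notation."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    M = [[0] * n for _ in range(n)]
    for span in range(1, n):                    # fill by subsequence length
        for i in range(n - span):
            j = i + span
            best = M[i + 1][j]                  # base i left unpaired
            for k in range(i + 1, j + 1):       # or base i paired with k
                if (seq[i], seq[k]) in pairs:
                    left = M[i + 1][k - 1] if k > i + 1 else 0
                    right = M[k + 1][j] if k < j else 0
                    best = max(best, 1 + left + right)
            M[i][j] = best
    structure = ["."] * n                       # traceback
    stack = [(0, n - 1)]
    while stack:
        i, j = stack.pop()
        if i >= j:
            continue
        if M[i][j] == M[i + 1][j]:
            stack.append((i + 1, j))
            continue
        for k in range(i + 1, j + 1):
            if (seq[i], seq[k]) in pairs:
                left = M[i + 1][k - 1] if k > i + 1 else 0
                right = M[k + 1][j] if k < j else 0
                if M[i][j] == 1 + left + right:
                    structure[i], structure[k] = "(", ")"
                    stack.append((i + 1, k - 1))
                    stack.append((k + 1, j))
                    break
    return M[0][n - 1], "".join(structure)

score, structure = rna_fold("GGGAAACCC")
```

An energy-based variant replaces the pair count with stacking and loop energy terms, which is where the unpaired-base contributions highlighted in the abstract enter.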
A hybrid GA statistical method for the forecasting problem: The prediction of the river Nile inflows

The prediction of time series phenomena is a hard and complex task. Many statistical models have been used to solve such a task. The selection of a proper statistical model and the setup of its parameters (in terms of the number of parameters and their values) are difficult tasks, usually solved by trial and error. This paper presents a hybrid system that integrates genetic algorithms, as a search algorithm, with traditional statistical models to overcome the model selection and tuning problems. The system is applied to the domain of river Nile inflow forecasting, which is characterized by the availability of a large amount of data and prediction models. The models developed by the proposed system are then compared with other models such as traditional statistical methods and ANNs.

Ashraf H. Abdel-Wahab, Mohammed E. El-Telbany, Samir I. Shaheen
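
As a minimal sketch of the GA-plus-statistics idea (entirely our own toy, with a made-up synthetic series and an AIC fitness; the paper's models and data differ), a genetic algorithm can search over which lags enter a linear predictor instead of selecting the model by trial and error:

```python
import numpy as np

rng = np.random.default_rng(1)

def aic(y, mask, max_lag=6):
    """Fitness of a lag-selection bit string: AIC of the least-squares
    linear predictor built from the selected lags."""
    lags = [k + 1 for k in range(max_lag) if mask[k]]
    if not lags:
        return np.inf
    X = np.column_stack([y[max_lag - k:len(y) - k] for k in lags])
    t = y[max_lag:]
    coef, *_ = np.linalg.lstsq(X, t, rcond=None)
    mse = np.mean((t - X @ coef) ** 2)
    return len(t) * np.log(mse) + 2 * len(lags)

def ga_select(y, pop=20, gens=30, max_lag=6):
    """Tiny GA: elitist truncation selection, one-point crossover and
    bit-flip mutation over lag-selection bit strings."""
    P = rng.integers(0, 2, (pop, max_lag))
    best, best_score = P[0], np.inf
    for _ in range(gens):
        scores = np.array([aic(y, ind, max_lag) for ind in P])
        P = P[np.argsort(scores)]
        if scores.min() < best_score:
            best, best_score = P[0].copy(), scores.min()
        children = []
        for _ in range(pop - pop // 2):
            a = P[rng.integers(0, pop // 2)]          # parents from the
            b = P[rng.integers(0, pop // 2)]          # elite half
            cut = rng.integers(1, max_lag)
            child = np.where(np.arange(max_lag) < cut, a, b)
            child = np.where(rng.random(max_lag) < 0.1, 1 - child, child)
            children.append(child)
        P = np.vstack([P[:pop // 2], np.array(children)])
    return [k + 1 for k in range(max_lag) if best[k]]

# synthetic AR(2) 'inflow' series: y_t = 1.2 y_{t-1} - 0.5 y_{t-2} + noise
e = rng.normal(0, 0.1, 400)
y = np.zeros(400)
for t in range(2, 400):
    y[t] = 1.2 * y[t - 1] - 0.5 * y[t - 2] + e[t]
lags = ga_select(y)
```

The GA thus automates both model selection (which lags) and tuning (the coefficients, fitted inside the fitness function), which is the division of labour the abstract describes.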
Backmatter
Metadata
Title
Methodology and Tools in Knowledge-Based Systems
edited by
José Mira
Angel Pasqual del Pobil
Moonis Ali
Copyright year
1998
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-69348-2
Print ISBN
978-3-540-64582-5
DOI
https://doi.org/10.1007/3-540-64582-9