About this Book

The papers in this volume comprise the refereed proceedings of the conference ‘Artificial Intelligence in Theory and Practice’ (IFIP AI 2008), which formed part of the 20th World Computer Congress of IFIP, the International Federation for Information Processing (WCC-2008), held in Milan, Italy, in September 2008. The conference is organised by the IFIP Technical Committee on Artificial Intelligence (Technical Committee 12) and its Working Group 12.5 (Artificial Intelligence Applications). All papers were reviewed by at least two members of our Program Committee. Final decisions were made by the Executive Program Committee, which comprised John Debenham (University of Technology, Sydney, Australia), Ilias Maglogiannis (University of the Aegean, Samos, Greece), Eunika Mercier-Laurent (KIM, France) and myself. The best papers were selected for the conference, either as long papers (maximum 10 pages) or as short papers (maximum 5 pages), and are included in this volume. The international nature of IFIP is amply reflected in the large number of countries represented here. The conference also featured invited talks by Prof. Nikola Kasabov (Auckland University of Technology, New Zealand) and Prof. Lorenza Saitta (University of Piemonte Orientale, Italy). I should like to thank the conference chair, John Debenham, for all his efforts, and the members of our program committee for reviewing papers to a very tight deadline.

Table of Contents

Frontmatter

Agents 1

A Light-Weight Multi-Agent System Manages 802.11 Mesh Networks

A light-weight multi-agent system is employed in a “self-organisation of multi-radio mesh networks” project to manage 802.11 mesh networks. As 802.11 mesh networks can be extremely large, the two main challenges are the scalability and stability of the solution. The basic approach is that of a distributed, light-weight, co-operative multiagent system that guarantees scalability. As the solution is distributed, it is unsuited to achieving any global optimisation goal — in any case, we argue that global optimisation of mesh network performance in any significant sense is not feasible in real situations that are subject to unanticipated perturbations and external intervention. Our overall goal is simply to reduce maintenance costs for such networks by removing the need for humans to tune the network settings. Stability of the algorithms is therefore our main concern.

Ante Prodan, John Debenham

Decisions with multiple simultaneous goals and uncertain causal effects

A key aspect of decision-making in a disaster response scenario is the capability to evaluate multiple, simultaneously perceived goals. Current competing approaches to building decision-making agents are either mental-state based, such as BDI, or founded on decision-theoretic models, such as MDP. A BDI agent chooses heuristically among several goals, while an MDP agent searches for a policy to achieve a specific goal. In this paper we develop a preferences model to decide among multiple simultaneous goals. We propose a pattern, which follows a decision-theoretic approach, to evaluate the expected causal effects of the observable and non-observable aspects that inform each decision. We focus on yes-or-no decisions (i.e., pursue or ignore a goal) and illustrate the proposal using the RoboCupRescue simulation environment.

Paulo Trigo, Helder Coelho

Agent Based Frequent Set Meta Mining: Introducing EMADS

In this paper we introduce EMADS, the Extendible Multi-Agent Data mining System, to support the dynamic creation of communities of data mining agents; we explore the capabilities of such agents and demonstrate, by experiment, their application to data mining on distributed data. Although EMADS is not restricted to one data mining task, the study described here, for the sake of brevity, concentrates on agent-based Association Rule Mining (ARM), in particular what we refer to as frequent set meta mining (or Meta ARM). A full description of our proposed Meta ARM model is presented, in which we describe the concept of Meta ARM and go on to describe and analyse a number of potential solutions in the context of EMADS. Experimental results are considered in terms of the number of data sources, the number of records in the data sets and the number of attributes represented.

Kamal Ali Albashiri, Frans Coenen, Paul Leng

Agents 2

On the evaluation of MAS development tools

Recently, a great number of methods and frameworks for developing multiagent systems (MAS) have appeared. There is currently no established framework for evaluating MAS development environments, and choosing one framework over another is a difficult task. The main contributions of this paper are: (1) a brief analysis of the state of the art in the evaluation of MAS engineering; (2) a complete list of criteria that helps in the evaluation of multiagent system development environments; (3) a quantitative evaluation technique; (4) an evaluation of the Ingenias methodology and its development environment using this evaluation framework.

Emilia Garcia, Adriana Giret, Vicente Botti

Information-Based Planning and Strategies

The foundations of information-based agency are described, and the principal architectural components are introduced. The agent’s deliberative planning mechanism manages interaction using plans and strategies in the context of the relationships the agent has with other agents, and is the means by which those relationships develop. Finally strategies are described that employ the deliberative mechanism and manage argumentative dialogues with the aim of achieving the agent’s goals.

John Debenham

Teaching Autonomous Agents to Move in a Believable Manner within Virtual Institutions

Believability of computerised agents is a growing area of research. This paper is focused on one aspect of believability — believable movements of avatars in normative 3D Virtual Worlds called Virtual Institutions. It presents a method for implicit training of autonomous agents in order to “believably” represent humans in Virtual Institutions. The proposed method does not require any explicit training efforts from human participants. The contribution is limited to the lazy learning methodology based on imitation and algorithms that enable believable movements by a trained autonomous agent within a Virtual Institution.

A. Bogdanovych, S. Simoff, M. Esteva, J. Debenham

Data Mining

Mining Fuzzy Association Rules from Composite Items

This paper presents an approach for mining fuzzy Association Rules (ARs) relating the properties of composite items, i.e. items that each feature a number of values derived from a common schema. We partition the values associated with properties into fuzzy sets in order to apply fuzzy Association Rule Mining (ARM). This paper describes the process of deriving the fuzzy sets from the properties associated with composite items and a unique Composite Fuzzy Association Rule Mining (CFARM) algorithm, founded on the certainty factor interestingness measure, to extract fuzzy association rules. The paper demonstrates the potential of composite fuzzy property ARs, and that a more succinct set of property ARs can be produced using the proposed approach than is generated using a non-fuzzy method.

M. Sulaiman Khan, Maybin Muyeba, Frans Coenen
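The partitioning of numeric property values into fuzzy sets can be sketched with triangular membership functions. This is an assumption for illustration only: the abstract does not specify the membership shape, and the property name and cut points below are invented.

```python
def triangular_memberships(value, cuts):
    """Map a numeric property value to membership degrees over fuzzy sets
    defined by triangular functions centred on the given cut points."""
    degrees = []
    for k, c in enumerate(cuts):
        left = cuts[k - 1] if k > 0 else c       # shoulder of the leftmost set
        right = cuts[k + 1] if k + 1 < len(cuts) else c
        if value == c:
            d = 1.0
        elif left < value < c:
            d = (value - left) / (c - left)
        elif c < value < right:
            d = (right - value) / (right - c)
        else:
            d = 0.0
        degrees.append(d)
    return degrees

# A hypothetical "protein content" property partitioned into Low/Medium/High
# fuzzy sets centred at 5, 15 and 25:
print(triangular_memberships(10, [5, 15, 25]))  # [0.5, 0.5, 0.0]
```

A value halfway between two centres belongs to both sets with degree 0.5, which is what lets fuzzy ARM count partial support instead of forcing a crisp partition.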

P-Prism: A Computationally Efficient Approach to Scaling up Classification Rule Induction

Top Down Induction of Decision Trees (TDIDT) is the most commonly used method of constructing a model from a dataset in the form of classification rules to classify previously unseen data. Alternative algorithms have been developed, such as the Prism algorithm. Prism constructs modular rules that are qualitatively better than rules induced by TDIDT. However, with the increasing size of databases, many existing rule learning algorithms have proved to be computationally expensive on large datasets. To tackle the problem of scalability, parallel classification rule induction algorithms have been introduced. As TDIDT is the most popular classifier, even though there are strongly competitive alternative algorithms, most parallel approaches to inducing classification rules are based on TDIDT. In this paper we describe work on a distributed classifier that induces classification rules in a parallel manner based on Prism.

Frederic T. Stahl, Max A. Bramer, Mo Adda

Applying Data Mining to the Study of Joseki

Go is a strategic two-player board game. Many studies have been devoted to go in general, and to joseki, localized exchanges of stones that are considered fair for both players. We give an algorithm that finds and catalogues as many joseki as it can, as well as the global circumstances under which they are likely to be played, by analyzing a large number of professional go games. The method applies several concepts, e.g., prefix trees, to extract knowledge from the vast amount of data.

Michiel Helvensteijn
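The prefix-tree idea mentioned in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's implementation; the move names and the frequency threshold are invented.

```python
from collections import defaultdict

class MoveTrie:
    """Prefix tree over move sequences; counts how often each prefix occurs."""
    def __init__(self):
        self.count = 0
        self.children = defaultdict(MoveTrie)

    def insert(self, moves):
        node = self
        for move in moves:
            node = node.children[move]
            node.count += 1

    def frequent_prefixes(self, min_count, prefix=()):
        """Yield every move sequence seen at least min_count times."""
        for move, child in self.children.items():
            seq = prefix + (move,)
            if child.count >= min_count:
                yield seq, child.count
                yield from child.frequent_prefixes(min_count, seq)

trie = MoveTrie()
for game in [["d4", "q16", "c3"], ["d4", "q16", "r4"], ["d4", "q16", "c3"]]:
    trie.insert(game)
print(sorted(trie.frequent_prefixes(2)))
```

Sequences shared by many professional games surface as high-count prefixes; cataloguing joseki would additionally need board-symmetry normalisation and locality analysis, which are omitted here.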

A Fuzzy Semi-Supervised Support Vector Machines Approach to Hypertext Categorization

Hypertext/text domains are characterized by several tens or hundreds of thousands of features. This represents a challenge for supervised learning algorithms which have to learn accurate classifiers using a small set of available training examples. In this paper, a fuzzy semi-supervised support vector machines (FSS-SVM) algorithm is proposed. It tries to overcome the need for a large labelled training set. For this, it uses both labelled and unlabelled data for training. It also modulates the effect of the unlabelled data in the learning process. Empirical evaluations with two real-world hypertext datasets showed that, by additionally using unlabelled data, FSS-SVM requires less labelled training data than its supervised version, support vector machines, to achieve the same level of classification performance. Also, the incorporated fuzzy membership values of the unlabelled training patterns in the learning process have positively influenced the classification performance in comparison with its crisp variant.

Houda Benbrahim, Max Bramer

Neural Networks

Estimation of Neural Network Parameters for Wheat Yield Prediction

Precision agriculture (PA) and information technology (IT) are closely interwoven: the former usually refers to the application of modern technology to agriculture. Due to the use of sensors and GPS technology, large amounts of data are collected in today's agriculture, and making use of those data via IT often leads to dramatic improvements in efficiency. The challenge is to turn these raw data into useful information. This paper deals with suitable modeling techniques for those agricultural data where the objective is to uncover the existing patterns. In particular, the use of feed-forward backpropagation neural networks is evaluated and suitable parameters are estimated, enabling yield prediction based on cheaply available site data. Based on this prediction, economic or environmental optimization of, e.g., fertilization can be carried out.

Georg Ruß, Rudolf Kruse, Martin Schneider, Peter Wagner

Enhancing RBF-DDA Algorithm’s Robustness: Neural Networks Applied to Prediction of Fault-Prone Software Modules

Many researchers and organizations are interested in creating a mechanism capable of automatically predicting software defects. In recent years, machine learning techniques have been used in several studies with this goal. Much recent research uses data originating from the NASA (National Aeronautics and Space Administration) IV&V (Independent Verification & Validation) Facility Metrics Data Program (MDP). We have recently applied a constructive neural network (RBF-DDA) to this task, yet MLP neural networks have not been investigated using these data. We have observed that these data sets contain inconsistent patterns, that is, patterns with the same input vector belonging to different classes. This paper has two main objectives: (i) to propose a modified version of RBF-DDA, named RBF-eDDA (RBF trained with enhanced Dynamic Decay Adjustment algorithm), which tackles inconsistent patterns, and (ii) to compare RBF-eDDA and MLP neural networks in software defect prediction. The simulations reported in this paper show that RBF-eDDA is able to correctly handle inconsistent patterns and that it obtains results comparable to those of MLP on the NASA data sets.

Miguel E. R. Bezerra, Adriano L. I. Oliveira, Paulo J. L. Adeodato, Silvio R. L. Meira

Learning

A Study with Class Imbalance and Random Sampling for a Decision Tree Learning System

Sampling methods are a direct approach to tackle the problem of class imbalance. These methods sample a data set in order to alter the class distributions. Usually these methods are applied to obtain a more balanced distribution. An open-ended question about sampling methods is which distribution can provide the best results, if any. In this work we develop a broad empirical study aiming to provide more insights into this question. Our results suggest that altering the class distribution can improve the classification performance of classifiers considering AUC as a performance metric. Furthermore, as a general recommendation, random over-sampling to balance distribution is a good starting point in order to deal with class imbalance.

Ronaldo C. Prati, Gustavo E. A. P. A. Batista, Maria Carolina Monard
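Random over-sampling to a balanced distribution, the recommended starting point above, can be sketched as follows. The function and data are illustrative, not taken from the paper.

```python
import random

def random_oversample(X, y, seed=0):
    """Duplicate minority-class examples at random until every class
    has as many examples as the largest class."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    target = max(len(rows) for rows in by_class.values())
    Xb, yb = [], []
    for label, rows in by_class.items():
        extra = [rng.choice(rows) for _ in range(target - len(rows))]
        for row in rows + extra:
            Xb.append(row)
            yb.append(label)
    return Xb, yb

X = [[0], [1], [2], [3], [4], [5]]
y = [0, 0, 0, 0, 0, 1]            # 5:1 class imbalance
Xb, yb = random_oversample(X, y)
print(yb.count(0), yb.count(1))   # 5 5
```

Because duplicates of minority examples are added rather than majority examples removed, no information is discarded, at the cost of a larger training set.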

Answer Extraction for Definition Questions using Information Gain and Machine Learning

Extracting nuggets (pieces of an answer) is a very important process in question answering systems, especially in the case of definition questions. Although there are advances in nugget extraction, the problem is finding general and flexible patterns that produce as many useful definition nuggets as possible. Nowadays, patterns are obtained manually or automatically and are then matched against sentences. In contrast to the traditional way of working with patterns, we propose a method using information gain and machine learning instead of pattern matching. We classify sentences as likely to contain nuggets or not. We also analyze separately the nuggets that lie to the left and to the right of the target term (the term to define) in a sentence. We performed different experiments with the collections of questions from TREC 2002, 2003 and 2004, and the F-measures obtained are comparable with those of the participating systems.

Carmen Martínez-Gil, A. López-López

Batch Reinforcement Learning for Controlling a Mobile Wheeled Pendulum Robot

In this paper we present an application of Reinforcement Learning (RL) methods in the field of robot control. The main objective is to analyze the behavior of batch RL algorithms when applied to a mobile robot of the kind called Mobile Wheeled Pendulum (MWP). In this paper we focus on the common problem in classical control theory of following a reference state (e.g., a position set point) and try to solve it by RL. In this case, the state space of the robot has one more dimension, in order to represent the desired variable state, while the cost function is evaluated considering the difference between the state and the reference. Within this framework some interesting aspects arise, like the ability of the RL algorithm to generalize to reference points never considered during the training phase. The performance of the learning method has been empirically analyzed and, when possible, compared to a classic control algorithm, namely linear quadratic optimal control (LQR).

Andrea Bonarini, Claudio Caccia, Alessandro Lazaric, Marcello Restelli

Knowledge Management

Optimizing Relationships Information in Repertory Grids

The Repertory Grid method is widely used in knowledge engineering to infer functional relationships between constructs given by an expert. The method ignores information that could be used to infer more precise dependencies. This paper proposes an improvement to take advantage of the information that is ignored in the current method. Furthermore, this improvement fixes several other limitations of the original method, such as the forced choice between two discrete values as a similarity pole or a contrast pole, the arbitrary measurement of distances, the unit-scale dependency and the normalization, among others. The idea is to use linear regression to estimate the correlation between constructs and use the fitting error as a distance measure.

Enrique Calot, Paola Britos, Ramón García-Martínez

Modeling Stories in the Knowledge Management Context to Improve Learning Within Organizations

Knowledge Management has always been considered a problem of acquiring, representing and using information and knowledge about problem solving methods. However, the complexity reached by organizations over recent years has deeply changed the role of Knowledge Management. Today, it is not possible to take care of the knowledge involved in decision making processes without taking care of the social context where it is produced. This point has direct implications for learning processes and the education of newcomers: a decision making process to solve a problem is composed not only of a sequence of actions (i.e. the know-how aspect of knowledge), but also of a number of social interconnections between the people involved in their implementation (i.e. the social nature of knowledge). Thus, Knowledge Management should provide organizations with new tools that consider both these aspects in the development of systems to support newcomers in learning about their new jobs. This paper investigates how this is possible through the integration of storytelling and case-based reasoning methodologies.

Stefania Bandini, Federica Petraglia, Fabio Sartori

Knowledge Modeling Framework for System Engineering Projects

System Engineering (SE) projects encompass knowledge-intensive tasks that involve extensive problem solving and decision making activities among interdisciplinary teams. Management of the knowledge emerging in previous SE projects is vital for organizational process improvement. To fully exploit this intellectual capital, it must be made explicit and shared among system engineers. In this context, we propose a knowledge modelling framework for system engineering projects. Our main objective is to provide a semantic description for knowledge items created and/or used in system engineering processes in order to facilitate their reuse. The framework is based on a set of layered ontologies in which entities such as domain concepts, actors, decision processes and artefacts are interlinked to capture explicit as well as implicit engineering project knowledge.

Olfa Chourabi, Yann Pollet, Mohamed Ben Ahmed

Foundations

Machines with good sense: How can computers become capable of sensible reasoning?

Good sense can be defined as the quality someone has of making sensible decisions about what to do in specific situations; it can also be defined as good judgment. However, in order to have good sense, people have to use common sense knowledge, and computers are no different. Nowadays, computers are still not able to make sensible decisions, and one of the reasons is that they lack common sense. This paper focuses on OMCS-Br, a collaborative project that makes use of web technologies to gather common sense knowledge from the general public and use it in computer applications. We present how people can contribute the knowledge computers need to perform common sense reasoning and, therefore, to make good sense decisions. In this manner, it is hoped that more usable software can be developed.

Junia C. Anacleto, Ap. Fabiano Pinatti de Carvalho, Eliane N. Pereira, Alexandre M. Ferreira, Alessandro J. F. Carlos

Making Use of Abstract Concepts–Systemic-Functional Linguistics and Ambient Intelligence

One of the challenges for ambient intelligence is to embed technical artefacts into human work processes in such a way that they support the sense making processes of human actors instead of placing new burdens upon them. This successful integration requires an operational model of context. Such a model of context is particularly important for disambiguating abstract concepts that have no clear grounding in the material setting of the work process. This paper examines some of the strengths and current limitations in a systemic functional model of context and concludes by suggesting that the notions of instantiation and stratification can be usefully employed.

Jörg Cassens, Rebekah Wegener

Making Others Believe What They Want

We study the interplay between argumentation and belief revision within the MAS framework. When an agent uses an argument to persuade another one, he must consider not only the proposition supported by the argument, but also the overall impact of the argument on the beliefs of the addressee. Different arguments lead to different belief revisions by the addressee. We propose an approach whereby the best argument is defined as the one which is both rational and the most appealing to the addressee.

Guido Boella, Célia Costa da Pereira, Andrea G. B. Tettamanzi, Leendert van der Torre

Foundation for Virtual Experiments to Evaluate Thermal Conductivity of Semi- and Super-Conducting Materials

Thermal conductivity of solids provides an ideal system for analysis by numerical experiment, currently known as virtual experiment. Here, the model is a numerical model, which is dynamic in nature, as the parameters are interrelated. The present paper discusses the steps involved in conducting virtual experiments using Automated Reasoning for simulation to evaluate the thermal conductivity of the semiconducting materials Ge and Mg₂Sn and the superconducting material YBCO, close to the experimental values.

R. M. Bhatt, R. P. Gairola

Applications 1

Intelligent Systems Applied to Optimize Building’s Environments Performance

By understanding a building as a dynamic entity capable of adapting itself not only to changing environmental conditions but also to occupants' living habits, high standards of comfort and user satisfaction can be achieved. An intelligent system architecture integrating neural networks, expert systems and negotiating agent technologies is designed to optimize an intelligent building's performance. Results are promising and encourage further research in the field of AI applications in building automation systems.

E. Sierra, A. Hossian, D. Rodríguez, M. García-Martínez, P. Britos, R. García-Martínez

A Comparative Analysis of One-class Structural Risk Minimization by Support Vector Machines and Nearest Neighbor Rule

One-class classification is an important problem with applications in several different areas such as outlier detection and machine monitoring. In this paper we propose a novel method for one-class classification, referred to as kernel kNNDDSRM. This is a modification of an earlier algorithm, kNNDDSRM, which aims to make the method able to build more flexible descriptions with the use of the kernel trick. This modification does not affect the algorithm's main feature, which is the significant reduction in the number of stored prototypes in comparison to NNDD. To assess the results, we carried out experiments with synthetic and real data to compare the method with the support vector data description (SVDD) method. The experimental results show that our one-class classification approach outperformed SVDD in terms of the area under the receiver operating characteristic (ROC) curve in six out of eight data sets. The results also show that kernel kNNDDSRM remarkably outperformed kNNDDSRM.

George G. Cabral, Adriano L. I. Oliveira

Estimation of the Particle Size Distribution of a Latex using a General Regression Neural Network

This paper presents a neural-based model for estimating the particle size distribution (PSD) of a polymer latex, which is an important physical characteristic that determines some end-use properties of the material (e.g., when it is used as an adhesive, a coating, or an ink). The PSD of a dilute latex is estimated from combined DLS (dynamic light scattering) and ELS (elastic light scattering) measurements, taken at several angles. To this effect, a neural network approach is used as a tool for solving the involved inverse problem. The method utilizes a general regression neural network (GRNN), which is able to estimate the PSD on the basis of both the average intensity of the scattered light in the ELS experiments, and the average diameters calculated from the DLS measurements. The GRNN was trained with a large set of measurements simulated from typical asymmetric PSDs, represented by unimodal normal-logarithmic distributions of variable geometric mean diameters and variances. The proposed approach was successfully evaluated on the basis of both simulated and experimental examples.

G. Stegmayer, J. Vega, L. Gugliotta, O. Chiotti
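A GRNN prediction is essentially a Gaussian-kernel-weighted average of the training targets (Nadaraya–Watson regression). Below is a minimal sketch of that mechanism with invented data and smoothing parameter; the paper's actual network is trained on simulated light-scattering measurements.

```python
import math

def grnn_predict(x, train_X, train_y, sigma=0.5):
    """General regression neural network output: each training pattern
    contributes its target, weighted by a Gaussian kernel on the input distance."""
    weights = [
        math.exp(-sum((a - b) ** 2 for a, b in zip(x, xi)) / (2 * sigma ** 2))
        for xi in train_X
    ]
    total = sum(weights)
    return sum(w * yi for w, yi in zip(weights, train_y)) / total

train_X = [(0.0,), (1.0,), (2.0,)]
train_y = [0.0, 1.0, 4.0]
# With a narrow kernel, a query at a training point returns its own target:
print(round(grnn_predict((1.0,), train_X, train_y, sigma=0.1), 3))  # 1.0
```

Unlike a backpropagation network, a GRNN has no iterative training: the training set itself is the model, and only the smoothing width sigma must be chosen.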

Intelligent Advisory System for Designing Plastics Products

Plastics product design is a very experience-dependent process. In spite of the various computer tools available on the market, the designer has to rely on personal or supporting experts' knowledge and experience when designing plastics products. The development of the intelligent advisory system presented in this paper involves two methodologies. A “Design for X” strategy will be applied to consider specific design aspects of plastic products, while “Knowledge-Based Engineering” will be used for knowledge acquisition, systematization and utilization within the system. The major benefit of the intelligent support provided by the advisory system will be a faster and more reliable product development process, as the system will offer the user recommendations and advice about material selection and the related production process; thus, the expert team could be reduced. Minimized development process costs, along with optimal technical design solutions for plastic products, will enable small and medium size enterprises to compete with their plastics products on the global market.

U. Sancin, B. Dolšak

Applications 2

Modeling the Spread of Preventable Diseases: Social Culture and Epidemiology

This paper uses multiagent simulation to examine the effect of various awareness interventions on the spread of preventable diseases in a society. The work deals with the interplay between knowledge diffusion and the spreading of these preventable infections in the population. The knowledge diffusion model combines information acquisition through education, personal experiences, and the spreading of information through a scale-free social network. A conditional probability model is used to model the interdependence between the risk of infection and the level of health awareness acquired. The model is applied to study the spread of HIV/AIDS, malaria, and tuberculosis in the South African province of Limpopo. The simulation results show that the effects of various awareness interventions can be very different and that a concerted effort to spread health awareness through multiple channels is more likely to control these preventable infections in a reasonable time.

Ahmed Y. Tawfik, Rana R. Farag

An Intelligent Decision Support System for the Prompt Diagnosis of Malaria and Typhoid Fever in the Malaria Belt of Africa

Malaria is endemic in Africa; though curable, it is difficult to diagnose promptly because available diagnostic tools are affected by the harsh tropical weather. The lack of electricity for storing current diagnostic tools in rural areas is a major setback, as is the fact that malaria has signs and symptoms similar to those of typhoid fever, a disease that is also common in the region. This paper describes the research and development involved in implementing an Intelligent Decision Support System for the diagnosis of malaria and typhoid fever in the malaria subregions of Africa. The system will be mounted on a laptop (the One Laptop per Child machine), powered by a wind-up crank or solar panel. The region chosen for our study was the Western Subregional network of malaria in Africa.

A. B. Adehor, P. R. Burrell

Detecting Unusual Changes of Users Consumption

This paper addresses three points: the problem of detecting unusual changes of consumption in mobile phone users; the corresponding building of data structures that represent recent and historical user behaviour, bearing in mind the information included in a call; and the complexity of constructing a function with so many variables whose parameterization is not always known.

Paola Britos, Hernan Grosser, Dario Rodríguez, Ramon Garcia-Martinez

Techniques

Optimal Subset Selection for Classification through SAT Encodings

In this work we propose a method for computing a minimum size training-set-consistent subset for the Nearest Neighbor rule (also known as the CNN problem) via SAT encodings. We introduce the SAT–CNN algorithm, which exploits a suitable encoding of the CNN problem as a sequence of SAT problems in order to solve it exactly, provided that enough computational resources are available. Comparison of SAT–CNN with well-known greedy methods shows that SAT–CNN is able to return a better solution. The proposed approach can be extended to several hard subset selection classification problems.

Fabrizio Angiulli, Stefano Basta
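For contrast with the exact SAT encoding, the classic greedy baseline for the CNN problem (Hart's condensed nearest neighbour rule) can be sketched as below. This illustrates the kind of greedy method the paper compares against, not the SAT–CNN algorithm itself; the data are invented.

```python
def greedy_cnn(X, y):
    """Hart's greedy condensed nearest neighbour: repeatedly add any point
    that the current subset's 1-NN rule misclassifies, until consistent."""
    def nearest(p, subset):
        # index in subset closest to p by squared Euclidean distance
        return min(subset, key=lambda i: sum((a - b) ** 2 for a, b in zip(p, X[i])))
    keep = [0]            # seed the subset with the first training point
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            if i not in keep and y[nearest(X[i], keep)] != y[i]:
                keep.append(i)
                changed = True
    return sorted(keep)

X = [(0, 0), (0, 1), (5, 5), (5, 6)]
y = [0, 0, 1, 1]
print(greedy_cnn(X, y))  # [0, 2] -- one prototype per cluster suffices
```

The greedy result is consistent but not guaranteed minimal, which is exactly the gap an exact SAT-based formulation closes.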

Multi-objective Model Predictive Optimization using Computational Intelligence

In many engineering design problems, the explicit functional form of objectives/constraints cannot be given in terms of design variables. Under this circumstance, given the values of the design variables, the values of those functions are obtained by simulation analysis or experiments, which are often expensive in practice. In order to keep the number of analyses as small as possible, techniques for model predictive optimization (also referred to as sequential approximate optimization or metamodeling), which perform optimization in parallel with model prediction, have been developed. In this paper, we discuss several methods using computational intelligence for this purpose, along with applications to multi-objective optimization under static/dynamic environments.

Hirotaka Nakayama, Yeboon Yun

An Intelligent Method for Edge Detection based on Nonlinear Diffusion

Edge detection is an important task in the field of image processing with broad applications in image and vision analysis. In this paper, we present a new intelligent computational mechanism using nonlinear diffusion equations for edge detection. Experimental results show that the proposed method outperforms standard edge detectors as well as other methods that deploy inhibition of texture.

C. A. Z. Barcelos, V. B. Pires
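One classical nonlinear diffusion scheme for edge-preserving smoothing is the Perona–Malik equation; the abstract does not state which diffusion model the paper uses, so the sketch below is a generic illustration with invented parameters. Intensity diffuses where gradients are small (flat regions) and is blocked where they are large (edges).

```python
def perona_malik(img, iters=10, kappa=0.2, dt=0.2):
    """Explicit Perona-Malik diffusion on a 2D grayscale image (list of rows).
    The edge-stopping function g shrinks toward 0 for large local differences,
    so smoothing acts inside regions but not across edges."""
    h, w = len(img), len(img[0])
    g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)   # edge-stopping function
    for _ in range(iters):
        out = [row[:] for row in img]
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                n = img[i - 1][j] - img[i][j]
                s = img[i + 1][j] - img[i][j]
                e = img[i][j + 1] - img[i][j]
                wd = img[i][j - 1] - img[i][j]
                out[i][j] = img[i][j] + dt * (
                    g(abs(n)) * n + g(abs(s)) * s + g(abs(e)) * e + g(abs(wd)) * wd
                )
        img = out
    return img

# A vertical step edge survives diffusion almost unchanged:
step = [[0.0, 0.0, 1.0, 1.0, 1.0] for _ in range(5)]
smoothed = perona_malik(step, iters=10)
```

An edge detector built on such diffusion thresholds the gradient of the smoothed image, so texture and noise are suppressed before edges are extracted.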

Semantic Web

A Survey of Exploiting WordNet in Ontology Matching

Nowadays, many ontologies are used in industry, public administration and academia. Although these ontologies are developed for various purposes and domains, they often contain overlapping information. To build a collaborative semantic web, which allows data to be shared and reused across applications, enterprises, and community boundaries, it is necessary to find ways to compare, match and integrate various ontologies. Different strategies for determining similarity between entities (e.g., string similarity, synonyms, structure similarity and instance-based approaches) are used in current ontology matching systems. Synonyms can help to solve the problem of different terms being used in ontologies for the same concept, and the WordNet thesaurus can support improved similarity measures. This paper provides an overview of how to apply WordNet in the ontology matching research area.

Feiyu Lin, Kurt Sandkuhl

Using Competitive Learning between Symbolic Rules as a Knowledge Learning Method

We present a new knowledge learning method suitable for extracting symbolic rules from domains characterized by continuous attributes. It uses the ideas of competitive learning and symbolic rule reasoning, and integrates a statistical measure for relevance analysis during the learning process. The knowledge is in the form of standard production rules, which are available at any time during the learning process. The competition occurs among the rules for capturing a presented instance, and the rules can undergo processes of merging, splitting, simplifying and deleting. Reasoning occurs at both a higher level of abstraction and a lower level of detail. The method is evaluated on publicly available real-world datasets.

F. Hadzic, T. S. Dillon

Knowledge Conceptualization and Software Agent based Approach for OWL Modeling Issues

In this paper, we address the issues of using OWL to model knowledge captured in relational databases. Some types of knowledge in databases cannot be modeled directly using OWL constructs; we illustrate this with two examples, data value range constraints and calculation knowledge. Two alternative approaches are proposed: the first conceptualizes the data range as a new class, while the second is based on software agent technology. Examples with OWL code and implementation code demonstrate the problems and solutions.

S. Zhao, P. Wongthongtham, E. Chang, T. Dillon

Representation, Reasoning and Search

Context Search Enhanced by Readability Index

Context search is based on gathering information about the user’s sphere of interest before the search process. This information defines the context and augments the search query in subsequent phases of the search to attain better results. There are several basic methods for context-enhanced searching. Their main idea is to extract keywords from each found document and compare them with those from the context. The keyword recognition process is difficult to describe in a formally complete way, so a context search based on it may or may not attain better results. We propose a modification of context search that broadens the scope of attributes considered, i.e., it also takes implicit attributes into account rather than only keywords (the explicit ones). Our hypothesis is that this will enable the context search method to fetch more relevant results. This work analyzes the relation between the readability index of a document and its content. The improvement targets the kind of knowledge that is difficult to express by keywords, e.g., the fact that the user is looking for fairy tales rather than science articles.

Pavol Navrat, Tomas Taraba, Anna Bou Ezzeddine, Daniela Chuda
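As an illustration of a readability index serving as an implicit document attribute, the sketch below computes the classical Flesch Reading Ease score (not necessarily the index the authors use); the syllable counter is a crude vowel-group approximation.

```python
import re

def naive_syllables(word):
    # Very rough: count vowel groups (an approximation only).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease: higher scores mean easier text.
    A fairy tale typically scores far higher than a science article."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(naive_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

simple = flesch_reading_ease("The cat sat. The dog ran. It was fun.")
dense = flesch_reading_ease(
    "Heterogeneous ontological representations necessitate "
    "sophisticated disambiguation methodologies.")
```

The score could then be compared against the readability profile of the user’s context to filter results by genre.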

Towards an Enhanced Vector Model to Encode Textual Relations: Experiments Retrieving Information

The constant growth of digital information, facilitated by storage technologies, imposes new challenges for information processing tasks and maintains the need for effective search mechanisms, oriented towards improving precision while remaining capable of producing useful information in a short time. Hence, this paper presents a document representation that encodes textual relations. This representation does not consider each term as one entry in a vector but rather as a pattern, i.e., a set of contiguous entries. To deal with variations inherent in natural language, we plan to express textual relations (such as noun phrases, named entities, subject-verb, verb-object, adjective-noun, and adverb-verb) as composed patterns. An operator is applied to form bindings between terms, encoding relations as new “terms” and thereby providing additional descriptive elements for indexing a document collection. Our first experiments, using the representation for information retrieval and incorporating two-word noun phrases, showed that it is feasible, retrieves relevant documents, improves their ranking, and consequently raises the values of mean average precision.

Maya Carrillo, A. López-López
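The binding idea can be sketched as follows; the concrete operator here (simple term fusion) is a hypothetical stand-in for whatever composition the paper’s vector model uses.

```python
def bind(t1, t2):
    # Hypothetical binding operator: fuse two contiguous terms into
    # one composite index term that encodes their relation.
    return f"{t1}_{t2}"

def index_terms(tokens, noun_phrases):
    """Index both the plain tokens and the bound noun-phrase patterns."""
    terms = list(tokens)
    for a, b in noun_phrases:          # two-word noun phrases
        terms.append(bind(a, b))       # bindings become new "terms"
    return terms

terms = index_terms(["neural", "network", "training"],
                    [("neural", "network")])
```

The composite terms give the retrieval engine extra descriptive elements beyond the bag-of-words entries.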

Efficient Two-Phase Data Reasoning for Description Logics

Description Logics are used more and more frequently for knowledge representation, creating an increasing demand for efficient automated DL reasoning. However, the existing implementations are inefficient in the presence of large amounts of data. We present an algorithm to transform DL axioms into a set of function-free clauses of first-order logic, which can be used for efficient, query-oriented data reasoning. The described method has been implemented as a module of the DLog reasoner, which is openly available for download on SourceForge.

Zsolt Zombori
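The two-phase idea can be illustrated with a toy example: a subsumption axiom such as Parent ⊑ Person compiles to the function-free Horn clause person(X) :- parent(X), which is then applied to the data until fixpoint. The axioms and individuals below are invented for illustration, and only unary predicates are handled.

```python
# Phase one (compilation): each toy TBox axiom B ⊑ H becomes a
# clause (head, body), i.e. h(X) :- b(X).
RULES = [("person", "parent"),      # Parent ⊑ Person
         ("ancestor", "parent")]    # Parent ⊑ Ancestor

ABOX = {("parent", "alice")}        # data: parent(alice)

def saturate(facts, rules):
    """Phase two (data reasoning): apply the compiled clauses to the
    ABox facts until no new fact can be derived."""
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            for pred, ind in list(facts):
                if pred == body and (head, ind) not in facts:
                    facts.add((head, ind))
                    changed = True
    return facts

facts = saturate(set(ABOX), RULES)
```

Separating compilation from data access is what lets the data phase scale: the expensive DL-specific work is done once, independently of the ABox size.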

Some Issues in Personalization of Intelligent Systems: An Activity Theory Approach for Meta Ontology Development

Personalization of systems has been part of the renaissance of artificial intelligence in many domains. This paper investigates some emerging issues in the area of personalization as they impact systems from different perspectives. Particular attention is given to the relationship between explicitly and implicitly gathered information, to information gathered from other personalization settings, and to the generation of a personalization information ontology based on an activity theory approach. Finally, some privacy issues are considered that potentially limit information sharing between applications.

Daniel E. O’Leary

Short Papers

Smart communications network management through a synthesis of distributed intelligence and information

Demands on communications networks to support bundled, interdependent communications services (data, voice, video) are increasing in complexity. Smart network management techniques are required to meet this demand. Such techniques are envisioned to rest on two main technologies: (i) embedded intelligence; and (ii) up-to-the-millisecond delivery of performance information. This paper explores the delivery of intelligent network management as a synthesis of distributed intelligence and information obtained through mining of network performance data.

J. K. Debenham, S. J. Simoff, J. R. Leaney, V. Mirchandani

An Abductive Multi-Agent System for Medical Services Coordination

We present MeSSyCo, a multi-agent system that integrates and coordinates heterogeneous medical services. Agents in MeSSyCo may perform different tasks such as diagnosis and intelligent resource allocation and coordinate themselves through an infrastructure based on a combination of abductive and probabilistic reasoning. In this way a set of specialized medical service providers could be aggregated into a system able to perform more complex medical tasks.

Anna Ciampolini, Paola Mello, Sergio Storari

A New Learning Algorithm for Neural Networks with Integer Weights and Quantized Non-linear Activation Functions

The hardware implementation of neural networks is a fascinating area of research with far-reaching applications. However, real-valued weights and non-linear activation functions are not well suited to hardware implementation. This paper presents a new learning algorithm that trains neural networks with integer weights and excludes derivatives from the training process. Its performance was evaluated by comparison with the multi-threshold method and the continuous-discrete learning method on XOR and function approximation problems, and the simulation results show that the new method greatly outperforms the other two in convergence and generalization.

Yan Yi, Zhang Hangping, Zhou Bin
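To see why derivatives must be excluded, consider a hardware-friendly network with integer weights and a hard-threshold activation: the step function has zero derivative almost everywhere, so gradient descent cannot train it, and a derivative-free search over integer weights is needed instead. The weights below are a hand-picked illustrative solution for XOR, not the paper’s learned ones.

```python
def step(z):
    # Hard-threshold activation: trivial in hardware, but its zero
    # derivative rules out backpropagation.
    return 1 if z >= 0 else 0

def forward(x1, x2, w):
    # Tiny 2-2-1 network; every weight and bias in `w` is an integer.
    h1 = step(w[0] * x1 + w[1] * x2 + w[2])
    h2 = step(w[3] * x1 + w[4] * x2 + w[5])
    return step(w[6] * h1 + w[7] * h2 + w[8])

# Integer weights solving XOR: h1 = OR, h2 = AND, output = OR and not AND.
W_XOR = [1, 1, -1,   1, 1, -2,   2, -3, -1]
outputs = [forward(a, b, W_XOR) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```

A derivative-free trainer would search this integer weight space directly, e.g. by perturbing weights and keeping changes that reduce the error.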

Neural Recognition of Minerals

The design of a neural network is presented for recognizing six kinds of minerals (chalcopyrite, chalcosine, covelline, bornite, pyrite, and enargite) and for determining the percentage of these minerals in a digitized image of a rock sample. The input to the neural network is the histogram of a region of interest that the user selects from the image to be recognized; the network processes this histogram and identifies one of the six learned minerals. The network was trained with 160 regions of interest selected from digitized photographs of mineral samples, and recognition of the different types of minerals was tested with 240 photographs that were not used in training. The results showed that 97% of the training images were recognized correctly in percentage mode; of the new images, the network correctly recognized 91% of the samples.

Mauricio Solar, Patricio Perez, Francisco Watkins
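The histogram input described above can be sketched as a fixed-length feature vector computed from the pixels of the selected region of interest; the bin count of 16 is an illustrative choice, not the paper’s.

```python
def histogram_features(pixels, bins=16):
    """Normalized grayscale histogram of an ROI, usable as the input
    vector of a classifier network (pixels: intensities in 0..255)."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [h / total for h in hist]

feats = histogram_features([0, 255, 128, 128])
```

Because the vector length is fixed regardless of the ROI size, regions of any shape map to the same network input dimension.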

Bayesian Networks Optimization Based on Induction Learning Techniques

Obtaining a Bayesian network from data is a learning process divided into two steps: structural learning and parametric learning. In this paper, we define an automatic learning method that optimizes Bayesian networks applied to classification, using a hybrid learning method that combines the advantages of decision-tree induction techniques with those of Bayesian networks.

Paola Britos, Pablo Felgaer, Ramon Garcia-Martinez

Application of Business Intelligence for Business Process Management

Companies require highly automated business process management (BPM) functionality, with the flexibility to incorporate business intelligence (BI) at appropriate stages throughout the workflow. Business Activity Monitoring (BAM) unifies these two technologies and provides real-time access to critical performance indicators to improve the speed and effectiveness of business operations. This paper discusses BPM technologies in the context of the supply chain and presents a comprehensive BAM solution that utilizes the latest BPM, BI and portal technologies to enable decision makers to access and assimilate the right information for well-informed, timely decisions.

Nenad Stefanovic, Dusan Stefanovic, Milan Misic

Learning Life Cycle in Autonomous Intelligent Systems

Autonomous Intelligent Systems (AIS) integrate planning, learning, and execution in a closed loop, exhibiting autonomous intelligent behavior. A Learning Life Cycle (LLC) for AIS is proposed, based on three layers of learned operators: Built-In Operators, Trained Base Operators and World Interaction Operators. An extension of the original architecture to support the new type of operators is presented.

Jorge Ierache, Ramón García-Martínez, Armando De Giusti

A Map-based Integration of Ontologies into an Object-Oriented Programming Language

Today’s programmers have difficulty using ontologies in the information-centric applications where ontologies would be useful. This paper addresses a technique for integrating ontologies into an object-oriented scripting language. Our technique uses semantic mapping as a unified form of the complicated semantic relations of an ontology system, projected onto the class-subclass view of object-oriented programming. This enables ordinary programmers to write ontology reasoning, such as equivalence and subsumption, without any extended logical constructors.

Kimio Kuramitsu