
2009 | Book

Artificial Intelligence Applications and Innovations III

Edited by: Lazaros Iliadis, Ilias Maglogiannis, Grigorios Tsoumakas, Ioannis Vlahavas, Max Bramer

Publisher: Springer US

Book series: IFIP Advances in Information and Communication Technology


About this book

The ever expanding abundance of information and computing power enables researchers and users to tackle highly interesting issues, such as applications providing personalized access and interactivity to multimodal information based on user preferences and semantic concepts, or human-machine interface systems utilizing information on the affective state of the user. The general focus of the AIAI conference is to provide insights on how AI can be implemented in real world applications.

This volume contains papers selected for presentation at the 5th IFIP Conference on Artificial Intelligence Applications & Innovations (AIAI 2009), held from 23 to 25 April in Thessaloniki, Greece. The IFIP AIAI 2009 conference is co-organized by the Aristotle University of Thessaloniki, the University of Macedonia Thessaloniki, and the Democritus University of Thrace. AIAI 2009 is the official conference of the WG12.5 "Artificial Intelligence Applications" working group of IFIP TC12, the International Federation for Information Processing Technical Committee on Artificial Intelligence (AI).

It is a conference growing and maintaining high standards of quality. The purpose of the 5th IFIP AIAI Conference is to bring together researchers, engineers and practitioners interested in the technical advances and business/industrial applications of intelligent systems. AIAI 2009 is not only focused on providing insights on how AI can be implemented in real world applications, but also covers innovative methods, tools and ideas of AI at the architectural and algorithmic level.

Table of contents

Frontmatter
Synergies of AI methods for Robotic Planning and Grabbing, Facial Expressions Recognition and Blind's Navigation

Artificial Intelligence (AI) techniques have reached an acceptable level of maturity as single entities, and their application to small and simple problems has offered impressive results. For large scale and complex problems, however, these AI methods individually are not always capable of offering satisfactory results. Thus, synergies of AI methods are used to overcome difficulties and provide solutions to large scale complex problems. This talk presents several synergies of AI methods for solving different complex problems. In particular, the first synergy combines AI planning, stochastic Petri nets and neural nets for coordinating two robotic hands for box placement, and neuro-fuzzy nets for robotic hand grabbing. The second synergy is based on neural color constancy for skin detection and is enriched with fuzzy image segmentation & region synthesis and the local global (LG) graph method for a biometrics application that detects faces and recognizes facial expressions. The third synergy uses several image processing and computer vision techniques in combination with formal modeling of vibrations to offer the blind 3D sensations of the surrounding space for safe navigation. Examples from other synergistic methodologies, such as body motion-tracking and robotic 3D brain surgery, are also presented.

Nikolaos G. Bourbakis
Neural Networks for Modal and Virtual Learning

This talk will explore the integration of learning modes into a single neural network structure in order to overcome the inherent limitations of any given mode (for example, some modes memorize specific features, others average across features, and both approaches may be relevant according to the circumstances). Inspiration comes from neuroscience, cognitive science and human learning, where it is impossible to build a serious model of learning without consideration of multiple modes; motivation also comes from non-stationary input data, or time-variant learning objectives, where the optimal mode is a function of time. Several modal learning ideas will be presented, including the Snap-Drift Neural Network, which toggles its learning (across the network or on a neuron-by-neuron basis) between two modes, either unsupervised or guided by performance feedback (reinforcement), and the Adaptive Function Neural Network (ADFUNN), in which adaptation applies simultaneously to both the weights and the individual neuron activation functions. The talk will also focus on a virtual learning environment example that involves the modal learning Neural Network, identifying patterns of student learning that can be used to target diagnostic feedback that guides the learner towards increased states of knowledge.

Dominic Palmer-Brown
A Hybrid Technology for Operational Decision Support in Pervasive Environments

The paper addresses the issue of development of a technology for operational decision support in a pervasive environment. The technology is built around the idea of using Web-services for self-organization of heterogeneous resources of the environment for decision support purposes. The approach focuses on three types of resources to be organized: information, problem-solving, and acting. The final purpose of the resource self-organization is to form an ad-hoc collaborative environment, whose members cooperate to serve the current needs according to the decision situation. The hybrid technology proposed in the paper integrates technologies of ontology management, context management, constraint satisfaction, Web-services, and intelligent agents. The application of the technology is illustrated by the response to a traffic accident.

Alexander Smirnov, Tatiana Levashova, Nikolay Shilov, Alexey Kashevnik
An Expert System Based on Parametric Net to Support Motor Pump Multi-Failure Diagnostic

Early failure detection in motor pumps is an important issue in predictive maintenance. An efficient condition-monitoring scheme is capable of providing warnings and predicting faults at early stages. Usually, this task is executed by humans. The logical progression of condition-monitoring technologies is the automation of the diagnostic process. To automate the diagnostic process, intelligent diagnostic systems are used. Many researchers have explored artificial intelligence techniques to diagnose failures in general. However, all papers found in the literature are related to a specific problem that can appear in many different machines. In real applications, when the expert analyzes a machine, not only one problem appears; more than one problem may appear together. So, it is necessary to propose new methods to assist diagnosis that look for a set of co-occurring failures. For some failures, there are not sufficient instances to ensure good classifiers induced by available machine learning algorithms. In this work, we propose a method to assist fault diagnosis in motor pumps, based on vibration signal analysis, using expert systems. To address the problems related to motor pump analysis, we propose a parametric net model for multi-label problems. We also present a case study demonstrating the applicability of our proposed method.

Flavia Cristina Bernardini, Ana Cristina Bicharra Garcia, Inhaúma Neves Ferraz
Providing Assistance during Decision-making Problems Solving in an Educational Modelling Environment

In this paper we present the model-checking module for decision-making models in the frame of ModelsCreator, an educational modelling environment. The model-checking module aims to assist young students in constructing qualitative models for solving decision-making problems. We specify the decision-making models that may be built and we explain the model-checking mechanism. The model-checking mechanism compares the student's model with the reference model constructed by the teacher and provides immediate advice to help the student create a valid model. Thus, the model-checking module of a decision-making model aims to help students structure convincing decisions in the proper situations.

Panagiotis Politis, Ioannis Partsakoulakis, George Vouros, Christos Fidas
Alternative Strategies for Conflict Resolution in Multi-Context Systems

Multi-Context Systems are logical formalizations of distributed context theories connected through mapping rules, which enable information flow between different contexts. Reasoning in Multi-Context Systems introduces many challenges that arise from the heterogeneity of contexts with regard to the language and inference system that they use, and from the potential conflicts that may arise from the interaction of context theories through the mappings. The current paper proposes four alternative strategies for using context and preference information to resolve conflicts in a Multi-Context Framework, in which contexts are modeled as rule theories, mappings as defeasible rules, and global inconsistency is handled using methods of distributed defeasible reasoning.

Antonis Bikakis, Grigoris Antoniou, Panayiotis Hassapis
Certified Trust Model

This paper presents a certified confidence model which aims to ensure credibility for information exchanged among agents inhabiting an open environment. Generally speaking, the proposed environment features a supplier agent b which delivers a service for a customer agent a. The agent a returns to b an encrypted evaluation r of the service delivered. The agent b will employ r as a testimonial when requested to perform the same task for a distinct customer agent. Our hypotheses are: (i) control over testimonials can be distributed, as they are locally stored by the assessed agents, i.e., each assessed agent is the owner of its testimonials; and (ii) testimonials, provided by supplier agents on their services, can be considered reliable since they are protected with public key cryptography. This approach reduces the limitations of confidence models based, respectively, on the experience resulting from direct interaction between agents (direct confidence) and on the indirect experience obtained from reports of witnesses (propagated confidence). Direct confidence is a poor-quality measure, for a customer agent a hardly has enough opportunities to interact with a supplier agent b so as to grow a useful knowledge base. Propagated confidence depends on the willingness of witnesses to share their experiences. The empirical model was tested in a multiagent system applied to the stock market, where supplier agents provide recommendations for buying or selling assets and customer agents then choose suppliers based on their reputations. Results demonstrate that the proposed confidence model enables the agents to choose partners more efficiently.

Vanderson Botêlho, Fabrício Enembreck, Bráulio C. Ávila, Hilton de Azevedo, Edson E. Scalabrin
A Fuzzy Knowledge-based Decision Support System for Tender Call Evaluation

In the modern business environment, the capability of an enterprise to generate value from its business knowledge influences in an increasingly important way its competitiveness. Towards this direction, knowledge-based systems can be a very effective tool for enhancing the productivity of knowledge workers by providing them with advanced knowledge processing capabilities. In this paper we describe such a system which utilizes organizational and domain knowledge in order to support consultants in the process of evaluating calls for tender.

Panos Alexopoulos, Manolis Wallace, Konstantinos Kafentzis, Aristodimos Thomopoulos
Extended CNP Framework for the Dynamic Pickup and Delivery Problem Solving

In this paper, we investigate the applicability of Contract Net Protocol (CNP) negotiation in the field of dynamic transportation. We address the optimization of the Dynamic Pickup and Delivery Problem with Time Windows, also called the DPDPTW. This problem is a variant of the Vehicle Routing Problem (VRP) that may be described as the problem of finding the least possible dispatching cost of requests concerning the picking up of some quantity of goods from a pickup location and carrying them to a delivery location, while most of the requests continuously occur during the day. The use of contract nets in dynamic and uncertain domains such as ours has been proved to be more fruitful than centralized problem solving [9]. We provide a new automated negotiation based on the CNP. The negotiation process is adjusted to deal intelligently with the uncertainty present in the problem concerned.

Zoulel Kouki, Besma Fayech Chaar, Mekki Ksouri
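The announce/bid/award cycle of Contract Net Protocol negotiation described above can be sketched as follows; the vehicle names, Manhattan-distance insertion cost, and append-only routes are illustrative simplifications, not the paper's model.

```python
# Minimal sketch of CNP task allocation for pickup-and-delivery requests,
# assuming each vehicle bids its marginal cost of serving the new request.
def insertion_cost(route, pickup, delivery):
    """Toy marginal cost: extra Manhattan distance of appending the stops."""
    last = route[-1] if route else (0, 0)
    d1 = abs(last[0] - pickup[0]) + abs(last[1] - pickup[1])
    d2 = abs(pickup[0] - delivery[0]) + abs(pickup[1] - delivery[1])
    return d1 + d2

def announce(vehicles, pickup, delivery):
    """Manager announces a request; each vehicle bids; cheapest bid wins."""
    bids = {vid: insertion_cost(route, pickup, delivery)
            for vid, route in vehicles.items()}
    winner = min(bids, key=bids.get)
    vehicles[winner] = vehicles[winner] + [pickup, delivery]  # award contract
    return winner, bids[winner]

vehicles = {"v1": [(0, 0)], "v2": [(9, 9)]}
winner, cost = announce(vehicles, pickup=(8, 8), delivery=(9, 7))
print(winner, cost)  # v2 4
```

In the dynamic setting, a fresh announcement round like this runs each time a request arrives during the day, which is what lets the fleet react to uncertainty.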
A Logic-Based Approach to Solve the Steiner Tree Problem

Boolean satisfiability (SAT) is a well-studied NP-complete problem for formulating and solving other combinatorial problems like planning and scheduling. The Steiner tree problem (STP) asks to find a minimal cost tree in a graph that spans a set of nodes. STP has been shown to be NP-hard. In this paper, we propose to formulate the STP as a variation of SAT, and to solve it using a heuristic search method guided by the backbone of the problem. The algorithm is tested on a well known set of benchmark instances. Experimental results demonstrate the applicability of the proposed approach, and show that substantial quality improvement can be obtained compared to other heuristic methods.

Mohamed El Bachir Menai
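To make the optimization target concrete, here is a brute-force Steiner tree baseline on a toy graph; the paper's SAT encoding and backbone-guided heuristic are not reproduced, and the graph is invented for illustration.

```python
# Brute-force minimal-cost edge subset connecting all terminal nodes.
from itertools import combinations

def connects(edges, terminals):
    """True if the chosen edges connect all terminals (union-find)."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v, _ in edges:
        parent[find(u)] = find(v)
    return len({find(t) for t in terminals}) == 1

def steiner(graph, terminals):
    """Return (cost, edges) of the cheapest connecting subset."""
    best = None
    for k in range(len(graph) + 1):
        for subset in combinations(graph, k):
            if connects(subset, terminals):
                cost = sum(w for _, _, w in subset)
                if best is None or cost < best[0]:
                    best = (cost, subset)
    return best

# Terminals {a, c}: routing through Steiner node b beats the direct edge.
graph = [("a", "b", 1), ("b", "c", 1), ("a", "c", 3)]
print(steiner(graph, {"a", "c"})[0])  # 2
```

A SAT-style formulation replaces this exponential enumeration with Boolean edge-selection variables and clauses enforcing connectivity, which a heuristic SAT solver can then search.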
An Argumentation Agent Models Evaluative Criteria

This approach to argumentation attempts to model the partner's evaluative criteria and to work with them rather than against them. To this end, the utterances generated aim to influence the partner to believe what we believe to be in his best interests, although it may not be in fact. The utterances aim to convey what is so, and not to point out "where the partner is wrong". This behaviour is intended to lead to the development of lasting relationships between agents.

John Debenham
ASIC Design Project Management Supported by Multi Agent Simulation

The complexity of Application Specific Integrated Circuits (ASICs) is continuously increasing. Consequently, chip design becomes more and more challenging. To handle this complexity for fast ASIC development, the existing design process has to become more efficient. To achieve this, we used an approach based on multi-agent simulation. A multi-agent system is an intuitive way to represent a team of designers creating an ASIC. Furthermore, MAS are capable of coping with the natural dynamics of the design process, reacting to and modelling unforeseen events. The resulting model is capable of an extensive analysis of the design process. It can make reliable predictions on the course of design projects and identify weak spots within the design process. It can provide status analysis of ongoing projects and suggestions on how to organize, plan and execute a new project efficiently.

Jana Blaschke, Christian Sebeke, Wolfgang Rosenstiel
On the Combination of Textual and Semantic Descriptions for Automated Semantic Web Service Classification

Semantic Web services have emerged as the solution to the need for automating several aspects related to service-oriented architectures, such as service discovery and composition, and they are realized by combining Semantic Web technologies and Web service standards. In the present paper, we tackle the problem of automated classification of Web services according to their application domain taking into account both the textual description and the semantic annotations of OWL-S advertisements. We present results that we obtained by applying machine learning algorithms on textual and semantic descriptions separately and we propose methods for increasing the overall classification accuracy through an extended feature vector and an ensemble of classifiers.

Ioannis Katakis, Georgios Meditskos, Grigorios Tsoumakas, Nick Bassiliades, Ioannis Vlahavas
Revealing Paths of Relevant Information in Web Graphs

In this paper we propose a web search methodology based on the Ant Colony Optimization (ACO) algorithm, which aims to increase the amount of relevant information retrieved with respect to a user's query. The algorithm aims to trace routes between hyperlinks which connect two or more relevant information nodes of a web graph, with the minimum possible cost. The methodology uses the Ant-Seeker algorithm, where agents in the web paradigm are considered as ants capable of generating routing paths of relevant information through a web graph. The paper provides the implementation details of the proposed web search methodology, along with its initial assessment, which presents quite promising results.

Georgios Kouzas, Vassileios Kolias, Ioannis Anagnostopoulos, Eleftherios Kayafas
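The pheromone bookkeeping behind such an ACO-style search can be sketched as below; the relevance scores, deposit rule and evaporation rate are assumptions for illustration, not the Ant-Seeker specifics.

```python
# One ACO step: an ant picks a hyperlink proportionally to pheromone,
# trails evaporate, and the traversed link is reinforced by relevance.
import random

def aco_step(links, pheromone, relevance, rho=0.1, q=1.0):
    weights = [pheromone[l] for l in links]
    choice = random.choices(links, weights=weights)[0]  # probabilistic pick
    for l in links:
        pheromone[l] *= (1 - rho)                       # evaporation
    pheromone[choice] += q * relevance[choice]          # relevance deposit
    return choice

random.seed(0)
links = ["page_a", "page_b"]
pheromone = {"page_a": 1.0, "page_b": 1.0}
relevance = {"page_a": 0.9, "page_b": 0.1}  # toy relevance to the query
for _ in range(200):
    aco_step(links, pheromone, relevance)
print(pheromone["page_a"] > pheromone["page_b"])
```

Over repeated steps, paths through relevant nodes accumulate pheromone and are chosen ever more often, which is the positive-feedback mechanism the abstract relies on.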
Preferential Infinitesimals for Information Retrieval

In this paper, we propose a preference framework for information retrieval in which the user and the system administrator are enabled to express preference annotations on search keywords and document elements, respectively. Our framework is flexible and allows expressing preferences such as "A is infinitely more preferred than B," which we capture by using hyperreal numbers. Due to the widespread adoption of XML as a standard for representing documents, we consider XML documents in this paper and propose a consistent preferential weighting scheme for nested document elements. We show how to naturally incorporate preferences on search keywords and document elements into an IR ranking process using the well-known TF-IDF ranking measure.

Maria Chowdhury, Alex Thomo, William W. Wadge
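The effect of an "infinitely more preferred" keyword on TF-IDF ranking can be imitated with lexicographically compared score pairs, as in this sketch; the smoothed IDF formula and the toy documents are assumptions, and true hyperreal arithmetic is not implemented.

```python
# Scores are (infinite_part, finite_part) pairs compared lexicographically,
# so no amount of finite-weight matches outranks one infinitely-preferred hit.
import math

def rank(docs, infinite_kws, finite_kws):
    n = len(docs)
    def tfidf(tokens, kw):
        df = sum(1 for t in docs.values() if kw in t)
        return tokens.count(kw) * (math.log((1 + n) / (1 + df)) + 1)
    scores = {}
    for doc_id, tokens in docs.items():
        inf_part = sum(tfidf(tokens, kw) for kw in infinite_kws)
        fin_part = sum(w * tfidf(tokens, kw) for kw, w in finite_kws.items())
        scores[doc_id] = (inf_part, fin_part)
    return sorted(scores, key=scores.get, reverse=True)

# "query" is infinitely preferred: d1 wins although d2 has many finite hits.
docs = {"d1": ["query", "xml"], "d2": ["xml"] * 10}
order = rank(docs, infinite_kws=["query"], finite_kws={"xml": 1.0})
print(order)
```

The pair comparison mirrors how a hyperreal weight dominates every finite weight in the ordering, which is the property the framework exploits.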
OntoLife: an Ontology for Semantically Managing Personal Information

Personal knowledge management has been studied from various angles, one of which is the Semantic Web. Ontologies, the primary knowledge representation tool for the Semantic Web, can play a significant role in semantically managing personal knowledge. This paper addresses the issue of effective personal knowledge management by proposing an ontology for modelling the domain of biographical events. The proposed ontology also undergoes a thorough evaluation, based on specific criteria presented in the literature.

Eleni Kargioti, Efstratios Kontopoulos, Nick Bassiliades
AIR_POLLUTION_Onto: an Ontology for Air Pollution Analysis and Control

The paper describes an ontology for air pollution analysis and control, AIR_POLLUTION_Onto, and presents its use in two case studies: an expert system and a multiagent system, both dedicated to the monitoring and control of air pollution in urban regions.

Mihaela M. Oprea
Experimental Evaluation of Multi-Agent Ontology Mapping Framework

Ontology mapping is a prerequisite for achieving heterogeneous data integration on the Semantic Web. The vision of the Semantic Web implies that a large number of ontologies are present on the Web that need to be aligned before one can make use of them, e.g. for question answering on the Semantic Web. During recent years a number of mapping algorithms, frameworks and tools have been proposed to address the problem of ontology mapping. Unfortunately, comparing and evaluating these tools is not a straightforward task, as these solutions are mainly designed for different domains. In this paper we introduce our ontology mapping framework called "DSSim" and present an experimental evaluation based on the tracks of the Ontology Alignment Evaluation Initiative (OAEI 2008).

Miklos Nagy, Maria Vargas-Vera
Visualizing RDF Documents

The Semantic Web (SW) is an extension to the current Web, enhancing the available information with semantics. RDF, one of the most prominent standards for representing meaning in the SW, offers a data model for referring to objects and their interrelations. Managing RDF documents, however, is a task that demands experience and expert understanding. Tools have been developed that alleviate this drawback and offer an interactive graphical visualization environment. This paper studies the visualization of RDF documents, a domain that exhibits many applications. The most prominent approaches are presented and a novel graph-based visualization software application is also demonstrated.

Aris Athanassiades, Efstratios Kontopoulos, Nick Bassiliades
A Knowledge-based System for Translating FOL Formulas into NL Sentences

In this paper, we present a system that translates first order logic (FOL) formulas into natural language (NL) sentences. The motivation comes from an intelligent tutoring system teaching logic as a knowledge representation language, where it is used as a means for feedback to the users. FOL to NL conversion is achieved by using a rule-based approach, where we exploit the pattern matching capabilities of rules. So, the system consists of a rule-based component and a lexicon. The rule-based unit implements the conversion process, which is based on a linguistic analysis of a FOL sentence, and the lexicon provides lexical and grammatical information that helps in producing the NL sentences. The whole system is implemented in Jess, a Java-based expert system shell. The conversion process currently covers a restricted set of FOL formulas.

Aikaterini Mpagouli, Ioannis Hatzilygeroudis
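A rule-based FOL-to-NL conversion in the spirit described can be sketched with regular-expression patterns; the two patterns and the tiny lexicon here are invented for illustration, and the actual system's Jess rules cover far more formula shapes.

```python
# Each rule pairs a FOL pattern with an NL template; the lexicon supplies
# surface forms for predicate symbols.
import re

LEXICON = {"student": "a student", "smart": "smart"}

RULES = [
    # forall x (P(x) -> Q(x))  =>  "Every P is Q."
    (r"forall (\w+) \((\w+)\(\1\) -> (\w+)\(\1\)\)",
     lambda m: f"Every {LEXICON[m.group(2)].split()[-1]} is {LEXICON[m.group(3)]}."),
    # exists x (P(x))  =>  "There is a P."
    (r"exists (\w+) \((\w+)\(\1\)\)",
     lambda m: f"There is {LEXICON[m.group(2)]}."),
]

def fol_to_nl(formula):
    for pattern, template in RULES:
        m = re.fullmatch(pattern, formula)
        if m:
            return template(m)
    return None  # formula shape not covered

print(fol_to_nl("forall x (student(x) -> smart(x))"))  # Every student is smart.
print(fol_to_nl("exists y (student(y))"))              # There is a student.
```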
Background Extraction in Electron Microscope Images of Artificial Membranes

On-line analysis of Transmission Electron Microscope (TEM) images is a field of great interest that opens up new prospects for automatic acquisition. Presently, our work is focused on the automatic identification of artificial membranes derived from 2D protein crystallization experiments. Object recognition at medium magnification aims to control the microscope in order to acquire interesting membranes at high magnification. A multiresolution segmentation technique has been proposed for the image partition. This paper presents an analysis of this partition to extract the background. To achieve this goal in very noisy images, it is essential to suppress false contours, as they split the background into multiple regions. Statistical properties of such regions are not always sufficient for their identification as background. The analysis of these regions' contours was therefore considered. In the proposed solution, the elimination of false contours is based on the statistical examination of the perpendicular gradient component along the contour. After this improved segmentation, the background extraction can be easily performed, since the resulting region appears bright and large.

A. Karathanou, J.-L. Buessler, H. Kihl, J.-P. Urban
Confidence Predictions for the Diagnosis of Acute Abdominal Pain

Most current machine learning systems for medical decision support do not produce any indication of how reliable each of their predictions is. However, an indication of this kind is highly desirable, especially in the medical field. This paper deals with this problem by applying a recently developed technique for assigning confidence measures to predictions, called conformal prediction, to the problem of acute abdominal pain diagnosis. The data used consist of a large number of hospital records of patients who suffered acute abdominal pain. Each record is described by 33 symptoms and is assigned to one of nine diagnostic groups. The proposed method is based on Neural Networks and for each patient it can produce either the most likely diagnosis together with an associated confidence measure, or the set of all possible diagnoses needed to satisfy a given level of confidence.

Harris Papadopoulos, Alex Gammerman, Volodya Vovk
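The way a conformal predictor assembles a diagnosis set for a requested confidence level can be sketched as follows, assuming precomputed per-class calibration nonconformity scores; the scores and labels are invented, and the paper's neural-network-based nonconformity measure is not reproduced.

```python
# A label enters the prediction set when its conformal p-value exceeds
# 1 - confidence; raising the confidence level can only widen the set.
def prediction_set(calib, new_scores, confidence):
    eps = 1 - confidence
    region = []
    for label, a_new in new_scores.items():
        scores = calib[label]  # calibration nonconformity scores for label
        p = (sum(1 for a in scores if a >= a_new) + 1) / (len(scores) + 1)
        if p > eps:
            region.append(label)
    return region

calib = {"appendicitis": [0.1, 0.2, 0.3, 0.9], "dyspepsia": [0.5, 0.6, 0.7, 0.8]}
new = {"appendicitis": 0.25, "dyspepsia": 0.75}  # new patient's scores
print(prediction_set(calib, new, confidence=0.5))  # singleton diagnosis
print(prediction_set(calib, new, confidence=0.7))  # wider set, higher guarantee
```

This trade-off is exactly the one the abstract describes: a single most likely diagnosis with its confidence, or the full set of diagnoses needed to satisfy a stricter confidence level.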
Enhanced Human Body Fall Detection Utilizing Advanced Classification of Video and Motion Perceptual Components

The monitoring of human physiological data, in both normal and abnormal situations of activity, is interesting for the purpose of emergency event detection, especially in the case of elderly people living on their own. Several techniques have been proposed for identifying such distress situations using motion, audio or video data from the monitored subject and the surrounding environment. This paper presents an integrated patient fall detection platform that may be used for patient activity recognition and emergency treatment. Both visual data captured from the user's environment and motion data collected from the subject's body are utilized. Visual information is acquired using overhead cameras, while motion data is collected from on-body sensors. Appropriate tracking techniques are applied to the aforementioned visual perceptual component, enabling the trajectory tracking of the subjects. Acceleration data from the sensors can indicate a fall incident. Trajectory information and the subject's visual location can verify a fall and indicate an emergency event. A Support Vector Machines (SVM) classification methodology has been evaluated using the collected acceleration and visual trajectory data. The performance of the classifier has been assessed in terms of accuracy and efficiency, and results are presented.

Charalampos Doukas, Ilias Maglogiannis, Nikos Katsarakis, Aristodimos Pneumatikakis
An Evolutionary Technique for Medical Diagnostic Risk Factors Selection

This study proposes an Artificial Neural Network (ANN) and Genetic Algorithm model for diagnostic risk factor selection in medicine. Medical disease prediction may be viewed as a pattern classification problem based on a set of clinical and laboratory parameters. Probabilistic Neural Networks (PNNs) were used to address medical disease prediction. A Genetic Algorithm (GA) was used for pruning the PNN. The implemented GA searched for an optimal subset of factors that fed the PNN, to minimize the number of neurons in the ANN input layer and the Mean Square Error (MSE) of the trained ANN at the testing phase. Moreover, the available data were processed with Receiver Operating Characteristic (ROC) analysis to assess the contribution of each factor to medical diagnosis prediction. The obtained results of the proposed model are in accordance with the ROC analysis, so a number of diagnostic factors in a patient's record can be omitted without any loss in clinical assessment validity.

Dimitrios Mantzaris, George Anastassopoulos, Lazaros Iliadis, Adam Adamopoulos
Mining Patterns of Lung Infections in Chest Radiographs

Chest radiography is a reference standard and the initial diagnostic test performed in patients who present with signs and symptoms suggesting a pulmonary infection. The most common radiographic manifestation of bacterial pulmonary infections is foci of consolidation. These are visible as bright shadows interfering with the interior lung intensities. The discovery and the assessment of bacterial infections in chest radiographs is a challenging computational task. It has been limitedly addressed as it is subject to image quality variability, content diversity, and deformability of the depicted anatomic structures. In this paper, we propose a novel approach to the discovery of consolidation patterns in chest radiographs. The proposed approach is based on non-negative matrix factorization (NMF) of statistical intensity signatures characterizing the densities of the depicted anatomic structures. Its experimental evaluation demonstrates its capability to recover semantically meaningful information from chest radiographs of patients with bacterial pulmonary infections. Moreover, the results reveal its comparative advantage over the baseline fuzzy C-means clustering approach.

Spyros Tsevas, Dimitris K. Iakovidis, George Papamichalis
Computational Modeling of Visual Selective Attention Based on Correlation and Synchronization of Neural Activity

Within the broad area of computational intelligence, it is of great importance to develop new computational models of aspects of human behaviour. In this report we look into the recently suggested theory that neural synchronization of activity in different areas of the brain occurs when people attend to external visual stimuli. Furthermore, it is suspected that this cross-area synchrony may be a general mechanism for regulating information flow through the brain. We investigate the plausibility of this hypothesis by implementing a computational model of visual selective attention that is guided by endogenous and exogenous goals (i.e., what is known as top-down and bottom-up attention). The theoretical structure of this model is based on the temporal correlation of neural activity that was initially proposed by Niebur and Koch (1994). While a saliency map is created in the model at the initial stages of processing visual input, at a later stage of processing, neural activity passes through a correlation control system which comprises coincidence detector neurons. These neurons measure the degree of correlation between endogenous goals and the presented visual stimuli and cause an increase in the synchronization between the brain areas involved in vision and goal maintenance. The model was able to successfully simulate behavioural data from the "attentional blink" paradigm (Raymond and Shapiro, 1992). This suggests that the temporal correlation idea represents a plausible hypothesis in the quest for understanding attention.

Kleanthis C. Neokleous, Marios N. Avraamides, Christos N. Schizas
MEDICAL_MAS: an Agent-Based System for Medical Diagnosis

The paper describes an agent-based system, MEDICAL_MAS, developed for medical diagnosis. The architecture proposed for the system mainly includes two types of agents: personal agents and information searching agents. Each type of personal agent corresponds to one of the humans involved in the medical diagnosis and treatment process (i.e. the patient, the physician, the nurse). The information searching agents help the personal agents find information from the databases that can be accessed by the system (e.g. patient databases, disease databases, medication databases, etc.). A case study is presented in the paper.

Mihaela Oprea
Heterogeneous Data Fusion to Type Brain Tumor Biopsies

Current research in biomedical informatics involves analysis of multiple heterogeneous data sets. These include patient demographics, clinical and pathology data, treatment history, and patient outcomes, as well as gene expression, DNA sequences and other information sources such as gene ontology. Analysis of these data sets could lead to better disease diagnosis, prognosis, treatment and drug discovery. In this paper, we use machine learning algorithms to create a novel framework that performs heterogeneous data fusion on metabolic and molecular datasets, including state-of-the-art high-resolution magic angle spinning (HRMAS) proton (1H) Magnetic Resonance Spectroscopy and gene transcriptome profiling of intact brain tumor biopsies, to identify different profiles of brain tumors. Our experimental results show that our novel framework outperforms any analysis using an individual dataset.

Vangelis Metsis, Heng Huang, Fillia Makedon, Aria Tzika
Automatic Knowledge Discovery and Case Management: an Effective Way to Use Databases to Enhance Health Care Management

This paper presents a methodology based on automatic knowledge discovery that aims to identify and predict the possible causes that make a patient likely to be considered high-cost. The experiments were conducted in two directions. The first was the identification of important relationships among the variables that describe health care events, using an association rule discovery process. The second was the discovery of precise prediction models of high-cost patients, using classification techniques. Results from both methods are discussed to show that the patterns generated could be useful for the development of a high-cost patient eligibility protocol, which could contribute to an efficient case management model.

Luciana SG Kobus, Fabrício Enembreck, Edson Emílio Scalabrin, João da Silva Dias, Sandra Honorato da Silva
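The support and confidence measures driving the association-rule step can be sketched as below; the patient attributes and the example rule are invented for illustration.

```python
# Support: share of records containing all rule items.
# Confidence: share of records matching the antecedent that also match
# the consequent.
def support(records, items):
    return sum(1 for r in records if items <= r) / len(records)

def confidence(records, lhs, rhs):
    return support(records, lhs | rhs) / support(records, lhs)

records = [
    {"chronic_disease", "many_admissions", "high_cost"},
    {"chronic_disease", "many_admissions", "high_cost"},
    {"chronic_disease", "low_cost"},
    {"acute_episode", "low_cost"},
]
rule_lhs = {"chronic_disease", "many_admissions"}
rule_rhs = {"high_cost"}
print(support(records, rule_lhs | rule_rhs))    # 0.5
print(confidence(records, rule_lhs, rule_rhs))  # 1.0
```

Rules whose support and confidence clear chosen thresholds are the "important relationships" the first direction of the experiments reports.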
Combining Gaussian Mixture Models and Support Vector Machines for Relevance Feedback in Content Based Image Retrieval

A relevance feedback (RF) approach for content based image retrieval (CBIR) is proposed, which combines Support Vector Machines (SVMs) with Gaussian Mixture (GM) models. Specifically, it constructs GM models of the image features distribution to describe the image content and trains an SVM classifier to distinguish between the relevant and irrelevant images according to the preferences of the user. The method is based on distance measures between probability density functions (pdfs), which can be computed in closed form for GM models. In particular, these distance measures are used to define a new SVM kernel function expressing the similarity between the corresponding images modeled as GMs. Using this kernel function and the user provided feedback examples, an SVM classifier is trained in each RF round, resulting in an updated ranking of the database images. Numerical experiments are presented that demonstrate the merits of the proposed relevance feedback methodology and the advantages of using GMs for image modeling in the RF framework.
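As a hedged illustration of the kernel construction described above (not the authors' implementation), the sketch below models each image's features with a small Gaussian mixture, computes the L2 distance between mixture pdfs in closed form, and trains an SVM on the resulting precomputed kernel. The toy "images", the component count and the kernel width `gamma` are all assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def gm_cross_term(gm_a, gm_b):
    """Closed-form integral of the product of two GMM densities."""
    total = 0.0
    for wa, ma, ca in zip(gm_a.weights_, gm_a.means_, gm_a.covariances_):
        for wb, mb, cb in zip(gm_b.weights_, gm_b.means_, gm_b.covariances_):
            # ∫ N(x; ma, ca) N(x; mb, cb) dx = N(ma; mb, ca + cb)
            total += wa * wb * multivariate_normal.pdf(ma, mean=mb, cov=ca + cb)
    return total

def gm_l2_distance(gm_a, gm_b):
    """L2 distance between two GMM pdfs, computable in closed form."""
    return (gm_cross_term(gm_a, gm_a) - 2 * gm_cross_term(gm_a, gm_b)
            + gm_cross_term(gm_b, gm_b))

# Toy "images": each is a set of local feature vectors, modeled by a GMM.
rng = np.random.default_rng(0)
images = [rng.normal(loc=i % 2, scale=1.0, size=(60, 2)) for i in range(6)]
gms = [GaussianMixture(n_components=2, random_state=0).fit(x) for x in images]
labels = np.array([i % 2 for i in range(6)])  # user feedback: relevant / irrelevant

# Precomputed kernel matrix K_ij = exp(-d(gm_i, gm_j) / gamma), an assumed form.
gamma = 1.0
K = np.array([[np.exp(-gm_l2_distance(a, b) / gamma) for b in gms] for a in gms])

svm = SVC(kernel="precomputed").fit(K, labels)
scores = svm.decision_function(K)  # would be used to re-rank database images
print(scores.round(2))
```

In a real RF round, `labels` would come from the user's feedback examples and `scores` would produce the updated ranking of the database images.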

Apostolos Marakakis, Nikolaos Galatsanos, Aristidis Likas, Andreas Stafylopatis
Performance Evaluation of a Speech Interface for Motorcycle Environment

In the present work we investigate the performance of a number of traditional and recent speech enhancement algorithms in the adverse, non-stationary conditions that are distinctive of a motorcycle on the move. The performance of these algorithms is ranked in terms of the improvement they contribute to the speech recognition rate, compared to the baseline result, i.e. without speech enhancement. Experiments on the MoveOn motorcycle speech and noise database suggest that there is no equivalence between the ranking of algorithms based on human perception of speech quality and that based on speech recognition performance. The multi-band spectral subtraction method was observed to lead to the highest speech recognition performance.

Iosif Mporas, Todor Ganchev, Otilia Kocsis, Nikos Fakotakis
Model Identification in Wavelet Neural Networks Framework

The scope of this study is to present a complete statistical framework for model identification of wavelet neural networks (WNs). At each step of WN construction we test various methods already proposed in the literature. In the first part we compare four different methods for the initialization and construction of the WN. Next, various information criteria as well as sampling techniques proposed in previous works are compared, in order to derive an algorithm for selecting the correct topology of a WN. Finally, for variable significance testing, the performance of various sensitivity and model-fitness criteria is examined and an algorithm for selecting the significant explanatory variables is presented.

A. Zapranis, A. Alexandridis
Two Levels Similarity Modelling: a Novel Content Based Image Clustering Concept

In this work, we apply a co-clustering concept to the field of content-based image recognition. To this end, we introduce a two levels similarity modelling (TLSM) concept. This approach is based on a new image-similarity formulation that uses the obtained co-clusters. The results show a real improvement in image recognition accuracy compared with the accuracy obtained using a classical co-clustering system.

Amar Djouak, Hichem Maaref
Locating an Acoustic Source Using a Mutual Information Beamformer

Beamforming remains one of the most common methods for estimating the Direction Of Arrival (DOA) of an acoustic source. Beamformers operate using at least two sensors that look among a set of geometrical directions for the one that maximizes received signal power. In this paper we consider a two-sensor beamformer that estimates the DOA of a single source by scanning the broadside for the direction that maximizes the mutual information between the two microphones. This alternative approach exhibits robust behavior even under heavily reverberant conditions where traditional power-based systems fail to distinguish between the true DOA and that of a dominant reflection. Performance is demonstrated for both algorithms with sets of simulations and experiments as a function of different environmental variables. The results indicate that the newly proposed beamforming scheme can accurately estimate the DOA of an acoustic source.

Osama N. Alrabadi, Fotios Talantzis, Anthony G. Constantinides
Intelligent Modification of Colors in Digitized Paintings for Enhancing the Visual Perception of Color-blind Viewers

Color vision deficiency (CVD) is quite common since 8%–12% of the male and 0.5% of the female European population seem to be color-blind to some extent. Therefore there is great research interest regarding the development of methods that modify digital color images in order to enhance the color perception by the impaired viewers. These methods are known as daltonization techniques. This paper describes a novel daltonization method that targets a specific type of color vision deficiency, namely protanopia. First we divide the whole set of pixels into a smaller group of clusters. Subsequently we split the clusters into two main categories: colors that protanopes (persons with protanopia) perceive in a similar way as the general population, and colors that protanopes perceive differently. The color clusters of the latter category are adapted in order to improve perception, while ensuring that the adapted colors do not conflict with colors in the first category. Our experiments include results of the implementation of the proposed method on digitized paintings, demonstrating the effectiveness of our algorithm.

Paul Doliotis, George Tsekouras, Christos-Nikolaos Anagnostopoulos, Vassilis Athitsos
An intelligent Fuzzy Inference System for Risk Estimation Using Matlab Platform: the Case of Forest Fires in Greece

This paper presents the design of an intelligent Fuzzy Inference System that evaluates risk due to natural disasters. Though its basic framework can easily be adjusted to any type of natural hazard, it has been specifically designed for the case of forest fire risk over the Greek terrain. Its purpose is to produce a descending list of the areas under study according to their degree of risk, providing important aid for the task of properly distributing fire-fighting resources. It is designed and implemented in Matlab's integrated Fuzzy Logic Toolbox. It estimates two basic kinds of risk indices, namely man-caused risk and natural risk. The fuzzy membership functions used in this project are the Triangular and the Semi-Triangular.

T Tsataltzinos, L Iliadis, S Spartalis
MSRS: Critique on its Usability via a Path Planning Algorithm Implementation

In recent years an expanding number of robotics software platforms have emerged, with Microsoft expressing its interest in the field by releasing its own platform in 2006. This has created a highly competitive environment, as the majority of the products are mostly incompatible with each other, with every platform trying to establish itself as the field's standard. Thus, the question that arises is whether a platform is suited for educational purposes or for creating a complete robotics intelligence package. This paper provides a study of the learnability, usability and features of Microsoft Robotics Studio, by creating a version of the Lifelong Planning A* (LPA*) algorithm and integrating it into the platform.

George Markou, Ioannis Refanidis
Automated Product Pricing Using Argumentation

This paper describes an argumentation-based approach for automating the decision-making process of an autonomous agent for pricing products. Product pricing usually involves different decision makers with different, possibly conflicting, points of view. Moreover, firms in the retail business sector have hundreds or thousands of products to which a pricing policy must be applied. Our approach applies a price policy to each of them by taking into account the different points of view, expressed through different arguments, and the dynamic environment of the application. This is possible because argumentation is a reasoning mechanism based on the construction and evaluation of interacting conflicting arguments. We also show how we conceived and developed our agent using the Agent Systems Engineering Methodology (ASEME).

Nikolaos Spanoudakis, Pavlos Moraitis
User Recommendations based on Tensor Dimensionality Reduction

Social tagging is the process by which many users add metadata, in the form of keywords, to annotate and categorize items (songs, pictures, web links, products, etc.). Social tagging systems (STSs) can recommend users with common social interests based on common tags on similar items. However, users may have different interests in an item, and items may have multiple facets. In contrast to current recommendation algorithms, our approach develops a model that captures the three types of entities that exist in a social tagging system: users, items, and tags. These data are represented by a 3-order tensor, on which latent semantic analysis and dimensionality reduction are performed using the Higher Order Singular Value Decomposition (HOSVD) method. We experimentally compare the proposed method against a baseline user recommendation algorithm on a real data set (BibSonomy), attaining significant improvements.
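The HOSVD step the abstract describes can be sketched on a toy user x item x tag tensor. The mode-unfolding construction below is the standard HOSVD recipe; the tensor dimensions, the truncation ranks and the reading of the reconstructed tensor as a source of recommendations are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: factor matrices from mode unfoldings plus a core tensor."""
    U = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        U.append(u[:, :r])
    core = T
    for mode, u in enumerate(U):
        # Mode-n product with u.T: contract the mode-th axis of the tensor.
        core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, U

# Toy tensor: T[user, item, tag] = 1 if the user annotated the item with that tag.
T = np.zeros((4, 3, 3))
T[0, 0, 0] = T[1, 0, 0] = T[2, 1, 1] = T[3, 2, 2] = 1.0

core, U = hosvd(T, ranks=(2, 2, 2))

# Reconstruct the smoothed tensor; large entries in unobserved cells would
# suggest (user, item, tag) recommendations under this toy reading.
approx = core
for mode, u in enumerate(U):
    approx = np.moveaxis(np.tensordot(u, np.moveaxis(approx, mode, 0), axes=1), 0, mode)
print(approx.shape)
```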

Panagiotis Symeonidis
A Genetic Algorithm for the Classification of Earthquake Damages in Buildings

In this paper an efficient classification system in the area of earthquake engineering is presented. The proposed method uses a set of artificial accelerograms to examine several types of damage in specific structures. From the seismic accelerograms, a set of twenty seismic parameters has been extracted to describe the earthquakes. Previous studies based on artificial neural networks and neuro-fuzzy classification systems have presented satisfactory classification results for different types of earthquake damage. In this approach a genetic algorithm (GA) is used to find the optimal feature subset of the seismic parameters that minimizes the computational cost and maximizes the classification performance. Experimental results indicate that, using the GA, the structural damages can be classified with rates up to 92%.
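A minimal GA for feature subset selection of the kind described above can be sketched as follows. The fitness function here is a stand-in that rewards a known "informative" subset; the paper instead scores candidate subsets of the twenty seismic parameters with a classifier. Population size, mutation rate and the informative set are assumed values.

```python
import numpy as np

rng = np.random.default_rng(42)
N_FEATURES = 20
INFORMATIVE = {1, 4, 7, 12}  # assumed ground truth for the toy fitness

def fitness(mask):
    """Stand-in fitness: reward informative features, penalize subset size."""
    hits = sum(mask[i] for i in INFORMATIVE)
    extras = int(mask.sum()) - hits
    return hits - 0.3 * extras

def evolve(pop_size=30, generations=40, p_mut=0.05):
    pop = rng.integers(0, 2, size=(pop_size, N_FEATURES))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # Binary tournament selection
        parents = []
        for _ in range(pop_size):
            i, j = rng.integers(0, pop_size, size=2)
            parents.append(pop[i] if scores[i] >= scores[j] else pop[j])
        parents = np.array(parents)
        # One-point crossover between consecutive parent pairs
        children = parents.copy()
        for k in range(0, pop_size - 1, 2):
            cut = rng.integers(1, N_FEATURES)
            children[k, cut:], children[k + 1, cut:] = (
                parents[k + 1, cut:].copy(), parents[k, cut:].copy())
        # Bit-flip mutation
        flips = rng.random(children.shape) < p_mut
        children = np.where(flips, 1 - children, children)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(sorted(np.flatnonzero(best)))
```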

Petros-Fotios Alvanitopoulos, Ioannis Andreadis, Anaxagoras Elenas
Mining Retail Transaction Data for Targeting Customers with Headroom - A Case Study

We outline a method to model customer behavior from retail transaction data. In particular, we focus on the problem of recommending relevant products to consumers. Addressing this problem of filling "holes in the baskets" of consumers is a fundamental aspect for the success of targeted promotion programs. Another important aspect is the identification of customers who are most likely to spend significantly and whose potential spending ability is not being fully realized. We discuss how to identify such customers with "headroom" and describe how relevant product categories can be recommended. The data consisted of individual transactions collected over a span of 16 months from a leading retail chain. The method is based on Singular Value Decomposition and can generate significant value for retailers.
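A rough sketch of the SVD idea, under assumed data and an assumed reading of "headroom": reconstruct a customer x category spend matrix from its leading singular vectors; categories where the low-rank estimate exceeds observed spend are candidate "holes in the basket". This is an illustration, not the paper's method.

```python
import numpy as np

spend = np.array([          # rows: customers, cols: product categories (toy data)
    [5.0, 3.0, 0.0, 1.0],
    [4.0, 0.0, 0.0, 1.0],   # customer 1 never buys category 1...
    [1.0, 1.0, 5.0, 4.0],
    [0.0, 1.0, 4.0, 4.0],
])

U, s, Vt = np.linalg.svd(spend, full_matrices=False)
k = 2                       # keep the two dominant purchase patterns
estimate = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Positive entries mark categories where a customer spends less than
# similar customers would predict, i.e. potential headroom.
headroom = estimate - spend
print(headroom.round(2))
```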

Madhu Shashanka, Michael Giering
Adaptive Electronic Institutions for Negotiations

The expansion of web technologies pushes human activities toward methodologies and software that can ease interactions by means of software transactions. The distribution of human and software agents over the web, and their operation under dynamically changing conditions, necessitates dynamic intelligent environments. Electronic institutions can play an “umbrella” role for agents' transactions, where the institutions' norms protect and support the movements and decisions made through negotiations. However, dynamic information provision may force changes in structures and behaviors, driving the adaptation of electronic institutions to changing needs. Viewing negotiation structures as electronic institutions, this paper investigates the impact of a dynamically changing environment on negotiations' electronic institutions.

Manolis Sardis, George Vouros
A Multi-agent Task Delivery System for Balancing the Load in Collaborative Grid Environment

This paper focuses on improving load-balancing algorithms in grid environments by means of multi-agent systems. The goal is to endow the environment with efficient scheduling that takes into account not only the computational capabilities of the resources but also the task requirements and resource configurations at a given moment. Task delivery makes use of a Collaborative/Cooperative Awareness Management model (CAM), which provides information about the environment; a Simulated Annealing based method (SAGE), which optimizes process assignment; and a historic database, which stores information about previous cooperations/collaborations in the environment, with the aim of learning from experience and inferring more suitable future cooperations/collaborations. The integration of these three elements allows agents to define a system that covers all aspects of the load-balancing problem in collaborative grid environments.

Mauricio Paletta, Pilar Herrero
Backing-up Fuzzy Control of a Truck-trailer Equipped with a Kingpin Sliding Mechanism

For articulated vehicles in the robotics and transportation fields, backing up usually leads to jack-knifing, even for an experienced operator. This paper presents a fuzzy logic controller for back-driving a truck-trailer vehicle in a predefined parking task. The truck-trailer link system is equipped with a kingpin sliding mechanism acting as an anti-jackknife excitation input. Because fuzzy logic control techniques are applied, a precise system model is not required. The developed controller, with thirty-four rules, works well, as the presented simulation results demonstrate both the avoidance of jack-knifing and the accuracy of the backing-up technique.

G. Siamantas, S. Manesis
Sensing Inertial and Continuously-Changing World Features

Knowledge and causality play an essential role in the attempt to achieve commonsense reasoning in cognitive robotics. As agents usually operate in dynamic and uncertain environments, they need to acquire information by sensing both inertial aspects, such as the state of a door, and continuously changing aspects, such as the location of a moving object. In this paper, we extend an Event Calculus-based knowledge framework with a method for sensing world features of different types in a uniform manner that is transparent to the agent. The approach results in the modeling of agents that remember and forget, a cognitive skill particularly suitable for the implementation of real-world applications.

Theodore Patkos, Dimitris Plexousakis
MobiAct: Supporting Personalized Interaction with Mobile Context-aware Applications

In this paper we present a conceptual framework for interaction with mobile context-aware applications, focusing especially on public and semi-public environments. Based on this framework, a generic abstract architecture has been designed and several of its parts have been implemented. We discuss the implications of this architecture and the support it provides for the personalization of interaction. The architecture offers high interoperability and flexibility, with the capability of tackling issues such as privacy and the degree of user control. The framework has been tested in two typical spaces: a library and a museum. The paper concludes with a set of usage examples of the defined framework that cover typical intra-space and cross-space situations.

Adrian Stoica, Nikolaos Avouris
Defining a Task's Temporal Domain for Intelligent Calendar Applications

Intelligent calendar assistants have for many years attracted researchers from the areas of scheduling, machine learning and human-computer interaction. However, most efforts have concentrated on automating the meeting scheduling process, leaving personal tasks to be scheduled manually by the user. Recently, an attempt to automate the scheduling of personal tasks within an electronic calendar application resulted in the deployment of a system called SelfPlanner. The system allows the user to define tasks with duration, temporal domain and other attributes, and then automatically accommodates them within her schedule by employing constraint satisfaction algorithms. Both at the design phase and while using the system, it has become clear that the main bottleneck in its use is the definition of a task's temporal domain. To alleviate this problem, a new approach based on a combination of template application and manual editing has been designed. This paper presents the design choices underlying temporal domain definition in SelfPlanner and some computational problems that we had to deal with.

Anastasios Alexiadis, Ioannis Refanidis
Managing Diagnosis Processes with Interactive Decompositions

In the scientific literature, it is generally assumed that models can be completely established before the diagnosis analysis. In actual maintenance problems, however, such models are difficult to obtain in one step, because it is difficult to formalize a whole complex system. Usually, understanding, modelling and diagnosis are interactive processes in which systems are partially depicted and some parts are refined step by step. Therefore, a diagnosis analysis that manages different abstraction levels and partly modelled components would be relevant to actual needs. This paper proposes a diagnosis tool that manages different modelling abstraction levels and partly depicted systems.

Quang-Huy Giap, Stephane Ploix, Jean-Marie Flaus
Computer Log Anomaly Detection Using Frequent Episodes

In this paper, we propose a set of algorithms to automate the detection of anomalous frequent episodes. The algorithms make use of the hierarchy and frequency of episodes present in an examined sequence of log data and in a history preceding it. The algorithms identify changes in a set of frequent episodes and their frequencies. We evaluate the algorithms and describe tests made using live computer system log data.

Perttu Halonen, Markus Miettinen, Kimmo Hätönen
Semi-tacit Adaptation of Intelligent Environments

This paper presents a semi-tacit adaptation system for implementing and configuring a new generation of intelligent environments referred to as adaptive ambient ecologies. These are highly distributed systems, which require new ways of communication and collaboration to support the realization of people's tasks. Semi-tacit adaptation is based on a mixed initiative approach in human-system dialogue management and is supported by three types of intelligent agents: Fuzzy Task Agent, Planning Agent and Interaction Agent. These agents use an ontology as a common repository of knowledge and information about the services and state of the ambient ecology.

Tobias Heinroth, Achilles Kameas, Hani Hagras, Yacine Bellik
A Formal Fuzzy Framework for Representation and Recognition of Human Activities

This paper focuses on the problem of human activity representation and automatic recognition. We first describe an approach for human activity representation. We define the concepts of roles, relations, situations and the temporal graph of situations (the context model). This context model is transformed into a Fuzzy Petri Net, which naturally expresses the smooth changes of activity states from one state to another with gradual and continuous membership functions. Afterward, we present an algorithm for recognizing human activities observed in a scene. The recognition algorithm is a hierarchical fusion model based on fuzzy measures and fuzzy integrals. The fusion process nonlinearly combines events produced by an activity representation model, based on the assumption that all occurred events support the appearance of a modeled scenario. The goal is to determine, from an observed sequence, the confidence factor that each modeled scenario (predefined in a library) indeed describes this sequence. We have successfully evaluated our approach on video sequences taken from the European CAVIAR project.

Suphot Chunwiphat, Patrick Reignier, Augustin Lux
Multi-modal System Architecture for Serious Gaming

Human-computer interaction (HCI), especially in the games domain, aims to mimic as closely as possible natural human-to-human interaction, which is multimodal, involving speech, vision, haptics, etc. Furthermore, the domain of serious games, which aims at value-added games, makes use of additional inputs such as biosensors, motion tracking equipment, etc. In this context, game development has become complex, expensive and burdened with a long development cycle. This creates barriers for independent game developers and inhibits the introduction of innovative games or new game genres. In this paper the PlayMancer platform is introduced, a work in progress aiming to overcome such barriers by augmenting existing 3D game engines with innovative modes of interaction. PlayMancer integrates existing open source systems, such as a game engine and a spoken dialog management system, extended by newly implemented components supporting innovative interaction modalities, such as emotion recognition from audio data and motion tracking, as well as advanced configuration tools.

Otilia Kocsis, Todor Ganchev, Iosif Mporas, George Papadopoulos, Nikos Fakotakis
Reconstruction-based Classification Rule Hiding through Controlled Data Modification

In this paper, we propose a reconstruction-based approach to classification rule hiding in categorical datasets. The proposed methodology modifies transactions supporting both sensitive and nonsensitive classification rules in the original dataset and then uses the supporting transactions of the nonsensitive rules to produce its sanitized counterpart. To further investigate some interesting properties of this methodology, we explore three variations of the main technique, which differ in the way they select and sanitize transactions supporting sensitive rules. Finally, through extensive experimental evaluation, we demonstrate the effectiveness of the proposed algorithms in shielding the sensitive knowledge.

Aliki Katsarou, Aris Gkoulalas-Divanis, Vassilios S. Verykios
Learning Rules from User Behaviour

Pervasive computing requires infrastructures that adapt to changes in user behaviour while minimising user interactions. Policy-based approaches have been proposed as a means of providing adaptability but, at present, require policy goals and rules to be explicitly defined by users. This paper presents a novel, logic-based approach for automatically learning and updating models of users from their observed behaviour. We show how this task can be accomplished using a nonmonotonic learning system, and we illustrate how the approach can be exploited within a pervasive computing framework.

Domenico Corapi, Oliver Ray, Alessandra Russo, Arosha Bandara, Emil Lupu
Behaviour Recognition using the Event Calculus

We present a system for recognising human behaviour given a symbolic representation of surveillance videos. The input of our system is a set of timestamped short-term behaviours, that is, behaviours taking place in a short period of time — walking, running, standing still, etc — detected on video frames. The output of our system is a set of recognised long-term behaviours — fighting, meeting, leaving an object, collapsing, walking, etc — which are pre-defined temporal combinations of short-term behaviours. The definition of a long-term behaviour, including the temporal constraints on the short-term behaviours that, if satisfied, lead to the recognition of the long-term behaviour, is expressed in the Event Calculus. We present experimental results concerning videos with several humans and objects, temporally overlapping and repetitive behaviours.

Alexander Artikis, George Paliouras
Multi-Source Causal Analysis: Learning Bayesian Networks from Multiple Datasets

We argue that causality is a useful, if not necessary, concept for the integrative analysis of multiple data sources. Specifically, we show that it enables learning causal relations from (a) data obtained over different experimental conditions, (b) data over different variable sets, and (c) data over semantically similar variables that nevertheless cannot be pooled together for various technical reasons. The latter case, in particular, often occurs when analyzing multiple gene-expression datasets. For cases (a) and (b) there already exist preliminary algorithms that address them, albeit with some limitations, while for case (c) we develop and evaluate a new method. Preliminary empirical results provide evidence of increased learning performance of causal relations when multiple sources are combined using our method, versus learning from each individual dataset. In the context of the above discussion we introduce the problem of Multi-Source Causal Analysis (MSCA), defined as the problem of inferring and inducing causal knowledge from multiple sources of data and knowledge. The grand vision of MSCA is to enable the automated or semi-automated, large-scale integration of available data to construct causal models involving a significant part of human concepts.

Ioannis Tsamardinos, Asimakis P. Mariglis
A Hybrid Approach for Improving Prediction Coverage of Collaborative Filtering

In this paper we present a hybrid filtering algorithm that attempts to deal with low prediction coverage, a problem especially present in sparse datasets. We focus on Item HyCov, an implementation of the proposed approach that incorporates an additional user-based step into the base item-based algorithm, in order to take into account the possible contribution of users similar to the active user. A series of experiments was executed to evaluate the proposed approach in terms of coverage and accuracy. The results show that Item HyCov significantly improves both performance measures, while requiring no additional data and only minimal modification of existing filtering systems.
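The two-step structure described above (an item-based prediction with a user-based fallback for coverage holes) can be sketched roughly as follows. The cosine similarity, the fallback condition and the `predict` function are illustrative assumptions, not the paper's Item HyCov definition.

```python
import numpy as np

R = np.array([              # users x items rating matrix, 0 = unrated (toy data)
    [5, 4, 0, 0],
    [4, 5, 0, 1],
    [1, 0, 5, 4],
    [0, 1, 4, 0],
], dtype=float)

def cosine_sim(M):
    """Row-wise cosine similarity, guarding against zero rows."""
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    Mn = M / norms
    return Mn @ Mn.T

item_sim = cosine_sim(R.T)
user_sim = cosine_sim(R)

def predict(u, i):
    rated = np.flatnonzero(R[u])            # items user u has rated
    weights = item_sim[i, rated]
    if weights.sum() > 1e-9:                # item-based step
        return float(weights @ R[u, rated] / weights.sum())
    raters = np.flatnonzero(R[:, i])        # coverage hole: user-based fallback
    w = user_sim[u, raters]
    if w.sum() > 1e-9:
        return float(w @ R[raters, i] / w.sum())
    return float(R[R > 0].mean())           # last resort: global mean rating

print(round(predict(0, 3), 2))
```

Because every prediction is a weighted average of observed ratings, it always stays within the rating scale, while the fallback step fills predictions the item-based step alone could not produce.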

Manolis G. Vozalis, Angelos I. Markos, Konstantinos G. Margaritis
Towards Predicate Answer Set Programming via Coinductive Logic Programming

Answer Set Programming (ASP) is a powerful paradigm based on logic programming for non-monotonic reasoning. Current ASP implementations are restricted to “grounded range-restricted function-free normal programs” and use an evaluation strategy that is “bottom-up” (i.e., not goal-driven). Recent introduction of coinductive Logic Programming (co-LP) has allowed the development of top-down goal evaluation strategies for ASP. In this paper we present this novel goal-directed, top-down approach to executing predicate answer set programs with co-LP. Our method eliminates the need for grounding, allows functions, and effectively handles a large class of predicate answer set programs including possibly infinite ones.

Richard Min, Ajay Bansal, Gopal Gupta
An Adaptive Resource Allocating Neuro-Fuzzy Inference System with Sensitivity Analysis Resource Control

Adaptability in non-stationary contexts is a very important property and a constant desire for modern intelligent systems, and is usually associated with dynamic system behaviors. In this framework, we present a novel methodology for dynamic resource control and optimization of neurofuzzy inference systems. Our approach involves a neurofuzzy model with structural learning capabilities that adds rule nodes when necessary during the training phase. Sensitivity analysis is then applied to the trained network to evaluate the network rules and control their usage dynamically, based on a confidence threshold. Thus, on the one hand, we obtain a well-balanced structure with improved adaptive behavior and, on the other, we propose a way to control and restrict the “curse of dimensionality”. The experimental results on a number of classification problems clearly demonstrate the strengths and benefits of this approach.

Minas Pertselakis, Natali Raouzaiou, Andreas Stafylopatis
A Lazy Approach for Machine Learning Algorithms

Most machine learning algorithms are eager methods in the sense that a model is generated from the complete training data set and is afterwards used to generalize to new test instances. In this work we study the performance of different machine learning algorithms when they are trained using a lazy approach. The idea is to build a classification model once a test instance is received, training it only on a selection of training patterns, those most relevant to the test instance. The method presented here incorporates a dynamic selection of training patterns using a weighting function. The lazy approach is applied to machine learning algorithms based on different paradigms and is validated in different classification domains.

Inés M. Galván, José M. Valls, Nicolas Lecomte, Pedro Isasi
TELIOS: A Tool for the Automatic Generation of Logic Programming Machines

In this paper the tool TELIOS is presented for the automatic generation of a hardware machine corresponding to a given logic program. The machine is implemented on an FPGA, where a corresponding inference machine in application-specific hardware is created, based on a BNF parser, to carry out the inference mechanism. The unification mechanism is based on actions embedded between the non-terminal symbols and is implemented using special modules on the FPGA.

Alexandros C. Dimopoulos, Christos Pavlatos, George Papakonstantinou
GF-Miner: a Genetic Fuzzy Classifier for Numerical Data

Fuzzy logic and genetic algorithms are well-established computational techniques that have been employed to deal with the problem of classification as it arises in the context of data mining. Building on Fuzzy Miner, a recently proposed state-of-the-art fuzzy rule-based system for numerical data, in this paper we propose GF-Miner, a genetic fuzzy classifier that improves Fuzzy Miner firstly by adopting a clustering method to achieve a more natural fuzzy partitioning of the input space, and secondly by optimizing the resulting fuzzy if-then rules with the use of genetic algorithms. More specifically, the membership functions of the fuzzy partitioning are extracted in an unsupervised way using the fuzzy c-means clustering algorithm, while the extracted rules are optimized in terms of the volume of the rulebase and the size of each rule, using two appropriately designed genetic algorithms. The efficiency of our approach is demonstrated through an extensive experimental evaluation using the IRIS benchmark dataset.
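The unsupervised partitioning step based on fuzzy c-means can be sketched compactly as follows. The cluster count, the fuzzifier `m = 2` and the toy data are assumed values; in the paper the resulting memberships would seed the fuzzy partitions of the input space.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means: alternate center and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik = d_ik^{-2/(m-1)} / sum_j d_jk^{-2/(m-1)}
        U = 1.0 / (d ** (2 / (m - 1)) * (d ** (-2 / (m - 1))).sum(axis=1, keepdims=True))
    return centers, U

# Two well-separated toy blobs around (0, 0) and (3, 3).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
centers, U = fuzzy_c_means(X)
print(centers.round(1))
```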

Vicky Tsikolidaki, Nikos Pelekis, Yannis Theodoridis
Fuzzy Dependencies between Preparedness and Learning Outcome

There is a large number of learning management systems as well as intelligent tutoring systems supporting today's educational process. Some of these systems rely heavily on the use and reuse of learning objects. A lot of work has been done on creating, storing, classifying and filtering learning objects with respect to a specific subject, and a considerable amount of research focuses on facilitating the process of reusing already available learning objects. This work is devoted to a study of a decision-making process for recommending the most appropriate learning objects to each particular student.

S. Encheva, S. Tumin
Backmatter
Metadata
Title
Artificial Intelligence Applications and Innovations III
edited by
Iliadis
Maglogiannis
Tsoumakas
Vlahavas
Bramer
Copyright year
2009
Publisher
Springer US
Electronic ISBN
978-1-4419-0221-4
Print ISBN
978-1-4419-0220-7
DOI
https://doi.org/10.1007/978-1-4419-0221-4
