
About this book

The International Symposium on Distributed Computing and Artificial Intelligence 2012 (DCAI 2012) is a stimulating and productive forum where the scientific community can work towards future cooperation in the areas of Distributed Computing and Artificial Intelligence. This conference is a forum in which applications of innovative techniques for solving complex problems are presented. Artificial intelligence is changing our society. Its application in distributed environments, such as the internet, electronic commerce, environment monitoring, mobile communications, wireless devices, and distributed computing, to mention only a few, is continuously increasing, becoming an element of high added value with social and economic potential in industry, quality of life, and research. These technologies are changing constantly as a result of the large research and technical effort being undertaken in both universities and businesses. The exchange of ideas between scientists and technicians from both the academic and industrial sectors is essential to facilitate the development of systems that can meet the ever-increasing demands of today's society.

This edition of DCAI brings together past experience, current work, and promising future trends associated with distributed computing, artificial intelligence, and their application in order to provide efficient solutions to real problems. The symposium is organized by the Bioinformatics, Intelligent Systems and Educational Technology Research Group of the University of Salamanca. The present edition will be held in Salamanca, Spain, from 28th to 30th March 2012.



Mixed Odor Classification for QCM Sensor Data by Neural Networks

Compared with metal oxide semiconductor gas sensors, quartz crystal microbalance (QCM) sensors are highly sensitive to odors. Using an array of QCM sensors, we measure mixed odors and classify them into the original odor classes before mixing, based on neural networks. For simplicity we consider the case where two kinds of odor are mixed, since mixtures of more than two become too complex for analyzing the classification results. We use eight sensors, and four kinds of odor serve as the original odors. The neural network used here is a conventional layered neural network. The classification performance is acceptable, although perfect classification could not be achieved.
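As a rough illustration (with synthetic random data, not the authors' measurements), classifying an 8-sensor QCM array into one of four odor classes with a conventional layered network might look like this NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 8 QCM sensor responses per sample, 4 odor classes.
X = rng.normal(size=(200, 8))
y = rng.integers(0, 4, size=200)
T = np.eye(4)[y]                          # one-hot targets

# One hidden layer; a conventional layered network as in the paper.
W1 = rng.normal(scale=0.1, size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 4)); b2 = np.zeros(4)

def forward(X):
    H = np.tanh(X @ W1 + b1)              # hidden activations
    Z = H @ W2 + b2
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    return H, P / P.sum(axis=1, keepdims=True)

for _ in range(300):                      # plain gradient descent
    H, P = forward(X)
    G = (P - T) / len(X)                  # softmax cross-entropy gradient
    GH = (G @ W2.T) * (1 - H ** 2)        # back-propagate to the hidden layer
    W2 -= 0.5 * H.T @ G;  b2 -= 0.5 * G.sum(axis=0)
    W1 -= 0.5 * X.T @ GH; b1 -= 0.5 * GH.sum(axis=0)

_, P = forward(X)
pred = P.argmax(axis=1)                   # predicted odor class per sample
```

The real task would use measured QCM frequency shifts and labels for the mixed odors; the network shape and training loop are the only parts sketched here.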

Sigeru Omatu, Hideo Araki, Toru Fujinaka, Michifumi Yoshioka, Hiroyuki Nakazumi

A Predictive Search Method of FAQ Corresponding to a User’s Incomplete Inquiry by Statistical Model of Important Words Co-occurrence

We address the predictive search of the FAQ corresponding to an incomplete inquiry that a user is still typing, based on important words defined for each FAQ. The important words co-occur in users' inquiries, and the rates of co-occurrence depend on which FAQ an inquiry corresponds to. The co-occurrence rates of important words in an inquiry are estimated from a statistical model of important-word co-occurrence built from past inquiries and the FAQs corresponding to them. When the highest co-occurrence rate exceeds a threshold set for each FAQ, the inquiry is matched to the corresponding FAQ. Experimental results show that the proposed method can improve the recall rate by 40% for short inquiries and the precision rate by 27% for long inquiries.
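A minimal sketch of the co-occurrence matching idea (the FAQ names, important-word sets, and thresholds below are hypothetical, and a simple overlap rate stands in for the paper's statistical model):

```python
# Hypothetical important-word sets and per-FAQ thresholds.
faq_words = {
    "password_reset": {"password", "reset", "forgot"},
    "shipping":       {"delivery", "shipping", "track"},
}
thresholds = {"password_reset": 0.5, "shipping": 0.5}

def match_faq(inquiry_tokens):
    """Return the FAQ whose important words co-occur most in the
    (possibly incomplete) inquiry, if the rate exceeds its threshold."""
    best, best_rate = None, 0.0
    for faq, words in faq_words.items():
        rate = len(words & set(inquiry_tokens)) / len(words)
        if rate > best_rate:
            best, best_rate = faq, rate
    return best if best_rate >= thresholds.get(best, 1.0) else None

result = match_faq(["i", "forgot", "my", "password"])
```

Because the rate can exceed the threshold before the inquiry is complete, the lookup can fire predictively while the user is still typing.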

Masaki Samejima, Yuichi Saito, Masanori Akiyoshi, Hironori Oka

Decision Making for Sustainable Manufacturing Utilizing a Manufacturing Technology Database

Sustainability is an important issue in manufacturing technologies these days. However, for manufacturing engineers, quality has long been the most important goal. Thus, in evaluating the sustainability of manufacturing technologies, not only environmental aspects but also manufacturing quality should be taken into account. Existing work has proposed an evaluation method for manufacturing technologies named "Total Performance Analysis (TPA)." It quantifies the balance of the value created by a manufacturing process versus its cost and environmental impact, and can point out the segment manufacturing processes that should be improved first. Through a simple case study, the paper concludes that the proposed method is helpful for evaluating the real sustainability of manufacturing processes. In addition, by combining it with a manufacturing technology database, it becomes possible not only to point out the manufacturing processes to be improved, but also to suggest alternative processes.

Nozomu Mishima

Solving Time-Dependent Traveling Salesman Problems Using Ant Colony Optimization Based on Predicted Traffic

In this paper, we propose an ant colony optimization based on predicted traffic for time-dependent traveling salesman problems (TDTSP), in which the travel time between cities changes with time. The prediction values required for the search are assumed to be given in advance. We previously proposed a method to improve the search rate of the Max-Min Ant System (MMAS) for static TSPs. In the current work, that method is extended so that predicted travel times can be handled, and it is formalized in detail. We also present a method of generating TDTSP instances for evaluating the proposed method. Experimental results using benchmark problems with 51 to 318 cities suggest that the proposed method outperforms the conventional MMAS in search rate.
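The time-dependent cost structure can be illustrated with a small sketch. The travel-time table below is hypothetical, and a brute-force search over a tiny instance stands in for the MMAS, which would not fit in a few lines:

```python
import itertools
import random

random.seed(1)
N, SLOTS, SLOT_LEN = 5, 4, 10.0

# Hypothetical predicted traffic: travel[t][i][j] is the travel time
# from city i to city j when departing during time slot t.
travel = [[[random.uniform(1, 9) for _ in range(N)] for _ in range(N)]
          for _ in range(SLOTS)]

def tour_time(tour):
    """Total travel time of a closed tour when each edge cost depends
    on the (predicted) departure time, as in a TDTSP."""
    t = 0.0
    for i, j in zip(tour, tour[1:] + tour[:1]):
        slot = min(int(t // SLOT_LEN), SLOTS - 1)
        t += travel[slot][i][j]
    return t

# Exhaustive search on this toy instance (ACO would replace this).
best = min(itertools.permutations(range(1, N)),
           key=lambda p: tour_time((0,) + p))
opt = tour_time((0,) + best)
```

In the MMAS variant, `tour_time` would be evaluated inside the ants' tour construction and pheromone update instead of the exhaustive `min`.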

Hitoshi Kanoh, Junichi Ochiai

Modeling Shared-Memory Metaheuristic Schemes for Electricity Consumption

This paper tackles the problem of modeling a shared-memory metaheuristic scheme. A model of the execution time allows us to decide, at run time, the number of threads to use in order to reduce the execution time. A parameterized metaheuristic scheme is used, so different metaheuristics and hybridizations can be applied to a particular problem, making it easier to obtain a satisfactory metaheuristic for that problem. The model of the execution time, and consequently the optimum number of threads, depends on a number of factors: the problem to be solved, the metaheuristic scheme and the implementation of its basic functions, the computational system on which the problem is being solved, etc. Obtaining a satisfactory model and an autotuning methodology is therefore not an easy task. This paper presents an autotuning methodology for shared-memory parameterized metaheuristic schemes and its application to a problem of minimizing electricity consumption in the exploitation of wells. The model and the methodology work satisfactorily, which allows us to reduce the execution time and to obtain lower electricity consumption than previously achieved.

Luis-Gabino Cutillas-Lozano, José-Matías Cutillas-Lozano, Domingo Giménez

Virtual Laboratory for the Training of Health Workers in Italy

A virtual immersive laboratory for the training of health workers supports the learning of Italian as a second language by foreign learners. Knowledge of the language is important for health workers because only through communication is it possible to build relationships between carer and user. We propose a persistent virtual environment under controlled conditions within the private network of the classroom, shared between teacher and students, which can be implemented at low cost in a realistic setting. A few example scenarios and use cases are shown. We also introduce a frequently-asked-questions chatbot avatar as a language training tool to increase the attention of learners and thereby improve their second-language skills.

Antonella Gorrino, Giovanni De Gasperis

The “Good” Brother: Monitoring People Activity in Private Spaces

The population over 50 will rise by 35% by 2050. Attention to the needs of the elderly and disabled is therefore, in all developed countries, one of the great challenges of today's social and economic policies. There is worldwide interest in systems for the analysis of people's activities, especially those of people most in need.

Vision systems for surveillance and behaviour analysis have spread in recent years. While cameras are widely used in outdoor environments, few are employed in private spaces, where they are replaced by other devices that provide less information. This is mainly due to people's worries about maintaining privacy and their feeling of being continuously monitored by a "big brother".

We propose a methodology for the design of a multisensor network in private spaces that meets privacy requirements. People would accept video-based surveillance and safety services if the system can ensure their privacy under any circumstance, as a kind of “good brother”.

Jose R. Padilla-López, Francisco Flórez-Revuelta, Dorothy N. Monekosso, Paolo Remagnino

Distribution and Selection of Colors on a Diorama to Represent Social Issues Using Cultural Algorithms and Graph Coloring

We present a problem, known in the social modeling literature, associated with the adequate distribution and color selection of societies in a diorama in order to specify the relationships between them, and between their principal attributes, which represent the symbolic capital of a society. Our case study is related to the diversity of cultural patterns described in Memory Alpha. We use 8 principal attributes with a range of 64 colors. The purpose of this research is to apply the cultural algorithms approach together with graph coloring to solve the proposed problem and subsequently represent the solution within a diorama. Memory Alpha comprises 1087 societies, which allows us to demonstrate that matching social issues yields a correct distribution and color selection. In summary, we propose an innovative representation for the location of societies.
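The graph-coloring side of the approach can be sketched as a simple greedy coloring. The society names and adjacency below are hypothetical, and the greedy rule stands in for the paper's cultural-algorithm search:

```python
# Greedy graph coloring sketch: neighbouring societies in the diorama
# must not share a color (the paper works with a palette of 64 colors).
def greedy_coloring(adjacency, n_colors=64):
    colors = {}
    # Color highest-degree nodes first, taking the lowest free color.
    for node in sorted(adjacency, key=lambda n: -len(adjacency[n])):
        used = {colors[v] for v in adjacency[node] if v in colors}
        colors[node] = next(c for c in range(n_colors) if c not in used)
    return colors

# Hypothetical mini-graph; an edge marks societies that interact.
adj = {"A": {"B", "C", "D"}, "B": {"A", "C"}, "C": {"A", "B"}, "D": {"A"}}
coloring = greedy_coloring(adj)
```

A cultural algorithm would instead evolve candidate colorings, using a belief space of accepted assignments to bias the search.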

Víctor Ricardo Cruz-Álvarez, Fernando Montes-Gonzalez, Alberto Ochoa, Rodrigo Edgar Palacios-Leyva

‘Believable’ Agents Build Relationships on the Web

In this paper we present the Believable Negotiator – the formalism behind a Web business negotiation technology that treats relationships as a commodity. It supports building, maintaining, evolving, and passing relationships on to other agents, and utilises such relationships in agent interaction. The Believable Negotiator also takes into account "relationship gossip" – the information, supplied by its information-providing agents, about the position of the respective agents in their networks of relationships beyond the trading space. It is embodied in a 3D web space that is translated to different virtual-world platforms, enabling the creation of an integrated 3D trading space geared for Web 3.0.

John Debenham, Simeon Simoff

Speed-Up Method for Neural Network Learning Using GPGPU

A GPU is a circuit dedicated to drawing graphics, and it is therefore built from many simple arithmetic units. This massive parallelism can be exploited beyond graphics processing. In this paper, the neural network, one of the pattern recognition algorithms, is accelerated using the GPU. In neural network learning, there are many points that can be processed at the same time. We propose a method that parallelizes the neural network at three such points: across networks with different initial weight coefficients, across learning patterns, and across the neurons in a layer. These parallelizations are used in combination, but the first can be applied independently; it is therefore employed as the baseline against which the proposed methods are compared. As a result, the proposed method is 6 times faster than the comparison method.

Yuta Tsuchida, Michifumi Yoshioka, Sigeru Omatu

A Fast Heuristic Solution for the Commons Game

Game Theory can be used to capture and model the phenomenon of the exploitation of the environment by human beings. The Commons Game is a simple and concise game that elegantly formulates the different behaviors of humans toward the exploitation of resources (also known as "commons") as seen from a game-theoretic perspective. The game is particularly difficult because it is a multi-player game which requires both competition and cooperation. Besides, to augment the complexity, the various players are unaware of the moves made by the others – they merely observe the outcomes of their respective moves. This makes the game extremely difficult to analyze. In the Commons Game, an ensemble of approaches towards the exploitation of the commons can be modeled by colored cards. In this paper, we consider the cases when, with some probability, the user is aware of the approach (color) which the other players will use in the exploitation of the commons. We investigate the problem of determining the best probability value with which a player can play each color in order to maximize his ultimate score. Our solution to this problem is an "intelligent" heuristic algorithm which determines (i.e., locates in the corresponding space) feasible probability values to be used so as to obtain the maximum average score. The basis for such a strategy is our belief that results obtained in this manner can be used to work towards an "optimal" solution, or towards a solution which utilizes such values in a UCT-type algorithm. The solution has been rigorously tested by simulations, and the results obtained are, in our opinion, quite impressive. Indeed, as far as we know, this is a pioneering AI-based heuristic solution to the game under the present model of computation.

Rokhsareh Sakhravi, Masoud T. Omran, B. John Oommen

Picture Information Shared Conversation Agent: Pictgent

Recently, various chatterbots have been proposed to simulate conversation. In this study, we propose a novel chatterbot system that utilizes "pictures" as a model of the situation in order to share common knowledge between users and the chatterbot by showing pictures to users. We present the basic concept and system structure of the proposed system, and also describe an algorithm for estimating user interests.

Miki Ueno, Naoki Mori, Keinosuke Matsumoto

A Mixed Portfolio Selection Problem

The mixed portfolio selection problem studied in this paper corresponds to a situation of financial risk management in which some return rates are mathematically described by random variables and others by fuzzy numbers. Both the Markowitz probabilistic model and a possibilistic portfolio selection model are generalized. A calculation formula for the optimal solution of the portfolio problem and a formula giving the minimum value of the associated risk are proved.

Irina Georgescu, Jani Kinnunen

Towards a Service Based on “Train-to-Earth” Wireless Communications for Remotely Managing the Configuration of Applications Inside Trains

This paper describes a distributed service based on "train-to-earth" wireless communication, which allows maintenance engineers to remotely monitor and update the configuration of the applications running inside a fleet of trains. It is a very useful tool that saves engineers from moving from their offices to the trains to perform maintenance tasks. This service is the result of five years of collaboration with a railway company in the north of Spain, and it represents the next step after the deployment of an innovative "train-to-earth" communication architecture in this company. Today, both the communication architecture and this service are being tested in the installations of the railway company with successful results.

Itziar Salaberria, Roberto Carballedo, Asier Perallos

CDEBMTE: Creation of Diverse Ensemble Based on Manipulation of Training Examples

Ensemble methods like Bagging and Boosting, which combine the decisions of multiple hypotheses, are among the strongest existing machine learning methods. The diversity of the members of an ensemble is known to be an important factor in determining its generalization error. We present a new method for generating ensembles, named CDEBMTE (Creation of Diverse Ensemble Based on Manipulation of Training Examples), that directly constructs diverse hypotheses by manipulating the training examples in three ways: (1) sub-sampling training examples, (2) decreasing/increasing error-prone training examples, and (3) decreasing/increasing neighbor samples of error-prone training examples. Experimental results using two well-known classifiers as base learners demonstrate that this approach consistently achieves higher predictive accuracy than the base classifier, AdaBoost, and Bagging. CDEBMTE also outperforms AdaBoost more prominently as the training set becomes larger.
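A minimal sketch of manipulation (1), sub-sampling, using a hypothetical nearest-centroid base learner on synthetic two-class data (this is not the paper's experimental setup, only the diversity-by-subsampling idea):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-class data with well-separated clusters.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def nearest_centroid(Xs, ys):
    """A weak base learner: classify by the closer class centroid."""
    c = np.array([Xs[ys == k].mean(axis=0) for k in (0, 1)])
    return lambda Q: np.argmin(((Q[:, None, :] - c) ** 2).sum(-1), axis=1)

# Diversity via manipulation of training examples: each ensemble member
# is trained on a different random sub-sample of the training set.
members = []
for _ in range(7):
    idx = rng.choice(len(X), size=60, replace=False)
    members.append(nearest_centroid(X[idx], y[idx]))

votes = np.stack([m(X) for m in members])
pred = (votes.mean(axis=0) > 0.5).astype(int)   # majority vote
acc = (pred == y).mean()
```

Manipulations (2) and (3) would additionally reweight or duplicate error-prone examples and their neighbors before each member is trained.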

Hamid Parvin, Sajad Parvin, Zahra Rezaei, Moslem Mohamadi

Enhancing the Localization Precision in Aircraft Parking Areas of Airports through Advanced Ray-Tracing Techniques

Parking areas in airports, where the aircraft is loaded, refueled, and boarded, present a high risk of accidents due to the large number of vehicles and people involved in handling activity. For airport ground surveillance, different technologies are deployed: radar, CCTV, GPS, and trilateration systems. All these solutions have important limitations when used for surveillance in apron and stand areas near the terminal buildings. To solve this problem, we propose new localization algorithms based on fingerprinting techniques that, using the available WLAN infrastructure and the ray-tracing multipath information provided by the newFASANT simulation tool, increase accuracy and safety in outdoor areas of airports.

Antonio del Corte, Oscar Gutierrez, José Manuel Gómez

Variability Compensation Using NAP for Unconstrained Face Recognition

The variability present in unconstrained environments represents one of the open challenges in automated face recognition systems. Several techniques have been proposed in the literature to cope with this problem, most of them tailored to compensate for one specific source of variability, e.g., illumination or pose. In this paper we present a general variability compensation scheme based on the Nuisance Attribute Projection (NAP) that can be applied to compensate for any kind of variability factor that affects face recognition performance. Our technique reduces the intra-class variability by finding a low-dimensional variability subspace. The approach is assessed on a database from the NIST still face recognition challenge "The Good, the Bad, and the Ugly" (GBU). The results achieved with our implementation of a state-of-the-art system based on sparse representation are improved significantly by incorporating our variability compensation technique. These results are also compared to the GBU challenge results, highlighting the benefits of adequate variability compensation schemes in this kind of uncontrolled environment.

Pedro Tome, Ruben Vera-Rodriguez, Julian Fierrez, Javier Ortega-García

Comparing Features Extraction Techniques Using J48 for Activity Recognition on Mobile Phones

Nowadays, mobile phones are not only used for mere communication such as calling or sending text messages; they are becoming the main computing device in people's lives. Moreover, thanks to the embedded sensors (accelerometer, digital compass, gyroscope, GPS, and so on), it is possible to improve the user experience. Activity recognition aims to recognize the actions and goals of individuals from a series of observations; in this case, an accelerometer is used.

Gonzalo Blázquez Gil, Antonio Berlanga de Jesús, José M. Molina Lopéz

INEF12Basketball Dataset and the Group Behavior Recognition Issue

Activity recognition is one of the most prolific fields of research, and new fields that expand its possibilities have emerged; one of them is group behavior recognition. This field does not limit the number of elements in the scene, and many new elements must be analyzed. Each group, like each individual, has its own behavior, but this behavior depends on the group's elements and on the relationships between them. All of this makes group behavior recognition a new field of research, with some similarities to activity recognition but one that must be studied on its own. It is a novel field in which there is little research and few datasets available to researchers, which slows scientific progress. This paper provides a complete description of the problem domain, with all its possible variants, gives a formal description, and presents a novel architecture for addressing the issue. It also describes a specific group behavior recognition dataset and shows how it can be used.

Alberto Pozo, Jesús García, Miguel A. Patricio

Development of Interactive Virtual Voice Portals to Provide Municipal Information

In this paper, we describe a voice portal designed to provide municipal information by phone. It includes the set of modules required to automatically recognize users' utterances, understand their meaning, decide the following response, and generate a speech response. The different functionalities include consulting information about the City Council, accessing city information, carrying out several steps and procedures, completing surveys, accessing the citizen's mailbox to leave messages with suggestions and complaints, and being transferred to the City Council to be attended by a teleoperator. The voice portal is therefore a pioneer in offering an extensive and comprehensive range of public services accessible through speech, creating a new communication channel which is useful, efficient, and easy to use. In addition, the voice portal improves the support of public services by increasing availability, flexibility, and control, and by reducing costs and missed calls. The paper describes the application software, the infrastructure required for its operation 24 hours a day, and preliminary results of a quality assessment.

David Griol, María García-Jiménez

Gesture Recognition Using Mobile Phone’s Inertial Sensors

The availability of inertial sensors embedded in mobile devices has enabled a new type of interaction based on the movements or “gestures” made by the users when holding the device. In this paper we propose a gesture recognition system for mobile devices based on accelerometer and gyroscope measurements. The system is capable of recognizing a set of predefined gestures in a user-independent way, without the need of a training phase. Furthermore, it was designed to be executed in real-time in resource-constrained devices, and therefore has a low computational complexity. The performance of the system is evaluated offline using a dataset of gestures, and also online, through some user tests with the system running in a smart phone.

Xian Wang, Paula Tarrío, Eduardo Metola, Ana M. Bernardos, José R. Casar

Towards a Lightweight Mobile Semantic-Based Approach for Enhancing Interaction with Smart Objects

This work describes a semantic extension for a user-smart object interaction model based on the ECA (Event-Condition-Action) paradigm. In this approach, smart objects publish their sensing (event) and action capabilities in the cloud, and mobile devices retrieve them and act as mediators to configure personalized behaviours for the objects. In this paper, the information handled by this interaction system has been shaped according to several semantic models that, together with the integration of an embedded ontological and rule-based reasoner, are exploited in order to automatically detect incompatible ECA rule configurations and to support complex ECA rule definitions and execution. This semantic extension may significantly improve the management of smart spaces populated with numerous smart objects from mobile personal devices, as it facilitates the configuration of coherent ECA rules.
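A toy illustration of how incompatible ECA rule configurations might be detected (the rule names, triggers, and conflict table are all hypothetical; the paper uses an embedded ontological reasoner rather than this hand-coded check):

```python
# Hypothetical Event-Condition-Action rules configured for smart objects.
rules = [
    {"event": "door_open", "condition": "night", "action": "light_on"},
    {"event": "door_open", "condition": "night", "action": "light_off"},
    {"event": "motion",    "condition": "away",  "action": "alarm_on"},
]

# Hypothetical pairs of mutually exclusive actions.
conflicts = {frozenset({"light_on", "light_off"})}

def incompatible_pairs(rules):
    """Flag rule pairs that fire on the same event/condition but
    request conflicting actions."""
    bad = []
    for i, a in enumerate(rules):
        for b in rules[i + 1:]:
            same_trigger = ((a["event"], a["condition"])
                            == (b["event"], b["condition"]))
            if same_trigger and frozenset({a["action"], b["action"]}) in conflicts:
                bad.append((a["action"], b["action"]))
    return bad

clashes = incompatible_pairs(rules)
```

With an ontological reasoner, the conflict table would instead be inferred from semantic descriptions of the actions and their effects.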

Josué Iglesias, Ana M. Bernardos, Luca Bergesio, Jesús Cano, José R. Casar

Using GPU for Multi-agent Multi-scale Simulations

Multi-Agent Systems (MAS) are an interesting way to create models and simulators, and they are widely used to model complex systems. As the complex-systems community tends to build larger models to fully represent real systems, the need for computing power rises significantly. Thus MAS often lead to long, computing-intensive simulations. Parallelizing such a simulation is complex, and its execution requires access to large computing resources. In this paper, we present the adaptation of a MAS, Sworm, to a Graphics Processing Unit. We show that such an adaptation can improve the performance of the simulator, and we advocate a wider use of the GPU in agent-based models, in particular for simple agents.

G. Laville, K. Mazouzi, C. Lang, N. Marilleau, L. Philippe

MultiAgent Systems for Production Planning and Control in Supply Chains

This paper presents a generic and extensible multiagent model for production planning and control in supply chains. The purpose of this work is to simulate different control architectures for a given supply chain in order to allow practitioners to choose the control strategy that realizes the best benefits and the greatest reactivity. The production planning is carried out using the material requirements planning method. Three types of control architecture are simulated on the basis of a test-case application: centralized, distributed, and mixed.

Faten Ben Hmida, Anne Seguy, Rémy Dupas

Multi-agent Bidding Mechanism with Contract Log Learning Functionality

This paper addresses an agent-based bidding mechanism for trading between supplier sites and demand sites. A bid includes unit price, amount, and storage cost. The trading environment is assumed to be completely competitive, which means an agent cannot observe the information of competing agents. To increase the success rate of bidding, an agent must learn its bidding strategy from its past trading log. Our agent estimates an appropriate bidding price from past "success bids" and "failure bids" using statistical analysis. Experimental results show that an agent with this learning functionality increases its rate of "success bids" by 45.6% compared to an agent without it.
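A rough sketch of estimating a bid price from a past contract log (the log values and the boundary heuristic below are hypothetical stand-ins for the paper's statistical analysis):

```python
# Hypothetical past contract log: (unit price, won?) pairs.
log = [(102.0, False), (98.5, True), (95.0, True), (104.0, False),
       (97.0, True), (101.5, False), (99.0, True)]

def estimate_bid(log):
    """Estimate a bid price from past success/failure bids: aim near
    the top of the winning price range, without crossing into the
    range where past bids failed."""
    wins = [p for p, ok in log if ok]
    losses = [p for p, ok in log if not ok]
    if not wins:
        return min(losses) * 0.95          # undercut the cheapest failure
    upper = min(losses) if losses else max(wins)
    return min(max(wins), upper - 0.5)

price = estimate_bid(log)
```

Here the boundary between "success" and "failure" prices is sharp; with noisy, overlapping logs one would fit distributions to the two sets instead of taking a min/max.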

Kazuhiro Abe, Masanori Akiyoshi, Norihisa Komoda

Bio-inspired Self-adaptive Agents in Distributed Systems

Cellular differentiation is the mechanism by which cells in a multicellular organism become specialized to perform specific functions in a variety of tissues and organs. Different kinds of cell behavior can be observed during embryogenesis: cells double, change shape, and attach at and migrate to various sites. We construct a framework for building and operating distributed applications based on the notion of cellular differentiation and division in cellular slime molds, e.g., Dictyostelium discoideum, and in mycelium. Since distributed systems are dynamic and may partially malfunction, e.g., through network partitioning, it is almost impossible to know exactly which functions each component should provide. The framework enables software components, called agents, to differentiate their functions according to their roles in the whole application and to resource availability, just as cells do. It treats the undertaking and delegation of functions between agents as their differentiation factors: when an agent delegates a function to another agent, if the former has the function, its own function becomes less developed while the latter's becomes well developed.

Ichiro Satoh

PANGEA – Platform for Automatic coNstruction of orGanizations of intElligent Agents

This article presents PANGEA, an agent platform to develop open multiagent systems, specifically those including organizational aspects such as virtual agent organizations. The platform allows the integral management of organizations and offers tools to the end user. Additionally, it includes a communication protocol based on the IRC standard, which facilitates implementation and remains robust even with a large number of connections. The introduction of a CommunicationAgent and a Sniffer make it possible to offer web services for the distributed control of interaction.

Carolina Zato, Gabriel Villarrubia, Alejandro Sánchez, Ignasi Barri, Edgar Rubión, Alicia Fernández, Carlos Rebate, José A. Cabo, Téresa Álamos, Jesús Sanz, Joaquín Seco, Javier Bajo, Juan M. Corchado

An OOP Agent-Based Model for the Activated Sludge Process Using MATLAB

The aim of this work is to study the feasibility of using agent-based modelling and Object Oriented Programming (OOP) to study the activated sludge process. A model in MATLAB has been proposed, and experiments have been developed analyzing the behaviour of the model when some of its parameters are changed.

María Pereda, Jesús M. Zamarreño

A Case of Dictator Game in Public Finances–Fiscal Illusion between Agents

This paper discusses Fiscal Illusion as a special case of Agent and Multiagent Systems. Under fiscal illusion, taxpayers do not realize how much they really pay to the State and therefore do not properly evaluate public actions. We study this issue as a particular case of a 'Dictator game', with relevant applications not only in Public Finances but also in specific domains like pervasive agents and Ambient Intelligence, or user-centered applications and assisting agents.

Paulo Mourão

Decentralised Regression Model for Intelligent Forecasting in Multi-agent Traffic Networks

The distributed nature of complex stochastic systems, such as traffic networks, can be suitably represented by a multi-agent architecture. Centralised data processing and mining methods experience difficulties when data sources are geographically distributed and transmission is expensive or not feasible. It is also known from practice that most drivers rely primarily on their own experience (historical data). We consider the problem of decentralised travel-time estimation. The vehicles in the system are modelled as autonomous agents, each containing an intelligent module for data processing and mining. Each agent uses a local linear regression model for prediction, and agents can adjust their model parameters with others on demand using the proposed resampling-based consensus algorithm. We illustrate our approach with case studies on decentralised travel-time prediction in the southern part of the city of Hanover (Germany).
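The local-fit-plus-consensus idea can be sketched as follows (synthetic data; simple pairwise parameter averaging stands in for the paper's resampling-based consensus algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])         # hypothetical travel-time model

def local_fit(n):
    """Each vehicle agent fits a linear model on its own noisy data."""
    X = np.c_[np.ones(n), rng.normal(size=(n, 2))]
    y = X @ true_w + rng.normal(scale=0.3, size=n)
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Five agents estimate locally, then repeatedly average parameters
# pairwise (a gossip-style consensus on the model coefficients).
params = [local_fit(40) for _ in range(5)]
for _ in range(20):                          # gossip rounds
    i, j = rng.choice(5, size=2, replace=False)
    mean = (params[i] + params[j]) / 2
    params[i] = params[j] = mean

consensus = np.mean(params, axis=0)
err = np.abs(consensus - true_w).max()
```

No raw data leaves an agent; only the three regression coefficients are exchanged, which is what makes the scheme attractive when transmission is expensive.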

Jelena Fiosina

Multi-agent Applications in a Context-Aware Global Software Development Environment

The need for skilled workers, improving communication facilities, growing application complexity, restrictions on software development time and cost, and the need for quality and accuracy are part of the new scenario in which global software development was introduced. However, it has brought new challenges in communication, coordination, and culture. Context information can help participants be aware of occurring events and thus improve their communication and interactions. This paper presents a multi-agent mechanism for processing context information, as well as a mechanism for the allocation of human resources. A framework is also introduced to encapsulate some of the functionality required by a knowledge-based multi-agent system.

Helio H. L. C. Monte-Alto, Alberto B. Biasão, Lucas O. Teixeira, Elisa H. M. Huzita

SPAGE: An Action Generation Engine to Support Spatial Patterns of Interaction in Multi-agent Simulations

Space is a significant resource in human interaction. In this paper, we analyse the prospects of utilising space as an important resource in agent interaction. To do this, we created a software engine called SPAGE that generates communicative action signals for an agent based on the current state of the agent and its environment. These action signals are then evaluated against a set of conditions that are logically deduced from the literature on human face-to-face interaction. Depending on the success or failure outcome of the evaluation, the agent receives a reward or a punitive signal; in either case, the states of both the agent and its environment are updated. The ultimate rationale is to maximise the number of rewards for an agent. SPAGE is incorporated into a simulation platform called the K-space in order to verify the believability of the action signals, and also to analyse the effects a sequence of actions can have in giving rise to spatial-orientational patterns of agent interaction. SPAGE is modular in nature, which makes future modifications or extensions easy.

Kavin Preethi Narasimhan

A Multi-Agent Recommender System

The large number of pages on Websites is a problem for users, who waste time looking for the information they really want. Knowledge about users' previous visits may provide patterns that allow the customization of the Website. This concept is known as an Adaptive Website: a Website that adapts itself in order to improve the user's experience. Several Web Mining algorithms have been proposed for adapting a Website. In this paper, a recommender system using agents with two different algorithms (association rules and collaborative filtering) is described. Both algorithms are incremental and work with binary data. Results show that this multi-agent approach combining different algorithms is capable of improving users' satisfaction.
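As an illustration only (not the paper's algorithm), collaborative filtering over binary visit data can be sketched with Jaccard similarity between users; the page names and data below are hypothetical:

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of visited pages."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Binary visit data: user -> set of pages visited (made-up example)
visits = {
    "u1": {"home", "news", "sports"},
    "u2": {"home", "news"},
    "u3": {"home", "sports", "weather"},
}

def recommend(user, visits, k=2):
    """Recommend pages visited by similar users but not yet by `user`,
    scored by accumulated user-user similarity."""
    mine = visits[user]
    scores = {}
    for other, pages in visits.items():
        if other == user:
            continue
        sim = jaccard(mine, pages)
        for page in pages - mine:
            scores[page] = scores.get(page, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("u2", visits))  # → ['sports', 'weather']
```

An incremental variant would update the visit sets and similarities as new page views arrive, rather than recomputing from scratch.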

A. Jorge Morais, Eugénio Oliveira, Alípio Mário Jorge

SGP: Security by Guaranty Protocol for Ambient Intelligence Based Multi Agent Systems

Ambient intelligence (AmI) is an emerging multidisciplinary area based on ubiquitous computing, ubiquitous communication and intelligent user interfaces. AmI promises a world where people are surrounded by intelligent interfaces embedded in everyday objects. However, the development of AmI systems is complex and needs robust technologies that respond to AmI requirements such as autonomy, adaptability and context awareness. One of the most prevalent alternatives is the Multi Agent System (MAS), which can provide most of the suitable characteristics. Yet the success of AmI-based MAS will depend on how secure they can be made. This paper presents an approach to AmI-based MAS security, where each agent represents an object or a user. A set of agents is called a group, and each group has a specific agent called its representant. The key idea is based on encryption keys and guaranty. Members of the same group share a Common Public Key (CPK), and to communicate with the representant they share a Communication Key (Ck). Furthermore, if a new agent wants to communicate with the group, at least one agent belonging to the group must know it; this agent is called the Guarantor Agent. The aim is to ensure that sensitive data and messages circulating among agents remain private and that only trusted agents can access them.

Nardjes Bouchemal, Ramdane Maamri

Discrete Dynamical System Model of Distributed Computing with Embedded Partial Order

The traditional models of distributed computing systems employ the mathematical formalisms of discrete event dynamical systems along with Petri Nets. However, such models are complex to analyze and computationally expensive. Interestingly, the evolving distributed computing systems closely resemble the discrete dynamical systems. This paper constructs and analyzes the model of distributed computing systems by applying mathematical formalisms of the discrete dynamical systems. The proposed model embeds the partial ordering of states under happened-before relation in the set of globally observable states. The stability of the proposed model is analyzed using the first order linear approximation.
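The happened-before partial order over observable states can be illustrated with vector clocks — a standard construction shown here only as a sketch, not as the paper's own model:

```python
def happened_before(vc_a, vc_b):
    """True iff the event stamped vc_a happened before the event stamped
    vc_b: every component is <=, and at least one is strictly <."""
    return all(a <= b for a, b in zip(vc_a, vc_b)) and vc_a != vc_b

# Two processes: e1 causally precedes e2 (via a message); e3 is concurrent
e1 = [1, 0]
e2 = [1, 1]
e3 = [0, 1]
print(happened_before(e1, e2))  # True
print(happened_before(e1, e3))  # False: concurrent
print(happened_before(e3, e1))  # False: concurrent
```

States that are incomparable under this relation are exactly the concurrent ones, which is what makes the order partial rather than total.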

Susmit Bagchi

Evaluation of High Performance Clusters in Private Cloud Computing Environments

In recent years, an increasing number of organizations — including universities, research centers, and businesses — have begun to use Cloud Computing technology as an essential and promising tool for optimizing existing computing resources and increasing their efficiency. Among its most important advantages is a scalable and low-cost computing system that is adapted to the needs of the client, who only pays for the resources used. On the other hand, the use of High Performance Clusters for solving complex problems is increasing. If we unify both technologies, we will be able to produce flexible, scalable, and low-cost High Performance Clusters. This paper analyzes the operation and performance of a High Performance Cluster on a Cloud Infrastructure.

J. Gómez, E. Villar, G. Molero, A. Cama

Power Conservation in Wired Wireless Networks

A joint university/industry collaboration has designed a system for conserving power in LTE (Long Term Evolution) wireless networks for mobile devices such as phones. The solution may be applied to any wireless technology in which all stations are wired to a backbone (e.g. it may be applied to an 802.11 mesh). This paper describes the solution method, which is based on a distributed multiagent system in which one agent is associated with each and every station. Extensive simulations show that the system delivers robust performance: the computational overhead is within acceptable limits, the solution is stable in the presence of unexpected fluctuations in demand patterns, and scalability is achieved by the agents making decisions locally.

John Debenham, Simeon Simoff

Risk Assessment Modeling in Grids at Component Level: Considering Grid Resources as Repairable

Service level agreements (SLAs), as formal contractual agreements, increase the confidence level between the end user and the Grid resource provider, as compared to the best-effort approach. On the other hand, SLAs fall short of assessing the risk involved in accepting an SLA; risk assessment in Grid computing fills that gap. Current approaches to risk assessment operate at the node level. This work differs in that it provides risk assessment information at the granularity of components. A risk assessment model at the component level based on a Non-Homogeneous Poisson Process (NHPP) is proposed. Grid failure data are used for experimentation at the component level. The model selection is validated using a goodness-of-fit test along with graphical approaches. The experimental results provide detailed risk assessment information at the component level, which can be used by the Grid resource provider to manage and use Grid resources efficiently.
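As a hedged sketch of the kind of computation an NHPP-based model enables (the power-law intensity form and all parameter values below are illustrative assumptions, not taken from the paper), the expected number of component failures in a time interval can be computed from the mean-value function:

```python
def expected_failures(t, beta, theta):
    """Mean cumulative failures m(t) = (t/theta)**beta for a
    power-law (Crow-AMSAA style) NHPP."""
    return (t / theta) ** beta

def interval_risk(t1, t2, beta, theta):
    """Expected number of failures of one component in (t1, t2]."""
    return expected_failures(t2, beta, theta) - expected_failures(t1, beta, theta)

# Hypothetical repairable component: beta > 1 means the failure
# intensity is increasing with operating time
print(round(interval_risk(100, 200, beta=1.5, theta=50.0), 3))  # → 5.172
```

With estimates of beta and theta fitted per component from failure data, such interval risks can be aggregated to assess whether an SLA is safe to accept.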

Asif Sangrasi, Karim Djemame

R & D Cloud CEIB: Management System and Knowledge Extraction for Bioimaging in the Cloud

The management system and knowledge extraction for bioimaging in the cloud (R & D Cloud CEIB) proposed in this article uses the services offered by the centralization of bioimaging through the Valencian Biobank of Medical Imaging (GIMC in Spanish) as a basis for managing and extracting knowledge from a bioimaging bank, providing that knowledge as high-added-value, expert services to the Electronic Patient History System (HSE), thus bringing the results of R & D to the patient and improving the quality of the information contained therein. R & D Cloud CEIB has four general modules: search engine (SE), clinical trials manager (GEBID), anonymizer (ANON) and knowledge engine (BIKE). The BIKE is the central module; through its submodules it analyses and generates knowledge to provide to the HSE through services. The technology used in R & D Cloud CEIB is completely based on Open Source.

Within the BIKE, we focus on the development of the classifier module (BIKEClassifier), which aims to establish a method for extracting biomarkers from bioimaging and subsequently analysing them to obtain a classification of the available bioimaging pools, following the GIMC diagnostic experience.

Jose Maria Salinas, Maria de la Iglesia-Vaya, Luis Marti Bonmati, Rosa Valenzuela, Miguel Cazorla

Performance Comparison of Hierarchical Checkpoint Protocols in Grid Computing

A Grid infrastructure is a large set of nodes geographically distributed and connected by a communication network. In this context, fault tolerance is a necessity imposed by the distribution, as any node can fail at any moment and the mean time between failures decreases sharply. To improve the robustness of supercomputing applications in the presence of failures, many techniques have been developed to make the system resistant to these faults. Fault tolerance is intended to allow the system to provide service as specified in spite of the occurrence of faults. To meet this need, several techniques have been proposed in the literature. We study the protocols based on rollback recovery, classified into two categories: checkpoint-based rollback recovery protocols and message logging protocols. However, the performance of a protocol depends on the characteristics of the system, the network and the running applications. Faced with the constraints of large-scale environments, many algorithms from the literature have proved inadequate. Given an application and a system environment, it is not easy to identify the recovery protocol that is most appropriate for a cluster or a hierarchical environment such as grid computing. Hence there is a need to implement these protocols in a hierarchical fashion and to compare their performance in grid computing. In this paper, we propose hierarchical versions of these protocols, which we have implemented, and we compare their performance in clusters and grid computing using the OMNeT++ simulator.

Ndeye Massata Ndiaye, Pierre Sens, Ousmane Thiare

A Scientific Computing Environment for Accessing Grid Computing Systems Using Cloud Services

This paper shows how virtualization techniques can be introduced into the grid computing infrastructure to provide a transparent and homogeneous scientific computing environment. Today's trends in grid computing propose a shared model where different organizations make use of a heterogeneous grid, frequently a cluster of clusters (CoC) of computing and network resources. This paper shows how a grid computing model can be virtualized, obtaining a simple and homogeneous interface that can be offered to clients. The proposed system, called virtual grid, uses virtualization support and is developed from the integration of standard grid and cloud computing technologies. Furthermore, a Scientific Computing Environment (SCE) has been developed to provide uniform access to the virtual grid.

Mariano Raboso, José A. de la Varga, Myriam Codes, Jesús Alonso, Lara del Val, María I. Jiménez, Alberto Izquierdo, Juan J. Villacorta

Grid Computing and CBR Deployment: Monitoring Principles for a Suitable Engagement

This paper presents a mathematical technique for modeling the generation of Grid solutions employing a Case-Based Reasoning (CBR) system. Roughly speaking, an intelligent system that tries to adapt to a highly dynamic environment needs an efficient integration of high-level processes (deliberative and time-costly) with low-level processes (reactive, faster but poorer in quality). The most relevant aspect of our approach is that, unexpectedly, the performance of the CBR system does not degrade when it retrieves worse cases, even in situations where it has enough time to generate better solutions. We concentrate on formal aspects of the proposed Grid-CBR system without establishing which procedure would be most adequate in a subsequent implementation stage. The advantage of the presented scheme is that it depends on neither the particular problem nor a concrete environment. It consists of a formal approach that only requires, on the one hand, local information about the average time spent by the system in obtaining a solution and, on the other hand, an estimation of its temporal restrictions. The potential use of industry-standard technologies to implement such a Grid-enabled CBR system is also discussed.

Luis F. Castillo, Gustavo Isaza, Manuel Glez Bedia, Miguel Aguilera, Juan David Correa

A Low-Cost Solution to Distribute Repetitive and Independent Tasks by Clustering

This paper evaluates three different solutions to distribute a list of independent tasks among the nodes of a cluster of computers. Operating systems are unable to take advantage of modern multicore or multiprocessor computers in the execution of this kind of work when it is not parallelized; thus, a software tool that allows the concurrent use of a set of computers can be very useful in terms of reduced execution time and improved efficiency.

The three solutions have been evaluated using a set of tasks related to the implementation of digital electronic circuits in Field Programmable Gate Arrays (FPGAs), but they can be applied to any other set of independent tasks.

The results have been obtained using a cluster composed of four relatively old, low-performance computers, and have been compared with the results obtained by a quad-core computer executing the same task list in a purely sequential way. This comparison clearly shows the power of concurrent execution, which allows a reduction of the execution time by a factor of between three and four.

Ignacio Traverso Ribón, Ma Ángeles Cifredo Chacón, Ángel Quirós-Olozábal, Juan Barrientos Villar

Identifying Gene Knockout Strategies Using a Hybrid of Bees Algorithm and Flux Balance Analysis for in Silico Optimization of Microbial Strains

Genome-scale metabolic network reconstructions from different organisms have become popular in recent years. Genetic engineering has proven able to obtain desirable phenotypes. Optimization algorithms have been implemented in previous works to identify the effects of gene knockouts on the results. However, previous works face the problem of falling into local minima. Thus, a hybrid of the Bees Algorithm and Flux Balance Analysis (BAFBA) is proposed in this paper to solve the local minima problem and to predict optimal sets of gene deletions for maximizing the growth rate and the production of a certain metabolite. This paper involves two case studies that consider the production of succinate and lactate as targets, using … as the model organism. The results of this experiment are the list of knockout genes and the growth rate after the deletion. BAFBA shows better results compared with the other methods. The identified list suggests gene modifications over several pathways and may be useful in solving challenging genetic engineering problems.

Yee Wen Choon, Mohd Saberi Mohamad, Safaai Deris, Chuii Khim Chong, Lian En Chai, Zuwairie Ibrahim, Sigeru Omatu

Inferring Gene Regulatory Networks from Gene Expression Data by a Dynamic Bayesian Network-Based Model

Enabled by recent advances in bioinformatics, the inference of gene regulatory networks (GRNs) from gene expression data has garnered much interest from researchers. This is due to the need of researchers to understand the dynamic behavior and uncover the vast information lying hidden within the networks. In this regard, the dynamic Bayesian network (DBN) is extensively used to infer GRNs due to its ability to handle time-series microarray data and to model feedback loops. However, the efficiency of DBNs in inferring GRNs is often hampered by missing values in the expression data, and by excessive computation time due to the large search space, since the DBN treats all genes as potential regulators for a target gene. In this paper, we propose a DBN-based model with missing-value imputation to improve inference efficiency, and potential-regulator detection, which aims to lessen computation time by limiting potential regulators based on expression changes. The performance of the proposed model is assessed using time-series expression data of the yeast cell cycle. The experimental results show reduced computation time and improved efficiency in detecting gene-gene relationships.
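The paper's imputation method is not specified here; as a simple illustrative stand-in, missing expression values in a time series can be filled by linear interpolation between the nearest observed points (assuming the boundary values are observed):

```python
def impute_linear(series):
    """Fill None gaps in a time series by linear interpolation between
    the nearest observed neighbours. Assumes the first and last values
    are observed; a real pipeline would also handle boundary gaps."""
    out = list(series)
    known = [i for i, v in enumerate(out) if v is not None]
    for a, b in zip(known, known[1:]):
        step = (out[b] - out[a]) / (b - a)
        for i in range(a + 1, b):
            out[i] = out[a] + step * (i - a)
    return out

print(impute_linear([1.0, None, None, 4.0]))  # → [1.0, 2.0, 3.0, 4.0]
```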

Lian En Chai, Mohd Saberi Mohamad, Safaai Deris, Chuii Khim Chong, Yee Wen Choon, Zuwairie Ibrahim, Sigeru Omatu

A Hybrid of SVM and SCAD with Group-Specific Tuning Parameter for Pathway-Based Microarray Analysis

The incorporation of pathway data into microarray analysis has led to a new era in the understanding of biological processes. However, this advancement is limited by two issues in the quality of pathway data. First, pathway data are usually constructed free of biological context; when it comes to a specific cellular process (e.g. lung cancer development), it may be that only a few genes within a pathway are responsible for the corresponding cellular process. Second, pathway data are commonly curated from the literature, so a pathway may include uninformative genes while informative genes are excluded. In this paper, we propose a hybrid of the support vector machine and smoothly clipped absolute deviation with group-specific tuning parameters (gSVM-SCAD) to select informative genes within pathways before the pathway evaluation process. Our experiments on lung cancer and gender data sets show that gSVM-SCAD obtains significant results in classification accuracy and in selecting informative genes and pathways.

Muhammad Faiz Misman, Mohd Saberi Mohamad, Safaai Deris, Raja Nurul Mardhiah Raja Mohamad, Siti Zaiton Mohd Hashim, Sigeru Omatu

A Reliable ICT Solution for Organ Transport Traceability and Incidences Reporting Based on Sensor Networks and Wireless Technologies

This paper describes an ICT solution based on an intelligent onboard system which is able to trace the organs inside a medical van during delivery routes to the hospitals, without altering the carriers' daily tasks. The intelligent onboard system is able to ensure the safety of the cargo by means of a sensor network that permanently evaluates its status. The van understands its environment, including its location and the temperature and humidity of the transported organs, and can report incidences instantly via wireless communications to anyone interested. It is a non-intrusive solution that represents a successful experience of using smart environments in a viable way to resolve a real social and healthcare necessity.

Asier Moreno, Ignacio Angulo

Applying Lemur Query Expansion Techniques in Biomedical Information Retrieval

The increase in the amount of available biomedical information has resulted in higher demand on biomedical information retrieval systems. However, traditional information retrieval systems do not achieve the desired performance in this area. Query expansion techniques have improved the effectiveness of ranked retrieval by automatically adding terms to a query. In this work we test several automatic query expansion techniques using the Lemur Language Modelling Toolkit. The objective is to evaluate a set of query expansion techniques when applied to biomedical information retrieval. In the first step of information retrieval, indexing, we compare the use of several stemming and stopword techniques. In the second step, matching, we compare the well-known weighting algorithms Okapi BM25 and TF-IDF. The best results are obtained with the combination of the Krovetz stemmer, the SMART stopword list and TF-IDF. Moreover, we analyse document retrieval based on the Abstract, Title and MeSH fields together, and conclude that this seems more effective than looking at each of these fields individually. We also show that the use of feedback in document retrieval results in an improvement in retrieval. The corpus used in the experiments was extracted from the biomedical Cystic Fibrosis (CF) text corpus.
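For reference, the TF-IDF weighting mentioned above can be sketched in a few lines. This is the textbook tf × log(N/df) form with toy documents, not Lemur's exact implementation:

```python
import math

def tf_idf(docs):
    """Compute TF-IDF weights per document from tokenized documents."""
    n = len(docs)
    df = {}                      # document frequency of each term
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weights = []
    for doc in docs:
        w = {}
        for term in set(doc):
            tf = doc.count(term) / len(doc)          # relative term frequency
            w[term] = tf * math.log(n / df[term])    # idf down-weights common terms
        weights.append(w)
    return weights

docs = [["gene", "therapy", "gene"], ["therapy", "trial"]]
w = tf_idf(docs)
print(round(w[0]["gene"], 3))  # "gene" occurs only in doc 0 → 0.462
print(w[0]["therapy"])         # appears in every doc → weight 0.0
```

Query expansion then adds high-weight terms from top-ranked (feedback) documents to the original query before re-scoring.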

A. R. Rivas, L. Borrajo, E. L. Iglesias, R. Romero

Genetic Evaluation of the Class III Dentofacial in Rural and Urban Spanish Population by AI Techniques

The etiology of skeletal class III malocclusion is multifactorial and complex, and likely results from mutations in numerous genes. In this study, we sought to understand the genotype correlation of the class III dentofacial deformity in rural and urban Spanish populations across more than one generation. The genetic analysis was performed using a genome-wide scan. A novel classification is carried out using Artificial Intelligence techniques, highlighting the difference between the two groups at the level of polymorphism. Our phenotypic and genetic analysis highlights that each group is unique.

Marta Muñoz, Manuel Rodríguez, Ma Encarnación Rodríguez, Sara Rodríguez

A Methodology for Learning Validation in Neural Cultures

In this paper a bio-hybrid system based on neural cultures is described, and the learning processes for programming this biological neuroprocessor are reviewed. Different authors have proposed many different learning techniques for managing neural plasticity; however, it is necessary to provide a formal methodology for verifying this induced plasticity and validating this bio-hybrid programming paradigm.

V. Lorente, F. de la Paz, E. Fernández, J. M. Ferrández

Intelligent Working Environments, Handling of Medical Data and the Ethics of Human Resources

The development of Ambient Intelligence (AmI) will radically transform our everyday life and social representations. These transformations will notably impact the working environment. The objective of this paper is to offer a first survey of the main ethical issues raised by the development of intelligent working environments (IWEs). It especially focuses on the capacity of such environments to collect and handle personal medical data. The first section describes the features of intelligent environments in general and presents some of their applications at the workplace. The second section of this paper points out some of the main ethical issues raised by these environments and their capacity to collect and handle medical data. The third and final section attempts to offer some elements of reflection regarding the ethical principles that should guide the development of IWEs in the future.

Céline Ehrwein Nihan

An Interpretable Guideline Model to Handle Incomplete Information

Healthcare institutions are naturally and emotionally stressful environments; indeed, healthcare professionals may fall into practices that lead to medical errors, undesirable variations in clinical practice and defensive medicine. On the other hand, Clinical Guidelines may offer an effective response to these irregularities, if the issues concerning their availability during the clinical process are solved. Hence, this work proposes a model intended to provide a formal representation of Computer-Interpretable Guidelines, in terms of the extensions of the predicates that make up their universe of discourse, as well as a Decision Support System framework to handle Incomplete Information. An extension to the language of Logic Programming is used, where an assessment of the Quality-of-Information of the extensions of the predicates referred to above is paramount.

Tiago Oliveira, João Neves, Ângelo Costa, Paulo Novais, José Neves

Modeling a Mobile Robot Using a Grammatical Model

The Virtual Worlds Generator is a grammatical model proposed to define virtual worlds. It integrates the diversity of sensors and interaction devices, multimodality and a virtual simulation system. Its grammar allows the scenes of the virtual world to be defined and abstracted as symbol strings, independently of the hardware used to represent the world or to interact with it.

A case study is presented to explain how to use the proposed model to formalize a robot navigation system with multimodal perception and a hybrid control scheme of the robot.

Gabriel López-García, Javier Gallego-Sánchez, J. Luis Dalmau-Espert, Rafael Molina-Carmona, Patricia Compañ-Rosique

A Classification Method of Knowledge Cards in Japanese and Chinese by Using Domain-Specific Dictionary

This paper addresses a method to classify knowledge cards written in Japanese and Chinese by using a domain-specific dictionary at an offshore software development company. The method has two phases in the classification process: a pre-processing phase for morphological analysis, translation and filtering of the original cards, and a sample-based categorization phase using statistical information. Finally, we conduct a classification experiment to verify the feasibility of our method and discuss the experimental results.

Xiaopeng Liu, Li Cai, Masanori Akiyoshi, Norihisa Komoda

A Taxonomy Construction Approach Supported by Web Content

The growth of unstructured information available inside organizations and on the Web has motivated the building of structures to represent and manipulate such information. In particular, an ontology provides a structured organization of knowledge to support its exchange and sharing. A crucial element within an ontology is the taxonomy. To build a taxonomy, the identification of hypernymy/hyponymy relations between terms is essential. Lexical patterns have been used in text analysis for recovering hypernyms. In addition, the Web has been used as a source of collective knowledge and seems a good option for finding appropriate hypernyms. This paper describes an approach to obtain hypernymy relations between terms belonging to a specific knowledge domain. The approach combines WordNet synsets and context information to build an extended query set, which is sent to a web search engine in order to retrieve the most representative hypernym for a term.

Ana B. Rios-Alvarado, Ivan Lopez-Arevalo, Victor Sosa-Sosa

Semantic Graph-Based Approach for Document Organization

Current document search engines base searches on the file name or syntactic content, which means that the word or part of the word to search for must match exactly. This article proposes a semantic graph-based method for document search. The approach allows documents or groups of documents to be organized, searched and displayed. Groups are formed according to the topics contained in the documents.

Erika Velazquez-Garcia, Ivan Lopez-Arevalo, Victor Sosa-Sosa

Identifying Concepts on a Specific Domain by an Unsupervised Graph-Based Approach

This paper presents an unsupervised approach to Word Sense Disambiguation on a specific domain, which automatically assigns the right sense to a given ambiguous word. The proposed approach relies on the integration of two information sources: context and semantic similarity. The experiments were carried out on the English test data of SemEval 2010 and evaluated with a variety of measures that analyse the connectivity of the graph structure. The results were evaluated using precision and recall and compared with the results of SemEval 2010. The approach is currently being tested with other semantic similarity measures; preliminary results look promising.

Franco Rojas-Lopez, Ivan Lopez-Arevalo, Victor Sosa-Sosa

The Problem of Learning Non-taxonomic Relationships of Ontologies from Text

Manual construction of ontologies by domain experts and knowledge engineers is a costly task. Thus, automatic and/or semi-automatic approaches to their development are needed. Ontology Learning aims at identifying the constituent elements of an ontology, such as non-taxonomic relationships, from textual information sources. This article presents a discussion of the problem of Learning Non-Taxonomic Relationships of Ontologies and defines its generic process. Three techniques representing the state of the art of Learning Non-Taxonomic Relationships of Ontologies are described, and the solutions they provide are discussed along with their advantages and limitations.

Ivo Serra, Rosario Girardi, Paulo Novais

Improving Persian Text Classification and Clustering Using Persian Thesaurus

This paper proposes an innovative approach to improve the classification performance of Persian texts. The proposed method uses a thesaurus as helpful knowledge to obtain more representative word frequencies in the corpus. Two types of word relationship are considered in the thesaurus used. This is the first attempt to use a Persian thesaurus in the field of Persian information retrieval. Experimental results indicate that the performance of text classification improves significantly when the Persian thesaurus is employed rather than ignored.

Hamid Parvin, Atousa Dahbashi, Sajad Parvin, Behrouz Minaei-Bidgoli

Towards Incremental Knowledge Warehousing and Mining

In this paper, we propose new ideas around the concepts of knowledge warehousing and mining. More precisely, we focus on the mining part and develop original approaches for incremental clustering based on k-means for knowledge bases. Instead of addressing a prohibitive amount of knowledge at once, the latter is gradually exploited in packets in order to reduce the problem complexity. We introduce original algorithms named ICPK/k-means for Incremental Clustering by Packets of Knowledge; ICPKG/k-means for Incremental Algorithm by Packets of Knowledge and Grouping of clusters, for determining the number of desired clusters; LICPK/k-means for Learning Incremental Clustering by Packets of Knowledge; and LIGPKG/k-means for Learning Incremental Clustering by Packets of Knowledge and Grouping of clusters, for handling the clustering of large amounts of knowledge. Experimental results prove the effectiveness of our algorithms.
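The packet-based idea can be sketched as follows — a toy 1-D version in which each incoming packet is clustered together with the centers summarizing earlier packets. This illustrates the general principle only, not the authors' ICPK algorithm:

```python
def kmeans(points, k, iters=20):
    """Plain 1-D k-means; initial centers are the first k points."""
    centers = points[:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p - centers[i]) ** 2)
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers

def incremental_kmeans(packets, k):
    """Cluster knowledge arriving in packets: each packet is merged with
    the current centers and re-clustered, so the full data set is never
    processed at once."""
    centers = []
    for packet in packets:
        centers = kmeans(centers + packet, k)
    return centers

packets = [[0.0, 0.2, 10.0, 10.3], [0.1, 9.8]]
print(sorted(incremental_kmeans(packets, 2)))
```

Note the simplification: old centers enter the next round as single points, discarding cluster sizes; a weighted variant would carry the counts along with each center.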

Habiba Drias, Asma Aouichat, Aicha Boutorh

Analysis of Sequential Events for the Recognition of Human Behavior Patterns in Home Automation Systems

Learning users’ frequent patterns is very useful for developing real human-centered environments. By analysing the occurrence of events over time in a home automation system, it is possible to find periodic patterns based on action-time relationships. However, human behavior can be better defined if it is related to chained actions, creating action-action relationships. This work presents IntelliDomo’s learning layer, a data mining approach based on ontologies and production rules that aims to achieve those objectives. This module is able to acquire users’ habits and automatically generate production rules for behavior patterns so as to anticipate the user’s periodic actions. The learning layer includes new features aimed at the adaptability and personalization of the environment.

Adolfo Lozano-Tello, Vicente Botón-Fernández

Reflective Relational Learning for Ontology Alignment

We propose an application of a statistical relational learning method as a means for the automatic detection of semantic correspondences between concepts of OWL ontologies. The presented method is based on an algebraic data representation which, in contrast to well-known graphical models, imposes no arbitrary assumption with regard to the data model structure. We use a probabilistic relevance model as the basis for the estimation of the most plausible matches. We experimentally evaluate the proposed method on datasets developed for the Ontology Alignment Evaluation Initiative (OAEI) Anatomy track, for the task of identifying matches between concepts of the Adult Mouse Anatomy ontology and the NCI Thesaurus ontology, on the basis of expert matches partially provided to the system.

Andrzej Szwabe, Pawel Misiorek, Przemyslaw Walkowiak

On the Evolutionary Search for Data Reduction Method

One of the key applications of statistical analysis and data mining is the development of classification and prediction models. In both cases, significant improvements can be attained by limiting the number of model inputs. This can be done at two levels, namely by eliminating unnecessary attributes [3] and by reducing the dimensionality of the data [12]. A variety of methods has been proposed in both fields.

Hanna Lacka, Maciej Grzenda

Handwritten Character Recognition with Artificial Neural Networks

Apart from different techniques studied in increasing order of difficulty, this work presents a new approach based on the use of a multilayer perceptron with an optimal number of hidden neurons. The idea is to carry out the training stage using two classes of prototypes to represent data already known; the hidden layers are then initialized with these two classes of prototypes. One of the advantages of this technique is the use of a second hidden layer, which allows the network to better filter nearby data. The results come from the Yann Le Cun database [9], and show that the approach based on a multilayer perceptron with two hidden layers is very promising, though improvable.

Stephane Kouamo, Claude Tangha

Design of a CMAC-Based PID Controller Using Operating Data

In industrial processes, the PID control strategy is still applied in many plants. However, real process systems are nonlinear, so it is difficult to obtain the desired control performance with fixed PID parameters. The cerebellar model articulation controller (CMAC) is attractive as an artificial neural network for designing control systems for nonlinear systems: its learning cost is drastically reduced compared with other multi-layered neural networks. On the other hand, methods that directly calculate control parameters without identifying system parameters, represented by Virtual Reference Feedback Tuning (VRFT) and Fictitious Reference Iterative Tuning (FRIT), have received much attention in the last few years. These methods calculate control parameters from closed-loop data and are expected to reduce time and economic costs. In this paper, an offline learning scheme for CMAC is newly proposed. According to the proposed scheme, the CMAC is able to learn PID parameters from a set of closed-loop data. The effectiveness of the proposed method is evaluated by a numerical example.
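For context, the fixed-parameter PID law that such tuning schemes adjust can be sketched as follows; the gains and the first-order plant below are illustrative assumptions, not values from the paper:

```python
def make_pid(kp, ki, kd, dt):
    """Discrete PID controller; returns a stateful update function."""
    state = {"integral": 0.0, "prev_err": 0.0}
    def update(setpoint, measured):
        err = setpoint - measured
        state["integral"] += err * dt                    # I term accumulates error
        derivative = (err - state["prev_err"]) / dt      # D term reacts to error change
        state["prev_err"] = err
        return kp * err + ki * state["integral"] + kd * derivative
    return update

# Hypothetical first-order plant y' = -y + u, simulated with Euler steps
pid = make_pid(kp=2.0, ki=1.0, kd=0.1, dt=0.05)
y = 0.0
for _ in range(200):
    u = pid(setpoint=1.0, measured=y)
    y += 0.05 * (-y + u)
print(round(y, 3))  # settles close to the setpoint 1.0
```

A CMAC- or FRIT-style scheme would replace the fixed (kp, ki, kd) with values learned from recorded closed-loop data.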

Shin Wakitani, Yoshihiro Ohnishi, Toru Yamamoto

Autonomous Control of Octopus-Like Manipulator Using Reinforcement Learning

In this paper, we apply reinforcement learning to an octopus-like manipulator, employing grasping and calling tasks. We show that by designing the manipulator to exploit properties of the real world, the state-action space can be abstracted, and the problems of real-time learning and lack of generalization ability can be solved.

So Kuroe, Kazuyuki Ito

Relationship between Quality of Control and Quality of Service in Mobile Robot Navigation

This article presents the experimental work developed to test the viability and measure the efficiency of an intelligent distributed control architecture. To do this, a simulated navigation scenario of Braitenberg vehicles has been developed. To test the efficiency, the architecture uses performance as the QoS parameter, and the quality of navigation is measured through the ITAE QoC parameter. The tested scenarios are: an environment without QoS and QoC management, an environment with relevant-message filtering, and an environment with predictive filtering by the type of control. The results obtained show that some of the processing performed in the control nodes can be moved to the middleware to optimize robot navigation.

José-Luis Poza-Luján, Juan-Luis Posadas-Yagüe, José-Enrique Simó-Ten

Fusing Facial Features for Face Recognition

Face recognition is an important biometric method because of its potential applications in many fields, such as access control, surveillance, and human-computer interaction. In this paper, a face recognition system that fuses the outputs of three face recognition systems based on Gabor jets is presented. The first system uses the magnitude of the jets, the second uses the phase, and the third uses both the magnitude and the phase of the jets. The jets are generated from facial landmarks selected using three selection methods. It was found that fusing the facial features gives a better recognition rate than any facial feature used individually, regardless of the landmark selection method.

Jamal Ahmad Dargham, Ali Chekima, Ervin Gubin Moung

Hybrid Component-Based Face Recognition System

Face recognition is a fast-growing research field because of its potential as an eminent tool for security surveillance, human-computer interaction, identification declaration and other applications. Face recognition techniques can be categorized into three categories, namely the holistic approach, the feature-based approach, and the hybrid approach. In this paper, a hybrid component-based system is proposed. Linear discriminant analysis (LDA) is used to extract the features from each component. The outputs from the individual components are then combined to give the final recognition output. Two methods are used to obtain the components, namely facial landmarks and sub-images. It was found that the fusion of the components does improve the recognition rate compared to the individual results of each component. From the sub-image method, it can be seen that as the size of the components gets smaller, the recognition rate tends to increase, although not always.

Jamal Ahmad Dargham, Ali Chekima, Munira Hamdan

A Mixed Pixels Estimation Method for Landsat-7/ETM+ Images

In this paper, an estimation method for mixed pixels in satellite images is proposed. A mixed pixel consists of several categories, and the aim of this study is to estimate the mixture ratios of these categories. A filter over neighborhood pixels had been proposed in earlier work; in this paper, the optimal filter coefficients are considered in detail.

Seiji Ito, Yoshinari Oguro

Phoneme Recognition Using Support Vector Machine and Different Features Representations

Although Support Vector Machines (SVMs) have proved to be very powerful classifiers, their performance still depends on the feature representations they use. This paper describes an application of SVMs to multiclass phoneme recognition using 7 sub-phoneme units and different feature representations such as MFCC, PLP and RASTA-PLP. The phoneme recognition system is tested and experimentally evaluated using speech signals of 49 speakers from the TIMIT corpus in order to find the adequate feature coefficients for our phoneme databases. The experimental results show that MFCC and PLP are significantly superior and achieve better recognition rates than RASTA-PLP (52% vs. 29%).

Rimah Amami, Dorra Ben Ayed, Noureddine Ellouze

Localization Systems for Older People in Rural Areas: A Global Vision

Location applications can be considered promising tools to improve the quality of life of the elderly. They remove architectural barriers, promote independence and social inclusion, and increase personal autonomy. This work presents a review of the status of location systems through the analysis of several research projects, patents and commercial products. The conclusions obtained are complemented with the direct opinions of users in rural areas. Data were collected through several workshops with elderly people and caregivers from Teruel (Spain). The results will provide a basis for researchers, developers and manufacturers to discover the strengths, weaknesses and needs that should be met in the future in order to improve location systems for the elderly in rural areas.

L. Martín, I. Plaza, M. Rubio, R. Igual

A Maritime Piracy Scenario for the n-Core Polaris Real-Time Locating System

There is a wide range of applications where Wireless Sensor Networks (WSN) and Multi-Agent Systems (MAS) can be used to build context-aware systems. On the one hand, WSNs comprise an ideal technology to develop sensing systems, as well as Real-Time Locating Systems (RTLS) aimed at indoor environments, where existing global navigation satellite systems, such as GPS, do not work correctly. On the other hand, agent technology is an essential piece in the analysis of data from distributed sensors and gives them the ability to work together and analyze complex situations. In this sense, n-Core Polaris is an indoor and outdoor RTLS based on ZigBee WSNs and an innovative set of locating and automation engines. This paper describes the main components of the n-Core Polaris system, as well as a proposed scenario where n-Core Polaris can be used to support boarding and rescue operations in maritime piracy environments.

Óscar García, Ricardo S. Alonso, Dante I. Tapia, Fabio Guevara, Fernando de la Prieta, Raúl A. Bravo

A Serious Game for Android Devices to Help Educate Individuals with Autism on Basic First Aid

Within the group of individuals with autism, we can find a certain number of people who interact well with mobile devices and other types of technology. To improve their knowledge of first aid, the main aim was to create an application composed of a set of Serious Games oriented towards first aid education: i.e. how to handle specific situations, basic knowledge about healthcare or medical specialties… all employing current technologies such as smartphones or tablets, specifically running the Android operating system. Not only were technological results investigated; feedback was also gathered from the opinions and experiences of both the users and the specialists taking part in the practical validation and testing of the application.

Zelai Sáenz de Urturi, Amaia Méndez Zorrilla, Begoña García Zapirain

User-Independent Human Activity Recognition Using a Mobile Phone: Offline Recognition vs. Real-Time on Device Recognition

Real-time human activity recognition on a mobile phone is presented in this article. Unlike in most other studies, not only were the data collected using the accelerometers of a smartphone, but the models were also implemented on the phone, and the whole classification process (preprocessing, feature extraction and classification) was done on the device. The system is trained using phone-orientation-independent features to recognize five everyday activities: walking, running, cycling, driving a car and sitting/standing while the phone is in the pocket of the subject’s trousers. Two classifiers were compared, k-NN (k nearest neighbours) and QDA (quadratic discriminant analysis). The models for real-time activity recognition were trained offline using a data set collected from eight subjects, and these offline results were compared to real-time recognition rates. Real-time recognition on the device was tested by seven subjects, three of whom had not collected data to train the models. Activity recognition rates on the smartphone were encouraging; in fact, the recognition accuracies obtained are approximately as high as the offline recognition rates. The real-time recognition accuracy using QDA was as high as 95.8%, while using k-NN it was 93.9%.
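As a toy illustration of the orientation-independent idea (a sketch with hypothetical features and data, not the authors' implementation), a nearest-neighbour classifier over acceleration-magnitude features could look like:

```python
import math

def features(window):
    """Orientation-independent features from one window of (x, y, z) samples:
    mean and standard deviation of the acceleration magnitude."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in window]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return (mean, math.sqrt(var))

def nn_classify(train, sample):
    """1-NN: return the label of the closest training feature vector."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(train, key=lambda t: dist(t[0], sample))[1]

# Hypothetical training data: (feature vector, activity label)
train = [((9.8, 0.1), "sitting"), ((10.5, 3.0), "walking"), ((11.0, 6.0), "running")]
label = nn_classify(train, features([(0, 0, 9.8), (0.1, 0, 9.7)]))
```

Because only the magnitude of the acceleration vector enters the features, the result does not depend on how the phone is oriented in the pocket.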

Pekka Siirtola, Juha Röning

Implementing a Spatial Agenda in Android Devices

Smartphones are becoming common in everyday life. They provide new possibilities as they integrate communication, geo-positioning, multi-touch capabilities, and so on. Increasingly, these devices are used as proactive tools assisting people with their day-to-day activities, making everyone’s life more comfortable. In this paper, we present an Android application, based on location technology and Google Maps, that helps the user find out what ads or warnings exist at a specific location and time.

C. N. Ojeda-Guerra

M-Learning for Elderlies: A Case Study

In this article a case study about m-learning for the elderly using mobile devices is presented. The study focuses on a practical study of language learning for elderly students who attend classes in the so-called Inter-university Program for Elderly People. The recent surge of university programs for elderly people calls for novel solutions related to learning methods, driven by the special needs of this sector of society.

Fernando de la Prieta, Antonia Macarro, Amparo Jiménez, Amparo Casado, Kasper Hallenborg, Juan F. De Paz, Sara Rodríguez, Javier Bajo

Mobile Device to Measure Ubiquitous Danger in a Great City Based on Cultural Algorithms

Beginning in 2007, Juarez City has been a victim of the consequences brought about by organized crime: killings of innocent people, assaults, kidnappings, multiple homicides and burglary, among others. This has pushed the city’s population to take various kinds of action to minimize the violence in a society that suffered 15,700 violent homicides during this period. For this reason we present the following project, which aims to provide people moving through the city with a technological tool that gives the user an indicator of how dangerous their current location is, based on statistics compiled by the Centre for Social Research at Juarez City University and on public sources. This research combines a mobile device application based on Cultural Algorithms with Data Mining to determine the danger of staying in a part of the city at a specific time.

Alberto Ochoa, Erick Trejo, Daniel Azpeitia, Néstor Esquinca, Rubén Jaramillo, Jöns Sánchez, Saúl González, Arturo Hernández

Simulated Annealing for Constructing Mixed Covering Arrays

Combinatorial testing is a method that can reduce costs and increase the effectiveness of software testing for many applications. It is based on constructing test suites of economical size which provide coverage of the most prevalent configurations of parameters. Mixed Covering Arrays (MCAs) are combinatorial structures which can be used to represent these test suites. This paper presents a new Simulated Annealing (SA) algorithm for constructing MCAs. The algorithm incorporates several distinguishing features, including an efficient heuristic to generate good-quality initial solutions, a compound neighborhood function which carefully combines two designed neighborhoods, and a fine-tuned cooling schedule. The experimental evidence shows that our SA algorithm improves on the results obtained by other approaches reported in the literature, finding the optimal solution in some of the solved cases.
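The ingredients named above — an initial solution, a neighborhood move and a cooling schedule — slot into the generic annealing loop sketched below. The objective here is a toy one-dimensional function, not the covering-array cost from the paper, and the parameter values are illustrative only:

```python
import math, random

def simulated_annealing(cost, neighbor, x0, t0=1.0, alpha=0.95, iters=2000):
    """Generic SA skeleton: accept worsening moves with probability
    exp(-delta/T) and cool the temperature T geometrically."""
    x, t = x0, t0
    best = x
    for _ in range(iters):
        y = neighbor(x)
        delta = cost(y) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = y
            if cost(x) < cost(best):
                best = x
        t *= alpha
    return best

random.seed(0)
# Toy objective with its minimum at x = 3, standing in for the MCA cost function
best = simulated_annealing(cost=lambda x: (x - 3) ** 2,
                           neighbor=lambda x: x + random.uniform(-0.5, 0.5),
                           x0=0.0)
```

In the paper's setting, `cost` would count uncovered parameter interactions and `neighbor` would be the compound move over array cells; only those two plug-ins change.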

Himer Avila-George, Jose Torres-Jimenez, Vicente Hernández, Loreto Gonzalez-Hernandez

Improve the Adaptive Capability of TMA-OR

In the T-detector Maturation Algorithm with Overlap Rate (TMA-OR), the parameters Omin and the self radius r are required to be set by experience. To solve this problem, a negative selection operator and a self-radius learning mechanism are proposed. The experimental results show that the proposed algorithm achieves the same effect when KDD and Iris are used as the data sets.

Jungan Chen, Qiaowen Zhang, Zhaoxi Fang

A New Hybrid Firefly Algorithm for Complex and Nonlinear Problem

Global optimization methods play an important role in solving many real-world problems. However, single methods are often insufficient for high-dimensional and nonlinear problems, especially in terms of the accuracy of finding the best solutions and of convergence speed. In recent years, hybrid optimization methods have shown potential to overcome such challenges. In this paper, a new hybrid optimization method called the Hybrid Evolutionary Firefly Algorithm (HEFA) is proposed. The method combines the standard Firefly Algorithm (FA) with the evolutionary operations of the Differential Evolution (DE) method to improve the search accuracy and the information sharing among the fireflies. The HEFA method is used to estimate the parameters of a complex and nonlinear biological model in order to assess its effectiveness on high-dimensional and nonlinear problems. Experimental results showed that the accuracy of finding the best solution and the convergence speed of the proposed method are significantly better than those achieved by the existing methods.
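In the standard FA that HEFA builds on, a dimmer firefly moves toward every brighter one with a distance-dependent attractiveness. A minimal one-dimensional sketch of a single iteration (toy objective, hypothetical parameter values) is:

```python
import math, random

def firefly_step(x, objective, beta0=1.0, gamma=0.1, alpha=0.0):
    """One iteration of the standard FA: each firefly moves toward every
    brighter one with attractiveness beta0*exp(-gamma*r^2). The random-walk
    term is weighted by alpha, set to 0 here to keep the demo deterministic."""
    bright = [-objective(xi) for xi in x]   # brighter = lower objective value
    new = list(x)
    for i in range(len(x)):
        for j in range(len(x)):
            if bright[j] > bright[i]:
                beta = beta0 * math.exp(-gamma * (x[i] - x[j]) ** 2)
                new[i] += beta * (x[j] - x[i]) + alpha * random.uniform(-0.5, 0.5)
    return new

# Three fireflies minimising (x-2)^2: the middle one is brightest and stays put
swarm = firefly_step([0.0, 2.0, 5.0], lambda x: (x - 2) ** 2)
```

HEFA's contribution is to replace part of this attraction-driven update with DE's mutation and crossover operators, so the fireflies also share information the way a DE population does.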

Afnizanfaizal Abdullah, Safaai Deris, Mohd Saberi Mohamad, Siti Zaiton Mohd Hashim

A General Framework for Naming Qualitative Models Based on Intervals

In qualitative spatial and temporal reasoning we can distinguish between comparing and naming magnitudes. In particular, naming qualitative models allow humans to express spatio-temporal concepts such as “That horse is really …”. In colloquial terms, naming concepts are called relative. In this paper we present a general framework to solve the representation of magnitudes and a general algorithm to solve the basic step of the inference process for qualitative models based on intervals. The general method is based on the definition of two algorithms: the qualitative sum and the qualitative difference.

Ester Martínez-Martín, M. Teresa Escrig, Angel P. del Pobil

Approach of Genetic Algorithms with Grouping into Species Optimized with Predator-Prey Method for Solving Multimodal Problems

Over recent years, Genetic Algorithms have proven to be an appropriate tool for solving certain problems. However, when the search space contains several valid solutions, their classic approach is insufficient. To this end, the idea of dividing the individuals into species has been successfully put forward. This solution is not free of drawbacks, however, such as the emergence of redundant species, overlapping, or performance degradation caused by significantly increasing the number of individuals to be evaluated. This paper presents the implementation of a method based on the predator-prey technique, with the aim of providing a solution to this problem, as well as a number of examples to prove its effectiveness.

Pablo Seoane, Marcos Gestal, Julián Dorado, J. Ramón Rabuñal, Daniel Rivero

BTW: A New Distance Metric for Classification

In this paper, BTW, a new method for similar-case search, is presented. The main objective is to optimize the metrics employed in classical approaches in order to obtain strong compression of the data and deterministic real-time behavior, without compromising the performance of the classification task. BTW tries to combine the best of three well-known techniques: nearest neighbour, the Fisher discriminant and optimization.

Julio Revilla Ocejo, Evaristo Kahoraho Bukubiye

Using an Improved Differential Evolution Algorithm for Parameter Estimation to Simulate Glycolysis Pathway

This paper presents an improved Differential Evolution algorithm (IDE), aimed at improving performance in estimating the relevant parameters of metabolic pathway data to simulate the glycolysis pathway of yeast. Metabolic pathway data are expected to be of significant help in the development of efficient tools for kinetic modeling and parameter estimation platforms. Nonetheless, due to noisy data and the difficulty of estimating a myriad of parameters, many computational algorithms face obstacles and require long computation times. The IDE proposed in this paper is a hybrid of a Differential Evolution algorithm (DE) and a Kalman Filter (KF). The outcome of IDE is shown to be superior to both a Genetic Algorithm (GA) and DE. The results of this experiment show that IDE yields estimated optimal kinetic parameter values, a shorter computation time and better accuracy of the simulated results compared to the other estimation algorithms.
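The DE half of such a hybrid follows the classical mutation/crossover/selection scheme (DE/rand/1/bin). A minimal sketch on a toy sphere objective — the Kalman-filter correction and the pathway model are beyond this sketch — is:

```python
import random

def de_minimise(cost, dim, bounds, pop_size=20, f=0.8, cr=0.9, gens=200):
    """DE/rand/1/bin: for each target vector, build a mutant a + F*(b - c),
    binomially cross it with the target, and keep the better of the two."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            jrand = random.randrange(dim)        # force at least one mutant gene
            trial = [a[d] + f * (b[d] - c[d])
                     if (random.random() < cr or d == jrand) else pop[i][d]
                     for d in range(dim)]
            if cost(trial) <= cost(pop[i]):      # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=cost)

random.seed(0)
# Toy sphere objective standing in for the pathway-model fitting error
best = de_minimise(lambda v: sum(x * x for x in v), dim=2, bounds=(-5, 5))
```

In the IDE, `cost` would measure the distance between simulated and observed pathway time courses, with the KF smoothing the noisy measurements that feed it.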

Chuii Khim Chong, Mohd Saberi Mohamad, Safaai Deris, Shahir Shamsir, Afnizanfaizal Abdullah, Yee Wen Choon, Lian En Chai, Sigeru Omatu

Structural Graph Extraction from Images

We present three new algorithms to model images with graph primitives. Our main goal is to propose algorithms that could lead to a broader use of graphs, especially in pattern recognition. The first method considers the q-tree representation and the neighbourhood of regions; we also propose a method which, given any region of a q-tree, finds its neighbouring regions. The second algorithm reduces the image to a structural grid, which is postprocessed in order to obtain a directed acyclic graph. The last method takes the skeleton of an image into account to build the graph; it is a natural generalization of similar work on trees [8, 12]. Experiments show encouraging results and demonstrate the usefulness of the proposed models in more advanced tasks, such as syntactic pattern recognition.

Antonio-Javier Gallego-Sánchez, Jorge Calera-Rubio, Damián López

Mutagenesis as a Diversity Enhancer and Preserver in Evolution Strategies

Mutagenesis is a process which forces the coverage of certain zones of the search space during the generations of an evolution strategy, by keeping track of the ranges covered for the different variables in the so-called gene matrix. Originally introduced as an artifact to control the automated stopping criterion in a memetic algorithm, ESLAT, it also improved the exploration capabilities of the algorithm, even though this was considered a secondary matter and not properly analyzed or tested. This work focuses on this diversity enhancement, redefining mutagenesis to strengthen it and measuring the improvement over a set of twenty-seven unconstrained optimization functions to provide statistically significant results.

José L. Guerrero, Alfonso Gómez-Jordana, Antonio Berlanga, José M. Molina

Nonlinear Approaches to Automatic Elicitation of Distributed Oscillatory Clusters in Adaptive Self-organized System

Chaotic neural networks are finding more and more applications in pattern recognition systems. However, hybrid multidisciplinary solutions that combine advances from physics and artificial intelligence tend to increase the complexity of the designed systems and raise further points of discussion. This paper questions the applicability of well-known chaotic time-series metrics (Shannon entropy, Kolmogorov entropy, fractal dimension, the Gumenyuk metric, and complete and lag synchronization estimations) for simplifying the elicitation of the distributed oscillatory clusters that store the clustering results of a problem. Computer modeling results give evidence that when clustering simple datasets the metrics are rather effective; however, the concept of averaging out an agent’s dynamics fails when the clusters in the input dataset are not linearly separable.

Elena N. Benderskaya, Sofya V. Zhukova

Classifier Ensemble Framework Based on Clustering

This paper proposes an innovative method for selecting the number of clusters in Classifier Selection by Clustering (CSC), to improve the performance of classifier ensembles both in the stability of their results and in their accuracy as much as possible. The CSC uses bagging as the generator of base classifiers. The base classifiers are kept fixed, as either decision trees or multilayer perceptrons, during the creation of the ensemble. CSC then partitions the classifiers using a clustering algorithm. After that, by selecting one classifier per cluster, it produces the final ensemble. The weighted majority vote is taken as the consensus function of the ensemble. Here we examine how the cluster number affects the performance of the CSC method and how a good approximation can be chosen adaptively for a dataset. We extend our studies to a large number of real datasets from the UCI repository in order to reach a sound conclusion.
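At prediction time, the pipeline described above — bag, cluster the base classifiers, keep one representative per cluster — reduces to a weighted majority vote over the representatives. A minimal sketch of that final stage, with hypothetical classifiers and weights, is:

```python
from collections import defaultdict

def weighted_majority_vote(classifiers, weights, x):
    """Final CSC stage: combine one representative classifier per cluster
    by a weighted majority vote over their predicted labels."""
    scores = defaultdict(float)
    for clf, w in zip(classifiers, weights):
        scores[clf(x)] += w
    return max(scores, key=scores.get)

# Hypothetical representatives picked from three clusters, weighted by accuracy
clfs = [lambda x: "A" if x > 0 else "B",
        lambda x: "A",
        lambda x: "B"]
label = weighted_majority_vote(clfs, weights=[0.9, 0.6, 0.7], x=1.0)  # "A" wins 1.5 to 0.7
```

Choosing the number of clusters fixes how many representatives (and hence votes) enter this function, which is why it drives both the stability and the accuracy of the ensemble.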

Hamid Parvin, Sajad Parvin, Zahra Rezaei, Moslem Mohamadi

Resource Allocation Strategies to Maximize Network Survivability Considering of Average DOD

In this paper, an innovative metric called the Average Degree of Disconnectivity (Average DOD) is proposed. The Average DOD combines the probability calculated by the contest success function with the DOD metric to evaluate the degree of damage to the network; the larger the value of the Average DOD, the greater the damage. An attack-defense scenario, formulated as a mathematical model, is used to help network operators predict the strategies that both the cyber attacker and the network defender are likely to take. The attacker can use attack resources to launch attacks on the nodes of the network; the network defender, in turn, allocates existing defense resources to protect the surviving nodes. In the problem-solving process, the gradient method and game theory are adopted to find the optimal resource-allocation strategies for both the cyber attacker and the network defender.
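Contest success functions in this literature are commonly taken in the ratio (Tullock) form — an assumption here, since the abstract does not restate the exact form — where the probability that an attack with resources T overcomes a defense with resources t is T^m / (T^m + t^m):

```python
def contest_success(attack, defense, m=1.0):
    """Ratio-form (Tullock) contest success function: the probability that an
    attack with resources T beats a defense with resources t is T^m/(T^m + t^m).
    The exponent m tunes how decisive a resource advantage is."""
    return attack ** m / (attack ** m + defense ** m)

p = contest_success(attack=3.0, defense=1.0)  # a 3x resource advantage wins 3 times in 4
```

Averaging this node-compromise probability over attack-defense outcomes is what turns the deterministic DOD into the Average DOD.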

Frank Yeong-Sung Lin, Pei-Yu Chen, Quen-Ting Chen

Comparative Analysis of Two Distribution Building Optimization Algorithms

This paper proposes a modification of the genetic algorithm whose genetic operators act not on particular solutions but on the probability distribution of the solution vector’s components. The paper also compares the reliability and efficiency of the basic algorithm and the proposed modification using a set of benchmark functions and a real-world problem of dynamic scheduling of truck painting.
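A distribution-building algorithm of this kind operates, in its binary form, in the style of PBIL: sample bit-strings from a probability vector, then shift the vector toward the best sample. A minimal sketch on the OneMax toy problem (hypothetical parameter values, not the paper's operators) is:

```python
import random

def pbil_onemax(n_bits=16, pop=20, lr=0.2, gens=100):
    """Distribution-building search: evolve a probability vector rather than
    individual solutions, sampling a population each generation and nudging
    the vector toward the best sampled bit-string."""
    probs = [0.5] * n_bits
    for _ in range(gens):
        samples = [[1 if random.random() < p else 0 for p in probs]
                   for _ in range(pop)]
        best = max(samples, key=sum)        # OneMax fitness: number of ones
        probs = [(1 - lr) * p + lr * b for p, b in zip(probs, best)]
    return probs

random.seed(0)
probs = pbil_onemax()   # the distribution concentrates on the all-ones optimum
```

The contrast with a classic GA is visible in the state being evolved: here there is no persistent population at all, only the probability vector from which populations are repeatedly drawn.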

Pavel Galushin, Olga Semenkina, Andrey Shabalov

Development of a Computational Recommender Algorithm for Digital Resources for Education Using Case-Based Reasoning and Collaborative Filtering

This paper describes the development proposal of a Computational Recommender Algorithm (ReCom) to support users in finding the correct Digital Resource for Education (DRE), in this case the learning objects (LO) that meet the needs and study preferences of the user. The search is performed on a database that contains a collection of metadata of learning objects on different topics related to software architecture. The proposed ReCom algorithm uses the Collaborative Filtering (CF) technique and the artificial intelligence technique known as Case-Based Reasoning (CBR), using the jCOLIBRI2 framework. A preliminary test plan is presented to evaluate the effectiveness of the recommendations for the user, considering the user profile and the values of the variables of influence. The paper also proposes a mathematical equation to measure the user’s degree of satisfaction with the recommendations.

Guadalupe Gutiérrez, Lourdes Margain, Alberto Ochoa, Jesús Rojas
