
About this book

This book constitutes the refereed proceedings of the 15th Ibero-American Conference on Artificial Intelligence, IBERAMIA 2016, held in San José, Costa Rica, in November 2016. The 34 papers presented were carefully reviewed and selected from 75 submissions. The papers are organized in the following topical sections: knowledge engineering, knowledge representation and probabilistic reasoning; agent technology and multi-agent systems; planning and scheduling; natural language processing; machine learning; big data, knowledge discovery and data mining; computer vision and pattern recognition; computational intelligence and soft computing; AI in education, affective computing, and human-computer interaction.

Table of Contents

Frontmatter

Knowledge Engineering, Knowledge Representation and Probabilistic Reasoning

Frontmatter

Towards an Integration of Workflows and Clinical Guidelines: A Case Study

The integration of workflows and guidelines modeling healthcare processes is a hot topic of research in Artificial Intelligence in Medicine, and is likely to provide a major advance in IT support for healthcare [1]. In this position paper, we use a case study to identify commonalities and differences between workflows and guidelines. As a result of the analysis, we argue in favor of an integrated architecture in which workflow and guideline models are independently managed and supported, while integration is obtained through a mapping onto a system-internal format, where traditional AI-style inferential capabilities are supported.

Paolo Terenziani, Salvatore Femiano

Anomalies Detection in the Behavior of Processes Using the Sensor Validation Theory

Behavior can be defined as the combination of variables' values in response to external inputs or environmental changes. This definition can be applied to persons, equipment, social systems or industrial processes. This paper proposes a probabilistic mechanism to represent the behavior of industrial equipment and an algorithm to identify deviations from this behavior. The anomaly detection mechanism and the sensor validation theory are combined to propose an efficient way to diagnose industrial equipment. A case study is presented on failure identification in a wind turbine, where the diagnosis is conducted by detecting deviations from the turbine's normal behavior.

Pablo H. Ibargüengoytia, Uriel A. García, Alberto Reyes, Mónica Borunda
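
As an editorial illustration of the kind of mechanism this abstract describes, the following minimal sketch flags sensor readings that deviate from a learned model of normal behavior. The per-sensor Gaussian model and the z-score threshold are simplifying assumptions for illustration, not the paper's actual probabilistic mechanism.

```python
# Hypothetical sketch: flag sensor readings that deviate from a learned
# "normal behavior" model. The paper uses a richer probabilistic model
# (sensor validation theory); here a per-sensor Gaussian stands in for it.
import numpy as np

def fit_behavior(history):
    """history: (n_samples, n_sensors) array of normal-operation readings."""
    return history.mean(axis=0), history.std(axis=0)

def detect_anomalies(reading, mean, std, z_threshold=3.0):
    """Return indices of sensors whose reading deviates from normal behavior."""
    z = np.abs(reading - mean) / std
    return np.where(z > z_threshold)[0]

history = np.random.normal(loc=[10.0, 50.0], scale=[1.0, 5.0], size=(1000, 2))
mean, std = fit_behavior(history)
print(detect_anomalies(np.array([10.5, 80.0]), mean, std))  # sensor 1 flagged
```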

Explanatory Relations Revisited: Links with Credibility-Limited Revision

We study binary relations $$\rhd $$ over propositional formulas built over a finite set of variables. The meaning of $$\alpha \rhd \gamma $$ is that $$\gamma $$ is a preferred explanation of the observation $$\alpha $$. These relations are called explanatory or abductive relations. We find two important families of abductive relations characterized by their axiomatic behavior: the ordered explanatory relations and the weakly reflexive explanatory relations. We show that both families have tight links with the framework of credibility-limited revision. These relationships allow us to establish semantic representations for each family. An important corollary of our representation results is that our axiomatizations allow us to overcome the background theory present in most axiomatizations of abduction.

María Victoria León, Ramón Pino Pérez

Semantic Enrichment of Web Service Operations

In this paper we describe the process by which semantic relatedness assertions are discovered and defined between Web service operations. The general approach relies on a global ontology model that describes Web services. Semantic similarities between operations are obtained by calculating eight semantic relatedness measures between all pairs of operations. The entire process consists of Web service parsing, Web service data extraction, automatic Web service ontology population, similarity measure calculation, similarity discovery and, finally, object property assertion between Web service operations.

Maricela Bravo, José A. Reyes-Ortiz, Roberto Alcántara-Ramírez, Leonardo Sánchez

Agent Technology and Multi-agent Systems

Frontmatter

The ICARO Goal Driven Agent Pattern

ICARO is an open source platform for the implementation of multi-agent systems (MAS), which provides architectural patterns for several types of agent models, following well-established software engineering principles. This paper describes a cognitive agent pattern whose main characteristics are that it is goal-driven and that its logic is described as a rule-based system. The pattern has been used in several real projects and as a tool in a master's course on the development of intelligent agent applications. Some of these experiences are used to illustrate its use and to explain some of the conclusions derived from them, mostly from a software engineering point of view.

Francisco Garijo, Juan Pavón

A Study for Self-adapting Urban Traffic Control

Nowadays, managing traffic in cities is a complex problem involving considerable physical and economic resources. However, traffic can be simulated by multi-agent systems (MAS), since cars and traffic lights can be modeled as agents that interact to achieve an overall goal: reducing the average waiting time of traffic users. In this paper, we present a self-organizing solution to efficiently manage urban traffic. We compare our proposal with other classical and alternative self-organizing approaches, observing that ours provides better results. We then present the main contributions of the paper, which analyze the effects of different traffic conditions on our cheap and easy-to-implement method for self-organizing urban traffic management. We consider several scenarios where we explore the effects of dynamic traffic density, a reduction in the percentage of sensors needed to support the traffic management system, and the possibility of using communication among cross-points.

P. S. Rodríguez-Hernández, J. C. Burguillo, Enrique Costa-Montenegro, Ana Peleteiro

Planning and Scheduling

Frontmatter

A Constraint-Based Approach for the Conciliation of Clinical Guidelines

The medical domain often raises new challenges for Artificial Intelligence. An emerging challenge is the support for the treatment of patients affected by multiple pathologies (comorbid patients). In the medical context, clinical practice guidelines (CPGs) are usually adopted to provide physicians with evidence-based recommendations, considering only single pathologies. To support physicians in the treatment of comorbid patients, suitable methodologies must be devised to “merge” CPGs. Techniques like replanning or scheduling, traditionally adopted in AI to “merge” plans, must be extended and adapted to fit the requirements of the medical domain. In this paper, we propose a novel methodology, which we term “conciliation”, to merge multiple CPGs, supporting the treatment of comorbid patients.

Luca Piovesan, Paolo Terenziani

Intelligence Amplification Framework for Enhancing Scheduling Processes

The scheduling process in a typical business environment consists of predominantly repetitive tasks that have to be completed in limited time and that often contain some form of uncertainty. Intelligence amplification is a symbiotic relationship between a human and an intelligent agent. This partnership is organized to emphasize the strengths of both entities, with the human taking the central role of objective setter and supervisor, and the machine focusing on executing the repetitive tasks. Output efficiency and effectiveness increase as each partner can focus on its native tasks. We propose an intelligence amplification framework applicable to typical scheduling problems encountered in the business domain. Using this framework, we build an artifact to enhance scheduling processes in synchromodal logistics, showing that a symbiotic decision maker performs better in terms of efficiency and effectiveness.

Andrej Dobrkovic, Luyao Liu, Maria-Eugenia Iacob, Jos van Hillegersberg

A Column Generation Approach for Solving a Green Bi-objective Inventory Routing Problem

The aim of this paper is to present a multi-objective algorithm embedded with column generation to solve a green bi-objective inventory routing problem. In contrast with the classic Inventory Routing Problem, where the main objective is to minimize the total cost over the whole supply chain network, green logistics adds the minimization of $$CO_{2}$$ emissions to this objective. To solve the bi-objective problem, we propose the NISE (Noninferior Set Estimation) algorithm combined with column generation to reduce the number of variables in the problem.

Carlos Franco, Eduyn Ramiro López-Santana, Germán Méndez-Giraldo

Natural Language Processing

Frontmatter

Enhancing Semi-supervised Text Classification Using Document Summaries

The vast amount of electronic documents available on the Internet demands automatic tools that help people find, organize and easily access all this information. Although current text classification methods have alleviated some of the above problems, such strategies depend on having a large and reliable set of labeled data. In order to overcome this limitation, this work proposes an alternative approach for semi-supervised text classification, based on a new strategy for diminishing the sensitivity to the noise contained in the labeled data by means of automatic text summarization. Experimental results showed that our proposed approach outperforms traditional semi-supervised text classification techniques; additionally, our results also indicate that our approach is suitable for learning from only one labeled example per category.

Esaú Villatoro-Tello, Emmanuel Anguiano, Manuel Montes-y-Gómez, Luis Villaseñor-Pineda, Gabriela Ramírez-de-la-Rosa
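
As an editorial illustration, the sketch below shows a generic self-training loop of the kind such semi-supervised text classifiers build on. The paper's distinctive step, summarizing documents to reduce label noise, is stubbed out as a placeholder `summarize` function; the function names, classifier choice and confidence threshold are assumptions, not the authors' implementation.

```python
# Hedged sketch of a self-training loop for semi-supervised text
# classification. The paper's key idea -- summarizing documents to reduce
# sensitivity to noisy labels -- is stubbed out as `summarize`.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

def summarize(doc):
    # Placeholder: a real system would keep only the most salient sentences.
    return doc

def self_train(labeled, labels, unlabeled, rounds=3, confidence=0.9):
    vec = TfidfVectorizer()
    docs = [summarize(d) for d in labeled]
    pool = [summarize(d) for d in unlabeled]
    for _ in range(rounds):
        X = vec.fit_transform(docs)
        clf = MultinomialNB().fit(X, labels)
        if not pool:
            break
        probs = clf.predict_proba(vec.transform(pool))
        keep = probs.max(axis=1) >= confidence    # confident pseudo-labels
        docs += [d for d, k in zip(pool, keep) if k]
        labels = list(labels) + list(clf.classes_[probs.argmax(axis=1)][keep])
        pool = [d for d, k in zip(pool, keep) if not k]
    return clf, vec
```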

A Comparison Between Two Spanish Sentiment Lexicons in the Twitter Sentiment Analysis Task

Sentiment analysis aims to determine people’s opinions towards certain entities (e.g., products, movies, people, etc.). In this paper we describe experiments performed to determine sentiment polarity on tweets of the Spanish corpus used in the TASS workshop. We explore the use of two Spanish sentiment lexicons to find out the effect of these resources on the Twitter sentiment analysis task. Rule-based and supervised classification methods were implemented, and several variations over those approaches were performed. The results show that the information from both lexicons improves accuracy when provided as a feature to a Naïve Bayes classifier. Despite the simplicity of the proposed strategy, the supervised approach obtained better results than several participant teams of the TASS workshop, and even the rule-based approach surpasses the accuracy of one team that used a supervised algorithm.

Omar Juárez Gambino, Hiram Calvo
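
To make the idea concrete, here is a minimal sketch of lexicon-derived features feeding a Naïve Bayes classifier. The tiny word lists are illustrative placeholders, not the Spanish lexicons evaluated in the paper.

```python
# Minimal sketch of feeding sentiment-lexicon information to a Naive Bayes
# classifier, as the abstract describes. The toy lexicons are placeholders.
from sklearn.naive_bayes import GaussianNB
import numpy as np

POSITIVE = {"bueno", "excelente", "feliz"}
NEGATIVE = {"malo", "terrible", "triste"}

def lexicon_features(tweet):
    tokens = tweet.lower().split()
    pos = sum(t in POSITIVE for t in tokens)   # positive-word count
    neg = sum(t in NEGATIVE for t in tokens)   # negative-word count
    return [pos, neg, pos - neg]

tweets = ["muy bueno y excelente", "que terrible dia", "feliz pero triste"]
y = [1, 0, 1]
X = np.array([lexicon_features(t) for t in tweets])
clf = GaussianNB().fit(X, y)
print(clf.predict(np.array([lexicon_features("excelente noticia")])))
```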

Is This a Joke? Detecting Humor in Spanish Tweets

While humor has been historically studied from a psychological, cognitive and linguistic standpoint, its study from a computational perspective is an area yet to be explored in Computational Linguistics. There exist some previous works, but a characterization of humor that allows its automatic recognition and generation is far from being specified. In this work we build a crowdsourced corpus of labeled tweets, annotated according to their humor value, letting the annotators subjectively decide which ones are humorous. A humor classifier for Spanish tweets is assembled based on supervised learning, reaching a precision of 84 % and a recall of 69 %.

Santiago Castro, Matías Cubero, Diego Garat, Guillermo Moncecchi

Evaluating Topic-Based Representations for Author Profiling in Social Media

The Author Profiling (AP) task aims to determine specific demographic characteristics, such as gender and age, by analyzing the language usage of groups of authors. Notwithstanding the recent advances in AP, this is still an unsolved problem, especially in the case of social media domains. According to the literature, most of the work has been devoted to the analysis of useful textual features, the most prominent being those related to content and style. In spite of the success of using both kinds of features jointly, most authors agree that content features are much more relevant than style, which suggests that some profiling aspects, like age or gender, could be determined only by observing thematic interests, concerns, moods, or other words related to events of daily life. Additionally, most of the research only uses traditional representations such as the BoW, rather than more sophisticated representations that harness the content features. In this regard, this paper aims at evaluating the usefulness of some topic-based representations for the AP task. We mainly consider a representation based on Latent Semantic Analysis (LSA), which automatically discovers the topics from a given document collection, and a simplified version of the Linguistic Inquiry and Word Count (LIWC), which consists of 41 features representing manually predefined thematic categories. We report promising results in several corpora showing the effectiveness of the evaluated topic-based representations for AP in social media.

Miguel A. Álvarez-Carmona, A. Pastor López-Monroy, Manuel Montes-y-Gómez, Luis Villaseñor-Pineda, Ivan Meza
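
For reference, an LSA-style topic representation of the kind the abstract evaluates can be obtained in a few lines; the toy documents and the choice of two components below are illustrative assumptions.

```python
# Sketch of an LSA-based topic representation: documents are projected
# onto latent topics discovered with truncated SVD over TF-IDF vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.pipeline import make_pipeline

docs = ["i love football and games",
        "my kids went to school today",
        "new phone released with a better camera",
        "the match ended with a great goal"]

lsa = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2))
topic_vectors = lsa.fit_transform(docs)   # one low-dimensional topic vector
print(topic_vectors.shape)                # per document: (4, 2)
```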

Using Robustness to Learn to Order Semantic Properties in Referring Expression Generation

A sub-task of Natural Language Generation (NLG) is the generation of referring expressions (REG). REG algorithms aim to select attributes that unambiguously identify an entity with respect to a set of distractors. Previous work has defined a methodology to evaluate REG algorithms using real-life examples with naturally occurring alterations in the properties of referring entities. It has been found that REG algorithms have key parameters that can be tuned to exhibit a large degree of robustness. Using this insight, we present experiments for learning the order of semantic properties used by a high-performing REG algorithm. Through experiments on two types of entities (people and organizations) and different versions of DBpedia (a freely available knowledge base containing information extracted from Wikipedia pages), we found that the robustness of the tuned algorithm and its parameters do coincide, but more work is needed to learn these parameters from data in a generalizable fashion.

Pablo Ariel Duboue, Martin Ariel Domínguez

Conditional Random Fields for Spanish Named Entity Recognition Using Unsupervised Features

Unsupervised features based on word representations, such as word embeddings and word collocations, have been shown to significantly improve supervised NER for English. In this work we investigate whether such unsupervised features can also boost supervised NER in Spanish. To do so, we use word representations and collocations as additional features in a linear chain Conditional Random Field (CRF) classifier. Experimental results (82.44 % F-score on the CoNLL-2002 corpus) show that our approach is comparable to some state-of-the-art Deep Learning approaches for Spanish, in particular when using cross-lingual word representations.

Jenny Copara, Jose Ochoa, Camilo Thorne, Goran Glavaš
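
A hedged sketch of this setup follows, assuming the sklearn-crfsuite package: each token is described by a feature dictionary, and an unsupervised word-cluster id (a stand-in for the paper's word representations) is added alongside the usual lexical features. The toy cluster table and sentence are assumptions.

```python
# Hedged sketch of a linear-chain CRF for Spanish NER with an extra
# unsupervised feature, assuming the sklearn-crfsuite package.
import sklearn_crfsuite

def word_features(sent, i, cluster_of):
    w = sent[i]
    return {
        "lower": w.lower(),
        "is_title": w.istitle(),
        "cluster": cluster_of.get(w.lower(), "UNK"),  # unsupervised feature
        "prev": sent[i - 1].lower() if i > 0 else "BOS",
    }

cluster_of = {"lima": "c17", "madrid": "c17", "juan": "c03"}  # toy clusters
sents = [["Juan", "vive", "en", "Lima"]]
labels = [["B-PER", "O", "O", "B-LOC"]]
X = [[word_features(s, i, cluster_of) for i in range(len(s))] for s in sents]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, labels)
print(crf.predict(X))
```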

Machine Learning

Frontmatter

Detection of Fraud Symptoms in the Retail Industry

Data mining is one of the most effective methods for fraud detection; its relevance is highlighted by the fact that 25 % of organizations report having suffered from economic crimes [1]. This paper presents a case study using real-world data from a large retail company. We identify symptoms of fraud by looking for outliers. To identify the outliers and the context where the outliers appear, we learn a regression tree. For a given node, we identify the outliers using the set of examples covered at that node, and the context as the conjunction of the conditions on the path from the root to the node. Surprisingly, at different nodes of the tree, we observe that some outliers disappear and new ones appear. From the business point of view, the outliers detected near the leaves of the tree are the most suspicious ones. These are difficult cases to detect, as they are observed only in a given context, defined by the set of rules associated with the node.

Rita P. Ribeiro, Ricardo Oliveira, João Gama
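
The following minimal sketch illustrates the idea on synthetic data: fit a regression tree, then look for residual outliers among the examples each node covers. Only leaf nodes are inspected here, whereas the paper also examines internal nodes along the path; the data and thresholds are assumptions.

```python
# Sketch of the abstract's idea: learn a regression tree, then flag
# outliers among the examples covered by a node (leaf nodes only here).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 2))
y = X[:, 0] * 3 + rng.normal(0, 1, 500)
y[10] += 25  # inject a suspicious transaction

tree = DecisionTreeRegressor(max_depth=3).fit(X, y)
leaf = tree.apply(X)                      # node id covering each example
residual = y - tree.predict(X)
for node in np.unique(leaf):
    mask = leaf == node
    z = (residual[mask] - residual[mask].mean()) / residual[mask].std()
    outliers = np.where(mask)[0][np.abs(z) > 3]
    if len(outliers):
        print(f"node {node}: suspicious examples {outliers}")
```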

A Machine Learning Model for Occupancy Rates and Demand Forecasting in the Hospitality Industry

Occupancy rate forecasting is a very important step in the decision-making process of hotel planners and managers. Popular strategies such as Revenue Management rely on forecasting as a vital activity for dynamic pricing, and without accurate forecasting, errors in pricing will negatively impact hotel financial performance. However, producing sufficiently accurate forecasts is no simple task for a wealth of reasons, such as the inherent variability of the market, the lack of personnel with statistical skills, and the high cost of specialized software. In this paper, several machine learning techniques were surveyed in order to construct models that forecast daily occupancy rates for a hotel, given historical records of bookings and occupation. Several approaches related to dataset construction and model validation are discussed. The results obtained in terms of the Mean Absolute Percentage Error (MAPE) are promising, and support the use of machine learning models as a tool to help solve the problem of occupancy rate and demand forecasting.

William Caicedo-Torres, Fabián Payares
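
For reference, the MAPE metric the abstract reports is simply the mean absolute relative error expressed as a percentage, as sketched below on made-up occupancy figures.

```python
# MAPE = mean(|actual - forecast| / |actual|) * 100
import numpy as np

def mape(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs((actual - forecast) / actual)) * 100

print(mape([0.80, 0.75, 0.90], [0.78, 0.80, 0.88]))  # toy occupancy rates
```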

A Machine Learning Model for Triage in Lean Pediatric Emergency Departments

High-demand periods and under-staffing due to financial constraints cause Emergency Departments (EDs) to frequently exhibit overcrowding and slow response times in providing adequate patient care. In response, Lean Thinking has been applied, with success, to help alleviate some of these issues and improve patient handling. Lean approaches in EDs include separate patient streams, with low-complexity patients treated in a so-called Fast Track, in order to reduce total waiting time and to free up capacity to treat more complicated patients in a timely manner. In this work we propose the use of Machine Learning techniques in a Lean pediatric ED to correctly predict which patients should be admitted to the Fast Track, given their signs and symptoms. Charts from 1205 patients of the emergency department of Hospital Napoleón Franco Pareja in Cartagena, Colombia, were used to construct a dataset and build several predictive models. Validation and test results are promising and support the validity of this approach and further research on the subject.

William Caicedo-Torres, Gisela García, Hernando Pinzón

An Empirical Validation of Learning Schemes Using an Automated Genetic Defect Prediction Framework

Today, it is common for software projects to collect measurement data through their development processes. With these data, defect prediction software can try to estimate the defect proneness of a software module, with the objective of assisting and guiding software practitioners. With timely and accurate defect predictions, practitioners can focus their limited testing resources on higher-risk areas. This paper reports a benchmarking study that uses a genetic algorithm to automatically generate and compare different learning schemes (preprocessing + attribute selection + learning algorithms). Performance of the software defect prediction models (using AUC, Area Under the Curve) was validated using NASA-MDP and PROMISE data sets. Twelve data sets from NASA-MDP (8) and PROMISE (4) projects were analyzed running an $$M\times N$$-fold cross-validation. We used a genetic algorithm to select the components of the learning schemes automatically, and to evaluate and report those with the best performance. In all, 864 learning schemes were studied. The most common learning schemes were: data preprocessors: Log and Box-Cox + attribute selectors: Backward Elimination, BestFirst and LinearForwardSelection + learning algorithms: NaiveBayes, NaiveBayesSimple, SimpleLogistic, MultilayerPerceptron, Logistic, LogitBoost, BayesNet, and OneR. The genetic algorithm reported steady performance and runtime across data sets, according to statistical analysis.

Juan Murillo-Morera, Carlos Castro-Herrera, Javier Arroyo, Rubén Fuentes-Fernández

Machine Learning Approaches to Estimate Simulated Cardiac Ejection Fraction from Electrical Impedance Tomography

The ejection fraction (EF) is a parameter that represents the amount of blood pumped out of each ventricle in each cardiac cycle and can be used for analyzing heart failure. There are several diagnostic tests to determine whether a person has heart failure, but some are expensive and do not provide continuous estimations of EF. However, using Electrical Impedance Tomography (EIT) with regression models is an alternative way to obtain continuous estimations of EF. The advantage of EIT is that it allows a quick diagnosis of the heart's health, combining low cost and high portability. This paper proposes four regression models, using the electrical measures from EIT, to estimate the EF: Gaussian Processes (GP), Support Vector Regression (SVR), Elastic Net Regression (ENR) and Multivariate Adaptive Regression Splines (MARS). The overall evaluation shows that all models achieved competitive results and that SVR produced better results than the other methods tested.

Tales L. Fonseca, Leonardo Goliatt, Luciana C. D. Campos, Flávia S. Bastos, Luis Paulo S. Barra, Rodrigo W. dos Santos
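
As a hedged sketch of such a comparison, the snippet below cross-validates three of the four named regressors on synthetic data standing in for the EIT measurements (MARS has no scikit-learn implementation, so it is omitted; the data and scores are purely illustrative).

```python
# Toy comparison of GP, SVR and Elastic Net regressors, standing in for
# the paper's EIT-to-EF estimation experiments.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.svm import SVR
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))            # toy stand-in for EIT measurements
y = X @ rng.normal(size=8) + rng.normal(0, 0.1, 200)   # toy stand-in for EF

for name, model in [("GP", GaussianProcessRegressor()),
                    ("SVR", SVR(kernel="rbf")),
                    ("ENR", ElasticNet(alpha=0.1))]:
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean R^2 = {score:.3f}")
```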

Machine Learning Models for Early Dengue Severity Prediction

Infection by dengue virus is prevalent and constitutes a public health issue in tropical countries worldwide. Moreover, in developing nations, child populations remain at risk of adverse events following an infection by dengue virus, as the necessary care is not always accessible, or health professionals lack the means to cheaply and reliably predict how likely it is for a patient to experience severe dengue. Here, we propose a classification model based on Machine Learning techniques, which predicts whether or not a pediatric patient will be admitted into the pediatric Intensive Care Unit, as a proxy for dengue severity. Different Machine Learning techniques were trained and validated using stratified 5-fold cross-validation, and the best model was evaluated on a disjoint test set. Cross-validation results showed that an SVM with a Gaussian kernel outperformed the other models considered, with a 0.81 Receiver Operating Characteristic Area Under the Curve (ROC AUC) score. Subsequent results over the test set showed a 0.75 ROC AUC score. Validation and test results are promising and support further research and development.

William Caicedo-Torres, Ángel Paternina, Hernando Pinzón
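
The evaluation protocol described above can be reproduced in outline as follows; the synthetic features and labels are placeholders for the clinical data, which is not public.

```python
# Sketch of the abstract's protocol: an RBF-kernel SVM validated with
# stratified 5-fold cross-validation and scored by ROC AUC.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 10))           # placeholder clinical features
y = (X[:, 0] + rng.normal(0, 1, 300) > 0).astype(int)  # placeholder ICU label

svm = SVC(kernel="rbf")                  # Gaussian kernel
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
print(cross_val_score(svm, X, y, cv=cv, scoring="roc_auc").mean())
```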

Early Prediction of Severe Maternal Morbidity Using Machine Learning Techniques

Severe Maternal Morbidity is a public health issue. It may occur during pregnancy, delivery, or puerperium due to conditions (hypertensive disorders, hemorrhages, infections and others) that put the woman's or the baby's life at risk. These conditions are very difficult to detect at an early stage. In response to the above, this work proposes using several machine learning techniques, considered among the most relevant in a biomedical setting, in order to predict the risk level for Severe Maternal Morbidity in patients during pregnancy. The population studied corresponds to pregnant women receiving prenatal care and final attention at E.S.E Clínica de Maternidad Rafael Calvo in Cartagena, Colombia. This paper presents the preliminary results of an ongoing project, as well as the methods and materials considered for the construction of the learning models.

Eugenia Arrieta Rodríguez, Francisco Edna Estrada, William Caicedo Torres, Juan Carlos Martínez Santos

Big Data, Knowledge Discovery and Data Mining

Frontmatter

Collaborative Filtering with Semantic Neighbour Discovery

Nearest neighbour collaborative filtering (NNCF) algorithms are commonly used in multimedia recommender systems to suggest media items based on the ratings of users with similar preferences. However, the prediction accuracy of NNCF algorithms is affected by the reduced number of items – the subset of items co-rated by both users – typically used to determine the similarity between pairs of users. In this paper, we propose a different approach, which substantially enhances the accuracy of the neighbour selection process – a user-based CF (UbCF) with semantic neighbour discovery (SND). Our neighbour discovery methodology, which assesses pairs of users by taking into account all the items rated by at least one of the users instead of just the set of co-rated items, semantically enriches this enlarged set of items using linked data and, finally, applies the Collinearity and Proximity Similarity metric (CPS), which combines cosine similarity with the Chebyshev distance dissimilarity metric. We tested the proposed SND against the Pearson Correlation neighbour discovery algorithm off-line, using the HetRec data set, and the results show a clear improvement in terms of accuracy and execution time for the predicted recommendations.

Bruno Veloso, Benedita Malheiro, Juan C. Burguillo
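
The abstract does not spell out how the two measures are combined, so the sketch below is only one plausible reading of a CPS-style metric: cosine collinearity scaled by a proximity term derived from the Chebyshev distance. The combination formula, rating range and vectors are all assumptions.

```python
# Illustrative (not the paper's) combination of cosine similarity with
# Chebyshev-distance dissimilarity over two users' rating vectors.
import numpy as np

def cps(u, v, rating_range=4.0):
    cosine = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    chebyshev = np.max(np.abs(u - v))            # worst per-item disagreement
    proximity = 1.0 - chebyshev / rating_range   # 1 = identical ratings
    return cosine * proximity

u = np.array([4.0, 3.0, 5.0])   # ratings by user A on a shared item set
v = np.array([5.0, 3.0, 4.0])   # ratings by user B
print(cps(u, v))
```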

Distributed and Parallel Algorithm for Computing Betweenness Centrality

Today, online social networks have millions of users and continue to grow. For that reason, the graphs generated from these networks usually do not fit into a single machine's memory, and the time required for their processing is very large. In particular, computing a centrality measure like betweenness can be expensive on such graphs. To address this challenge, in this paper we present a parallel and distributed algorithm to compute betweenness. We also develop a heuristic to reduce the overall time, which achieves a speedup of over 80x in the best case.

Mirlayne Campuzano-Alvarez, Adrian Fonseca-Bruzón
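
For context, the single-machine reference here is Brandes' algorithm, whose per-source computations are independent and therefore natural to distribute across workers; the sketch below shows the sequential version (the distribution strategy and the heuristic are the paper's contribution and are not reproduced).

```python
# Brandes' algorithm for betweenness centrality on an unweighted graph.
# Each source's contribution is independent, so sources can be sharded
# across workers and the partial scores summed.
from collections import deque

def betweenness(adj):
    bc = {v: 0.0 for v in adj}
    for s in adj:                        # each source can run on a worker
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:                         # BFS counting shortest paths
            v = q.popleft(); stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; pred[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                     # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc                            # halve scores for undirected graphs

graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(betweenness(graph))                # the middle vertices mediate all paths
```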

Principal Curves and Surfaces to Interval Valued Variables

In this paper we propose a generalization of the Principal Curves and Surfaces method proposed by Hastie in [6] to symbolic interval-valued variables. Given a data set X with n observations and m continuous variables, the main idea of the Principal Curves and Surfaces method is to generalize the principal component line, providing a smooth one-dimensional curved approximation to a set of data points in $$\mathbb {R}^m$$. A principal surface is more general, providing a curved manifold approximation of dimension 2 or more. In our case we are interested in finding the principal curve that best approximates symbolic interval-valued data. In [3, 4], the authors proposed the Centers Method and the Vertices Method to extend the well-known principal components analysis method to a particular kind of symbolic objects characterized by multi-valued variables of interval type. In this paper we generalize both the Centers Method and the Vertices Method, finding a smooth curve that passes through the middle of the data X in an orthogonal sense. Comparisons of the proposed method with the Centers and Vertices Methods were made with the RSDA package using the Ichino data set, see [1, 10]; the correlation index was used for these comparisons.

Jorge Arce G., Oldemar Rodríguez R.

In Defense of Online Kmeans for Prototype Generation and Instance Reduction

The nearest neighbor rule is one of the most popular algorithms for data mining tasks, due in part to its simplicity and its theoretical and empirical properties. However, with the availability of large volumes of data, this algorithm suffers from two problems: the computational cost of classifying a new example, and the need to store the whole training set. To alleviate these problems, instance reduction algorithms are often used to obtain a condensed training set that, in addition to reducing the computational burden, in some cases improves classification performance. Many instance reduction algorithms have been proposed so far, obtaining outstanding performance on mid-size data sets. However, the application of the most competitive instance reduction algorithms becomes prohibitive when dealing with massive data volumes. For this reason, in recent years the development of large-scale instance reduction algorithms has become crucial. This paper elaborates on the usage of a classic clustering algorithm, K-means, for tackling the instance reduction problem in big data. We show that this traditional algorithm outperforms most state-of-the-art instance reduction methods on mid-size data sets. In addition, this algorithm can cope well with massive data sets and still obtain quite competitive performance. Therefore, the main contribution of this work is showing the validity of this often underappreciated algorithm for a highly relevant task in a highly relevant scenario.

Mauricio García-Limón, Hugo Jair Escalante, Alicia Morales-Reyes
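
A minimal sketch of online K-means as an instance reducer follows: the data is scanned one point at a time with a running-mean centroid update, and the resulting centroids (typically computed per class and labeled accordingly) serve as the condensed training set. The single-pass schedule and toy data are assumptions for illustration.

```python
# Online (sequential) K-means used for instance reduction: the final
# centroids become the prototypes for a nearest-neighbour classifier.
import numpy as np

def online_kmeans(X, k, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)].astype(float)
    counts = np.ones(k)
    for x in X:                               # one pass, one point at a time
        j = np.argmin(((centroids - x) ** 2).sum(axis=1))
        counts[j] += 1
        centroids[j] += (x - centroids[j]) / counts[j]   # running-mean update
    return centroids

X = np.random.default_rng(1).normal(size=(10000, 5))
prototypes = online_kmeans(X, k=50)           # 10000 instances -> 50 prototypes
print(prototypes.shape)
```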

Computer Vision and Pattern Recognition

Frontmatter

In Search of Truth: Analysis of Smile Intensity Dynamics to Detect Deception

Detection of deceptive facial expressions, including estimating smile genuineness, is an important and challenging research topic that draws increasing attention from the computer vision and pattern recognition community. The state-of-the-art methods require localizing a number of facial landmarks to extract sophisticated facial characteristics. In this paper, we explore how to exploit fast smile intensity detectors to extract temporal features. This allows for real-time discrimination between posed and spontaneous expressions at the early smile onset phase. We report the results of experimental validation, which indicate the high competitiveness of our method on the UvA-NEMO benchmark database.

Michal Kawulok, Jakub Nalepa, Karolina Nurzynska, Bogdan Smolka

Sign Language Recognition Without Frame-Sequencing Constraints: A Proof of Concept on the Argentinian Sign Language

Automatic sign language recognition (SLR) is an important topic within the areas of human-computer interaction and machine learning. On the one hand, it poses a complex challenge that requires the intervention of various knowledge areas, such as video processing, image processing, intelligent systems and linguistics. On the other hand, robust recognition of sign language could assist in the translation process and the integration of hearing-impaired people, as well as the teaching of sign language to the hearing population. SLR systems usually employ Hidden Markov Models, Dynamic Time Warping or similar models to recognize signs. Such techniques exploit the sequential ordering of frames to reduce the number of hypotheses. This paper presents a general probabilistic model for sign classification that combines sub-classifiers based on different types of features, such as position, movement and handshape. The model employs a bag-of-words approach in all classification steps, to explore the hypothesis that ordering is not essential for recognition. The proposed model achieved an accuracy rate of 97 % on an Argentinian Sign Language dataset containing 64 classes of signs and 3200 samples, providing some evidence that recognition without ordering is indeed possible.

Franco Ronchetti, Facundo Quiroga, César Estrebou, Laura Lanzarini, Alejandro Rosete

Computational Intelligence and Soft Computing

Frontmatter

Automatic Generation of Type-1 and Interval Type-2 Membership Functions for Prediction of Time Series Data

The use of type-1 or type-2 membership functions in fuzzy systems offers a wide range of research opportunities. In this respect, there are neither formal recommendations nor methods that can help decide which type of membership function should be chosen, nor has the process of generating these membership functions been formalized. Against this background, this paper describes a study comparing the results of employing both a Genetic Algorithm and Simulated Annealing for the automatic generation of type-1 and interval type-2 membership functions. The paper also describes tests with different degrees of uncertainty inherent both to the input data and to the fuzzy system rules. Experiments were conducted to predict the Mackey-Glass time series, and the results were verified using statistical tests. The data obtained from the statistical analysis can be used to determine which type of membership function is most appropriate for the problem.

Andréia Alves dos Santos Schwaab, Silvia Modesto Nassar, Paulo José de Freitas Filho
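
For reproducibility, the Mackey-Glass benchmark series used in such prediction experiments comes from the delay differential equation dx/dt = β·x(t−τ)/(1+x(t−τ)^n) − γ·x(t). The sketch below integrates it with a simple Euler step using the customary parameters (β=0.2, γ=0.1, n=10, τ=17), which are a conventional choice rather than ones stated in the abstract.

```python
# Euler integration of the Mackey-Glass delay differential equation,
# the standard chaotic benchmark series for time-series prediction.
import numpy as np

def mackey_glass(length=1000, tau=17, beta=0.2, gamma=0.1, n=10, dt=1.0):
    x = np.zeros(length + tau)
    x[:tau] = 1.2                     # constant history before t = 0
    for t in range(tau, length + tau - 1):
        x[t + 1] = x[t] + dt * (beta * x[t - tau] / (1 + x[t - tau] ** n)
                                - gamma * x[t])
    return x[tau:]

series = mackey_glass()
print(series[:5])
```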

Calibration of Microscopic Traffic Flow Simulation Models Using a Memetic Algorithm with Solis and Wets Local Search Chaining (MA-SW-Chains)

Traffic models require calibration to provide an adequate representation of actual field conditions. This study presents the adaptation of a memetic algorithm based on Solis and Wets local search chains (MA-SW-Chains) to the calibration of microscopic traffic flow simulation models. The effectiveness of the proposed MA-SW-Chains approach was tested using two vehicular traffic flow models (McTrans and Reno). The results were superior compared to two state-of-the-art approaches found in the literature: (i) a single-objective genetic algorithm that uses simulated annealing (GASA), and (ii) a simultaneous perturbation stochastic approximation algorithm (SPSA). The comparison was based on tuning time, runtime and the quality of the calibration, measured by the GEH statistic (which measures the difference between real and simulated link counts).

Carlos Cobos, Carlos Daza, Cristhian Martínez, Martha Mendoza, Carlos Gaviria, Cristian Arteaga, Alexander Paz
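
The GEH statistic used as the calibration quality measure is a standard formula in traffic engineering: for a modeled hourly volume m and an observed count c, GEH = sqrt(2·(m − c)² / (m + c)), with values below 5 conventionally taken as a good fit for a link.

```python
# GEH statistic for comparing a simulated link volume m against a real count c.
from math import sqrt

def geh(m, c):
    return sqrt(2 * (m - c) ** 2 / (m + c))

print(geh(1250, 1100))  # ~4.4 -> acceptable match on this link
```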

A New Strategy Based on Feature Selection for Fault Classification in Transmission Lines

Transmission lines are the elements most susceptible to faults in power systems, and short circuit faults are the worst type of fault that can occur on them. In order to avoid further problems due to these faults, fault diagnosis is necessary, and the use of front ends is required. However, selecting a front end is not simple, because each front end behaves differently. Therefore, this paper presents a new front end, called the Concat front end, which integrates other front ends such as wavelet, raw and Root Mean Square. Furthermore, we applied filter-based feature selection techniques in order to decrease the dimension of the input data. We used the following classifiers: neural network, K-nearest neighbor, Random Forest and support vector machine, trained and tested on a public dataset called UFPAFaults. As a result, the concatenation of front ends achieved, in most cases, the lowest error rates. In addition, the feature selection techniques applied showed that it is possible to obtain higher accuracy using fewer features in the process.

Márcia Homci, Paulo Chagas, Brunelli Miranda, Jean Freire, Raimundo Viégas, Yomara Pires, Bianchi Meiguins, Jefferson Morais
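
As a hedged illustration of one such front end plus a filter-based selector, the sketch below extracts windowed Root Mean Square features from a synthetic waveform and then keeps the most informative ones via mutual information; the window size, k, and data are all assumptions, not the paper's configuration.

```python
# Windowed RMS front end over a waveform, followed by a filter-based
# feature selector (mutual information via SelectKBest).
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

def rms_frontend(signal, window=64):
    n = len(signal) // window
    frames = signal[:n * window].reshape(n, window)
    return np.sqrt((frames ** 2).mean(axis=1))   # one RMS value per window

rng = np.random.default_rng(0)
X = np.array([rms_frontend(rng.normal(size=1024)) for _ in range(100)])
y = rng.integers(0, 2, 100)                      # placeholder fault labels
X_reduced = SelectKBest(mutual_info_classif, k=8).fit_transform(X, y)
print(X.shape, "->", X_reduced.shape)            # (100, 16) -> (100, 8)
```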

AI in Education, Affective Computing, and Human-Computer Interaction

Frontmatter

Context Ontologies in Ubiquitous Learning Environments

Modeling ubiquitous context in u-learning systems is a challenging task. The model must include support for a variety of context information that differs in nature and granularity level. As such, a representation mechanism that allows an adequate manipulation of this kind of information is needed. Many researchers have proposed using ontologies to embody ubiquitous learning context information. In this paper, an overview of such models is presented, followed by a discussion of their main characteristics regarding the ontology model itself and the support the model provides to the u-learning system.

Gabriela González, Elena Durán, Analía Amandi

An Interactive Tool to Support Student Assessment in Programming Assignments

The paper presents an interactive tool for the analysis of a set of source code submissions made by students when solving a programming assignment. The goal of the tool is to give a concise but informative overview of the different solutions submitted by the students, identifying groups of similar solutions and visualizing their relationships in a graph. Different strategies for calculating the solution groups, as well as for visualizing the solution graphs, were evaluated over a set of real code submissions from students of an algorithms class.

Lina F. Rosales-Castro, Laura A. Chaparro-Gutiérrez, Andrés F. Cruz-Salinas, Felipe Restrepo-Calle, Jorge Camargo, Fabio A. González

Hidden Markov Models for Artificial Voice Production and Accent Modification

In this paper, we consider the problem of accent modification between Castilian Spanish and Mexican Spanish. This is an interesting application area for tasks such as the automatic dubbing of pictures and videos with different accents. We initially apply statistical parametric speech synthesis to produce two artificial voices, each with the required accent, using Hidden Markov Models (HMM). This type of speech synthesis technique is capable of learning and reproducing certain essential parameters of the voice in question. We then propose a way to adapt these parameters between the two accents. The prosodic differences in the voices are modeled and transformed directly using this adaptation method. In order to produce the voices initially, we use a speech database that was developed by professional actors from Spain and Mexico. The results obtained from subjective and objective tests are promising, and the method is essentially applicable to accent modification between other Spanish accents.

Marvin Coto-Jiménez, John Goddard-Close

Backmatter
