
2017 | Book

Advances in Computational Intelligence

15th Mexican International Conference on Artificial Intelligence, MICAI 2016, Cancún, Mexico, October 23–28, 2016, Proceedings, Part I


About this book

The two-volume set LNAI 10061 and 10062 constitutes the proceedings of the 15th Mexican International Conference on Artificial Intelligence, MICAI 2016, held in Cancún, Mexico, in October 2016.
The total of 86 papers presented in these two volumes was carefully reviewed and selected from 238 submissions. The contributions were organized in the following topical sections:

Part I: natural language processing; social networks and opinion mining; fuzzy logic; time series analysis and forecasting; planning and scheduling; image processing and computer vision; robotics.

Part II: general; reasoning and multi-agent systems; neural networks and deep learning; evolutionary algorithms; machine learning; classification and clustering; optimization; data mining; graph-based algorithms; and intelligent learning environments.

Table of Contents

Frontmatter

Natural Language Processing

Frontmatter
Relevance of Named Entities in Authorship Attribution

Named entities (NE) are words that refer to names of people, locations, organizations, etc. NE are present in every kind of document: e-mails, letters, essays, novels, poems. Automatic detection of these words is a very important task in natural language processing. NE are sometimes used in authorship attribution studies as a stylometric feature. The goal of this paper is to evaluate the effect of the presence of NE in texts for the authorship attribution task: are we really detecting the style of an author, or are we just discovering the appearance of the same NE? We used a corpus that consists of 91 novels by 7 authors of the XVIII century. These authors spoke and wrote English, their native language. All novels belong to the fiction genre. The stylometric features used were character n-grams, word n-grams and n-grams of POS tags of various sizes (2-grams, 3-grams, etc.). Five novels were selected for each author; these novels contain between 4 and 7% NE. All novels were divided into blocks of 10,000 terms each. Two kinds of experiments were conducted: automatic classification of blocks containing NE and of the same blocks without NE. In some cases, we used only the most frequent n-grams (500, 2,000 and 4,000 n-grams). Three machine learning algorithms were used for the classification task: NB, SVM (SMO) and J48. The results show that, as a tendency, the presence of NE helps classification (improvements from 5% to 20%), but there are specific authors for whom NE do not help and even worsen the classification (about 10% of the experimental data).

Germán Ríos-Toledo, Grigori Sidorov, Noé Alejandro Castro-Sánchez, Alondra Nava-Zea, Liliana Chanona-Hernández
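
A minimal sketch (not the authors' code) of the kind of pipeline the abstract describes: character 3-gram counts with a capped vocabulary feeding a linear SVM over hypothetical author-labeled text blocks; the data, block size and classifier settings are placeholders.

```python
# Minimal sketch, not the authors' code: character 3-gram features with a
# capped vocabulary, fed to a linear SVM (an analogue of the n-gram + SMO
# setup in the abstract). Data below are hypothetical author-labeled blocks.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

blocks = [
    "it is a truth universally acknowledged that a single man ...",
    "my dear friend, you must allow me to tell you how ardently ...",
    "it was the best of times, it was the worst of times ...",
    "a tale of two cities begins upon the dover road ...",
]
authors = ["author_1", "author_1", "author_2", "author_2"]

model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(3, 3), max_features=2000),
    LinearSVC(),
)
scores = cross_val_score(model, blocks, authors, cv=2)  # tiny cv just for the toy data
print("mean accuracy:", scores.mean())
```
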
A Compact Representation for Cross-Domain Short Text Clustering

Nowadays, Twitter represents a rich source of on-line reviews, ratings, recommendations, and other forms of opinion expression. This scenario has created a compelling demand for innovative mechanisms to store, search, organize and analyze all this data automatically. Unfortunately, enough labeled data is seldom available for Twitter, because of the cost of the labeling process or the impossibility of obtaining it, given the rapid growth and change of this kind of media. To avoid such limitations, unsupervised categorization strategies are employed. In this paper we face the problem of cross-domain short text clustering through a compact representation that allows us to avoid the problems that arise from high dimensionality and vocabulary sparseness. Our experiments, conducted in a cross-domain scenario using very short texts, indicate that the proposed representation allows us to generate high-quality groups, according to the Silhouette coefficient values obtained.

Alba Núñez-Reyes, Esaú Villatoro-Tello, Gabriela Ramírez-de-la-Rosa, Christian Sánchez-Sánchez
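
A minimal sketch of the evaluation idea mentioned in the abstract, under assumed inputs: cluster a handful of hypothetical short texts (here with plain TF-IDF and k-means, not the paper's compact representation) and score the grouping with the Silhouette coefficient.

```python
# Minimal sketch, assumed setup (plain TF-IDF + k-means rather than the
# paper's compact representation): cluster a few short texts and report the
# Silhouette coefficient used as the quality measure in the abstract.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import silhouette_score

tweets = [
    "great phone, the battery lasts all day",
    "battery dies fast on this phone",
    "loved this movie, great acting",
    "boring film, fell asleep halfway",
]

X = TfidfVectorizer().fit_transform(tweets)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("silhouette:", silhouette_score(X, labels))
```
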
Characteristics of Most Frequent Spanish Verb-Noun Combinations

We study the most frequent Spanish verb-noun combinations retrieved from the Spanish Web Corpus. We present the statistics of these combinations and analyze the degree of cohesiveness of their components. For the verb-noun combinations that turned out to be collocations, we determined their semantics in the form of lexical functions. We also observed which word senses are most typical for polysemous words in the verb-noun combinations under study and determined the level of generalization that characterizes the semantics of words in the combinations, that is, at what level of the hyperonymy-hyponymy tree they are located. The data we collected can be used in various applications of natural language processing, especially in predictive models in which the most frequent cases are taken into account.

Olga Kolesnikova, Alexander Gelbukh
Sentence Paraphrase Graphs: Classification Based on Predictive Models or Annotators’ Decisions?

As part of our project ParaPhraser on the identification and classification of Russian paraphrases, we have collected a corpus of more than 8000 sentence pairs annotated as precise, loose or non-paraphrases. The corpus is annotated via crowdsourcing by naïve native Russian speakers, but from the expert's point of view our complex paraphrase detection model can be more successful at predicting the paraphrase class than a naïve native speaker. Our paraphrase corpus is collected from news headlines and can therefore be considered a summarized news stream describing the most important events. By building a graph of paraphrases, we can detect such events. In this paper we construct two such graphs: one based on the current human annotation and one based on the complex model's predictions. The structure of the graphs is compared and analyzed, and it is shown that the model graph has larger connected components, which give a more complete picture of the important events than the human annotation graph. The predictive model appears to be better at capturing full information about the important events in the news collection than the human annotators.

Ekaterina Pronoza, Elena Yagunova, Nataliya Kochetkova
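
A minimal sketch of the graph construction described in the abstract, with made-up headlines: nodes are sentences, edges are pairs predicted to be paraphrases, and events are read off as connected components.

```python
# Minimal sketch with made-up headlines: nodes are sentences, edges are pairs
# predicted to be paraphrases, and each connected component is read as one event.
import networkx as nx

headlines = [
    "Court approves the merger",
    "Merger gets court approval",
    "Storm hits the northern coast",
]
paraphrase_pairs = [(0, 1)]   # e.g., output of a paraphrase classifier

g = nx.Graph()
g.add_nodes_from(range(len(headlines)))
g.add_edges_from(paraphrase_pairs)

for component in nx.connected_components(g):
    print("event cluster:", [headlines[i] for i in component])
```
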
Mathematical Model of an Ontological-Semantic Analyzer Using Basic Ontological-Semantic Patterns

In this work we propose a mathematical model of a Russian-text semantic analyzer based on semantic rules. We provide a working algorithm for the semantic analyzer and demonstrate some examples of its software implementation in Java. According to the developed algorithm, the text supplied as input to the semantic analyzer is gradually reduced: some syntaxemes from the analyzed text, in accordance with the semantic rules, are added to a priority queue; then on each iteration the syntaxeme corresponding to the highest-priority element is removed from the text. When a syntaxeme is removed from the text, the corresponding element is removed from the queue. The priority of a queue element is based on the group priority of the semantic dependence described by a semantic rule, as well as on the position, in the analyzed text, of the syntaxeme corresponding to the queue element.

Anastasia Mochalova, Vladimir Mochalov
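
A minimal sketch of the reduction loop described in the abstract, with assumed data structures: syntaxemes matched by hypothetical semantic rules sit in a priority queue, and on each iteration the highest-priority one is removed from the text.

```python
# Minimal sketch of the reduction loop, with assumed data: heapq is a min-heap,
# so a smaller number stands for a higher priority here.
import heapq

text = ["syntaxeme_A", "syntaxeme_B", "syntaxeme_C"]
# (priority, syntaxeme) pairs produced by hypothetical semantic rules
queue = [(2, "syntaxeme_A"), (1, "syntaxeme_B")]
heapq.heapify(queue)

while queue:
    priority, syntaxeme = heapq.heappop(queue)   # highest-priority element
    if syntaxeme in text:
        text.remove(syntaxeme)                   # remove the syntaxeme from the text
        print(f"removed {syntaxeme} (priority {priority}) ->", text)
```
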
CookingQA: A Question Answering System Based on Cooking Ontology

We present an approach to developing a Question Answering (QA) system over cooking recipes that makes use of cooking ontology management. QA systems are designed to satisfy a user's specific information need, whereas an ontology is a conceptualization of knowledge with a hierarchical structure. The system is an information retrieval (IR) based system that handles tasks such as question classification, answer pattern recognition, indexing, and final answer generation. Our proposed QA system uses Apache Lucene for document retrieval. All cooking-related documents are indexed using Apache Lucene. Stop words are removed from each cooking-related question to form the query words, which are used to retrieve the most relevant documents with Lucene. Relevant paragraphs are selected from the retrieved documents based on the tf-idf of the matching query words along with the n-gram overlap of the paragraph with the original question. This paper also presents a way to develop an ontology model such that queries can be processed with the help of the ontology knowledge base to generate the exact answer.

Riyanka Manna, Partha Pakray, Somnath Banerjee, Dipankar Das, Alexander Gelbukh
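
A minimal sketch of the paragraph-selection step described in the abstract (not the CookingQA system itself, which relies on Apache Lucene): candidate paragraphs are ranked by the TF-IDF weight of shared query words plus a simple word-bigram overlap with the question.

```python
# Minimal sketch: rank candidate paragraphs by shared-term TF-IDF weight plus
# word-bigram overlap with the question; documents and question are made up.
from sklearn.feature_extraction.text import TfidfVectorizer

question = "how long should I boil an egg"
paragraphs = [
    "Boil the egg for six minutes for a soft yolk.",
    "Chop the onions and fry them until golden.",
]

vec = TfidfVectorizer(stop_words="english")
P = vec.fit_transform(paragraphs)
q = vec.transform([question])
tfidf_score = (P @ q.T).toarray().ravel()        # TF-IDF weight of matching query words

def bigrams(text):
    words = text.lower().split()
    return set(zip(words, words[1:]))

overlap = [len(bigrams(p) & bigrams(question)) for p in paragraphs]
best = max(range(len(paragraphs)), key=lambda i: tfidf_score[i] + overlap[i])
print("selected paragraph:", paragraphs[best])
```
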
LEXIK. An Integrated System for Specialized Terminology

The paper presents LEXIK, an intelligent terminological architecture that is able to efficiently obtain specialized lexical resources for elaborating dictionaries and providing lexical support for different expert tasks. LEXIK is designed as a powerful tool to create a rich knowledge base for lexicography. It will process large amounts of data in a modular system that combines several applications and techniques for terminology extraction, definition generation, example extraction and term banks, which have so far been only partially developed. Such integration is a challenge for the area, which lacks an integrated system for extracting and defining terms from a non-preprocessed corpus.

Gerardo Sierra, Jorge Lázaro, Gemma Bel-Enguix
Linguistic Restrictions in Automatic Translation from Written Spanish to Mexican Sign Language

This document presents the lexical and syntactic study carried out to evaluate the distinguishing features of Mexican Sign Language (MSL), as well as the work accomplished on a model for automatic translation from written Spanish to MSL. The aims of this work were to foster the conditions for greater social inclusion of deaf and hearing-impaired people and to contribute to the creation of linguistic resources that support further studies of Mexican Sign Language. This project led to the development of a corpus of MSL signs, a small interlingual collocations dictionary between Spanish and MSL, and a dictionary of synonyms in MSL. Essential cases of MSL with respect to Spanish were identified. The application was developed using rule-based translation with lexical and syntactic transfer. It required the design of a model with validation of basic syntactic structures and equivalence rules in Spanish.

Obdulia Pichardo-Lagunas, Bella Martínez-Seis, Alejandro Ponce-de-León-Chávez, Carlos Pegueros-Denis, Ricardo Muñoz-Guerrero
Intra-document and Inter-document Redundancy in Multi-document Summarization

Multi-document summarization differs from single-document summarization in the excessive redundancy of mentions of some events or ideas. We show how the amount of redundancy in a document collection can be used to assign importance to sentences in multi-document extractive summarization: for instance, an idea could be important if it is redundant across documents because of its popularity; on the other hand, an idea could be important if it is not redundant across documents because of its novelty. We propose an unsupervised graph-based technique that, based on proper similarity measures, allows us to experiment with intra-document and inter-document redundancy. Our experiments on DUC corpora show promising results.

Pabel Carrillo-Mendoza, Hiram Calvo, Alexander Gelbukh
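
A minimal, generic sketch of a graph-based redundancy signal (not the authors' exact similarity measures): sentences from different documents are linked by cosine similarity, and a centrality score serves as a proxy for cross-document redundancy.

```python
# Minimal, generic sketch: an inter-document sentence similarity graph with a
# centrality score as a proxy for cross-document redundancy; texts are made up.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    ("doc1", "The dam project was approved on Monday."),
    ("doc2", "Officials approved the dam project this week."),
    ("doc2", "Local farmers worry about irrigation."),
]

X = TfidfVectorizer().fit_transform([s for _, s in sentences])
sim = cosine_similarity(X)

g = nx.Graph()
g.add_nodes_from(range(len(sentences)))
for i in range(len(sentences)):
    for j in range(i + 1, len(sentences)):
        if sentences[i][0] != sentences[j][0] and sim[i, j] > 0.1:  # inter-document edges only
            g.add_edge(i, j, weight=float(sim[i, j]))

scores = nx.pagerank(g)
best = max(scores, key=scores.get)
print("most redundant sentence (summary candidate):", sentences[best][1])
```
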
Indexing and Searching of Judicial Precedents Using Automatic Summarization

With the aim of democratizing access to justice, the Colombian legal system has recognized the importance of judicial precedent. Judicial precedent allows citizens to request judicial decisions based on previous sentences (court rulings). Some tools are provided to search these sentences. These tools are based on keywords, and therefore the search may return many non-relevant results. The low efficacy of these tools may make it difficult to find the right sentence. Recently, an approach to sentence search in the Colombian context was presented. This approach explores the full content of the sentences, which leads to high processing times. One solution to this problem may be to generate automatic summaries before performing the search. This paper presents a comparative analysis of algorithms for automatic summarization. The experimental evaluation shows promising results.

Armando Ordóñez, Diego Belalcazar, Manuel Calambas, Angela Chacón, Hugo Ordoñez, Carlos Cobos
Discriminatory Capacity of the Most Representative Phonemes in Spanish: An Evaluation for Forensic Voice Comparison

In this paper, a study of the discriminatory capacity of the most representative segments for forensic speaker comparison in Mexican Spanish is presented. The study is based on two corpora, in order to assess the discriminatory capacity of the acoustic parameters of the fundamental frequency and the first three vowel formants for read and semi-spontaneous speech. We found that the context /sa/ has a discriminatory capacity of 73% for classifying speakers using the first three formants of the vowel /a/ with a dynamic analysis. We used several statistical techniques and found that the best methodology for pattern recognition consists of using linear regression with a quadratic fit to reduce the number of predictors to a manageable level and applying discriminant analysis to the reduced set. This result is consistent with previous research data, even though this methodology had never been used for Mexican Spanish.

Fernanda López-Escobedo, Luis Alberto Pineda Cortés
A Speech-Based Web Co-authoring Platform for the Blind

Today, even with IT applications built for the blind, blind users still face many problems. Either the applications are intended for individual use, or, if they offer collaborative writing, they are not up to the mark. In collaborative writing, for instance in the Microsoft Word Add-In and the Google Docs UI, authors do not know about changes made in the document. Although the former facilitates this feature through comments added to the document, dynamic actions such as notifications (a user signing in, editing the document, closing it) and a communication option are yet to be considered. We describe a web application for the blind community to work on a shared document from different locations. Users are able to write, read, edit, and update the collaborative document in a controlled way. We aim to enhance users' self-reliance and improve performance.

Madeeha Batool, Mirza Muhammad Waqar, Ana Maria Martinez-Enriquez, Aslam Muhammad

Social Networks and Opinion Mining

Frontmatter
Friends and Enemies of Clinton and Trump: Using Context for Detecting Stance in Political Tweets

Stance detection, the task of identifying the speaker's opinion towards a particular target, has attracted the attention of researchers. This paper describes a novel approach for detecting stance in Twitter. We define a set of features in order to consider the context surrounding a target of interest, with the final aim of training a model for predicting the stance towards the mentioned targets. In particular, we are interested in investigating political debates in social media. For this reason we evaluated our approach focusing on two targets of the SemEval-2016 Task 6 on Detecting Stance in Tweets, which are related to the political campaign for the 2016 U.S. presidential elections: Hillary Clinton vs. Donald Trump. For the sake of comparison with the state of the art, we evaluated our model against the dataset released in the SemEval-2016 Task 6 shared task competition. Our results outperform the best ones obtained by the participating teams and show that information about the enemies and friends of politicians helps in detecting stance towards them.

Mirko Lai, Delia Irazú Hernández Farías, Viviana Patti, Paolo Rosso
Additive Regularization for Topic Modeling in Sociological Studies of User-Generated Texts

Social studies of the Internet have adopted large-scale text mining for the unsupervised discovery of topics related to specific subjects. A recently developed approach to topic modeling, additive regularization of topic models (ARTM), provides fast inference and more control over the topics, through a wide variety of possible regularizers, than developing LDA extensions. We apply ARTM to mining ethnic-related content from the Russian-language blogosphere, introduce a new combined regularizer, and compare models derived from ARTM with LDA. We show with human evaluations that ARTM is better for mining topics on specific subjects, finding more relevant topics of higher or comparable quality. We also include a detailed analysis of how to tune regularization coefficients in ARTM models.

Murat Apishev, Sergei Koltcov, Olessia Koltsova, Sergey Nikolenko, Konstantin Vorontsov
Effects of the Inclusion of Non-newsworthy Messages in Credibility Assessment

Social media has become influential and affects public perception at large. Anyone can post and share messages on social networking sites. However, not all posts are trustworthy. Many online messages contain misleading or false information. There has been extensive research on assessing the credibility of social media data. Previous studies evaluate all online messages, which may be inappropriate because the large amount of such data can make the system ineffective. This paper studies and presents the effects of including such data (namely, non-newsworthy messages) in credibility assessment. Our findings affirm a negative effect of training a model with non-newsworthy data. The degree of performance degradation is also shown to be strongly connected to the degree of non-newsworthiness in the training data.

Chaluemwut Noyunsan, Tatpong Katanyukul, Carson K. Leung, Kanda Runapongsa Saikaew
On the Impact of Neighborhood Selection Strategies for Recommender Systems in LBSNs

Location-based social networks (LBSNs) have emerged as a new concept in online social media, due to the widespread adoption of mobile devices and location-based services. LBSNs leverage technologies such as GPS, Web 2.0 and smartphones to allow users to share their locations (check-ins), search for places of interest (POIs), look for discounts, comment about specific places, connect with friends and find those who are near a specific location. To take advantage of the information that users share in these networks, location-based recommender systems (LBRSs) generate suggestions by applying different recommendation techniques, collaborative filtering (CF) being one of the most traditional. In this article we analyze different strategies for selecting neighbors in the classic CF approach, considering information contained in the users' social network, common visits, and place of residence as influential factors. The proposed approaches were evaluated using data from a popular location-based social network, showing improvements over the classic collaborative filtering approach.

Carlos Ríos, Silvia Schiaffino, Daniela Godoy
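
A minimal sketch of user-based CF over a hypothetical check-in matrix, where the candidate neighborhood can be restricted, for example to the user's friends, which is the kind of selection strategy the abstract discusses.

```python
# Minimal sketch with a hypothetical check-in matrix and social links: the
# neighborhood used by user-based CF can be all users or only the user's friends.
import numpy as np

# rows = users, columns = POIs, values = number of check-ins
checkins = np.array([
    [3, 0, 1, 0],
    [2, 1, 0, 0],
    [0, 4, 0, 2],
], dtype=float)
friends = {0: {1}, 1: {0}, 2: set()}   # assumed social network

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v) / denom if denom else 0.0

def predict(user, poi, candidates):
    sims = [(cosine(checkins[user], checkins[n]), n) for n in candidates if n != user]
    sims = [(s, n) for s, n in sims if s > 0]
    if not sims:
        return 0.0
    return sum(s * checkins[n, poi] for s, n in sims) / sum(s for s, _ in sims)

print("all-users neighborhood   :", predict(0, 1, range(len(checkins))))
print("friends-only neighborhood:", predict(0, 1, friends[0]))
```
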

Fuzzy Logic

Frontmatter
For Multi-interval-valued Fuzzy Sets, Centroid Defuzzification Is Equivalent to Defuzzifying Its Interval Hull: A Theorem

In traditional fuzzy logic, the expert's degree of certainty in a statement is described either by a number from the interval [0, 1] or by a subinterval of such an interval. To adequately describe the opinion of several experts, researchers proposed to use the union of the corresponding sets, which is, in general, more complex than an interval. In this paper, we prove that for such set-valued fuzzy sets, centroid defuzzification is equivalent to defuzzifying the interval hull. As a consequence of this result, we prove that the centroid defuzzification of a general type-2 fuzzy set can be reduced to the easier-to-compute case in which, for each x, the corresponding fuzzy degree of membership is convex.

Vladik Kreinovich, Songsak Sriboonchitta
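
A small numerical illustration (a brute-force sketch under assumed discrete data, not a proof): for a fuzzy set whose membership at each x may lie in a union of intervals, the attainable range of centroids coincides with the range obtained after replacing each union by its interval hull.

```python
# Brute-force numerical illustration (not a proof), on a small discrete domain:
# the attainable centroid range for a union-valued membership matches the range
# for its interval hull.
import itertools

xs = [1.0, 2.0, 3.0]
# membership constraints at each x: a union of intervals (hypothetical values)
unions = [[(0.1, 0.2), (0.6, 0.7)], [(0.4, 0.5)], [(0.2, 0.3), (0.8, 0.9)]]
hulls = [(min(a for a, _ in u), max(b for _, b in u)) for u in unions]

def grid(interval, n=5):
    a, b = interval
    return [a + (b - a) * i / (n - 1) for i in range(n)]

def centroid_range(choices_per_x):
    centroids = [
        sum(x * m for x, m in zip(xs, mus)) / sum(mus)
        for mus in itertools.product(*choices_per_x)
    ]
    return min(centroids), max(centroids)

union_choices = [[v for iv in u for v in grid(iv)] for u in unions]
hull_choices = [grid(h) for h in hulls]
print("centroid range, union-valued set:", centroid_range(union_choices))
print("centroid range, interval hull   :", centroid_range(hull_choices))
```
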
Metric Spaces Under Interval Uncertainty: Towards an Adequate Definition

In many practical situations, we only know bounds on the distances. A natural question is: knowing these bounds, can we check whether there exists a metric whose distances always lie within these bounds, or whether no such metric is possible and thus the bounds are inconsistent? In this paper, we provide an answer to this question. We also describe possible applications of this result to the description of opposite notions in commonsense reasoning.

Mahdokht Afravi, Vladik Kreinovich, Thongchai Dumrongpokaphoan
A Study of Parameter Dynamic Adaptation with Fuzzy Logic for the Grey Wolf Optimizer Algorithm

The main goal of this paper is to present a general study of the Grey Wolf Optimizer algorithm. In the first part we perform tests to determine which parameters are candidates for dynamic adjustment, and in the second stage we determine which parameters have the greatest effect on the performance of the algorithm. We also present a justification and the results of the experiments, as well as the benchmark functions that were used for the tests. In addition, we present a simple fuzzy system together with the results obtained based on this general study.

Luis Rodríguez, Oscar Castillo, José Soria
Interval Type-2 Fuzzy Logic for Parameter Adaptation in the Gravitational Search Algorithm

In this paper we present a modification of the Gravitational Search Algorithm (GSA) that uses type-2 fuzzy logic to dynamically change the alpha parameter and provide different gravitation and acceleration values to each agent in order to improve its performance. We test this approach with benchmark mathematical functions. Simulation results show the advantages of the proposed approach.

Beatriz González, Fevrier Valdez, Patricia Melin
Water Cycle Algorithm with Fuzzy Logic for Dynamic Adaptation of Parameters

This paper describes the enhancement of the Water Cycle Algorithm (WCA) with a fuzzy inference system to adapt its parameters dynamically. The original WCA is compared, in terms of performance, with the proposed method, called Water Cycle Algorithm with Dynamic Parameter Adaptation (WCA-DPA). Simulation results on a set of well-known test functions show that the WCA can be improved with fuzzy dynamic adaptation of the parameters.

Eduardo Méndez, Oscar Castillo, José Soria, Patricia Melin, Ali Sadollah
Off-line Tuning of a PID Controller Using Type-2 Fuzzy Logic

Tuning PID controllers is a hard task; sometimes it is almost an art. Different paradigms have been proposed to optimize the tuning of these controllers; some of them have given acceptable results, others have not. Approaches ranging from heuristics to artificial intelligence have been used. In this paper we use Type-2 Fuzzy Logic to tune a PID controller applied to three transfer functions; we obtained their step responses and then compared them with the results of Type-1 Fuzzy Logic.

Heberi R. Tello-Rdz, Luis M. Torres-Treviño, Angel Rodríguez-Liñan
Detection of Faults in Induction Motors Using Texture-Based Features and Fuzzy Inference

The most popular rotating machine in industry is the induction motor, and harmful states of such motors may have consequences for cost, product quality, and safety. In this paper, a methodology for detecting faults in induction motors is proposed. The methodology is based on the use of texture-inspired features in a fuzzy inference system. The features are extracted from the start-up current signal using histograms of sums and differences, which have not been used before for this kind of application. The detected states of a given motor are: misalignment, motor with one broken bar, and motor in good condition. The proposed methodology shows satisfactory results on real signals of faulty motors, providing a new approach to detecting faults automatically using only the current signals from the start-up stage.

Uriel Calderon-Uribe, Rocio A. Lizarraga-Morales, Carlos Rodriguez-Donate, Eduardo Cabal-Yepez
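
A minimal sketch of sum and difference histogram features for a one-dimensional start-up current signal; the lag, bin count and descriptor set are assumptions, and the paper's fuzzy inference stage is not shown.

```python
# Minimal sketch: 1-D sum and difference histograms of a (synthetic) start-up
# current signal at a given lag, plus a few texture-style descriptors; the
# paper's exact features and fuzzy rules are not reproduced here.
import numpy as np

def sum_diff_features(signal, lag=3, bins=32):
    s = signal[:-lag] + signal[lag:]          # pairwise sums at the given lag
    d = signal[:-lag] - signal[lag:]          # pairwise differences
    hs, _ = np.histogram(s, bins=bins)
    hd, _ = np.histogram(d, bins=bins)
    hs = hs / hs.sum()                        # normalize to probabilities
    hd = hd / hd.sum()
    eps = 1e-12
    k = np.arange(bins) - bins / 2.0
    return {
        "sum_energy": float(np.sum(hs ** 2)),
        "diff_entropy": float(-np.sum(hd * np.log(hd + eps))),
        "diff_contrast": float(np.sum(k ** 2 * hd)),
    }

t = np.linspace(0.0, 1.0, 2000)
current = np.sin(2 * np.pi * 60 * t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
print(sum_diff_features(current))
```
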

Time Series Analysis and Forecasting

Frontmatter
Trend Detection in Gold Worth Using Regression

A mapping chase autoregression shape is applied here to predict gold worth. Previous works centered on predicting the instability of gold worth to reveal the characteristics of the gold market. Because gold worth data have high dimensionality, MCAF is suitable and able to predict gold worth more accurately than other mechanisms. In this paper, MCAF is applied to the daily worth of gold. The experimental results indicate that MCAF outperforms the BPNN technique, especially in stability, which reveals the advantage of the MCAF technique in dealing with huge amounts of data.

Seyedeh Foroozan Rashidi, Hamid Parvin, Samad Nejatian
Internet Queries as a Tool for Analysis of Regional Police Work and Forecast of Crimes in Regions

In the paper we propose two technologies for processing web search queries related to criminal activity. The first is based on comparing the relative number of crimes and the corresponding queries in different regions. Such analysis allows evaluating the work of the regional police. The second technology uses the correlation between the dynamics of crimes and the dynamics of queries for application of the Group Method of Data Handling (GMDH). It allows forecasting crimes in regions. The source data are taken from Internet repositories of the Yandex company and the Office of the Prosecutor General of Russia. The results of the analysis coincide with the official governmental statistics. The forecast accuracy proves to be very high (2%–6% error). This circumstance allows us to recommend the proposed technologies to police departments.

Anna Boldyreva, Mikhail Alexandrov, Olexiy Koshulko, Oleg Sobolevskiy
Creating Collections of Descriptors of Events and Processes Based on Internet Queries

Search queries to the Internet are a real reflection of events and processes that happen in the information society. Moreover, recent research shows that search queries can be an effective tool for the analysis and forecasting of these events and processes. In this paper, we present our experience in creating databases of descriptors (queries and their combinations) to be used in real problems. An example related to the analysis and forecasting of a regional economy illustrates an application of the mentioned descriptors. The paper is intended for those who use or plan to use Internet queries in their applied research and practical applications.

Anna Boldyreva, Oleg Sobolevskiy, Mikhail Alexandrov, Vera Danilova

Planning and Scheduling

Frontmatter
Using a Grammar Checker to Validate Compliance of Processes with Workflow Models

Grammar checking has been used for a long time to validate if a given sentence - a sequence of words - complies with grammar rules. Recently attribute grammars have been proposed as a formal model to describe workflows. A workflow specifies valid processes, which are sequences of actions. In this paper we show how a grammar checker developed for natural languages can be used to validate whether or not a given process complies with the workflow model expressed using an attribute grammar. The checker can also suggest possible corrections of the process to become a valid process.

Roman Barták, Vladislav Kuboň
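
A minimal sketch of the core idea with a hypothetical workflow and a plain context-free grammar (the paper uses attribute grammars): a process is treated as a sentence of actions and is valid if the grammar derives it.

```python
# Minimal sketch with a hypothetical workflow encoded as a plain CFG (the paper
# uses attribute grammars): a process complies if the grammar derives it.
import nltk

workflow_grammar = nltk.CFG.fromstring("""
  S -> 'receive_order' PAY 'ship'
  PAY -> 'invoice' 'collect_payment' | 'charge_card'
""")
parser = nltk.ChartParser(workflow_grammar)

process = ["receive_order", "charge_card", "ship"]      # a candidate process
valid = any(True for _ in parser.parse(process))        # does any parse tree exist?
print("process complies with workflow:", valid)
```
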
On Verification of Workflow and Planning Domain Models Using Attribute Grammars

Recently, attribute grammars have been suggested as a unifying framework to describe workflow and planning domain models. One of the critical aspects of a domain model is its soundness; that is, the model should not contain any dead-ends and should describe at least one plan. In this paper we describe how the domain model can be verified by using the concept of reduction of attribute grammars. Two verification methods are suggested, one based on transformation to context-free grammars and one a direct method exploiting constraint satisfaction.

Roman Barták, Tomáš Dvořák
Tramp Ship Scheduling Problem with Berth Allocation Considerations and Time-Dependent Constraints

This work presents a model for the Tramp Ship Scheduling problem including berth allocation considerations, motivated by a real case of a shipping company. The aim is to determine the travel schedule for each vessel considering multiple docking and multiple time windows at the berths. This work is innovative due to the consideration of both spatial and temporal attributes during the scheduling process. The resulting model is formulated as a mixed-integer linear programming problem, and a heuristic method to deal with multiple vessel schedules is also presented. Numerical experimentation is performed to highlight the benefits of the proposed approach and the applicability of the heuristic. Conclusions and recommendations for further research are provided.

Francisco López-Ramos, Armando Guarnaschelli, José-Fernando Camacho-Vallejo, Laura Hervert-Escobar, Rosa G. González-Ramírez
Hierarchical Task Model for Resource Failure Recovery in Production Scheduling

Attaining optimal results in real-life scheduling is hindered by a number of problems. One such problem is the dynamics of manufacturing environments, with breaking-down resources and hot orders arriving during schedule execution. The traditional approach to reacting to unexpected events occurring on the shop floor is to generate a new schedule from scratch. Complete rescheduling, however, may require excessive computation time. Moreover, the recovered schedule may deviate a lot from the ongoing schedule. Some works have focused on tackling these shortcomings, but none of the existing approaches tries to substitute jobs that cannot be executed with a set of alternative jobs. This paper describes a scheduling model suitable for dealing with unforeseen events using the possibility of alternative processes, and proposes an efficient heuristic-based approach to recover an ongoing schedule from a resource failure.

Roman Barták, Marek Vlk
A Multi-objective Hospital Operating Room Planning and Scheduling Problem Using Compromise Programming

This paper proposes a hybrid compromise programming local search approach with two main characteristics: a capacity to generate non-dominated solutions and the ability to interact with the decision maker. Compromise programming is an approach where it is not necessary to determine the entire set of Pareto-optimal solutions but only some of them. These solutions are called compromise solutions and represent a good tradeoff between conflicting objectives. Another advantage of this type of method is that it allows the inclusion of the decision maker’s preferences through the definition of weights included in the different metrics used by the method. This approach is tested on an operating room planning process. This process incorporates the operating rooms and the nurse planning simultaneously. Three different objectives were considered: to minimize operating room costs, to minimize the maximum number of nurses needed to participate in surgeries and to minimize the number of open operating rooms. The results show that it is a powerful decision tool that enables the decision makers to apply compromise alongside optimal solutions during an operating room planning process.

Alejandra Duenas, Christine Di Martinelly, G. Yazgı Tütüncü, Joaquin Aguado
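
A minimal sketch of the compromise-programming selection step with toy numbers (not the paper's operating room model): among candidate schedules described by objective vectors, pick the one closest to the ideal point under a weighted Lp metric.

```python
# Minimal sketch with toy numbers (not the paper's model): each row is a
# candidate schedule's objective vector (cost, max nurses, open rooms); the
# compromise solution minimizes a weighted Lp distance to the ideal point.
import numpy as np

candidates = np.array([
    [1200.0, 6.0, 4.0],
    [1000.0, 8.0, 5.0],
    [1100.0, 7.0, 3.0],
])
ideal = candidates.min(axis=0)
nadir = candidates.max(axis=0)
weights = np.array([0.5, 0.3, 0.2])       # assumed decision-maker preferences

def compromise_index(p):
    normalized = (candidates - ideal) / (nadir - ideal)   # scale objectives to [0, 1]
    distances = (weights * normalized ** p).sum(axis=1) ** (1.0 / p)
    return int(np.argmin(distances))

for p in (1, 2):
    print(f"L{p} compromise solution:", candidates[compromise_index(p)])
```
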
Solving Manufacturing Cell Design Problems Using the Black Hole Algorithm

In this paper we solve the Manufacturing Cell Design Problem. This problem considers the grouping of different machines into sets or cells with the objective of minimizing the movement of material. To solve this problem we use the Black Hole algorithm, a modern population-based metaheuristic inspired by the phenomenon of the same name. At each iteration of the search, the best candidate solution is selected to be the black hole, and the other candidate solutions, known as stars, are attracted by the black hole. If one of these stars gets too close to the black hole, it disappears and a new random star (solution) is generated. Our approach has been tested on a well-known set of benchmark instances, reaching optimal values in all of them.

Ricardo Soto, Broderick Crawford, Nicolás Fernandez, Víctor Reyes, Stefanie Niklander, Ignacio Araya
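
A minimal sketch of the Black Hole metaheuristic as described in the abstract, shown on a toy continuous objective; the discrete cell-design encoding and repair steps of the paper are omitted.

```python
# Minimal sketch of the Black Hole metaheuristic on a toy continuous objective;
# the paper's discrete cell-design encoding and constraints are omitted.
import numpy as np

def objective(x):                 # toy function to minimize
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
dim, n_stars, iterations, lo, hi = 5, 20, 200, -5.0, 5.0
stars = rng.uniform(lo, hi, (n_stars, dim))

for _ in range(iterations):
    fitness = np.array([objective(s) for s in stars])
    best = int(np.argmin(fitness))
    black_hole = stars[best].copy()                           # best star becomes the black hole
    stars += rng.random((n_stars, 1)) * (black_hole - stars)  # stars are attracted to it
    radius = fitness[best] / (np.sum(fitness) + 1e-12)        # event horizon
    swallowed = np.linalg.norm(stars - black_hole, axis=1) < radius
    swallowed[best] = False                                   # the black hole itself stays
    stars[swallowed] = rng.uniform(lo, hi, (int(swallowed.sum()), dim))  # reborn as random stars

print("best solution found:", black_hole, "fitness:", objective(black_hole))
```
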

Image Processing and Computer Vision

Frontmatter
Efficient Computation of the Euler Number of a 2-D Binary Image

A new method to compute the Euler number of a 2-D binary image is described in this paper. The method employs three comparisons, unlike other proposals that require more. We present two variations, one useful for images containing only 4-connected objects and one useful in the case of 8-connected objects. To numerically validate our method, we first apply it to a set of very simple examples; to demonstrate its applicability, we then test it with a set of images of different sizes and object complexities. To show the competitiveness of our method against other proposals, we compare it in terms of processing time with some of the state-of-the-art formulations reported in the literature.

Juan Humberto Sossa-Azuela, Ángel A. Carreón-Torres, Raúl Santiago-Montero, Ernesto Bribiesca-Correa, Alberto Petrilli-Barceló
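
For reference only, a sketch of the classical bit-quad (Gray) formula for the Euler number; it is not the three-comparison method proposed in the paper, but it computes the same quantity and can be used to check results.

```python
# Classical bit-quad (Gray) formula for the Euler number, given for reference
# only; it is NOT the three-comparison method proposed in the paper.
import numpy as np

def euler_number(img, connectivity=4):
    b = np.pad(img.astype(int), 1)                 # zero-pad so border pixels form full 2x2 quads
    q = b[:-1, :-1] + b[:-1, 1:] + b[1:, :-1] + b[1:, 1:]   # foreground count per 2x2 quad
    q1 = int(np.sum(q == 1))                       # quads with exactly one foreground pixel
    q3 = int(np.sum(q == 3))                       # quads with exactly three
    diag = (b[:-1, :-1] == b[1:, 1:]) & (b[:-1, 1:] == b[1:, :-1]) & (b[:-1, :-1] != b[:-1, 1:])
    qd = int(np.sum(diag))                         # diagonal quads
    sign = 1 if connectivity == 4 else -1
    return (q1 - q3 + sign * 2 * qd) // 4

img = np.ones((5, 5), dtype=int)
img[2, 2] = 0                                      # one object with one hole
print(euler_number(img, connectivity=4))           # expected: 1 - 1 = 0
```
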
Image Filter Based on Block Matching, Discrete Cosine Transform and Principal Component Analysis

An algorithm for filtering images contaminated by additive white Gaussian noise is proposed. The algorithm uses groups of Hadamard-transformed patches of discrete cosine coefficients to reject noisy components according to the Wiener filtering approach. The groups of patches are found by the proposed block similarity search algorithm of reduced complexity, performed on block patches in the transform domain. When the noise variance is small, the proposed filter uses an additional stage based on principal component analysis; otherwise, empirical Wiener filtering is performed. The obtained filtering results are compared to state-of-the-art filters in terms of peak signal-to-noise ratio and the structural similarity index. It is shown that the proposed algorithm is competitive in terms of signal-to-noise ratio and in almost all cases is superior to the state-of-the-art filters in terms of structural similarity.

Alejandro I. Callejas Ramos, Edgardo M. Felipe-Riveron, Pablo Manrique Ramirez, Oleksiy Pogrebnyak
Support to the Diagnosis of the Pap Test, Using Computer Algorithms of Digital Image Processing

Forty years ago, cancer of the uterine cervix represented one of the greatest threats of cancer death among women. With continued advances in medicine and technology, deaths from this disease have declined significantly. Investigations concerning this issue have determined key symptoms for detecting the disease in time to provide timely treatment. Conventional cytology is one of the most commonly used techniques and is widely accepted because it is inexpensive and provides many control mechanisms. In order to alleviate the workload of specialists, some researchers have proposed the development of computer vision tools to detect and classify the changes in the cells of the cervix region. This research aims to provide researchers with an automatic classification tool applicable to the conditions in medical and research centers in the country. This tool classifies the cells of the cervix based solely on the features extracted from the nucleus region and reduces the rate of false negative Pap tests. The study produced a tool based on the k-nearest neighbors technique with Manhattan distance, which showed high performance, maintaining AUC values greater than 91% and reaching 97.1%, with a sensitivity of 96% and a specificity of 88%.

Solangel Rodríguez-Vázquez
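
A minimal sketch of the classifier and metrics named in the abstract, on synthetic stand-in data rather than the cervical-cell features: k-nearest neighbors with Manhattan distance, evaluated with AUC, sensitivity and specificity.

```python
# Minimal sketch on synthetic stand-in data (not the cervical-cell features):
# k-NN with Manhattan distance evaluated by AUC, sensitivity and specificity.
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=400, n_features=8, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

clf = KNeighborsClassifier(n_neighbors=5, metric="manhattan").fit(Xtr, ytr)
auc = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
tn, fp, fn, tp = confusion_matrix(yte, clf.predict(Xte)).ravel()
print("AUC:", round(auc, 3))
print("sensitivity:", round(tp / (tp + fn), 3), "specificity:", round(tn / (tn + fp), 3))
```
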
Implementation of Computer Vision Guided Peg-Hole Insertion Task Performed by Robot Through LabVIEW

This paper presents a computer vision guided peg-hole insertion task conducted by a robot. We mounted two cameras, one capturing the top view and the other the side view, to calibrate the three-dimensional coordinates of the center positions of the peg and hole found in the image to actual world coordinates, so that the robot can grab and insert the peg into the hole automatically. We exploit normalized cross-correlation based template matching and a distortion model (grid) calibration algorithm for our experiment. We use a linear equation for the linear and rotational displacement of the robot's arm and gripper, respectively, to compute the pulses required for the encoder. We use a gantry robot to conduct the experiment. The implementation was carried out in the LabVIEW environment. We achieved a significant degree of accuracy, with an experimental error of 5% for template matching and ±2.5 mm for the calibration algorithm.

Andres Sauceda Cienfuegos, Enrique Rodriguez, Jesus Romero, David Ortega Aranda, Baidya Nath Saha
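
A minimal sketch of the normalized cross-correlation template-matching step with OpenCV, using a synthetic stand-in image; camera calibration and robot control from the paper are omitted.

```python
# Minimal sketch of normalized cross-correlation template matching in OpenCV on
# a synthetic stand-in image; calibration and robot control are omitted.
import cv2
import numpy as np

scene = np.full((240, 320), 50, dtype=np.uint8)     # stand-in for the top-view camera frame
cv2.circle(scene, (200, 120), 15, 255, -1)          # the "peg" as a bright blob
template = scene[100:140, 180:220].copy()           # template cut around the peg

result = cv2.matchTemplate(scene, template, cv2.TM_CCORR_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)      # best match location (x, y)
h, w = template.shape
center = (max_loc[0] + w // 2, max_loc[1] + h // 2) # pixel coordinates of the peg center
print("match score:", round(max_val, 3), "peg center (px):", center)
```
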
Object Tracking Based on Modified TLD Framework Using Compressive Sensing Features

Visual object tracking is widely researched but still challenging, as both accuracy and efficiency must be considered in a single system. The CT tracker can achieve good real-time performance but is not very robust to fast movements. The TLD framework has the ability to re-initialize the object but cannot handle rotation and runs with low efficiency. In this paper, we propose a tracking algorithm that combines CT with the TLD framework so that each overcomes the disadvantages of the other. With the scale information obtained by an optical-flow tracker, we select samples for the detector and use the detection result to correct the optical-flow tracker. The features are extracted using compressive sensing to improve the processing speed. The classifier parameters are updated by online learning. To handle continuous loss of the object, a sliding-window search is also employed. Experimental results show that our proposed method achieves good performance in both precision and speed.

Tao Yang, Cindy Cappelle, Yassine Ruichek, Mohammed El Bagdouri
Parameter Characterization of Complex Wavelets and its use in 3D Reconstruction

Fringe projection is an optical method used to perform three-dimensional reconstruction of objects by applying structured light and phase detection algorithms. Some of these algorithms make use of the wavelet transform, a function that splits a signal into sub-signals with different scales at different levels of resolution. However, the use and implementation of the wavelet transform requires good parameterization of the many variables involved for each wavelet function (scale and translation coefficient variation), in addition to the analysis of different wavelet functions such as the Morlet, Paul mother, and Gaussian wavelets, among others. Based on these requirements, the present paper develops an in-depth analysis of the most suitable parameters for the Shannon, B-Spline and Morlet wavelets to ensure the most efficient 3D reconstruction. The experimental results are presented using a set of virtual objects and can be applied to a real object for the purpose of validation.

Claudia Victoria Lopez, Jesus Carlos Pedraza, Juan Manuel Ramos, Elias Gonzalo Silva, Efren Gorrostieta Hurtado
Methodology for Automatic Collection of Vehicle Traffic Data by Object Tracking

Traffic monitoring is carried out both manually and mechanically, and is subject to problems of subjectivity and high costs due to human error. This study proposes a methodology to collect vehicle traffic data (counts, speeds, etc.) from video in an automated fashion by means of object tracking techniques, which can help to design and implement reliable and accurate software. The development of this methodology followed the design cycle of any tracking system, namely preprocessing, detection, tracking and quantification. The preprocessing stage attenuated the noise and increased the classification percentage by an average of 10%. The object detection algorithm with the best performance was Gaussian Mixture Models, with an execution time of 0.06 s per image and a classification percentage of 86.71%. The computational cost of the object tracking was reduced using template matching with a search window. Finally, the quantification stage successfully collected the vehicular traffic data from the video.

Jesús Caro-Gutierrez, Miguel E. Bravo-Zanoguera, Félix F. González-Navarro
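
A minimal sketch of the detection stage the abstract reports as best performing: Gaussian-mixture background subtraction applied frame by frame with OpenCV; the video path is a placeholder, and tracking and speed estimation are omitted.

```python
# Minimal sketch of the detection stage: Gaussian-mixture background
# subtraction per frame; "traffic.mp4" is a placeholder path, and tracking and
# speed estimation are not shown.
import cv2

cap = cv2.VideoCapture("traffic.mp4")               # placeholder video file
backsub = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = backsub.apply(frame)                     # foreground (moving) pixels
    mask = cv2.medianBlur(mask, 5)                  # light noise attenuation
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    vehicles = [c for c in contours if cv2.contourArea(c) > 500]   # keep vehicle-sized blobs
    print("vehicles detected in this frame:", len(vehicles))

cap.release()
```
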

Robotics

Frontmatter
GPS-Based Curve Estimation for an Adaptive Pure Pursuit Algorithm

Different algorithms for path tracking have been described and implemented for autonomous vehicles. Traditional geometric algorithms like Pure Pursuit use position information to compute the vehicle's steering angle to follow a predefined path. The main issue with these algorithms is that they cut corners, since no curvature information is taken into account. In order to overcome this problem, we present a sub-system for path tracking in which an algorithm that analyzes GPS information off-line classifies high-curvature segments and estimates the ideal speed for each one. Additionally, since the evaluation of our sub-system is performed through a simulation of an adaptive Pure Pursuit algorithm, we propose a method to dynamically estimate its look-ahead distance based on the vehicle speed and lateral error. As shown by the experimental results, our sub-system introduces improvements in comfort and safety thanks to the extracted geometry information and speed control, stabilizing the vehicle and minimizing the lateral error.

Citlalli Gámez Serna, Alexandre Lombard, Yassine Ruichek, Abdeljalil Abbas-Turki
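
A minimal sketch of a Pure Pursuit steering law with a look-ahead distance that grows with speed and shrinks with lateral error; the specific adaptation law and constants below are assumptions, not the ones tuned in the paper.

```python
# Minimal sketch: Pure Pursuit steering with an assumed look-ahead adaptation
# (grows with speed, shrinks with lateral error); constants are illustrative.
import math

WHEELBASE = 2.7        # m, assumed vehicle parameter

def lookahead_distance(speed, lateral_error, k_v=0.8, k_e=0.5, ld_min=2.0, ld_max=15.0):
    ld = k_v * speed - k_e * abs(lateral_error)
    return max(ld_min, min(ld_max, ld))

def pure_pursuit_steering(pose, goal, ld):
    x, y, heading = pose
    alpha = math.atan2(goal[1] - y, goal[0] - x) - heading   # angle to the look-ahead point
    return math.atan2(2.0 * WHEELBASE * math.sin(alpha), ld)

ld = lookahead_distance(speed=10.0, lateral_error=0.4)
goal = (ld * math.cos(0.15), ld * math.sin(0.15))   # path point at the look-ahead distance
delta = pure_pursuit_steering((0.0, 0.0, 0.0), goal, ld)
print(f"look-ahead: {ld:.2f} m, steering angle: {math.degrees(delta):.2f} deg")
```
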
Collective Motion of a Swarm of Simulated Quadrotors Using Repulsion, Attraction and Orientation Rules

Coordination of large groups of robots can be a difficult task, but recent approaches, like swarm robotics, offer an alternative based on the behaviors observed in social animals. In this paper, the collective behaviors of the model proposed by Iain Couzin et al. are implemented in a swarm of 10 simulated quadcopters. The simulations are carried out with a dynamic model of the quadcopter and a PID controller for stable flight. Collective behaviors are obtained with local rules of repulsion, orientation and attraction, based on the relative positions of neighbors.

Mario Aguilera-Ruiz, Luis Torres-Treviño, Angel Rodríguez-Liñán
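
A minimal sketch of repulsion, orientation and attraction rules in the spirit of the Couzin model, for point agents in the plane; zone radii are assumed values, and the quadcopter dynamics and PID control from the paper are not modeled.

```python
# Minimal sketch of the three-zone rules for point agents in the plane; zone
# radii are assumed, and the quadcopter dynamics / PID control are not modeled.
import numpy as np

R_REP, R_ORI, R_ATT = 1.0, 5.0, 10.0            # repulsion / orientation / attraction radii

def desired_direction(i, positions, velocities):
    me = positions[i]
    repulsion, orientation, attraction = np.zeros(2), np.zeros(2), np.zeros(2)
    for j, (p, v) in enumerate(zip(positions, velocities)):
        if j == i:
            continue
        offset = p - me
        dist = np.linalg.norm(offset)
        if dist < R_REP:
            repulsion -= offset / dist           # steer away from very close neighbors
        elif dist < R_ORI:
            orientation += v / np.linalg.norm(v) # align with neighbors' headings
        elif dist < R_ATT:
            attraction += offset / dist          # steer toward distant neighbors
    d = repulsion if np.any(repulsion) else orientation + attraction  # repulsion has priority
    n = np.linalg.norm(d)
    return d / n if n > 0 else velocities[i] / np.linalg.norm(velocities[i])

positions = np.array([[0.0, 0.0], [0.5, 0.0], [6.0, 1.0]])
velocities = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print("agent 0 desired heading:", desired_direction(0, positions, velocities))
```
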
Design and Simulation of a New Lower Exoskeleton for Rehabilitation of Patients with Paraplegia

The paper proposes a new architecture for a lower exoskeleton with five degrees of freedom (DOF) per leg, where the design and synthesis of the kinematic chains is based on human leg parameters in terms of ratios, range of motion, and physical length. This research presents the design and simulation of a lower limb exoskeleton for the rehabilitation of patients with paraplegia. This work presents closed-form equations for the forward and inverse kinematics obtained through geometric and Denavit-Hartenberg (D-H) approaches. The dynamic model is also derived by applying the principle of Lagrangian dynamics. The paper contains several simulations and numerical examples to prove the analytical results.

Fermín C. Aragón, C. Hernández-Santos, José-Isidro Hernández Vega, Daniel Andrés Córdova, Dolores Gabriela Palomares Gorham, Jonam Leonel Sánchez Cuevas
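
A minimal sketch of forward kinematics from Denavit-Hartenberg parameters, the approach named in the abstract; the two-link planar table below is a made-up example, not the exoskeleton's actual parameters.

```python
# Minimal sketch of forward kinematics from Denavit-Hartenberg parameters; the
# two-link planar table is made up, not the exoskeleton's actual parameters.
import numpy as np

def dh_transform(theta, d, a, alpha):
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows):
    T = np.eye(4)
    for theta, d, a, alpha in dh_rows:
        T = T @ dh_transform(theta, d, a, alpha)   # chain the joint transforms
    return T

q_hip, q_knee = np.deg2rad(30.0), np.deg2rad(-45.0)              # hypothetical joint angles
dh_table = [(q_hip, 0.0, 0.40, 0.0), (q_knee, 0.0, 0.38, 0.0)]   # link lengths in meters
T = forward_kinematics(dh_table)
print("end-effector position (x, y, z):", T[:3, 3])
```
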
Sensorial System for Obtaining the Angles of the Human Movement in the Coronal and Sagittal Anatomical Planes

This paper presents the construction of a system composed of inertial sensors and its linkage with a biped robot; our main aim is the acquisition, quantification and analysis of human body posture (at rest or in motion). To achieve this, we have three objectives: 1. determination and acquisition of the kinematics and dynamics of the body joints; 2. imitation of body movements (biped robot prototype); 3. quantification of anomalies present in the human gait. For this purpose, we present the development of a sensorial system capable of obtaining the angles of the human body.

David Alvarado, Leonel Corona, Saúl Muñoz, José Aquino
Backmatter
Metadata
Title
Advances in Computational Intelligence
Editors
Grigori Sidorov
Oscar Herrera-Alcántara
Copyright Year
2017
Electronic ISBN
978-3-319-62434-1
Print ISBN
978-3-319-62433-4
DOI
https://doi.org/10.1007/978-3-319-62434-1
