About this book

The two-volume set LNAI 11288 and 11289 constitutes the proceedings of the 17th Mexican International Conference on Artificial Intelligence, MICAI 2018, held in Guadalajara, Mexico, in October 2018.

The total of 62 papers presented in these two volumes was carefully reviewed and selected from 149 submissions.

The contributions are organized in topical sections as follows:

Part I: evolutionary and nature-inspired intelligence; machine learning; fuzzy logic and uncertainty management.

Part II: knowledge representation, reasoning, and optimization; natural language processing; and robotics and computer vision.



Evolutionary and Nature-Inspired Intelligence


A Genetic Algorithm to Solve Power System Expansion Planning with Renewable Energy

In this paper, a deterministic dynamic mixed-integer programming model for solving the generation and transmission expansion-planning problem is addressed. The proposed model integrates conventional generation with renewable energy sources and is based on centrally planned transmission expansion. Due to growing demand over time, it is necessary to generate expansion plans that can meet the future requirements of energy systems. Nowadays, in most systems a public entity develops both the short- and long-term electricity-grid expansion planning, and mainly deterministic methods are employed. In this study, a heuristic optimization approach based on genetic algorithms is presented. Numerical results show the performance of the proposed algorithm.

Lourdes Martínez-Villaseñor, Hiram Ponce, José Antonio Marmolejo, Juan Manuel Ramírez, Agustina Hernández
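The abstract above describes a GA-based heuristic without giving its operators. As an illustrative sketch only (not the paper's algorithm), a minimal binary genetic algorithm with tournament selection, one-point crossover, bit-flip mutation and elitist best tracking can look as follows; the one-max fitness is a toy stand-in for evaluating an expansion plan, and all parameter values are assumptions:

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=20, generations=50,
                      crossover_rate=0.9, mutation_rate=0.05, seed=0):
    """Minimal binary GA: tournament selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            # Tournament selection of two parents.
            p1 = max(rng.sample(pop, 2), key=fitness)
            p2 = max(rng.sample(pop, 2), key=fitness)
            c1, c2 = p1[:], p2[:]
            if rng.random() < crossover_rate:        # one-point crossover
                cut = rng.randrange(1, n_bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (c1, c2):                   # bit-flip mutation
                for i in range(n_bits):
                    if rng.random() < mutation_rate:
                        child[i] ^= 1
                nxt.append(child)
        pop = nxt[:pop_size]
        best = max(pop + [best], key=fitness)        # elitist best tracking
    return best

# Toy stand-in for an expansion plan: maximize the number of built lines (one-max).
best = genetic_algorithm(sum, n_bits=16)
```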

Memetic Algorithm for Constructing Covering Arrays of Variable Strength Based on Global-Best Harmony Search and Simulated Annealing

Covering Arrays (CAs) are mathematical objects widely used in the design of experiments in several areas of knowledge and, most recently, in hardware and software testing. CA construction is a complex task that entails a long run time and a high computational load. To date, research has been carried out on constructing optimal CAs using exact methods, algebraic methods, greedy methods, and metaheuristic-based methods. The latter, including Simulated Annealing and Tabu Search, have reported the best results in the literature. Their effectiveness is largely due to the use of local optimization techniques with different neighborhood schemes. Given the excellent results of the Global-best Harmony Search (GHS) algorithm in various optimization problems, and given that it has not been explored for CA construction, this paper presents a memetic algorithm (GHSSA) that uses GHS for global search, SA for local search, and two neighborhood schemes for the construction of uniform and mixed CAs of different strengths. GHSSA achieved competitive results in comparison with the state of the art and did not require the use of supercomputers in experimentation.

Jimena Timaná, Carlos Cobos, Jose Torres-Jimenez
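The local-search half of such a memetic scheme is Simulated Annealing, whose core is Metropolis acceptance under a cooling schedule. A generic sketch follows (this is not GHSSA itself; the neighborhood, geometric cooling and all parameter values are placeholder assumptions):

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, alpha=0.95, steps=500, seed=1):
    """Metropolis acceptance with geometric cooling: accept improvements always,
    worse moves with probability exp((f(x) - f(y)) / t)."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        if fy <= fx or rng.random() < math.exp((fx - fy) / max(t, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha              # geometric cooling
    return best, fbest

# Toy problem: minimize |x - 7| over the integers with a +/-1 neighborhood.
sol, val = simulated_annealing(lambda x: abs(x - 7),
                               lambda x, r: x + r.choice((-1, 1)), x0=0)
```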

An Adaptive Hybrid Evolutionary Approach for a Project Scheduling Problem that Maximizes the Effectiveness of Human Resources

In this paper, an adaptive hybrid evolutionary algorithm is proposed to solve a project scheduling problem. This problem considers an optimization objective valuable to project managers: maximizing the effectiveness of the sets of human resources assigned to the project activities. The adaptive hybrid evolutionary algorithm utilizes adaptive processes to develop the different stages of the evolutionary cycle (i.e., adaptive parent selection, survival selection, crossover, mutation and simulated annealing processes). These processes adapt their behavior according to the diversity of the algorithm’s population, and their utilization is meant to enhance the evolutionary search. The performance of the adaptive hybrid evolutionary algorithm is evaluated on six instance sets with different complexity levels and then compared with that of the algorithms previously reported in the literature for the addressed problem. The obtained results indicate that the adaptive hybrid evolutionary algorithm significantly outperforms the previously reported algorithms.

Virginia Yannibelli

Universal Swarm Optimizer for Multi-objective Functions

This paper presents the Universal Swarm Optimizer for Multi-Objective Functions (USO), which is inspired by the zone-based model proposed by Couzin, a model that represents more realistically the behavior of biological species such as fish schools and bird flocks. The algorithm is validated using 10 multi-objective benchmark problems, and a comparison with Multi-Objective Particle Swarm Optimization (MOPSO) is presented. The obtained results suggest that the proposed algorithm is very competitive and presents interesting characteristics that could be used to solve a wide range of optimization problems.

Luis A. Márquez-Vega, Luis M. Torres-Treviño
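Multi-objective swarm optimizers such as MOPSO and the proposed USO maintain an archive of non-dominated solutions. The underlying Pareto-dominance test (for minimization) can be sketched as a generic utility; this is standard textbook logic, not code from the paper:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Keep the points that no other point dominates (the external archive
    of a MOPSO-style algorithm)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (3, 3) and (4, 4) are dominated by (2, 2); the rest form the Pareto front.
front = non_dominated([(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)])
```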

Broadcasting and Sharing of Parameters in an IoT Network by Means of a Fractal of Hilbert Using Swarm Intelligence

Nowadays, thousands and thousands of small devices, such as Microcontroller Units (MCUs), live around us. These MCUs not only interact with us, turning on lights or identifying movement in a house, but also perform small and specific tasks, such as sensing parameters like temperature, humidity, $$CO_2$$ , or the adjustment of environmental lights. In addition, there is a huge variety of such devices, from smartphones to small general-purpose boards such as the ESP8266 or the Raspberry Pi 3, and other kinds of Internet of Things (IoT) devices. They are connected to the Internet through a central node, and then they can share their information. The main goal of this article is to connect all the nodes in a fractal way without using a central one, just sharing some parameters with two adjacent nodes, so that any member of these nodes knows the parameters of the rest of the devices even if they are not adjacent. With a Hilbert fractal network we can access the entire network in real time in a dynamic way, since we can adapt and reconfigure the topology of the network when a new node is added, using Artificial Intelligence tools for its application in a Smart City.

Jaime Moreno, Oswaldo Morales, Ricardo Tejeida, Juan Posadas
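A Hilbert-curve topology relies on mapping a position along the curve to grid coordinates. The standard distance-to-coordinates conversion for a curve of a given order is sketched below; this is the textbook algorithm, not the authors' implementation:

```python
def hilbert_d2xy(order, d):
    """Map distance d along a Hilbert curve of the given order to (x, y)
    coordinates on a 2**order x 2**order grid (standard iterative algorithm)."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Order-1 curve visits the four cells in the classic "cup" order.
path = [hilbert_d2xy(1, d) for d in range(4)]
```

Consecutive curve positions are always grid neighbors, which is what lets adjacent nodes share parameters along the curve.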

Solid Waste Collection in Ciudad Universitaria-UNAM Using a VRP Approach and Max-Min Ant System Algorithm

The collection of solid waste is a very important problem for most modern cities of the world. Its solution requires applying optimization techniques capable of designing the best routes that guarantee collecting all the waste while minimizing the cost. Several computational techniques could be applied to solve this problem, and one of the most suitable is swarm optimization, such as ant colony optimization. In this paper, we propose a methodology for searching for a set of solid-waste collection paths that optimize the distance of a tour in Ciudad Universitaria (UNAM). This methodology uses a vehicle routing problem (VRP) approach combined with the Max-Min Ant System algorithm. To assess the accuracy of the proposal, we selected the scholar circuit in the area of Ciudad Universitaria. The results show a shorter travelled distance and a better distribution than the empirical route currently used by the cleaning service.

Katya Rodriguez-Vazquez, Beatriz Aurora Garro, Elizabeth Mancera
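The distinguishing features of Max-Min Ant System are that only the best tour deposits pheromone and that every trail is clamped to an interval [tau_min, tau_max]. A minimal sketch of one pheromone update follows (generic MMAS mechanics with assumed parameter values, not the paper's implementation):

```python
def mmas_update(pheromone, best_tour, best_length, rho=0.1, tau_min=0.01, tau_max=1.0):
    """One Max-Min Ant System pheromone update: evaporate every trail,
    deposit 1/L only on the best tour's edges, then clamp each trail
    to the interval [tau_min, tau_max]."""
    for edge in pheromone:
        pheromone[edge] *= (1.0 - rho)                  # evaporation
    n = len(best_tour)
    for i in range(n):                                  # best-tour deposit
        edge = (best_tour[i], best_tour[(i + 1) % n])
        pheromone[edge] = pheromone.get(edge, 0.0) + 1.0 / best_length
    for edge in pheromone:                              # enforce MMAS trail limits
        pheromone[edge] = min(tau_max, max(tau_min, pheromone[edge]))
    return pheromone

# Three nodes, all directed edges start at the upper bound.
tau = {(a, b): 1.0 for a in range(3) for b in range(3) if a != b}
tau = mmas_update(tau, best_tour=[0, 1, 2], best_length=10.0)
```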

Selection of Characteristics and Classification of DNA Microarrays Using Bioinspired Algorithms and the Generalized Neuron

DNA microarrays are used for the massive quantification of gene expression. This analysis makes it possible to diagnose, identify and classify different diseases. It is a computationally challenging task due to the large number of genes and the relatively small number of samples. Some papers have applied the generalized neuron (GN) to function approximation, density estimation, prediction and classification problems [1, 2]. In this work we show how a GN can be used for the task of microarray classification. The proposed methodology is as follows: first, the dimensionality of the genes is reduced using a genetic algorithm; then, the generalized neuron is trained using one of three bio-inspired algorithms: Particle Swarm Optimization, a Genetic Algorithm and Differential Evolution. Finally, the precision of the methodology is tested by classifying three DNA microarray databases: $$Leukemia\ benchmark$$ $$ALL-AML$$ , $$Colon\ Tumor$$ and $$Prostate\ cancer$$ .

Flor Alejandra Romero-Montiel, Katya Rodríguez-Vázquez

Supervised and Unsupervised Neural Networks: Experimental Study for Anomaly Detection in Electrical Consumption

Households are responsible for more than 40% of the global electricity consumption [7]. The analysis of this consumption to find unexpected behaviours could have a great impact on saving electricity. This research presents an experimental study of supervised and unsupervised neural networks for anomaly detection in electrical consumption. Multilayer perceptrons and autoencoders are used for each approach, respectively. In order to select the most suitable neural model in each case, there is a comparison of various architectures. The proposed methods are evaluated using real-world data from an individual home electric power usage dataset. The performance is compared with a traditional statistical procedure. Experiments show that the supervised approach has a significant improvement in anomaly detection rate. We evaluate different possible feature sets. The results demonstrate that temporal data and measures of consumption patterns such as mean, standard deviation and percentiles are necessary to achieve higher accuracy.

Joel García, Erik Zamora, Humberto Sossa
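The kind of traditional statistical baseline such neural detectors are compared against can be sketched as a rolling z-score test over the consumption series (illustrative only; the paper's actual baseline procedure and thresholds are not specified here):

```python
import statistics

def zscore_anomalies(series, window=24, threshold=3.0):
    """Flag samples whose deviation from the trailing-window mean exceeds
    `threshold` standard deviations -- a simple statistical anomaly detector
    for a univariate consumption series."""
    flags = []
    for i, x in enumerate(series):
        hist = series[max(0, i - window):i]
        if len(hist) < 2:                 # not enough history to score yet
            flags.append(False)
            continue
        mu = statistics.mean(hist)
        sd = statistics.stdev(hist)
        flags.append(sd > 0 and abs(x - mu) / sd > threshold)
    return flags

# Hourly consumption with one spike: only the spike is flagged.
data = [1.0, 1.1, 0.9, 1.0, 1.1, 0.9, 1.0, 9.0, 1.0, 1.1]
flags = zscore_anomalies(data, window=6, threshold=3.0)
```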

Artificial Neural Networks and Common Spatial Patterns for the Recognition of Motor Information from EEG Signals

This paper proposes the use of two neural network models (Multilayer Perceptron and Dendrite Morphological Neural Network) for the recognition of voluntary movements from electroencephalographic (EEG) signals. The proposal consisted of three main stages: organization of the EEG signals, feature extraction and execution of the classification algorithms. The EEG signals were recorded from eighteen healthy subjects performing self-paced reaching movements. Three classification scenarios were evaluated for each participant: Relax versus Intention, Relax versus Execution and Intention versus Execution. The feature extraction stage was carried out by applying an algorithm known as Common Spatial Pattern, in addition to the statistical methods Root Mean Square, Variance, Standard Deviation and Mean. The results showed that the neural network models provided decoding accuracies above chance level, so that a movement can be detected prior to its execution. On the basis of these results, neural networks are a powerful and promising classification technique that can be used to enhance performance in the recognition of motor tasks for BCI systems based on electroencephalographic signals.

Carlos Daniel Virgilio Gonzalez, Juan Humberto Sossa Azuela, Javier M. Antelis

Classification of Motor Imagery EEG Signals with CSP Filtering Through Neural Networks Models

The paper reports the development and evaluation of brain-signal classifiers. The proposal consisted of three main stages: organization of the EEG signals, feature extraction and execution of the classification algorithms. The EEG signals used represent four motor actions within the Motor Imagery paradigm: Left Hand, Right Hand, Tongue and Foot movements. These EEG signals were obtained from a database provided by the Technological University of Graz; from this dataset, only the EEG signals of two healthy subjects were used to carry out the proposed work. The feature extraction stage was carried out by applying an algorithm known as Common Spatial Pattern, in addition to the statistical method called Root Mean Square. The classification algorithms used were: K-Nearest Neighbors, Support Vector Machine, Multilayer Perceptron and Dendrite Morphological Neural Networks. These algorithms were evaluated in two studies. The first aimed to evaluate the performance in the recognition between two classes of Motor Imagery tasks: Left Hand vs. Right Hand, Left Hand vs. Tongue, Left Hand vs. Foot, Right Hand vs. Tongue, Right Hand vs. Foot and Tongue vs. Foot. The second study employed the same algorithms in the recognition among all four classes of Motor Imagery tasks, obtaining accuracies of $$93.9\% \pm 3.9\%$$ for Subject 1 and $$68.7\% \pm 7\%$$ for Subject 2.

Carlos Daniel Virgilio Gonzalez, Juan Humberto Sossa Azuela, Elsa Rubio Espino, Victor H. Ponce Ponce

Efficiency Analysis of Particle Tracking with Synthetic PIV Using SOM

To identify the velocity field of a fluid, the postprocessing stage in the analysis of fluids using PIV images associates tracers in two consecutive images. Statistical methods have been used to perform this task, and investigations have reported artificial neural network models as well. The Self-Organized Map (SOM) model stands out for its simplicity and effectiveness, in addition to presenting areas of opportunity for exploration. The SOM model is efficient in the correlation of tracers detected in consecutive PIV images; however, the necessary operations are computationally expensive. This paper discusses the implementation of these operations on a GPU to reduce the time complexity. Furthermore, the function calculating the learning factor of the original network model is too simple, and it is advisable to use one that can better adapt to the characteristics of the fluid’s motion. Thus, a proposed 3PL learning factor function improves on the original model thanks to the greater flexibility afforded by its three parameters. The results show that the 3PL modification surpasses the efficiency of the original model and one of its variants, in addition to decreasing the computational cost.

Rubén Hernández-Pérez, Ruslan Gabbasov, Joel Suárez-Cansino, Virgilio López-Morales, Anilú Franco-Árcega
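A three-parameter logistic (3PL) curve is characterized by a slope, a midpoint and a lower asymptote. The paper's exact parametrization of the SOM learning factor is not reproduced here, so the decaying schedule below is a hypothetical construction for illustration only (the names `logistic_3pl` and `learning_factor` and all parameter values are assumptions):

```python
import math

def logistic_3pl(t, a, b, c):
    """Three-parameter logistic curve: slope a, midpoint b, lower asymptote c."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (t - b)))

def learning_factor(epoch, n_epochs, a=10.0, c=0.05):
    """Hypothetical SOM learning factor built from the 3PL curve: it starts
    near 1 and decays smoothly toward the floor c as training advances."""
    return logistic_3pl(1.0 - epoch / n_epochs, a, 0.5, c)

# Sampled at the start, middle and end of a 100-epoch run.
rates = [learning_factor(e, 100) for e in (0, 50, 100)]
```

The three parameters let one tune how fast, where, and how low the learning rate decays, which is the flexibility the abstract attributes to the 3PL choice.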

Machine Learning


Transforming Mixed Data Bases for Machine Learning: A Case Study

Structured databases which include both numerical and categorical attributes (Mixed Databases, or MDs) ought to be adequately pre-processed so that machine learning algorithms may be applied to their analysis and further processing. Of primordial importance is that the instances of all the categorical attributes be encoded so that the patterns embedded in the MD are preserved. We discuss CESAMO, an algorithm that achieves this by statistically sampling the space of possible codes. CESAMO’s implementation requires determining the moment when the codes distribute normally. It also requires approximating an encoded attribute as a function of the other attributes so that the best code assignment may be identified. The MD’s categorical attributes are thus mapped into purely numerical ones. The resulting numerical database (ND) is then accessible to supervised and non-supervised learning algorithms. We discuss CESAMO, normality assessment and functional approximation. A case study of the US census database is described. The data is made strictly numerical using CESAMO. Neural networks and Self-Organized Maps are then applied. Our results are compared to a classical analysis. We show that CESAMO’s application yields better results.

Angel Kuri-Morales

Full Model Selection in Huge Datasets and for Proxy Models Construction

Full Model Selection is a technique for improving the accuracy of machine learning algorithms through the search, on each dataset, for the most adequate combination of feature selection, data preparation, a machine learning algorithm and its hyper-parameter tuning. With the increasingly large quantities of information generated in the world, the emergence of the paradigm known as Big Data has made possible the analysis of gigantic datasets in order to obtain useful information for science and business. Though Full Model Selection is a powerful tool, it has been poorly explored in the Big Data context, due to the vast search space and the elevated number of fitness evaluations of candidate models. In order to overcome this obstacle, we propose the use of proxy models to reduce the number of expensive fitness-function evaluations, as well as the use of the Full Model Selection paradigm in the construction of such proxy models.

Angel Díaz-Pacheco, Carlos Alberto Reyes-García

Single Imputation Methods Applied to a Global Geothermal Database

In the exploitation stage of a geothermal reservoir, the estimation of the bottomhole temperature (BHT) is essential to know the available energy potential, as well as the viability of its exploitation. The BHT can be measured directly, which is very expensive; therefore, statistical models used as virtual geothermometers are preferred. Geothermometers have been widely used to infer the temperature of deep geothermal reservoirs from the analysis of fluid samples collected at the soil surface from springs and exploration wells. Our procedure is based on an extensive geochemical database (n = 708) with measurements of BHT and of eight main element compositions of the geothermal fluid. Unfortunately, the geochemical database has missing data for some of the measured element compositions. Therefore, to take advantage of all this information in the BHT estimate, a process of imputation, or completion of the missing values, is necessary. In the present work, we compare imputations using the mean and median statistics, as well as stochastic regression and support vector machines, to complete our dataset of geochemical components. The results showed that regression and SVM are superior to the mean and median, especially because these methods obtained the smallest RMSE and MAE errors.

Román-Flores Mariana Alelhí, Santamaría-Bonfil Guillermo, Díaz-González Lorena, Arroyo-Figueroa Gustavo
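The simplest of the compared single-imputation strategies, mean and median substitution, can be sketched as follows (the column values and the None-for-missing convention are illustrative assumptions, not data from the paper):

```python
import statistics

def impute_column(values, strategy="mean"):
    """Single imputation: replace missing entries (None) with the mean or
    median computed from the observed values of the same column."""
    observed = [v for v in values if v is not None]
    if strategy == "mean":
        fill = statistics.mean(observed)
    else:
        fill = statistics.median(observed)
    return [fill if v is None else v for v in values]

# Hypothetical BHT-like column with two missing measurements.
col = [120.0, None, 150.0, 130.0, None]
mean_filled = impute_column(col, "mean")
median_filled = impute_column(col, "median")
```

Regression-based imputation instead predicts each missing entry from the other columns, which is why it can outperform these constant fills.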

Feature Selection for Automatic Classification of Gamma-Ray and Background Hadron Events with Different Noise Levels

In this paper we present a feature set for the automatic classification of Gamma-ray and background Hadron events. We selected the best combination of parameters collected by Cherenkov telescopes in order to make Gamma-ray recognition robust against different levels of signal noise, using multiple Machine Learning approaches for pattern recognition. We compared the robustness to noise of four classifiers, reaching an accuracy of up to $$90.14\%$$ in high-noise cases.

Andrea Burgos-Madrigal, Ariel Esaú Ortiz-Esquivel, Raquel Díaz-Hernández, Leopoldo Altamirano-Robles

Ranking Based Unsupervised Feature Selection Methods: An Empirical Comparative Study in High Dimensional Datasets

Unsupervised Feature Selection methods have raised considerable interest in the scientific community due to their capability of identifying and selecting relevant features in unlabeled data. In this paper, we evaluate and compare seven of the most widely used and outstanding ranking based unsupervised feature selection methods of the state-of-the-art, which belong to the filter approach. Our study was made on 25 high dimensional real-world datasets taken from the ASU Feature Selection Repository. From our experiments, we conclude which methods perform significantly better in terms of quality of selection and runtime.

Saúl Solorio-Fernández, J. Ariel Carrasco-Ochoa, José Fco. Martínez-Trinidad

Dynamic Selection Feature Extractor for Trademark Retrieval

The paper contributes to CBIR systems applied to trademark retrieval. The proposed method seeks to find dynamically the best feature extractor to represent the queried trademark. Four feature extractors are applied in the experiments: Concavity/Convexity deficiencies (CC), Freeman Chain (FC), Scale Invariant Feature Transform (SIFT) and Hu Invariant Moments (Hu). These extractors represent a set of classes of feature extractors, which are submitted to a classification process using two different classifiers: ANN (Artificial Neural Networks) and SVM (Support Vector Machines). Selecting the best feature extractor is important for processing the next levels in the search for similar trademarks (i.e. applying zoning mechanisms or combining the best feature extractors), because it makes it possible to restrict the number of operations in large databases. We carried out experiments using the UK Patent Office database, with 10,151 images. Our results are on the same basis as the literature, and the average in the best case for the normalized recall (Rn) is equal to 0.91. Experiments show that dynamic selection of extractors can contribute to improving trademark retrieval.

Simone B. K. Aires, Cinthia O. A. Freitas, Mauren L. Sguario

Bayesian Chain Classifier with Feature Selection for Multi-label Classification

The multi-label classification task has many applications in Text Categorization, Multimedia, Biology, Chemical data analysis and Social Network Mining, among others. Different approaches have been developed: Binary Relevance (BR), Label Power Set (LPS) and Random k label sets (RAkEL); some of them consider the interaction between labels in a chain (Chain Classifiers), and other alternatives are derived from this method, for instance the Probabilistic Chain Classifier, the Monte Carlo Chain Classifier and the Bayesian Chain Classifier (BCC). What all previous approaches have in common, and focus on, is considering different orders or combinations in which the labels are predicted. Feature selection has proved to be important in classification tasks, reducing the dimensionality of the problem and even improving the classification model’s accuracy. In this work, a feature selection technique is tested in the BCC algorithm with two search methods, one using Best First (BF-FS-BCC) and another using GreedyStepwise (GS-FS-BCC). These methods are compared, and the winner is also compared with BCC; both comparisons use the Wilcoxon Signed Rank test. In addition, it is compared with other Chain Classifiers and, finally, with other approaches (BR, RAkEL, LPS).

Ricardo Benítez Jiménez, Eduardo F. Morales, Hugo Jair Escalante

A Time Complexity Analysis to the ParDTLT Parallel Algorithm for Decision Tree Induction

In addition to the usual tests for analyzing the performance of a decision tree in a classification process, the analysis of the amount of time and the space resource required are also useful during the supervised decision tree induction. The parallel algorithm called “Parallel Decision Tree for Large Datasets” (or ParDTLT for short) has proved to perform very well when large datasets become part of the training and classification process. The training phase processes in parallel the expansion of a node, considering only a subset of the whole set of training objects. The time complexity analysis proves a linear dependency on the cardinality of the complete set of training objects, and that the dependence is asymptotic and log–linear on the cardinality of the selected subset of training objects when categoric and numeric data are applied, respectively.

Joel Suárez-Cansino, Anilú Franco-Árcega, Linda Gladiola Flores-Flores, Virgilio López-Morales, Ruslan Gabbasov

Infrequent Item-to-Item Recommendation via Invariant Random Fields

Web recommendation services bear great importance in e-commerce and social media, as they aid the user in navigating through the items that are most relevant to her needs. In a typical web site, long history of previous activities or purchases by the user is rarely available. Hence in most cases, recommenders propose items that are similar to the most recent ones viewed in the current user session. The corresponding task is called session based item-to-item recommendation. Generating item-to-item recommendations by “people who viewed this, also viewed” lists works fine for popular items. These recommender systems rely on item-to-item similarities and item-to-item transitions for building next-item recommendations. However, the performance of these methods deteriorates for rare (i.e., infrequent) items with short transaction history. Another difficulty is the cold-start problem, items that recently appeared and had no time yet to accumulate a sufficient number of transactions. In this paper, we describe a probabilistic similarity model based on Random Fields to approximate item-to-item transition probabilities. We give a generative model for the item interactions based on arbitrary distance measures over the items including explicit, implicit ratings and external metadata. We reach significant gains in particular for recommending items that follow rare items. Our experiments on various publicly available data sets show that our new model outperforms both simple similarity baseline methods and recent item-to-item recommenders, under several different performance metrics.

Bálint Daróczy, Frederick Ayala-Gómez, András Benczúr

An Approach Based on Contrast Patterns for Bot Detection on Web Log Files

Nowadays, companies invest resources in detecting non-human accesses in their web traffic. Usually, non-human accesses are few compared with human accesses, which constitutes a class imbalance problem; as a consequence, classifiers bias their classification results toward the human accesses, overlooking, in this way, the non-human accesses. In some classification problems, such as non-human traffic detection, high accuracy is not the only desired quality: the model provided by the classifier should be understandable by experts. For that reason, in this paper we study the use of contrast pattern-based classifiers for building an understandable and accurate model for detecting non-human traffic in web log files. Our experiments over five databases show that the contrast pattern-based approach obtains significantly better AUC results than other state-of-the-art classifiers.

Octavio Loyola-González, Raúl Monroy, Miguel Angel Medina-Pérez, Bárbara Cervantes, José Ernesto Grimaldo-Tijerina

User Recommendation in Low Degree Networks with a Learning-Based Approach

User recommendation plays an important role in microblogging systems, since users connect to these networks to share and consume content. Finding relevant users to follow is thus a hot topic in the study of social networks. Microblogging networks are characterized by having a large number of users, but each of them connects with a limited number of other users, making the graph of followers have a low degree. One of the main problems of approaching user recommendation with a learning-based approach in low-degree networks is extreme class imbalance. In this article, we propose a balancing scheme to face this problem, and we evaluate different classification algorithms using classical link-prediction metrics as features. We found that the learning-based approach outperformed individual metrics for the problem of user recommendation in the evaluated dataset. We also found that the proposed balancing approach led to better results, enabling a better identification of existing connections between users.

Marcelo G. Armentano, Ariel Monteserin, Franco Berdun, Emilio Bongiorno, Luis María Coussirat

Volcanic Anomalies Detection Through Recursive Density Estimation

The volcanic conditions of Latin America and the Caribbean propitiate the occurrence of natural disasters in these areas. Volcanic-related disasters alter the living conditions of the populations compromised by volcanic activity. We propose to use the Recursive Density Estimation (RDE) method to detect volcanic anomalies. The data used for the design and evaluation of this method were obtained from the Puracé volcano in two volcanic surveillance areas: geochemistry and deformation. The proposed method learns quickly from data streams in real time, and the different volcanic anomalies can be detected taking into account all the previous data of the volcano. RDE achieves good performance in outlier detection: 82% precision for geochemistry data and 77% precision for geodesy data.

Jose Eduardo Gomez, David Camilo Corrales, Emmanuel Lasso, Jose Antonio Iglesias, Juan Carlos Corrales
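In its standard formulation (after Angelov), recursive density estimation keeps a running mean and a running mean of squared norms and scores every new sample with a Cauchy-like density, so no past samples need to be stored. A sketch under that standard formulation follows (the paper's exact variant and thresholds may differ):

```python
def rde_stream(points):
    """Recursive Density Estimation over a stream: for each sample x_k, update
    the running mean mu_k and mean squared norm X_k recursively, then emit
    D_k = 1 / (1 + ||x_k - mu_k||^2 + X_k - ||mu_k||^2).
    A low density flags a potential anomaly."""
    densities = []
    mu, X = None, 0.0
    for k, x in enumerate(points, start=1):
        if mu is None:                                  # first sample initializes
            mu = list(x)
            X = sum(v * v for v in x)
        else:                                           # recursive updates
            mu = [((k - 1) * m + v) / k for m, v in zip(mu, x)]
            X = ((k - 1) * X + sum(v * v for v in x)) / k
        dist2 = sum((v - m) ** 2 for v, m in zip(x, mu))
        norm_mu2 = sum(m * m for m in mu)
        densities.append(1.0 / (1.0 + dist2 + X - norm_mu2))
    return densities

# A flat stream with one spike: the spike's density drops sharply.
d = rde_stream([(1.0,), (1.0,), (1.0,), (10.0,), (1.0,)])
```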

A Rainfall Prediction Tool for Sustainable Agriculture Using Random Forest

In recent years world’s governments have focused its efforts on the development of the Sustainable Agriculture were all resources, especially water resources, are used in a more environmentally friendly manner. In this paper, we present an approach for estimating daily accumulated rainfall using multi-spatial scale multi-source data based on Machine Learning algorithms for three HABs in the Andean Region of Colombia where the agricultural activities are one of the main production activities. The proposed approach uses data from different rain-related variables such as vegetation index, elevation data, rain rate and temperature with the aim of the development of a rain forecast, able to respond to local or large-scale rain events. The results show that the trained model can detect local rain events event when no meteorological station data was used.

Cristian Valencia-Payan, Juan Carlos Corrales

Kolb's Learning Styles, Learning Activities and Academic Performance in a Massive Private Online Course

Massive Open Online Courses (MOOCs) have been considered an “educational revolution”. Although these courses were designed to reach a massive number of participants, Higher Education institutions have started to use MOOC technologies and methodologies as support for traditional educational practices, in what have been called Small Private Online Courses (SPOCs) and Massive Private Online Courses (MPOCs), according to the proportion of students enrolled and the teachers who support them. A slightly explored area of the scientific literature is the possible correlation between performance and learning styles in academic-credit courses designed to be offered in massive environments. This article presents the results obtained in the MPOC “Daily Astronomy” at the University of Cauca, in terms of the possible associations between learning styles according to Kolb, the results in the evaluations, and the activity demonstrated in the services of the platform that hosted the course.

Mario Solarte, Raúl Ramírez-Velarde, Carlos Alario-Hoyos, Gustavo Ramírez-González, Hugo Ordóñez-Eraso

Tremor Signal Analysis for Parkinson’s Disease Detection Using Leap Motion Device

Tremor is an involuntary rhythmic movement observed in people with Parkinson’s disease (PD); specifically, hand tremor is a measurement used for diagnosing this disease. In this paper, we use hand positions acquired by a Leap Motion device for statistical analysis of hand tremor based on the sum and difference of histograms (SDH). Tremor is measured using only one coordinate of the palm center during predefined exercises performed by volunteers at a hospital. In addition, the statistical features obtained with SDH are used to classify a tremor signal as PD or not. Experimental results show that the classification is independent of the hand used during the tests, achieving $$98\%$$ accuracy for our proposed approach using different supervised machine learning classifiers. Additionally, we compare our result with other classifiers proposed in the literature.

Guillermina Vivar-Estudillo, Mario-Alberto Ibarra-Manzano, Dora-Luz Almanza-Ojeda
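The SDH idea pairs each sample with a lagged copy of itself and studies the distributions of their sums and differences. The paper derives its features from the full sum and difference histograms; the sketch below is a simplified illustration that only summarizes the sum and difference series of a 1-D signal with their mean and variance (function name, lag and the toy signals are assumptions):

```python
import statistics

def sdh_features(signal, lag=1):
    """Simplified sum-and-difference features for a 1-D tremor signal: for a
    given lag, form s_i = x_i + x_{i+lag} and d_i = x_i - x_{i+lag}, then
    summarize each series with its mean and (population) variance."""
    sums = [a + b for a, b in zip(signal, signal[lag:])]
    diffs = [a - b for a, b in zip(signal, signal[lag:])]
    return {
        "sum_mean": statistics.mean(sums),
        "sum_var": statistics.pvariance(sums),
        "diff_mean": statistics.mean(diffs),
        "diff_var": statistics.pvariance(diffs),
    }

# A steady hand yields near-zero difference variance; a tremor inflates it.
steady = sdh_features([0.0, 0.0, 0.0, 0.0, 0.0])
shaky = sdh_features([0.0, 1.0, -1.0, 1.0, -1.0])
```

Features of this kind are what a supervised classifier would then consume to separate PD from non-PD signals.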

Fuzzy Logic and Uncertainty Management


Modeling Decisions for Project Scheduling Optimization Problem Based on Type-2 Fuzzy Numbers

This paper examines the implementation of type-2 fuzzy numbers in a resource-constrained scheduling problem (RCSP) for an agriculture production system based on expert parameter estimations. Some critical parameters in a production system are usually treated as uncertain variables due to the environmental changes that influence the agricultural process. The implementation of type-2 fuzzy sets (T2FSs) can handle uncertain data when estimating variables for solving decision-making problems. The paper focuses on the estimation procedure for uncertain variables in scheduling that reflect the level of preference, or the attitude of the decision-maker, towards imprecise concepts and relations between variables. Special profiles for activity performance make it possible to consider uncertainty in time variables, expert estimations, flexibilities in scheduling, the resource-levelling problem and the combinatorial nature of the solution methodology. An example of activities for an agriculture production system is introduced. A heuristic decision algorithm based on an enumeration tree and partial schedules is developed; it can handle both the resource-constrained optimization problem under uncertain variables and activity profile switching. As the initial activity profile, we consider the expert’s decision about the best activity execution profile at each level of the enumeration tree.

Margarita Knyazeva, Alexander Bozhenyuk, Janusz Kacprzyk

Differential Evolution Algorithm Using a Dynamic Crossover Parameter with High-Speed Interval Type 2 Fuzzy System

The main contribution of this paper is the use of a new type-reduction concept in type-2 fuzzy systems to improve the performance of the differential evolution algorithm. The proposed method is an analytical approach that approximates the Continuous Enhanced Karnik-Mendel (CEKM) method, thereby reducing the computational cost of evaluating the interval type-2 fuzzy system. The performance of the proposed approach was evaluated on seven reference functions using the differential evolution algorithm with a crossover parameter that is dynamically adapted by the proposed methodology.
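
For context, a DE/rand/1/bin loop with a generation-varying crossover rate can be sketched as follows. In the paper the crossover parameter is produced by an interval type-2 fuzzy system; here a simple linear schedule stands in for that fuzzy adaptation, and all names and settings are illustrative.

```python
import numpy as np

def de_dynamic_cr(f, bounds, pop_size=20, gens=100, F=0.5, seed=0):
    """DE/rand/1/bin minimisation where the crossover rate CR changes each
    generation. A linear schedule stands in for the fuzzy-system output."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(ind) for ind in pop])
    for g in range(gens):
        cr = 0.9 - 0.5 * g / gens                 # stand-in for fuzzy CR
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            mask = rng.random(dim) < cr           # binomial crossover
            mask[rng.integers(dim)] = True        # force one mutant gene
            trial = np.where(mask, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                      # greedy selection
                pop[i], fit[i] = trial, ft
    return pop[fit.argmin()], fit.min()

sphere = lambda x: float(np.sum(x**2))
best, val = de_dynamic_cr(sphere, [(-5, 5)] * 3)
```

The fuzzy-adapted variant would simply replace the `cr = ...` line with the output of the type-2 fuzzy system evaluated on the current search state.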

Patricia Ochoa, Oscar Castillo, José Soria, Prometeo Cortes-Antonio

Allocation Centers Problem on Fuzzy Graphs with Largest Vitality Degree

The problem of the optimal allocation of service centers is considered in this paper. It is assumed that the information received from a GIS is presented in the form of second-kind fuzzy graphs. A method of optimal allocation, formulated as determining the fuzzy set of vitality of a fuzzy graph, is suggested. The method is based on the transition to the complementary fuzzy graph of the first kind. It allows solving not only the problem of finding the optimal location of service centers, but also that of finding the optimal location of k centers with the greatest vitality degree and of selecting the number of service centers. Based on this method, an algorithm for computing the vitality fuzzy set of second-kind fuzzy graphs is presented. An example of finding an optimal allocation of centers in a fuzzy graph is considered as well.

Alexander Bozhenyuk, Stanislav Belyakov, Margarita Knyazeva, Janusz Kacprzyk

Fuzzy Design of Nearest Prototype Classifier

In pattern classification problems, many works have aimed at designing good classifiers from different perspectives, achieving very good results in many domains. In general, however, these classifiers depend strongly on some crucial design parameters. An alternative is to use fuzzy relations to eliminate thresholds and make classifier development more flexible. In this paper, a new prototype-based method for solving data classification problems is proposed. Using fuzzy similarity relations for the granulation of the universe, similarity classes are generated and a prototype is built for each similarity class. In the new approach, the crisp similarity relation between two objects is replaced by a binary fuzzy relation, which quantifies the strength of the relationship on the scale [0, 1]. Experimental results show that the performance of our method is superior to that of other methods.
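
A minimal sketch of the prototype idea follows, assuming one simple choice of fuzzy similarity relation (the paper's actual relation, granulation, and prototype construction may differ); here each class contributes a single similarity-weighted prototype.

```python
import numpy as np

def fuzzy_similarity(a, b, ranges):
    """Binary fuzzy relation in [0, 1]: one minus the mean of per-attribute
    normalised absolute differences (one simple choice among many)."""
    return 1.0 - np.mean(np.abs(a - b) / ranges)

def fit_prototypes(X, y):
    """One prototype per class: the similarity-weighted mean of the class,
    each object weighted by its mean similarity to its classmates."""
    ranges = X.max(axis=0) - X.min(axis=0) + 1e-12
    protos = {}
    for c in np.unique(y):
        Xc = X[y == c]
        w = np.array([np.mean([fuzzy_similarity(a, b, ranges) for b in Xc])
                      for a in Xc])
        protos[c] = (w[:, None] * Xc).sum(axis=0) / w.sum()
    return protos, ranges

def predict(x, protos, ranges):
    """Assign x to the class whose prototype it is most similar to."""
    return max(protos, key=lambda c: fuzzy_similarity(x, protos[c], ranges))

# Toy two-class data: one cluster near (0, 0), one near (5, 5)
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [5.0, 5.0], [4.9, 5.1], [5.1, 4.8]])
y = np.array([0, 0, 0, 1, 1, 1])
protos, ranges = fit_prototypes(X, y)
```

Replacing the hard threshold of a crisp similarity class with graded weights is what removes the crucial cut-off parameter the abstract refers to.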

Yanela Rodríguez Alvarez, Rafael Bello Pérez, Yailé Caballero Mota, Yaima Filiberto Cabrera, Yumilka Fernández Hernández, Mabel Frias Dominguez

A Fuzzy Harmony Search Algorithm for the Optimization of a Benchmark Set of Functions

A fuzzy harmony search algorithm (FHS) is presented in this paper. The method uses a fuzzy system for the dynamic adaptation of the harmony memory accepting (HMR) parameter along the iterations, thereby controlling the intensification and diversification of the search. The method was previously applied to classic benchmark mathematical functions with different numbers of dimensions. In this work, we apply the proposed FHS to the benchmark problems of the CEC 2015 competition, which include unimodal, multimodal, hybrid, and composite functions, to check the efficiency of the proposed method. A comparison between the original harmony search algorithm and the fuzzy harmony search algorithm is presented to verify the results obtained.
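
For context, a plain harmony search loop with an iteration-varying HMR can be sketched as follows; in the paper HMR is produced by a fuzzy system, for which a deterministic schedule stands in here, and the remaining parameter values are illustrative.

```python
import numpy as np

def harmony_search(f, bounds, hms=10, iters=500, par=0.3, bw=0.05, seed=0):
    """Harmony search minimisation where the harmony-memory accepting rate
    (HMR) grows over the iterations: a high HMR intensifies the search
    around the memory, a low HMR diversifies via random consideration."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    hm = rng.uniform(lo, hi, (hms, dim))      # harmony memory
    fit = np.array([f(h) for h in hm])
    for t in range(iters):
        hmr = 0.7 + 0.25 * t / iters          # stand-in for the fuzzy HMR
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmr:            # take note from memory
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:        # pitch adjustment
                    new[j] += bw * (hi[j] - lo[j]) * rng.uniform(-1, 1)
            else:                              # random consideration
                new[j] = rng.uniform(lo[j], hi[j])
        new = np.clip(new, lo, hi)
        worst = int(fit.argmax())
        fn = f(new)
        if fn < fit[worst]:                   # replace the worst harmony
            hm[worst], fit[worst] = new, fn
    return hm[fit.argmin()], fit.min()

best, val = harmony_search(lambda x: float(np.sum(x**2)), [(-5, 5)] * 2)
```

The FHS variant would compute `hmr` each iteration from the fuzzy system instead of this fixed schedule.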

Cinthia Peraza, Fevrier Valdez, Oscar Castillo, Patricia Melin

An Innovative and Improved Mamdani Inference (IMI) Method

For a fuzzy system, the inputs can be crisp, fuzzy, or a combination of both. Generally the inputs are crisp, but sometimes they are fuzzy. For fuzzy inputs, the min-max method is used to measure the degree of matching. This paper studies the min-max method and exposes its weaknesses. We propose an alternative approach, called the innovative and improved Mamdani inference (IMI) method, and show that it deals with all the weaknesses of the min-max method.
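
As background, the standard sup-min (min-max) matching that the paper critiques can be sketched on a discretised universe: the matching degree of a fuzzy input against a rule antecedent is the height of their intersection. The universe and membership functions below are illustrative, not taken from the paper.

```python
import numpy as np

# Discretised universe of discourse
xs = np.linspace(0, 10, 1001)

def tri(a, b, c):
    """Triangular membership function with support [a, c] and peak at b,
    sampled on the universe xs."""
    return np.maximum(np.minimum((xs - a) / (b - a + 1e-12),
                                 (c - xs) / (c - b + 1e-12)), 0.0)

antecedent = tri(2, 5, 8)     # rule antecedent, e.g. "about 5"
fuzzy_input = tri(4, 6, 8)    # a fuzzy (not crisp) observed input

# Min-max (sup-min) matching degree: height of the intersection.
# For a crisp input x0 this reduces to the antecedent's membership at x0.
match = float(np.max(np.minimum(fuzzy_input, antecedent)))
```

For these two triangles the intersection peaks at x = 5.6 with height 0.8, so `match` is approximately 0.8; the paper's IMI method replaces this matching step to address the weaknesses it identifies.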

Hamid Jamalinia, Zahra Alizadeh, Samad Nejatian, Hamid Parvin, Vahideh Rezaie

A General Method for Consistency Improving in Decision-Making Under Uncertainty

In order to formulate and solve a particular Multiple Criteria Decision Making (MCDM) problem, a diverse group of experts frequently must share their knowledge and expertise, so uncertainty arises from several sources. In such cases, the Multiplicative Preference Relation (MPR) approach can be a useful technique. An MPR is composed of judgements between any two criteria, declared within a crisp range, that express the decision maker's (DM's) preferences. An MPR is consistent when each expert's information, and consequently her/his judgements, are free of contradictions. Since inconsistencies may lead to incoherent results, individual consistency should be sought in order to make rational choices. In this paper, based on the Hadamard dissimilarity operator, a methodology to derive intervals for MPRs satisfying a consistency index is introduced. Our method combines a numerical algorithm with a nonlinear optimization algorithm. Once an interval MPR has been synthesized, the DM can use these acceptably consistent intervals to express her/his preferences flexibly while meeting a priori decision targets, rules, and advice from her/his current framework. Thus, the proposed methodology provides a reliable and acceptably consistent interval MPR, which can be quantified in terms of the Row Geometric Mean Method (RGMM) or the Eigenvalue Method (EM). Finally, some examples are solved with the proposed method to illustrate our results and compare them with other methodologies.
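
The RGMM mentioned at the end can be sketched directly. The geometric consistency index below is one standard way to quantify the consistency of an MPR against its RGMM weights; it is illustrative and not necessarily the index used in the paper.

```python
import numpy as np

def rgmm_weights(A):
    """Row Geometric Mean Method: priority vector of a multiplicative
    preference relation A, where A[i, j] states how many times criterion i
    is preferred to criterion j and A[j, i] = 1 / A[i, j]."""
    g = np.prod(A, axis=1) ** (1.0 / A.shape[0])   # row geometric means
    return g / g.sum()

def gci(A):
    """Geometric consistency index (defined for n >= 3): mean squared
    log-error of A against the ratios implied by the RGMM weights.
    A fully consistent MPR has GCI = 0."""
    w = rgmm_weights(A)
    n = A.shape[0]
    e = np.log(A) - np.log(np.outer(w, 1.0 / w))   # log deviations
    return 2.0 * np.sum(np.triu(e, 1) ** 2) / ((n - 1) * (n - 2))

# A fully consistent 3x3 MPR built from the weights (0.6, 0.3, 0.1):
# every judgement is exactly the ratio of the underlying weights.
w = np.array([0.6, 0.3, 0.1])
A = np.outer(w, 1.0 / w)
```

Perturbing any single entry of `A` (and its reciprocal) raises the GCI above zero, which is the kind of inconsistency the paper's interval-derivation method is designed to keep within an acceptable bound.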

Virgilio López-Morales, Joel Suárez-Cansino, Ruslan Gabbasov, Anilu Franco Arcega

