
2018 | Book

Information Processing and Management of Uncertainty in Knowledge-Based Systems. Theory and Foundations

17th International Conference, IPMU 2018, Cádiz, Spain, June 11-15, 2018, Proceedings, Part I

Editors: Jesús Medina, Manuel Ojeda-Aciego, José Luis Verdegay, David A. Pelta, Inma P. Cabrera, Prof. Dr. Bernadette Bouchon-Meunier, Ronald R. Yager

Publisher: Springer International Publishing

Book Series: Communications in Computer and Information Science


About this book

This three volume set (CCIS 853-855) constitutes the proceedings of the 17th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, IPMU 2018, held in Cádiz, Spain, in June 2018.

The 193 revised full papers were carefully reviewed and selected from 383 submissions. The papers are organized in topical sections on advances on explainable artificial intelligence; aggregation operators, fuzzy metrics and applications; belief function theory and its applications; current techniques to model, process and describe time series; discrete models and computational intelligence; formal concept analysis and uncertainty; fuzzy implication functions; fuzzy logic and artificial intelligence problems; fuzzy mathematical analysis and applications; fuzzy methods in data mining and knowledge discovery; fuzzy transforms: theory and applications to data analysis and image processing; imprecise probabilities: foundations and applications; mathematical fuzzy logic; mathematical morphology; measures of comparison and entropies for fuzzy sets and their extensions; new trends in data aggregation; pre-aggregation functions and generalized forms of monotonicity; rough and fuzzy similarity modelling tools; soft computing for decision making in uncertainty; soft computing in information retrieval and sentiment analysis; tri-partitions and uncertainty; decision making modeling and applications; logical methods in mining knowledge from big data; metaheuristics and machine learning; optimization models for modern analytics; uncertainty in medicine; uncertainty in Video/Image Processing (UVIP).

Table of Contents

Frontmatter

Advances on Explainable Artificial Intelligence

Frontmatter
A Bibliometric Analysis of the Explainable Artificial Intelligence Research Field

This paper presents the results of a bibliometric study of recent research on eXplainable Artificial Intelligence (XAI) systems. We took a global look at the contributions of scholars in XAI as well as in the subfields of AI that are most involved in the development of XAI systems. It is worth remarking that about one third of the contributions in XAI come from the fuzzy logic community. Accordingly, we examined in depth the actual connections of fuzzy logic contributions with AI to promote and improve XAI systems in the broad sense. Finally, we outlined new research directions aimed at strengthening the integration of different fields of AI, including fuzzy logic, toward the common objective of making AI accessible to people.

Jose M. Alonso, Ciro Castiello, Corrado Mencar
Do Hierarchical Fuzzy Systems Really Improve Interpretability?

Fuzzy systems have demonstrated a strong modeling capability. The quality of a fuzzy model is usually measured in terms of its accuracy and interpretability. While the way to measure accuracy is in most cases clear, measuring interpretability is still an open question. The use of hierarchical structures in fuzzy modeling as a way to reduce complexity in systems with many input variables has also shown good results. This complexity reduction is usually considered a way to improve interpretability, but the real effect of the hierarchy on interpretability has not been thoroughly analyzed. The present paper analyzes that complexity reduction, comparing it with other techniques such as feature extraction, and concludes that only the use of intermediate variables with meaning (from the point of view of model interpretation) ensures a real interpretability improvement due to the hierarchical structure.

Luis Magdalena
Human Players Versus Computer Games Bots: A Turing Test Based on Linguistic Description of Complex Phenomena and Restricted Equivalence Functions

This paper proposes a new version of the well-known Turing Test for computer game bots, based on Linguistic Description of Complex Phenomena and Restricted Equivalence Functions, whose goal is to evaluate the “believability” of computer game bots acting in a virtual world. A data-driven software architecture based on Linguistic Modelling of Complex Phenomena is also proposed, which allows us to automatically generate bot behavior profiles. These profiles can be compared with human player behavior profiles in order to provide a similarity measure of believability between them. In order to show and explore the possibilities of this new Turing Test, a web platform has been designed and implemented by one of the authors.

Clemente Rubio-Manzano, Tomás Lermanda-Senoceaín, Christian Vidal-Castro, Alejandra Segura-Navarrete, Claudia Martínez-Araneda
Reinterpreting Interpretability for Fuzzy Linguistic Descriptions of Data

We approach the problem of interpretability for fuzzy linguistic descriptions of data from a natural language generation perspective. For this, first we review the current state of linguistic descriptions of data and their use contexts as a standalone tool and as part of a natural language generation system. Then, we discuss the standard approach to interpretability for linguistic descriptions and introduce our complementary proposal, which describes the elements from linguistic descriptions of data that can influence and improve the interpretability of automatically generated texts (such as fuzzy properties, quantifiers, and truth degrees), when linguistic descriptions are used to determine relevant content within a text generation system.

A. Ramos-Soto, M. Pereira-Fariña
Personality Determination of an Individual Through Neural Networks

The use of neural networks is proposed in this article as a means of determining the personality of an individual. This research work arises from the need to combine two psychological tests when carrying out personnel selection. From the assessment of the first test, known as the 16 Personality Factor test, we can directly obtain an appraisal of the individual’s personality type equivalent to the one given by the Enneagram Test, which then no longer needs to be administered. The two chosen tests are widely accepted by Human Resources departments in large companies as useful tools for selecting personnel when new recruitment comes up, for internal personnel promotion, and for employees’ personal development and growth. The (mathematical/computer science) model chosen to attain the research objectives is based on Artificial Neural Networks.

J. R. Sanchez, M. I. Capel, Celina Jiménez, Gonzalo Rodriguez-Fraile, M. C. Pegalajar
Fuzzy Rule Learning for Material Classification from Imprecise Data

To address the problem of illicit substance detection at borders, we propose a complete method for explainable classification of materials. The classification is performed using imprecise chemical data, which is quite rare in the literature. We follow a two-step workflow based on fuzzy logic induction. Firstly, a clustering approach is used to learn suitable fuzzy terms for the various linguistic variables. Secondly, we induce rules for a justified classification using a fuzzy decision tree. Both methods are adaptations of classic ones to the case of imprecise data. At the end of the paper, results on simulated data are presented in the expectation of real data.

Arnaud Grivet Sébert, Jean-Philippe Poli
Tell Me Why: Computational Explanation of Conceptual Similarity Judgments

In this paper we introduce a system for the computation of explanations that accompany scores in the conceptual similarity task. In this setting the problem is, given a pair of concepts, to provide a score that expresses to what extent the two concepts are similar. In order to explain how explanations are automatically built, we illustrate some basic features of COVER, the lexical resource that underlies our approach, and the main traits of the MeRaLi system, which computes conceptual similarity and explanations, all in one. To assess the computed explanations, we designed a human experiment, which provided interesting and encouraging results that we report and discuss in depth.

Davide Colla, Enrico Mensa, Daniele P. Radicioni, Antonio Lieto
Multi-operator Decision Trees for Explainable Time-Series Classification

Analyzing time series is a task of rising interest in machine learning. At the same time, developing interpretable machine learning tools is a recent challenge posed by industry to ease the use of these tools by engineers and domain experts. In this paper we address the problem of generating interpretable classifications of time-series data. We propose to extend the classical decision tree machine learning algorithm to Multi-operator Temporal Decision Trees (MTDT). The resulting algorithm provides interpretable decisions, thus improving result readability while preserving classification accuracy. Alongside MTDT, we provide an interactive visualization tool allowing a user to analyse the data, their intrinsic regularities, and the learned tree model.

Vera Shalaeva, Sami Alkhoury, Julien Marinescu, Cécile Amblard, Gilles Bisson
Comparison-Based Inverse Classification for Interpretability in Machine Learning

In the context of post-hoc interpretability, this paper addresses the task of explaining the prediction of a classifier, considering the case where no information is available, neither on the classifier itself, nor on the processed data (neither the training nor the test data). It proposes an inverse classification approach whose principle consists in determining the minimal changes needed to alter a prediction: in an instance-based framework, given a data point whose classification must be explained, the proposed method consists in identifying a close neighbor classified differently, where the closeness definition integrates a sparsity constraint. This principle is implemented using observation generation in the Growing Spheres algorithm. Experimental results on two datasets illustrate the relevance of the proposed approach that can be used to gain knowledge about the classifier.

Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, Marcin Detyniecki

Aggregation Operators, Fuzzy Metrics and Applications

Frontmatter
Efficient Binary Fuzzy Measure Representation and Choquet Integral Learning

The Choquet integral (ChI), a parametric function for information aggregation, is parameterized by the fuzzy measure (FM), which has $2^N$ real-valued variables for N inputs. However, the ChI incurs a huge storage and computational burden due to its exponential complexity relative to N and, as a result, its calculation, storage, and learning become intractable for even modest sizes (e.g., $N=15$). Inspired by empirical observations in multi-sensor fusion and the more general need to mitigate the storage, computational, and learning limitations, we previously explored the binary ChI (BChI) relative to the binary fuzzy measure (BFM). The BChI is a natural fit for many applications and can be used to approximate others. Previously, we investigated different properties of the BChI and we provided an initial representation. In this article, we propose a new efficient learning algorithm for the BChI, called EBChI, by utilizing the BFM properties that add at most one variable per training instance. Furthermore, we provide an efficient representation of the BFM (EBFM) scheme that further reduces the number of variables required for storage and computation, thus enabling the use of the BChI for “big N”. Finally, we conduct experiments on synthetic data that demonstrate the efficiency of our proposed techniques.

Muhammad Aminul Islam, Derek T. Anderson, Xiaoxiao Du, Timothy C. Havens, Christian Wagner
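As background for the storage issue the abstract describes, a naive discrete Choquet integral can be sketched as follows. This is the generic textbook construction, not the paper's EBChI algorithm, and the example fuzzy measure is invented for illustration; note that the measure dictionary holds all $2^N$ subset values, which is exactly the exponential cost the EBChI/EBFM representations aim to avoid.

```python
def choquet_integral(x, mu):
    """Discrete Choquet integral of inputs x w.r.t. fuzzy measure mu.

    mu maps frozensets of input indices to [0, 1], with
    mu(full index set) = 1; it needs 2^N values for N inputs.
    """
    n = len(x)
    # Order indices so that x[pi[0]] <= x[pi[1]] <= ...
    pi = sorted(range(n), key=lambda i: x[i])
    total, prev = 0.0, 0.0
    for k, i in enumerate(pi):
        # A_k: indices whose input is at least x[i]
        a_k = frozenset(pi[k:])
        total += (x[i] - prev) * mu[a_k]
        prev = x[i]
    return total

# A tiny (hypothetical) fuzzy measure on N = 2 inputs: all 2^2 subsets.
mu = {
    frozenset(): 0.0,
    frozenset({0}): 0.3,
    frozenset({1}): 0.6,
    frozenset({0, 1}): 1.0,
}
print(round(choquet_integral([0.5, 0.2], mu), 6))  # 0.29
```

For N = 15 the dictionary would already need $2^{15} = 32768$ entries, which is the motivation for the binary measure representations studied in the paper.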
Mapping Utilities to Transitive Preferences

This article deals with the construction of (pairwise) preference relations from degrees of utilities, e.g. ratings. Recently, the U2P method has been proposed for this purpose, but U2P is neither additively nor multiplicatively transitive. This paper proposes the U2PA and the U2PM methods. U2PA is additively transitive, and U2PM is multiplicatively transitive. Moreover, both U2PA and U2PM have linear preference over ambiguity.

Thomas A. Runkler
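The U2PA and U2PM constructions themselves are defined in the paper; as a hedged illustration of the two transitivity notions at stake, the classical mappings below (a linear map for additive transitivity, Luce's ratio rule for multiplicative transitivity) show how a utility vector can induce a transitive pairwise preference relation. The function names are ours, not the paper's.

```python
def pref_additive(ua, ub):
    """Additively transitive preference: P(a,c) = P(a,b) + P(b,c) - 1/2.

    Utilities are assumed to lie in [0, 1].
    """
    return 0.5 + (ua - ub) / 2.0

def pref_multiplicative(ua, ub):
    """Multiplicatively transitive preference (Luce's ratio rule).

    Utilities are assumed strictly positive, so P(a,b)/P(b,a) = ua/ub.
    """
    return ua / (ua + ub)
```

With utilities 0.9, 0.6, 0.2 one can verify both transitivity identities numerically; the paper's U2PA/U2PM methods are designed to satisfy them by construction.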
Mortality Rates Smoothing Using Mixture Function

The paper describes a new method for smoothing mortality rates using so-called moving mixture functions. Mixture functions are a special class of weighted averaging functions in which the weights are determined by continuous weighting functions that depend on the input values. If these weighting functions are increasing, mixture functions form an important class of aggregation functions. Such mixture functions are more flexible than the standard weighted arithmetic mean, and their weighting functions allow one to penalize or reinforce inputs based on their magnitude. The advantages of this method are that the weights of the input values can be chosen by the modeller and the coefficients of the weighting functions can be changed each year so that the mean square error is minimized. Moreover, the paper examines the impact of this method on the amount of whole life pension annuities.

Samuel Hudec, Jana Špirková
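As a hedged sketch of the mechanism the abstract describes (not the paper's actual weighting functions or actuarial data), a mixture function is a weighted arithmetic mean whose weights come from an input-dependent weighting function:

```python
def mixture(values, g):
    """Mixture function: weighted mean with input-dependent weights g(x_i)."""
    weights = [g(v) for v in values]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# An increasing weighting function reinforces large inputs;
# the additive constant keeps the denominator strictly positive.
g = lambda t: 0.1 + t
print(round(mixture([0.2, 0.8], g), 6))  # 0.65, vs. plain mean 0.5
```

Because the weights move with the inputs, the large input 0.8 is reinforced relative to an ordinary arithmetic mean, which is the flexibility the abstract attributes to mixture functions.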
Aggregation Functions Based on Deviations

After recalling penalty and deviation based constructions of idempotent aggregation functions, we introduce the concept of a general deviation function and related construction of aggregation functions. Our approach is exemplified in some examples, illustrating the ability of our method to model possibly different aggregation attitudes in different coordinates of the aggregated score vectors.

Marián Decký, Radko Mesiar, Andrea Stupňanová
Nullnorms and T-Operators on Bounded Lattices: Coincidence and Differences

T-operators were defined on [0, 1] by Mas et al. in 1999. In 2001, Calvo et al. introduced the notion of nullnorms, also on [0, 1]. Both of these operations were defined as generalizations of t-norms and t-conorms. As Mas et al. pointed out in 2002, t-operators and nullnorms coincide on [0, 1]. Afterwards, only nullnorms were studied and later generalized as operations on bounded lattices. Our intention is to introduce t-operators also as operations on bounded lattices. We show that, on bounded lattices, nullnorms and t-operators need not coincide. We explore conditions under which one of these operations is necessarily the other, and conditions under which they differ.

Slavka Bodjanova, Martin Kalina
Steinhaus Transforms of Fuzzy String Distances in Computational Linguistics

In this paper we deal with distances for fuzzy strings in $[0,1]^n$, to be used in distance-based linguistic classification. We start from the fuzzy Hamming distance, anticipated by the linguist Muljačić back in 1967, and the taxicab distance, both of which generalize the usual crisp Hamming distance: the first uses the standard logical operations of minimum for conjunction and maximum for disjunction, while the second uses Łukasiewicz t-norms and t-conorms. We resort to the Steinhaus transform, a powerful tool which allows one to deal with linguistic data that are not only fuzzy, but possibly also irrelevant or logically inconsistent. Experimental results on actual data are shown and preliminarily commented upon.

Anca Dinu, Liviu P. Dinu, Laura Franzoi, Andrea Sgarro
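A hedged sketch of the two base distances and of the Steinhaus transform, here in its general metric form $d_e(x,y) = 2\,d(x,y) / (d(x,e) + d(y,e) + d(x,y))$ around a pivot $e$; the paper's linguistic data and pivot choices are its own.

```python
def fuzzy_hamming(x, y):
    """Muljacic-style fuzzy Hamming distance: min as AND, max as OR."""
    return sum(max(min(a, 1 - b), min(1 - a, b)) for a, b in zip(x, y))

def taxicab(x, y):
    """Lukasiewicz-style generalization: plain L1 distance on [0,1]^n."""
    return sum(abs(a - b) for a, b in zip(x, y))

def steinhaus(d, e):
    """Steinhaus transform of metric d around pivot e (a bounded metric)."""
    def d_e(x, y):
        denom = d(x, e) + d(y, e) + d(x, y)
        return 0.0 if denom == 0 else 2.0 * d(x, y) / denom
    return d_e

# On crisp (0/1) strings both distances reduce to the Hamming distance.
x, y = [1, 0, 1], [0, 0, 1]
print(fuzzy_hamming(x, y), taxicab(x, y))  # 1 1
```

The transformed distance stays in [0, 1] and weights disagreements relative to the pivot, which is what makes it useful for discounting irrelevant or inconsistent linguistic features.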
Merging Information Using Uncertain Gates: An Application to Educational Indicators

Knowledge provided by human experts is often imprecise and uncertain. Possibility theory provides a solution to handle these problems. The modeling of knowledge can be performed with a possibilistic network, but this requires defining all the parameters of the Conditional Possibility Tables. Uncertain gates, like noisy gates in probability theory, allow the automatic calculation of Conditional Possibility Tables. Uncertain gates connectors can be used for merging information: T-norm, T-conorm, mean, and hybrid operators can be used to define new uncertain gates connectors. In this paper, we present an experiment on the calculation of educational indicators. Indeed, the LMS Moodle provides a large amount of data about learners that can be merged to provide indicators to teachers, so that teachers can better understand their students’ needs and how they learn. The knowledge about the behavior of learners can be provided by teachers but also by data mining. The knowledge is modeled using uncertain gates and evaluated from the data. The indicators can be presented to teachers in a decision support system.

Guillaume Petiot
On the Use of Fuzzy Preorders in Multi-robot Task Allocation Problem

This paper addresses the multi-robot task allocation problem. In particular, given a collection of tasks and robots, we focus on how to select the best robot to execute each task by means of the so-called response threshold method. In this method, each robot decides to leave a task and perform another one (decides to transit) following a probability (response function) that depends mainly on a stimulus and the current task. The probabilistic approaches used to model the transitions present several drawbacks. To solve these problems, in a previous work we introduced the use of indistinguishability operators to model response functions, and possibility theory instead of probability. In this paper we extend the previous work in order to model response functions when the stimulus under consideration depends on the distance between tasks and their utility. Thus, the resulting response functions that model transitions in the Markov chains must be asymmetric. In the light of this asymmetry, it seems natural to use fuzzy preorders to model the system’s behaviour. The results of the simulations executed in Matlab validate our approach and show again how the possibilistic Markov chains outperform their probabilistic counterparts.

José Guerrero, Juan-José Miñana, Óscar Valero
On the Problem of Aggregation of Partial T-Indistinguishability Operators

In this paper we focus our attention on the aggregation of partial T-indistinguishability operators (relations). Concretely, we characterize, by means of (T-$T_{\min}$)-tuples, those functions that allow one to merge a collection of partial T-indistinguishability operators into a single one. Moreover, we show that monotonicity is a necessary condition for a function to aggregate partial T-indistinguishability operators into a new one. We also show that an inter-exchange composition function condition is sufficient to guarantee that a function aggregates partial T-indistinguishability operators. Finally, examples of this type of function are given.

Tomasa Calvo Sánchez, Pilar Fuster-Parra, Óscar Valero
Size-Based Super Level Measures on Discrete Space

We continue the investigation of the concept of size introduced by Do and Thiele [3]. Our focus is the computation of the corresponding super level measure, a key component of size applications, on a discrete space, i.e., a finite set with the discrete topology. We find critical numbers which determine the change of value of the super level measure, and we present an algorithm for super level measure computation based on these numbers.

Jana Borzová, Lenka Halčinová, Jaroslav Šupina
What Is the Aggregation of a Partial Metric and a Quasi-metric?

Generalized metrics have been shown to be useful in many fields of Computer Science. In particular, partial metrics and quasi-metrics are used to develop quantitative mathematical models in denotational semantics and in the asymptotic complexity analysis of algorithms, respectively. The aforesaid models are implemented independently and are not related. However, it seems natural to consider a unique framework which remains valid for applications to both aforesaid fields. A first natural attempt to achieve that target suggests that the quantitative information should be obtained by aggregating a partial metric and a quasi-metric. Inspired by this fact, we explore ways of merging, by means of a function, the aforementioned generalized metrics into a new one. We show that the induced generalized metric matches up with a partial quasi-metric. Thus, we characterize those functions that allow one to generate partial quasi-metrics from the combination of a partial metric and a quasi-metric. Moreover, the relationship between the problem under consideration and the problems of merging partial metrics and quasi-metrics is discussed. Examples that illustrate the obtained results are also given.

Juan-José Miñana, Óscar Valero
Generalized Farlie-Gumbel-Morgenstern Copulas

The Farlie-Gumbel-Morgenstern copulas are related to the independence copula $\varPi$ and can be seen as perturbations of $\varPi$. Based on quadratic constructions of copulas, we provide a new look at them. Starting from any 2-dimensional copula and an appropriate real function, we introduce new parametric families of copulas which in the case of the independence copula $\varPi$ coincide with the Farlie-Gumbel-Morgenstern family. Using the proposed approach, we also obtain as a particular case a subclass of the Fréchet family of copulas containing all three basic copulas $W$, $\varPi$ and $M$, i.e. a comprehensive family of copulas. Finally, based on an iterative approach, we introduce copula families $(C_r)_{r\in [-\infty ,\infty ]}$ complete w.r.t. dependence parameters, resulting in the case of the independence copula and parameters $r\in [-1,1]$ in the Farlie-Gumbel-Morgenstern family.

Anna Kolesárová, Radko Mesiar, Susanne Saminger-Platz
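For reference, the classical Farlie-Gumbel-Morgenstern family that the abstract generalizes is the perturbation $C_\theta(u,v) = uv\,(1 + \theta(1-u)(1-v))$ of the independence copula $\varPi(u,v)=uv$, with $\theta\in[-1,1]$; the paper's quadratic constructions are more general, and the sketch below covers only this classical case.

```python
def fgm(u, v, theta):
    """Farlie-Gumbel-Morgenstern copula: a perturbation of Pi(u, v) = u*v."""
    if not -1.0 <= theta <= 1.0:
        raise ValueError("theta must lie in [-1, 1]")
    return u * v * (1.0 + theta * (1.0 - u) * (1.0 - v))
```

Setting theta = 0 recovers the independence copula, and the copula boundary conditions C(u, 1) = u and C(0, v) = 0 hold for every admissible theta.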
Extracting Decision Rules from Qualitative Data via Sugeno Utility Functionals

Sugeno integrals are qualitative aggregation functions. They are used in multiple criteria decision making and decision under uncertainty, for computing global evaluations of items based on local evaluations. The combination of a Sugeno integral with unary order-preserving functions on each criterion is called a Sugeno utility functional (SUF). A noteworthy property of SUFs is that they represent multi-threshold decision rules, while Sugeno integrals represent single-threshold ones. However, not all sets of multi-threshold rules can be represented by a single SUF. In this paper, we consider functions defined as the minimum or the maximum of several SUFs. These max-SUFs and min-SUFs can represent all functions that can be described by a set of multi-threshold rules, i.e., all order-preserving functions on finite scales. We study their potential advantages as a compact representation of a large set of rules, as well as an intermediary step for extracting rules from empirical datasets.

Quentin Brabant, Miguel Couceiro, Didier Dubois, Henri Prade, Agnès Rico
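As a hedged, brute-force sketch (not the paper's rule-extraction algorithm), the discrete Sugeno integral and a SUF built on top of it can be written directly from their definitions; the fuzzy measure and rescaling functions below are invented for illustration.

```python
from itertools import combinations

def sugeno_integral(x, mu):
    """Discrete Sugeno integral: max over nonempty subsets A of
    min(mu(A), min_{i in A} x_i), with mu given on frozensets of indices."""
    best = 0.0
    for r in range(1, len(x) + 1):
        for subset in combinations(range(len(x)), r):
            best = max(best, min(mu[frozenset(subset)],
                                 min(x[i] for i in subset)))
    return best

def suf(x, mu, phis):
    """Sugeno utility functional: rescale each criterion with an
    order-preserving map, then take the Sugeno integral."""
    return sugeno_integral([phi(xi) for phi, xi in zip(phis, x)], mu)

# A hypothetical measure on 2 criteria.
mu = {frozenset({0}): 0.3, frozenset({1}): 0.6, frozenset({0, 1}): 1.0}
print(sugeno_integral([0.5, 0.2], mu))  # 0.3
```

The brute force over all subsets is exponential; it is only meant to make the min/max structure of the aggregation explicit.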
Image Feature Extraction Using OD-Monotone Functions

Edge detection is a basic technique used as a preliminary step for, e.g., object extraction and recognition in image processing. Many of the methods for edge detection fit in the breakdown structure by Bezdek, in which one of the key parts is feature extraction. This work presents a method to extract edge features from a grayscale image using so-called ordered directionally monotone functions. For this purpose we introduce some concepts about directional monotonicity and present two construction methods for feature extraction operators. The proposed technique is competitive with existing methods in the literature. Furthermore, if we combine the features obtained by different methods using penalty functions, the results equal or improve upon those of state-of-the-art methods.

Cedric Marco-Detchart, Carlos Lopez-Molina, Javier Fernández, Miguel Pagola, Humberto Bustince
Applying Suitability Distributions in a Geological Context

Some industrial purposes require specific marine resources. Companies rely on information from resource models to decide where to go and what the cost will be to perform the required extractions. Such models, however, are typical examples of imprecise data sets wherein most data is estimated rather than measured. This is especially true for marine resource models, for which acquiring real data samples is a long and costly endeavor. Consequently, such models are largely computed by interpolating data from a small set of measurements. In this paper, we discuss how we have applied fuzzy set theory on a real data set to deal with these issues. It is further explained how the resulting fuzzy model can be queried so it may be used in a decision making context. To evaluate queries, we use a novel preference modeling and evaluation technique specifically suited for dealing with uncertain data, based on suitability distributions. The technique is illustrated by evaluating an example query and discussing the results.

Robin De Mol, Guy De Tré
Metrics for Tag Cloud Evaluation

Since their appearance, tag clouds have been widely used tools on the Internet. The main purposes of these textual visualizations are information retrieval, content representation, and browsing of text. Despite their widespread use and the large amount of research that has been carried out on them, the main metrics available in the literature evaluate the quality of a tag cloud based only on query results. There are no adequate metrics for when the tag cloud is extracted from text and used to represent information content. In this work, three new metrics are proposed for the evaluation of tag clouds when their main function is to represent information content: coverage, overlap, and disparity, as well as a fourth metric, balance, which we propose to calculate using OWA operators.

Úrsula Torres-Parejo, Jesús R. Campaña, Maria-Amparo Vila, Miguel Delgado
Evidential Bagging: Combining Heterogeneous Classifiers in the Belief Functions Framework

In machine learning, ensemble learning methodologies are known to improve predictive accuracy and robustness. They consist in learning many classifiers whose outputs are finally combined according to different techniques. Bagging, or Bootstrap Aggregating, is one of the most famous ensemble methodologies and is usually applied with the same base classification algorithm, i.e. the same type of classifier is learnt multiple times on bootstrapped versions of the initial learning dataset. In this paper, we propose a bagging methodology that involves different types of classifiers. The classifiers’ probabilistic outputs are used to build mass functions which are then combined within the belief functions framework. Three different ways of building mass functions are proposed, and preliminary experiments on benchmark datasets showing the relevance of the approach are presented.

Nicolas Sutton-Charani, Abdelhak Imoussaten, Sébastien Harispe, Jacky Montmain
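The paper's three mass-construction schemes are its own; the combination step it relies on can be sketched with Dempster's rule over focal sets, shown here on a two-class frame with hypothetical masses.

```python
def dempster_combine(m1, m2):
    """Dempster's rule: conjunctive combination with conflict renormalization.

    Mass functions map frozenset focal elements to masses summing to 1.
    """
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb  # mass on disjoint focal sets
    if conflict >= 1.0:
        raise ValueError("totally conflicting sources cannot be combined")
    return {focal: w / (1.0 - conflict) for focal, w in combined.items()}

# Two classifiers' (hypothetical) masses on the frame {'+', '-'};
# frozenset({'+', '-'}) carries the mass assigned to "don't know".
m1 = {frozenset({'+'}): 0.6, frozenset({'+', '-'}): 0.4}
m2 = {frozenset({'+'}): 0.5, frozenset({'-'}): 0.2, frozenset({'+', '-'}): 0.3}
m12 = dempster_combine(m1, m2)
```

The combined mass function again sums to 1, and ignorance shrinks as agreeing evidence accumulates, which is what makes the framework attractive for fusing heterogeneous classifiers.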
Uninorms That Are Neither Conjunctive Nor Disjunctive on Bounded Lattices

In this paper, we demonstrate that on some bounded lattices L, there exist elements $e\in L\setminus \{0,1\}$ such that all uninorms having e as the neutral element are only conjunctive or disjunctive. We then introduce two new construction methods to obtain uninorms that are neither conjunctive nor disjunctive on a bounded lattice with a neutral element, under some additional constraints. Furthermore, an illustrative example showing that our methods differ slightly from each other is added.

Gül Deniz Çaylı
On the Migrativity Property for Uninorms and Nullnorms

In this paper the notions of $\alpha$-migrative uninorms over a fixed nullnorm and $\alpha$-migrative nullnorms over a fixed uninorm are introduced and studied. All solutions of the migrativity equation for all possible combinations of uninorms and nullnorms are investigated. So, $(\alpha ,T)$-migrative nullnorms and $(\alpha ,T)$-migrative uninorms (respectively, $(\alpha ,S)$-migrative nullnorms and $(\alpha ,S)$-migrative uninorms) for a given t-norm (t-conorm) are extended to a more general form.

Emel Aşıcı
Comparison of Fuzzy Integral-Fuzzy Measure Based Ensemble Algorithms with the State-of-the-Art Ensemble Algorithms

The Fuzzy Integral (FI) is a non-linear aggregation operator which enables the fusion of information from multiple sources with respect to a Fuzzy Measure (FM), which captures the worth of both the individual sources and all their possible combinations. Based on the expected potential of non-linear aggregation offered by the FI, its application to decision-level fusion in ensemble classifiers, i.e. fusing multiple classifier outputs into one superior decision-level output, has recently been explored. A key example of such an FI-FM ensemble classification method is the Decision-level Fuzzy Integral Multiple Kernel Learning (DeFIMKL) algorithm, which aggregates the outputs of kernel-based classifiers through the use of the Choquet FI with respect to an FM learned through a regularised quadratic programming approach. While the approach has been validated against a number of classifiers based on multiple kernel learning, it has thus far not been compared to the state-of-the-art in ensemble classification. Thus, this paper puts forward a detailed comparison of FI-FM based ensemble methods, specifically the DeFIMKL algorithm, with state-of-the-art ensemble methods including AdaBoost, Bagging, Random Forest, and Majority Voting over 20 public datasets from the UCI machine learning repository. The results on the selected datasets suggest that the FI-based ensemble classifier performs both well and efficiently, indicating that it is a viable alternative when selecting ensemble classifiers and that the non-linear fusion of decision-level outputs offered by the FI has the expected potential and warrants further study.

Utkarsh Agrawal, Anthony J. Pinar, Christian Wagner, Timothy C. Havens, Daniele Soria, Jonathan M. Garibaldi
Application of Aggregation Operators to Assess the Credibility of User-Generated Content in Social Media

Nowadays, User-Generated Content (UGC) spreads across social media through Web 2.0 technologies, in the absence of traditional trusted third parties that can verify its credibility. Assessing the credibility of UGC is a recent research topic, which has been tackled by many approaches as a classification problem: information is automatically categorized into genuine and fake, usually by employing data-driven solutions based on Machine Learning (ML) techniques. In this paper, to address some open issues concerning the use of ML, and to give the decision maker greater control over the process of UGC credibility assessment, the importance that the Multi-Criteria Decision Making (MCDM) paradigm can have in association with the use of aggregation operators is discussed. Some potential aggregation schemes and their properties are illustrated, as well as some interesting research directions.

Gabriella Pasi, Marco Viviani

Belief Function Theory and Its Applications

Frontmatter
Measuring Features Strength in Probabilistic Classification

Probabilistic classifiers output a probability of an input being a member of each of the possible classes, given some of its feature values, selecting the most probable class as the predicted class. We introduce and compare different measures of feature strength in probabilistic confidence-weighted classification models. For that, we follow two approaches: one based on conditional probability tables of the classification variable with respect to each feature, using different statistical distances and a correction parameter, and a second one based on the accuracy of predicting the classification from evidence on each isolated feature. In a case study, we compute these feature strength measures and rank features according to them, comparing the results.

Rosario Delgado, Xavier-Andoni Tibau
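The first approach described in the abstract above can be sketched in a few lines. This is only an illustrative reading, not the authors' implementation: we take total variation as the statistical distance (the paper compares several, and also uses a correction parameter, omitted here), and the toy data set is hypothetical. For each feature, the score averages the distance between the class prior P(C) and the class posterior P(C | X = x), weighted by P(x).

```python
from collections import Counter

def feature_strength(samples, feature_idx):
    """Illustrative strength score for one feature: the average
    total-variation distance between the class prior P(C) and the
    class posterior P(C | X = x), weighted by P(x)."""
    n = len(samples)
    class_counts = Counter(c for _, c in samples)
    prior = {c: k / n for c, k in class_counts.items()}

    # Conditional probability table of the class given this feature.
    by_value = {}
    for features, c in samples:
        by_value.setdefault(features[feature_idx], Counter())[c] += 1

    strength = 0.0
    for x, counts in by_value.items():
        nx = sum(counts.values())
        posterior = {c: k / nx for c, k in counts.items()}
        tv = 0.5 * sum(abs(posterior.get(c, 0.0) - prior[c]) for c in prior)
        strength += (nx / n) * tv
    return strength

# Toy data: (feature vector, class). Feature 0 separates the classes,
# feature 1 carries no information about the class.
data = [(("a", "u"), 0), (("a", "v"), 0), (("b", "u"), 1), (("b", "v"), 1)]
print(feature_strength(data, 0))  # -> 0.5 (informative feature)
print(feature_strength(data, 1))  # -> 0.0 (uninformative feature)
```

An informative feature pushes the posterior away from the prior, so it gets a higher score; this is the intuition behind ranking features by such measures.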
DETD: Dynamic Policy for Case Base Maintenance Based on EK-NNclus Algorithm and Case Types Detection

Case Based Reasoning (CBR) systems have proved successful in various domains. Consequently, several works focus on Case Base Maintenance (CBM), which aims to preserve the performance of CBR systems. CBM tools generally offer techniques to select only the cases with the most potential for problem-solving. However, cases represent real-world situations and are therefore full of imperfections, which makes this task harder. In addition, new problems with substantially new solutions appear in case bases over time. Hence, in this paper, we propose a new CBM approach able to manage both uncertainty and the dynamic aspect of maintenance, using the evidential clustering technique EK-NNclus, based on belief function theory, in which the number of clusters is determined automatically and changes from one maintenance run to another. Finally, the maintenance task is performed by selecting only two types of cases.

Safa Ben Ayed, Zied Elouedi, Eric Lefevre
Ensemble Enhanced Evidential k-NN Classifier Through Rough Set Reducts

Data uncertainty is one of the main issues in several real-world applications and can affect the decisions of experts. Several studies have been carried out, within the data mining and pattern recognition fields, to process the uncertainty associated with classifier outputs. One solution consists of transforming classifier outputs into evidence within the framework of belief functions. To obtain the best performance, ensemble systems with belief functions have been well studied for several years. In this paper, we construct, through rough set reducts, an ensemble of the Evidential Editing k-Nearest Neighbors classifier (EEk-NN), an extension of the standard k-NN classifier that handles data with uncertain attribute values expressed within the belief function framework.

Asma Trabelsi, Zied Elouedi, Eric Lefevre
Towards a Hybrid User and Item-Based Collaborative Filtering Under the Belief Function Theory

Collaborative Filtering (CF) approaches enjoy considerable popularity in the field of Recommender Systems (RSs). They exploit users' past ratings and provide personalized recommendations on this basis. Commonly, neighborhood-based CF approaches focus on relationships between items (item-based) or, alternatively, between users (user-based). User-based CF predicts new preferences based on users sharing similar interests, while item-based CF computes the similarity between items rather than users to perform the final predictions. However, both approaches exploit only partial information from the rating matrix, since they rely either on the ratings of similar users or on those of similar items. Besides, the information provided by these pieces of evidence, as well as the final predictions, cannot be fully trusted. To tackle these issues, we propose a new hybrid neighborhood-based CF approach under the belief function framework. Our approach takes advantage of both kinds of information sources while handling the uncertainty pervading the predictions. Pieces of evidence from both items and users are combined using Dempster's rule of combination. The performance of the new recommendation approach is validated on a real-world data set and compared to state-of-the-art neighborhood-based CF approaches under belief function theory.

Raoua Abdelkhalek, Imen Boukhris, Zied Elouedi
Evidential Top-k Queries Evaluation: Algorithms and Experiments

Top-k queries are a powerful tool to rank-order answers and return only the most interesting ones. ETop-k queries were introduced to discriminate answers in the context of evidential databases. Because of their interval degrees, such answers are difficult to rank-order and to interpret. Two methods for ranking intervals have been proposed in the evidential context. This paper presents an efficient implementation of these methods and discusses the experimental results obtained.

Fatma Ezzahra Bousnina, Mouna Chebbah, Mohamed Anis Bach Tobji, Allel Hadjali, Boutheina Ben Yaghlane
Independence of Sources in Social Networks

Online social networks are increasingly studied. The links between users of a social network are important and must be well qualified, for example in order to detect communities and identify influencers. In this paper, we present an approach based on the theory of belief functions to estimate the degrees of cognitive independence between users in a social network. We evaluate the proposed method on a large amount of data gathered from the Twitter social network.

Manel Chehibi, Mouna Chebbah, Arnaud Martin

Current Techniques to Model, Process and Describe Time Series

Frontmatter
Forecasting Energy Demand by Clustering Smart Metering Time Series

Current demands on the energy market, such as legal policies towards green energy usage and economic pressure due to growing competition, require energy companies to increase their understanding of consumer behavior and streamline business processes. One way to help achieve these goals is by making use of the increasing availability of smart metering time series. In this paper we extend an approach based on fuzzy clustering using smart meter data to yield load profiles which can be used to forecast the energy demand of customers. In addition, our approach is built with existing business processes in mind. This helps not only to accurately satisfy real world requirements, but also to ease adoption by the industry. We also assess the quality of our approach using real world smart metering datasets.

Christian Bock
Linguistic Description of the Evolution of Stress Level Using Fuzzy Deformable Prototypes

The purpose of this paper is to show that stress levels can be described through a complete time-log analysis. To this end, a model based on fuzzy deformable prototypes has been developed, which uses a fuzzy representation of prototypical situations. The proposed model has been applied to a database of time logs from students with and without stress. Preliminary results from applying the model have been validated by experts. Moreover, the model has been used as a classifier, obtaining good results for both sensitivity and specificity. Finally, the proposal has been validated and should be considered useful for the design of expert systems that support the description of stress levels.

Francisco P. Romero, José A. Olivas, Jesus Serrano-Guerrero
Model Averaging Approach to Forecasting the General Level of Mortality

Even a 1% improvement in the overall forecast accuracy of mortality rates may significantly decrease insurers' costs. In practice, the Lee-Carter model is widely used for forecasting mortality rates. In this study, we combine the traditional Lee-Carter model with recent advances in weighted model averaging. For this purpose, first, a training database of template predictive models is constructed for the mortality data and processed with similarity measures; second, competitive predictive models are averaged to produce forecasts. The main innovation of the proposed approach is that it reflects the uncertainty related to the shortness of the available data (e.g., 14 observations) by incorporating multiple predictive models. The performance of the proposed approach is illustrated with experiments on the Human Mortality Database. We analyzed time series for women and men aged 0–100 years from 10 countries in Central and Eastern Europe. The numerical results are very promising and show that the proposed approach is highly competitive with state-of-the-art models. It outperforms the benchmarks especially when forecasting long periods (6–10 years ahead).

Marcin Bartkowiak, Katarzyna Kaczmarek-Majer, Aleksandra Rutkowska, Olgierd Hryniewicz

Discrete Models and Computational Intelligence

Frontmatter
Robust On-Line Streaming Clustering

With the explosion of ubiquitous continuous sensing, on-line streaming clustering continues to attract attention. The requirements are that the streaming clustering algorithm recognize and adapt clusters as the data evolve, detect anomalies, and automatically form new clusters as incoming data dictate. In this paper, we extend an earlier approach into the Extended Robust On-Line Streaming Clustering (EROLSC) algorithm, which utilizes both the Possibilistic C-Means and Gaussian Mixture Decomposition to perform this task. We show the superiority of EROLSC over traditional streaming clustering algorithms on synthetic and real data sets.

Omar A. Ibrahim, Yizhuo Du, James Keller
T-Overlap Functions: A Generalization of Bivariate Overlap Functions by t-Norms

This paper introduces a generalization of overlap functions obtained by relaxing one of the boundary conditions of their definition. More specifically, instead of requiring that the function be equal to zero if and only if some of the inputs equal zero, we allow it to be zero wherever some given t-norm is zero. We call such a generalization a t-overlap function with respect to that t-norm. We then analyze the main properties of t-overlap functions and introduce some construction methods.

Hugo Zapata, Graçaliz Pereira Dimuro, Javier Fernández, Humberto Bustince
On the Existence and Uniqueness of Fixed Points of Fuzzy Cognitive Maps

Fuzzy Cognitive Maps (FCMs) are decision support tools introduced to model complex behavioral systems. The final conclusion (the output of the system) relies on the assumption that the system reaches an equilibrium point (fixed point) after a certain number of iterations. It is not guaranteed that the iteration leads to a fixed point, since limit cycles and chaotic behaviour may also occur. In this article, we give sufficient conditions for the existence and uniqueness of the fixed point for log-sigmoid and hyperbolic tangent FCMs, based on the weighted connections between the concepts and the parameter of the threshold function. Moreover, in the special case when all of the weights are non-negative, we prove that a fixed point always exists, regardless of the parameter of the threshold function.

István Á. Harmati, Miklós F. Hatwágner, László T. Kóczy
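The iteration the abstract above refers to can be sketched as follows. This is a minimal illustration, not the paper's analysis: the weight matrix, the starting states, and the parameter λ of the log-sigmoid threshold are hypothetical, chosen small enough that the map is a contraction and the iteration visibly settles to a unique fixed point.

```python
import math

def fcm_iterate(W, a0, lam=1.0, tol=1e-9, max_iter=10000):
    """Iterate a log-sigmoid FCM: a_i <- sigma(lam * sum_j W[i][j] * a_j).
    Returns (final activation vector, number of iterations performed)."""
    sigma = lambda x: 1.0 / (1.0 + math.exp(-lam * x))
    a = list(a0)
    for it in range(max_iter):
        nxt = [sigma(sum(w * aj for w, aj in zip(row, a))) for row in W]
        if max(abs(x - y) for x, y in zip(nxt, a)) < tol:
            return nxt, it + 1
        a = nxt
    return a, max_iter

# Hypothetical non-negative weights: two very different starting
# states converge to the same fixed point.
W = [[0.0, 0.4], [0.3, 0.0]]
p, _ = fcm_iterate(W, [0.1, 0.9])
q, _ = fcm_iterate(W, [0.9, 0.1])
print([round(x, 6) for x in p])  # the common fixed point
```

With larger weights or a steeper λ, the same iteration may instead enter a limit cycle or behave chaotically, which is exactly why the sufficient conditions studied in the paper matter.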
Searching Method of Fuzzy Internally Stable Set as Fuzzy Temporal Graph Invariant

In this paper we consider the problem of finding an invariant of a fuzzy temporal graph, namely a fuzzy internally stable set. A fuzzy temporal graph generalizes a fuzzy graph on the one hand and a temporal graph on the other. Here, we consider a fuzzy temporal graph in which the connectivity degrees of vertices vary in discrete time. The notion of a maximal internally stable subset of a fuzzy temporal graph is considered. A method and an algorithm for finding all maximal internally stable sets are proposed, which make it possible to find a fuzzy internally stable set. An example of determining an internally stable fuzzy set is also given.

Alexander Bozhenyuk, Stanislav Belyakov, Margarita Knyazeva, Igor Rozenberg
Prioritisation of Nielsen’s Usability Heuristics for User Interface Design Using Fuzzy Cognitive Maps

Usability heuristics are widely used as a means of evaluating user interfaces. However, little existing work has focused on assessing the effect of these heuristics, individually or collectively, on such systems. In this paper, the authors propose an approach to evaluating the usability of systems that deploys a prioritised version of Nielsen's usability heuristics. Fuzzy cognitive maps were used to prioritise the original heuristics according to experts in both fields. Using either set of heuristics, evaluators can identify the same number of usability issues. However, when trying to enhance the overall usability of a system, the prioritised set of heuristics can help stakeholders focus their limited resources on fixing the subset of issues that collectively has the worst effect on the usability of their systems during each iteration. To test these findings, several websites were evaluated for various usability problems. The experimental results show that, using the proposed heuristics, evaluators were able to find a comparable number of problems to those who used Nielsen's, while the prioritised heuristics yielded an ordered list of issues based on their effect on usability. Therefore, the authors believe that heuristic evaluation in general, and the introduced heuristics in particular, are effective when resources are limited.

Rita N. Amro, Saransh Dhama, Muhanna Muhanna, László T. Kóczy
Discrete Bacterial Memetic Evolutionary Algorithm for the Time Dependent Traveling Salesman Problem

The Time Dependent Traveling Salesman Problem (TDTSP) addressed in this paper is a variant of the well-known Traveling Salesman Problem in which the distances between nodes vary in time (e.g., they are longer during rush hours in the city centre). Our Discrete Bacterial Memetic Evolutionary Algorithm (DBMEA) was tested on benchmark problems (bier127 and a self-generated problem with 250 nodes) with various jam factors. The results demonstrate the effectiveness of the algorithm.

Boldizsár Tüű-Szabó, Péter Földesi, László T. Kóczy

Formal Concept Analysis and Uncertainty

Frontmatter
Study of the Relevance of Objects and Attributes of L-fuzzy Contexts Using Overlap Indexes

Objects and attributes play an important role in an L-fuzzy context. From the point of view of the L-fuzzy concepts, some of them can be more relevant than others. Besides, the number of objects and attributes of the L-fuzzy context is one of the most important factors influencing the size of the L-fuzzy concept lattice. In this paper, we define different rankings for the objects and the attributes according to their relevance in the L-fuzzy concept lattice, using different overlap indexes. These rankings can be useful for reducing the size of the L-fuzzy context.

Cristina Alcalde, Ana Burusco
FCA Attribute Reduction in Information Systems

One of the main goals in formal concept analysis (FCA) and in rough set theory (RST) is the reduction of redundant information. Feature selection mechanisms have been studied separately in many works. In this paper, we analyse the result of applying the reduction mechanisms given in FCA to RST, and give interpretations of such reductions.

M. José Benítez-Caballero, Jesús Medina, Eloísa Ramírez-Poussa
Reliability Improvement of Odour Detection Thresholds Bibliographic Data

Odour control is an important industrial issue, as odour is a criterion in the purchase of materials. The minimal concentration of a pure compound at which its odour can be perceived, called the Odour Detection Threshold (ODT), is key to odour control. Each compound has its own ODT. The literature is the main source of ODTs, but many compounds are not reported and, when they are, the values are marred by high variability. This paper proposes a supervised cleaning methodology to reduce the uncertainty of available ODTs and a prediction of missing ODTs on the basis of physico-chemical variables. The cleaning eliminates 39% of reported compounds while yielding 84% of positive scenarios on 37 comparisons. Missing ODTs are predicted with an error of 0.83 for the training set and 1.14 for the test set (log10 scale). Given the uncertainty of the data, the model is sufficient. This approach allows working with lower uncertainty and a satisfactory prediction of missing ODTs.

Pascale Montreer, Stefan Janaqi, Stéphane Cariou, Mathilde Chaignaud, Isabelle Betremieux, Philippe Ricoux, Frédéric Picard, Sabine Sirol, Budagwa Assumani, Jean-Louis Fanlo
Formal Concept Analysis and Structures Underlying Quantum Logics

A Hilbert space H induces a formal context, the Hilbert formal context $$\overline{H}$$, whose associated concept lattice is isomorphic to the lattice of closed subspaces of H. This set of closed subspaces, denoted $$\mathcal{C}(H)$$, is important in the development of quantum logic and, as an algebraic structure, corresponds to a so-called “propositional system,” that is, a complete, atomistic, orthomodular lattice which satisfies the covering law. In this paper, we continue with our study of the Chu construction by introducing the Chu correspondences between Hilbert contexts, and showing that the category of propositional systems, PropSys, is equivalent to the category $$\text{ChuCors}_{\mathcal{H}}$$ of Chu correspondences between Hilbert contexts.

Ondrej Krídlo, Manuel Ojeda-Aciego
Directness in Fuzzy Formal Concept Analysis

Implicational sets have been shown to be an efficient tool for knowledge representation. An active area is the definition of canonical sets (bases) to efficiently specify and manage the information expressed by implications. Unlike in classical formal concept analysis, in the fuzzy framework it is an open issue to design methods that efficiently compute the corresponding basis from a given set of fuzzy implications and later manage it automatically. In this work we use Simplification Logic to tackle this issue. More specifically, we cover the following stages of the problem: the generalization of Simplification Logic to an arbitrary complete residuated lattice by changing its semantics; the introduction of the syntactic closure and an algorithm to compute it; the definition of a fuzzy direct basis of minimum size, providing the so-called directness property; and, finally, the design of an algorithm to compute this basis.

Pablo Cordero, Manuel Enciso, Angel Mora
Formal Independence Analysis

In this paper we propose a new lens through which to observe the information contained in a formal context. Instead of focusing on the hierarchical relation between objects or attributes induced by their incidence, we focus on the “unrelatedness” of the objects with respect to those attributes with which they are not incident. The crucial order concept for this is that of maximal anti-chain and the corresponding representation capabilities are provided by Behrendt’s theorem. With these tools we introduce the fundamental theorem of Formal Independence Analysis and use it to provide an example of what its affordances are for the analysis of data tables. We also discuss its relation to Formal Concept Analysis.

Francisco J. Valverde-Albacete, Carmen Peláez-Moreno, Inma P. Cabrera, Pablo Cordero, Manuel Ojeda-Aciego

Fuzzy Implication Functions

Frontmatter
Fuzzy Boundary Weak Implications

An extension of fuzzy implications and coimplications, called fuzzy boundary weak implications (shortly, fuzzy bw-implications), is introduced and discussed in this paper. Firstly, by weakening the boundary conditions of fuzzy implications and coimplications, we introduce the concept of fuzzy bw-implications and investigate some of their basic properties. Next, the concept of fuzzy pseudo-negations is introduced and the natural pseudo-negations of fuzzy bw-implications are investigated. Finally, the fuzzy bw-implications generated by aggregation operators and by generator functions, respectively, are discussed in detail. This work is motivated by the fact that some operators used in real applications are not fuzzy implications. We hope that such an extension of fuzzy (co)implications can provide a theoretical foundation for these applications.

Hua-Wen Liu, Michał Baczyński
On Linear and Quadratic Constructions of Fuzzy Implication Functions

In this paper a new method for constructing fuzzy implication functions from a given one, based on ternary polynomial functions, is presented. It is proved that the case of linear polynomial functions leads only to trivial solutions, and thus the quadratic case is studied in depth. It is shown that the quadratic method allows many different possibilities, depending on which of the usual properties of fuzzy implication functions we want to preserve. Specifically, there are infinitely many quadratic functions that transform fuzzy implication functions satisfying properties such as the neutrality principle, the identity principle, or the law of contraposition with respect to the classical negation into new fuzzy implication functions satisfying them.

Sebastia Massanet, Juan Vicente Riera, Joan Torrens
On the Characterization of a Family of Generalized Yager’s Implications

Over the last years, several generalizations of Yager’s f- and g-generated implications have been proposed in the literature, expanding the number of available families of fuzzy implication functions. Among them, the so-called (f, g) and (g, f)-implications were introduced by means of generalizing the internal functions $$x$$ and $$\frac{1}{x}$$ of the standard Yager’s f- and g-generated implications to more general unary functions. In particular, those generated using $$\frac{x}{e}$$ and $$\frac{e}{x}$$ with $$e\in (0,1)$$ stand out due to their key role in the structure of (h, e)-implications. In this paper, the characterizations of the $$(f,\frac{x}{e})$$-implications are presented. These characterizations, which rely on two properties closely related to the law of importation, will be crucial in order to achieve a fully axiomatic characterization of (h, e)-implications.

Raquel Fernandez-Peralta, Sebastia Massanet
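As background to the abstract above, the standard Yager f-generated implication is $$I(x,y)=f^{-1}(x\,f(y))$$ for a decreasing generator f with f(1)=0; the paper's (f, x/e) family generalizes the internal function and is not reproduced here. A minimal sketch with the classic generator f(t) = -ln(t), which gives I(x, y) = y^x, also illustrating the law of importation (mentioned in the abstract) with the product t-norm:

```python
def yager_f_implication(x, y):
    """Yager's f-generated implication with generator f(t) = -ln(t),
    which yields I(x, y) = y**x (convention: I(0, 0) = 1)."""
    return 1.0 if x == 0.0 else y ** x

# Boundary values of a fuzzy implication: I(0,0) = I(1,1) = 1, I(1,0) = 0,
# and the neutrality principle I(1, y) = y.
print(yager_f_implication(0.0, 0.0), yager_f_implication(1.0, 1.0),
      yager_f_implication(1.0, 0.0))  # -> 1.0 1.0 0.0

# Law of importation with the product t-norm: I(x*y, z) == I(x, I(y, z)),
# since (z**y)**x == z**(x*y).
x, y, z = 0.6, 0.7, 0.3
print(abs(yager_f_implication(x * y, z)
          - yager_f_implication(x, yager_f_implication(y, z))) < 1e-12)  # -> True
```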
Generalized Modus Ponens for (U, N)-implications

The Modus Ponens is an essential property in approximate reasoning and fuzzy control when forward inferences are performed. Thus, the conjunctor and the fuzzy implication function used in the inference process are required to satisfy this property. Usually, the conjunctor is modeled by a t-norm, but recently also by conjunctive uninorms. In this paper we study when (U, N)-implications satisfy the Modus Ponens property with respect to a conjunctive uninorm U in general, in a similar way as was previously done for RU-implications. The functional inequality derived from the Modus Ponens involves in this case two different uninorms and a fuzzy negation, leading to many possibilities. Thus, this communication presents only a first step in this study, and many cases, depending on the classes of the uninorms involved, remain to be studied.

M. Mas, D. Ruiz-Aguilera, Joan Torrens
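The Modus Ponens inequality discussed above is $$T(x, I(x,y)) \le y$$ for all x, y. A minimal numerical sketch, under simplifying assumptions: instead of genuine uninorms we take the simplest (S, N)-style instance, the Kleene-Dienes implication I(x, y) = max(1-x, y) (built from the max t-conorm and the standard negation), and check the inequality on a grid for two t-norm conjunctors. It illustrates the abstract's point that whether Modus Ponens holds depends on the class of the conjunctor.

```python
def t_min(a, b):
    return min(a, b)

def t_luk(a, b):
    return max(0.0, a + b - 1.0)  # Lukasiewicz t-norm

def kleene_dienes(x, y):
    """Kleene-Dienes implication I(x, y) = max(1 - x, y)."""
    return max(1.0 - x, y)

# Check T(x, I(x, y)) <= y on a grid for a given conjunctor T.
grid = [i / 20 for i in range(21)]
mp = lambda T: all(T(x, kleene_dienes(x, y)) <= y + 1e-12
                   for x in grid for y in grid)
print(mp(t_min), mp(t_luk))  # -> False True
```

With T = min the inequality fails (e.g., x = 0.6, y = 0.3 gives min(0.6, 0.4) = 0.4 > 0.3), whereas with the Łukasiewicz t-norm it holds for every pair on the grid.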
Dependencies Between Some Types of Fuzzy Equivalences

The article deals with diverse types of fuzzy equivalences interpreted as fuzzy connectives. It presents some dependencies between the well-known fuzzy C-equivalences as well as the recently examined fuzzy $$\alpha$$-C-equivalences, fuzzy semi-C-equivalences, fuzzy weak C-equivalences, and a fuzzy equivalence defined by Fodor and Roubens.

Urszula Bentkowska, Anna Król
Selected Properties of Generalized Hypothetical Syllogism Including the Case of R-implications

In this paper we investigate the generalized hypothetical syllogism (GHS) in fuzzy logic, which can be seen as the functional equation $$\sup_{z\in [0,1]} T(I(x,z), I(z,y))=I(x,y)$$, where I is a fuzzy implication and T is a t-norm. Our contribution is inspired by the article [Fuzzy Sets Syst 323:117–137 (2017)], where the author considered (GHS) when T is the minimum t-norm. We show several general results and then focus on R-implications. We characterize all t-norms which satisfy (GHS) with an arbitrarily fixed R-implication generated from a left-continuous t-norm.

Michał Baczyński, Katarzyna Miś
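The (GHS) equation above can be checked numerically for one well-known pair: the Łukasiewicz R-implication together with its generating Łukasiewicz t-norm. This is only a grid-based sanity check, not the paper's characterization; it relies on the standard residuation facts that T(I(x,z), I(z,y)) ≤ I(x,y) for all z, with equality attained at z = x (since I(x,x) = 1).

```python
def t_luk(a, b):
    return max(0.0, a + b - 1.0)       # Lukasiewicz t-norm

def i_luk(x, y):
    return min(1.0, 1.0 - x + y)       # its residual (R-)implication

grid = [i / 50 for i in range(51)]      # discretization of [0, 1]

def ghs_holds(x, y):
    """Compare sup_z T(I(x,z), I(z,y)) (over the grid) with I(x,y)."""
    lhs = max(t_luk(i_luk(x, z), i_luk(z, y)) for z in grid)
    return abs(lhs - i_luk(x, y)) < 1e-12

print(all(ghs_holds(x, y) for x in grid for y in grid))  # -> True
```

The supremum restricted to grid points is exact here because z = x always belongs to the grid; for other t-norm/implication pairs the equation may fail, which is precisely what the paper's characterization sorts out.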

Fuzzy Logic and Artificial Intelligence Problems

Frontmatter
Interval Type-2 Intuitionistic Fuzzy Logic Systems - A Comparative Evaluation

Several fuzzy modeling techniques have been employed for handling uncertainties in data. This study presents a comparative evaluation of a new class of interval type-2 fuzzy logic system (IT2FLS), namely the interval type-2 intuitionistic fuzzy logic system (IT2IFLS) of Takagi-Sugeno-Kang (TSK) type, against the classical IT2FLS and its type-1 variant (IFLS). Simulations are conducted using a real-world gas compression system (GCS) dataset. The study shows that, in this problem domain, the performance of the proposed framework, whose membership functions (MFs) and non-membership functions (NMFs) are each intervals, is superior to the classical IT2FLS with only MFs (upper and lower) and to the IFLS with MFs and NMFs that are not intervals.

Imo Eyoh, Robert John, Geert De Maere
Artificial Neural Networks and Fuzzy Logic for Specifying the Color of an Image Using Munsell Soil-Color Charts

The Munsell soil-color charts contain 238 standard color chips arranged in seven charts with Munsell notation. They are widely used to determine soil color by visual comparison, seeking the closest match between a soil sample and one of the chips. The Munsell designation of this chip (hue, value, and chroma) is assigned to the soil under study. However, the available chips represent only a subset of all possible soil colors, so the visual appearance for an observer usually lies between several chips. Our study proposes an intelligent system combining two soft computing techniques (Artificial Neural Networks and Fuzzy Logic Systems) aimed at finding a set of chips as similar as possible to a given soil sample, under the precondition that the soil sample is provided as an image taken by a digital camera or mobile phone. The system receives an image as input and returns a set of color-chip designations as output.

María Carmen Pegalajar, Manuel Sánchez-Marañón, Luis G. Baca Ruíz, Luis Mansilla, Miguel Delgado
Fuzzy Extensions of Conceptual Structures of Comparison

Comparing two items (objects, images) involves a set of relevant attributes whose values are compared. Such a comparison may be expressed in terms of different modalities such as identity, similarity, difference, opposition, and analogy. Recently, J.-Y. Béziau proposed an “analogical hexagon” that organizes the relations linking these modalities. The hexagon structure extends the logical square of opposition invented in Aristotle’s time (in relation with the theory of syllogisms). The interest of these structures has recently been advocated in logic and in artificial intelligence. When non-Boolean attributes are involved, elementary comparisons may be a matter of degree. Moreover, attributes may not have the same importance, and one might consider only most attributes rather than all of them, using operators such as the ordered weighted min and max. The paper studies in which ways the logical hexagon structure may be preserved in such gradual extensions. As an illustration, we start with the hexagon of equality and inequality due to Blanché and extend it with fuzzy equality and fuzzy inequality.

Didier Dubois, Henri Prade, Agnès Rico
New Negations on the Type-2 Membership Degrees

Hernández et al. [9] established the axioms that an operation must fulfill in order to be a negation on a bounded poset (partially ordered set), and in [14] they established the conditions that an operation must satisfy to be an aggregation operator on a bounded poset. In this work, we focus on the set of membership degrees of type-2 fuzzy sets, that is, the set M of functions from [0, 1] to [0, 1]. The negations on M with respect to each of the two partial orders defined on this set are presented for the first time. In addition, we show new negations on L (the set of normal and convex functions of M) that differ from the negations presented in [9], which were obtained by applying Zadeh’s Extension Principle. In particular, negations on M and on L are obtained from aggregation operators and negations. Among the results, a characterization of the strong negations that leave the constant function 1 fixed is given, and a new family of strong negations on L is presented.

Carmen Torres-Blanc, Susana Cubillo, Pablo Hernández-Varela
First Steps Towards Harnessing Partial Functions in Fuzzy Type Theory

In this paper we present how the theory of partial functions can be developed in the fuzzy type theory and show how the theory elaborated by Lapierre [3] and Lepage [4] can be included in it. Namely, the latter is developed as a special theory whose models contain the partial functions in the sense introduced by both authors.

Vilém Novák
On Hash Bipolar Division: An Enhanced Processing of Novel and Conventional Forms of Bipolar Division

In this paper, two issues of bipolar division are discussed. First, we outline some new operators for bipolar division, to enrich the interpretations of bipolar queries. In this context, we propose some extended bipolar division operators based on the connector “or else”, and we introduce a new bipolar division operator for the “Satisfied-Dissatisfied” approach. Second, we address the performance of the considered operators: we present an efficient method which allows handling several bipolar divisions in a unified processing. Our idea is to design new variants of the classical Hash-Division algorithm for bipolar division. The issue of ranking answers is also dealt with. Computational experiments demonstrate that the new variants outperform the conventional ones in terms of performance.

Noussaiba Benadjimi, Walid Hidouci, Allel Hadjali
A 2D-Approach Towards the Detection of Distress Using Fuzzy K-Nearest Neighbor

This paper focuses on a novel approach to distress detection, referred to as the 2D approach, using the fuzzy K-NN classification model. Unlike the traditional approach, where single emotions such as fear, anxiety, or anger were taken to indicate distress, the 2D approach introduces two phases of classification: the first checks the speech excitement level, referred to as arousal in previous research, and the second checks the speech’s polarity (negative or positive). Speech features are obtained from the Berlin Database of Emotional Speech (BDES), and feature selection is done using the forward selection (FS) method. Attaining a distress detection accuracy of 86.64% using fuzzy K-NN, the proposed 2D approach shows promise in enhancing the detection of emotional states that, like distress, can correspond to one or several underlying emotions. Application areas for distress detection include security, for detecting hostage scenarios, and health, for faster medical response.

Daniel Machanje, Joseph Orero, Christophe Marsala
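The classifier used in each phase above is fuzzy K-NN, which outputs class memberships rather than a hard label. A minimal sketch of the Keller-style membership computation, with entirely hypothetical 1-D features and a "neg"/"pos" polarity label standing in for the second phase (the real system uses multi-dimensional speech features from BDES):

```python
def fuzzy_knn(train, query, k=3, m=2.0):
    """Keller-style fuzzy K-NN: class memberships computed from the k
    nearest neighbours, each weighted by 1 / distance**(2/(m-1))."""
    nearest = sorted((abs(x - query), c) for x, c in train)[:k]
    weights = {}
    for d, c in nearest:
        w = 1.0 / (d ** (2.0 / (m - 1.0)) + 1e-9)  # eps avoids division by zero
        weights[c] = weights.get(c, 0.0) + w
    total = sum(weights.values())
    return {c: w / total for c, w in weights.items()}  # memberships sum to 1

# Hypothetical 1-D training points with polarity labels.
train = [(0.10, "neg"), (0.20, "neg"), (0.80, "pos"), (0.90, "pos")]
memb = fuzzy_knn(train, 0.15)
print(max(memb, key=memb.get))  # -> neg
```

Because the output is a membership vector, borderline utterances keep graded scores for both polarities instead of being forced into a single emotion, which is the motivation for using a fuzzy classifier in this setting.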

Fuzzy Mathematical Analysis and Applications

Frontmatter
Modified Methods of Capital Budgeting Under Uncertainties: An Approach Based on Fuzzy Numbers and Interval Arithmetic

The fuzzy modified net present value (Fuzzy MNPV) method for evaluation of non-conventional investment projects under uncertainty explicitly provided for the use of the opportunity costs associated with the interim cash flows of an investment project and eliminated the major problems of traditional capital budgeting methods. Based on the same assumptions that guided the development of that method, the current paper presents a unified capital budgeting solution, consisting of the modified internal rate of return (Fuzzy MIRR), the modified profitability index (Fuzzy MPI), and the modified total payback (Fuzzy MTPB). These methods are MNPV-consistent, maximize shareholder wealth and always lead to the same conditions of acceptance or rejection of investment projects.

Antonio Carlos de Souza Sampaio Filho, Marley M. B. R. Vellasco, Ricardo Tanscheit
Solving Job-Shop Scheduling Problems with Fuzzy Processing Times and Fuzzy Due Dates

This paper presents an iterative method for solving n-job, m-machine scheduling problems with fuzzy processing times and fuzzy due dates defined using third-party information coming from experts. The method is based on the cumulative membership function of a fuzzy set and finds an overall satisfaction degree among fuzzy processing times and fuzzy due dates.

Camilo Alejandro Bustos-Tellez, Jhoan Sebastian Tenjo-García, Juan Carlos Figueroa-García
Backmatter
Metadata
Title
Information Processing and Management of Uncertainty in Knowledge-Based Systems. Theory and Foundations
Editors
Jesús Medina
Manuel Ojeda-Aciego
José Luis Verdegay
David A. Pelta
Inma P. Cabrera
Prof. Dr. Bernadette Bouchon-Meunier
Ronald R. Yager
Copyright Year
2018
Electronic ISBN
978-3-319-91473-2
Print ISBN
978-3-319-91472-5
DOI
https://doi.org/10.1007/978-3-319-91473-2
