
2014 | Book

Information Processing and Management of Uncertainty in Knowledge-Based Systems

15th International Conference, IPMU 2014, Montpellier, France, July 15-19, 2014, Proceedings, Part I

Edited by: Anne Laurent, Olivier Strauss, Bernadette Bouchon-Meunier, Ronald R. Yager

Publisher: Springer International Publishing

Book Series: Communications in Computer and Information Science


About this book

These three volumes (CCIS 442, 443, 444) constitute the proceedings of the 15th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, IPMU 2014, held in Montpellier, France, July 15-19, 2014. The 180 revised full papers presented together with five invited talks were carefully reviewed and selected from numerous submissions. The papers are organized in topical sections on uncertainty and imprecision on the web of data; decision support and uncertainty management in agri-environment; fuzzy implications; clustering; fuzzy measures and integrals; non-classical logics; data analysis; real-world applications; aggregation; probabilistic networks; recommendation systems and social networks; fuzzy systems; fuzzy logic in boolean framework; management of uncertainty in social networks; from different to same, from imitation to analogy; soft computing and sensory analysis; database systems; fuzzy set theory; measurement and sensory information; aggregation; formal methods for vagueness and uncertainty in a many-valued realm; graduality; preferences; uncertainty management in machine learning; philosophy and history of soft computing; soft computing and sensory analysis; similarity analysis; fuzzy logic, formal concept analysis and rough set; intelligent databases and information systems; theory of evidence; aggregation functions; big data - the role of fuzzy methods; imprecise probabilities: from foundations to applications; multinomial logistic regression on Markov chains for crop rotation modelling; intelligent measurement and control for nonlinear systems.

Table of Contents

Frontmatter

Invited Talks

Preference Relations and Families of Probabilities: Different Sides of the Same Coin

The notion of preference is reviewed from different perspectives, including the Imprecise Probabilities’ approach. Formal connections between different streams of the literature are provided, and new definitions are proposed.

Inés Couso
Unifying Logic and Probability: A New Dawn for AI?

Logic and probability theory are two of the most important branches of mathematics and each has played a significant role in artificial intelligence (AI) research. Beginning with Leibniz, scholars have attempted to unify logic and probability. For “classical” AI, based largely on first-order logic, the purpose of such a unification is to handle uncertainty and facilitate learning from real data; for “modern” AI, based largely on probability theory, the purpose is to acquire formal languages with sufficient expressive power to handle complex domains and incorporate prior knowledge. This paper provides a brief summary of an invited talk describing efforts in these directions, focusing in particular on open-universe probability models that allow for uncertainty about the existence and identity of objects.

Stuart Russell

On Uncertainty and Imprecision on the Web of Data

Two Procedures for Analyzing the Reliability of Open Government Data

Open Government Data often contain information that, in more or less detail, regards private citizens. For this reason, before publishing them, public authorities manipulate data to remove any sensitive information while trying to preserve their reliability. This paper addresses the lack of tools aimed at measuring the reliability of these data. We present two procedures for assessing the reliability of Open Government Data: one based on a comparison between open and closed data, and the other based on the analysis of open data only. We evaluate the procedures on data from the data.police.uk website and from the Hampshire Police Constabulary in the United Kingdom. The procedures effectively allow estimating the reliability of open data, and that reliability turns out to be high even though the data are aggregated and smoothed.

Davide Ceolin, Luc Moreau, Kieron O’Hara, Wan Fokkink, Willem Robert van Hage, Valentina Maccatrozzo, Alistair Sackley, Guus Schreiber, Nigel Shadbolt
Classification with Evidential Associative Rules

Mining databases provides valuable information such as frequent patterns and, especially, associative rules. Associative rules have various applications and assets, mainly data classification. The appearance of new and complex data supports such as evidential databases has led to the redefinition of methods to extract pertinent rules. In this paper, we propose a new approach for extracting pertinent rules based on a redefinition of the confidence measure. The redefined confidence measure rests on conditional probability and is consistent with previous works. We also propose a classification approach that combines evidential associative rules within an information fusion system. The proposed methods are thoroughly experimented on several constructed evidential databases and show performance improvements.

Ahmed Samet, Eric Lefèvre, Sadok Ben Yahia
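The redefined confidence measure above builds on the classical one, which estimates the conditional probability P(B | A) from itemset supports. As a point of reference only (ordinary transactions, not an evidential database), a minimal sketch:

```python
# Classical association-rule confidence, the notion the evidential
# redefinition starts from; plain transactions, not an evidential database.
def support(itemset, transactions):
    """Fraction of transactions containing every item of `itemset`."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """conf(A -> B) = supp(A u B) / supp(A), an estimate of P(B | A)."""
    supp_a = support(antecedent, transactions)
    return 0.0 if supp_a == 0 else support(set(antecedent) | set(consequent), transactions) / supp_a

transactions = [{"bread", "milk"}, {"bread", "butter"}, {"bread", "milk", "butter"}, {"milk"}]
print(confidence({"bread"}, {"milk"}, transactions))  # 0.666... = supp 2/4 over supp 3/4
```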
Towards Evidence-Based Terminological Decision Trees

We propose a method that combines terminological decision trees and the Dempster-Shafer theory to support tasks like ontology completion. The goal is to build a predictive model that can cope with the epistemological uncertainty due to the Open World Assumption when reasoning with Web ontologies. With such models one can not only predict new (non-derivable) assertions for completing the ontology but also assess the quality of the induced axioms.

Giuseppe Rizzo, Claudia d’Amato, Nicola Fanizzi, Floriana Esposito
Uncertainty in Ontology Matching: A Decision Rule-Based Approach

Considering the high heterogeneity of the ontologies published on the web, ontology matching is a crucial issue whose aim is to establish links between an entity of a source ontology and one or several entities from a target ontology. Perfectible similarity measures, considered as sources of information, are combined to establish these links. The theory of belief functions is a powerful mathematical tool for combining such uncertain information. In this paper, we introduce a decision process based on a distance measure to identify the best possible matching entities for a given source entity.

Amira Essaid, Arnaud Martin, Grégory Smits, Boutheina Ben Yaghlane

Decision Support and Uncertainty Management in Agri-Environment

A Practical Application of Argumentation in French Agrifood Chains

Evaluating food quality is a complex process since it relies on numerous criteria historically grouped into four main types: nutritional, sensorial, practical and hygienic qualities. They may be complemented by other emerging concerns such as environmental impact, economic phenomena, etc. However, all these aspects of quality and their various components are not always compatible, and their simultaneous improvement is a problem that sometimes has no obvious solution, which constitutes a real issue for decision making. This paper proposes a decision support method guided by the objectives defined for the end products of an agrifood chain. It is materialized by a backward chaining approach based on argumentation.

Madalina Croitoru, Rallou Thomopoulos, Nouredine Tamani
Soft Fusion of Heterogeneous Image Time Series

In this contribution we analyze the problem of fusing time series of heterogeneous remote sensing images to serve classification and monitoring activities, which can aid farming applications such as crop classification, change detection and monitoring. We propose several soft fusion operators that are based on different assumptions and model distinct desired properties. Experiments carried out on various geographic regions illustrate the effectiveness of our proposal.

Mar Bisquert, Gloria Bordogna, Mirco Boschetti, Pascal Poncelet, Maguelonne Teisseire
Fuzzy Argumentation System for Decision Support

We introduce in this paper a quantitative preference-based argumentation system relying on the ASPIC argumentation framework and fuzzy set theory. The knowledge base is fuzzified to allow the experts to express their expertise (premises and rules) attached with grades of importance in the unit interval. Arguments are attached with a score aggregating the importance expressed on their premises and rules. Extensions are then computed, and the strength of each of them can also be obtained based on its strong arguments. The strengths are used to rank fuzzy extensions from the strongest to the weakest, upon which decisions can be made. The approach is finally used for decision making in a real-world application within the EcoBioCap project.

Nouredine Tamani, Madalina Croitoru
Applying a Fuzzy Decision Tree Approach to Soil Classification

As one of the most important factors that interfere in people's lives, soil is characterized by quantitative and qualitative features which describe not only the soil itself, but also the environment, the weather and the vegetation around it. Different types of soil can be identified by means of these features. A good soil classification is very important for making better use of the soil. Soil classification, when performed manually by experts, is not a simple task, since experts' opinions may vary considerably. Besides, different types of soil cannot be defined deterministically. With the objective of exploring an alternative approach to this problem, we investigate in this paper the application of an automatic procedure to generate a soil classifier from data, using a fuzzy decision tree induction algorithm. In order to compare the results obtained with the fuzzy decision tree classifier, we used two well-known methods for classifier generation: the classic decision tree induction algorithm C4.5 and the fuzzy rule induction algorithm FURIA.

Mariana V. Ribeiro, Luiz Manoel S. Cunha, Heloisa A. Camargo, Luiz Henrique A. Rodrigues
Towards the Use of Sequential Patterns for Detection and Characterization of Natural and Agricultural Areas

Nowadays, a huge amount of high-resolution satellite images is freely available. Such images allow researchers in environmental sciences to study the different natural habitats and farming practices remotely. However, satellite image content strongly depends on the season of acquisition. Due to the periodicity of natural and agricultural dynamics throughout the seasons, sequential patterns arise as a new opportunity to model the behaviour of these environments. In this paper, we describe some preliminary results obtained with a new framework for studying spatiotemporal evolutions over natural and agricultural areas using k-partite graphs and sequential patterns extracted from segmented Landsat images.

Fabio Guttler, Dino Ienco, Maguelonne Teisseire, Jordi Nin, Pascal Poncelet
Application of E²M Decision Trees to Rubber Quality Prediction

In many applications, data are often imperfect, incomplete or more generally uncertain. This imperfection has to be integrated into the learning process as information in itself. The E²M decision tree is a methodology that provides predictions from uncertain data modelled by belief functions. In this paper, the problem of rubber quality prediction is presented with a belief function modelling of some data uncertainties. Some resulting E²M decision trees are presented in order to improve the interpretation of the tree compared to standard decision trees.

Nicolas Sutton-Charani, Sébastien Destercke, Thierry Denœux
An Interval Programming Approach for an Operational Transportation Planning Problem

This paper deals with an interval programming approach for an operational transportation problem arising in a typical agricultural cooperative during the crop harvest time. More specifically, an interval programming model with uncertain coefficients occurring in the right-hand side and in the objective function is developed for the single-period multi-trip planning of a heterogeneous fleet of vehicles, while satisfying the stochastic seed storage requests, represented as interval numbers. The proposed single-period interval programming model is conceived and implemented for a real-life agricultural cooperative case study.

Valeria Borodin, Jean Bourtembourg, Faicel Hnaien, Nacima Labadie
Fuzzy Modeling of a Composite Agronomical Feature Using FisPro: The Case of Vine Vigor

Fuzzy logic is a powerful interface between linguistic and numerical spaces. It allows the design of transparent models based upon linguistic rules. The FisPro open source software includes learning algorithms as well as a user-friendly Java interface. In this paper, it is used to model a composite agronomical feature, vine vigor. The system behavior is characterized by its numerical accuracy and analyzed according to the induced knowledge. Well-known input-output relationships are identified, and some rules also reflect local interactions.

Cécile Coulon-Leroy, Brigitte Charnomordic, Marie Thiollet-Scholtus, Serge Guillaume

Fuzzy Implications

On Fuzzy Polynomial Implications

In this work, the class of fuzzy polynomial implications is introduced as those fuzzy implications whose expression is given by a polynomial of two variables. Some properties related to the values of the coefficients of the polynomial are studied in order to obtain a fuzzy implication. The polynomial implications of degree less than or equal to 3 are fully characterized. Among the implications obtained in these results there are some well-known implications, such as the Reichenbach implication.

Sebastia Massanet, Juan Vicente Riera, Daniel Ruiz-Aguilera
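For reference, the Reichenbach implication mentioned above is itself a polynomial of degree 2 in the two variables and satisfies the usual boundary conditions of a fuzzy implication (standard background, not quoted from the paper):

```latex
\[
  I_{RC}(x,y) = 1 - x + xy, \qquad
  I_{RC}(0,0) = I_{RC}(1,1) = 1, \quad I_{RC}(1,0) = 0,
  \qquad x, y \in [0,1].
\]
```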
Implications Satisfying the Law of Importation with a Given Uninorm

In this paper a characterization of all fuzzy implications with continuous e-natural negation that satisfy the law of importation with a given uninorm U is provided. The cases when the considered uninorm U is representable or a uninorm in ${\cal U}_{\min}$ are studied separately, and detailed descriptions of those implications with continuous natural negation with respect to e that satisfy the law of importation with a uninorm in these classes are given. In the process some important examples are included.

Sebastia Massanet, Joan Torrens
Laws of Contraposition and Law of Importation for Probabilistic Implications and Probabilistic S-implications

Recently, Grzegorzewski [5-7] introduced two new families of fuzzy implication functions called probabilistic implications and probabilistic S-implications. They are based on conditional copulas and make a bridge between probability theory and fuzzy logic. In the same article [7], the author gives a motivation for his idea and indicates some interesting connections between the new families of implications and the dependence structure of the underlying environment. In this paper the laws of contraposition and the law of importation are studied for these families of fuzzy implications.

Michał Baczyński, Przemysław Grzegorzewski, Wanda Niemyska
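The two properties studied in the entries above have standard formulations (background only, not quoted from the papers), with a conjunction T (a t-norm, or a uninorm U as in the previous entry) and a fuzzy negation N:

```latex
\[
  \text{(LI)}\;\; I\bigl(T(x,y),\,z\bigr) = I\bigl(x,\,I(y,z)\bigr),
  \qquad
  \text{(CP)}\;\; I(x,y) = I\bigl(N(y),\,N(x)\bigr),
  \qquad x, y, z \in [0,1].
\]
```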

Clustering

Sequential Clustering for Event Sequences and Its Impact on Next Process Step Prediction

Next-step prediction is an important problem in process analytics, and it can be used in process monitoring to preempt failure in business processes. We use logfiles from a workflow system that record the sequential execution of business processes. Each process execution results in a timestamped event. The main issue in analysing such event sequences is that they can be very diverse. Models that can effectively handle diverse sequences without losing the sequential nature of the data are desired. We propose an approach which clusters event sequences. Each cluster consists of similar sequences, and the challenge is to identify a similarity measure that can cope with the sequential nature of the data. After clustering we build individual predictive models for each group. This strategy addresses both the sequential and diverse characteristics of our data. We first employ K-means and extend it into a categorical-sequential clustering algorithm by combining it with sequential alignment. Finally, we treat each resulting cluster by building individual Markov models of different orders, expecting that the representative characteristics of each cluster are captured.

Mai Le, Detlef Nauck, Bogdan Gabrys, Trevor Martin
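A rough sketch of the overall pipeline described above, under the assumption that an alignment-derived similarity and a plain medoid-based clustering stand in for the authors' categorical-sequential K-means; the per-cluster model here is only first-order:

```python
# Sketch only: alignment-based distance + medoid clustering + per-cluster
# first-order Markov counts for next-step prediction (not the exact algorithm).
from collections import defaultdict
from difflib import SequenceMatcher
import random

def distance(a, b):
    """1 - alignment similarity between two event sequences."""
    return 1.0 - SequenceMatcher(None, a, b).ratio()

def k_medoids(sequences, k, iters=20, seed=0):
    medoids = random.Random(seed).sample(range(len(sequences)), k)
    clusters = {}
    for _ in range(iters):
        clusters = defaultdict(list)
        for i, s in enumerate(sequences):
            clusters[min(medoids, key=lambda m: distance(s, sequences[m]))].append(i)
        medoids = [min(idx, key=lambda c: sum(distance(sequences[c], sequences[j]) for j in idx))
                   for idx in clusters.values()]
    return clusters

def markov_counts(cluster_sequences):
    """First-order transition counts, usable for next-step prediction."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in cluster_sequences:
        for cur, nxt in zip(seq, seq[1:]):
            counts[cur][nxt] += 1
    return counts

logs = [list("abcabc"), list("abcab"), list("xyzxy"), list("xyzx")]
clusters = k_medoids(logs, k=2)
models = {m: markov_counts([logs[i] for i in idx]) for m, idx in clusters.items()}
```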
A Fuzzy Semisupervised Clustering Method: Application to the Classification of Scientific Publications

This paper introduces a new method of fuzzy semisupervised hierarchical clustering using fuzzy instance-level constraints. It introduces the concepts of fuzzy must-link and fuzzy cannot-link constraints and uses them to find the optimum α-cut of a dendrogram. This method is used to approach the problem of classifying scientific publications in web digital libraries. It is tested on real data from that problem against classical methods and crisp semisupervised hierarchical clustering.

Irene Diaz-Valenzuela, Maria J. Martin-Bautista, Maria-Amparo Vila
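For background, the crisp operation that the fuzzy constraints are used to tune is the cut of a dendrogram at a chosen level; a minimal SciPy sketch (the fuzzy must-link/cannot-link machinery of the paper is not reproduced, and the cut level here is an arbitrary distance threshold):

```python
# Crisp cut of a dendrogram at a chosen level; the paper's contribution is the
# fuzzy-constraint-driven selection of that level, which is not shown here.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
Z = linkage(X, method="average")            # hierarchical clustering (dendrogram)
alpha = 1.0                                  # cut level (distance threshold)
labels = fcluster(Z, t=alpha, criterion="distance")
print(labels)                                # two clusters, e.g. [1 1 2 2]
```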
Using a Fuzzy Based Pseudometric in Classification

In this work, we propose a pseudometric based on a fuzzy relation, which is itself derived from a fuzzy partition. This pseudometric is a metric in the particular case in which the fuzzy partition is composed solely of triangular fuzzy sets. We prove that these functions are indeed a pseudometric and a metric, and illustrate their use in an experiment on the classification of land use in an area of the Brazilian Amazon region.

Sandra Sandri, Flávia Martins-Bedê, Luciano Dutra

Fuzzy Measures and Integrals

Quasi-Lovász Extensions on Bounded Chains

We study quasi-Lovász extensions as mappings f defined on a nonempty bounded chain C, and which can be factorized as f(x_1, …, x_n) = L(ϕ(x_1), …, ϕ(x_n)), where L is the Lovász extension of a pseudo-Boolean function and ϕ is an order-preserving function.

We axiomatize these mappings by natural extensions to properties considered in the authors’ previous work. Our motivation is rooted in decision making under uncertainty: such quasi-Lovász extensions subsume overall preference functionals associated with discrete Choquet integrals whose variables take values on an ordinal scale C and are transformed by a given utility function ϕ.

Furthermore, we make some remarks on possible lattice-based variants and bipolar extensions to be considered in an upcoming contribution by the authors.

Miguel Couceiro, Jean-Luc Marichal
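As background for the factorization above (standard material, not quoted from the abstract): writing the pseudo-Boolean function in its multilinear form with coefficients a_T, its Lovász extension is obtained by replacing products with minima, so the quasi-Lovász extension reads (with \varphi denoting the order-preserving map ϕ):

```latex
\[
  \psi(x) = a_\emptyset + \sum_{\emptyset \neq T \subseteq [n]} a_T \prod_{i \in T} x_i
  \;\;\Longrightarrow\;\;
  L_\psi(x) = a_\emptyset + \sum_{\emptyset \neq T \subseteq [n]} a_T \min_{i \in T} x_i,
  \qquad
  f(x_1,\ldots,x_n) = L_\psi\bigl(\varphi(x_1),\ldots,\varphi(x_n)\bigr).
\]
```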
Efficient and Scalable Nonlinear Multiple Kernel Aggregation Using the Choquet Integral

Previously, we investigated the definition and applicability of the fuzzy integral (FI) for nonlinear multiple kernel (MK) aggregation in pattern recognition. Kernel theory provides an elegant way to map multi-source heterogeneous data into a combined homogeneous (implicit) space in which aggregation can be carried out. The focus of our initial work was the Choquet FI, a per-matrix sorting based on the quality of a base learner, and learning was restricted to the Sugeno λ-fuzzy measure (FM). Herein, we investigate what representations of FMs and FIs are valid and ideal for nonlinear MK aggregation. We also discuss the benefit of our approach over the linear convex sum MK formulation in machine learning. Furthermore, we study the Möbius transform and k-additive integral for scalable MK learning (MKL). Last, we discuss an extension to our genetic algorithm (GA) based MKL algorithm, called FIGA, with respect to a combination of multiple lightweight FMs and FIs.

Lequn Hu, Derek T. Anderson, Timothy C. Havens, James M. Keller
On the Informational Comparison of Qualitative Fuzzy Measures

Fuzzy measures or capacities are the most general representation of uncertainty functions. However, this general class has been little explored from the point of view of its information content, when degrees of uncertainty are not supposed to be numerical, and belong to a finite qualitative scale, except in the case of possibility or necessity measures. The thrust of the paper is to define an ordering relation on the set of qualitative capacities expressing the idea that one is more informative than another, in agreement with the possibilistic notion of relative specificity. To this aim, we show that the class of qualitative capacities can be partitioned into equivalence classes of functions containing the same amount of information. They only differ by the underlying epistemic attitude such as pessimism or optimism. A meaningful information ordering between capacities can be defined on the basis of the most pessimistic (resp. optimistic) representatives of their equivalence classes. It is shown that, while qualitative capacities bear strong similarities to belief functions, such an analogy can be misleading when it comes to information content.

Didier Dubois, Henri Prade, Agnès Rico
Maxitive Integral of Real-Valued Functions

The paper pursues the definition of a maxitive integral on all real-valued functions (i.e., the integral of the pointwise maximum of two functions must be the maximum of their integrals). This definition is not determined by maxitivity alone: additional requirements on the integral are necessary. The paper studies the consequences of additional requirements of invariance with respect to affine transformations of the real line.

Marco E. G. V. Cattaneo
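Written out, the maxitivity requirement stated in this abstract reads as follows; the ∨-integral notation is ours, not the paper's:

```latex
\[
  \int^{\vee} (f \vee g)\, d\mu
  \;=\;
  \Bigl(\int^{\vee} f\, d\mu\Bigr) \vee \Bigl(\int^{\vee} g\, d\mu\Bigr)
  \qquad \text{for all real-valued functions } f, g,
\]
```

where ∨ denotes the pointwise maximum of functions on the left-hand side and the maximum of the two integral values on the right-hand side.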
Fuzzy Weber Sets and Lovász Extensions of Cooperative Games

This paper investigates fuzzy extensions of cooperative games and the coincidence of the solutions for fuzzy and crisp games. We first show that an exact game has an exact fuzzy extension such that its fuzzy core coincides with the core. For games with empty cores, we exploit Lovász extensions to establish the coincidence of Weber sets for fuzzy and crisp games.

Nobusumi Sagara
Set Function Representations of Alternatives’ Relative Features

In a weighted sum model such as the Analytic Hierarchy Process, a set function value is constructed from the weights of the model. In this paper, we use relative individual scores to propose a set function that shows the features of an alternative. The set function value for the alternative is calculated by averaging the values of the set function representation of the weights generated when the alternative has the highest comprehensive score. By interpreting the functions, we can understand the features of an alternative. We discuss the properties of the set functions and extend them to Choquet integral models.

Eiichiro Takahagi
2-additive Choquet Optimal Solutions in Multiobjective Optimization Problems

In this paper, we propose a sufficient condition for a solution to be optimal for a 2-additive Choquet integral in the context of multiobjective combinatorial optimization problems. A 2-additive Choquet optimal solution is a solution that optimizes at least one set of parameters of the 2-additive Choquet integral. We also present a method to generate 2-additive Choquet optimal solutions of multiobjective combinatorial optimization problems. The method is experimented on some Pareto fronts and the results are analyzed.

Thibaut Lust, Antoine Rolland
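As background for the parameters mentioned above (a commonly used expression, not quoted from the paper), a 2-additive Choquet integral can be written from the Shapley values φ_i and the pairwise interaction indices I_{ij} of the capacity:

```latex
\[
  C_\mu(x) \;=\; \sum_{I_{ij} > 0} I_{ij}\,(x_i \wedge x_j)
            \;+\; \sum_{I_{ij} < 0} |I_{ij}|\,(x_i \vee x_j)
            \;+\; \sum_{i} x_i \Bigl(\phi_i - \tfrac{1}{2}\sum_{j \ne i} |I_{ij}|\Bigr),
\]
```

so 'optimizing at least one set of parameters' amounts to finding at least one admissible choice of the φ_i and I_{ij} for which the solution is optimal.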
A Characterization of the 2-Additive Symmetric Choquet Integral Using Trinary Alternatives

In a context of Multiple Criteria Decision Aid, we present some necessary and sufficient conditions to obtain a symmetric Choquet integral compatible with some preferences on a particular set of alternatives. These axioms are based on the notion of strict cycle and the MOPI conditions.

Brice Mayag
Choquet Integral on Multisets

Fuzzy measures on multisets are studied in this paper. We show that a class of multisets on a finite space can be represented as a subset of the positive integers. Comonotonicity for multisets is defined. We show that a fuzzy measure on multisets with some comonotonicity condition can be represented by a generalized fuzzy integral.

Yasuo Narukawa, Vicenç Torra
Rescaling for Evaluations Using Inclusion-Exclusion Integral

In multivariate analyses, the distributions of explanatory variables generally show deviations that depend on their individual characteristics, and eliminating such deviations is often beneficial. We propose two algorithms for rescaling raw data and verify their validity using real, reliable big data.

Aoi Honda, Ryoji Fukuda, Jun Okamoto
Construction of a Bi-capacity and Its Utility Functions without any Commensurability Assumption in Multi-criteria Decision Making

We consider a multi-criteria evaluation function U defined over a Cartesian product of attributes. We assume that U is written as the combination of an aggregation function and one value function over each attribute. The aggregation function is assumed to be a Choquet integral w.r.t. an unknown bi-capacity. The problem we wish to address in this paper is the following one: if U is known, can we construct both the value functions and the bi-capacity? The approaches that have been developed so far in the literature to answer this question in an analytical way assume some commensurability hypothesis. We propose in this paper a method to construct the value functions and the capacity without any commensurability assumption. Moreover, we show that the construction of the value functions is unique up to an affine transformation.

Christophe Labreuche

Non-Classical Logics

Multi-valued Representation of Neutrosophic Information

The paper presents three variants for the multi-valued representation of neutrosophic information. The three representations are provided in the framework of multi-valued logics, and calculation formulae are given for the following neutrosophic features: truth, falsity, neutrality, undefinedness, saturation, contradiction and ambiguity. In addition, net-truth, definedness, neutrosophic score and neutrosophic indeterminacy are defined.

Vasile Patrascu
Formalising Information Scoring in a Multivalued Logic Framework

This paper addresses the task of information scoring seen as measuring the degree of trust that can be invested in a piece of information. To this end, it proposes to model the trust building process as the sequential integration of relevant dimensions. It also proposes to formalise both the degree of trust and the process in an extended multivalued logic framework that distinguishes between an indifferent level and the actual impossibility to measure. To formalise the process, it proposes multivalued combination operators matching the desired behaviours.

Adrien Revault d’Allonnes, Marie-Jeanne Lesot
Tableau Calculus for Basic Fuzzy Logic BL

In this paper we present a tableau calculus for BL, the basic fuzzy logic introduced by Petr Hájek in his monograph Metamathematics of Fuzzy Logic. We show that it is sound and complete with respect to continuous t-norms, and demonstrate the refutational procedure and the search-for-models procedure on a selected example. The idea of the calculus is based on the decomposition theorem for a continuous t-norm, by which this operation is shown to be equivalent to the ordinal sum of a family of t-norms defined on countably many intervals.

Agnieszka Kułacka
Possibilistic vs. Relational Semantics for Logics of Incomplete Information

This paper proposes an extension of the MEL logic to a language containing modal formulae of depth 0 or 1 only. MEL is a logic of incomplete information where an agent can express both beliefs and explicitly ignored facts, that only uses modal formulae of depth 1, and no objective ones. The extended logic, called MEL+, has the same axioms as, and is in some sense equivalent to, S5 with a restricted language, but with the same expressive power. The semantics is not based on Kripke models with equivalence relations, but on pairs made of an interpretation (representing the real state of facts) and a non-empty set of possible interpretations (representing an epistemic state). Soundness and completeness are established. We provide a rationale for using our approach when an agent reasons about what is known of the epistemic state of another agent and compares it with what is known about the real world. Our approach can be viewed as an alternative to the basic epistemic logic not concerned with introspection. We discuss the difference with S5 used as a logic for rough sets, and the similarity with some previous non-monotonic logics of knowledge is highlighted.

Mohua Banerjee, Didier Dubois, Lluis Godo
Resolution in Linguistic First Order Logic Based on Linear Symmetrical Hedge Algebra

This paper focuses on resolution in linguistic first order logic with truth value taken from linear symmetrical hedge algebra. We build the basic components of linguistic first order logic, including syntax and semantics. We present a resolution principle for our logic to resolve on two clauses having converse linguistic truth values. Since linguistic information is uncertain, inference in our linguistic logic is approximate. Therefore, we introduce the concept of reliability in order to capture the natural approximation of the resolution inference rule.

Thi-Minh-Tam Nguyen, Viet-Trung Vu, The-Vinh Doan, Duc-Khanh Tran

Data Analysis

A Fuzzy Set Based Evaluation of Suppliers on Delivery, Front Office Quality and Value-Added Services

Fuzzy probabilities are used in an algorithmic process to address the ambiguity and uncertainty in supplier selection. Supplier selection is receiving increased focus in supply chain management (SCM) and was the impetus of a survey sent to 3000 companies that deal with seven industry-dominating suppliers. This study focuses on three criteria (delivery, front-office quality, and value-added services), each having four attributes. The respondent data are partitioned, the algorithm is applied to the twelve aspects of the criteria using a spreadsheet program, the results are analyzed, and a weighted scoring approach to rank-order the suppliers is discussed.

Margaret F. Shipley, Gary L. Stading, Jonathan Davis
A Fuzzy Rulebase Approach to Remap Gridded Spatial Data: Initial Observations

In many fields of research where different gridded spatial data need to be processed, the grids do not align properly. This can happen for a multitude of reasons, and it complicates drawing conclusions and further processing the data; it requires one grid to be transformed to match the other grid. In this article, we present the first results of a completely new approach to transforming data that are represented in one grid so that they match a given target grid. The approach uses techniques from artificial intelligence and simulates intelligent reasoning on how the grid can be transformed, using additionally available information to estimate the underlying distribution. The article describes the algorithm, and results on artificial datasets are discussed.

Jörg Verstraete
Fast and Incremental Computation for the Erosion Score

The erosion score is a mathematical morphology tool used primarily to detect periodicity in data. In this paper, three new computation methods are proposed to decrease its computational cost and, in an incremental variant, to allow the processing of data streams. Experimental results show a significant decrease in computation time, especially for the efficient levelwise incremental approach, which is able to process a one-million-point data stream in 1.5 s.

Gilles Moyse, Marie-Jeanne Lesot
Complexity of Rule Sets Induced from Incomplete Data Sets Using Global Probabilistic Approximations

We consider incomplete data sets using two interpretations of missing attribute values: lost values and “do not care” conditions. Additionally, in our data mining experiments we use global probabilistic approximations (singleton, subset and concept). The results of validation of such data, using global probabilistic approximations, were published recently. A novelty of this paper is research on the complexity of corresponding rule sets, in terms of the number of rules and number of rule conditions. Our main result is that the simplest rule sets are induced from data sets in which missing attribute values are interpreted as “do not care” conditions where rule sets are induced using subset probabilistic approximations.

Patrick G. Clark, Jerzy W. Grzymala-Busse
A Possibilistic View of Binomial Parameter Estimation

This paper deals with the possibilistic roots of binomial parameter interval estimation. It shows that conventional probability methods consist in obtaining confidence intervals representing de dicto parameter uncertainty from coverage intervals representing de re uncertainty of observed samples. We relate the different types of coverage intervals to equivalent de re possibility distributions which lead, after inversion, to de dicto possibility distributions corresponding to the stacking up of all confidence intervals at all levels. The different choices for the centre of the intervals correspond to the different existing methods; in the same vein, a novel one centred on the mean is proposed.

Gilles Mauris

Real-World Applications

A New Approach to Economic Production Quantity Problems with Fuzzy Parameters and Inventory Constraint

In this paper, we develop a new multi-item economic production quantity model with limited storage space. This new model is then extended to allow for fuzzy demand and solved numerically with a non-linear programming solver for two cases: in the first case the optimization problem is defuzzified with the signed distance measure, and in the second case the storage constraint needs to be fulfilled only to a certain degree of possibility. Both cases are solved and illustrated with an example.

József Mezei, Kaj-Mikael Björk
Approximate Reasoning for an Efficient, Scalable and Simple Thermal Control Enhancement

In order to ensure thermal energy efficiency and follow government thermal guidance, more flexible and efficient thermal controls for buildings are required. This paper proposes a scalable, efficient and simple thermal control approach based on imprecise knowledge of buildings' specificities. Its main principle is a weak data-dependency, which ensures the scalability and simplicity of our thermal enhancement approach. To this end, an extended thermal qualitative model is proposed. It is based on a qualitative description of the influences that action parameters may have on buildings' thermal performance. Our thermal qualitative model is enriched by collecting and assessing previous thermal control performances. Thus, approximate reasoning for a smart thermal control becomes effective based on our extended thermal qualitative model.

Afef Denguir, François Trousset, Jacky Montmain
Robust Statistical Process Monitoring for Biological Nutrient Removal Plants

This paper presents an approach that combines the robust fuzzy principal component analysis (RFPCA) technique with the multiscale principal component analysis (MSPCA) methodology. The two typical issues of industrial data, outliers and changing process conditions, are thus addressed by the resulting MS-RFPCA methodology. RFPCA is proved to be effective in mitigating the impact of noise, and MSPCA has become necessary due to the nature of complex systems in which operations occur at different scales. The efficiency of the proposed technique is illustrated on a simulated benchmark of a biological nitrogen removal process.

Nabila Heloulou, Messaoud Ramdani
Considering Psychological Conditions in a Tsunami Evacuation Simulation

The Great East Japan Earthquake occurred at 14:46 JST on Friday, March 11, 2011. It was the most powerful earthquake to have hit Japan and one of the five most powerful earthquakes in the world since modern recordkeeping began in 1900. The earthquake triggered an extremely destructive tsunami with waves of up to 40.5 m in height. In this paper, in preparation for a possible Nankai Trough earthquake, we create a multi-agent tsunami evacuation simulation of Kure city to analyze the evacuation process and determine solutions to any problems. More specifically, we focus on the psychological conditions of people in the disaster area. During emergencies, people are said to fall into a psychological condition in which they fail to evacuate. Based on the simulation results, we can confirm that people under such psychological conditions require more time to evacuate than those in a normal frame of mind. Thus, people need to know more about these psychological conditions during disasters in order to evacuate safely and efficiently.

Saori Iwanaga, Yoshinori Matsuura
β-Robust Solutions for the Fuzzy Open Shop Scheduling

We consider the open shop scheduling problem with uncertain durations modelled as fuzzy numbers. We define the concepts of necessary and possible β-robustness of schedules and set as our goal to maximise them. Additionally, we propose to assess solution robustness by means of Monte Carlo simulations. Experimental results using a genetic algorithm illustrate the proposals.

Juan José Palacios, Inés González-Rodríguez, Camino R. Vela, Jorge Puente Peinador

Aggregation

Aggregation Operators on Bounded Partially Ordered Sets, Aggregative Spaces and Their Duality

The present paper introduces aggregative spaces and their category AGS, and then establishes a dual adjunction between AGS and the category Agop of aggregation operators on bounded partially ordered sets. Spatial aggregation operators and sober aggregative spaces, enabling us to restrict the dual adjunction between AGS and Agop to a dual equivalence between the full subcategory of Agop consisting of spatial aggregation operators and the full subcategory of AGS consisting of sober aggregative spaces, will also be subjects of this paper.

Mustafa Demirci
Smart Fuzzy Weighted Averages of Information Elicited through Fuzzy Numbers

We illustrate a preliminary proposal of weighted fuzzy averages between two membership functions. Conflicts, as well as agreements, between the different sources of information in the two new operators are endogenously embedded inside the average weights. The proposal is motivated by the practical problem of assessing the fuzzy volatility parameter in the Black and Scholes environment via alternative estimators.

Andrea Capotorti, Gianna Figá-Talamanca
Analytical Hierarchy Process under Group Decision Making with Some Induced Aggregation Operators

This paper focuses on the extension of the Analytical Hierarchy Process under Group Decision Making (AHP-GDM) with some induced aggregation operators. This extension generalizes the aggregation process used in AHP-GDM by allowing more flexibility in the specific problem under consideration. The Induced Ordered Weighted Average (IOWA) operator is a promising tool for decision making, with the ability to reflect the complex attitudinal character of decision makers. The Maximum Entropy OWA (MEOWA), which is based on the maximum entropy principle and the level of ‘orness’, is a systematic way to derive weights for decision analysis. In this paper, the focus is on integrating some induced aggregation operators with MEOWA-based AHP-GDM as an extension model. An illustrative example is presented to show the results obtained with different types of aggregation operators.

Binyamin Yusoff, José Maria Merigó Lindahl
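For reference (standard definitions, not quoted from the paper): an OWA operator aggregates a reordered input with a weight vector, its attitudinal character is measured by the orness, and the MEOWA weights maximize the entropy of the weight vector for a prescribed orness level:

```latex
\[
  \mathrm{OWA}_w(x_1,\ldots,x_n) = \sum_{j=1}^{n} w_j\, x_{\sigma(j)},
  \qquad
  \mathrm{orness}(w) = \frac{1}{n-1}\sum_{j=1}^{n} (n-j)\, w_j,
  \qquad
  H(w) = -\sum_{j=1}^{n} w_j \ln w_j,
\]
```

where x_{σ(1)} ≥ … ≥ x_{σ(n)} is the input sorted in decreasing order, w_j ≥ 0 and Σ_j w_j = 1; the induced (IOWA) variant sorts by an auxiliary order-inducing variable instead of by the values themselves.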
Calibration of Utility Function and Mixture Premium

Calibration of the different types of utility functions of money is discussed in this paper. This calibration is based on an expected utility maximization of different alternatives of investment strategies which are offered to persons in a short questionnaire. Investigated utility functions have different Arrow-Pratt absolute and relative risk aversion coefficients. Moreover, the paper proposes a basic concept of uncertainty modeling by the chosen utility functions for the determination of so-called maximum mixture premium in non-life insurance. This concept is based on aggregation of maximum premiums by so-called mixture function with the selected weighting function. A case study is included.

Jana Špirková

Probabilistic Networks

The General Expression of the Prior Convergence Error: A Proof

In [2], we introduced the notion of the parental synergy. In the same paper, moreover, an expression was advanced for the prior convergence error (the error which is found in the marginal probabilities computed for a node when the parents of this node are wrongfully assumed to be independent), in which the parental synergy has a key position as weighting factor. This key position suggests that the parental synergy captures a fundamental feature of a Bayesian network. In this paper a proof is provided for the correctness of the conjectured expression of the prior convergence error.

Janneke H. Bolt
On SPI for Evaluating Influence Diagrams

An Influence Diagram is a probabilistic graphical model used to represent and solve decision problems under uncertainty. Its evaluation requires performing a series of combinations and marginalizations with the potentials attached to the Influence Diagram. Finding an optimal order for these operations, which is NP-hard, is of crucial importance for the efficiency of the evaluation. The SPI algorithm treats the evaluation as a combinatorial factorization problem. In this paper, we describe how the principles of SPI can be used to solve Influence Diagrams. We also include an evaluation of different combination selection heuristics and a comparison with the variable elimination algorithm.

Rafael Cabañas, Anders L. Madsen, Andrés Cano, Manuel Gómez-Olmedo
On Causal Compositional Models: Simple Examples

The “algebraic” form of representation of probabilistic causal systems by compositional models seems to be quite useful and advantageous for two reasons. First, decomposition of the model into its low-dimensional parts makes some of the computations feasible, and, second, it appears that within these models, both conditioning and intervention can be realized as a composition of the model with a degenerate one-dimensional distribution. The syntax of these two computational processes is similar; they differ by just one pair of brackets. Moreover, as shown on examples in the last part of this paper, these models can also cope with the problem of eliminating unobserved variables.

Radim Jiroušek
How to Create Better Performing Bayesian Networks: A Heuristic Approach for Variable Selection

Variable selection in Bayesian networks is necessary to assure the quality of the learned network structure. Cinicioglu & Shenoy (2012) suggested an approach for variable selection in Bayesian networks where a score, S_j, is developed to assess whether each variable should be included in the final Bayesian network. However, with this method the variables without parents or children are penalized, which affects the performance of the learned network. To eliminate that drawback, in this paper we develop a new score, NS_j. We measure the performance of this new heuristic in terms of the prediction capacity of the learned network and its lift over marginal, and evaluate its success by comparing it with the results obtained by the previously developed S_j score. For the illustration of the developed heuristic and the comparison of results, credit score data is used.

Esma Nur Cinicioglu, Gülseren Büyükuğur

Recommendation Systems and Social Networks

A Highly Automated Recommender System Based on a Possibilistic Interpretation of a Sentiment Analysis

This paper proposes an original recommender system (RS) based upon automatic extraction of trends from opinions and a multicriteria, multi-actor assessment model. Our RS tries to make optimal use of the information available on the web in order to reduce as much as possible the complex and tedious steps of multicriteria assessment and of identifying users' preference models. It may be applied as soon as i) overall assessments of competing entities are provided by trade magazines and ii) web users' reviews in natural language related to some characteristics of the assessed entities are available. Recommendation is then based on the capacity of the RS to associate a web user with a trade magazine that conveys the same values as the user and thus represents a reliable personalized source of information. Possibility theory is used to take into account the subjectivity of the reviews. Finally, a case study concerning movie recommendations is presented.

Abdelhak Imoussaten, Benjamin Duthil, François Trousset, Jacky Montmain
Suggesting Recommendations Using Pythagorean Fuzzy Sets illustrated Using Netflix Movie Data

The web can be perceived as a huge repository of items, and users’ activities can be seen as processes of searching for items of interest. Recommender systems try to estimate what items users may like based on similarities between users, their activities, or on explicitly specified preferences. Users do not have any influence on item selection processes.

In this paper we propose a novel collaborative-based recommender system that provides a user with the ability to control a process of constructing a list of suggested items. This control is accomplished via explicit requirements regarding rigorousness of identifying users who become a reference base for generating suggestions. Additionally, we propose a new way of ranking items rated by multiple users. The approach is based on Pythagorean fuzzy sets and takes into account not only assigned rates but also their number. The proposed approach is used to generate lists of recommended movies from the Netflix competition database.

Marek Z. Reformat, Ronald R. Yager
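As background for the Pythagorean fuzzy ranking idea above, a small illustrative sketch; the construction of the (μ, ν) pairs from rates and the score function used here are standard textbook choices, not necessarily the paper's:

```python
# Illustrative only: a Pythagorean fuzzy pair (mu, nu) must satisfy
# mu**2 + nu**2 <= 1; a common score function for ranking such pairs is
# s = mu**2 - nu**2.
def is_pythagorean(mu, nu, tol=1e-9):
    return mu ** 2 + nu ** 2 <= 1.0 + tol

def score(mu, nu):
    return mu ** 2 - nu ** 2

# Hypothetical per-movie (support, opposition) degrees derived from user rates;
# note (0.8, 0.5) is allowed here even though 0.8 + 0.5 > 1.
movies = {"Movie A": (0.8, 0.5), "Movie B": (0.6, 0.3)}
assert all(is_pythagorean(mu, nu) for mu, nu in movies.values())
ranking = sorted(movies, key=lambda m: score(*movies[m]), reverse=True)
print(ranking)  # ['Movie A', 'Movie B']
```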
Evidential Communities for Complex Networks

Community detection is of great importance for understanding graph structure in social networks. The communities in real-world networks are often overlapping, i.e. some nodes may be members of multiple clusters. How to uncover the overlapping communities/clusters in a complex network is a general problem in data mining of network data sets. In this paper, a novel algorithm to identify overlapping communities in complex networks by a combination of an evidential modularity function, a spectral mapping method and evidential c-means clustering is devised. Experimental results indicate that this detection approach can take advantage of the theory of belief functions and performs well both at detecting community structure and at determining the appropriate number of clusters. Moreover, the credal partition obtained by the proposed method could give us a deeper insight into the graph structure.

Kuang Zhou, Arnaud Martin, Quan Pan

Fuzzy Systems

Probabilistic Fuzzy Systems as Additive Fuzzy Systems

Probabilistic fuzzy systems combine a linguistic description of the system behaviour with statistical properties of data. They were originally derived from Zadeh's concept of the probability of a fuzzy event. Two possible and equivalent additive reasoning schemes were proposed, which lead to the estimation of the output's conditional probability density. In this work we take a complementary approach and derive a probabilistic fuzzy system from an additive fuzzy system. We show that some fuzzy systems with universal approximation capabilities can compute the same expected output value as probabilistic fuzzy systems, and we discuss some similarities and differences between them. A practical consequence of this functional equivalence result is that learning algorithms, optimization techniques and design issues can, under certain circumstances, be transferred across the different paradigms.

Rui Jorge Almeida, Nick Verbeek, Uzay Kaymak, João Miguel Da Costa Sousa
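Zadeh's probability of a fuzzy event, referred to above, is standard background (not quoted from the paper): for a fuzzy event A with membership function μ_A and a probability density or mass p,

```latex
\[
  P(A) = \int \mu_A(x)\, p(x)\, dx
  \qquad\text{or, in the discrete case,}\qquad
  P(A) = \sum_{i} \mu_A(x_i)\, p(x_i).
\]
```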
Measure Inputs to Fuzzy Rules

Our concern is with the determination of the firing level of the antecedent fuzzy set in a fuzzy systems model rule base. We first consider the case where the input information is also expressed in terms of a normal fuzzy set. We provide the requirements needed by any formulation of this operation. We next consider the case when the input information is expressed using a measure. Here we also provide the requirements for any formulation that can be used to determine the firing level of the antecedent fuzzy set when the input information is a measure. We provide some examples of these formulations. Since a probability distribution is a special case of a measure we are able to determine the firing level of fuzzy rules with probabilistic inputs.

Ronald R. Yager
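Two standard ways of computing a rule's firing level, matching the two situations discussed above, are sketched below over a finite domain; the specific formulations and requirements in the paper may differ (the sup-min matching and the expected membership are assumptions of this sketch, not quotations):

```python
# Sketch: firing level of a rule antecedent for (a) a fuzzy input, via sup-min
# matching, and (b) a probabilistic input, via expected membership.
antecedent = {0: 0.0, 1: 0.2, 2: 1.0, 3: 0.6, 4: 0.1}     # fuzzy set "about 2"

def firing_fuzzy(antecedent, fuzzy_input):
    """Possibility of the antecedent given a fuzzy input: sup of pointwise min."""
    return max(min(antecedent[x], fuzzy_input[x]) for x in antecedent)

def firing_probabilistic(antecedent, prob_input):
    """Expected membership when the input is a probability distribution."""
    return sum(antecedent[x] * prob_input[x] for x in antecedent)

fuzzy_input = {0: 0.0, 1: 1.0, 2: 0.5, 3: 0.0, 4: 0.0}     # fuzzy set "about 1"
prob_input = {0: 0.1, 1: 0.4, 2: 0.3, 3: 0.2, 4: 0.0}      # probability mass
print(firing_fuzzy(antecedent, fuzzy_input))               # 0.5
print(firing_probabilistic(antecedent, prob_input))        # 0.2*0.4 + 1.0*0.3 + 0.6*0.2 = 0.5
```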
Interpretability in Fuzzy Systems Optimization: A Topological Approach

When dealing with complex problems, it is often the case that fuzzy systems must undergo an optimization process. During this process, the preservation of interpretability is a major concern. Here we present a new mathematical framework to analyze the notion of interpretability of a fuzzy partition, and a generic algorithm to preserve it. This approach is rather flexible and it helps to highly automatize the optimization process. Some tools come from the field of algebraic topology.

Ricardo de Aldama, Michaël Aupetit
Backmatter
Metadata
Title
Information Processing and Management of Uncertainty in Knowledge-Based Systems
Edited by
Anne Laurent
Olivier Strauss
Bernadette Bouchon-Meunier
Ronald R. Yager
Copyright Year
2014
Publisher
Springer International Publishing
Electronic ISBN
978-3-319-08795-5
Print ISBN
978-3-319-08794-8
DOI
https://doi.org/10.1007/978-3-319-08795-5
