
2018 | Book

Belief Functions: Theory and Applications

5th International Conference, BELIEF 2018, Compiègne, France, September 17-21, 2018, Proceedings

Edited by: Sébastien Destercke, Prof. Thierry Denoeux, Dr. Fabio Cuzzolin, Arnaud Martin

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 5th International Conference on Belief Functions, BELIEF 2018, held in Compiègne, France, in September 2018. The 33 revised regular papers presented in this book were carefully reviewed and selected from 73 submissions. The papers were solicited on theoretical aspects (including, for example, statistical inference, mathematical foundations, and continuous belief functions) as well as on applications in various areas including classification, statistics, data fusion, network analysis and intelligent vehicles.

Table of Contents

Frontmatter
An Evidential Collaborative Filtering Approach Based on Items Contents Clustering

Recommender Systems (RSs) have emerged as powerful tools to provide users with personalized recommendations and to guide them in their decision-making process. Among the various recommendation approaches, Collaborative Filtering (CF) is considered one of the most popular techniques in RSs. CF techniques are categorized into model-based and memory-based. Model-based approaches learn a model from past ratings to perform predictions, while memory-based ones predict ratings by selecting the most similar users (user-based) or the most similar items (item-based). In both types, recommendations are based entirely on users' past ratings. However, aside from users' ratings, exploiting additional information such as items' features would enhance the accuracy of the predictions. Another crucial challenge in the RS area is handling the uncertainty arising throughout the prediction process. That is why, in this paper, we propose an item-based Collaborative Filtering approach under the belief function theory that not only takes advantage of both model- and memory-based CF approaches but also integrates items' contents in the recommendation process.

Raoua Abdelkhalek, Imen Boukhris, Zied Elouedi
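
To make the memory-based baseline concrete, here is a minimal item-based CF sketch using cosine similarity and a similarity-weighted rating average. This is the classical technique the paper builds on, not the authors' evidential extension; the toy rating matrix is invented.

```python
# Minimal item-based collaborative filtering: cosine similarity between
# item columns, prediction as a weighted average over the k most similar
# items the user has rated. Illustration only.
import numpy as np

# Toy user-item rating matrix (rows: users, cols: items; 0 = unrated).
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

def item_cosine_similarity(R):
    """Cosine similarity between item rating columns."""
    norms = np.linalg.norm(R, axis=0)
    norms[norms == 0] = 1.0              # avoid division by zero
    return (R.T @ R) / np.outer(norms, norms)

def predict(R, sim, user, item, k=2):
    """Similarity-weighted average over the k most similar rated items."""
    rated = np.flatnonzero(R[user])
    neighbors = rated[np.argsort(sim[item, rated])[::-1][:k]]
    weights = sim[item, neighbors]
    return float(weights @ R[user, neighbors] / weights.sum())

sim = item_cosine_similarity(R)
print(predict(R, sim, user=0, item=2))   # predicted rating for item 2
```
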
The Belief Functions Theory for Sensors Localization in Indoor Wireless Networks

This paper investigates the use of the belief functions theory to localize sensors in indoor environments. The problem is tackled as a zoning localization problem, where the objective is to determine the zone in which the mobile sensor resides at any instant. The proposed approach uses the belief functions theory to define an evidence framework for estimating the most probable zone of the sensor. Real experiments demonstrate the effectiveness of this approach compared to other localization methods.

Daniel Alshamaa, Farah Mourad-Chehade, Paul Honeine
On Evidential Clustering with Partial Supervision

This paper introduces a new semi-supervised evidential clustering algorithm. It considers label constraints and exploits evidence theory to create a credal partition coherent with the background knowledge. The main characteristic of the new method is its ability to express the uncertainty of partial prior information by assigning each constrained object to a set of labels. It enriches a previous algorithm, which preserves the uncertainty in the constraints, by adding the possibility of favoring crisp decisions that follow the inherent structure of the dataset. The advantages of the proposed approach are illustrated using both a synthetic dataset and a real genomics dataset.

Violaine Antoine, Kévin Gravouil, Nicolas Labroche
Exploiting Domain-Experts Knowledge Within an Evidential Process for Case Base Maintenance

Case Base Maintenance (CBM) is one of the key success factors for Case-Based Reasoning (CBR) systems. Hence, several CBM policies have been proposed to improve the problem-solving performance and competence of these systems. However, to the best of our knowledge, none of them is able to make use of the prior knowledge that domain experts can offer, even though CBR is widely applied in real-life domains. For instance, given the symptoms of two different cases in the medical domain, a doctor can affirm that the two cases should never follow the same treatment, or conversely. This kind of prior knowledge is expressed in the form of Cannot-Link and Must-Link constraints. In addition, most existing policies cannot manage uncertainty in cases during CBM. To overcome these shortcomings, we propose, in this paper, a CBM policy that handles constraints to exploit experts' knowledge during case base learning, while managing uncertainty using the belief function theory. This new CBM approach consists mainly in deleting noisy and redundant cases.

Safa Ben Ayed, Zied Elouedi, Eric Lefevre
The Kantorovich Problem and Wasserstein Metric in the Theory of Belief Functions

The aim of this paper is to show that the Kantorovich problem, well known in models of economics and intensively studied in probability theory in recent years, can be viewed as the basis of some constructions in the theory of belief functions. We demonstrate this by analyzing the specialization relation for finitely defined belief functions and for belief functions defined on the reals. In addition, for such belief functions we consider the Wasserstein metric and study its connections to disjunctions of belief functions.

Andrey G. Bronevich, Igor N. Rozenberg
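
For intuition, the sketch below computes the 1-Wasserstein distance between the pignistic transforms of two toy mass functions on an ordered frame. This is a simplification: the paper studies the Wasserstein metric for belief functions directly, not via pignistic probabilities, and the mass functions here are invented.

```python
# 1-Wasserstein distance between pignistic probabilities of two mass
# functions on the ordered frame {1,...,5}. Illustration only.
import numpy as np
from scipy.stats import wasserstein_distance

def pignistic(masses, frame_size):
    """BetP(w) = sum over focal sets A containing w of m(A)/|A|.
    `masses` maps frozensets of frame elements to mass values."""
    betp = np.zeros(frame_size)
    for focal, m in masses.items():
        for w in focal:
            betp[w - 1] += m / len(focal)
    return betp

m1 = {frozenset({1}): 0.5, frozenset({1, 2, 3}): 0.5}
m2 = {frozenset({4, 5}): 0.7, frozenset({1, 2, 3, 4, 5}): 0.3}

support = np.arange(1, 6)
p1, p2 = pignistic(m1, 5), pignistic(m2, 5)
print(wasserstein_distance(support, support, u_weights=p1, v_weights=p2))
```
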
Generalised Max Entropy Classifiers

In this paper we propose a generalised maximum-entropy classification framework, in which the empirical expectation of the feature functions is bounded by the lower and upper expectations induced by the lower and upper probabilities associated with a belief measure. This generalised setting permits a more cautious appreciation of the information content of a training set. We analytically derive the Karush-Kuhn-Tucker conditions for the generalised max-entropy classifier in the case in which a Shannon-like entropy is adopted.

Fabio Cuzzolin
General Geometry of Belief Function Combination

In this paper we build on previous work on the geometry of Dempster’s rule to investigate the geometric behaviour of various other combination rules, including Yager’s, Dubois’, and disjunctive combination, starting from the case of binary frames of discernment. Believability measures for unnormalised belief functions are also considered. A research programme to complete this analysis is outlined.

Fabio Cuzzolin
Logistic Regression Revisited: Belief Function Analysis

We show that the weighted sum and softmax operations performed in logistic regression classifiers can be interpreted in terms of evidence aggregation using Dempster’s rule of combination. From that perspective, the output probabilities from such classifiers can be seen as normalized plausibilities, for some mass functions that can be laid bare. This finding suggests that the theory of belief functions is a more general framework for classifier construction than is usually considered.

Thierry Denoeux
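
The paper's key observation can be checked numerically: softmax outputs are normalized exponentials of aggregated evidence weights, and so can be read as normalized plausibilities. The sketch below shows only this normalization identity; the specific mass functions identified in the paper are not reconstructed.

```python
# Softmax outputs coincide with normalized exponentiated logits, i.e.
# normalized "plausibility-like" scores. Illustration only.
import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max())    # stabilized exponentiation
    return z / z.sum()

# Logits = weighted sums of features, i.e., aggregated evidence weights.
logits = np.array([2.0, 0.5, -1.0])
probs = softmax(logits)

# Unnormalized plausibility-like scores proportional to exp(logit):
pl = np.exp(logits - logits.max())
print(probs, pl / pl.sum())              # identical up to normalization
```
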
From Relations Between Sets to Relations Between Belief Functions

In uncertainty theories, a common problem is to define how we can extend relations between sets (e.g., inclusion, ranking, consistency, ...) to corresponding notions between uncertainty representations. Such definitions can then be used to perform the same operations as those that are done for sets: measuring information content, ordering alternatives or checking consistency, to name a few. In this paper, we propose a general way to extend set relations to belief functions, using constrained stochastic matrices to identify those belief functions in relation. We then study some properties of our proposal, as well as its relations with existing works focusing on particular relations.

Sébastien Destercke, Frédéric Pichon, John Klein
Application of Belief Functions to Levee Assessment

We propose the use of the Smets and PCR5 rules to merge artificial geophysical and geotechnical data as part of fluvial levee assessment. The results highlight the ability of the approach to characterize the presence of interfaces and a geological anomaly.

Théo Dezert, Yannick Fargier, Sérgio Palma Lopes, Philippe Côte
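
The two combination rules named in this abstract can be sketched on a toy binary frame: Smets' rule is the unnormalized conjunctive rule (conflict stays on the empty set), while PCR5 redistributes each partial conflict proportionally to the masses that produced it. The mass values are invented and the geophysical fusion itself is not reproduced.

```python
# Smets' (unnormalized conjunctive) rule and PCR5 on a binary frame.
from itertools import product

m1 = {frozenset('a'): 0.6, frozenset('ab'): 0.4}
m2 = {frozenset('b'): 0.7, frozenset('ab'): 0.3}

def smets(m1, m2):
    """Conjunctive combination; the empty set keeps the conflict mass."""
    out = {}
    for (A, x), (B, y) in product(m1.items(), m2.items()):
        C = A & B
        out[C] = out.get(C, 0.0) + x * y
    return out

def pcr5(m1, m2):
    """Redistribute each partial conflict x*y back to A and B,
    proportionally to the masses x and y that generated it."""
    out = smets(m1, m2)
    out.pop(frozenset(), None)            # conflict is redistributed below
    for (A, x), (B, y) in product(m1.items(), m2.items()):
        if not (A & B):
            out[A] = out.get(A, 0.0) + x * x * y / (x + y)
            out[B] = out.get(B, 0.0) + y * y * x / (x + y)
    return out

print(smets(m1, m2))
print(pcr5(m1, m2))
```
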
Prejudiced Information Fusion Using Belief Functions

G. Shafer views belief functions as the result of the fusion of elementary, partially reliable testimonies from different sources. However, not every belief function can be seen as the combination of simple support functions representing such testimonies: the result of such a combination only yields a special kind of belief function called separable. In 1995, Ph. Smets showed that any belief function can be seen as the combination of so-called generalized simple support functions. We propose a new interpretation of this result in terms of a pair of separable belief functions, one modelling testimonies while the other represents the idea of prejudice. The role of the latter is to weaken the weights of the focal sets of the former separable belief function. This bipolar view accounts for a form of resistance to accepting the information supplied by the sources, which differs from the discounting of sources.

Didier Dubois, Francis Faux, Henri Prade
A Heuristic Approach for the Robust Flight Level Assignment Problem

The paper studies the flight level assignment (FLA) problem and its robust variant. Our goal is to reduce the total cost (more specifically, the flight delay) induced by airspace congestion through an appropriate FLA that takes account of uncertainties such as weather conditions, flight velocity, and flight departure time. Among these uncertainties, we assume that the flight departure time, which follows a Gaussian mixture distribution, is one of the main uncertainty factors worth dealing with. The deterministic FLA problem is formulated as an Integer Linear Programming (ILP) model, which becomes trickier when uncertainty is considered. The FLA problem is strongly NP-hard, and solving it exactly is out of reach even for moderately sized realistic instances. Hence, we propose an approximate optimization approach to solve the robust FLA problem. The main idea is to decompose the problem by levels and solve each level separately while handling the connecting constraints between levels. Numerical results illustrate our findings.

Akli Fundo, Dritan Nace, Chenghao Wang
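
As a rough illustration of the kind of deterministic ILP the abstract describes, here is a toy assignment model: binary x[f, l] = 1 iff flight f is assigned level l, each flight gets exactly one level, conflicting pairs may not share a level, and total cost is minimized. All names, costs, and conflicts are invented; this is not the paper's formulation.

```python
# Toy flight level assignment ILP solved with scipy's MILP interface.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

n_flights, n_levels = 3, 2
cost = np.array([[1.0, 3.0],     # cost[f, l]: delay cost of f at level l
                 [2.0, 1.0],
                 [1.5, 1.5]])
conflicts = [(0, 1)]             # flights 0 and 1 conflict on any shared level

c = cost.ravel()                 # variable order: x[f, l] -> f * n_levels + l

# Each flight is assigned exactly one level.
A_eq = np.zeros((n_flights, c.size))
for f in range(n_flights):
    A_eq[f, f * n_levels:(f + 1) * n_levels] = 1

# Conflicting flights cannot occupy the same level.
A_ub = np.zeros((len(conflicts) * n_levels, c.size))
for i, (f, g) in enumerate(conflicts):
    for l in range(n_levels):
        A_ub[i * n_levels + l, f * n_levels + l] = 1
        A_ub[i * n_levels + l, g * n_levels + l] = 1

res = milp(c=c,
           constraints=[LinearConstraint(A_eq, 1, 1),
                        LinearConstraint(A_ub, 0, 1)],
           integrality=np.ones(c.size, dtype=int),
           bounds=Bounds(0, 1))
print(res.x.reshape(n_flights, n_levels), res.fun)
```
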
Study of Distributed Data Fusion Using Dempster’s Rule and Cautious Operator

This paper presents new algorithms to process data exchanged in vehicular networks. In previous work, a distributed data fusion method using belief functions to model uncertainty was proposed for networks of smart cars. Since the origin of data coming from other cars is unknown, this algorithm uses the idempotent cautious operator in order to prevent data incest. This operator has been shown to be efficient in the case of transient errors and ensures the convergence of the fusion. However, since the cautious operator is idempotent, the number of concordant sources does not change the result of the fusion. We therefore propose several schemes that add Dempster's rule in order to improve the fusion when data can be guaranteed to come from independent sources. We introduce three new combination layouts of Dempster's rule and the cautious operator and compare them using real data from experiments involving several communicating cars in the context of the COMOSEF project.

Romain Guyard, Véronique Cherfaoui
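
The contrast between the two operators is easiest to see on simple support functions sharing the same focal set A, where m(A) = 1 - w and m(Omega) = w: Dempster's rule multiplies the weights w (reinforcement), while the cautious rule takes their pointwise minimum (idempotence). The sketch below covers only this restricted case.

```python
# Dempster vs cautious combination for two simple support functions with
# the same focal set A: weights multiply vs pointwise minimum.
def dempster_simple(w1, w2):
    """Combine A^w1 and A^w2 (same focal set): weights multiply."""
    return w1 * w2

def cautious_simple(w1, w2):
    """Cautious rule on canonical weights: pointwise minimum."""
    return min(w1, w2)

w1, w2 = 0.4, 0.4          # both sources: m(A) = 0.6, m(Omega) = 0.4
print(1 - dempster_simple(w1, w2))   # m(A) = 0.84 -- evidence reinforced
print(1 - cautious_simple(w1, w2))   # m(A) = 0.60 -- idempotent
```
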
Uncertainty-Aware Parzen-Rosenblatt Classifier for Multiattribute Data

Dempster-Shafer theory has proven to be one of the most powerful tools for data fusion and reasoning under uncertainty. Despite the huge number of frameworks proposed in this area, determining the basic probability assignment remains an open issue. To address this problem, this paper proposes a novel Dempster-Shafer scheme based on Parzen-Rosenblatt windowing for multi-attribute data classification. More explicitly, training data are used to construct approximate distributions for each hypothesis and each data attribute, using Parzen-Rosenblatt window density estimation. These distributions are then used at the classification stage to generate mass functions and reach a consensus decision using the pignistic transform. To validate the proposed scheme, experiments are carried out on some pattern classification benchmarks. The results show the interest of the proposed approach with respect to some recent state-of-the-art methods.

Ali Hamache, Mohamed El Yazid Boudaren, Houdaifa Boukersoul, Islam Debicha, Hamza Sadouk, Rezki Zibani, Ahmed Habbouchi, Omar Merouani
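
A heavily simplified sketch of this pipeline for a binary problem: per class and per attribute, a Parzen-Rosenblatt (kernel) density is estimated; each attribute then yields a mass function, the masses are combined with Dempster's rule, and a pignistic probability gives the decision. The mass construction below (likelihood ratios with a fixed reliability alpha) is an assumption for illustration, not the authors' exact scheme.

```python
# Toy Parzen-Rosenblatt evidential classifier: one KDE per (class,
# attribute), simple masses per attribute, Dempster combination, pignistic
# decision. Binary frame {0, 1} only.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(100, 2))    # class 0 training data
X1 = rng.normal(2.0, 1.0, size=(100, 2))    # class 1 training data

kdes = {(c, j): gaussian_kde(X[:, j])
        for c, X in enumerate((X0, X1)) for j in range(2)}

KEYS = [frozenset({0}), frozenset({1}), frozenset({0, 1})]

def combine(m1, m2):
    """Dempster's rule on the frame {0, 1}."""
    out = {k: 0.0 for k in KEYS}
    conflict = 0.0
    for A in KEYS:
        for B in KEYS:
            C = A & B
            if C:
                out[C] += m1[A] * m2[B]
            else:
                conflict += m1[A] * m2[B]
    return {k: v / (1 - conflict) for k, v in out.items()}

def classify(x, alpha=0.9):
    m = {KEYS[0]: 0.0, KEYS[1]: 0.0, KEYS[2]: 1.0}   # vacuous start
    for j in range(2):
        p0, p1 = (kdes[(c, j)](x[j])[0] for c in (0, 1))
        mj = {KEYS[0]: alpha * p0 / (p0 + p1),
              KEYS[1]: alpha * p1 / (p0 + p1)}
        mj[KEYS[2]] = 1.0 - mj[KEYS[0]] - mj[KEYS[1]]
        m = combine(m, mj)
    # Pignistic transform: split the mass of {0, 1} equally.
    betp1 = m[KEYS[1]] + m[KEYS[2]] / 2
    return int(betp1 > 0.5)

print(classify(np.array([1.8, 2.1])))   # likely class 1
```
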
Birnbaum’s Importance Measure Extended for Non-coherent Systems

We introduce an extended Birnbaum component importance measure, adapted to non-coherent systems, that considers both epistemic and aleatory uncertainty. Belief function theory is proposed as a framework for taking both types of uncertainty into account. The objective is to rank components according to their importance for the working of the system. The original importance measure was introduced for coherent systems; however, the increasing complexity of modern systems gives rise to non-coherent systems, which is why such systems must be considered. In this work, we propose a method to compute the importance measure of the components of non-coherent systems in the framework of belief function theory.

Ayyoub Imakhlaf, Mohamed Sallak
Evidential Independence Maximization on Twitter Network

Detecting independent users in online social networks is an interesting research issue. Independent users generally cannot be influenced; they are independent in their choices and decisions. Independent users may attract other users and make them adopt their point of view. A user is qualified as independent when his or her point of view does not depend on others' ideas; thus, the behavior of such a user is independent of other behaviors. Detecting independent users is interesting because some of them may be influencers, and independent users who are not influencers can be directly targeted since they cannot be influenced. In this paper, we present an evidential independence maximization approach for Twitter users. The proposed approach is based on three metrics reflecting user behavior and provides a useful tool for detecting influencers: we consider independence a characteristic of influencers, even though not all independent users are influencers. The proposed approach is experimented on real data crawled from Twitter.

Siwar Jendoubi, Mouna Chebbah, Arnaud Martin
An Evidential k-nearest Neighbors Combination Rule for Tree Species Recognition

The task of tree species recognition is to recognize tree species from photos of their leaves and bark. In this paper, we propose an evidential k-nearest neighbors (k-NN) combination rule. The proposed rule is adapted to classification problems with a large number of classes exhibiting intra-class variability and inter-class similarity, like the problem of tree species recognition. Finally, we compare the performance of the proposed solution to the evidential k-NN.

Siwar Jendoubi, Didier Coquin, Reda Boukezzoula
A Compact Belief Rule-Based Classification System with Evidential Clustering

In this paper, a rule learning method based on evidential C-means clustering is proposed to efficiently design a compact belief rule-based classification system. In this method, the evidential C-means algorithm is first used to obtain credal partitions of the training set. The clustering process operates in a supervised way by means of weighted product-space clustering, with the goals of obtaining both good inter-cluster separability and intra-cluster purity. The antecedent part of a belief rule is then defined by projecting each multi-dimensional credal partition onto each feature. The consequent class and the weight of each belief rule are identified by combining the training patterns belonging to each hard credal partition within the framework of belief functions. An experiment based on several real data sets was carried out to show the effectiveness of the proposed method.

Lianmeng Jiao, Xiaojiao Geng, Quan Pan
A Decomposable Entropy of Belief Functions in the Dempster-Shafer Theory

We define an entropy of belief functions in the Dempster-Shafer (D-S) theory that satisfies a compound distributions property analogous to the property that characterizes Shannon's definitions of entropy and conditional entropy for discrete probability distributions. None of the existing definitions of entropy for belief functions in the D-S theory satisfies such a compound distributions property. We describe some important properties of our definition.

Radim Jiroušek, Prakash P. Shenoy
An Evidential K-Nearest Neighbor Classifier Based on Contextual Discounting and Likelihood Maximization

The evidential K-nearest neighbor classifier is based on discounting evidence from learning instances in a neighborhood of the pattern to be classified. To adapt the method to partially supervised data, we propose to replace the classical discounting operation by contextual discounting, a more complex operation based on as many discount rates as there are classes. The parameters of the method are tuned by maximizing the evidential likelihood, an extended notion of likelihood based on uncertain data. The resulting classifier is shown to outperform alternative methods in partially supervised learning tasks.

Orakanya Kanjanatarakul, Siwarat Kuson, Thierry Denoeux
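
For reference, here is a sketch of the classical evidential K-NN rule that this paper extends: each neighbor with label c contributes a simple mass m({c}) = alpha * exp(-gamma * d^2), the remainder going to the whole frame, and the masses are combined with Dempster's rule. The contextual discounting and evidential-likelihood tuning proposed in the paper are not shown.

```python
# Classical evidential K-NN: simple support masses from neighbors,
# combined by Dempster's rule (all focal sets are singletons or Omega).
import numpy as np

def combine_singletons(m1, m2):
    """Dempster's rule when focal sets are singletons or Omega.
    m[:-1] are singleton masses, m[-1] is the mass of Omega."""
    n = len(m1) - 1
    out = np.zeros(n + 1)
    out[:n] = m1[:n] * m2[:n] + m1[:n] * m2[n] + m1[n] * m2[:n]
    out[n] = m1[n] * m2[n]
    return out / out.sum()               # normalize away the conflict

def evidential_knn(X_train, y_train, x, k=3, alpha=0.95, gamma=1.0):
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    classes = np.unique(y_train)
    m = np.zeros(len(classes) + 1); m[-1] = 1.0   # vacuous mass
    for i in idx:
        s = alpha * np.exp(-gamma * d[i] ** 2)
        mi = np.zeros(len(classes) + 1); mi[-1] = 1.0 - s
        mi[np.searchsorted(classes, y_train[i])] = s
        m = combine_singletons(m, mi)
    return classes[np.argmax(m[:-1])]

X = np.array([[0.0, 0.0], [0.1, 0.2], [2.0, 2.1], [2.2, 1.9]])
y = np.array([0, 0, 1, 1])
print(evidential_knn(X, y, np.array([1.9, 2.0])))   # -> 1
```
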
Measuring Market Performance with Stochastic Demand: Price of Anarchy and Price of Uncertainty

Globally operating suppliers face the rising challenge of wholesale pricing under scarce data about retail demand, in contrast to better informed, locally operating retailers. At the same time, as local businesses proliferate, markets congest and retail competition increases. To capture these strategic considerations, we employ the classic Cournot model and extend it to a two-stage supply chain with an upstream supplier who operates under demand uncertainty and multiple downstream retailers who compete over quantity. The supplier's belief about retail demand is modeled via a continuous probability distribution function F. If F has the decreasing generalized mean residual life property, then the supplier's optimal pricing policy exists and is the unique fixed point of the mean residual life function. We evaluate the realized Price of Uncertainty and show that there exist demand levels for which the market performs better when the supplier prices under demand uncertainty. In general, performance worsens for lower values of realized demand. We examine the effects of increasing competition on supply chain efficiency via the realized Price of Anarchy and complement our findings with numerical results.

Costis Melolidakis, Stefanos Leonardos, Constandina Koki
On the Conflict Measures Agreed with the Combining Rules

The conflict measures induced by the conjunctive and disjunctive combining rules are studied in this paper in the framework of evidence theory. The coherence of conflict measures with combining rules is introduced and studied, as is the structure of conjunctive and disjunctive conflict measures. In particular, it is shown that metric and entropy components can be distinguished in such measures; moreover, these components change differently after the bodies of evidence are combined.

Alexander Lepskiy
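
The conflict measure induced by the conjunctive rule is the mass the combination assigns to the empty set; a minimal sketch on invented mass functions:

```python
# Conjunctive conflict: total mass on disjoint focal-set pairs.
from itertools import product

def conjunctive_conflict(m1, m2):
    """K = sum of m1(A) * m2(B) over all pairs with A and B disjoint."""
    return sum(x * y for (A, x), (B, y) in product(m1.items(), m2.items())
               if not (A & B))

m1 = {frozenset('a'): 0.8, frozenset('ab'): 0.2}
m2 = {frozenset('b'): 0.9, frozenset('ab'): 0.1}
print(conjunctive_conflict(m1, m2))   # 0.72: highly conflicting evidence
```
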
Linear Belief Functions for Data Analytics

This paper studies the application of linear belief functions to both classical and Bayesian statistical data analysis. In particular, it explores how to combine direct observations, with or without distributional assumptions, as linear belief functions for estimating a population mean, and how to combine system equations and measurement equations with direct observations in time series models and Bayesian linear regressions. It illustrates the use of Linear Model Operating Systems (LMOS).

Liping Liu
Outer Approximations of Coherent Lower Probabilities Using Belief Functions

We investigate the problem of outer approximating a coherent lower probability with a more tractable model. In particular, in this work we focus on the outer approximations made by belief functions. We show that they can be obtained by solving a linear programming problem. In addition, we consider the subfamily of necessity measures, and show that in that case we can determine all the undominated outer approximations in a simple manner.

Ignacio Montes, Enrique Miranda, Paolo Vicig
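
A sketch of the linear-programming construction mentioned above, under assumptions about the objective: find a belief function (nonnegative Möbius masses summing to 1) with Bel(A) <= P_low(A) for every event A, maximizing the total sum of Bel(A) so the outer approximation is as tight as possible. The toy lower probability P_low and the tightness criterion are invented for illustration.

```python
# Outer-approximate a lower probability on a 3-element frame by a belief
# function via linear programming.
from itertools import chain, combinations
import numpy as np
from scipy.optimize import linprog

frame = (0, 1, 2)
events = [frozenset(s) for s in chain.from_iterable(
    combinations(frame, r) for r in range(1, 4))]          # nonempty subsets

P_low = {frozenset({0}): 0.1, frozenset({1}): 0.2, frozenset({2}): 0.1,
         frozenset({0, 1}): 0.4, frozenset({0, 2}): 0.4,
         frozenset({1, 2}): 0.5, frozenset({0, 1, 2}): 1.0}

# Variables: one Moebius mass m(B) >= 0 per nonempty B.
# Constraint: Bel(A) = sum_{B subset of A} m(B) <= P_low(A) for all A.
A_ub = np.array([[1.0 if B <= A else 0.0 for B in events] for A in events])
b_ub = np.array([P_low[A] for A in events])

# Objective: maximize sum_A Bel(A), i.e. count each m(B) once per superset.
c = -A_ub.sum(axis=0)

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              A_eq=np.ones((1, len(events))), b_eq=[1.0],
              bounds=[(0, None)] * len(events))
for B, m in zip(events, res.x):
    if m > 1e-9:
        print(set(B), round(m, 3))
```
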
An Ordered Family of Consistency Measures of Belief Functions

We propose a family of measures of consistency (and induced conflict) derived from a definition of consistent belief functions introduced previously. Besides satisfying the desired properties of monotonicity, boundedness, and extreme values, the novel family encompasses the existing probabilistic and logical consistency measures which are shown to correspond to two extremes of the family (lower sharp bound and upper asymptotic limit respectively). We illustrate the definitions and measures of consistency within an example of vessel destination estimation with inconsistent sources.

Nadia Ben Abdallah, Anne-Laure Jousselme, Frédéric Pichon
Active Evidential Calibration of Binary SVM Classifiers

Evidential calibration methods for binary classifiers improve upon probabilistic calibration methods by representing explicitly the calibration uncertainty due to the amount of training (labelled) data. This justified yet undesirable uncertainty can be reduced by adding training data, which are in general costly. Hence the need for strategies that, given a pool of unlabelled data, will point to interesting data to be labelled, i.e., to data inducing a drop in uncertainty greater than that of a random selection. Two such strategies are considered in this paper and applied to an ensemble of binary SVM classifiers on some classical binary classification datasets. Experimental results show the interest of the approach.

Sébastien Ramel, Frédéric Pichon, François Delmotte
Decision Making: A Beliefs, Preferences and Constraints Model

Beliefs, preferences and constraints occur together in many real-world problems. However, reasoning with such intertwinement is rather unexplored in the AI literature. In this paper, we introduce a model whereby agents seek decisions that satisfy their preferences, based on their beliefs and subject to certain constraints, by extending the soft constraints framework to the belief function theory. Constraint-based solving machinery is then adapted to solve such problems, and a specific branch-and-bound algorithm is introduced.

Aouatef Rouahi, Kais Ben Salah, Khaled Ghédira
Belief and Plausibility Functions on the Space of Scalar Products and Applications

We study the problem of a vector space over $\mathbb{R}$ whose Euclidean scalar product is unknown. From pieces of evidence given by experts, we are able to build suitable belief and plausibility functions on the space of scalar products. We pay special attention to studying contradictions and degrees of conflict. As a possible application, for a random variable on the vector space, we are able to obtain the related belief and plausibility functions for its variance.

Juan J. Salamanca
E2CM: An Evolutionary Version of Evidential C-Means Clustering Algorithm

This paper proposes an Evolutionary version of the Evidential C-Means (E2CM) clustering method based on a Variable string length Artificial Bee Colony (VABC) algorithm. In E2CM, the cluster centers are encoded as a population of strings of variable length, and the VABC searches for the optimal number of clusters as well as the locations of the centers by minimizing a non-specificity objective function; the assignment of objects to the candidate cluster centers is performed by the ECM. One significant merit of E2CM is that it can automatically create a credal partition without requiring the number of clusters a priori. A numerical example is used to verify our conclusions.

Zhi-gang Su, Hong-yu Zhou, Pei-hong Wang, Gang Zhao, Ming Zhao
Contrasting Two Laws of Large Numbers from Possibility Theory and Imprecise Probability

The law of large numbers for coherent lower previsions (specifically, Choquet integrals against belief measures) can be applied to possibility measures, yielding that sample averages are asymptotically confined in a compact interval. This interval differs from the one appearing in the law of large numbers from possibility theory. In order to understand this phenomenon, we undertake an in-depth study of the compatibility of the assumptions in those results. It turns out that, although there is no incompatibility between their conclusions, their assumptions can only be simultaneously satisfied if the possibility distributions of the variables are 0–1 valued.

Pedro Terán, Elisa Pis Vigil
Improved Performance of EK-NNClus by Selecting Appropriate Parameter

EK-NNclus is an evidential clustering method based on the evidential K-nearest neighbors classification rule. One significant merit is that it does not require any prior knowledge of the number of clusters. However, EK-NNclus suffers from the influence of the number K; in other words, its performance is sensitive to K: if K is too small, a natural cluster may be split into two or more clusters; otherwise, two or more natural clusters may be merged into one. In this paper, we show that tuning the parameters (such as $\alpha$ in the discounting function) can take full advantage of the distances between the object and its nearest neighbors, which can prevent natural clusters from being merged. Numerical experiments suggest that the performance of EK-NNclus can be improved if an appropriate $\alpha$ is selected.

Qian Wang, Zhi-gang Su
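
A small numerical illustration of the sensitivity discussed above. We assume a discounting function of the form phi(d) = alpha * exp(-gamma * d^2); this parametrization is an assumption for illustration, not necessarily the exact form used by EK-NNclus. Here alpha controls how much mass is committed to a neighbor's cluster rather than left on the whole frame (ignorance).

```python
# How alpha scales the mass a neighbor at distance d contributes,
# assuming phi(d) = alpha * exp(-gamma * d**2) with gamma = 1.
import numpy as np

d = np.linspace(0.0, 2.0, 5)          # distances to the nearest neighbors
for alpha in (0.3, 0.7, 0.95):
    mass_to_neighbor = alpha * np.exp(-1.0 * d ** 2)
    print(f"alpha={alpha}:", np.round(mass_to_neighbor, 3))
```
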
An Empirical Study to Determine the Optimal k in Ek-NNclus Method

Ek-NNclus is a clustering algorithm based on the evidential k-nearest-neighbor rule. It has the advantage that the number of clusters can be detected automatically. However, the parameter k has a crucial influence on the clustering results, especially on the number of clusters and the clustering quality. Thus, the determination of k is an important issue in optimizing the use of the Ek-NNclus algorithm. The authors of Ek-NNclus only give a large interval for k, which is not precise enough for real applications. In traditional clustering algorithms such as c-means and c-medoids, the determination of c is a real issue, and some methods proposed in the literature have proved to be efficient. In this paper, we borrow some methods from c-determination solutions and propose a k-determination strategy based on an empirical study.

Yiru Zhang, Tassadit Bouadi, Arnaud Martin
Evidential Community Detection Based on Density Peaks

Credal partitions in the framework of belief functions can give us a better understanding of the analyzed data set. In order to find credal community structure in graph data sets, we propose in this paper a novel evidential community detection algorithm based on density peaks (EDPC). Two new metrics, the local density $\rho$ and the minimum dissimilarity $\delta$, are first defined for each node in the graph. Then the nodes with both higher $\rho$ and higher $\delta$ values are identified as community centers. Finally, the remaining nodes are assigned community labels through a simple two-step evidential label propagation strategy. The membership of each node is described in the form of basic belief assignments, which can well express the uncertainty in the community structure of the graph. Experiments demonstrate the effectiveness of the proposed method on real-world networks.

Kuang Zhou, Quan Pan, Arnaud Martin
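
The two metrics defined above can be sketched on a toy dissimilarity matrix: rho(i) counts the neighbors within a cutoff distance (local density), and delta(i) is the minimum dissimilarity from i to any node of higher density (for the densest node, the maximum distance is used). Nodes with both high rho and high delta are candidate centers. The evidential label propagation step is not reproduced.

```python
# Density-peaks metrics rho and delta on toy two-cluster data.
import numpy as np

rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 0.3, (10, 2)), rng.normal(3, 0.3, (10, 2))])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)   # dissimilarities

d_c = 1.0                                  # cutoff distance
rho = (D < d_c).sum(axis=1) - 1            # exclude the node itself

delta = np.empty(len(pts))
for i in range(len(pts)):
    higher = np.flatnonzero(rho > rho[i])
    delta[i] = D[i, higher].min() if higher.size else D[i].max()

centers = np.argsort(rho * delta)[-2:]     # two highest gamma = rho * delta
print(centers)
```
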
Backmatter
Metadata
Title
Belief Functions: Theory and Applications
Edited by
Sébastien Destercke
Prof. Thierry Denoeux
Dr. Fabio Cuzzolin
Arnaud Martin
Copyright Year
2018
Electronic ISBN
978-3-319-99383-6
Print ISBN
978-3-319-99382-9
DOI
https://doi.org/10.1007/978-3-319-99383-6
