2011 | Book

Symbolic and Quantitative Approaches to Reasoning with Uncertainty

11th European Conference, ECSQARU 2011, Belfast, UK, June 29–July 1, 2011. Proceedings

About this Book

This book constitutes the refereed proceedings of the 11th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty, ECSQARU 2011, held in Belfast, UK, in June/July 2011.
The 60 revised full papers, presented together with 3 invited talks, were carefully reviewed and selected from 108 submissions. The papers are organized in topical sections on argumentation; Bayesian networks and causal networks; belief functions; belief revision and inconsistency handling; classification and clustering; default reasoning and logics for reasoning under uncertainty; foundations of reasoning and decision making under uncertainty; fuzzy sets and fuzzy logic; implementation and applications of uncertain systems; possibility theory and possibilistic logic; and uncertainty in databases.

Table of Contents

Frontmatter

Invited Talks

Information Fusion and Revision in Qualitative and Quantitative Settings
Steps Towards a Unified Framework

Fusion and revision are two key topics in knowledge representation and uncertainty theories. However, the various formal axiomatizations of these notions were proposed within specific settings, such as logic, probability theory, possibility theory, kappa functions, belief functions and imprecise probability. For instance, the revision rule in probability theory is Jeffrey’s rule, which is characterized by two axioms; the AGM axioms for revision are stated in the propositional logic setting. But there is no bridge between these axiomatizations. Likewise, Dempster’s rule of combination was axiomatized by Smets among others, and a logical syntax-independent axiomatization for merging was independently proposed by Konieczny and Pino-Pérez, while a belief function can be viewed as a weighted belief set. Moreover, the distinction between fusion and revision is not always so clear, and comparing sets of postulates for each of them can be enlightening. This paper presents a tentative set of basic principles for revision and another set of principles for fusion that could be valid regardless of whether information is represented qualitatively or quantitatively. In short, while revision obeys a success postulate and a minimal change principle, fusion is essentially symmetric and obeys a principle of optimism that tries to take advantage of all sources of information. Moreover, when two pieces of information are consistent, revising one by the other comes down to merging them symmetrically. Finally, there is a principle of minimal commitment at work in all settings, common to the two operations.
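
As a point of reference for the probabilistic case mentioned above, Jeffrey’s rule revises a probability measure $P$ by new probabilities $\lambda_i$ assigned to the elements $E_i$ of a partition (standard statement; notation mine, not the talk’s):

$$ P'(A) \;=\; \sum_i \lambda_i \, P(A \mid E_i). $$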

Didier Dubois
Introducing Equational Semantics for Argumentation Networks

This paper provides equational semantics for Dung’s argumentation networks. The network nodes get numerical values in [0,1], and are supposed to satisfy certain equations. The solutions to these equations correspond to the “extensions” of the network.

This approach is very general and includes the Caminada labelling as a special case, as well as many other so-called network extensions: support systems, higher level attacks, Boolean networks, dependence on time, etc.

The equational approach has its conceptual roots in the 19th century following the algebraic equational approach to logic by George Boole, Louis Couturat and Ernst Schroeder.
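
As a concrete illustration of the equational reading, one natural equation schema assigns each argument $a$ a value $x_a \in [0,1]$ and requires, for example, $x_a = 1 - \max\{x_b : b \text{ attacks } a\}$ (with $x_a = 1$ when $a$ is unattacked); its $\{0,1\}$-valued solutions accept an argument exactly when all of its attackers are rejected. The specific schema shown here is an assumption for illustration, not necessarily one of the paper’s.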

Dov M. Gabbay
Constructive Decision Theory: Short Summary

In almost all current approaches to decision making under uncertainty, it is assumed that a decision problem is described by a set of states and a set of outcomes, and the decision maker (DM) has a preference relation on a rather rich set of acts, which are functions from states to outcomes. The standard representation theorems of decision theory give conditions under which the preference relation can be represented by a utility function on outcomes and a numerical representation of beliefs on states. For example, Savage [4] shows that if a DM’s preference order satisfies certain axioms, then the DM’s preference relation can be represented by a probability Pr on the state space and a utility function mapping outcomes to the reals such that she prefers act a to act b iff the expected utility of a (with respect to Pr) is greater than that of b. Moreover, the probability measure is unique and the utility function is unique up to affine transformations. Similar representations of preference can be given with respect to other representations of uncertainty (see, for example, [2,5]).
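
In symbols (notation mine), the Savage-style representation described above says there exist a probability $\Pr$ on the state space $S$ and a utility $u$ on outcomes such that, for a finite $S$,

$$ a \succ b \quad\Longleftrightarrow\quad \sum_{s \in S} \Pr(s)\, u(a(s)) \;>\; \sum_{s \in S} \Pr(s)\, u(b(s)). $$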

Joseph Y. Halpern

Argumentation

Strong Equivalence for Argumentation Semantics Based on Conflict-Free Sets

Argumentation can be understood as a dynamic reasoning process; in particular, it is useful to know the effects that additional information causes with respect to a certain semantics. Accordingly, one can identify the information which does not contribute to the results no matter which changes are performed. In other words, we are interested in so-called kernels of frameworks, where two frameworks with the same kernel are then “immune” to all kinds of newly added information in the sense that they always produce an equal outcome. The concept of strong equivalence for argumentation frameworks captures this intuition and has been analyzed for several semantics which are all based on the concept of admissibility. Other important semantics have been neglected so far. To close this gap, we give strong equivalence results with respect to naive, stage and cf2 extensions, and we compare the new results with the already existing ones. Furthermore, we analyze strong equivalence for symmetric frameworks and discuss local equivalence, a certain relaxation of strong equivalence.

Sarah Alice Gaggl, Stefan Woltran
Backing and Undercutting in Defeasible Logic Programming

Two important notions within the field of classical argumentation are undercutting defeaters and backings. The former represent an attack on an inference step, and the latter are intended to provide defense against this type of attack. Defeasible Logic Programming (DeLP) is a concrete argumentation system that makes it possible to identify arguments whose conclusions or intermediate conclusions are in contradiction, capturing the notion of rebutting defeater. Nevertheless, in DeLP it is not possible to represent either undercutting defeaters or backings. The aim of this work is to extend the formalism of DeLP to allow attack and support for defeasible rules. Thus, it will be possible to build arguments representing undercutting defeaters and backings.

Andrea Cohen, Alejandro J. García, Guillermo R. Simari
Arguing with Valued Preference Relations

Dung’s argumentation is based on a Boolean binary defeat relation. Recently, this framework has been extended in order to consider the strength of the defeat relation, i.e., to quantify the degree to which an argument defeats another. In the extended framework, the defeat relation with varied strength is abstract, i.e., its origin is not known. In this paper, we instantiate the argumentation framework with varied-strength defeats by a preference-based argumentation framework with a certainty degree in the preference relation. A potential example of such a valued preference relation is when a weight can be assigned to each argument. In this case, we give conditions on the construction of the valued preference relation from the weights. Moreover, we show that the set of conditions in which a defense holds with a valued preference relation is strictly included in the set of conditions in which a defense holds with a Boolean preference relation.

Souhila Kaci, Christophe Labreuche
Arguing about the Trustworthiness of the Information Sources

Trust minimizes the uncertainty in the interactions among the information sources. To express the possibly conflicting motivations about trust and distrust, we reason about trust using argumentation theory. First, we show how to model the sources and how to attack untrustworthy sources. Second, we provide a focused representation of trust about the sources in which trust concerns not only the sources but also the information items and the relation with other information.

Serena Villata, Guido Boella, Dov M. Gabbay, Leendert van der Torre
Two Roles of Preferences in Argumentation Frameworks

In this paper, we show that preferences intervene twice in argumentation frameworks: i) to compute standard solutions (i.e. extensions), and ii) to refine those solutions (i.e. to return only the preferred extensions). The two roles are independent and obey distinct postulates. After introducing and studying the postulates, we provide an example of a formal framework which models the two roles and satisfies all the proposed postulates.

Leila Amgoud, Srdjan Vesic

Bayesian Networks and Causal Networks

A Re-definition of Mixtures of Polynomials for Inference in Hybrid Bayesian Networks

We discuss some issues in using mixtures of polynomials (MOPs) for inference in hybrid Bayesian networks. MOPs were proposed by Shenoy and West for mitigating the problem of integration in inference in hybrid Bayesian networks. In defining MOPs for multi-dimensional functions, one requirement is that the pieces where the polynomials are defined are hypercubes. In this paper, we discuss relaxing this condition so that each piece is defined on a region called a hyper-rhombus. This relaxation means that MOPs are closed under the transformations required for multi-dimensional linear deterministic conditionals, such as Z = X + Y. Also, this relaxation allows us to construct MOP approximations of the probability density functions (PDFs) of multi-dimensional conditional linear Gaussian distributions using a MOP approximation of the PDF of the univariate standard normal distribution. We illustrate our method using conditional linear Gaussian PDFs in two and three dimensions.
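
On one reading of the definitions (an assumption here, not a quotation from the paper): a hypercube piece constrains each variable by constants, $a_i \le x_i \le b_i$, whereas a hyper-rhombus piece allows the bounds on a variable to be linear functions of the other variables, e.g. $a \le x \le b$ together with $c + x \le z \le d + x$; regions of the latter kind arise naturally when propagating a deterministic conditional such as $Z = X + Y$.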

Prakash P. Shenoy
Using Four Cost Measures to Determine Arc Reversal Orderings

Four cost measures $s_1$, $s_2$, $s_3$, $s_4$ were recently studied for sorting the operations in Lazy propagation with arc reversal (LPAR), a join tree propagation approach to Bayesian network inference. It has been suggested to use $s_1$ with LPAR, since there is an effectiveness ranking, say $s_1$, $s_2$, $s_3$, $s_4$, when the measures are applied in isolation. In this paper, we also suggest using $s_1$ with LPAR, but with $s_2$ breaking $s_1$ ties, $s_3$ breaking $s_2$ ties, and $s_4$ breaking $s_3$ ties. Experimental results show that sometimes there is a noticeable gain to be made.
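
The tie-breaking scheme just described amounts to ordering candidate operations lexicographically by their cost tuples. A minimal sketch (the dictionary-based cost lookup and the example numbers are illustrative; they are not the paper’s cost measure definitions):

```python
# Pick the next arc-reversal operation by cost measure s1, breaking s1 ties
# with s2, s2 ties with s3, and s3 ties with s4: lexicographic order on tuples.

def costs(op):
    """Return the cost tuple (s1, s2, s3, s4) for a candidate operation."""
    return (op["s1"], op["s2"], op["s3"], op["s4"])

def next_operation(candidates):
    """Choose the candidate that is minimal in the lexicographic order."""
    return min(candidates, key=costs)

ops = [
    {"name": "reverse A->B", "s1": 3, "s2": 5, "s3": 1, "s4": 2},
    {"name": "reverse C->B", "s1": 3, "s2": 4, "s3": 9, "s4": 0},
    {"name": "reverse D->E", "s1": 4, "s2": 1, "s3": 1, "s4": 1},
]
print(next_operation(ops)["name"])  # s1 ties at 3; s2 breaks the tie -> "reverse C->B"
```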

Cory J. Butz, Anders L. Madsen, Kevin Williams
Using the Noisy-OR Model Can Be Harmful … But It Often Is Not

The noisy-OR model and its generalizations are frequently used for alleviating the burden of probability elicitation when building Bayesian networks with the help of domain experts. The results from empirical studies consistently suggest that, when compared with a fully expert-quantified network, using the noisy-OR model has just a minor effect on the performance of a network. In this paper, we address this apparent robustness and investigate its origin. Our results show that ill-considered use of the noisy-OR model can substantially decrease a network’s performance, yet also that the model has broader applicability than it was originally designed for.
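
For reference, the standard noisy-OR model the paper investigates (textbook formulation; the code is a sketch of mine, not from the paper):

```python
def noisy_or(link_probs, present, leak=0.0):
    """P(effect occurs | active causes) under the noisy-OR model:
    each active cause i triggers the effect independently with
    probability link_probs[i]; `leak` is the probability that the
    effect occurs even when no cause is active."""
    q = 1.0 - leak  # probability the effect has not been produced
    for p, active in zip(link_probs, present):
        if active:
            q *= 1.0 - p
    return 1.0 - q

# Two of three causes active, link probabilities 0.8 and 0.5:
print(noisy_or([0.8, 0.5, 0.3], [True, True, False]))  # 1 - 0.2*0.5 = 0.9
```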

Steven P. D. Woudenberg, Linda C. van der Gaag
Attaining Monotonicity for Bayesian Networks

Many real-life Bayesian networks are expected to exhibit commonly known properties of monotonicity, in the sense that higher values for the observable variables should make higher values for the main variable of interest more likely. Yet, violations of these properties may be introduced into a network despite careful engineering efforts. In this paper, we present a method for resolving such violations of monotonicity by varying a single parameter probability. Our method constructs intervals of numerical values to which a parameter can be varied to attain monotonicity without introducing new violations. We argue that our method has a high runtime, yet can be of practical value for specific domains.

Merel T. Rietbergen, Linda C. van der Gaag
Importance Sampling on Bayesian Networks with Deterministic Causalities

Importance sampling is a powerful approximate inference technique for Bayesian networks. However, its performance tends to be poor when the network exhibits deterministic causalities. Deterministic causalities yield predictable influences between statistical variables. In other words, only a strict subset of the set of all variable states is permissible to sample. Samples inconsistent with the permissible state space do not contribute to the sum estimate and are effectively rejected during importance sampling. Detecting inconsistent samples is NP-hard, since it amounts to calculating the posterior probability of a sample given some evidence. Several methods have been proposed to cache inconsistent samples to improve sampling efficiency. However, cache-based methods do not effectively exploit overlap in the state patterns generated by determinism in a local network structure to compress state information. In this paper, we propose a new algorithm to reduce the overhead of caching by using an adaptive decision tree to efficiently store and detect inconsistent samples. Experimental results show that the proposed approach outperforms existing methods for sampling Bayesian networks with deterministic causalities.
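
A minimal sketch of the caching idea (mine, using a plain set rather than the paper’s adaptive decision tree): remember assignments already found inconsistent, so later samples hitting them are rejected without re-running the expensive consistency check.

```python
import random

def estimate(draw, consistent, n=10_000):
    """Importance-sampling loop with a cache of known-inconsistent samples.
    draw       -- returns (assignment, importance_weight)
    consistent -- expensive check against the deterministic constraints
    Returns the average weight of consistent samples and the cache."""
    cache, total = set(), 0.0
    for _ in range(n):
        x, w = draw()
        if x in cache:            # cheap rejection, no consistency check
            continue
        if not consistent(x):     # expensive check; remember the failure
            cache.add(x)
            continue
        total += w
    return total / n, cache

# Toy network: two binary variables with the deterministic rule A == B.
draw = lambda: ((random.random() < 0.5, random.random() < 0.5), 1.0)
est, cache = estimate(draw, lambda x: x[0] == x[1])
print(est, cache)  # about 0.5; cache holds the two inconsistent patterns
```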

Haohai Yu, Robert van Engelen
Bayesian Networks and the Imprecise Dirichlet Model Applied to Recognition Problems

This paper describes an Imprecise Dirichlet Model and the maximum entropy criterion for learning Bayesian network parameters under insufficient and incomplete data. The method is applied to two distinct recognition problems, namely facial action unit recognition and activity recognition in video surveillance sequences. The model treats a wide range of constraints that can be specified by experts, and deals with incomplete data using an ad-hoc expectation-maximization procedure. It is also described how the same idea can be used to learn dynamic Bayesian networks. With synthetic data, we show that our proposal and widely used methods, such as the Bayesian maximum a posteriori, achieve similar accuracy. However, on real data, our method performs better than the others, because it does not rely on a single prior distribution, which might be far from the best one.

Cassio P. de Campos, Qiang Ji
On Stopping Evidence Gathering for Diagnostic Bayesian Networks

Sequential approaches to automated test selection for diagnostic Bayesian networks include a stopping criterion for deciding in each iteration whether or not gathering further evidence is opportune. We study the computational complexity of the problem of deciding when to stop evidence gathering in general and show that it is complete for the complexity class NP^PP; we show that the problem remains NP-complete even when it is restricted to networks of bounded treewidth. We argue, however, that under reasonable further restrictions the problem can be feasibly solved for many realistic applications.

Linda C. van der Gaag, Hans L. Bodlaender
SemCaDo: A Serendipitous Strategy for Learning Causal Bayesian Networks Using Ontologies

Learning Causal Bayesian Networks (CBNs) is a new line of research in the machine learning field. Among the existing works in this direction [8,12,13], few have taken into account the gain that can be expected from integrating additional knowledge during the learning process. In this paper, we present a new serendipitous strategy for learning CBNs using prior knowledge extracted from ontologies. The integration of such domain semantic information can be very useful for revealing new causal relations and providing the knowledge needed to anticipate the optimal choice of experiments. Our strategy also supports the evolving character of the semantic background by reusing the causal discoveries in order to enrich the domain ontologies.

Montassar Ben Messaoud, Philippe Leray, Nahla Ben Amor
Scaling Up the Greedy Equivalence Search Algorithm by Constraining the Search Space of Equivalence Classes

Greedy Equivalence Search (GES) is nowadays the state-of-the-art algorithm for learning Bayesian networks (BNs) from complete data. However, from a practical point of view, this algorithm may not be fast enough to work in high-dimensionality domains. This paper proposes some variants of GES aimed at increasing its efficiency. Under the faithfulness assumption, the modified algorithms preserve the same theoretical properties as the original one, that is, they recover a perfect map of the target distribution in the large sample limit. Moreover, experimental results confirm that, although they carry out far fewer computations, the BNs learnt by these algorithms have the same quality as those learnt by GES.

Juan I. Alonso-Barba, Luis de la Ossa, Jose A. Gámez, Jose M. Puerta
Extensions of Decision-Theoretic Troubleshooting: Cost Clusters and Precedence Constraints

In decision-theoretic troubleshooting [5,2], we try to find a cost-efficient repair strategy for a malfunctioning device described by a formal model. The need to schedule repair actions under uncertainty has required researchers to use an appropriate knowledge representation formalism, often a probabilistic one.

The troubleshooting problem has received considerable attention over the past two decades. Efficient solution algorithms have been found for some variants of the problem, whereas other variants have been proven NP-hard [5,2,4,17,16].

We show that two troubleshooting scenarios, Troubleshooting with Postponed System Test [9] and Troubleshooting with Cost Clusters without Inside Information [7], are NP-hard. Also, we define a troubleshooting scenario with precedence restrictions on the repair actions. We show that it is NP-hard in general, but polynomial under some restrictions placed on the structure of the precedence relation. In the proofs, we use results originally achieved in the field of scheduling; such a connection has not been made in the troubleshooting literature so far.

Václav Lín
Locally Averaged Bayesian Dirichlet Metrics

The marginal likelihood of the data computed using Bayesian score metrics is at the core of score+search methods for learning Bayesian networks from data. However, common formulations of these Bayesian score metrics depend on free parameters which are hard to assess. Recent theoretical and experimental works have also shown that the commonly employed BDeu score metric is strongly biased by the particular assignment of its free parameter, known as the equivalent sample size, and that an optimal selection of this parameter depends on the underlying distribution. Because of this sensitivity, wrong choices of this parameter lead to inferred models which do not properly represent the distribution generating the data, even with large sample sizes. To overcome this issue, we introduce an approach which marginalizes this free parameter with a simple averaging method. As experimentally shown, this approach robustly performs as well as an optimal selection of the parameter, while preventing wrong settings of this widely applied Bayesian score metric.
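
For orientation, the BDeu local score whose free parameter is at issue has the standard form below (restated from the literature, with $\alpha$ the equivalent sample size, $q_i$ the number of parent configurations of $X_i$, $r_i$ its number of states, and $N_{ijk}$ the matching data counts); the paper’s proposal, as described above, averages such scores over several values of $\alpha$ instead of fixing one:

$$ \mathrm{score}(X_i \mid \mathbf{Pa}_i) \;=\; \prod_{j=1}^{q_i} \frac{\Gamma(\alpha/q_i)}{\Gamma(\alpha/q_i + N_{ij})} \prod_{k=1}^{r_i} \frac{\Gamma\big(\alpha/(r_i q_i) + N_{ijk}\big)}{\Gamma\big(\alpha/(r_i q_i)\big)}. $$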

Andrés Cano, Manuel Gómez-Olmedo, Andrés R. Masegosa, Serafín Moral
Mixture of Markov Trees for Bayesian Network Structure Learning with Small Datasets in High Dimensional Space

The recent explosion of high dimensionality in datasets for several domains has posed a serious challenge to existing Bayesian network structure learning algorithms. Local search methods represent a solution in such spaces but suffer with small datasets. MMHC (Max-Min Hill-Climbing) is one of these local search algorithms, where a first phase aims at identifying a possible skeleton by using some statistical association measurements and a second phase performs a greedy search restricted by this skeleton. We propose to replace the first phase, which is imprecise when the amount of data is relatively small, by an application of the “Perturb and Combine” framework that we have already studied in density estimation using mixtures of bagged trees.

Sourour Ammar, Philippe Leray
Finding P–Maps and I–Maps to Represent Conditional Independencies

The representation problem of independence models is studied by focusing on acyclic directed graphs (DAGs). We present the algorithm PC* in order to look for a perfect map. However, when a perfect map does not exist, so that PC* fails, it is interesting to find a minimal I-map which represents as many triples as possible in J*. Therefore we describe an algorithm which finds such a map by means of a backtracking procedure.

Marco Baioletti, Giuseppe Busanello, Barbara Vantaggi
Marginalization without Summation: Exploiting Determinism in Factor Algebra

It is known that solving an exact inference problem on a discrete Bayesian network with many deterministic nodes can be far cheaper than what would be expected based on its treewidth. In this article, we introduce a novel technique for this: to the operations of factor multiplication and factor summation that form the basis of many inference algorithms, we add factor indexing. We integrate this operation into variable elimination, and extend the minweight heuristic accordingly. A preliminary empirical evaluation gives promising results.

Sander Evers, Peter J. F. Lucas

Belief Functions

Independence and 2-Monotonicity: Nice to Have, Hard to Keep

When using lower probabilities to model uncertainty about the value assumed by a variable, 2-monotonicity is an interesting property to satisfy, as it greatly facilitates further treatments (such as the computation of lower/upper expectation bounds). In this paper, we show that multivariate joint models induced from marginal ones by strong independence, epistemic independence or epistemic irrelevance do not usually preserve such a property, even if it is satisfied by all marginal models. We then propose a joint model outer-approximating those induced by strong and epistemic independence and study some of its properties.

Sebastien Destercke
Constructing Dynamic Frames of Discernment in Cases of Large Number of Classes

The Dempster-Shafer theory (DST) is particularly interesting for dealing with imprecise information. However, it is known for its high computational cost, as dealing with a frame of discernment Ω involves the manipulation of up to $2^{|\Omega|}$ elements. Hence, classification problems where the number of classes is too large cannot be considered. In this paper, we propose to take advantage of an ensemble classification context to construct a frame of discernment where only a subset of classes is considered. We apply this method to script recognition problems, which by nature involve a tremendous number of classes.

Yousri Kessentini, Thomas Burger, Thierry Paquet
On Consistent Approximations of Belief Functions in the Mass Space

In this paper we study the class of consistent belief functions, as counterparts of consistent knowledge bases in classical logic. We prove that such a class can be defined univocally no matter which definition of proposition implied by a belief function is adopted. As consistency can be desirable in decision making, the problem of mapping an arbitrary belief function to a consistent one arises, and can be posed in a geometric setup. We analyze here all the consistent transformations induced by minimizing $L_p$ distances between belief functions, represented by the vectors of their basic probabilities.

Fabio Cuzzolin
Generalized Information Theory Based on the Theory of Hints

The aggregate uncertainty is the only known functional for Dempster-Shafer theory that generalizes the Shannon and Hartley measures and satisfies all classical requirements for uncertainty measures, including subadditivity. Although the question has been posed several times in the literature, it is still open whether the aggregate uncertainty is unique under these properties. This paper derives an uncertainty measure based on the theory of hints and shows its equivalence to the pignistic entropy. It does not satisfy subadditivity, but the viewpoint of hints uncovers a weaker version of subadditivity. On the other hand, the pignistic entropy has some crucial advantages over the aggregate uncertainty, namely the explicitness of its formula and its sensitivity to changes in evidence. We observe that neither of the two measures captures the full uncertainty of hints and propose an extension of the pignistic entropy, called the hints entropy, that satisfies all axiomatic requirements, including subadditivity, while preserving the above advantages over the aggregate uncertainty.
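
Assuming pignistic entropy denotes the Shannon entropy of the pignistic transform (my reading of the term, with $m$ a normalized mass function on the frame $\Omega$):

$$ \mathrm{Bet}P(x) \;=\; \sum_{A \ni x} \frac{m(A)}{|A|}, \qquad H_{\mathrm{Bet}}(m) \;=\; -\sum_{x \in \Omega} \mathrm{Bet}P(x) \log_2 \mathrm{Bet}P(x), $$

which makes the two advantages cited above, an explicit formula and sensitivity to changes in evidence, immediate.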

Marc Pouly
Towards an Alarm for Opposition Conflict in a Conjunctive Combination of Belief Functions

In the framework of belief functions, information fusion is based on the construction of a unique belief function resulting from the combination of the available belief functions induced from several information sources. When sources are reliable and distinct, Smets’ conjunctive rule, which is equivalent to Dempster’s rule of combination without the normalization step, can be used. This rule has interesting properties, but in return the empty set is an absorbing element: a series of conjunctive combinations tends to bring a mass equal to 1 to the empty set, making it impossible to distinguish a real problem from an artifact of this absorbing behaviour. A formalism allowing the preservation of the conflict that reflects the opposition between sources is therefore introduced in this paper. Based on the normalization process and on distance measures between belief functions, it is tested and compared with classic conjunctive operators on synthetic belief functions.
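
Smets’ conjunctive rule as used above is Dempster’s combination without renormalization, so the mass landing on the empty set is exactly the conflict the paper wants to keep visible. A small sketch (standard definition; code mine):

```python
def conjunctive(m1, m2):
    """Smets' conjunctive rule for two mass functions given as dicts
    mapping frozenset focal elements to masses. No normalization, so
    mass may accumulate on frozenset(): the conflict."""
    out = {}
    for a, wa in m1.items():
        for b, wb in m2.items():
            c = a & b
            out[c] = out.get(c, 0.0) + wa * wb
    return out

omega = frozenset({"ok", "faulty"})
m1 = {frozenset({"ok"}): 0.7, omega: 0.3}
m2 = {frozenset({"faulty"}): 0.6, omega: 0.4}
print(conjunctive(m1, m2)[frozenset()])  # 0.42 of the mass is conflict
```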

Éric Lefèvre, Zied Elouedi, David Mercier
E2GK: Evidential Evolving Gustafson-Kessel Algorithm for Data Streams Partitioning Using Belief Functions

A new online clustering method, called E2GK (Evidential Evolving Gustafson-Kessel), is introduced in the theoretical framework of belief functions. The algorithm enables online partitioning of data streams based on two existing and efficient algorithms: Evidential c-Means (ECM) and Evolving Gustafson-Kessel (EGK). E2GK uses the concept of credal partition from ECM and adapts EGK, offering a better interpretation of the data structure. Experiments with synthetic data sets show the good performance of the proposed algorithm compared to the original online procedure.

Lisa Serir, Emmanuel Ramasso, Noureddine Zerhouni
Evidential Markov Decision Processes

This paper proposes a new model, the EMDP (Evidential Markov Decision Process). It is an MDP (Markov Decision Process) for belief functions in which rewards are defined for each state transition, as in a classical MDP, whereas the transitions are modeled as in an EMC (Evidential Markov Chain), i.e. they are transitions between sets of states instead of transitions between states. The EMDP fits more applications than an MDPST (MDP with Set-valued Transitions). Generalizing to belief functions allows us to cope with applications with high uncertainty (imprecise or missing data) where probabilistic approaches fail. Implementation results are shown on a search-and-rescue unmanned rotorcraft benchmark.

Hélène Soubaras, Christophe Labreuche, Pierre Savéant
Continuous Belief Functions to Qualify Sensors Performances

In this paper, we deal with the problem of sensor performance estimation. As we assume that the sensor is described by only a few data, we decide to use the theory of belief functions to represent the inherent uncertainty of our information. Hence, we introduce the belief functions framework, especially in its continuous version. We describe the model of sensor adopted in our study. Knowing the experimental setting, we suggest an approach to model the sources of information describing our sensor. Finally, we combine these sources in order to estimate the sensor’s performance.

Pierre-Emmanuel Doré, Christophe Osswald, Arnaud Martin, Anne-Laure Jousselme, Patrick Maupin

Belief Revision and Inconsistency Handling

Measuring Consistency Gain and Information Loss in Stepwise Inconsistency Resolution

Inconsistency is a usually undesirable feature of many kinds of data and knowledge. But altering the information in order to make it less inconsistent may result in the loss of information. In this paper we analyze this trade-off. We review some existing proposals and make new proposals for measures of inconsistency and information. We prove that in both cases the various measures are all pairwise incompatible. Then we introduce the concept of stepwise inconsistency resolution and show what happens when an inconsistency resolution step applies a deletion, a weakening, or a splitting operation.

John Grant, Anthony Hunter
Relating Truth, Knowledge and Belief in Epistemic States

We define and investigate a structure incorporating what is true, what is known and what is believed by a rational agent in models of the S4.2 logic of knowledge. The notion of $KB_R$-structures introduced here provides a fine-grained modal analysis of an agent’s epistemic state, one that differentiates knowledge from belief and accounts for an agent without full introspective power (concerning her knowledge sets). Many epistemic properties of this structure are proved, and it is shown that belief collapses to a Stalnaker stable set (while knowledge does not). Finally, a representation theorem is proved which exactly matches $KB_R$-structures to S4.2 models of the world.

Costas D. Koutras, Yorgos Zikos
How Strong Can an Agent Believe Reported Information?

This paper studies how an agent can believe a new piece of information when it is reported through different sources, each of them citing another one. Two models are presented, the first one in modal logic and the other in the theory of evidence. Both consider two important properties of the information sources: their validity, i.e. their ability to report true information, and their invalidity, i.e. their ability to report false information.

Laurence Cholvy
Logic-Based Fusion of Complex Epistemic States

In this work we extend the framework of logic-based fusion by Konieczny and Pino Pérez [8,9,10,11] to complex epistemic states. We present some postulates, given in terms of finite propositional logic, which define merging operators for complex epistemic states. We state some representation theorems for these operators. When we consider concrete spaces, namely those where the epistemic states are total pre-orders over valuations, we obtain strong representation theorems. This new framework allows us to generalize, in a natural way, the revision operators presented by Benferhat et al. [2]. As an application of our representation theorems, we define some merging operators over complex epistemic states and show some examples of these operators at work.

Amílcar Mata Díaz, Ramón Pino Pérez

Classification and Clustering

Latent Tree Classifier

We propose a novel generative model for classification called the latent tree classifier (LTC). An LTC represents each class-conditional distribution of attributes using a latent tree model, and uses Bayes’ rule to make predictions. Latent tree models can capture complex relationships among attributes. Therefore, an LTC can approximate the true distribution behind the data well and thus achieve good classification accuracy. We present an algorithm for learning LTCs and empirically evaluate it on 37 UCI data sets. The results show that LTC compares favorably to the state of the art. We also demonstrate that LTC can reveal underlying concepts and discover interesting subgroups within each class.
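
In symbols, the prediction rule implied by this description (standard generative-classifier form, notation mine): each class $c$ gets a latent tree model giving $P(\mathbf{x} \mid c)$ after summing out the latent variables, and

$$ \hat{c}(\mathbf{x}) \;=\; \arg\max_{c} \; P(c)\, P(\mathbf{x} \mid c). $$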

Yi Wang, Nevin L. Zhang, Tao Chen, Leonard K. M. Poon
When Learning Naive Bayesian Classifiers Preserves Monotonicity

Naive Bayesian classifiers are used in a large range of application domains. These models generally show good performance despite their strong underlying assumptions. In this paper, however, we demonstrate by means of an example probability distribution that a data set of instances can give rise to a classifier with counterintuitive behaviour. We argue that such behaviour can be attributed to the learning algorithm having constructed incorrect directions of monotonicity for some of the feature variables involved. We further show that conditions can be derived under which the learning algorithm retrieves the correct directions.

Barbara F. I. Pieters, Linda C. van der Gaag, Ad Feelders
Possibilistic Classifiers for Uncertain Numerical Data

In many real-world problems, input data may be pervaded with uncertainty. Naive possibilistic classifiers have been proposed as a counterpart to Bayesian classifiers for dealing with classification tasks in the presence of uncertainty. Following this line, we extend possibilistic classifiers, which have recently been adapted to numerical data, in order to cope with uncertainty in data representation. We consider two types of uncertainty: i) the uncertainty associated with the class in the training set, which is modeled by a possibility distribution over class labels, and ii) the imprecision pervading attribute values in the testing set, represented in the form of intervals for continuous data. We first adapt the possibilistic classification model, previously proposed for the certain case, in order to accommodate uncertainty about class labels. Then, we propose an extension-principle-based algorithm to deal with imprecise attribute values. The experiments reported show the interest of possibilistic classifiers for handling uncertainty in data. In particular, the probability-to-possibility-transform-based classifier shows robust behavior when dealing with imperfect data.

Myriam Bounhas, Henri Prade, Mathieu Serrurier, Khaled Mellouli

Default Reasoning and Logics for Reasoning under Uncertainty

Relational Probabilistic Conditional Reasoning at Maximum Entropy

This paper presents and compares approaches for reasoning with relational probabilistic conditionals, i.e. probabilistic conditionals in a restricted first-order environment. It is well known that conditionals play a crucial role in default reasoning; however, most formalisms are based on propositional conditionals, which restricts their expressivity. The formalisms discussed in this paper are relational extensions of a propositional conditional logic based on the principle of maximum entropy. We show how this powerful principle can be used in different ways to realize model-based inference relations for first-order probabilistic knowledge bases. We illustrate and compare the different approaches by applying them to several benchmark examples, and we evaluate each approach with respect to properties adopted from default reasoning. We also compare our approach to Bayesian logic programs (BLPs) from the field of statistical relational learning, which likewise focuses on combining probabilistic reasoning with relational knowledge representation.

Matthias Thimm, Gabriele Kern-Isberner, Jens Fisseler
Probabilistic Approach to Nonmonotonic Consequence Relations

The paper offers probabilistic characterizations of determinacy preservation, fragmented disjunction and conditional excluded middle for preferential relations. The paper also presents a preferential relation that is above Disjunctive Rationality and strictly below Rational Monotonicity. This so-called (ε, μ)-relation is constructed using a positive infinitesimal ε and a finitely additive hyperreal-valued probability measure μ on the set of propositional formulas.

Dragan Doder, Aleksandar Perović, Zoran Ognjanović
Bridging the Gap between Reinforcement Learning and Knowledge Representation: A Logical Off- and On-Policy Framework

Knowledge representation is an important issue in reinforcement learning. In this paper, we bridge the gap between reinforcement learning and knowledge representation by providing a rich knowledge representation framework, based on normal logic programs with answer set semantics, that is capable of solving model-free reinforcement learning problems for more complex domains and exploits domain-specific knowledge. We prove the correctness of our approach. We show that the complexity of finding an offline and an online policy for a model-free reinforcement learning problem in our approach is NP-complete. Moreover, we show that any model-free reinforcement learning problem in an MDP environment can be encoded as a SAT problem. The importance of this is that model-free reinforcement learning problems can now be solved as SAT problems.

Emad Saad
Answer Set Programming for Computing Decisions Under Uncertainty

Possibility theory offers a qualitative framework for modeling decision under uncertainty. In this setting, pessimistic and optimistic decision criteria have been formally justified. The computation by means of possibilistic logic inference of optimal decisions according to such criteria has been proposed. This paper presents an Answer Set Programming (ASP)-based methodology for modeling decision problems and computing optimal decisions in the sense of the possibilistic criteria. This is achieved by applying both a classic and a possibilistic ASP-based methodology in order to handle both a knowledge base pervaded with uncertainty and a prioritized preference base.
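
The pessimistic and optimistic criteria referred to are presumably the standard qualitative utilities of possibility theory (stated here in my notation, on a $[0,1]$ scale, for a decision $d$ inducing a possibility distribution $\pi_d$ over states with qualitative utility $u$):

$$ U_{\mathrm{pes}}(d) = \min_{s}\, \max\big(1 - \pi_d(s),\, u(s)\big), \qquad U_{\mathrm{opt}}(d) = \max_{s}\, \min\big(\pi_d(s),\, u(s)\big). $$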

Roberto Confalonieri, Henri Prade
Quasi Conjunction and Inclusion Relation in Probabilistic Default Reasoning

We study, in the setting of probabilistic default reasoning under coherence, the quasi conjunction, which is a basic notion for defining consistency of conditional knowledge bases, and the Goodman & Nguyen inclusion relation for conditional events. We deepen two results given in a previous paper: the first concerns p-entailment from a finite family $\mathcal{F}$ of conditional events to the quasi conjunction $\mathcal{C}(\mathcal{S})$, for each nonempty subset $\mathcal{S}$ of $\mathcal{F}$; the second analyzes the equivalence between p-entailment from $\mathcal{F}$ and p-entailment from $\mathcal{C}(\mathcal{S})$, where $\mathcal{S}$ is some nonempty subset of $\mathcal{F}$. We also characterize p-entailment by some alternative theorems. Finally, we deepen the connections between p-entailment and the inclusion relation by introducing, for a pair $(\mathcal{F},E|H)$, the class of the subsets $\mathcal{S}$ of $\mathcal{F}$ such that $\mathcal{C}(\mathcal{S})$ implies $E|H$. This class is additive and has a greatest element, which can be determined by applying a suitable algorithm.
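
For reference, the quasi conjunction of two conditional events in Adams’ sense (standard definition, restated in my notation):

$$ \mathcal{C}(A|H,\, B|K) \;=\; \big((A \wedge H) \vee \neg H\big) \wedge \big((B \wedge K) \vee \neg K\big) \;\big|\; (H \vee K), $$

i.e. each conditional either holds or is void, given that at least one of them is applicable.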

Angelo Gilio, Giuseppe Sanfilippo
Handling Exceptions in Logic Programming without Negation as Failure

Default rules, i.e. statements of the form “normally a’s are b’s”, are usually handled in Answer Set Programming by means of negation as failure, which provides a way to capture exceptions to normal situations. In this paper we propose another approach, which offers an operational counterpart to negation as failure and may be thought of as a corresponding dual attitude. The approach amounts to an explicit rewriting of exceptions in default rules, together with the addition of completion rules that are consistent with current knowledge. It is shown that the approach can be applied to restore the consistency of inconsistent programs that implicitly involve a specificity ordering between the rules. The approach is compared to previous works aiming at providing support for the rewriting of default rules. It is also shown how the proposed approach agrees with the results obtained in the classical way.

Roberto Confalonieri, Henri Prade, Juan Carlos Nieves
Probabilistic Stit Logic

We define an extension of stit logic that encompasses subjective probabilities representing beliefs about simultaneous choice exertion of other agents. This semantics enables us to express that an agent sees to it that a condition obtains under a minimal chance of success. We first define the fragment of XSTIT where choice exertion is not collective. Then we add effect probability lower bounds to the stit syntax, and define the semantics in terms of subjective probabilities concerning choice exertion of other agents. We show how the resulting probabilistic stit logic faithfully generalizes the non-probabilistic XSTIT fragment.

Jan M. Broersen
Overriding Subsuming Rules

This paper is concerned with intelligent agents that are able to perform nonmonotonic reasoning, not only with, but also about general rules with exceptions. More precisely, the focus is on enriching a knowledge base Γ with a general rule that is subsumed by another rule already there. Such a problem is important because evolving knowledge need not follow logic, as is well known from e.g. the belief revision paradigm. However, belief revision is mainly concerned with the case where the extra information logically conflicts with Γ; otherwise, the extra knowledge simply extends Γ with no change altogether. The problem here is different and may require a change in Γ even though no inconsistency arises. The idea is that when a rule is to be added, it might need to override any rule that subsumes it: preemption must take place. A formalism dedicated to reasoning with and about rules with exceptions is introduced. An approach to dealing with preemption over such rules is then developed.

Philippe Besnard, Éric Grégoire, Sébastien Ramon

Foundations of Reasoning and Decision Making under Uncertainty

Pseudo-polynomial Functions over Finite Distributive Lattices

In this paper we extend the authors’ previous works [6,7] by considering an aggregation model $f\colon X_{1}\times\cdots\times X_{n}\rightarrow Y$ for arbitrary sets $X_{1},\ldots,X_{n}$ and a finite distributive lattice $Y$, factorizable as

$$ f(x_{1},\ldots,x_{n})=p(\varphi_{1}(x_{1}),\ldots,\varphi_{n}(x_{n})), $$

where $p$ is an $n$-variable lattice polynomial function over $Y$, and each $\varphi_{k}$ is a map from $X_{k}$ to $Y$. Following the terminology of [6,7], these are referred to as pseudo-polynomial functions.

We present an axiomatization for this class of pseudo-polynomial functions which differs from the previous ones both in flavour and nature, and develop general tools which are then used to obtain all possible such factorizations of a given pseudo-polynomial function.

Miguel Couceiro, Tamás Waldhauser
A Bridge between Probability and Possibility in a Comparative Framework

The paper studies the connection between comparative probability and comparative plausibility, with a particular emphasis on comparative possibility. We consider a comparative probability on an algebra and extend it to a different algebra. We prove that, in general, the upper extension of the given comparative probability is a comparative plausibility. By considering a suitable condition of weak logical independence between the two partitions related to the atoms of the two algebras, we prove that the upper ordinal relation is a comparative possibility. These results hold for comparative probability not necessarily representable by a numerical probability.

Giulianella Coletti, Romano Scozzafava, Barbara Vantaggi
Leximax Relations in Decision Making through the Dominance Plausible Rule

In qualitative decision theory, a very natural way of defining preference relations ≽ over policies (acts) is by using the so-called Dominance Plausible Rule. To do so, we need a relation > over the consequences and a relation $\sqsupseteq$ over the events. Very interesting axiomatic characterizations, à la Savage, have been established for these decision relations [6,7], namely when the relation $\sqsupseteq$ is a possibilistic relation. Unfortunately, this kind of decision relation is not discriminating enough. We have therefore searched for decision rules that discriminate more than those defined through a possibilistic relation. In particular, in this work we study decision relations defined by the Dominance Plausible Rule using a leximax relation $\sqsupseteq$. We give an axiomatic characterization of these decision relations.

Franklin Camacho, Ramón Pino Pérez
Parameterized Uncertain Reasoning Approach Based on a Lattice-Valued Logic

This paper presents a parameterized approach to reasoning under uncertainty based on a lattice-valued logic system. In this approach, parameters are used to represent uncertainty arising from different sources, a common phenomenon in rule-based systems. In our system, reasoning with different parameter values means reasoning with different levels of belief and consistency. Methods are presented for selecting appropriate parameter values during the uncertain reasoning process, allowing us to find parameter values that meet diverse practical and theoretical requirements.

Shuwei Chen, Jun Liu, Hui Wang, Juan Carlos Augusto

Fuzzy Sets and Fuzzy Logic

From Preference Relations to Fuzzy Choice Functions

This is a first approach to the study of the connection between fuzzy preference relations and fuzzy choice functions. In particular, we depart from a fuzzy preference relation and study the conditions it must satisfy in order to obtain a fuzzy choice function from it. We are particularly interested in one function: the G-rationalization. We discuss the relevance of the completeness condition on the departing preference relation. We prove that not every non-complete fuzzy preference relation leads to a choice function.

Davide Martinetti, Ignacio Montes, Susana Díaz
Fuzzy Relational Inequations and Equations in the Framework of Control Problems

The paper deals with fuzzy relational inequations and equations connected with closed fuzzy sets under a fixed fuzzy relation over the same domain. Such formulas arise in the framework of control problems. We show that the fuzzy sets that are solutions of these inequations and the corresponding equations form a descending sequence with particular lower bounds, which are also analyzed. Our approach is based on complete lattices as structures of membership values, which makes this investigation more general than the classical real-interval-valued approach.

Jorge Jiménez, Susana Montes, Branimir Šešelja, Andreja Tepavčević
Fuzzy Autoepistemic Logic: Reflecting about Knowledge of Truth Degrees

Autoepistemic logic is one of the principal formalisms for nonmonotonic reasoning. It extends propositional logic by offering the ability to reason about an agent’s (lack of) knowledge or beliefs. Moreover, it is well known to generalize the stable model semantics of answer set programming. Fuzzy logics, on the other hand, are multi-valued logics which allow modeling the intensity with which a property is satisfied. We combine these ideas into a fuzzy autoepistemic logic which can be used to reason about one’s knowledge of the degrees to which properties are satisfied. In this paper we show that many properties from classical autoepistemic logic remain valid under this generalization, and that the important relation between autoepistemic logic and answer set programming is preserved, in the sense that fuzzy autoepistemic logic generalizes fuzzy answer set programming.

Marjon Blondeel, Steven Schockaert, Martine De Cock, Dirk Vermeir
Belief Functions on MV-Algebras of Fuzzy Events Based on Fuzzy Evidence

Recently, Kroupa has proposed a generalization of belief functions on MV-algebras, the latter being the chosen algebraic setting for fuzzy (or many-valued) events. However, Kroupa’s belief functions evaluate the degree of belief in the occurrence of fuzzy events by taking into account (weighted) evidence on classical subsets. In other words, the focal elements used in determining the degree of belief are classical sets. Within the MV-algebraic setting, the aim of the present work is to introduce a generalization of Kroupa belief functions that can deal with fuzzy events supported by evidence on fuzzy subsets.

T. Flaminio, L. Godo, E. Marchioni
Order Compatible Fuzzy Relations and Their Elicitation from General Fuzzy Partitions

We study a special kind of fuzzy relations capable of modeling that two elements of the universe of discourse are similar to the extent that they are close to each other with respect to a given scale. These relations are reflexive and symmetric but not necessarily T-transitive. We study the requirements for constructing such relations from a large class of fuzzy partitions that obey some useful but not severely constraining requirements. We give some formal results, including a lower bound (in terms of fuzzy set inclusion) on the relations of this class that can be derived from the general class of fuzzy partitions.

Sandra Sandri, Flávia Toledo Martins-Bedé

Implementation and Applications of Uncertain Systems

Web Services and Incerta Spiriti: A Game Theoretic Approach to Uncertainty

A web service is a remote computational facility which is made available for general use by means of the internet. An orchestration is a multi-threaded computation which invokes remote services. In this paper game theory is used to analyse the behaviour of orchestration evaluations when the underlying web services are unreliable. Uncertainty profiles are proposed as a means of defining bounds on the number of service failures that can be expected during an orchestration evaluation. An uncertainty profile describes a strategic situation that can be analysed using a zero-sum angel-daemon game with two competing players: an angel $\mathfrak{a}$ whose objective is to minimize damage to an orchestration and a daemon $\mathfrak{d}$ who acts in a destructive fashion. An uncertainty profile is assessed using the value of its angel-daemon game. It is shown that uncertainty profiles form a partial order which is monotonic with respect to assessment.

Joaquim Gabarro, Maria Serna, Alan Stewart
Underwater Archaeological 3D Surveys Validation within the Removed Sets Framework

This paper presents the results of the VENUS European project, aimed at providing scientific methodologies and technological tools for the virtual exploration of deep-water archaeological sites. We focus on the problem of validating underwater archaeological 3D surveys. The paper shows how the validation problem has been tackled within the Removed Sets framework, according to Removed Sets Fusion (RSF) and to Partially Preordered Removed Sets Inconsistency Handling (PPRSIH). Both approaches have been implemented in ASP, and the good behaviour of the Removed Sets operations is presented through an experimental study on two underwater archaeological sites.

Julien Hué, Mariette Sérayet, Pierre Drap, Odile Papini, Eric Würbel
Adaptive Dialogue Strategy Selection through Imprecise Probabilistic Query Answering

In a human-computer dialogue system, the dialogue strategy can range from very restrictive to highly flexible. Each specific dialogue style has its pros and cons, and a dialogue system needs to select the most appropriate style for a given user. During the course of interaction, the dialogue style can change based on a user’s response and the system’s observation of the user. This allows a dialogue system to understand a user better and provide a more suitable way of communication. Since measures of the quality of the user’s interaction with the system can be incomplete and uncertain, frameworks for reasoning with uncertain and incomplete information can help the system make better decisions when it chooses a dialogue strategy. In this paper, we investigate how to select a dialogue strategy based on aggregating the factors detected during the interaction with the user. For this purpose, we use probabilistic logic programming (PLP) to model probabilistic knowledge about how these factors will affect the degree of freedom of a dialogue. When a dialogue system needs to know which strategy is more suitable, an appropriate query can be executed against the PLP and a probabilistic solution with a degree of satisfaction is returned. The degree of satisfaction reveals how much the system can trust the probability attached to the solution.

Ian O’Neill, Anbu Yue, Weiru Liu, Phil Hanna

Possibility Theory and Possibilistic Logic

Statistical Estimations of Lattice-Valued Possibilistic Distributions

The most often applied non-numerical uncertainty degrees are those taking their values in complete lattices, but their weakened versions may also be of interest. In what follows, we introduce and analyze possibilistic distributions and measures taking values in finite upper-valued possibilistic lattices, so that the supremum is defined only for finite sets of such values. For infinite sets of values of the finite lattice in question, we apply the idea of the so-called Monte Carlo method: sample at random, under certain conditions, a large enough finite subset of the infinite set in question, and take the supremum over this finite sample set as a “good enough” estimation of the undefined supremum of the infinite set. A number of assertions demonstrate when and in which sense the results obtained by replacing non-existing or non-accessible supremum values with their random estimations tend to the optimum results, supposing that the probabilistic qualities of the statistical estimations increase as demanded by Monte Carlo methods.
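
A minimal sketch of the sampling idea described above (toy code of mine, not the paper’s formal construction): approximate an inaccessible supremum by folding the lattice join over a large random sample.

```python
import random

def estimated_sup(sample_value, join, bottom, n=1000):
    """Estimate the supremum of an infinite family of lattice values by
    joining n randomly sampled members, starting from the least element."""
    acc = bottom
    for _ in range(n):
        acc = join(acc, sample_value())
    return acc

# Toy lattice: subsets of {0,...,9} ordered by inclusion, join = union.
draw = lambda: frozenset(random.sample(range(10), 3))
print(estimated_sup(draw, frozenset.union, frozenset()))
# With high probability the estimate equals the true supremum {0,...,9}.
```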

Ivan Kramosil, Milan Daniel
Compiling Min-based Possibilistic Causal Networks: A Mutilated-Based Approach

Qualitative causal possibilistic networks are important tools for handling uncertain information in the possibility theory framework. Despite their importance, no compilation has been performed to support causal reasoning in the possibility theory framework. This paper proposes two compilation-based inference algorithms for min-based possibilistic causal networks. The first is a possibilistic adaptation of the probabilistic inference method of [8] and the second is a purely possibilistic approach. Both are based on an encoding of the network into a propositional theory and a compilation of this output in order to efficiently compute the effect of both observations and interventions, while adopting a mutilation strategy.

Raouia Ayachi, Nahla Ben Amor, Salem Benferhat
Possibilistic Evidence

The paper investigates a qualitative counterpart of Shafer’s evidence theory, where the basic probability assignment is turned into a basic possibility assignment whose weights have 1 as a maximum. The associated set functions, playing the role of belief, plausibility, commonality, and the dual of the latter, are now defined on a “maxitive” rather than on an additive basis. Although this possibilistic evidence setting has been suggested for a long time, and has a clear relation with the study of qualitative Möbius transforms, it has not been really systematically studied and considered for itself as a general qualitative representation framework. It can be viewed as defining imprecise possibilities, and encompasses standard possibilistic representations as a particular case. The paper particularly focuses on a generalized set-theoretic view of this setting and discusses the entailment relation between basic possibility assignments as well as combination rules.

Henri Prade, Agnès Rico

Uncertainty in Databases

A Preference Query Model Based on a Fusion of Local Orders

In this paper, we define an approach to database preference queries based on the fusion of local orders. The situation considered is that of queries involving incommensurable partial preferences, possibly associated with scoring functions. The basic principle is to rank the tuples according to each partial preference, then to merge the local orders obtained, using a linear function for aggregating the local scores attached to the tuples. Basically, a local score expresses the extent to which a tuple is strictly better than many others and not strictly worse than many others with respect to the partial preference attached to a given attribute. This model refines Pareto order for queries of the Skyline type.

Patrick Bosc, Olivier Pivert, Grégory Smits
Approximate Achievability in Event Databases

An event DB is a database about states (of the world) and events (taken by an agent) whose effects are not well understood. Event DBs are omnipresent in the social sciences and may include diverse scenarios from political events and the state of a country to education-related actions and their effects on a school system. We consider the following problem: given an event DB $\mathcal{K}$ representing historical events (what the state was and what actions were taken at various past time points), and given a goal we wish to accomplish, what “change attempts” can the agent make so as to “optimize” the potential achievement of the goal? We define a formal version of this problem and derive results on its complexity. We then present a basic algorithm that provably provides a correct solution to finding an optimal state change attempt, as well as an enhanced algorithm that is built on top of the well-known trie data structure and is also provably correct. We show correctness and algorithmic complexity results for both algorithms and report on experiments comparing their performance on synthetic data.

Austin Parker, Gerardo I. Simari, Amy Sliva, V. S. Subrahmanian
A Probabilistic Interpretation for a Geometric Similarity Measure

A Boolean logic-based evaluation of a database query returns true on a match and false on a mismatch. Unfortunately, there are many application scenarios where such an evaluation is not possible or does not adequately meet user expectations about vague and uncertain conditions. Consequently, there is a need for incorporating impreciseness and proximity into a logic-based query language. In this work we propose a probabilistic interpretation for our query language CQQL, which is based on a geometric retrieval model. In detail, we show that CQQL can evaluate arbitrary similarity conditions in a probabilistic fashion. Furthermore, we lay a theoretical foundation for the combination of CQQL with other probabilistic semantics.

Sebastian Lehrack, Ingo Schmitt
Backmatter
Metadata
Title
Symbolic and Quantitative Approaches to Reasoning with Uncertainty
Edited by
Weiru Liu
Copyright Year
2011
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-22152-1
Print ISBN
978-3-642-22151-4
DOI
https://doi.org/10.1007/978-3-642-22152-1
