
2007 | Book

Symbolic and Quantitative Approaches to Reasoning with Uncertainty

9th European Conference, ECSQARU 2007, Hammamet, Tunisia, October 31 - November 2, 2007. Proceedings


Table of Contents

Frontmatter

Invited Talks

Pattern Recognition and Information Fusion Using Belief Functions: Some Recent Developments

The Transferable Belief Model (TBM) is a general framework for reasoning with uncertainty using belief functions [8]. Of particular interest is the General Bayesian Theorem (GBT), an extension of Bayes’s theorem in which probability measures are replaced by belief functions, and no prior knowledge is assumed [7,6].
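
As a concrete illustration, a minimal sketch of the GBT in its plausibility form, assuming the conditional plausibilities pl(x | θ) of the observation under each singleton hypothesis are simply given; for a hypothesis set A, the theorem yields pl(A | x) = 1 − ∏_{θ∈A} (1 − pl(x | θ)) with no prior on the hypotheses. Function and variable names are illustrative only:

```python
# Sketch of the General Bayesian Theorem (plausibility form), assuming the
# conditional plausibilities pl(x | theta) are given for each singleton theta.
# For a hypothesis set A:  pl(A | x) = 1 - prod_{theta in A} (1 - pl(x | theta)).

def gbt_plausibility(cond_pl, hypothesis_set):
    """cond_pl: dict theta -> pl(x | theta); hypothesis_set: iterable of thetas."""
    prod = 1.0
    for theta in hypothesis_set:
        prod *= 1.0 - cond_pl[theta]
    return 1.0 - prod

# Hypothetical example: three diseases, plausibility of the observed symptom under each.
cond_pl = {"d1": 0.9, "d2": 0.4, "d3": 0.1}
print(gbt_plausibility(cond_pl, {"d1", "d2"}))  # 1 - 0.1*0.6 = 0.94
```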

Thierry Denœux
Causality and Dynamics of Beliefs in Qualitative Uncertainty Frameworks

Causality and belief changes play an important role in many applications. Recently, Pearl [6] has proposed approaches based on probability theory that use causal graphs to give formal semantics to the notion of intervention. From a representational point of view, interventions are distinguished from observations using the concept of the "do" operator [4]. From a reasoning point of view, handling interventions consists in "ignoring" the effects of all direct (and indirect) causes related to the variable of interest.

Salem Benferhat
Elements of Argumentation

Logic-based formalizations of argumentation, which take pros and cons for some claim into account, have been extensively studied, and some basic principles have been established (for reviews see [1-3]). These formalizations assume a set of formulae and then exhaustively lay out arguments and counterarguments, where a counterargument either rebuts (i.e. negates the claim of the argument) or undercuts (i.e. negates the support of the argument). Recently, attempts have been made to refine these formalizations by using techniques for selecting the more appropriate arguments and counterarguments, taking into account intrinsic factors (such as the degree of inconsistency between an argument and its counterarguments) and extrinsic factors (such as the impact of particular arguments on the audience and the beliefs of the audience). In this presentation, we consider the need to take intrinsic and extrinsic factors into account, and then consider ways that this can be done in logic in order to refine existing logic-based approaches to argumentation. These refinements offer interesting options for formalizations that may better capture practical argumentation for intelligent agents [3].

Anthony Hunter

Causal Networks

Causal Graphical Models with Latent Variables: Learning and Inference

Several paradigms exist for modeling causal graphical models over discrete variables that can handle latent variables without explicitly modeling them quantitatively. Applying them to a problem domain consists of several steps: structure learning, parameter learning, and using the models for probabilistic or causal inference. We discuss two well-known formalisms, namely semi-Markovian causal models and maximal ancestral graphs, and indicate their strengths and limitations. Previously, an algorithm was constructed that, by combining elements of both techniques, allows a semi-Markovian causal model to be learned from a mixture of observational and experimental data. The goal of this paper is to recapitulate the entire learning process from observational and experimental data and to demonstrate how different types of inference can be performed efficiently in the learned models. We will do this by proposing an alternative representation for semi-Markovian causal models.

Stijn Meganck, Philippe Leray, Bernard Manderick
Learning Causal Bayesian Networks from Incomplete Observational Data and Interventions

This paper proposes a new method for learning causal Bayesian networks from incomplete observational data and interventions. We extend our Greedy Equivalence Search-Expectation Maximization (GES-EM) algorithm [2], initially proposed to learn Bayesian networks from incomplete observational data, by adding a new step allowing the discovery of correct causal relationships using interventional data. Two intervention selection approaches are proposed: an adaptive one, where interventions are done sequentially and where the impact of each intervention is considered before starting the next one, and a non-adaptive one, where the interventions are executed simultaneously. An experimental study shows the merits of the new version of the GES-EM algorithm by comparing the two selection approaches.

Hanen Borchani, Maher Chaouachi, Nahla Ben Amor

Belief Revision and Inconsistency Handling

Measuring Inconsistency for Description Logics Based on Paraconsistent Semantics

In this paper, we present an approach for measuring inconsistency in a knowledge base. We first define the degree of inconsistency using a four-valued semantics for the description logic $\mathcal{ALC}$. Then an ordering over knowledge bases is given by considering their inconsistency degrees. Our measure of inconsistency can provide important information for inconsistency handling.

Yue Ma, Guilin Qi, Pascal Hitzler, Zuoquan Lin
On the Dynamics of Total Preorders: Revising Abstract Interval Orders

Total preorders (tpos) are often used in belief revision to encode an agent’s strategy for revising its belief set in response to new information. Thus the problem of tpo-revision is of critical importance to the problem of iterated belief revision. Booth et al. [1] provide a useful framework for revising tpos by adding extra structure to guide the revision of the initial tpo, but this results in single-step tpo revision only. In this paper we extend that framework to consider double-step tpo revision. We provide new ways of representing the structure required to revise a tpo, based on abstract interval orders, and look at some desirable properties for revising this structure. We prove the consistency of these properties by giving a concrete operator satisfying all of them.

Richard Booth, Thomas Meyer
Approaches to Constructing a Stratified Merged Knowledge Base

Many merging operators have been proposed to merge either flat or stratified knowledge bases. The result of merging by such an operator is a flat base (or a set of models of the merged base), irrespective of whether the original ones are flat or stratified. The drawback of obtaining a flat merged base is that information about more preferred knowledge (formulae) versus less preferred knowledge is not explicitly represented, and this information can be very useful when deciding which formulae should be retained when there is a conflict. Therefore, it can be more desirable to return a stratified knowledge base as a merged result. A straightforward approach is to deploy the preference relation over possible worlds obtained after merging to reconstruct such a base. However, our study shows that such an approach can produce a poor result; that is, preference relations over possible worlds obtained after merging are not suitable for reconstructing a merged stratified base. Inspired by the Condorcet method in voting systems, we propose an alternative method to stratify a set of possible worlds given a set of stratified bases, and take the stratification of possible worlds as the result of merging. Based on this, we provide a family of syntax-based methods and a family of model-based methods to construct a stratified merged knowledge base. In the syntax-based methods, the formulae contained in the merged knowledge base are from the original individual knowledge bases. In contrast, in the model-based methods, some additional formulae may be introduced into the merged knowledge base and no information in the original knowledge bases is lost. Since the merged result is a stratified knowledge base, the commonly agreed knowledge together with a preference relation over this knowledge can be extracted from the original knowledge bases.

Anbu Yue, Weiru Liu, Anthony Hunter
Syntactic Propositional Belief Bases Fusion with Removed Sets

The problem of merging information from multiple sources is central in several domains of computer science. In knowledge representation for artificial intelligence, several approaches have been proposed for propositional base fusion; however, most of them are defined at a semantic level and are intractable. This paper proposes a new syntactic approach to belief base fusion, called Removed Sets Fusion (RSF). The notion of removed set, initially defined in the context of belief revision, is extended to fusion, and most of the classical fusion operations are syntactically captured by RSF. In order to implement RSF efficiently, the paper shows how RSF can be encoded into a logic program with answer set semantics, then presents an adaptation of the smodels system devoted to efficiently computing the removed sets in order to perform RSF. Finally, a preliminary experimental study shows that the answer set programming approach seems promising for performing belief base fusion in real-scale applications.
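
To make the removed-set notion concrete, a brute-force sketch, assuming (as one possible strategy) that removed sets are cardinality-minimal subsets of the union of the bases whose removal restores consistency; the naive truth-table SAT check and the clause encoding are illustrative assumptions, not the smodels-based implementation the paper describes:

```python
# Brute-force sketch of the removed-sets idea over a union of clause bases.
from itertools import combinations, product

def satisfiable(clauses, variables):
    """Naive SAT check; a clause is a set of literals (var, sign)."""
    for assign in product([True, False], repeat=len(variables)):
        model = dict(zip(variables, assign))
        if all(any(model[v] == sign for v, sign in c) for c in clauses):
            return True
    return False

def removed_sets(clauses, variables):
    # Smallest k first: the first non-empty batch is cardinality-minimal.
    for k in range(len(clauses) + 1):
        hits = [set(r) for r in combinations(range(len(clauses)), k)
                if satisfiable([c for i, c in enumerate(clauses) if i not in r],
                               variables)]
        if hits:
            return hits

# Union of two one-formula bases: {x} and {not x}; either formula is a removed set.
clauses = [{("x", True)}, {("x", False)}]
print(removed_sets(clauses, ["x"]))  # [{0}, {1}]
```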

Julien Hue, Odile Papini, Eric Wurbel
COBA 2.0: A Consistency-Based Belief Change System

We describe COBA 2.0, an implementation of a consistency-based framework for expressing belief change, focusing here on revision and contraction, with the possible incorporation of integrity constraints. This general framework was first proposed in [1]; following a review of this work, we present COBA 2.0’s high-level algorithm, work through several examples, and describe our experiments. A distinguishing feature of COBA 2.0 is that it builds on SAT technology by using a module comprising a state-of-the-art SAT solver for consistency checking. It also allows for the simultaneous specification of a revision and multiple contractions, along with integrity constraints, with respect to a given knowledge base.

James P. Delgrande, Daphne H. Liu, Torsten Schaub, Sven Thiele
An Algorithm for Computing Inconsistency Measurement by Paraconsistent Semantics

Measuring inconsistency in knowledge bases has been recognized as an important problem in many research areas. Most of the approaches proposed for measuring inconsistency are based on paraconsistent semantics. However, very few of them provide an algorithm for implementation. In this paper, we first give a four-valued semantics for first-order logic and then propose an approach for measuring the degree of inconsistency based on this four-valued semantics. After that, we propose an algorithm to compute the inconsistency degree by introducing a new semantics for first-order logic, called S[n]-4 semantics.

Yue Ma, Guilin Qi, Pascal Hitzler, Zuoquan Lin
How Dirty Is Your Relational Database? An Axiomatic Approach

There has been a significant amount of interest in recent years on how to reason about inconsistent knowledge bases. However, with the exception of three papers by Lozinskii, Hunter and Konieczny and by Grant and Hunter, there has been almost no work on characterizing the degree of dirtiness of a database. One can conceive of many reasonable ways of characterizing how dirty a database is. Rather than choose one of many possible measures, we present a set of axioms that any dirtiness measure must satisfy. We then present several plausible candidate dirtiness measures from the literature (including those of Hunter-Konieczny and Grant-Hunter) and identify which of these satisfy our axioms and which do not. Moreover, we define a new dirtiness measure which satisfies all of our axioms.

Maria Vanina Martinez, Andrea Pugliese, Gerardo I. Simari, V. S. Subrahmanian, Henri Prade

Logics Under Uncertainty

A Top-Down Query Answering Procedure for Normal Logic Programs Under the Any-World Assumption

The Any-World Assumption (AWA) has been introduced for normal logic programs as a generalization of the well-known notions of the Closed World Assumption (CWA) and the Open World Assumption (OWA). The AWA allows any assignment (i.e., interpretation) over a truth space (bilattice) to be a default assumption; thus, the CWA and OWA are just special cases. To answer queries, we provide a novel and simple top-down procedure.

Umberto Straccia
Measure Logic

In this paper we investigate a logic suitable for reasoning about uncertainty in different situations. A possible-world approach is used to provide semantics for formulas. An axiomatic system for our logic is given, and the corresponding strong completeness theorem is proved. Relationships to other systems are discussed.

Nebojsa Ikodinovic, Miodrag Raskovic, Zoran Markovic, Zoran Ognjanovic
Weak Implication in Terms of Conditional Uncertainty Measures

We define weak implication $H \longmapsto_{\varphi} E$ (“$H$ weakly implies $E$ under $\varphi$”) through the relation $\varphi(E|H) = 1$, where $\varphi$ is a (coherent) conditional uncertainty measure. By considering various such measures with different levels of generality, we get different sets of “inferential rules” that correspond to those of default logic when $\varphi$ reduces to a conditional probability.

Giulianella Coletti, Romano Scozzafava, Barbara Vantaggi
Language Invariance and Spectrum Exchangeability in Inductive Logic

A sufficient condition, in terms of a de Finetti style representation, is given for a probability function in Inductive Logic (with relations of all arities) satisfying Spectrum Exchangeability to additionally satisfy Language Invariance. This condition is shown to also be necessary in the case of homogeneous probability functions. In contrast it is proved that (purely) $t$-heterogeneous probability functions can never be members of a language invariant family satisfying Spectrum Exchangeability.

Jürgen Landes, Jeff Paris, Alena Vencovská
Best Approximation of Ruspini Partitions in Gödel Logic

A Ruspini partition is a finite family of fuzzy sets $\{f_1, \ldots, f_n\}$, $f_i : [0, 1] \to [0, 1]$, such that $\sum^n_{i=1} f_i(x) = 1$ for all $x \in [0, 1]$. We analyze such partitions in the language of Gödel logic. Our main result identifies the precise degree to which the Ruspini condition is expressible in this language, and yields inter alia a constructive procedure to axiomatize a given Ruspini partition by a theory in Gödel logic.
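
The Ruspini condition is easy to check numerically. A minimal sketch with a hypothetical triangular partition of [0, 1] into n = 3 fuzzy sets:

```python
# Check the Ruspini condition sum_i f_i(x) = 1 at sampled points of [0, 1].
def f1(x): return max(0.0, 1.0 - 2.0 * x)
def f2(x): return 1.0 - abs(2.0 * x - 1.0)
def f3(x): return max(0.0, 2.0 * x - 1.0)

for x in [0.0, 0.25, 0.5, 0.8, 1.0]:
    assert abs(f1(x) + f2(x) + f3(x) - 1.0) < 1e-9
print("Ruspini condition holds at the sampled points")
```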

Pietro Codara, Ottavio M. D’Antona, Vincenzo Marra
A Logical Approach to Qualitative and Quantitative Reasoning

Reasoning with qualitative and quantitative uncertainty is required in some real-world applications [6]. However, current extensions of logic programming with uncertainty support representing and reasoning with either qualitative or quantitative uncertainty, but not both. In this paper we extend the language of Hybrid Probabilistic Logic programs [29,27], originally introduced for reasoning with quantitative uncertainty, to support both qualitative and quantitative uncertainty. We propose to combine disjunctive logic programs [10,19] with Extended and Normal Hybrid Probabilistic Logic Programs (EHPP [27] and NHPP [29]) in a unified logic programming framework that allows representing and reasoning, directly and intuitively, in the presence of both qualitative and quantitative uncertainty. The semantics of the proposed languages are based on the answer set semantics and stable model semantics of extended and normal disjunctive logic programs [10,19]. In addition, they also rely on the probabilistic answer set semantics and the stable probabilistic model semantics of EHPP [27] and NHPP [29].

Emad Saad
Description Logic Programs Under Probabilistic Uncertainty and Fuzzy Vagueness

This paper is directed towards an infrastructure for handling both uncertainty and vagueness in the Rules, Logic, and Proof layers of the Semantic Web. More concretely, we present probabilistic fuzzy description logic programs, which combine fuzzy description logics, fuzzy logic programs (with stratified nonmonotonic negation), and probabilistic uncertainty in a uniform framework for the Semantic Web. We define important concepts dealing with both probabilistic uncertainty and fuzzy vagueness, such as the expected truth value of a crisp sentence and the probability of a vague sentence. Furthermore, we describe a shopping agent example, which gives evidence of the usefulness of probabilistic fuzzy description logic programs in realistic web applications. In the extended report, we also provide algorithms for query processing in probabilistic fuzzy description logic programs, and we delineate a special case where query processing can be done in polynomial time in the data complexity.

Thomas Lukasiewicz, Umberto Straccia
From DEL to EDL: Exploring the Power of Converse Events

Dynamic epistemic logic (DEL) as viewed by Baltag et al. and propositional dynamic logic (PDL) offer different semantics of events. On the one hand, DEL adds dynamics to epistemic logic by introducing so-called epistemic action models as syntactic objects into the language. On the other hand, PDL instead has transition relations between possible worlds. This last approach makes it easy to introduce converse events. We add epistemics to this, and call the resulting logic epistemic dynamic logic (EDL). We show that DEL can be translated into EDL thanks to this use of the converse operator: this device enables us to translate the structure of the action (or event) model directly within a particular axiomatization of EDL, without having to refer to a particular epistemic action (event) model in the language (as done in DEL). It follows that EDL is more expressive and general than DEL.

Guillaume Aucher, Andreas Herzig

Argumentation Systems

Comparing Argumentation Semantics with Respect to Skepticism

The issue of formalizing skepticism relations between argumentation semantics has been considered only recently in the literature. In this paper, we contribute to this kind of analysis by providing a systematic comparison of a significant set of literature semantics (namely grounded, complete, preferred, stable, semi-stable, ideal, prudent, and CF2 semantics) using both a weak and a strong skepticism relation.

Pietro Baroni, Massimiliano Giacomin
An Algorithm for Computing Semi-stable Semantics

The semi-stable semantics for formal argumentation has been introduced as a way of approximating stable semantics in situations where no stable extensions exist. Semi-stable semantics can be located between stable semantics and preferred semantics in the sense that every stable extension is a semi-stable extension and every semi-stable extension is a preferred extension. Moreover, in situations where at least one stable extension exists, the semi-stable extensions are equal to the stable extensions. In this paper we provide an outline of an algorithm for computing the semi-stable extensions, given an argumentation framework. We show that with a few modifications, the algorithm can also be used for computing stable and preferred semantics.
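
The definition recalled above admits a direct brute-force rendering (exponential, so only for tiny frameworks, and not the algorithm proposed in the paper): enumerate the complete extensions and keep those whose range E ∪ E⁺ is maximal with respect to set inclusion. A minimal sketch:

```python
# Brute-force semi-stable extensions of a tiny argumentation framework.
from itertools import combinations

def semi_stable(args, attacks):
    attacks = set(attacks)
    def attacked_by(E):
        return {b for (a, b) in attacks if a in E}
    def defends(E, a):
        # E defends a if every attacker of a is attacked by some member of E.
        return all(any((d, c) in attacks for d in E)
                   for (c, b) in attacks if b == a)
    complete = []
    for k in range(len(args) + 1):
        for E in map(set, combinations(args, k)):
            conflict_free = not any((a, b) in attacks for a in E for b in E)
            if (conflict_free and all(defends(E, a) for a in E)
                    and all(a in E for a in args if defends(E, a))):
                complete.append(E)
    ranges = [E | attacked_by(E) for E in complete]
    return [E for E, r in zip(complete, ranges)
            if not any(r < r2 for r2 in ranges)]

# a <-> b, and b attacks c: the semi-stable extensions are {a, c} and {b}.
print(semi_stable(["a", "b", "c"], [("a", "b"), ("b", "a"), ("b", "c")]))
```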

Martin Caminada
The Logical Handling of Threats, Rewards, Tips, and Warnings

Previous logic-based handling of arguments has mainly focused on explanation or justification in the presence of inconsistency. As a consequence, only one type of argument has been considered, namely the explanatory type; several argumentation frameworks have been proposed for generating and evaluating explanatory arguments. However, recent investigations of argument-based negotiation have emphasized other types of arguments, such as threats, rewards, tips, and warnings. In parallel, cognitive psychologists have recently started studying the characteristics of these different types of arguments, and the conditions under which they have their desired effect. Bringing together these two lines of research, we present in this article some logical definitions as well as some criteria for evaluating each type of argument. Empirical findings from cognitive psychology validate these formal results.

Leila Amgoud, Jean-Francois Bonnefon, Henri Prade
On the Acceptability of Incompatible Arguments

In this paper we study the acceptability of incompatible arguments within Dung’s abstract argumentation framework. As an example we introduce an instance of Dung’s framework where arguments are represented by propositional formulas and an argument attacks another one when the conjunction of their representations is inconsistent, which we characterize as a kind of symmetric attack. Since symmetric attack is known to have the drawback of collapsing the various argumentation semantics, we also consider two variations. First, we consider propositional arguments distinguishing support and conclusion. Second, we introduce a preference ordering over the arguments and define the attack relation in terms of a symmetric incompatibility relation and the preference relation. We show how to characterize preference-based argumentation using a kind of acyclic attack relation.
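
A minimal sketch of the instance described above, assuming arguments are given as propositional formulas (encoded here as Python predicates over truth assignments) and checking consistency of conjunctions by brute force:

```python
# Arguments attack each other when the conjunction of their formulas is unsatisfiable.
from itertools import product

def consistent(f, g, variables):
    return any(f(m) and g(m)
               for m in (dict(zip(variables, v))
                         for v in product([True, False], repeat=len(variables))))

args = {
    "A": lambda m: m["p"],             # p
    "B": lambda m: not m["p"],         # not p
    "C": lambda m: m["p"] and m["q"],  # p and q
}
attacks = [(x, y) for x in args for y in args
           if x != y and not consistent(args[x], args[y], ["p", "q"])]
print(attacks)  # symmetric: [('A', 'B'), ('B', 'A'), ('B', 'C'), ('C', 'B')]
```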

Souhila Kaci, Leendert van der Torre, Emil Weydert
Handling Ignorance in Argumentation: Semantics of Partial Argumentation Frameworks

In this paper we propose semantics for acceptability in partial argumentation frameworks (PAFs). The PAF is an extension of Dung’s argumentation framework and was introduced in [1] for merging argumentation frameworks. It consists in adding a new interaction between arguments, representing ignorance about the existence of an attack.

The proposed semantics are built following Dung’s method, so that they generalize Dung’s semantics without increasing the time complexity.

C. Cayrol, C. Devred, M. C. Lagasquie-Schiex
Dialectical Proof Theories for the Credulous Prudent Preferred Semantics of Argumentation

In Dung’s argumentation system, acceptable sets of arguments are defined as sets of arguments that attack all their attackers and that do not contain any direct contradiction. However, in many applications, the presence of indirect contradictions should prevent a set from being acceptable. The family of prudent semantics has been proposed as an answer to this problem. In this paper we are interested in determining whether a given set of arguments is included in at least one acceptable set under the prudent preferred semantics. To this end, we propose a dialectical framework and several proof theories.

Caroline Devred, Sylvie Doutre
Towards an Extensible Argumentation System

Many types of inter-agent dialogue, including information seeking, negotiation and deliberation, can be seen as varieties of argumentation. Argumentation is especially appropriate where demonstration is not possible because the information is incomplete and uncertain or because the parties involved in the argument have different perspectives on an issue. Argumentation frameworks provide a powerful tool for evaluating the sets of conflicting arguments which emerge from such dialogues. Originally, argumentation frameworks considered arguments as completely abstract entities related by a single attack relation, which always succeeded. Use of the frameworks in practical applications such as law, e-democracy and medicine has motivated a distinction between successful and unsuccessful attacks, determined by properties of the conflicting arguments. This remains insufficient to capture a range of phenomena which arise from procedural and contextual considerations. These require that a successful attack depend not only on the properties of the conflicting arguments but also on the nature of the attack and the context in which it is made. In this paper we present an analysis of arguments, their properties and relations which can accommodate a wide range of such phenomena. Our analysis is extensible, for we can add components to each system while preserving an overarching argumentation framework. We first capture the abstract notions of original argumentation frameworks, and then introduce a system which embraces properties of arguments. This system is further extended in two ways to include properties of relations between arguments. We illustrate each system with a characteristic example and discuss the particular features of argumentation which they can address.

Adam Z. Wyner, Trevor J. M. Bench-Capon
Dialectical Explanations in Defeasible Argumentation

This work addresses the problem of providing explanation capabilities to an argumentation system. Explanation in defeasible argumentation is an important yet undeveloped field in the area. Therefore, we move in this direction by defining a concrete argument system with explanation facilities.

We consider the structures that provide information on the warrant status of a literal. Our focus is on argumentation systems based on a dialectical proof procedure; therefore we study dialectical explanations. Although arguments represent a form of explanation for a literal, we study the complete set of dialectical trees that justifies the warrant status of a literal, since this set has proved to be a useful tool to comprehend, analyze, develop, and debug argumentation systems.

Alejandro J. García, Nicolás D. Rotstein, Guillermo R. Simari
Arguing over Actions That Involve Multiple Criteria: A Critical Review

There have recently been many proposals to adopt an argumentative approach to decision-making. As the underlying assumptions made in these different approaches are not always clearly stated, we review these works from a more classical decision-theoretic perspective, more precisely a multicriteria perspective. It appears that these approaches have much to offer to decision models, because they allow great expressivity in the specification of agents’ preferences, because they naturally cater for partial specification of preferences, and because they make explicit many aspects that are usually somewhat hidden in decision models. On the other hand, the typically intrinsic evaluation used in these approaches is not always the most appropriate, and it is not always clear how the multicriteria feature is taken into account when it comes to aggregating several arguments that may potentially interact.

Wassila Ouerdane, Nicolas Maudet, Alexis Tsoukias

Belief Functions

Shared Ordered Binary Decision Diagrams for Dempster-Shafer Theory

The binary representation is widely used for representing the focal sets of Dempster-Shafer belief functions because it allows all relevant operations to be computed efficiently. However, as its space requirement grows exponentially with the number of variables involved, computations may become prohibitive or even impossible for belief functions with larger domains. This paper proposes shared ordered binary decision diagrams for representing focal sets. This not only allows all relevant operations to be computed efficiently, but also turns out to be a compact representation of focal sets.

Norbert Lehmann
Cautious Conjunctive Merging of Belief Functions

When merging belief functions, Dempster’s rule of combination is justified only when the information sources can be considered independent. When this is not the case, one must find a cautious merging rule that adds a minimal amount of information to the inputs. Such a rule is said to follow the principle of minimal commitment. Some conditions it should comply with are studied. A cautious merging rule based on maximizing the expected cardinality of the resulting belief function is proposed. It recovers the minimum operation when specialized to possibility distributions. This form of the minimal commitment principle is discussed, in particular its discriminating power and its justification when some conflict is present between the belief functions.
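
The selection criterion named above is straightforward to compute: the expected cardinality of a belief function with mass assignment m is $\sum_A m(A)\,|A|$. A minimal sketch of just this quantity (the choice among admissible conjunctive merges, which is the substance of the rule, is not shown):

```python
# Expected cardinality of a belief function: sum over focal sets of m(A) * |A|.
def expected_cardinality(masses):
    """masses: dict mapping frozenset focal elements to mass values."""
    return sum(m * len(focal) for focal, m in masses.items())

m1 = {frozenset({"x"}): 0.4, frozenset({"x", "y"}): 0.6}
print(expected_cardinality(m1))  # 0.4*1 + 0.6*2 = 1.6
```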

Sebastien Destercke, Didier Dubois, Eric Chojnacki
Consonant Belief Function Induced by a Confidence Set of Pignistic Probabilities

A new method is proposed for building a predictive belief function from statistical data in the Transferable Belief Model framework. The starting point of this method is the assumption that, if the probability distribution $\mathbb{P}_X$ of a random variable $X$ is known, then the belief function quantifying our belief regarding a future realization of $X$ should have its pignistic probability distribution equal to $\mathbb{P}_X$. When $\mathbb{P}_X$ is unknown but a random sample of $X$ is available, it is possible to build a set $\mathcal{P}$ of probability distributions containing $\mathbb{P}_X$ with some confidence level. Following the Least Commitment Principle, we then look for a belief function less committed than all belief functions with pignistic probability distribution in $\mathcal{P}$. Our method selects the most committed consonant belief function verifying this property. This general principle is applied to the case of the normal distribution.

Astride Aregui, Thierry Denoeux
On the Orthogonal Projection of a Belief Function

In this paper we study a new probability associated with any given belief function $b$, i.e. the orthogonal projection $\pi[b]$ of $b$ onto the probability simplex $\mathcal{P}$. We provide an interpretation of $\pi[b]$ in terms of a redistribution process in which the mass of each focal element is equally distributed among its subsets, establishing an interesting analogy with the pignistic transformation. We prove that orthogonal projection commutes with convex combination just as the pignistic function does, unveiling a decomposition of $\pi[b]$ as a convex combination of basis pignistic functions. Finally we discuss the norm of the difference between the orthogonal projection and the pignistic function in the case study of a quaternary frame, as a first step towards a more comprehensive picture of their relation.
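
For comparison, the pignistic transformation that the abstract takes as its point of reference shares each focal element's mass equally among the element's members, $BetP(x) = \sum_{A \ni x} m(A)/|A|$, whereas the orthogonal projection redistributes over subsets. A minimal sketch of the pignistic side of the analogy (the projection $\pi[b]$ itself is not reproduced here):

```python
# Pignistic transformation: each focal element's mass is shared equally
# among its elements, BetP(x) = sum over focal A containing x of m(A)/|A|.
def pignistic(masses):
    """masses: dict mapping frozenset focal elements to their mass."""
    betp = {}
    for focal, m in masses.items():
        for x in focal:
            betp[x] = betp.get(x, 0.0) + m / len(focal)
    return betp

b = {frozenset({"x"}): 0.5, frozenset({"x", "y"}): 0.3,
     frozenset({"x", "y", "z"}): 0.2}
print(pignistic(b))  # x: 0.7167, y: 0.2167, z: 0.0667 (sums to 1)
```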

Fabio Cuzzolin
On Latent Belief Structures

Based on the canonical decomposition of belief functions, Smets introduced the concept of a latent belief structure (LBS). This concept is revisited in this article. The study of the combination of LBSs allows us to propose a less committed version of Dempster’s rule, resulting in a commutative, associative and idempotent rule of combination for LBSs. This latter property makes it suitable for combining non-distinct bodies of evidence. A sound method based on the plausibility transformation is also given to infer decisions from LBSs. In addition, an extension of the new rule is proposed so that it may be used to optimize the combination of imperfect information with respect to the decisions inferred.

Frédéric Pichon, Thierry Denoeux
The DSm Approach as a Special Case of the Dempster-Shafer Theory

This contribution deals with belief processing that enables the management of multiple, overlapping elements of a frame of discernment. An outline of the Dempster-Shafer theory for such cases is presented, including several types of constraints for reducing its large computational complexity. DSmT – a new theory that has developed rapidly over the last five years – is briefly introduced. Finally, it is shown that DSmT is a special case of the general Dempster-Shafer approach.

Milan Daniel
Interpreting Belief Functions as Dirichlet Distributions

Traditional Dempster-Shafer belief theory does not provide a simple method for judging the effect of statistical and probabilistic data on belief functions and vice versa. This puts belief theory in isolation from probability theory and hinders fertile cross-disciplinary developments, both from a theoretical and an application point of view. It can be shown that a bijective mapping exists between Dirichlet distributions and Dempster-Shafer belief functions, and the purpose of this paper is to describe this correspondence. This has three main advantages: belief-based reasoning can be applied to statistical data, statistical and probabilistic analysis can be applied to belief functions, and it provides a basis for interpreting and visualizing beliefs for the purpose of enhancing human cognition and the usability of belief-based reasoning systems.
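
A hedged sketch of one form such a correspondence can take, in the style of Jøsang's subjective logic; the prior weight W = 2, the restriction to masses on singletons, and the formulas r_x = W·b_x/u and α_x = r_x + W·a_x are assumptions of this illustration rather than a statement of the paper's mapping:

```python
# Illustrative belief-to-Dirichlet mapping (assumed form, see lead-in above).
def belief_to_dirichlet(belief, base_rate, W=2.0):
    """belief: dict x -> mass on singleton {x}; uncertainty u is the remainder."""
    u = 1.0 - sum(belief.values())
    assert u > 0, "some uncommitted mass is needed for finite Dirichlet parameters"
    # evidence r_x = W * b_x / u, Dirichlet parameter alpha_x = r_x + W * a_x
    return {x: W * b / u + W * base_rate[x] for x, b in belief.items()}

belief = {"heads": 0.6, "tails": 0.2}          # u = 0.2
base_rate = {"heads": 0.5, "tails": 0.5}
print(belief_to_dirichlet(belief, base_rate))  # {'heads': 7.0, 'tails': 3.0}
```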

Audun Jøsang, Zied Elouedi
Forward-Backward-Viterbi Procedures in the Transferable Belief Model for State Sequence Analysis Using Belief Functions

The Transferable Belief Model (TBM) relies on belief functions and enables one to represent and combine a variety of knowledge, from certainty up to ignorance, as well as the conflict inherent to imperfect data. Many applications have used this flexible framework; however, in the context of temporal analysis of belief functions, few works have been proposed. The temporal aspect of data is essential for many applications such as surveillance (monitoring) and Human-Computer Interfaces. We propose algorithms based on the mechanisms of Hidden Markov Models, usually used for state sequence analysis in probability theory. The proposed algorithms are the “credal forward”, “credal backward” and “credal Viterbi” procedures, which allow temporal belief functions to be filtered and state sequences to be assessed in the TBM framework. An illustration of performance is provided on a human motion analysis problem.
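
For reference, a sketch of the classical probabilistic forward recursion that the "credal forward" procedure generalizes; the credal version replaces these sums and products with TBM combination operations, which is not shown here:

```python
# Classical HMM forward algorithm: likelihood of an observation sequence.
def forward(pi, A, B, observations):
    """pi: initial distribution; A[i][j]: transition i->j; B[j][o]: emission."""
    alpha = [pi[j] * B[j][observations[0]] for j in range(len(pi))]
    for o in observations[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(len(pi))) * B[j][o]
                 for j in range(len(pi))]
    return sum(alpha)

pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [{"walk": 0.8, "rest": 0.2}, {"walk": 0.3, "rest": 0.7}]
print(forward(pi, A, B, ["walk", "rest"]))  # 0.228
```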

Emmanuel Ramasso, Michéle Rombaut, Denis Pellerin
Uncertainty in Semantic Ontology Mapping: An Evidential Approach

In this paper we propose a new tool called OWL-CM (OWL Combining Matcher) that deals with the uncertainty inherent to the ontology mapping process. On the one hand, OWL-CM uses similarity metrics to assess the equivalence between ontology entities; on the other hand, it incorporates belief function theory into the mapping process in order to improve the effectiveness of the results computed by different matchers and to provide a generic framework for combining them. Our experiments, carried out with the benchmark of the Ontology Alignment Evaluation Initiative 2006, demonstrate good results.

Najoua Laamari, Boutheina Ben Yaghlane

Learning and Classification

Measures of Ruleset Quality Capable to Represent Uncertain Validity

The paper deals with quality measures of rules extracted from data, more precisely with measures of whole extracted rulesets. Three particular approaches to extending ruleset quality measures from classification rulesets to general rulesets are discussed, and one of them, capable of representing the uncertain validity of rulesets for objects, is elaborated in some detail. In particular, a generalization of ROC curves is proposed. The approach is illustrated on rulesets extracted with four important methods from the well-known iris data.

Martin Holeňa
A Discriminative Learning Method of TAN Classifier

The TAN (Tree-Augmented Naive Bayes) classifier makes a compromise between model complexity and classification rate, and its study has become a hot research issue. In this paper, we propose a discriminative method based on KL (Kullback-Leibler) divergence to learn a TAN classifier. First, we use the EAR (explaining away residual) method to learn the structure of the TAN, and then optimize the TAN parameters with an objective function based on KL divergence. The results of experiments on benchmark datasets show that our approach produces a better classification rate.

Qi Feng, Fengzhan Tian, Houkuan Huang
Discriminative vs. Generative Learning of Bayesian Network Classifiers

Discriminative learning of Bayesian network classifiers has recently received considerable attention from the machine learning community. This interest has yielded several publications where new methods for the discriminative learning of both structure and parameters have been proposed. In this paper we present an empirical study that illustrates how discriminative learning performs with respect to generative learning using simple Bayesian network classifiers such as naive Bayes or TAN, and we discuss when and why discriminative learning is preferred. We also analyze how the log-likelihood and conditional log-likelihood scores guide the learning process of Bayesian network classifiers.

Guzmán Santafé, Jose A. Lozano, Pedro Larrañaga
PADUA Protocol: Strategies and Tactics

In this paper we describe an approach to classifying objects in a domain where classifications are uncertain, using a novel combination of argumentation and data mining. Classification is the topic of a dialogue game between two agents, based on an argument scheme and critical questions designed for use by agents whose knowledge of the domain comes from data mining. Each agent has its own set of examples which it can mine to find arguments, based on association rules, for and against a classification of a new instance. These arguments are exchanged in order to classify the instance. We describe the dialogue game, and in particular discuss the strategic considerations which agents can use to select their moves. Different strategies give rise to games with different characteristics, some having the flavour of persuasion dialogues and others of deliberation dialogues.

Maya Wardeh, Trevor Bench-Capon, Frans Coenen
A Semi-naive Bayes Classifier with Grouping of Cases

In this work, we present a semi-naive Bayes classifier that searches for dependent attributes using different filter approaches. In order to prevent the number of cases of the compound attributes from becoming too high, a grouping procedure is applied each time two variables are merged. This method tries to group two or more cases of the new variable into a unique value. In an empirical study, we show that this approach outperforms the naive Bayes classifier in a very robust way and reaches the performance of Pazzani’s semi-naive Bayes [1] without the high cost of a wrapper search.

Joaquín Abellán, Andrés Cano, Andrés R. Masegosa, Serafín Moral
Split Criterions for Variable Selection Using Decision Trees

In the field of attribute mining, several feature selection methods have recently appeared, indicating that the use of sets of decision trees learnt from a data set can be a useful tool for selecting relevant and informative variables with regard to a main class variable. With this aim, in this study, we claim that the use of a new split criterion to build decision trees outperforms other classic split criteria for variable selection purposes. We present an experimental study on a wide and varied set of databases, using only one decision tree with each split criterion to select variables for the naive Bayes classifier.

Joaquín Abellán, Andrés R. Masegosa
Inference and Learning in Multi-dimensional Bayesian Network Classifiers

We describe the family of multi-dimensional Bayesian network classifiers which include one or more class variables and multiple feature variables. The family does not require that every feature variable is modelled as being dependent on every class variable, which results in better modelling capabilities than families of models with a single class variable. For the family of multidimensional classifiers, we address the complexity of the classification problem and show that it can be solved in polynomial time for classifiers with a graphical structure of bounded treewidth over their feature variables and a restricted number of class variables. We further describe the learning problem for the subfamily of fully polytree-augmented multi-dimensional classifiers and show that its computational complexity is polynomial in the number of feature variables.

Peter R. de Waal, Linda C. van der Gaag
Combining Decision Trees Based on Imprecise Probabilities and Uncertainty Measures

In this article, we present a method for combining classification trees obtained by a simple procedure from the imprecise Dirichlet model (IDM) and uncertainty measures on closed and convex sets of probability distributions, otherwise known as credal sets. Our combination method has principally two characteristics: it obtains a high percentage of correct classifications using a small number of classification trees, and it can be parallelized for application to very large databases.

Joaquín Abellán, Andrés R. Masegosa
Belief Classification Approach Based on Generalized Credal EM

The EM algorithm is widely used in supervised and unsupervised classification when applied for mixture model parameter estimation. It has been shown that this method can be applied for partially supervised classification where the knowledge about the class labels of the observations can be imprecise and/or uncertain. In this paper, we propose to generalize this approach to cope with imperfect knowledge at two levels: the attribute values of the observations and their class labels. This knowledge is represented by belief functions as understood in the Transferable Belief Model. We show that this approach can be applied when the data are categorical and generated from multinomial mixtures.

Imene Jraidi, Zied Elouedi

Bayesian Networks and Probabilistic Reasoning

Logical Compilation of Bayesian Networks with Discrete Variables

This paper presents a new direction in the area of compiling Bayesian networks. The principal idea is to encode the network by logical sentences and to compile the resulting encoding into an appropriate form. From there, all possible queries are answerable in linear time relative to the size of the logical form. Therefore, our approach is a potential solution for real-time applications of probabilistic inference with limited computational resources. The underlying idea is similar to both the differential and the weighted model counting approach to inference in Bayesian networks, but at the core of the proposed encoding we avoid the transformation from discrete to binary variables. This alternative encoding enables a more natural solution.
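
A minimal sketch of the weighted model counting view mentioned above, assuming the network has already been encoded so that a query reduces to a weighted count of satisfying assignments; the brute-force enumeration stands in for the compiled linear-time evaluation:

```python
# Weighted model counting: probability = weighted count of models of the query.
from itertools import product

def wmc(variables, weight, constraint):
    """weight: dict var -> {False: w0, True: w1}; constraint: model -> bool."""
    total = 0.0
    for values in product([False, True], repeat=len(variables)):
        model = dict(zip(variables, values))
        if constraint(model):
            w = 1.0
            for v in variables:
                w *= weight[v][model[v]]
            total += w
    return total

# Toy encoding of two independent variables; query "a or b".
weight = {"a": {False: 0.7, True: 0.3}, "b": {False: 0.5, True: 0.5}}
print(wmc(["a", "b"], weight, lambda m: m["a"] or m["b"]))  # 0.65
```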

Michael Wachter, Rolf Haenni
Local Monotonicity in Probabilistic Networks

It is often desirable that a probabilistic network be monotone; e.g., more severe symptoms should increase the likelihood of a more serious disease. Unfortunately, determining whether a network is monotone is highly intractable. Often, approximation algorithms are employed that work on a local scale. For these algorithms, the monotonicity of the arcs (rather than the network as a whole) is determined. However, in many situations monotonicity depends on the ordering of the values of the nodes, which is sometimes rather arbitrary. Thus, it is desirable to order the values of these variables such that as many arcs as possible are monotone. We introduce the concept of local monotonicity, discuss the computational complexity of finding an optimal ordering of the values of the nodes in a network, and sketch a branch-and-bound exact algorithm to find such an optimal solution.

Johan Kwisthout, Hans Bodlaender, Gerard Tel
Independence Decomposition in Dynamic Bayesian Networks

Dynamic Bayesian networks are a special type of Bayesian network that explicitly incorporate the dimension of time. They can be distinguished into repetitive and non-repetitive networks. Repetitiveness implies that the set of random variables of the network and their independence relations are the same at each time step. Due to their structural symmetry, repetitive networks are easier to use and are, therefore, often taken as the standard. However, repetitiveness is a very strong assumption, which normally does not hold, as particular dependences and independences may only hold at certain time steps.

In this paper, we propose a new framework for independence modularisation in dynamic Bayesian networks. Our theory provides a method for separating atemporal and temporal independence relations, and offers a practical approach to building dynamic Bayesian networks that are possibly non-repetitive. A composition operator for temporal and atemporal independence relations is proposed and its properties are studied. Experimental results obtained by learning dynamic Bayesian networks from real data show that this framework offers a more accurate way for knowledge representation in dynamic Bayesian networks.

Ildikó Flesch, Peter Lucas
Average and Majority Gates: Combining Information by Means of Bayesian Networks

In this paper we focus on the problem of belief aggregation, i.e. the task of forming a group consensus probability distribution by combining the beliefs of the individual members of the group. We propose the use of Bayesian Networks to model the interactions between the individuals of the group and introduce average and majority canonical models and their application to information aggregation. Due to efficiency restrictions imposed by the Group Recommending problem, where our research is framed, we have had to develop specific inference algorithms to compute group recommendations.

Luis M. de Campos, Juan M. Fernández-Luna, Juan F. Huete, Miguel A. Rueda-Morales
A Fast Hill-Climbing Algorithm for Bayesian Networks Structure Learning

In the score-plus-search approach to Bayesian network structure learning, the most used method is hill climbing (HC), because it offers a good trade-off between CPU requirements, accuracy of the obtained model, and ease of implementation. Because of these features, and because HC with the classical operators is guaranteed to obtain a minimal I-map, this approach is well suited to high-dimensional domains. In this paper we revisit a previously developed HC algorithm (termed constrained HC, or CHC for short) that takes advantage of some properties of scoring metrics in order to restrict the parent set of each node during the search. The main drawback of CHC is that there is no guarantee of obtaining a minimal I-map, so the algorithm includes a second stage in which an unconstrained HC is launched, taking as initial solution the one returned by the constrained search stage. In this paper we modify CHC in order to guarantee that its output is a minimal I-map, so that the second stage is not needed. In this way we save a considerable amount of CPU time, making the algorithm better suited for high-dimensional datasets. A proof is provided of the minimal I-map condition of the returned network, and computational experiments are reported to show the gain with respect to CPU requirements.

José A. Gámez, Juan L. Mateo, José M. Puerta
On Directed and Undirected Propagation Algorithms for Bayesian Networks

Message-passing inference algorithms for Bayes nets can be broadly divided into two classes: i) clustering algorithms, like Lazy Propagation, Jensen’s or Shafer-Shenoy’s schemes, that work on secondary undirected trees; and ii) conditioning methods, like Pearl’s, that work directly on Bayes nets. It is commonly thought that algorithms of the former class always outperform those of the latter because Pearl-like methods act as particular cases of clustering algorithms. In this paper, a new variant of Pearl’s method based on a secondary directed graph is introduced, and it is shown that the computations performed by Shafer-Shenoy or Lazy Propagation can be precisely reproduced by this new variant, thus proving that directed algorithms can be as efficient as undirected ones.

Christophe Gonzales, Khaled Mellouli, Olfa Mourali

Reasoning About Preferences

Lexicographic Refinements of Sugeno Integrals

This paper deals with decision-making under uncertainty when the worth of acts is evaluated by means of a Sugeno integral on a finite scale. One limitation of this approach is the coarse ranking of acts it produces. In order to refine this ordering, a mapping from the common qualitative utility and uncertainty scale to the reals is proposed, whereby the Sugeno integral is changed into a Choquet integral. This work relies on a previous similar attempt at refining max-min possibilistic preference functionals into a so-called big-stepped expected utility, encoding a very refined qualitative double lexicographic ordering of acts.
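
The two aggregations being contrasted can be stated compactly: with values sorted increasingly and $A_{(i)}$ the set of criteria from the $i$-th smallest value onward, the Sugeno integral is $\max_i \min(f_{(i)}, \mu(A_{(i)}))$ and the Choquet integral is $\sum_i (f_{(i)} - f_{(i-1)})\,\mu(A_{(i)})$. A minimal sketch, assuming the capacity $\mu$ is given on all needed subsets:

```python
# Sugeno and Choquet integrals of a criteria-value map w.r.t. a capacity mu.
def sugeno(values, mu):
    items = sorted(values.items(), key=lambda kv: kv[1])  # increasing values
    return max(min(v, mu[frozenset(k for k, _ in items[i:])])
               for i, (_, v) in enumerate(items))

def choquet(values, mu):
    items = sorted(values.items(), key=lambda kv: kv[1])
    total, prev = 0.0, 0.0
    for i, (_, v) in enumerate(items):
        total += (v - prev) * mu[frozenset(k for k, _ in items[i:])]
        prev = v
    return total

values = {"a": 0.2, "b": 0.9}
mu = {frozenset({"a", "b"}): 1.0, frozenset({"b"}): 0.4,
      frozenset({"a"}): 0.7, frozenset(): 0.0}
print(sugeno(values, mu), choquet(values, mu))  # 0.4 and 0.48
```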

Didier Dubois, Hélène Fargier
Algebraic Structures for Bipolar Constraint-Based Reasoning

The representation of both scales of cost and scales of benefit is very natural in a decision-making problem: scales of evaluation of decisions are often bipolar. The aim of this paper is to provide algebraic structures for the representation of bipolar rules, in the spirit of the algebraic approaches of constraint satisfaction. The structures presented here are general enough to encompass a large variety of rules from the bipolar literature, as well as having appropriate algebraic properties to allow the use of CSP algorithms such as forward-checking and algorithms based on variable elimination.

Hélène Fargier, Nic Wilson
A Revised Qualitative Choice Logic for Handling Prioritized Preferences

Qualitative Choice Logic (QCL) is a convenient tool for representing and reasoning with “basic” preferences. However, this logic presents some limitations when dealing with complex preferences that, for instance, involve negated preferences. This paper proposes a new logic that correctly addresses QCL’s limitations. It is particularly appropriate for handling prioritized preferences, which is very useful for aggregating the preferences of users having different priority levels. Moreover, we show that any set of preferences can equivalently be transformed into a set of normal form preferences from which efficient inferences can be applied.

Salem Benferhat, Karima Sedki
An Abstract Theory of Argumentation That Accommodates Defeasible Reasoning About Preferences

Dung’s abstract theory of argumentation has become established as a general framework for non-monotonic reasoning, and, more generally, reasoning in the presence of conflict. In this paper we extend Dung’s theory so that an argumentation framework distinguishes between: 1) attack relations modelling different notions of conflict; 2) arguments that themselves claim preferences, and so determine defeats, between other conflicting arguments. We then define the acceptability of arguments under Dung’s extensional semantics. We claim that our work provides a general unifying framework for logic based systems that facilitate defeasible reasoning about preferences. This is illustrated by formalising argument based logic programming with defeasible priorities in our framework.

Sanjay Modgil
Relaxing Ceteris Paribus Preferences with Partially Ordered Priorities

Conditional preference networks (CP-nets) are a simple approach to the compact representation of preferences. In spite of their merit, the application of the ceteris paribus principle underlying them is too global and systematic and sometimes leads to questionable incomparabilities. Moreover, there is a natural need for expressing default preferences that generally hold, together with more specific ones that reverse them. This suggests the introduction of priorities for handling preferences in a more local way. After providing the necessary background on CP-nets and identifying the representation issues, the paper presents a logical encoding of preferences in the form of a partially ordered base of logical formulas, using a discrimin ordering of the preferences. It is shown that this provides a better approximation of CP-nets than other approaches. This approximation is faithful w.r.t. the strict preference part of the CP-net and enables better control of the incomparabilities. Its computational cost remains polynomial w.r.t. the size of the CP-net. The case of cyclic CP-nets is also discussed.

Souhila Kaci, Henri Prade

Reasoning and Decision Making Under Uncertainty

Conceptual Uncertainty and Reasoning Tools

Problems of conceptual uncertainty have been dealt with in theories of formal logic. Such theories try to accommodate vagueness in two main ways. One is fuzzy logic, which introduces degrees of truth. The other way of accommodating formal logic to vagueness is supervaluations and their descendants. This paper studies a more inclusive class of reasoning support than formal logic. In the present approach, conceptual uncertainty, including vagueness, is represented as higher-order uncertainty. A taxonomy of epistemic and conceptual uncertainty is provided. Finally, implications of conceptual uncertainty for reasoning support systems are analyzed.

Bertil Rolf
Reasoning with an Incomplete Information Exchange Policy

In this paper, we deal with information exchange policies that may exist in multi-agent systems in order to regulate exchanges of information between agents. More precisely, we discuss two properties of information exchange policies, namely consistency and completeness. After defining what consistency and completeness mean for such policies, we propose two methods to deal with incomplete policies.

Laurence Cholvy, Stéphanie Roussel
Qualitative Constraint Enforcement in Advanced Policy Specification

We consider advanced policy description specifications in the context of Answer Set Programming (ASP). Motivated by our application scenario, we further extend an existing policy description language, so that it allows for expressing preferences among sets of objects. This is done by extending the concept of ordered disjunctions to cardinality constraints. We demonstrate that this extension is obtained by combining existing ASP techniques and show how it allows for handling advanced policy description specifications.

Alessandra Mileo, Torsten Schaub
A Qualitative Hidden Markov Model for Spatio-temporal Reasoning

We present a Hidden Markov Model that uses qualitative order-of-magnitude probabilities for its states and transitions. We use the resulting model to construct a formalization of qualitative spatio-temporal events as random processes and utilize it to build high-level natural language descriptions of change. We then show an example of the foreseen usage of the well-known prediction and recognition techniques of Hidden Markov Models to perform useful queries with the representation.

Zina M. Ibrahim, Ahmed Y. Tawfik, Alioune Ngom
A Multiobjective Resource-Constrained Project-Scheduling Problem

Planning and scheduling activities are viewed as profoundly important for generating successful plans and maximizing the utilization of scarce resources. Moreover, real-life planning problems often involve several objectives that should be simultaneously optimized, and the real-world environment is usually characterized by uncertain and uncontrollable information. Thus, finding feasible and efficient plans is a considerable challenge. In this respect, the Multi-Objective Resource-Constrained Project-Scheduling Problem (RCPSP) tries to schedule activities and allocate resources in order to find an efficient course of action to help the project manager and to optimize several criteria. In this research, we develop a new method based on the Ant System meta-heuristic and multi-objective concepts to address environmental uncertainty and to schedule activities. We implemented and ran it on various problem sizes. Experimental results show that the CPU time is relatively short. We have also developed a lower bound for each objective in order to measure the degree of correctness of the obtained set of potentially efficient solutions. We have noticed that our set of potentially efficient solutions is comparable with these lower bounds; thus, the average gap of the generated solutions is not far from the lower bounds.

Fouad Ben Abdelaziz, Saoussen Krichen, Olfa Dridi

Game Theory

Extending Classical Planning to the Multi-agent Case: A Game-Theoretic Approach

When several agents operate in a common environment, their plans may interfere, so that the predicted outcome of each plan may be altered, even if it is composed only of deterministic actions. Most multi-agent planning frameworks either view the actions of the other agents as exogenous events or consider goal-sharing cooperative agents. In this paper, we depart from such frameworks and extend the well-known single-agent framework for classical planning to a multi-agent one. Focusing on the two-agent case, we show how valuable plans can be characterized using game-theoretic notions, especially Nash equilibrium.

Ramzi Ben Larbi, Sébastien Konieczny, Pierre Marquis
Dependencies Between Players in Boolean Games

Boolean games are a logical setting for representing static games in a succinct way, taking advantage of the expressive power and conciseness of propositional logic. A Boolean game consists of a set of players, each of whom controls a set of propositional variables and has a specific goal expressed by a propositional formula. There is a lot of graphical structure hidden in a Boolean game: the satisfaction of each player’s goal depends on the players whose actions have an influence on that goal. Even if these dependencies are not specific to Boolean games, in this particular setting they give a way of finding simple characterizations of Nash equilibria and computing them.
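
A brute-force sketch of the equilibrium computation hinted at above, assuming a two-player Boolean game with goals given as Python predicates and binary preferences (a player is content if its goal holds, or if no unilateral deviation could make it hold):

```python
# Pure Nash equilibria of a two-player Boolean game by exhaustive search.
from itertools import product

def assignments(variables):
    return [dict(zip(variables, v))
            for v in product([True, False], repeat=len(variables))]

def nash_equilibria(vars1, vars2, goal1, goal2):
    eqs = []
    for s1 in assignments(vars1):
        for s2 in assignments(vars2):
            m = {**s1, **s2}
            # Each player is content if its goal holds or no deviation helps.
            ok1 = goal1(m) or not any(goal1({**a, **s2}) for a in assignments(vars1))
            ok2 = goal2(m) or not any(goal2({**s1, **a}) for a in assignments(vars2))
            if ok1 and ok2:
                eqs.append(m)
    return eqs

# Player 1 controls x, wants x <-> y; player 2 controls y, wants y.
print(nash_equilibria(["x"], ["y"],
                      lambda m: m["x"] == m["y"],
                      lambda m: m["y"]))  # [{'x': True, 'y': True}]
```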

Elise Bonzon, Marie-Christine Lagasquie-Schiex, Jérôme Lang

Fuzzy Sets and Fuzzy Logic

The Use of Fuzzy t-Conorm Integral for Combining Classifiers

Choquet or Sugeno fuzzy integrals are commonly used for aggregating the results of different classifiers. However, both these integrals belong to a more general class of fuzzy t-conorm integrals. In this paper, we describe a framework of a fuzzy t-conorm integral and its use for combining classifiers. We show the advantages of this approach to classifier combining in several benchmark tests.

David Štefka, Martin Holeňa
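
For reference, the two classical fuzzy integrals that the t-conorm integral generalizes can be computed as below: the discrete Choquet and Sugeno integrals aggregate classifier scores against a fuzzy measure mu on coalitions of classifiers. The measure and scores here are invented for illustration.

    def choquet(scores, mu):
        # Discrete Choquet integral: sum of (x_(i) - x_(i-1)) * mu(A_(i)),
        # where scores are sorted increasingly and A_(i) is the set of
        # classifiers whose score is at least x_(i).
        items = sorted(scores.items(), key=lambda kv: kv[1])
        total, prev = 0.0, 0.0
        for k, (_, x) in enumerate(items):
            total += (x - prev) * mu[frozenset(n for n, _ in items[k:])]
            prev = x
        return total

    def sugeno(scores, mu):
        # Discrete Sugeno integral: max over i of min(x_(i), mu(A_(i))).
        items = sorted(scores.items(), key=lambda kv: kv[1])
        return max(min(x, mu[frozenset(n for n, _ in items[k:])])
                   for k, (_, x) in enumerate(items))

    mu = {frozenset({"c1"}): 0.4, frozenset({"c2"}): 0.5,
          frozenset({"c1", "c2"}): 1.0}             # interaction-aware measure
    scores = {"c1": 0.7, "c2": 0.9}
    print(choquet(scores, mu), sugeno(scores, mu))  # approximately 0.8 and 0.7
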
Integrated Query Answering with Weighted Fuzzy Rules

Weighted fuzzy logic programs increase the expressivity of fuzzy logic programs by allowing a significance weight to be associated with each atom in the body of a fuzzy rule. In this paper, we propose a prototype system for the practical integration of weighted fuzzy logic programs with relational database systems in order to provide efficient query-answering services. In the system, a dynamic weighted fuzzy logic program is a set of rules together with a set of database queries, fuzzification transformations and fact derivation rules, which allow the provided set of rules to be augmented with a set of fuzzy facts retrieved from the underlying databases. The weights of the rules may be estimated by a neural network-based machine learning process, using training data from databases specially designated for this purpose.

Alexandros Chortaras, Giorgos Stamou, Andreas Stafylopatis
On Decision Support Under Risk by the WOWA Optimization

The problem of averaging outcomes under several scenarios to form overall objective functions is of considerable importance in decision support under uncertainty. The fuzzy operator known as the Weighted OWA (WOWA) aggregation offers a well-suited approach to this problem. Like the classical ordered weighted averaging (OWA) operator, the WOWA aggregation assigns preferential weights to the ordered values (i.e. to the worst value, the second worst and so on) rather than to specific criteria. This allows one to model various preferences with respect to risk. Simultaneously, importance weighting of scenarios can be introduced. In this paper we analyze solution procedures for optimization problems with a WOWA objective function. A linear programming formulation is introduced for optimization of the WOWA objective with monotonic preferential weights, and its computational efficiency is analyzed.

Włodzimierz Ogryczak, Tomasz Śliwiński
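
A hedged sketch of the WOWA operator itself (Torra's construction; the paper's LP formulation is not reproduced here): the preferential weights w induce a piecewise-linear weighting function w*, and each outcome, taken from worst to best, receives the increment of w* over the accumulated importance weights p of its scenario. With uniform p this reduces to ordinary OWA.

    def wowa(x, w, p):
        # x: outcome per scenario; w: preferential weights for ordered outcomes
        # (worst first, summing to 1); p: scenario importance weights (sum to 1).
        n = len(x)
        cum_w = [0.0]
        for wi in w:                        # breakpoints (i/n, w_1 + ... + w_i)
            cum_w.append(cum_w[-1] + wi)

        def w_star(t):                      # piecewise-linear interpolation
            i = min(int(t * n), n - 1)
            return cum_w[i] + (t * n - i) * (cum_w[i + 1] - cum_w[i])

        total, acc = 0.0, 0.0
        for i in sorted(range(n), key=lambda i: x[i]):   # worst outcome first
            lo, acc = acc, acc + p[i]
            total += (w_star(acc) - w_star(lo)) * x[i]
        return total

    # Risk-averse preferential weights emphasize the worst outcomes:
    print(wowa(x=[100.0, 40.0, 70.0], w=[0.6, 0.3, 0.1], p=[0.5, 0.2, 0.3]))
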
Transposing the Sociology of Organized Action into a Fuzzy Environment

In this work, we address the transposition of a fragment of the modeling of the Sociology of Organized Action to the fuzzy setting. We present two different ways of developing fuzzy models in this context, depending on the kind of data furnished by the user: one based on the extension principle and another using fuzzy rule-based inference with similarity relations. We illustrate our approach with an example from the sociology literature.

Sandra Sandri, Christophe Sibertin-Blanc
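
The extension principle mentioned here has a compact computational form on finite universes (a sketch only; the sociological model itself is not reproduced): the image of a fuzzy set A under a map f gives each output value the highest membership among its preimages.

    def extend(f, A):
        # Zadeh's extension principle: mu_B(y) = max{ mu_A(x) : f(x) = y }.
        B = {}
        for x, mu in A.items():
            y = f(x)
            B[y] = max(B.get(y, 0.0), mu)
        return B

    # Fuzzy "influence level" of an actor, coarsened to a qualitative scale
    # (both the fuzzy set and the map are invented for illustration):
    A = {1: 0.2, 2: 0.7, 3: 1.0, 4: 0.7, 5: 0.2}
    print(extend(lambda x: "high" if x >= 3 else "low", A))
    # -> {'low': 0.7, 'high': 1.0}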

Possibility Theory

An Axiomatization of Conditional Possibilistic Preference Functionals

The aim of the paper is to extend the Savage-like axiomatization of possibilistic preference functionals in qualitative decision theory to conditional acts, so as to make a step towards the dynamic decision setting. To this end, the de Finetti-style approach to conditional possibility recently advocated by Coletti and Vantaggi is exploited, extending to conditional acts the basic axioms pertaining to conditional events.

Didier Dubois, Hélène Fargier, Barbara Vantaggi
Conflict Analysis and Merging Operators Selection in Possibility Theory

In possibility theory, the degree of inconsistency is commonly used to measure the level of conflict in information from multiple sources after merging, especially conjunctive merging. However, as shown in [HL05, Liu06b], this measure alone is not enough when pairs of uncertain information have the same degree of inconsistency, since it is not possible to tell which pair contains information that is actually better, in the sense that the two pieces of information in one pair agree with each other more than those in the other pairs do. In this paper, we investigate what additional measures can be used to judge the closeness between two pieces of uncertain information. We deploy the concept of distance between betting commitments developed in DS theory in [Liu06a], since possibility theory can be viewed as a special case of DS theory. We present properties that reveal the interconnections and differences between the degree of inconsistency and the distance between betting commitments. We also discuss how to use these two measures together to guide the selection of merging operators in possibility theory.

Weiru Liu
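
Both measures can be computed directly on finite possibility distributions. In the sketch below, the degree of inconsistency is 1 minus the height of the min-based conjunctive merge, and the distance between betting commitments is obtained from the pignistic probabilities induced by each distribution; this follows the usual consonant-belief-function reading of possibility theory and is meant as an illustration of [Liu06a], not a verbatim reproduction.

    def inconsistency(p1, p2):
        # Degree of inconsistency of the min-based conjunctive merge.
        return 1.0 - max(min(p1[x], p2[x]) for x in p1)

    def pignistic(p):
        # Pignistic probability of a normalized possibility distribution:
        # BetP(x_k) = sum_{i >= k} (pi_i - pi_{i+1}) / i over ranked values.
        ranked = sorted(p.items(), key=lambda kv: kv[1], reverse=True)
        values = [v for _, v in ranked] + [0.0]
        return {x: sum((values[i] - values[i + 1]) / (i + 1)
                       for i in range(k, len(ranked)))
                for k, (x, _) in enumerate(ranked)}

    def betting_distance(p1, p2):
        # max over events A of |BetP1(A) - BetP2(A)|, i.e. the total
        # variation distance between the two pignistic probabilities.
        b1, b2 = pignistic(p1), pignistic(p2)
        return 0.5 * sum(abs(b1[x] - b2[x]) for x in b1)

    pi1 = {"a": 1.0, "b": 0.6, "c": 0.2}
    pi2 = {"a": 0.3, "b": 1.0, "c": 0.2}
    print(inconsistency(pi1, pi2), betting_distance(pi1, pi2))
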
Extending Description Logics with Uncertainty Reasoning in Possibilistic Logic

Possibilistic logic provides a convenient tool for dealing with inconsistency and handling uncertainty. In this paper, we propose possibilistic description logics as an extension of description logics. We give the semantics and syntax of possibilistic description logics, and then define two inference services in them. Since possibilistic inference suffers from the drowning problem, we consider a drowning-free variant of possibilistic inference, called linear order inference. Finally, we implement the algorithms for the inference services in possibilistic description logics using the KAON2 reasoner.

Guilin Qi, Jeff Z. Pan, Qiu Ji
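
The drowning problem is easy to reproduce on a small weighted base (a propositional sketch, not the paper's description-logic setting): the inconsistency degree is the largest weight whose cut is unsatisfiable, and every formula weighted at or below it is ignored by standard possibilistic inference, even when it is unrelated to the conflict.

    from itertools import product

    ATOMS = ["p", "q", "r"]
    KB = [(lambda v: v["p"], 0.9),        # (p, 0.9)
          (lambda v: not v["p"], 0.6),    # (not p, 0.6): conflicts with p
          (lambda v: v["r"], 0.4)]        # (r, 0.4): unrelated, yet drowned

    def satisfiable(formulas):
        return any(all(f(dict(zip(ATOMS, bits))) for f in formulas)
                   for bits in product([False, True], repeat=len(ATOMS)))

    def entails(formulas, goal):
        return not satisfiable(formulas + [lambda v: not goal(v)])

    def inconsistency_degree(kb):
        # Largest weight a whose a-cut {f : w >= a} is unsatisfiable.
        return max((a for a in {w for _, w in kb}
                    if not satisfiable([f for f, w in kb if w >= a])),
                   default=0.0)

    inc = inconsistency_degree(KB)                # 0.6
    survivors = [f for f, w in KB if w > inc]
    print(entails(survivors, lambda v: v["r"]))   # False: (r, 0.4) is drowned
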
Information Affinity: A New Similarity Measure for Possibilistic Uncertain Information

This paper addresses the issue of measuring similarity between pieces of uncertain information in the framework of possibility theory. First, natural properties of such functions are proposed and a survey of the few existing measures is presented. Then, a new measure, called Information Affinity, is proposed to overcome the limits of the existing ones. The proposed function is based on two measures: a classical informative distance, e.g. the Manhattan distance, which evaluates the difference, degree by degree, between two normalized possibility distributions, and the well-known inconsistency measure, which assesses the conflict between the two possibility distributions. Some potential applications of the proposed measure are also mentioned.

Ilyes Jenhani, Nahla Ben Amor, Zied Elouedi, Salem Benferhat, Khaled Mellouli
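
A hedged sketch combining the two ingredients the abstract names, a normalized Manhattan distance and the inconsistency of the min-based combination; the equal weighting of the two components is an assumption of this sketch, not necessarily the paper's exact definition.

    def manhattan(p1, p2):
        # Normalized Manhattan distance between two possibility distributions.
        return sum(abs(p1[x] - p2[x]) for x in p1) / len(p1)

    def inconsistency(p1, p2):
        # Conflict of the min-based combination: 1 - max_x min(p1(x), p2(x)).
        return 1.0 - max(min(p1[x], p2[x]) for x in p1)

    def affinity(p1, p2, kappa=0.5, lam=0.5):
        # Information-affinity-style similarity: 1 minus a weighted blend of
        # distance and inconsistency (equal weights assumed here).
        return 1.0 - (kappa * manhattan(p1, p2) + lam * inconsistency(p1, p2))

    pi1 = {"a": 1.0, "b": 0.5, "c": 0.0}
    pi2 = {"a": 0.9, "b": 1.0, "c": 0.1}
    print(affinity(pi1, pi1), affinity(pi1, pi2))   # identical pair scores 1.0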

Applications

Measuring the Quality of Health-Care Services: A Likelihood-Based Fuzzy Modeling Approach

We face the problem of constructing a model suited to an effective evaluation of the quality of a health-care provider: to this end, we focus on some relevant indicators characterizing the various services run by the provider. We rely on a fuzzy modeling approach, using the interpretation (in terms of coherent conditional probability) of the membership function of a fuzzy set as a suitable likelihood.

Giulianella Coletti, Luca Paulon, Romano Scozzafava, Barbara Vantaggi
Automatic Indexing from a Thesaurus Using Bayesian Networks: Application to the Classification of Parliamentary Initiatives

We propose a method which, given a document to be classified, automatically generates an ordered set of appropriate descriptors extracted from a thesaurus. The method creates a Bayesian network to model the thesaurus and uses probabilistic inference to select the set of descriptors having high posterior probability of being relevant given the available evidence (the document to be classified). We apply the method to the classification of parliamentary initiatives in the regional Parliament of Andalucía, Spain, using the Eurovoc thesaurus.

Luis M. de Campos, Juan M. Fernández-Luna, Juan F. Huete, Alfonso E. Romero
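
One way to picture the inference step (a sketch only; the paper's Bayesian network over the thesaurus is richer than this): each descriptor is scored by a noisy-OR combination of the document terms linked to it, and descriptors are ranked by the resulting posterior-style score. Links and strengths below are invented.

    def noisy_or(active_terms, weights):
        # Noisy-OR gate: the descriptor fails to fire only if every active
        # parent term independently fails to trigger it.
        prob_fail = 1.0
        for term in active_terms:
            prob_fail *= 1.0 - weights.get(term, 0.0)
        return 1.0 - prob_fail

    links = {"AGRICULTURE": {"farm": 0.7, "crop": 0.6, "subsidy": 0.3},
             "FINANCE": {"budget": 0.8, "subsidy": 0.5}}
    doc_terms = {"farm", "subsidy"}

    ranked = sorted(((d, noisy_or(doc_terms & set(w), w)) for d, w in links.items()),
                    key=lambda kv: -kv[1])
    print(ranked)   # AGRICULTURE ranks above FINANCE for this document
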
A Genetic Programming Classifier Design Approach for Cell Images

This paper describes an approach to the use of genetic programming (GP) in classification problems, evaluated on the automatic classification of pollen cell images. A new reproduction scheme and a new fitness evaluation scheme are proposed as advanced techniques for GP classification applications, and an effective set of image features is defined for pollen cells. Experiments were performed on the Bangor/Aberystwyth Pollen Image Database, and the algorithm was evaluated on challenging test configurations. We reached a 96% success rate on average, together with a significant improvement in the speed of convergence.

Aydın Akyol, Yusuf Yaslan, Osman Kaan Erol
Use of Radio Frequency Identification for Targeted Advertising: A Collaborative Filtering Approach Using Bayesian Networks

This article discusses a potential application of radio frequency identification (RFID) and collaborative filtering to targeted advertising in grocery stores. Every day, hundreds of items in grocery stores are marked down for promotional purposes. Whether these promotions are effective depends primarily on whether customers are aware of them, and secondarily on whether customers are interested in the products. Currently, companies are incapable of influencing customers' decision-making while they are shopping. However, RFID technology enables us to transfer the recommendation systems of e-commerce to grocery stores. In our model, using RFID technology, we get real-time information about the products placed in the cart during the shopping process. Based on that information, we inform the customer about those promotions in which the customer is likely to be interested. The selection of the advertised product is a dynamic decision-making process, since it is based on information about the products placed in the cart while the customer is shopping. Collaborative filtering is used to identify the advertised product, and Bayesian networks are used to implement the collaborative filtering. We assume a scenario in which all products have RFID tags, and grocery carts are equipped with RFID readers and screens that display the relevant promotions.

Esma Nur Cinicioglu, Prakash P. Shenoy, Canan Kocabasoglu
Development of an Intelligent Assessment System for Solo Taxonomies Using Fuzzy Logic

This paper presents a fuzzy-logic model of assessment systems based on taxonomies. Specifically, the SOLO (Structure of the Observed Learning Outcome) taxonomy is studied, which can be applied in a wide range of fields of diagnostic science. In education, test correction is extremely hard and demands experts who are not always available. The intelligent system offers the opportunity to evaluate and classify students' performance according to the structure of the observed learning outcome, concerning the cognitive development of students in the field of mathematics. The system was tested on high-school and university students.

John Vrettaros, George Vouros, Athanasios Drigas
Backmatter
Metadata
Title
Symbolic and Quantitative Approaches to Reasoning with Uncertainty
Edited by
Khaled Mellouli
Copyright year
2007
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-75256-1
Print ISBN
978-3-540-75255-4
DOI
https://doi.org/10.1007/978-3-540-75256-1
