
2009 | Book

Modeling Decisions for Artificial Intelligence

6th International Conference, MDAI 2009, Awaji Island, Japan, November 30–December 2, 2009. Proceedings

Editors: Vicenç Torra, Yasuo Narukawa, Masahiro Inuiguchi

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the proceedings of the 6th International Conference on Modeling Decisions for Artificial Intelligence, MDAI 2009, held on Awaji Island, Japan, in November/December 2009. The 28 papers presented in this book together with 5 invited talks were carefully reviewed and selected from 61 submissions. The topics covered are aggregation operators, fuzzy measures and game theory; decision making; clustering and similarity; computational intelligence and optimization; and machine learning.

Table of Contents

Frontmatter

Invited Papers

Interactive Robust Multiobjective Optimization Driven by Decision Rule Preference Model

Interactive procedures for MultiObjective Optimization (MOO) consist of a sequence of steps alternating the calculation of a sample of non-dominated solutions and the elicitation of preference information from the Decision Maker (DM). We consider three types of procedures, where in the preference elicitation stage the DM is only asked to indicate which solutions are relatively good in the proposed sample. In all three cases, the preference model is a set of “if ..., then ...” decision rules inferred from the preference information using the Dominance-based Rough Set Approach (DRSA) [3; 4; 11].

Roman Słowiński
g-BDI: A Graded Intensional Agent Model for Practical Reasoning

In intentional agents, actions are derived from mental attitudes and their relationships. In particular, preferences (positive desires) and restrictions (negative desires) are important proactive attitudes which guide agents to intentions and eventually to actions. In this paper we give an overview of recent developments on g-BDI, a multi-context based agent architecture for representing and reasoning about graded notions of desires and intentions, including sound and complete logical formalizations. We also show that the framework is expressive enough to describe how desires, together with other information, can lead agents to intentions and finally to actions. As a case study, we also describe the design and implementation of a recommender system for tourism, as well as the results of some experiments concerning the flexibility and performance of the g-BDI model.

Ana Casali, Lluís Godo, Carles Sierra
Modeling Ambiguity Averse Behavior of Individual Decision Making: Prospect Theory under Uncertainty

First, a behavioral model based on the "Prospect Theory" developed by Kahneman and Tversky is described. In this model, a weighting function of non-additive probabilities is introduced, where the probability of each event occurring is known. The effective application of this approach to the public sector is shown by modeling the risks of low-probability, high-consequence extreme events. Next, a behavioral model based on our "Prospect Theory under Uncertainty" is described, where the basic probability of a set of events is known but the occurrence probability of each individual event is not. It is shown that this model can properly explain the Ellsberg paradox of ambiguity aversion. The potential applicability of this approach to assessing global environmental-economic policies is described.

Hiroyuki Tamura
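
For reference, a common parametric form of prospect-theoretic valuation (the Tversky-Kahneman forms from the literature; the paper's exact weighting function may differ) values a gamble with outcomes $x_i$ and known probabilities $p_i$ as

$$V=\sum_i w(p_i)\,v(x_i),\qquad v(x)=\begin{cases}x^{\alpha} & x\ge 0\\ -\lambda(-x)^{\beta} & x<0\end{cases},\qquad w(p)=\frac{p^{\gamma}}{\left(p^{\gamma}+(1-p)^{\gamma}\right)^{1/\gamma}}$$

With the commonly cited estimates $\alpha=\beta\approx 0.88$, $\lambda\approx 2.25$ and $\gamma\approx 0.61$, small probabilities are overweighted and losses loom larger than gains, which is what makes low-probability, high-consequence events amenable to this kind of modeling.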
Generalized Bags, Bag Relations, and Applications to Data Analysis and Decision Making

Bags, also known as multisets, have long been studied in computer science, and recently they have been receiving more attention. In this paper we consider generalized bags, which include real-valued bags, fuzzy bags, and region-valued bags. Basic definitions as well as their properties are established; advanced operations such as s-norms, t-norms, and their duality are also studied. Moreover, bag relations are discussed, which have max-plus and max-min algebras as special cases. The reason why generalized bags are useful in applications is described. As two applications, bag-based data analysis and decision making based on convex function optimization related to bags are discussed.

Sadaaki Miyamoto
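
As a minimal sketch of the basic operations on real-valued bags (standard multiset definitions with multiplicities generalized to nonnegative reals; the function names are ours, and the paper's region-valued and s-/t-norm generalizations are not reproduced):

```python
# Real-valued bags as multiplicity maps: union is the pointwise max of
# multiplicities, intersection the pointwise min. For multiplicities in
# [0, 1], these are the special cases of s-norms and t-norms.

def bag_union(a: dict, b: dict) -> dict:
    keys = set(a) | set(b)
    return {k: max(a.get(k, 0.0), b.get(k, 0.0)) for k in keys}

def bag_intersection(a: dict, b: dict) -> dict:
    keys = set(a) | set(b)
    return {k: min(a.get(k, 0.0), b.get(k, 0.0)) for k in keys}

x = {"a": 2.0, "b": 0.5}
y = {"a": 1.0, "b": 1.5, "c": 0.3}
print(bag_union(x, y))         # {'a': 2.0, 'b': 1.5, 'c': 0.3}
print(bag_intersection(x, y))  # {'a': 1.0, 'b': 0.5, 'c': 0.0}
```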
The Relationship between Interval, Fuzzy and Possibilistic Optimization

The relationship between fuzzy set theory (in particular fuzzy arithmetic) and interval analysis is well-known. This study explores the interconnections between interval analysis, fuzzy interval analysis, and interval and fuzzy/possibilistic optimization. Two key ideas are considered herein: (1) constraint set computation and (2) the clear distinctions and relationships between interval, fuzzy, and possibilistic entities as they are used in optimization, within a historical and taxonomic context. Constraint fuzzy interval arithmetic is used to compute constraint sets.

Weldon A. Lodwick

Regular Papers

Aggregation Operators, Fuzzy Measures and Game Theory

Decision Making in Voting Games: An Insight into Theory and Practice

The paper introduces the basics of voting game theory and illustrates its application in practice. Examples outlined in the paper show how the theory works in the practice of parliamentary institutions. A few simple examples explain the difference between voting weight and voting power and show how important the analysis of voting games is in practice. A study of selected decisive systems explains the meaning of such notions as winning and blocking coalitions, the initiative and preventive power of a player, etc.

Wladyslaw Homenda
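
To illustrate the weight-versus-power distinction, the following brute-force sketch computes the normalized Banzhaf power index (one standard choice; the paper may analyze other indices):

```python
from itertools import combinations

# A player is critical in a winning coalition if removing them makes it
# losing; the Banzhaf index counts how often each player is critical.
def banzhaf(weights, quota):
    n = len(weights)
    counts = [0] * n
    for r in range(n + 1):
        for coalition in combinations(range(n), r):
            total = sum(weights[i] for i in coalition)
            if total >= quota:
                for i in coalition:
                    if total - weights[i] < quota:
                        counts[i] += 1  # player i is critical here
    s = sum(counts)
    return [c / s for c in counts]

# Weights [50, 49, 1] with quota 51: the 49-weight and 1-weight players
# have equal power (0.2 each) despite very different weights.
print(banzhaf([50, 49, 1], 51))  # [0.6, 0.2, 0.2]
```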
A Lyapunov-Type Theorem for Nonadditive Vector Measures

We prove the convexity and compactness of the closure of the lower partition range of an $\mathbb{R}^n$-valued, nonatomic, supermodular capacity, employing a useful relationship between convex games and their Choquet integrals. The main result is applied to fair division problems, and the existence of Pareto optimal α-fair partitions is demonstrated for the case of nonadditive measures.

Nobusumi Sagara
A Formal Theory of Cooperative TU-Games

Results of game theory are often the keys to decisions of economic and political executives. They are also used to create internal tools of much decision-making software. For example, coordination games may be cooperative games, when the players choose their strategies by a consensus decision-making process, and game trees are used to represent some key cooperative games. Our theory of cooperative games with transferable utilities makes it possible to deliver a formal certificate that contains statements and proofs with each result of any procedure in the theory of cooperative TU-games. Such formal certificates can be archived and audited by independent experts to guarantee that the process that led to the decision is sound and pertinent. As we use an automated proof checker, the review only has to guarantee that the statements of the certificate are correct. The proofs contained in the certificate are guaranteed automatically by the proof checker and our formal theory.

Marc Daumas, Érik Martin-Dorel, Annick Truffert, Michel Ventou
The Functionality-Security-Privacy Game

Privacy preservation in the information society is in many respects parallel to environment preservation in the physical world. In this way, “green ICT services” are those achieving functionality and security with minimum invasion of the privacy of individuals, where such an invasion can be regarded as a kind of pollution as harmful in the long run to their moral welfare as physical pollution is to their physical welfare. Depending on the type of service, individuals can be users, data owners or respondents having supplied data. We show that the conflict between functionality, security and privacy can be viewed as a game between several players whose interests differ. If the game is properly formulated, its equilibria can lead to protocols conciliating the functionality, security and privacy interests of all players.

Josep Domingo-Ferrer
Toward the Theory of Cooperative Games under Incomplete Information

In conventional cooperative games, it is assumed that the payoff of each coalition is known. However, in real-world problems there may exist situations in which some coalitional values are unknown. In this paper, we take the first step toward a theory of cooperative games under incomplete information about coalitional values. We define concepts related to such incomplete games. We investigate the solution concepts in a special case in which only the values of the grand coalition and the singleton coalitions are known. We show that there exists a focal-point solution which is commonly suggested from many points of view.

Satoshi Masuya, Masahiro Inuiguchi
Comparison of Data Structures for Computing Formal Concepts

Presented is a preliminary study of the role of data structures in algorithms for formal concept analysis. Studied is the performance of selected algorithms depending on the chosen data structures and on the size and density of the input object-attribute data. The observations made in the paper can be seen as guidelines on how to select data structures when implementing algorithms for formal concept analysis.

Petr Krajca, Vilem Vychodil

Decision Making

Using Conditional Random Fields for Decision-Theoretic Planning

We propose a means of extending Conditional Random Field modeling to decision-theoretic planning where valuation is dependent upon fully-observable factors. Representation is discussed, and a comparison with existing decision problem methodologies is presented. Included are exact and inexact message passing schemes for policy making, examples of decision making in practice, extensions to solving general decision problems, and suggestions for future use.

Paul A. Ardis, Christopher M. Brown
Interactive Decision Making for Hierarchical Multiobjective Linear Programming Problems

In this paper, we focus on hierarchical multiobjective linear programming problems where multiple decision makers in a hierarchical organization have their own multiple objective linear functions together with common linear constraints, and propose an interactive decision making method to obtain a satisfactory solution which reflects not only the hierarchical relationships between the decision makers but also their own preferences for their objective functions. In the proposed method, instead of the Pareto optimality concept, the generalized Λ-extreme point concept is introduced. In order to obtain a satisfactory solution from among the generalized Λ-extreme point set, an interactive decision making method based on linear programming is proposed, and the interactive process is demonstrated by means of an illustrative numerical example.

Hitoshi Yano
A Perception-Based Portfolio Under Uncertainty: Minimization of Average Rates of Falling

A perception-based portfolio model under uncertainty is discussed. In the proposed model, randomness and fuzziness are evaluated respectively by the probabilistic expectation and by mean values with evaluation weights and λ-mean functions. The means, variances and covariances of fuzzy numbers/fuzzy random variables are evaluated in both the possibility case and the necessity case, and the rate of return of portfolios is estimated from both random factors and imprecise factors. In the portfolio model, the average rate of falling is minimized using average value-at-risks as a coherent risk measure. By an analytical approach, we derive a solution of the portfolio problem that minimizes the average rate of falling. A numerical example is given to illustrate the idea.

Yuji Yoshida
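
For reference, the average value-at-risk used here as a coherent risk measure is standardly defined at probability level $\lambda \in (0,1]$ by averaging the value-at-risk over the tail:

$$\mathrm{AVaR}_{\lambda}(X)=\frac{1}{\lambda}\int_{0}^{\lambda}\mathrm{VaR}_{p}(X)\,dp$$

Unlike $\mathrm{VaR}$ itself, this tail average is subadditive, which is what makes it coherent and suitable as an objective for minimizing the average rate of falling.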
A Differential Evolution Based Time-Frequency Atom Decomposition for Analyzing Emitter Signals

This paper discusses the use of time-frequency atom decomposition based on differential evolution to analyze radar emitter signals. Decomposing a signal into an appropriate set of time-frequency atoms is a well-known NP-hard problem. This paper applies differential evolution, in place of the traditional greedy strategy, to approximately solve this problem within a tolerable time. A large number of experiments conducted on various radar emitter signals verify that the time-frequency characteristics can be exhibited by using a small number of decomposed time-frequency atoms instead of traditional time-frequency distributions.

Gexiang Zhang, Jixiang Cheng
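
For context, the greedy baseline that the differential evolution replaces is matching pursuit: repeatedly select the dictionary atom most correlated with the residual. A minimal sketch with an illustrative random dictionary (real emitter-signal systems would use Gabor-type atoms and far larger dictionaries):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    # Greedily pick the atom with the largest inner product with the
    # residual, subtract its projection, and repeat.
    residual = signal.astype(float).copy()
    decomposition = []
    for _ in range(n_atoms):
        corr = dictionary @ residual                  # inner products
        best = int(np.argmax(np.abs(corr)))
        coef = corr[best]
        residual -= coef * dictionary[best]
        decomposition.append((best, coef))
    return decomposition, residual

rng = np.random.default_rng(0)
D = rng.normal(size=(256, 64))                        # 256 atoms of length 64
D /= np.linalg.norm(D, axis=1, keepdims=True)         # unit-norm atoms
x = 3.0 * D[10] - 2.0 * D[42]                         # signal built from 2 atoms
atoms, res = matching_pursuit(x, D, 2)
print(atoms, np.linalg.norm(res))                     # recovers atoms 10 and 42
```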
Combining Various Methods of Automated User Decision and Preferences Modelling

In this paper we present a proposal for a system that combines various methods of user modelling. This system may find application in e-commerce, recommender systems, etc. The main focus of this paper is on automatic methods that require only a small amount of data from the user. Different ways of integrating user models are studied. A proof-of-concept implementation is compared to standard methods in an initial experiment with artificial user data...

Alan Eckhardt, Peter Vojtáš
Target-Oriented Decision Analysis with Different Target Preferences

The decision maker’s behavioral aspects play an important role in human decision making, and this was recognized by the award of the 2002 Nobel Prize in Economics to Daniel Kahneman. Target-oriented decision analysis lies in the philosophical root of bounded rationality as well as representing the S-shaped value function. In most studies on target-oriented decision making, monotonicity assumptions are given in advance to simplify the problems, e.g., for the attribute wealth. However, there are three types of target preferences: “the more the better” (corresponding to a benefit target preference), “the less the better” (corresponding to a cost target preference), and equal/range targets (too much or too little is not acceptable). Toward this end, two methods have been proposed to model the different types of target preferences: a cumulative distribution function (cdf) based method and a level set based method. These two methods can both induce four shapes of value functions: S-shaped, inverse S-shaped, convex, and concave, which represent the decision maker’s psychological preferences. The main difference between the two methods is that the level set based method induces a steeper value function than the cdf based method.

Hong-Bin Yan, Van-Nam Huynh, Yoshiteru Nakamori
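
A sketch of the usual cdf-based definitions (not a reproduction of the paper's exact formulation): the value of an attribute level $x$ is the probability of meeting an uncertain target $T$, and the three target types differ only in the direction of the comparison,

$$v_{\text{benefit}}(x)=P(T\le x),\qquad v_{\text{cost}}(x)=P(T\ge x),\qquad v_{\text{range}}(x)=P(\underline{T}\le x\le \overline{T}).$$

For a benefit target with a unimodal target density, $v_{\text{benefit}}$ is exactly an S-shaped (convex-then-concave) value function, which is one way the shapes listed above arise.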
A Novel Method for Multibiometric Fusion Based on FAR and FRR

Based on the fusion of multiple biometric sources, multibiometric systems can be expected to be more accurate due to the presence of multiple pieces of evidence. Multibiometric system design is a challenging problem because it is very difficult to choose the optimal fusion strategy. Score-level fusion is the most commonly used approach in multibiometric systems. The distributions of genuine and impostor scores are very important for score fusion in multibiometric systems. FRR (False Reject Rate) and FAR (False Accept Rate) are two key parameters for characterizing the distributions of genuine and impostor scores. In this paper, we first present a model for multibiometric fusion and then propose a novel approach for score-level fusion based on FAR and FRR. In this method, the match scores are first transformed into LL1s, and then the sum rule is used to combine the LL1s of the scores. The experimental results show that the new fusion scheme is efficient for different multibiometric systems.

Yong Li, Jianping Yin, Jun Long, En Zhu
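
As background for the sum rule mentioned above, a generic score-level fusion sketch (min-max normalization followed by the sum rule; the paper's FAR/FRR-based LL1 transformation is not reproduced here):

```python
import numpy as np

def min_max_normalize(scores):
    # Map scores to [0, 1]; assumes the scores are not all equal.
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def sum_rule(score_lists):
    # Average the normalized scores from each matcher.
    normalized = [min_max_normalize(s) for s in score_lists]
    return np.mean(normalized, axis=0)

face   = [0.2, 0.9, 0.4]   # scores from matcher 1 for three probes
finger = [10, 80, 30]      # scores from matcher 2 (different scale)
print(sum_rule([face, finger]))
```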
Performance Evaluation of TEWA Systems for Improved Decision Support

In air defense situations, decision makers have to protect defended assets by assigning available firing units to threatening targets in real time. To assist them, they have decision support systems known as threat evaluation and weapon allocation (TEWA) systems. The problem of evaluating the performance of such systems is of great importance due to their critical role; despite this, research on the problem is close to non-existent. We discuss the use of survivability and resource usage cost as comparative performance metrics, which can be used for comparing the effectiveness of different system configurations by means of simulations. These metrics have been implemented in a testbed, in which we have performed some comparative experiments. Our results show that changes to individual parts of the threat evaluation and weapon allocation system configuration can have a large effect on the effectiveness of the system as a whole, and they illustrate how the metrics and the testbed can be used.

Fredrik Johansson, Göran Falkman
Discounting and Combination Scheme in Evidence Theory for Dealing with Conflict in Information Fusion

Recently, combination rules as well as the issue of conflict management in Dempster-Shafer theory have received considerable attention in information fusion research. Mostly, these studies have considered the combined mass assigned to the empty set as the conflict and have tried to provide alternatives to Dempster's rule of combination, which mainly differ in how they manage the conflict. In this paper, we introduce a hybrid measure to judge the difference between two bodies of evidence as a basis for conflict analysis, and argue that using the combined mass assigned to the empty set as a whole to quantify conflict seems inappropriate. We then propose to use the discounting operator in association with the combination operator to resolve conflict when combining evidence, in which the discount rate of a basic probability assignment is defined using the entropy of its corresponding pignistic probability function. Finally, an application of this discounting and combination scheme to the fusion of decisions in classifier combination is demonstrated.

Van-Nam Huynh
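
The discounting operator referred to is, in its classical Shafer form, a reweighting of a basic probability assignment $m$ on a frame $\Theta$ by a discount rate $\alpha \in [0,1]$:

$$m^{\alpha}(A)=(1-\alpha)\,m(A)\quad\text{for }A\neq\Theta,\qquad m^{\alpha}(\Theta)=\alpha+(1-\alpha)\,m(\Theta).$$

The operator itself is standard; the paper's contribution lies in deriving $\alpha$ from the entropy of the pignistic probability before combination.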
Evaluation Based on Pessimistic Efficiency in Interval DEA

In Interval DEA (Data Envelopment Analysis), the efficiency interval has been proposed, with bounds obtained from the optimistic and pessimistic viewpoints, respectively. Intervals are suitable for representing the uncertainty of the given input-output data and decision makers’ intuitive evaluations. Although the intervals give the elements a partial order relation, it is sometimes complex, especially in the case of many elements. An efficiency measurement combining optimistic and pessimistic efficiencies in Interval DEA is proposed. They are compared from the view that both of them represent the difference of the analyzed DMU (Decision Making Unit) from the most efficient one. The proposed efficiency measurement is mainly determined by the pessimistic efficiency; the optimistic one is considered only when it is inadequate compared to the pessimistic one. Such a pessimistic-efficiency-based evaluation is closer to our natural way of evaluating, and the DMUs are arranged in a linear order.

Tomoe Entani
Stochastic Facility Construction Problem with Preference of Candidate Sites

This paper considers a facility construction problem in a rectangular urban area with some barriers and rectilinear distance. There exist some demand points and possible construction sites with preferences attached. The construction cost is random, following a normal distribution, and the probability that the cost stays within the budget must not fall below a fixed level. One objective is to minimize the budget under the condition that every demand point is covered by at least one facility constructed within a certain critical distance. Another is to maximize the minimal preference among the constructed sites. The third is to maximize the minimal satisfaction degree with respect to critical distances among all demand points. We formulate the problem as a three-criteria problem with a chance constraint. Since there usually exists no solution optimizing all three objectives at once, we seek non-dominated solutions after defining non-domination.

Hiroaki Ishii, Yung Lung Lee, Kuang-Yih Yeh
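
The chance constraint on cost admits a textbook deterministic equivalent under normality: if the construction cost is $C \sim N(\mu,\sigma^{2})$ and the budget is $B$, then

$$P(C\le B)\ge\alpha \iff B\ge\mu+\Phi^{-1}(\alpha)\,\sigma,$$

where $\Phi^{-1}$ is the standard normal quantile function. This is the usual route by which such probabilistic constraints become linear in the decision variables (the paper's exact reduction may differ).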
A Consensus Reaching Model for Web 2.0 Communities

Web 2.0 communities allow large numbers of users to interact with each other. In fact, new Web 2.0 technologies make it possible to share resources and information in an easy and timely manner, allowing real-time communication among people all over the world. However, as Web 2.0 communities are a quite recent phenomenon with their own characteristics and particularities, there is still a need to develop new tools that allow their users to reach decisions with a high enough consensus level. In this contribution we present a new consensus reaching model designed to incorporate the benefits that a Web 2.0 community offers (rich and diverse knowledge due to a large number of users, real-time communication...) and that tries to minimize the main problems that this kind of organization presents (low and intermittent participation rates, difficulty of establishing trust relations, and so on).

Sergio Alonso, Ignacio J. Pérez, Francisco J. Cabrerizo, Enrique Herrera-Viedma

Clustering and Similarity

Refinement Properties in Agglomerative Hierarchical Clustering

Refinement properties in agglomerative hierarchical clustering, proposed by Miyamoto, which mean that each cluster produced by one method is included in some cluster produced by another method, are studied further. Although simple conditions are known under which a method generates refinements of the clusters of the single linkage method, it is unknown whether such generalizations hold when single linkage is not used. Here, three conditions for refinement properties with respect to single linkage are shown, while three counterexamples are given for average linkage and complete linkage, showing that the theory of refinements is far from trivial and that future work is needed.

Sadaaki Miyamoto
Some Pairwise Constrained Semi-Supervised Fuzzy c-Means Clustering Algorithms

In this paper, some semi-supervised clustering methods are proposed with two types of pairwise constraints: two data points must be in the same cluster, and two data points must be in different clusters. The methods are classified into two types: one is based on the standard fuzzy c-means algorithm and the other on the entropy-regularized one. First, standard fuzzy c-means and its entropy-regularized counterpart are introduced. Second, a pairwise constrained semi-supervised fuzzy c-means is introduced, derived from pairwise constrained competitive agglomeration. Third, some new optimization problems are proposed, derived by adding new loss functions on memberships to the respective original optimization problems. Last, an iterative algorithm is proposed for solving each optimization problem.

Yuchi Kanzawa, Yasunori Endo, Sadaaki Miyamoto
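
For reference, the two base objectives mentioned take the following standard forms (the paper's pairwise-constraint penalty terms are not shown). Standard fuzzy c-means minimizes

$$J_{m}(U,V)=\sum_{i=1}^{c}\sum_{k=1}^{n}u_{ik}^{m}\,\lVert x_{k}-v_{i}\rVert^{2},\qquad \sum_{i=1}^{c}u_{ik}=1,$$

while the entropy-regularized variant replaces the fuzzifier $m$ with an entropy term,

$$J_{\lambda}(U,V)=\sum_{i=1}^{c}\sum_{k=1}^{n}u_{ik}\,\lVert x_{k}-v_{i}\rVert^{2}+\lambda\sum_{i=1}^{c}\sum_{k=1}^{n}u_{ik}\log u_{ik}$$

(up to the placement of $\lambda$).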
PCA-Guided k-Means with Variable Weighting and Its Application to Document Clustering

PCA-guided k-Means is a deterministic approach to k-Means clustering, in which cluster indicators are derived in a PCA-guided manner. This paper proposes a new approach to k-Means with variable selection, introducing a variable weighting mechanism into PCA-guided k-Means. The relative responsibility of variables is estimated in a similar way to FCM clustering, while the membership indicator is derived in a PCA-guided manner, in which the principal component scores are calculated by considering the responsibility weights of the variables. Thus, the variables that carry meaningful information for capturing cluster structures are emphasized in the calculation of the membership indicators. Numerical experiments, including an application to document clustering, demonstrate the characteristics of the proposed method.

Katsuhiro Honda, Akira Notsu, Hidetomo Ichihashi
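
A rough sketch of the PCA-guided idea (following the known result that the continuous relaxation of the k-Means cluster indicators is spanned by the top k−1 principal components; the paper's variable-weighting step is omitted, and the data here are synthetic):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Derive cluster memberships in the subspace of the top (k - 1) principal
# components, where the relaxed k-Means indicators live.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, size=(50, 10)) for m in (0.0, 1.0, 2.0)])

k = 3
scores = PCA(n_components=k - 1).fit_transform(X)   # PCA-guided subspace
labels = KMeans(n_clusters=k, n_init=10).fit_predict(scores)
print(np.bincount(labels))                          # roughly 50/50/50
```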
Partial Symbol Ordering Distance

Nowadays sequences of symbols are becoming more important, as they are the standard format for representing information in a large variety of domains such as ontologies, sequential patterns, and non-numerical attributes in databases. Therefore, the development of new distances for this kind of data is a crucial need. Recently, many similarity functions have been proposed for managing sequences of symbols; however, such functions do not always satisfy the triangle inequality. This property is a mandatory requirement in many data mining algorithms, such as clustering or k-nearest neighbor algorithms, where the presence of a metric space is a must. In this paper, we propose a new distance for sequences of (non-repeated) symbols based on the partial distances between the positions of the common symbols. We prove that this Partial Symbol Ordering distance satisfies the triangle inequality, and we finally describe a set of experiments supporting that the new distance outperforms the Edit distance in those scenarios where sequence similarity is related to the positions occupied by the symbols.

Javier Herranz, Jordi Nin
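
The abstract does not spell out the exact definition, so the following is only a hypothetical illustration of a distance driven by the positions of common symbols; in particular, the penalty for non-shared symbols is our own assumption:

```python
# Hypothetical sketch, NOT the paper's Partial Symbol Ordering distance:
# sum the position differences of common symbols, with a penalty for
# symbols appearing in only one sequence.

def position_distance(a, b):
    pos_a = {s: i for i, s in enumerate(a)}
    pos_b = {s: i for i, s in enumerate(b)}
    common = set(pos_a) & set(pos_b)
    d = sum(abs(pos_a[s] - pos_b[s]) for s in common)
    d += len(set(pos_a) ^ set(pos_b)) * max(len(a), len(b))
    return d

print(position_distance("abcd", "badc"))  # 1+1+1+1 = 4
```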

Computational Intelligence and Optimization

Situation Recognition and Hypothesis Management Using Petri Nets

Situation recognition – the task of tracking states and identifying situations – is an important problem to address for aiding decision makers in achieving enhanced situation awareness. The purpose of situation recognition is, in contrast to producing more data and information, to aid decision makers in focusing on information that is important for them, i.e., to detect potentially interesting situations. In this paper we explore the applicability of a Petri net based approach for modeling and recognizing situations, as well as for managing the hypothesis space coupled to matching situation templates with the present stream of data.

Anders Dahlbom, Lars Niklasson, Göran Falkman
A Hybrid Algorithm Based on Tabu Search and Ant Colony Optimization for k-Minimum Spanning Tree Problems

This paper considers an efficient approximate algorithm for solving k-minimum spanning tree problems, one of the combinatorial optimization problems on networks. A new hybrid algorithm based on tabu search and ant colony optimization is provided. Results of numerical experiments show that the proposed method updates some of the best known values and provides relatively better solution accuracy than existing algorithms.

Hideki Katagiri, Tomohiro Hayashida, Ichiro Nishizaki, Jun Ishimatsu

Machine Learning

Dynamic Neighborhood Selection for Nonlinear Dimensionality Reduction

Neighborhood construction is a necessary and important step in nonlinear dimensionality reduction algorithms. In this paper, we first summarize two principles for neighborhood construction obtained by analyzing existing nonlinear dimensionality reduction algorithms: 1) data points in the same neighborhood should approximately lie on a low-dimensional linear subspace; and 2) each neighborhood should be as large as possible. Then a dynamic neighborhood selection algorithm based on these two principles is proposed. The proposed method exploits the PCA technique to measure the linearity of a finite point set. Moreover, for isometric embedding, we present an improved method of constructing the neighborhood graph, which can improve the accuracy of geodesic distance estimation. Experiments on both synthetic and real data sets show that our method can construct neighborhoods according to the local curvature of the data manifold and thereby improve the performance of most manifold learning algorithms, such as ISOMAP and LLE.

Yubin Zhan, Jianping Yin, Jun Long
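
A sketch of the PCA-based linearity measure idea (illustrative; the paper's exact criterion and thresholds are not given in the abstract): the fraction of variance captured by the top d principal components indicates how close a candidate neighborhood is to a d-dimensional linear subspace.

```python
import numpy as np

def linearity(points, d):
    # Share of total variance captured by the top-d principal components.
    X = points - points.mean(axis=0)
    eigvals = np.linalg.svd(X, compute_uv=False) ** 2
    return eigvals[:d].sum() / eigvals.sum()

rng = np.random.default_rng(0)
line = np.outer(np.linspace(0, 1, 30), [1.0, 2.0, 0.5])  # points on a 1-D line
print(linearity(line + 0.01 * rng.normal(size=line.shape), d=1))  # close to 1
print(linearity(rng.normal(size=(30, 3)), d=1))                   # well below 1
```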
A Consistency-Constrained Feature Selection Algorithm with the Steepest Descent Method

This paper proposes a new consistency-based feature selection algorithm, which presents a new balance in the fundamental tradeoff between the quality of the outputs of feature selection algorithms and their efficiency. Consistency represents the extent of collective relevance of features to classification, and hence consistency-based feature selection algorithms such as INTERACT, LCC and CCC can select relevant features more correctly by taking interaction among features into account. INTERACT and LCC are fast, employing the linear search strategy. By contrast, CCC is slow, since it is based on the complete search strategy, but can output feature subsets of higher quality. The algorithm that we propose in this paper, on the other hand, takes the steepest descent method as its search strategy. Consequently, it can find better solutions than INTERACT and LCC, and simultaneously restrains the increase in computational complexity to a reasonable level: it evaluates $(|{\mathcal F}| + |{\tilde {\mathcal F}}|)(|{\mathcal F}| - |{\tilde {\mathcal F}}| + 1)/2$ feature subsets to output ${\tilde {\mathcal F}}$. We demonstrate the effectiveness of the new algorithm through experiments.

Kilho Shin, Xian Ming Xu
A Heuristic Algorithm for Attribute Reduction Based on Discernibility and Equivalence by Attributes

In this paper, we consider a heuristic method for partially calculating relative reducts with better evaluations under the evaluation criterion previously proposed by the authors. By considering the discernibility and equivalence of elements with respect to the values of condition attributes that appear in relative reducts, we introduce an evaluation criterion for condition attributes, and propose a heuristic method for calculating a relative reduct with a better evaluation.

Yasuo Kudo, Tetsuya Murai
Multiobjective Multiclass Soft-Margin Support Vector Machine and Its Solving Technique Based on Benson’s Method

In this paper, we focus on the all-together model, which is one of the support vector machine (SVM) models using a piecewise linear function for multiclass classification. We previously proposed a multiobjective hard-margin SVM model as a new all-together model for piecewise linearly separable data, which maximizes all of the geometric margins simultaneously for better generalization ability. In addition, we derived a single-objective convex problem and showed that a Pareto optimal solution of the proposed multiobjective SVM is obtained by solving single-objective problems. However, in real-world classification problems the data are often not piecewise linearly separable. Therefore, in this paper we extend the hard-margin SVM to such data by using penalty functions for the margin slack variables between outliers and the corresponding discriminant hyperplane. These functions are incorporated into the objective functions. Moreover, we derive a single-objective second-order cone programming (SOCP) problem based on Benson's method and some techniques, and show that a Pareto optimal solution of the proposed soft-margin SVM is obtained by solving the SOCP iteratively. Furthermore, through numerical experiments we verify that the proposed iterative method maximizes the geometric margins and constructs a classifier with high generalization ability.

Keiji Tatsumi, Ryo Kawachi, Kenji Hayashida, Tetsuzo Tanino
Backmatter
Metadata

Title: Modeling Decisions for Artificial Intelligence
Editors: Vicenç Torra, Yasuo Narukawa, Masahiro Inuiguchi
Copyright Year: 2009
Publisher: Springer Berlin Heidelberg
Electronic ISBN: 978-3-642-04820-3
Print ISBN: 978-3-642-04819-7
DOI: https://doi.org/10.1007/978-3-642-04820-3
