
2015 | Book

Algorithmic Decision Theory

4th International Conference, ADT 2015, Lexington, KY, USA, September 27–30, 2015, Proceedings


About this book

This book constitutes the thoroughly refereed conference proceedings of the 4th International Conference on Algorithmic Decision Theory, ADT 2015, held in September 2015 in Lexington, USA. The 32 full papers presented were carefully selected from 76 submissions. The papers are organized in topical sections such as preferences; manipulation; learning and other issues; utility and decision theory; argumentation; bribery and control; social choice; allocation and other problems; doctoral consortium.

Table of Contents

Frontmatter

Preferences

Frontmatter
Beyond Theory and Data in Preference Modeling: Bringing Humans into the Loop

Many mathematical frameworks aim at modeling human preferences, employing a number of methods including utility functions, qualitative preference statements, constraint optimization, and logic formalisms. The choice of one model over another is usually based on the assumption that it can accurately describe the preferences of humans or other subjects/processes in the considered setting and is computationally tractable. Verification of these preference models often leverages some form of real-life or domain-specific data, demonstrating that the models can predict the series of choices observed in the past. We argue that this is not enough: to evaluate a preference model, humans must be brought into the loop. Human experiments in controlled environments are needed to avoid common pitfalls associated with exclusively using prior data, including introducing bias in the attempt to clean the data, mistaking correlation for causality, or testing data in a context that is different from the one where the data were produced. Human experiments need to be done carefully, and we advocate a multi-disciplinary research environment that includes experimental psychologists and AI researchers. We argue that experiments should be used to validate models. We detail the design of an experiment in order to highlight some of the significant computational, conceptual, ethical, mathematical, psychological, and statistical hurdles to testing whether decision makers’ preferences are consistent with a particular mathematical model of preferences.

Thomas E. Allen, Muye Chen, Judy Goldsmith, Nicholas Mattei, Anna Popova, Michel Regenwetter, Francesca Rossi, Christopher Zwilling
Reasoning with Preference Trees over Combinatorial Domains

Preference trees, or P-trees for short, offer an intuitive and often concise way of representing preferences over combinatorial domains. In this paper, we propose an alternative definition of P-trees, and formally introduce their compact representation that exploits occurrences of identical subtrees. We show that P-trees generalize lexicographic preference trees and are strictly more expressive. We relate P-trees to answer-set optimization programs and possibilistic logic theories. Finally, we study reasoning with P-trees and establish computational complexity results for the key reasoning tasks of comparing outcomes with respect to orders defined by P-trees, and of finding optimal outcomes.

Xudong Liu, Miroslaw Truszczynski
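To give a feel for the baseline that P-trees generalize, here is a minimal sketch (ours, not the authors’ code) of outcome comparison under a lexicographic preference tree with an unconditional importance order and one preferred value per variable; all names and data are illustrative.

    # Minimal sketch of the lexicographic special case that P-trees
    # generalize: variables ranked by importance, each with a preferred
    # value. Names and data below are illustrative assumptions.

    def lex_compare(order, preferred, a, b):
        """Return -1 if outcome a is preferred to b, 1 if b to a, 0 if equal.
        order: variables, most to least important; preferred: var -> value;
        a, b: outcomes as dicts var -> value."""
        for var in order:
            if a[var] != b[var]:
                return -1 if a[var] == preferred[var] else 1
        return 0

    # Example: fish (most important), then wine; preferred fish=1, wine=1.
    order = ["fish", "wine"]
    preferred = {"fish": 1, "wine": 1}
    print(lex_compare(order, preferred,
                      {"fish": 1, "wine": 0}, {"fish": 0, "wine": 1}))  # -1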
Group Activity Selection from Ordinal Preferences

We consider the situation in which group activities need to be organized for a set of agents when each agent can take part in at most one activity. The agents’ preferences depend both on the activity and the number of participants in that activity. In particular, the preferences are given by means of strict orders over such pairs “(activity, group size)”, including the possibility “do nothing”. Our goal is to assign agents to activities on the basis of their preferences, the minimum requirement being that no agent prefers doing nothing, i.e., not taking part in any activity at all. We take two different approaches to establish such an assignment: (i) by use of k-approval scores; (ii) by considering stability concepts such as Nash and core stability.

For each of these approaches, we analyse the computational complexity involved in finding a desired assignment. Particular focus is placed on two natural special cases of agents’ preferences which allow for positive complexity results.

Andreas Darmann
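As a hedged illustration of the first approach, the sketch below (our own; names and data are invented) computes the k-approval score of an assignment, counting for each agent whether her assigned (activity, group size) pair is among the top k entries of her strict ranking.

    # Illustrative k-approval scoring over (activity, group size) pairs;
    # this is not the paper's algorithm, just the scoring idea.

    def k_approval_score(rankings, assignment, k):
        """rankings: agent -> list of (activity, size) pairs, best first;
        assignment: agent -> assigned pair (('void', 1) = doing nothing)."""
        return sum(1 for agent, pair in assignment.items()
                   if pair in rankings[agent][:k])

    rankings = {"a1": [("hiking", 2), ("chess", 2), ("void", 1)],
                "a2": [("hiking", 2), ("void", 1), ("chess", 2)]}
    assignment = {"a1": ("hiking", 2), "a2": ("hiking", 2)}
    print(k_approval_score(rankings, assignment, k=1))  # 2: both get their top pair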
Towards Decision Making via Expressive Probabilistic Ontologies

We propose a framework for automated multi-attribute decision making, employing the probabilistic non-monotonic description logics proposed by Lukasiewicz in 2008. Using this framework, we can model artificial agents in decision-making situations, wherein background knowledge, available alternatives, and weighted attributes are represented via probabilistic ontologies. It turns out that extending traditional utility theory with such description logics enables us to model decision-making problems where probabilistic ignorance and default reasoning play an important role. We provide several decision functions using the notions of expected utility and probability intervals, and study their properties.

Erman Acar, Camilo Thorne, Heiner Stuckenschmidt

Manipulation

Frontmatter
Manipulation of k-Approval in Nearly Single-Peaked Electorates

For agents it can be advantageous to vote insincerely in order to change the outcome of an election. This behavior is called manipulation. The Gibbard–Satterthwaite theorem states that in principle every non-trivial voting rule with at least three candidates is susceptible to manipulation. Since the seminal paper by Bartholdi, Tovey, and Trick in 1989, (coalitional) manipulation has been shown NP-hard for many voting rules. However, under single-peaked preferences – one of the most influential domain restrictions – the complexity of manipulation often drops from NP-hard to P.

In this paper, we investigate the complexity of manipulation for the k-approval and veto families of voting rules in nearly single-peaked elections, exploring the limits where the manipulation problem turns from P to NP-hard. Compared to the classical notion of single-peakedness, notions of nearly single-peakedness are more robust and thus more likely to appear in real-world data sets.

Gábor Erdélyi, Martin Lackner, Andreas Pfandler
Manipulation and Bribery When Aggregating Ranked Preferences

Manipulation and bribery have received much attention from the social choice community. We study these concepts for preference formalisms that identify a set of optimal outcomes rather than a single winning outcome. We assume that preferences may be ranked (differ in importance), and we use the Pareto principle adjusted to the case of ranked preferences as the preference aggregation rule. For two important classes of preferences, representing the extreme ends of the spectrum, we characterize the situations in which manipulation and bribery are possible, and establish the computational complexity of the corresponding decision problems.

Ying Zhu, Miroslaw Truszczynski
Complexity of Manipulative Actions When Voting with Ties

Most of the computational study of election problems has assumed that each voter’s preferences are, or should be extended to, a total order. However, in practice voters may have preferences with ties. We study the complexity of manipulative actions on elections where voters can have ties, extending the definitions of the election systems (when necessary) to handle voters with ties. We show that for natural election systems allowing ties can both increase and decrease the complexity of manipulation and bribery, and we state a general result on the effect of voters with ties on the complexity of control.

Zack Fitzsimmons, Edith Hemaspaandra

Learning and Other Issues

Frontmatter
Bayesian Network Structure Learning with Messy Inputs: The Case of Multiple Incomplete Datasets and Expert Opinions

In this paper, we present an approach to build the structure of a Bayesian network from multiple disparate inputs. Specifically, our method accepts as input multiple partially overlapping datasets with missing data along with expert opinions about the structure of the model and produces an associated directed acyclic graph representing the graphical layer of a Bayesian network. We provide experimental results where we compare our algorithm with an application of Structural Expectation Maximization. We also provide a real-world example motivating the need for combining disparate sources of information even when noisy and not fully aligned with one another.

Shravan Sajja, Léa A. Deleris
Reducing the Number of Queries in Interactive Value Iteration

To tackle the potentially hard task of defining the reward function in a Markov Decision Process (MDP), a new approach, called Interactive Value Iteration (IVI), has recently been proposed by Weng and Zanuttini (2013). This solving method, which interweaves elicitation and optimization phases, computes a (near) optimal policy without knowing the precise reward values. The procedure as originally presented can be improved in order to reduce the number of queries needed to determine an optimal policy. The key insights are that (1) asking queries should be delayed as much as possible, avoiding queries that might not be necessary to determine the best policy; (2) queries should be asked in a priority order, because the answers to some queries can resolve other queries; and (3) queries can be avoided by using heuristic information to guide the process. Following these ideas, a modified IVI algorithm is presented, and experimental results show a significant decrease in the number of queries issued.

Hugo Gilbert, Olivier Spanjaard, Paolo Viappiani, Paul Weng
Learning the Parameters of a Non Compensatory Sorting Model

We consider a multicriteria sorting procedure based on a majority rule, called MR-Sort. This procedure allows one to sort each object of a set, evaluated on multiple criteria, into a category selected among a set of pre-defined and ordered categories. With MR-Sort, the ordered categories are separated by profiles, which are vectors of performances on the different attributes. Using the MR-Sort rule, an object is assigned to a category if it is at least as good as the category’s lower profile and not better than the category’s upper profile. To determine whether an object is as good as a profile, the weights of the criteria on which the object’s performances are better than the profile’s performances are summed up and compared to a threshold. If the sum of weights is at least equal to the threshold, then the object is considered at least as good as the profile. In view of increasing the expressiveness of the model, we substitute a capacity for the additive weights to represent the power of coalitions of criteria. This corresponds to the Non-Compensatory Sorting model characterized by Bouyssou and Marchant. In the paper we describe a mixed integer program and a heuristic algorithm that enable learning the parameters of this model from assignment examples.

Olivier Sobrie, Vincent Mousseau, Marc Pirlot
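The assignment step described above is simple enough to sketch. The following is a minimal illustration with additive weights (the paper’s contribution replaces these weights by a capacity and learns the parameters, which is not reproduced here); all numbers are made up.

    # Minimal sketch of the MR-Sort assignment step with additive weights.
    # Weights, threshold, and profiles are illustrative assumptions.

    def at_least_as_good(obj, profile, weights, threshold):
        """obj is 'at least as good as' a profile if the weights of the
        criteria on which it meets or beats the profile reach the threshold."""
        return sum(w for x, p, w in zip(obj, profile, weights) if x >= p) >= threshold

    def mr_sort(obj, profiles, weights, threshold):
        """profiles: lower profiles of categories, ordered from worst to best;
        returns the index of the highest category whose lower profile obj meets."""
        category = 0
        for i, profile in enumerate(profiles, start=1):
            if at_least_as_good(obj, profile, weights, threshold):
                category = i
        return category

    weights, threshold = [0.4, 0.3, 0.3], 0.6
    profiles = [(10, 10, 10), (15, 15, 15)]  # lower profiles of categories 1 and 2
    print(mr_sort((16, 11, 14), profiles, weights, threshold))  # 1: meets profile 1 only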
k-Agent Sufficiency for Multiagent Stochastic Physical Search Problems

In many multi-agent applications, such as patrol, shopping, or mining, a group of agents must use limited resources to successfully accomplish a task possibly available at several distinct sites. We investigate problems where agents must expend resources (e.g., battery power) both to travel between sites and to accomplish the task at a site, and where agents only have probabilistic knowledge about the availability and cost of accomplishing the task at any location. Previous research on Multiagent Stochastic Physical Search (mSPS) has only explored the case when sites are located along a path, and has not investigated the minimal number of agents required for an optimal solution. We extend previous work by exploring physical search problems both on paths and in 2-dimensional Euclidean space. Additionally, we allow the number of agents to be part of the optimization. Often, research into multiagent systems ignores the question of how many agents should actually be used to solve a problem. To investigate this question, we introduce the condition of k-agent sufficiency for a multiagent optimization problem, which means that an optimal solution exists that requires only k agents. We show that mSPS along a path with a single starting location is at most 2-agent sufficient, and quite often 1-agent sufficient. Using an optimal branch-and-bound algorithm, we also show that even in Euclidean space, optimal solutions are often only 2- or 3-agent sufficient on average.

Daniel S. Brown, Steven Loscalzo, Nathaniel Gemelli

Utility and Decision Theory

Frontmatter
Geometry on the Utility Space

We study the geometrical properties of the utility space (the space of expected utilities over a finite set of options), which is commonly used to model the preferences of an agent in a situation of uncertainty. We focus on the case where the model is neutral with respect to the available options, i.e., treats them, a priori, as symmetric to one another. Specifically, we prove that the only Riemannian metric that respects the geometrical properties and the natural symmetries of the utility space is the round metric. This canonical metric allows one to define a uniform probability over the utility space and to naturally generalize the Impartial Culture to a model with expected utilities.

François Durand, Benoît Kloeckner, Fabien Mathieu, Ludovic Noirie
Sequential Extensions of Causal and Evidential Decision Theory

Moving beyond the dualistic view in AI where agent and environment are separated incurs new challenges for decision making, as calculation of expected utility is no longer straightforward. The non-dualistic decision theory literature is split between causal decision theory and evidential decision theory. We extend these decision algorithms to the sequential setting where the agent alternates between taking actions and observing their consequences. We find that evidential decision theory has two natural extensions while causal decision theory only has one.

Tom Everitt, Jan Leike, Marcus Hutter
MOPIC Properties in the Representation of Preferences by a 2-Additive Choquet Integral

In the context of Multiple Criteria Decision Aiding (MCDA), we present necessary conditions to obtain a representation of a cardinal information by a Choquet integral w.r.t. a 2-additive capacity. A cardinal information is a preferential information provided by a Decision Maker (DM), consisting of a strict preference relation, a quaternary relation, and an indifference relation. Our work is focused on the representation of a cardinal information by a particular Choquet integral defined by a 2-additive capacity. Used as an aggregation function, it arises as a generalization of the arithmetic mean that takes into account the interaction between two criteria. It is thus a good compromise between simple models like the arithmetic mean and complex models like the general Choquet integral. We also consider the set of fictitious alternatives called binary alternatives or binary actions, from which the Choquet integral w.r.t. a 2-additive capacity can be entirely specified. The proposed MOPIC (MOnotonicity of Preferential Information for Cardinal) conditions can be viewed as an alternative to balanced cyclones, which are complex necessary and sufficient conditions used in the characterization of a 2-additive Choquet integral through a cardinal information.

Brice Mayag
Profitable Deviation Strong Equilibria

This paper deals with states that are immune to group deviations. Group deviations help the players of a strategic game escape from undesirable states, but they compromise the stability of a system. We propose and analyse a solution concept, called profitable deviation strong equilibrium, which lies between two well-known equilibria: the strong equilibrium and the super strong equilibrium. The former precludes joint deviations by groups of players who all benefit. The latter is more demanding in the sense that at least one member of a deviating coalition must be better off while the other members cannot be worse off. We study the existence, computation, and convergence to a profitable deviation strong equilibrium in three important games in algorithmic game theory: job scheduling, max cut, and singleton congestion games.

Laurent Gourvès
Democratix: A Declarative Approach to Winner Determination

Computing the winners of an election is an important task in voting and preference aggregation. The declarative nature of answer-set programming (ASP) and the performance of state-of-the-art solvers render ASP very well-suited to tackle this problem. In this work we present a novel, reduction-based approach for a variety of voting rules, ranging from tractable cases to problems harder than NP. The encoded voting rules are put together in the extensible tool Democratix, which handles the computation of winners and is also available as a web application. To learn more about the capabilities and limits of the approach, the encodings are evaluated thoroughly on real-world data as well as on random instances.

Günther Charwat, Andreas Pfandler

Multiobjective Optimisation

Frontmatter
Approximation Schemes for Multi-objective Optimization with Quadratic Constraints of Fixed CP-Rank

Motivated by the power allocation problem in AC (alternating current) electrical systems, we study the multi-objective (combinatorial) optimization problem where a constant number of (nonnegative) linear functions are simultaneously optimized over a given feasible set of 0–1 points defined by quadratic constraints. Such a problem is very hard to solve if no specific assumptions are made on the structure of the constraint matrices. We focus on the case when the constraint matrices are completely positive and have fixed cp-rank. We propose a polynomial-time algorithm which computes an ε-Pareto curve for the studied multi-objective problem when both the number of objectives and the number of constraints are fixed, for any constant ε > 0. This result is then applied to obtain polynomial-time approximation schemes (PTASes) for two NP-hard problems: multi-criteria power allocation and sum-of-ratios optimization.

Khaled Elbassioni, Trung Thanh Nguyen
Choquet Integral Versus Weighted Sum in Multicriteria Decision Contexts

In this paper, we address the problem of comparing the performance of two popular aggregation operators, the weighted sum and the Choquet integral, for selecting the best alternative among a set of alternatives, all evaluated according to different criteria. While the weighted sum is simple to use and very popular, the Choquet integral is still hard to use in practice but theoretically leads to better results in terms of concordance with the preferences of a decision maker. However, given the effort needed to set the parameters of the Choquet integral, it is important to measure, for a given decision problem, whether it is really worth defining the Choquet integral or whether a simple weighted sum could have been used to determine the best alternative. We compute the probability that a recommendation to a decision maker could only be obtained with the Choquet integral and not with a weighted sum. When the number of criteria increases, the results show that this probability tends to one. However, a high probability value can only be attained for particular data sets.

Thibaut Lust
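For readers unfamiliar with the two aggregators being compared, here is a small worked sketch (ours; the capacity values are invented). The discrete Choquet integral sorts the criterion values in nondecreasing order and weights each increment by the capacity of the coalition of criteria reaching that level.

    # Worked sketch of the two aggregators; all numbers are illustrative.

    def weighted_sum(x, w):
        return sum(xi * wi for xi, wi in zip(x, w))

    def choquet(x, capacity):
        """x: criterion values; capacity: dict frozenset(indices) -> value,
        with capacity(empty set) = 0 and capacity(all criteria) = 1."""
        order = sorted(range(len(x)), key=lambda i: x[i])  # ascending values
        total, prev = 0.0, 0.0
        for rank, i in enumerate(order):
            coalition = frozenset(order[rank:])            # criteria with value >= x[i]
            total += (x[i] - prev) * capacity[coalition]
            prev = x[i]
        return total

    x = [0.4, 0.9]
    capacity = {frozenset(): 0.0, frozenset({0}): 0.2,
                frozenset({1}): 0.3, frozenset({0, 1}): 1.0}
    print(weighted_sum(x, [0.5, 0.5]))  # 0.65
    print(choquet(x, capacity))         # 0.4*1.0 + (0.9-0.4)*0.3 = 0.55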
Exact Methods for Computing All Lorenz Optimal Solutions to Biobjective Problems

This paper deals with biobjective combinatorial optimization problems where both objectives are required to be well-balanced. Lorenz dominance is a refinement of Pareto dominance that has been proposed in economics to measure inequality in income distributions. We consider in this work the problem of computing the Lorenz optimal solutions to combinatorial optimization problems where solutions are evaluated by a two-component vector. This setting can encompass fair optimization or robust optimization. The computation of Lorenz optimal solutions in biobjective combinatorial optimization is however challenging (it has been shown intractable and NP-hard on certain problems). Nevertheless, to our knowledge, very few works address this problem. We thus propose new methods to generate Lorenz optimal solutions. More precisely, we adapt the well-known two-phase method, proposed in biobjective optimization for computing Pareto optimal solutions, to the direct computation of Lorenz optimal solutions. We show that some properties of the Lorenz dominance yield a more efficient variant of the two-phase method. The results of the new method are compared to state-of-the-art methods on various biobjective combinatorial optimization problems, and we show that the new method is more efficient in a majority of cases.

Lucie Galand, Thibaut Lust
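As a quick illustration of the central notion, the sketch below (ours, for a maximization setting with invented data) builds the Lorenz vector, i.e., the cumulative sums of the components sorted in nondecreasing order, and tests Lorenz dominance as Pareto dominance of these vectors.

    # Minimal Lorenz dominance test for a biobjective maximization setting.

    def lorenz_vector(y):
        s, out, acc = sorted(y), [], 0
        for v in s:
            acc += v
            out.append(acc)
        return out

    def lorenz_dominates(y1, y2):
        l1, l2 = lorenz_vector(y1), lorenz_vector(y2)
        return all(a >= b for a, b in zip(l1, l2)) and l1 != l2

    print(lorenz_vector((3, 5)))             # [3, 8]
    print(lorenz_dominates((4, 4), (3, 5)))  # True: (4, 8) >= (3, 8), strictly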
On Possibly Optimal Tradeoffs in Multicriteria Spanning Tree Problems

In this paper, we propose an interactive approach to determine a compromise solution in the multicriteria spanning tree problem. We assume that the Decision Maker’s preferences over spanning trees can be represented by a weighted sum of criteria but that weights are imprecisely known. In the first part of the paper, we propose a generalization of Prim’s algorithm to determine the set of possibly optimal tradeoffs. In the second part, we propose an incremental weight elicitation method to reduce the set of feasible weights so as to identify a necessary optimal tradeoff. Numerical tests are given to demonstrate the practical feasibility of the approach.

Nawal Benabbou, Patrice Perny

Argumentation

Frontmatter
Verification in Attack-Incomplete Argumentation Frameworks

We tackle the problem of expressing incomplete knowledge about the attack relation in abstract argumentation frameworks. In applications, incomplete argumentation frameworks may arise as intermediate states in an elicitation process, when merging different beliefs about an argumentation framework’s state, or in cases where the complete information cannot be fully obtained. To this end, we employ a model introduced by Cayrol et al. [10] and analyze the question of whether certain justification criteria are possibly (or necessarily) fulfilled, i.e., whether they are fulfilled in some (or in every) completion of the incomplete argumentation framework. We formally extend the definition of existing criteria to these incomplete argumentation frameworks and provide characterization and complexity results for variants of the verification problem.

Dorothea Baumeister, Daniel Neugebauer, Jörg Rothe
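To make the possible/necessary distinction concrete, here is a brute-force sketch (ours, not the paper’s method, and restricted to conflict-freeness for brevity): enumerate all completions of the uncertain attacks and test whether a given set of arguments satisfies the criterion in some, respectively every, completion.

    # Brute-force possible/necessary verification sketch for an
    # attack-incomplete framework; the example data are invented.

    from itertools import combinations

    def completions(definite, uncertain):
        """Yield every attack relation obtained by adding a subset of the
        uncertain attacks to the definite ones."""
        for r in range(len(uncertain) + 1):
            for extra in combinations(uncertain, r):
                yield definite | set(extra)

    def conflict_free(S, attacks):
        return not any((a, b) in attacks for a in S for b in S)

    definite = {("a", "b")}
    uncertain = [("b", "c"), ("c", "a")]
    S = {"a", "c"}
    print(any(conflict_free(S, att) for att in completions(definite, uncertain)))  # True
    print(all(conflict_free(S, att) for att in completions(definite, uncertain)))  # False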
Verification in Argument-Incomplete Argumentation Frameworks

Incomplete knowledge in argumentation frameworks may occur during the single steps of an elicitation process, when merging different beliefs about the current state of an argumentation framework, or when it is simply not possible to obtain complete information. The semantics of argumentation frameworks with such incomplete knowledge have previously been modeled in terms of an incomplete attack relation among the given arguments by Cayrol et al. [12], or by adding an argument that interacts with already present arguments [14]. We propose a more general model of argument-incomplete argumentation frameworks with a variable set of arguments, and we study the related verification problems for various semantics in terms of their computational complexity.

Dorothea Baumeister, Jörg Rothe, Hilmar Schadrack

Bribery and Control

Frontmatter
Complexity of Optimal Lobbying in Threshold Aggregation

This work studies the computational complexity of Optimal Lobbying under Threshold Aggregation. Optimal Lobbying is the problem a lobbyist or a campaign manager faces in a voting scenario of a multi-issue referendum when trying to influence the result. The Lobby is faced with a profile that specifies for each voter and each issue whether the voter approves or rejects the issue, and seeks to find the smallest set of voters it can influence to change their votes so that a desired outcome is obtained. This problem also describes problems arising in other scenarios of aggregation, such as principal-agent incentive schemes in complex combinatorial problems, and bribery in Truth-Functional Judgement Aggregation. We study cases when the issues are aggregated by a threshold aggregator, that is, an anonymous monotone function, and the set of desired outcomes is upward-closed. We analyze this problem with regard to two parameters: the minimal number of supporters needed to pass an issue, and the size of the maximal minterm of the desired set. For these parameters we separate tractable from intractable cases, thereby generalizing the NP-completeness result of Christian et al. [8]. We show that for the extreme values of the parameters the problem is solvable in polynomial time, and we provide algorithms. On the other hand, we prove the problem is not solvable in polynomial time for the non-extremal values, which are common values for the parameters.

Ilan Nehama
More Natural Models of Electoral Control by Partition

“Control” refers to attempts to set the outcome of elections through the addition, deletion, or partition of voters or candidates. The set of benchmark control types was largely set in the 1992 paper by Bartholdi, Tovey, and Trick that introduced control, and there is now a large literature studying how many of the benchmark types various election systems are vulnerable to, i.e., have polynomial-time attack algorithms for.

However, although the longstanding benchmark models of addition and deletion model relatively well the real-world settings that inspire them, the longstanding benchmark models of partition model settings that are arguably quite distant from those they seek to capture.

In this paper, we introduce—and for some important cases analyze the complexity of—new partition models that seek to better capture many real-world partition settings. In particular, in many partition settings one wants the two parts of the partition to be of (almost) equal size, or is partitioning into more than two parts, or has groups of actors who must be placed in the same part of the partition. Our hope is that having these new partition types will allow studies of control attacks to include such models that more realistically capture many settings.

Gábor Erdélyi, Edith Hemaspaandra, Lane A. Hemaspaandra
Elections with Few Candidates: Prices, Weights, and Covering Problems

We show that a number of election-related problems with prices (such as, for example, bribery) are fixed-parameter tractable (in FPT) when parameterized by the number of candidates. For bribery, this resolves a nearly 10-year-old family of open problems. Our results follow from a general technique that formulates voting problems as covering problems and extends the classic approach of using integer linear programming and the algorithm of Lenstra [19]. In this context, our central result is that Weighted Set Multicover parameterized by the universe size is fixed-parameter tractable. Our approach is also applicable to weighted electoral control for Approval voting. We improve previously known XP-memberships to FPT-memberships. Our preliminary experiments on real-world-based data show the practical usefulness of our approach for instances with few candidates.

Robert Bredereck, Piotr Faliszewski, Rolf Niedermeier, Piotr Skowron, Nimrod Talmon
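To make the covering viewpoint concrete, the following is a textbook ILP for Weighted Set Multicover, given as our own illustration rather than the paper’s exact encoding: every element e of the universe U must be covered at least r(e) times by selected sets from the family F. Since sets covering the same subset of U are interchangeable up to cost (one would pick the cheapest first), the number of essentially distinct variables depends on |U| alone, which is where Lenstra’s algorithm for integer programs with few variables enters.

    \begin{aligned}
    \min \quad & \sum_{S \in \mathcal{F}} c_S \, x_S \\
    \text{s.t.} \quad & \sum_{S \in \mathcal{F} \,:\, e \in S} x_S \;\ge\; r(e) \qquad \forall e \in U, \\
    & x_S \in \{0, 1\} \qquad \forall S \in \mathcal{F}.
    \end{aligned}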
Complexity of Bribery and Control for Uniform Premise-Based Quota Rules Under Various Preference Types

Manipulation of judgment aggregation procedures was first studied by List [14] and Dietrich and List [8], and Endriss et al. [9] were the first to study it from a computational perspective. Baumeister et al. [2, 3, 6] introduced the concepts of bribery and control in judgment aggregation and studied their algorithmic and complexity-theoretic properties. However, their results are restricted to Hamming-distance-respecting preferences, and their results on bribery apply to the premise-based procedure only. We extend these results to more general preference notions, including closeness-respecting and top-respecting preferences, which are due to Dietrich and List and have been applied to manipulation in judgment aggregation by Baumeister et al. [4, 5]. In addition, our results apply to uniform premise-based quota rules, which generalize the premise-based procedure.

Dorothea Baumeister, Jörg Rothe, Ann-Kathrin Selker

Social Choice

Frontmatter
Beyond Plurality: Truth-Bias in Binary Scoring Rules

It is well known that standard game-theoretic approaches to voting mechanisms lead to a multitude of Nash Equilibria (NE), many of which are counter-intuitive. We focus on truth-biased voters, a model recently proposed to avoid such issues. The model introduces an incentive for voters to be truthful when their vote is not pivotal. This is a powerful refinement, and recent simulations reveal that the surviving equilibria tend to have desirable properties.

However, truth-bias has been studied only within the context of plurality, which is an extreme example of k-approval rules with k = 1. We undertake an equilibrium analysis of the complete range of k-approval rules. Our analysis begins with the veto rule, the other extreme point of k-approval, where each ballot approves all candidates but one. We identify several crucial properties of pure NE for truth-biased veto. These properties show a clear distinction from the setting of truth-biased plurality. We proceed by establishing that deciding the existence of a NE in truth-biased veto is an NP-hard problem. We also characterise a tight (in a certain sense) subclass of instances for which the existence of a NE can be decided in poly-time. Finally, we study analogous questions for general k-approval rules.

Svetlana Obraztsova, Omer Lev, Evangelos Markakis, Zinovi Rabinovich, Jeffrey S. Rosenschein
Winner Determination and Manipulation in Minisum and Minimax Committee Elections

In a committee election, a set of candidates has to be determined as the winner of the election. Baumeister and Dennisen [2] proposed to extend the minisum and minimax approach, initially defined for approval votes, to other forms of votes. They define minisum and minimax committee election rules for trichotomous votes, incomplete linear orders, and complete linear orders, by choosing a winning committee that minimizes the dissatisfaction of the voters. Minisum election rules minimize the voter dissatisfaction by choosing a winning committee with minimum sum of the disagreement values for all individual votes, whereas in a minimax winning committee the maximum disagreement value for an individual vote is minimized. In this paper, we investigate the computational complexity of winner determination under these voting rules. We show that winner determination is possible in polynomial time for all minisum rules we consider, whereas it is NP-complete for three of the minimax rules. Furthermore, we study different forms of manipulation for these committee election rules.

Dorothea Baumeister, Sophie Dennisen, Lisa Rey
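As a hedged illustration of the classical case being extended, the sketch below (ours; data invented) computes a minisum committee for approval votes with Hamming distance as the disagreement value: minimizing the summed Hamming distance at a fixed committee size k amounts to picking k candidates with the highest approval counts.

    # Minisum committee for approval votes (classical case), illustrative data.

    from collections import Counter

    def minisum_approval_committee(votes, candidates, k):
        """votes: list of sets of approved candidates; returns a size-k
        committee minimizing the summed Hamming distance to the votes,
        i.e., the k most-approved candidates."""
        approvals = Counter({c: 0 for c in candidates})
        for vote in votes:
            approvals.update(vote)  # +1 for each approved candidate
        return {c for c, _ in approvals.most_common(k)}

    votes = [{"a", "b"}, {"a", "c"}, {"a", "b", "d"}]
    print(minisum_approval_committee(votes, {"a", "b", "c", "d"}, k=2))  # {'a', 'b'}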
OWA-Based Extensions of the Chamberlin–Courant Rule

Given a set of voters V, a set of candidates C, and voters’ preferences over the candidates, multiwinner voting rules output a fixed-size subset of candidates (a committee). Under the Chamberlin–Courant multiwinner voting rule, one fixes a scoring vector of length |C|, and each voter’s ‘utility’ for a given committee is defined to be the score that she assigns to her most preferred candidate in that committee; the goal is then to find a committee that maximizes the joint utility of all voters. The joint utility is typically identified either with the sum of all voters’ utilities or with the utility of the least satisfied voter, resulting in, respectively, the utilitarian and the egalitarian variant of the Chamberlin–Courant rule. For both of these cases, the problem of computing an optimal committee is NP-hard for general preferences, but becomes polynomial-time solvable if voters’ preferences are single-peaked or single-crossing. In this paper, we propose a family of multiwinner voting rules that are based on the concept of the ordered weighted average (OWA) and smoothly interpolate between the egalitarian and the utilitarian variants of the Chamberlin–Courant rule. We show that under moderate constraints on the weight vector we can recover many of the algorithmic results known for the egalitarian and the utilitarian versions of the Chamberlin–Courant rule in this more general setting.

Edith Elkind, Anisse Ismaili
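A minimal sketch of the OWA-based committee score described above (ours; the data and the particular weight vectors are illustrative): each voter’s utility is the score of her best committee member, and the utilities, sorted in nondecreasing order, are aggregated with the OWA weight vector.

    # OWA-based Chamberlin-Courant committee score, illustrative data.

    def owa_cc_score(profile, scores, committee, owa):
        """profile: list of rankings (candidate lists, best first);
        scores: scoring vector of length |C|; owa: weights, one per voter."""
        utilities = []
        for ranking in profile:
            best_pos = min(ranking.index(c) for c in committee)
            utilities.append(scores[best_pos])
        utilities.sort()  # least satisfied voter first
        return sum(w * u for w, u in zip(owa, utilities))

    profile = [["a", "b", "c"], ["b", "c", "a"], ["c", "a", "b"]]
    borda = [2, 1, 0]
    egalitarian, utilitarian = [1, 0, 0], [1, 1, 1]
    print(owa_cc_score(profile, borda, {"a", "b"}, egalitarian))  # 1 (min utility)
    print(owa_cc_score(profile, borda, {"a", "b"}, utilitarian))  # 5 (sum)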

Allocation and Other Problems

Frontmatter
Optimal Group Manipulation in Facility Location Problems

We address optimal group manipulation in multi-dimensional, multi-facility location problems. We focus on two families of mechanisms, generalized median and quantile mechanisms, evaluating the difficulty of group manipulation of these mechanisms. We show that, in the case of single-facility problems, optimal group manipulation can be formulated as a linear or second-order cone program, under the L1- and L2-norms, respectively, and hence can be solved in polynomial time. For multiple facilities, we show that optimal manipulation is NP-hard, but can be formulated as a mixed integer linear or second-order cone program, under the L1- and L2-norms, respectively. Despite this hardness result, empirical evaluation shows that multi-facility manipulation can be computed in reasonable time with our formulations.

Xin Sui, Craig Boutilier
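For orientation, here is a minimal sketch (ours; data invented) of the simplest member of the generalized-median family studied above, the coordinate-wise median mechanism for a single facility; the LP/SOCP manipulation formulations from the paper are not reproduced here.

    # Coordinate-wise median mechanism for single-facility location.

    import statistics

    def coordinate_wise_median(reports):
        """reports: list of equal-length tuples (agents' reported locations);
        the facility is placed at the median in each coordinate independently."""
        dims = zip(*reports)
        return tuple(statistics.median(d) for d in dims)

    reports = [(0, 0), (2, 6), (4, 2)]
    print(coordinate_wise_median(reports))  # (2, 2)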
Fairness and Rank-Weighted Utilitarianism in Resource Allocation

In multiagent resource allocation with indivisible goods, boolean fairness criteria and optimization of inequality-reducing collective utility functions (CUFs) are orthogonal approaches to fairness. We investigate the question of whether the scale of criteria proposed by Bouveret and Lemaître [5] applies to nonadditive utility functions and find that only the more demanding part of the scale remains intact for k-additive utility functions. In addition, we show that the min-max fair-share allocation existence problem is NP-hard and that under strict preferences competitive equilibrium from equal incomes does not coincide with envy-freeness and Pareto-optimality. Then we study the approximability of rank-weighted utilitarianism problems. In the special case of rank dictator functions the approximation problem is closely related to the MaxMin-Fairness problem: approximation and/or hardness results would immediately transfer to the MaxMin-Fairness problem. For general inequality-reducing rank-weighted utilitarianism we show (strong) NP-completeness. Experimentally, we answer the question of how often maximizers of rank-weighted utilitarianism satisfy the max-min fair-share criterion, the weakest fairness criterion according to Bouveret and Lemaître’s scale. For inequality-reducing weight vectors there is high compatibility. But even for the Hurwicz weight vectors, which do not imply inequality-reducing CUFs, we find a high compatibility that decreases as the Hurwicz parameter decreases.

Tobias Heinen, Nhan-Tam Nguyen, Jörg Rothe
Randomized Assignments for Barter Exchanges: Fairness vs. Efficiency

We study fairness and efficiency properties of randomized algorithms for barter exchanges with direct applications to kidney exchange problems. It is well documented that randomization can serve as a tool to ensure fairness among participants. However, in many applications, practical constraints often restrict the maximum allowed cycle length of the exchange, and for randomized algorithms this imposes constraints on the cycle length of every realized exchange in their decomposition. We prove that standard fairness properties such as envy-freeness or symmetry are incompatible with even the weakest notion of economic efficiency in this setting. On the plus side, we adapt some well-known matching mechanisms to incorporate the restricted cycle constraint and evaluate their performance experimentally on instances of the kidney exchange problem, showing tradeoffs between fairness and efficiency.

Wenyi Fang, Aris Filos-Ratsikas, Søren Kristoffer Stiil Frederiksen, Pingzhong Tang, Song Zuo

Doctoral Consortium

Frontmatter
CP-nets: From Theory to Practice

Conditional preference networks (CP-nets) have been proposed for modeling and reasoning about combinatorial decision domains. However, the study of CP-nets has not advanced sufficiently for their widespread use in complex, real-world applications. My research involves addressing these issues to make CP-nets more useful in such settings.

Thomas E. Allen
Possible Optimality and Preference Elicitation for Decision Making

Decision support systems often rely on a mathematical decision model allowing the comparison of alternatives and the selection of a proper solution.

Nawal Benabbou
Measuring Intrinsic Quality of Human Decisions

Decision making is an integral part of artificial intelligence.

Tamal T. Biswas
Sequential Decision Making Under Uncertainty Using Ordinal Preferential Information

The research work undertaken in my thesis aims at facilitating the conception of autonomous agents able to solve complex problems in sequential decision problems (e.g., planning problems in robotics).

Hugo Gilbert
Efficiency in Multi-objective Games

In a multi-objective game, each agent individually evaluates each overall action-profile by a vector. I generalize the price of anarchy to multi-objective games and provide a polynomial-time algorithm to assess this new generalization. A working paper is available at http://arxiv.org/abs/1506.04251.

Anisse Ismaili
Modeling, Learning and Reasoning with Qualitative Preferences

My research is focused on knowledge representation and reasoning, especially, preference modeling, learning and reasoning, and computational social choice.

Xudong Liu
Backmatter
Metadata
Title
Algorithmic Decision Theory
edited by
Toby Walsh
Copyright Year
2015
Publisher
Springer International Publishing
Electronic ISBN
978-3-319-23114-3
Print ISBN
978-3-319-23113-6
DOI
https://doi.org/10.1007/978-3-319-23114-3
