
2010 | Book

Algorithmic Game Theory

Third International Symposium, SAGT 2010, Athens, Greece, October 18-20, 2010. Proceedings

Editors: Spyros Kontogiannis, Elias Koutsoupias, Paul G. Spirakis

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


Table of Contents

Frontmatter
When the Players Are Not Expectation Maximizers
Abstract
Much of Game Theory, including the Nash equilibrium concept, is based on the assumption that players are expectation maximizers. It is known that if players are risk averse, games may no longer have Nash equilibria [11,6]. We show that
1. Under risk aversion (convex risk valuations), and for almost all games, there are no mixed Nash equilibria, and thus either there is a pure equilibrium or there are no equilibria at all, and
2. For a variety of important valuations other than expectation, it is NP-complete to determine if games between such players have a Nash equilibrium.
Amos Fiat, Christos Papadimitriou
How Do You Like Your Equilibrium Selection Problems? Hard, or Very Hard?
Abstract
The PPAD-completeness of Nash equilibrium computation is taken as evidence that the problem is computationally hard in the worst case. This evidence is necessarily rather weak, in the sense that PPAD is only known to lie “between P and NP”, and there is not a strong prospect of showing it to be as hard as NP. Of course, the problem of finding an equilibrium that has certain sought-after properties should be at least as hard as finding an unrestricted one; thus we have, for example, the NP-hardness of finding equilibria that are socially optimal (or indeed that have various efficiently checkable properties): the results of Gilboa and Zemel [6] and of Conitzer and Sandholm [3]. In the talk I will give an overview of this topic, and a summary of recent progress showing that the equilibria that are found by the Lemke-Howson algorithm, as well as related homotopy methods, are PSPACE-complete to compute. Thus we show that there are no shortcuts to the Lemke-Howson solutions, subject only to the hardness of PSPACE. I will also mention some open problems.
Paul W. Goldberg
A Simplex-Like Algorithm for Fisher Markets
Abstract
We propose a new convex optimization formulation for the Fisher market problem with linear utilities. As in the Eisenberg-Gale formulation, the set of feasible points is a polyhedral convex set while the cost function is non-linear; unlike in that formulation, however, the optimum is always attained at a vertex of this polytope. The convex cost function depends only on the initial endowments of the buyers. This formulation yields an easy simplex-like pivoting algorithm which is provably strongly polynomial for many special cases.
Bharat Adsul, Ch. Sobhan Babu, Jugal Garg, Ruta Mehta, Milind Sohoni
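For context, the classical Eisenberg-Gale program that the abstract above compares against can be stated as follows. This is the standard formulation for linear utilities, not text from the paper, and the notation (budgets \(m_i\), per-unit utilities \(u_{ij}\), allocations \(x_{ij}\), unit supply of each good) is assumed here.

```latex
% Classical Eisenberg-Gale program for a linear Fisher market (standard form;
% notation assumed: m_i = budget of buyer i, u_{ij} = utility of buyer i per
% unit of good j, x_{ij} = amount of good j allocated to buyer i, unit supply).
\begin{align*}
\text{maximize}   \quad & \sum_i m_i \log u_i \\
\text{subject to} \quad & u_i = \sum_j u_{ij} x_{ij} && \forall i \\
                        & \sum_i x_{ij} \le 1 && \forall j \\
                        & x_{ij} \ge 0 && \forall i,\, j
\end{align*}
```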
Nash Equilibria in Fisher Market
Abstract
Much work has been done on the computation of market equilibria. However, due to strategic play by buyers, it is not clear whether these are actually observed in the market. Motivated by the observation that a buyer may derive a better payoff by feigning a different utility function and thereby manipulating the Fisher market equilibrium, we formulate the Fisher market game in which buyers strategize by posing different utility functions. We show that the existence of a conflict-free allocation is a necessary condition for Nash equilibria (NE) and also sufficient for symmetric NE in this game. There are many NE with very different payoffs, and the Fisher equilibrium payoff is captured at a symmetric NE. We provide a complete polyhedral characterization of all the NE for the two-buyer market game. Surprisingly, all the NE of this game turn out to be symmetric and the corresponding payoffs constitute a piecewise linear concave curve. We also study the correlated equilibria of this game and show that third-party mediation does not help to achieve a better payoff than NE payoffs.
Bharat Adsul, Ch. Sobhan Babu, Jugal Garg, Ruta Mehta, Milind Sohoni
Partition Equilibrium Always Exists in Resource Selection Games
Abstract
We consider the existence of Partition Equilibrium in Resource Selection Games. Super-strong equilibrium, where no subset of players has an incentive to change their strategies collectively, does not always exist in such games. We show, however, that partition equilibrium (introduced in [4] to model coalitions arising in a social context) always exists in general resource selection games, and we show how to compute it efficiently. In a partition equilibrium, the set of players has a fixed partition into coalitions, and the only deviations considered are by coalitions that are sets in this partition. Our algorithm to compute a partition equilibrium in any resource selection game (i.e., load balancing game) settles the open question from [4] about the existence of partition equilibrium in general resource selection games. Moreover, we show how to always find a partition equilibrium which is also a Nash equilibrium. This implies that in resource selection games, we do not need to sacrifice the stability of individual players when forming solutions stable against coalitional deviations. In addition, while super-strong equilibrium may not exist in resource selection games, we show that its existence can be decided efficiently, and how to find one if it exists.
Elliot Anshelevich, Bugra Caskurlu, Ameya Hate
Mixing Time and Stationary Expected Social Welfare of Logit Dynamics
Abstract
We study logit dynamics [Blume, Games and Economic Behavior, 1993] for strategic games. At every stage of the game a player is selected uniformly at random and she plays according to a noisy best-response dynamics where the noise level is tuned by a parameter β. Such a dynamics defines a family of ergodic Markov chains, indexed by β, over the set of strategy profiles. Our aim is twofold: On the one hand, we are interested in the expected social welfare when the strategy profiles are random according to the stationary distribution of the Markov chain, because we believe it gives a meaningful description of the long-term behavior of the system. On the other hand, we want to estimate how long it takes, for a system starting at an arbitrary profile and running the logit dynamics, to get close to the stationary distribution; i.e., the mixing time of the chain.
In this paper we study the stationary expected social welfare for the 3-player congestion game that exhibits the worst Price of Anarchy [Christodoulou and Koutsoupias, STOC’05], for 2-player coordination games (the same class of games studied by Blume), and for a simple n-player game. For all these games, we give almost-tight upper and lower bounds on the mixing time of logit dynamics.
Vincenzo Auletta, Diodato Ferraioli, Francesco Pasquale, Giuseppe Persiano
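As an illustration of the logit dynamics described in the abstract above (a minimal sketch, not code from the paper), the following snippet performs one revision step: a uniformly random player updates her strategy, choosing strategy s with probability proportional to exp(β · utility). The utility function and the small coordination-game example are assumptions made for the sketch.

```python
import math
import random

def logit_step(profile, strategies, utility, beta):
    """One step of logit dynamics: a uniformly random player updates her
    strategy, picking s with probability proportional to exp(beta * utility)."""
    i = random.randrange(len(profile))
    weights = []
    for s in strategies[i]:
        trial = list(profile)
        trial[i] = s
        weights.append(math.exp(beta * utility(i, trial)))
    profile[i] = random.choices(strategies[i], weights=weights, k=1)[0]
    return profile

# Toy example (assumed): a 2-player coordination game, payoff 1 if strategies match.
strategies = [[0, 1], [0, 1]]
utility = lambda i, p: 1.0 if p[0] == p[1] else 0.0
profile = [0, 1]
for _ in range(1000):          # run the chain for a while
    profile = logit_step(profile, strategies, utility, beta=2.0)
print(profile)
```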
Pareto Efficiency and Approximate Pareto Efficiency in Routing and Load Balancing Games
Abstract
We analyze the Pareto efficiency, or inefficiency, of solutions to routing games and load balancing games, focusing on Nash equilibria and greedy solutions to these games. For some settings, we show that the solutions are necessarily Pareto optimal. When this is not the case, we provide a measure to quantify the distance of the solution from Pareto efficiency. Using this measure, we provide upper and lower bounds on the “Pareto inefficiency” of the different solutions. The settings we consider include load balancing games on identical, uniformly-related, and unrelated machines, both using pure and mixed strategies, and nonatomic routing in general and some specific networks.
Yonatan Aumann, Yair Dombb
On Nash-Equilibria of Approximation-Stable Games
Abstract
One reason for wanting to compute an (approximate) Nash equilibrium of a game is to predict how players will play. However, if the game has multiple equilibria that are far apart, or ε-equilibria that are far in variation distance from the true Nash equilibrium strategies, then this prediction may not be possible even in principle. Motivated by this consideration, in this paper we define the notion of games that are approximation stable, meaning that all ε-approximate equilibria are contained inside a small ball of radius Δ around a true equilibrium, and investigate a number of their properties. Many natural small games such as matching pennies and rock-paper-scissors are indeed approximation stable. We furthermore show that there exist 2-player n-by-n approximation-stable games in which the Nash equilibrium and all approximate equilibria have support Ω(log n). On the other hand, we show that all (ε,Δ) approximation-stable games must have an ε-equilibrium of support \(O(\frac{\Delta^{2-o(1)}}{\epsilon^{2}}\log n)\), yielding an immediate \(n^{O(\frac{\Delta^{2-o(1)}}{\epsilon^2}\log n)}\)-time algorithm, improving over the bound of [11] for games satisfying this condition. In addition, we give a polynomial-time algorithm for the case that Δ and ε are sufficiently close together. We also consider an inverse property, namely that all non-approximate equilibria are far from some true equilibrium, and give an efficient algorithm for games satisfying that condition.
Pranjal Awasthi, Maria-Florina Balcan, Avrim Blum, Or Sheffet, Santosh Vempala
Improved Lower Bounds on the Price of Stability of Undirected Network Design Games
Abstract
Bounding the price of stability of undirected network design games with fair cost allocation is a challenging open problem in the Algorithmic Game Theory research agenda. Even though the generalization of such games in directed networks is well understood in terms of the price of stability (it is exactly \(H_n\), the n-th harmonic number, for games with n players), far less is known for network design games in undirected networks. The upper bound carries over to this case as well while the best known lower bound is 42/23 ≈ 1.826. For more restricted but interesting variants of such games such as broadcast and multicast games, sublogarithmic upper bounds are known while the best known lower bound is 12/7 ≈ 1.714. In the current paper, we improve the lower bounds as follows. We break the psychological barrier of 2 by showing that the price of stability of undirected network design games is at least 348/155 ≈ 2.245. Our proof uses a recursive construction of a network design game with a simple gadget as the main building block. For broadcast and multicast games, we present new lower bounds of 20/11 ≈ 1.818 and 1.862, respectively.
Vittorio Bilò, Ioannis Caragiannis, Angelo Fanelli, Gianpiero Monaco
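For reference (the standard definition, not specific to the paper), the n-th harmonic number that gives the exact price of stability in directed networks is

```latex
% n-th harmonic number (standard definition):
H_n \;=\; \sum_{k=1}^{n} \frac{1}{k} \;=\; \ln n + \Theta(1).
```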
On the Rate of Convergence of Fictitious Play
Abstract
Fictitious play is a simple learning algorithm for strategic games that proceeds in rounds. In each round, the players play a best response to a mixed strategy that is given by the empirical frequencies of actions played in previous rounds. There is a close relationship between fictitious play and the Nash equilibria of a game: if the empirical frequencies of fictitious play converge to a strategy profile, this strategy profile is a Nash equilibrium. While fictitious play does not converge in general, it is known to do so for certain restricted classes of games, such as constant-sum games, non-degenerate 2×n games, and potential games. We study the rate of convergence of fictitious play and show that, in all the classes of games mentioned above, fictitious play may require an exponential number of rounds (in the size of the representation of the game) before some equilibrium action is eventually played. In particular, we show the above statement for symmetric constant-sum win-lose-tie games.
Felix Brandt, Felix Fischer, Paul Harrenstein
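As a quick illustration of the fictitious play dynamics discussed above (a generic two-player sketch, not code from the paper), each player best-responds to the opponent's empirical frequencies of past actions. The payoff matrices for matching pennies below are only an example.

```python
# A minimal sketch of two-player fictitious play, assuming payoff matrices
# A (row player) and B (column player) given as lists of lists.
def fictitious_play(A, B, rounds):
    m, n = len(A), len(A[0])
    row_counts = [0] * m      # how often each row action has been played
    col_counts = [0] * n      # how often each column action has been played
    row_counts[0] += 1        # arbitrary first-round actions
    col_counts[0] += 1
    for _ in range(rounds - 1):
        # Each player best-responds to the opponent's empirical frequencies so far.
        row_play = max(range(m), key=lambda i: sum(A[i][j] * col_counts[j] for j in range(n)))
        col_play = max(range(n), key=lambda j: sum(B[i][j] * row_counts[i] for i in range(m)))
        row_counts[row_play] += 1
        col_counts[col_play] += 1
    total = sum(row_counts)
    return [c / total for c in row_counts], [c / total for c in col_counts]

# Matching pennies: the empirical frequencies approach the (1/2, 1/2) equilibrium.
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
print(fictitious_play(A, B, 10000))
```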
On Learning Algorithms for Nash Equilibria
Abstract
Can learning algorithms find a Nash equilibrium? This is a natural question for several reasons. Learning algorithms resemble the behavior of players in many naturally arising games, and thus results on the convergence or non-convergence properties of such dynamics may inform our understanding of the applicability of Nash equilibria as a plausible solution concept in some settings. A second reason for asking this question is in the hope of being able to prove an impossibility result, not dependent on complexity assumptions, for computing Nash equilibria via a restricted class of reasonable algorithms. In this work, we begin to answer this question by considering the dynamics of the standard multiplicative weights update learning algorithms (which are known to converge to a Nash equilibrium for zero-sum games). We revisit a 3×3 game defined by Shapley [10] in the 1950s in order to establish that fictitious play does not converge in general games. For this simple game, we show via a potential function argument that in a variety of settings the multiplicative updates algorithm impressively fails to find the unique Nash equilibrium, in that the cumulative distributions of players produced by learning dynamics actually drift away from the equilibrium.
Constantinos Daskalakis, Rafael Frongillo, Christos H. Papadimitriou, George Pierrakos, Gregory Valiant
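For readers unfamiliar with the multiplicative weights update dynamics mentioned above, here is a generic sketch (an assumption-laden illustration, not the authors' code): each player keeps a weight per strategy, plays the mixed strategy proportional to the weights, and multiplies each weight by exp(ε · expected payoff of that strategy). As the abstract notes, for zero-sum games such as matching pennies the time-averaged strategies approach a Nash equilibrium.

```python
import math

def mwu(A, B, rounds, eps=0.1):
    """Multiplicative weights update for a two-player game with payoff
    matrices A (row player) and B (column player); returns the time-averaged
    mixed strategies."""
    m, n = len(A), len(A[0])
    wr, wc = [1.0] * m, [1.0] * n          # weights of row / column player
    avg_r, avg_c = [0.0] * m, [0.0] * n    # time-averaged mixed strategies
    for _ in range(rounds):
        x = [w / sum(wr) for w in wr]      # current mixed strategies
        y = [w / sum(wc) for w in wc]
        for i in range(m):
            avg_r[i] += x[i] / rounds
            wr[i] *= math.exp(eps * sum(A[i][j] * y[j] for j in range(n)))
        for j in range(n):
            avg_c[j] += y[j] / rounds
            wc[j] *= math.exp(eps * sum(B[i][j] * x[i] for i in range(m)))
        sr, sc = sum(wr), sum(wc)
        wr = [w / sr for w in wr]          # renormalize to avoid overflow
        wc = [w / sc for w in wc]
    return avg_r, avg_c

# Matching pennies (zero-sum): averages approach the (1/2, 1/2) equilibrium.
A = [[1, -1], [-1, 1]]
B = [[-a for a in row] for row in A]
print(mwu(A, B, rounds=20000))
```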
On the Structure of Weakly Acyclic Games
Abstract
The class of weakly acyclic games, which includes potential games and dominance-solvable games, captures many practical application domains. Informally, a weakly acyclic game is one where natural distributed dynamics, such as better-response dynamics, cannot enter inescapable oscillations. We establish a novel link between such games and the existence of pure Nash equilibria in subgames. Specifically, we show that the existence of a unique pure Nash equilibrium in every subgame implies the weak acyclicity of a game. In contrast, the possible existence of multiple pure Nash equilibria in every subgame is insufficient for weak acyclicity.
Alex Fabrikant, Aaron D. Jaggard, Michael Schapira
A Direct Reduction from k-Player to 2-Player Approximate Nash Equilibrium
Abstract
We present a direct reduction from k-player games to 2-player games that preserves approximate Nash equilibrium. Previously, the computational equivalence of computing approximate Nash equilibrium in k-player and 2-player games was established via an indirect reduction. This included a sequence of works defining the complexity class PPAD, identifying complete problems for this class, showing that computing approximate Nash equilibrium for k-player games is in PPAD, and reducing a PPAD-complete problem to computing approximate Nash equilibrium for 2-player games. Our direct reduction makes no use of the concept of PPAD, eliminating some of the difficulties involved in following the known indirect reduction.
Uriel Feige, Inbal Talgam-Cohen
Responsive Lotteries
Abstract
Given a set of alternatives and a single player, we introduce the notion of a responsive lottery. These mechanisms receive as input from the player a reported utility function, specifying a value for each one of the alternatives, and use a lottery to produce as output a probability distribution over the alternatives. Thereafter, exactly one alternative wins (is given to the player) with the respective probability. Assuming that the player is not indifferent to which of the alternatives wins, a lottery rule is called truthful dominant if reporting his true utility function (up to affine transformations) is the unique report that maximizes the expected payoff for the player. We design truthful dominant responsive lotteries. We also discuss their relations with scoring rules and with VCG mechanisms.
Uriel Feige, Moshe Tennenholtz
On the Existence of Optimal Taxes for Network Congestion Games with Heterogeneous Users
Abstract
We consider network congestion games in which a finite number of non-cooperative users select paths. The aim is to mitigate the inefficiency caused by the selfish users by introducing taxes on the network edges. A tax vector is strongly (weakly)-optimal if all (at least one of) the equilibria in the resulting game minimize(s) the total latency. The issue of designing optimal tax vectors for selfish routing games has been studied extensively in the literature. We study for the first time taxation for networks with atomic users which have unsplittable traffic demands and are heterogeneous, i.e., have different sensitivities to taxes. On the positive side, we show the existence of weakly-optimal taxes for single-source network games. On the negative side, we show that the cases of homogeneous and heterogeneous users differ sharply as far as the existence of strongly-optimal taxes is concerned: there are parallel-link games with linear latencies and heterogeneous users that do not admit strongly-optimal taxes.
Dimitris Fotakis, George Karakostas, Stavros G. Kolliopoulos
Computing Stable Outcomes in Hedonic Games
Abstract
We study the computational complexity of finding stable outcomes in symmetric additively-separable hedonic games. These coalition formation games are specified by an undirected edge-weighted graph: nodes are players, an outcome of the game is a partition of the nodes into coalitions, and the utility of a node is the sum of incident edge weights in the same coalition. We consider several natural stability requirements defined in the economics literature. For all of them the existence of a stable outcome is guaranteed by a potential function argument, so local improvements will converge to a stable outcome and all these problems are in PLS. The different stability requirements correspond to different local search neighbourhoods. For different neighbourhood structures, our findings comprise positive results in the form of polynomial-time algorithms for finding stable outcomes, and negative (PLS-completeness) results.
Martin Gairing, Rahul Savani
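To make the setup above concrete, here is an illustrative sketch (assumed notation and helper names, not the authors' code) of a node's utility in a symmetric additively-separable hedonic game and of one commonly studied stability check, Nash stability: no single node can gain by moving to another coalition or by going alone.

```python
# Players are nodes of an edge-weighted graph, an outcome is a partition into
# coalitions, and a node's utility is the sum of weights of its incident edges
# inside its own coalition.
def utility(node, coalition, weights):
    """weights[(u, v)] = weight of edge {u, v}, stored with u < v."""
    return sum(weights.get((min(node, v), max(node, v)), 0.0)
               for v in coalition if v != node)

def nash_stable(partition, weights):
    """True if no single node gains by moving to another coalition or going alone."""
    for idx, coalition in enumerate(partition):
        for node in coalition:
            current = utility(node, coalition, weights)
            for jdx, other in enumerate(partition):
                if jdx != idx and utility(node, other | {node}, weights) > current:
                    return False
            if current < 0.0:              # going alone yields utility 0
                return False
    return True

# Example: a triangle with one negative edge.
weights = {(1, 2): 3.0, (1, 3): 2.0, (2, 3): -4.0}
print(nash_stable([{1, 2, 3}], weights))      # False: nodes 2 and 3 prefer to leave
print(nash_stable([{1, 2}, {3}], weights))    # True
```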
A Perfect Price Discrimination Market Model with Production, and a (Rational) Convex Program for It
Abstract
Recent results showed PPAD-completeness of the problem of computing an equilibrium for Fisher’s market model under additively separable, piecewise-linear, concave utilities. We show that introducing perfect price discrimination in this model renders its equilibrium polynomial-time computable. Moreover, its set of equilibria is captured by a convex program that generalizes the classical Eisenberg-Gale program, and always admits a rational solution.
We also introduce production into our model; our goal is to carve out as big a piece of the general production model as possible while still maintaining the property that a single (rational) convex program captures its equilibria, i.e., the convex program must optimize individually for each buyer and each firm.
Gagan Goel, Vijay Vazirani
The Computational Complexity of Trembling Hand Perfection and Other Equilibrium Refinements
Abstract
The king of refinements of Nash equilibrium is trembling hand perfection. We show that it is NP-hard and Sqrt-Sum-hard to decide if a given pure strategy Nash equilibrium of a given three-player game in strategic form with integer payoffs is trembling hand perfect. Analogous results are shown for a number of other solution concepts, including proper equilibrium, (the strategy part of) sequential equilibrium, quasi-perfect equilibrium and CURB.
The proofs all use a reduction from the problem of comparing the minmax value of a three-player game in strategic form to a given rational number. This problem was previously shown to be NP-hard by Borgs et al., while a Sqrt-Sum hardness result is given in this paper. The latter proof yields bounds on the algebraic degree of the minmax value of a three-player game that may be of independent interest.
Kristoffer Arnsfelt Hansen, Peter Bro Miltersen, Troels Bjerre Sørensen
Complexity of Safe Strategic Voting
Abstract
We investigate the computational aspects of safe manipulation, a new model of coalitional manipulation that was recently put forward by Slinko and White [10]. In this model, a potential manipulator v announces how he intends to vote, and some of the other voters whose preferences coincide with those of v may follow suit. Depending on the number of followers, the outcome could be better or worse for v than the outcome of truthful voting. A manipulative vote is called safe if for some number of followers it improves the outcome from v’s perspective, and can never lead to a worse outcome. In this paper, we study the complexity of finding a safe manipulative vote for a number of common voting rules, including Plurality, Borda, k-approval, and Bucklin, providing algorithms and hardness results for both weighted and unweighted voters. We also propose two ways to extend the notion of safe manipulation to the setting where the followers’ preferences may differ from those of the leader, and study the computational properties of the resulting extensions.
Noam Hazon, Edith Elkind
Bottleneck Congestion Games with Logarithmic Price of Anarchy
Abstract
We study bottleneck congestion games where the social cost is determined by the worst congestion on any resource. In the literature, bottleneck games assume player utility costs determined by the worst congested resource in their strategy. However, the Nash equilibria of such games are inefficient since the price of anarchy can be very high and proportional to the number of resources. In order to obtain a smaller price of anarchy we introduce exponential bottleneck games, where the utility costs of the players are exponential functions of their congestions. In particular, the delay function for any resource r is \(\mathcal{M}^{C_r}\), where \(C_r\) denotes the number of players that use r, and \(\mathcal{M}\) is an integer constant. We find that exponential bottleneck games are very efficient and give the following bound on the price of anarchy: O(log|R|), where R is the set of resources. This price of anarchy is tight, since we demonstrate a game with price of anarchy Ω(log|R|). We obtain our tight bounds by using two novel proof techniques: transformation, which we use to convert arbitrary games to simpler games, and expansion, which we use to bound the price of anarchy in a simpler game.
Rajgopal Kannan, Costas Busch
Single-Parameter Combinatorial Auctions with Partially Public Valuations
Abstract
We consider the problem of designing truthful auctions, when the bidders’ valuations have a public and a private component. In particular, we consider combinatorial auctions where the valuation of an agent i for a set S of items can be expressed as \(v_i f(S)\), where \(v_i\) is a private single parameter of the agent, and the function f is publicly known. Our motivation behind studying this problem is two-fold: (a) Such valuation functions arise naturally in the case of ad-slots in broadcast media such as Television and Radio. For an ad shown in a set S of ad-slots, f(S) is, say, the number of unique viewers reached by the ad, and \(v_i\) is the valuation per unique viewer. (b) From a theoretical point of view, this factorization of the valuation function simplifies the bidding language, and renders the combinatorial auction more amenable to better approximation factors. We present a general technique, based on maximal-in-range mechanisms, that converts any α-approximation non-truthful algorithm (α ≤ 1) for this problem into \(\Omega(\frac{\alpha}{\log{n}})\) and Ω(α)-approximate truthful mechanisms which run in polynomial time and quasi-polynomial time, respectively.
Gagan Goel, Chinmay Karande, Lei Wang
On the Efficiency of Markets with Two-Sided Proportional Allocation Mechanisms
Abstract
We analyze the performance of resource allocation mechanisms for markets in which there is competition amongst both consumers and suppliers (namely, two-sided markets). Specifically, we examine a natural generalization of both Kelly’s proportional allocation mechanism for demand-competitive markets [9] and Johari and Tsitsiklis’ proportional allocation mechanism for supply-competitive markets [7].
We first consider the case of a market for one divisible resource. Assuming that marginal costs are convex, we derive a tight bound on the price of anarchy of about 0.5887. This worst case bound is achieved when the demand-side of the market is highly competitive and the supply-side consists of a duopoly. As more firms enter the market, the price of anarchy improves to 0.64. In contrast, on the demand side, the price of anarchy improves when the number of consumers decreases, reaching a maximum of 0.7321 in a monopsony setting. When the marginal cost functions are concave, the above bound smoothly degrades to zero as the marginal costs tend to constants. For monomial cost functions of the form \(C(x)= cx^{1+\frac{1}{d}}\), we show that the price of anarchy is \(\Omega(\frac{1}{d^2})\).
We complement these guarantees by identifying a large class of two-sided single-parameter market-clearing mechanisms among which the proportional allocation mechanism uniquely achieves the optimal price of anarchy. We also prove that our worst case bounds extend to general multi-resource markets, and in particular to bandwidth markets over arbitrary networks.
Volodymyr Kuleshov, Adrian Vetta
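For background, the standard one-sided Kelly mechanism that the abstract above generalizes can be written as follows (the notation is assumed here): each consumer i submits a bid \(w_i \ge 0\) for a divisible resource of capacity C, pays the bid, and receives an amount proportional to it.

```latex
% Kelly's proportional allocation mechanism (demand side only, standard form):
% consumer i bids w_i, pays w_i, and receives
x_i \;=\; \frac{w_i}{\sum_j w_j}\, C .
```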
Braess’s Paradox for Flows over Time
Abstract
We study the properties of Braess’s paradox in the context of the model of congestion games with flow over time introduced by Koch and Skutella. We compare them to the well known properties of Braess’s paradox for Wardrop’s model of games with static flows. We show that there are networks which do not admit Braess’s paradox in Wardrop’s model, but which admit it in the model with flow over time. Moreover, there is a topology that admits a much more severe Braess’s ratio for this model. Further, despite its symmetry for games with static flow, we show that Braess’s paradox is not symmetric for flows over time. We illustrate that there are network topologies which exhibit Braess’s paradox, but for which the transpose does not. Finally, we conjecture a necessary and sufficient condition of existence of Braess’s paradox in a network, and prove the condition of existence of the paradox either in the network or in its transpose.
Martin Macko, Kate Larson, Ľuboš Steskal
The Price of Anarchy in Network Creation Games Is (Mostly) Constant
Abstract
We study the price of anarchy and the structure of equilibria in network creation games. A network creation game (first defined and studied by Fabrikant et al. [4]) is played by n players {1,2,...,n}, each identified with a vertex of a graph (network), where the strategy of player i, i = 1,...,n, is to build some edges adjacent to i. The cost of building an edge is α > 0, a fixed parameter of the game. The goal of every player is to minimize its creation cost plus its usage cost. The creation cost of player i is α times the number of built edges. In the SumGame (the original variant of Fabrikant et al. [4]) the usage cost of player i is the sum of distances from i to every node of the resulting graph. In the MaxGame (variant defined and studied by Demaine et al. [3]) the usage cost is the eccentricity of i in the resulting graph of the game. In this paper we improve previously known bounds on the price of anarchy of the game (of both variants) for various ranges of α, and give new insights into the structure of equilibria for various values of α. The two main results of the paper show that for α > 273·n all equilibria in SumGame are trees and thus the price of anarchy is constant, and that for α > 129 all equilibria in MaxGame are trees and the price of anarchy is constant. For SumGame this (almost) answers one of the basic open problems in the field – is the price of anarchy of the network creation game constant for all values of α? – in an affirmative way, up to a tiny range of α.
Matúš Mihalák, Jan Christoph Schlegel
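As a small illustration of the cost structure described above (a sketch following the abstract's definitions; the helper names and the connected-graph assumption are ours), the following computes a player's cost in both variants from the resulting graph, using breadth-first search for distances.

```python
from collections import deque

def distances_from(node, adj):
    """BFS distances from `node` in an adjacency-list graph {v: set of neighbours}."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def player_cost(i, built_edges, adj, alpha, variant="sum"):
    """Creation cost alpha * (#edges built by i) plus usage cost:
    sum of distances to all nodes (SumGame) or eccentricity (MaxGame)."""
    dist = distances_from(i, adj)
    usage = sum(dist.values()) if variant == "sum" else max(dist.values())
    return alpha * len(built_edges[i]) + usage

# Example: a path 1-2-3 where player 1 built edge {1,2} and player 2 built {2,3}.
adj = {1: {2}, 2: {1, 3}, 3: {2}}
built_edges = {1: [(1, 2)], 2: [(2, 3)], 3: []}
print(player_cost(1, built_edges, adj, alpha=1.0, variant="sum"))  # 1 + (0+1+2) = 4
print(player_cost(1, built_edges, adj, alpha=1.0, variant="max"))  # 1 + 2 = 3
```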
Truthful Fair Division
Abstract
We address the problem of fair division, or cake cutting, with the goal of finding truthful mechanisms. In the case of a general measure space (“cake”) and non-atomic, additive individual preference measures (or utilities), we show that there exists a truthful “mechanism” which ensures that each of the k players gets at least 1/k of the cake. This mechanism also minimizes risk for truthful players. Furthermore, in the case where there exist at least two different measures we present a different truthful mechanism which ensures that each of the players gets more than 1/k of the cake.
We then turn our attention to partitions of indivisible goods with bounded utilities and a large number of goods. Here we provide similar mechanisms, but with slightly weaker guarantees. These guarantees converge to those obtained in the non-atomic case as the number of goods goes to infinity.
Elchanan Mossel, Omer Tamuz
No Regret Learning in Oligopolies: Cournot vs. Bertrand
Abstract
Cournot and Bertrand oligopolies constitute the two most prevalent models of firm competition. The analysis of Nash equilibria in each model reveals a unique prediction about the stable state of the system. Quite alarmingly, despite the similarities of the two models, their projections expose a stark dichotomy. Under the Cournot model, where firms compete by strategically managing their output quantity, firms enjoy positive profits as the resulting market prices exceed the marginal costs. On the contrary, the Bertrand model, in which firms compete on price, predicts that a duopoly is enough to push prices down to the marginal cost level. This suggestion that duopoly will result in perfect competition is commonly referred to in the economics literature as the “Bertrand paradox”.
In this paper, we move away from the safe haven of Nash equilibria as we analyze these models in disequilibrium under minimal behavioral hypotheses. Specifically, we assume that firms adapt their strategies over time, so that in hindsight their average payoffs are not exceeded by any single deviating strategy. Given this no-regret guarantee, we show that in the case of Cournot oligopolies, the unique Nash equilibrium fully captures the emergent behavior. Notably, we prove that under natural assumptions the daily market characteristics converge to the unique Nash equilibrium. In contrast, in the case of Bertrand oligopolies, a wide range of positive average payoff profiles can be sustained. Hence, under the assumption that firms have no regret, the Bertrand paradox is resolved and both models arrive at the same conclusion that increased competition is necessary in order to achieve perfect pricing.
Uri Nadav, Georgios Piliouras
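A standard textbook duopoly (an illustration we add for context, not taken from the paper) makes the dichotomy concrete: with linear inverse demand \(p = a - b(q_1 + q_2)\) and common marginal cost \(c < a\), Cournot quantity competition keeps the equilibrium price strictly above cost, while Bertrand price competition drives it down to cost.

```latex
% Illustrative linear duopoly (assumed example): inverse demand
% p = a - b(q_1 + q_2), common marginal cost c < a.
\begin{align*}
\text{Cournot (quantity competition):} \quad
  & q_1^* = q_2^* = \frac{a-c}{3b}, \qquad
    p^{*} = \frac{a+2c}{3} > c \quad \text{(positive profits)} \\
\text{Bertrand (price competition):} \quad
  & p^{*} = c \quad \text{(zero profits: the ``Bertrand paradox'')}
\end{align*}
```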
On the Complexity of Pareto-optimal Nash and Strong Equilibria
Abstract
We consider the computational complexity of coalitional solution concepts in scenarios related to load balancing such as anonymous and congestion games. In congestion games, Pareto-optimal Nash and strong equilibria, which are resilient to coalitional deviations, have recently been shown to yield significantly smaller inefficiency. Unfortunately, we show that several problems regarding existence, recognition, and computation of these concepts are hard, even in seemingly special classes of games. In anonymous games with a constant number of strategies, we can efficiently recognize a state as a Pareto-optimal Nash or strong equilibrium, but deciding existence for a game remains hard. In the case of player-specific singleton congestion games, we show that recognition and computation of both concepts can be done efficiently. In addition, in these games there are always short sequences of coalitional improvement moves to Pareto-optimal Nash and strong equilibria that can be computed efficiently.
Martin Hoefer, Alexander Skopalik
2-Player Nash and Nonsymmetric Bargaining Games: Algorithms and Structural Properties
Abstract
The solution to a Nash or a nonsymmetric bargaining game is obtained by maximizing a concave function over a convex set, i.e., it is the solution to a convex program. We show that each 2-player game whose convex program has linear constraints, admits a rational solution and such a solution can be found in polynomial time using only an LP solver. If in addition, the game is succinct, i.e., the coefficients in its convex program are “small”, then its solution can be found in strongly polynomial time. We also give non-succinct linear games whose solution can be found in strongly polynomial time.
Vijay V. Vazirani
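For reference, the convex program alluded to above can be written in its standard nonsymmetric form (notation assumed here, not taken from the paper): with convex feasible utility set \(\mathcal{N}\), disagreement utilities \(c_i\), and clouts \(w_i > 0\) (all equal to 1 in the symmetric Nash bargaining game), the solution is the optimum of

```latex
% Nonsymmetric Nash bargaining as a convex program (standard form):
\begin{align*}
\text{maximize}   \quad & \sum_i w_i \log (u_i - c_i) \\
\text{subject to} \quad & (u_1, \ldots, u_n) \in \mathcal{N}, \qquad u_i \ge c_i \ \ \forall i
\end{align*}
```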
On the Inefficiency of Equilibria in Linear Bottleneck Congestion Games
Abstract
We study the inefficiency of equilibrium outcomes in bottleneck congestion games. These games model situations in which strategic players compete for a limited number of facilities. Each player allocates his weight to a (feasible) subset of the facilities with the goal of minimizing the maximum (weight-dependent) latency that he experiences on any of these facilities. We derive upper and (asymptotically) matching lower bounds on the (strong) price of anarchy of linear bottleneck congestion games for a natural load balancing social cost objective (i.e., minimize the maximum latency of a facility). We restrict our studies to linear latency functions. Linear bottleneck congestion games still constitute a rich class of games and generalize, for example, load balancing games with identical or uniformly related machines, with or without restricted assignments.
Bart de Keijzer, Guido Schäfer, Orestis A. Telelis
Minimal Subsidies in Expense Sharing Games
Abstract
A key solution concept in cooperative game theory is the core. The core of an expense sharing game contains stable allocations of the total cost to the participating players, such that each subset of players pays at most what it would pay if acting on its own. Unfortunately, some expense sharing games have an empty core, meaning that the total cost is too high to be divided in a stable manner. In such cases, an external entity could choose to induce stability using an external subsidy. We call the minimal subsidy required to make the core of a game non-empty the Cost of Stability (CoS), adopting a recently coined term for surplus sharing games.
We provide bounds on the CoS for general, subadditive and anonymous games, discuss the special case of Facility Games, and consider the complexity of computing the CoS of the grand coalition and of coalitional structures.
Reshef Meir, Yoram Bachrach, Jeffrey S. Rosenschein
Backmatter
Metadata
Title
Algorithmic Game Theory
Editors
Spyros Kontogiannis
Elias Koutsoupias
Paul G. Spirakis
Copyright Year
2010
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-16170-4
Print ISBN
978-3-642-16169-8
DOI
https://doi.org/10.1007/978-3-642-16170-4
