
2011 | Book

Internet and Network Economics

7th International Workshop, WINE 2011, Singapore, December 11-14, 2011. Proceedings

Edited by: Ning Chen, Edith Elkind, Elias Koutsoupias

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 7th International Workshop on Internet and Network Economics, WINE 2011, held in Singapore, in December 2011. The 31 revised full papers and 5 revised short papers presented together with the abstracts of 3 papers about work in progress were carefully reviewed and selected from 100 submissions. The papers are organized in topical sections on algorithmic game theory, algorithmic mechanism design, computational advertising, computational social choice, convergence and learning in games, economic aspects of security and privacy, information and attention economics, network games and social networks.

Table of Contents

Frontmatter

Full Papers

The Snowball Effect of Uncertainty in Potential Games

Uncertainty is present in different guises in many settings, in particular in environments with strategic interactions. However, most game-theoretic models assume that players can accurately observe interactions and their own costs. In this paper we quantify the effect on social costs of two different types of uncertainty: adversarial perturbations of small magnitude to costs (an effect called the Price of Uncertainty (PoU) [3]) and the presence of several players with Byzantine, i.e. arbitrary, behavior (an effect we call the Price of Byzantine behavior (PoB)). We provide lower and upper bounds on PoU and PoB in two well-studied classes of potential games: consensus games and set-covering games.

Maria-Florina Balcan, Florin Constantin, Steven Ehrlich
Approximation Algorithm for Security Games with Costly Resources

In recent years, algorithms for computing game-theoretic solutions have been developed for real-world security domains. These games are between a defender, who must allocate her resources to defend potential targets, and an attacker, who chooses a target to attack. Existing work has assumed the set of defender’s resources to be fixed. This assumption precludes the effective use of approximation algorithms, since a slight change in the defender’s allocation strategy can result in a massive change in her utility. In contrast, we consider a model where resources are obtained at a cost, initiating the study of the following optimization problem: Minimize the total cost of the purchased resources, given that every target has to be defended with at least a certain probability. We give an efficient logarithmic approximation algorithm for this problem.
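The coverage requirement in this abstract is reminiscent of weighted set cover, where a greedy rule gives the classic logarithmic guarantee. A minimal sketch in that spirit only: this is not the paper's algorithm, and the data layout and function name below are invented for illustration.

```python
def greedy_purchase(resources, requirement):
    """Greedy set-cover-style purchasing (illustrative sketch only).

    resources: list of (cost, coverage) pairs, where coverage maps a
    target to the defense probability this resource contributes to it.
    requirement: dict mapping each target to the coverage it still needs.
    """
    remaining = dict(requirement)
    bought, total_cost = [], 0.0
    while any(need > 1e-9 for need in remaining.values()):
        def gain(res):
            # residual coverage this resource would add
            return sum(min(res[1].get(t, 0.0), need) for t, need in remaining.items())
        candidates = [r for r in resources if r not in bought and gain(r) > 0]
        if not candidates:
            raise ValueError("requirement cannot be met")
        # pick the cheapest resource per unit of residual coverage
        cost, cov = min(candidates, key=lambda r: r[0] / gain(r))
        bought.append((cost, cov))
        total_cost += cost
        for t in remaining:
            remaining[t] = max(0.0, remaining[t] - cov.get(t, 0.0))
    return bought, total_cost
```

The abstract's point survives even in this toy form: a small change in the required coverage can change which resources are worth buying, which is why a cost-minimization formulation (rather than a fixed resource set) makes approximation meaningful.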

Sayan Bhattacharya, Vincent Conitzer, Kamesh Munagala
On Allocations with Negative Externalities

We consider the problem of a monopolist seller who wants to sell some items to a set of buyers. The buyers are strategic, unit-demand, and connected by a social network. Furthermore, the utility of a buyer is a decreasing function of the number of neighbors who do not own the item. In other words, they exhibit negative externalities, deriving utility from being unique in their purchases. In this model, any fixed setting of the price induces a sub-game on the buyers. We show that it is an exact potential game which admits multiple pure Nash equilibria. A natural problem is to compute those pure Nash equilibria that raise the most and least revenue for the seller. These correspond respectively to the most optimistic and most pessimistic revenues that can be raised.

We show that the revenues of both the best and worst equilibria are hard to approximate within sub-polynomial factors. Given this hardness, we consider a relaxed notion of pricing, where the price for the same item can vary within a constant factor for different buyers. We show a 4-approximation to the pessimistic revenue when the prices are relaxed by a factor of 4. The interesting aspect of this algorithm is that it uses a linear programming relaxation that only encodes part of the strategic behavior of the buyers in its constraints, and rounds this relaxation to obtain a starting configuration for performing relaxed Nash dynamics. Finally, for the maximum revenue Nash equilibrium, we show a 2-approximation for bipartite graphs (without price relaxation), and complement this result by showing that the problem is NP-Hard even on trees.

Sayan Bhattacharya, Janardhan Kulkarni, Kamesh Munagala, Xiaoming Xu
An Improved 2-Agent Kidney Exchange Mechanism

We study a mechanism design version of matching computation in graphs that models the game played by hospitals participating in pairwise kidney exchange programs. We present a new randomized matching mechanism for two agents which is truthful in expectation and has an approximation ratio of 3/2 to the maximum cardinality matching. This is an improvement over a recent upper bound of 2 [Ashlagi et al., EC 2010] and, furthermore, our mechanism is the first to beat the lower bound on the approximation ratio of deterministic truthful mechanisms. We complement our positive result with new lower bounds. Among other statements, we prove that the weaker incentive compatibility property of truthfulness in expectation in our mechanism is necessary; universally truthful mechanisms that have an inclusion-maximality property have an approximation ratio of at least 2.

Ioannis Caragiannis, Aris Filos-Ratsikas, Ariel D. Procaccia
Optimal Pricing in Social Networks with Incomplete Information

In revenue maximization of selling a digital product in a social network, the utility of an agent is often considered to have two parts: a private valuation, and linearly additive influences from other agents. We study the incomplete information case where agents know a common distribution about others’ private valuations, and make decisions simultaneously. The “rational behavior” of agents in this case is captured by the well-known Bayesian Nash equilibrium.

Two challenging questions arise: how to compute an equilibrium, and how to optimize a pricing strategy accordingly to maximize the revenue, assuming agents follow the equilibrium? In this paper, we mainly focus on the natural model where the private valuation of each agent is sampled from a uniform distribution, which turns out to be already challenging.

Our main result is a polynomial-time algorithm that can exactly compute the equilibrium and the optimal price, when pairwise influences are non-negative. If negative influences are allowed, computing any equilibrium even approximately is PPAD-hard. Our algorithm can also be used to design an FPTAS for optimizing a discriminative price profile.

Wei Chen, Pinyan Lu, Xiaorui Sun, Bo Tang, Yajun Wang, Zeyuan Allen Zhu
On the Approximation Ratio of k-Lookahead Auction

We consider the problem of designing a profit-maximizing single-item auction, where the valuations of bidders are correlated. We revisit the k-lookahead auction introduced by Ronen [6] and recently further developed by Dobzinski, Fu and Kleinberg [2]. By a more delicate analysis, we show that the k-lookahead auction can guarantee at least $\frac{e^{1-1/k}}{e^{1-1/k}+1}$ of the optimal revenue, improving the previous best result of $\frac{2k-1}{3k-1}$ in [2]. The 2-lookahead auction is of particular interest since it can be derandomized [2, 5]. Therefore, our result implies a polynomial time deterministic truthful mechanism with a ratio of $\frac{\sqrt{e}}{\sqrt{e}+1} \approx 0.622$ for any single-item correlated-bids auction, improving the previous best ratio of 0.6. Interestingly, we can show that our analysis for 2-lookahead is tight. As a byproduct, a theoretical implication of our result is that the gap between the revenues of the optimal deterministically truthful and truthful-in-expectation mechanisms is at most a factor of $\frac{1+\sqrt{e}}{\sqrt{e}}$. This improves the previous best factor of $\frac{5}{3}$ in [2].
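The constants quoted in this abstract are easy to sanity-check numerically; the snippet below is a quick verification of those figures, not code from the paper.

```python
import math

def new_bound(k):
    # this paper's guarantee: e^(1-1/k) / (e^(1-1/k) + 1)
    x = math.exp(1 - 1.0 / k)
    return x / (x + 1)

def old_bound(k):
    # previous best guarantee from [2]: (2k-1)/(3k-1)
    return (2 * k - 1) / (3 * k - 1)

# the new bound dominates the old one for every k checked
for k in (2, 3, 10, 100):
    assert new_bound(k) > old_bound(k)

print(round(new_bound(2), 3))  # 0.622, vs. the previous 0.6

# the derandomization gap also improves: (1 + sqrt(e))/sqrt(e) < 5/3
assert (1 + math.sqrt(math.e)) / math.sqrt(math.e) < 5 / 3
```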

Xue Chen, Guangda Hu, Pinyan Lu, Lei Wang
Decision Markets with Good Incentives

Decision markets both predict and decide the future. They allow experts to predict the effects of each of a set of possible actions, and after reviewing these predictions a decision maker selects an action to perform. When the future is independent of the market, strictly proper scoring rules myopically incentivize experts to predict consistent with their beliefs, but this is not generally true when a decision is to be made. When deciding, only predictions for the chosen action can be evaluated for their accuracy since the other predictions become counterfactuals. This limitation can make some actions more valuable than others for an expert, incentivizing the expert to mislead the decision maker. We construct and characterize decision markets that are – like prediction markets using strictly proper scoring rules – myopic incentive compatible. These markets require that the decision maker always risk taking every available action, and reducing this risk increases the decision maker’s worst-case loss. We also show a correspondence between strictly proper decision markets and strictly proper sets of prediction markets, creating a formal connection between the incentives of prediction and decision markets.

Yiling Chen, Ian Kash, Mike Ruberry, Victor Shnayder
A Global Characterization of Envy-Free Truthful Scheduling of Two Tasks

We study envy-free and truthful mechanisms for domains with additive valuations, like the ones that arise in scheduling on unrelated machines. We investigate the allocation functions that are both weakly monotone (truthful) and locally efficient (envy-free), in the case of only two tasks, but many players. We show that the only allocation functions that satisfy both conditions are affine minimizers, with strong restrictions on the parameters of the affine minimizer. As a further result, we provide a common payment function, i.e., a single mechanism that is both truthful and envy-free.

For additive combinatorial auctions our approach leads us (only) to a non-affine maximizer similar to the counterexample of Lavi et al. [26]. Thus our result demonstrates the inherent difference between the scheduling and the auctions domain, and inspires new questions related to the classic problem of characterizing truthfulness in additive domains.

George Christodoulou, Annamária Kovács
Truth, Envy, and Truthful Market Clearing Bundle Pricing

We identify interesting, non-trivial classes of valuation functions for which one can design auctions that are simultaneously efficient, truthful, and envy-free. Surprisingly, we also show that minor modifications to these valuations lead to impossibility results, the most surprising of which is that for a natural class of valuations, one cannot achieve efficiency, truthfulness, envy freeness, individual rationality, and no positive transfers.

We also show that such auctions imply a truthful mechanism for computing bundle prices (“shrink wrapped” bundles of items) that clear the market. This extends the class of valuations for which truthful market clearing prices mechanisms exist.

Edith Cohen, Michal Feldman, Amos Fiat, Haim Kaplan, Svetlana Olonetsky
Simple, Optimal and Efficient Auctions

We study the extent to which simple auctions can simultaneously achieve good revenue and efficiency guarantees in single-item settings. Motivated by the optimality of the second price auction with monopoly reserves when the bidders’ values are drawn i.i.d. from regular distributions [12], and its approximate optimality when they are drawn from independent regular distributions [11], we focus our attention on the second price auction with general (not necessarily monopoly) reserve prices, arguably one of the simplest and most intuitive auction formats. As our main result, we show that for a carefully chosen set of reserve prices this auction guarantees at least 20% of both the optimal welfare and the optimal revenue, when the bidders’ values are distributed according to independent, not necessarily identical, regular distributions. We also prove a similar guarantee, when the values are drawn i.i.d. from a—possibly irregular—distribution.

Constantinos Daskalakis, George Pierrakos
Prior-Independent Multi-parameter Mechanism Design

In a unit-demand multi-unit multi-item auction, an auctioneer is selling a collection of different items to a set of agents, each interested in buying at most one unit. Each agent has a different private value for each of the items. We consider the problem of designing a truthful auction that maximizes the auctioneer’s profit in this setting. Previously, there has been progress on this problem in the setting in which each value is drawn from a known prior distribution. Specifically, it has been shown how to design auctions tailored to these priors that achieve a constant factor approximation ratio [2, 5]. In this paper, we present a prior-independent auction for this setting. This auction is guaranteed to achieve a constant fraction of the optimal expected profit for a large class of so-called “regular” distributions, without specific knowledge of the distributions.

Nikhil Devanur, Jason Hartline, Anna Karlin, Thach Nguyen
Discrete Choice Models of Bidder Behavior in Sponsored Search

There are two kinds of bidders in sponsored search: most keep their bids static for long periods of time, but some do actively manage their bids. In this work we develop a model of bidder behavior in sponsored search that applies to both active and inactive bidders. Our observations on real keyword auction data show that advertisers see substantial variation in rank, even if their bids are static. This motivates a discrete choice approach that bypasses bids and directly models an advertiser’s (perhaps passive) choice of rank. Our model’s value per click estimates are consistent with basic theory which states that bids should not exceed values, even though bids are not directly used to fit the model. An empirical evaluation confirms that our model performs well in terms of predicting realized ranks and clicks.

Quang Duong, Sébastien Lahaie
Social Learning in a Changing World

We study a model of learning on social networks in dynamic environments, describing a group of agents who are each trying to estimate an underlying state that varies over time, given access to weak signals and the estimates of their social network neighbors.

We study three models of agent behavior. In the fixed response model, agents use a fixed linear combination to incorporate information from their peers into their own estimate. This can be thought of as an extension of the DeGroot model to a dynamic setting. In the best response model, players calculate minimum variance linear estimators of the underlying state.
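A minimal sketch of what one round of fixed response dynamics might look like; the weights, noise model, and function name are illustrative assumptions, not the paper's specification.

```python
import random

def fixed_response_step(estimates, neighbors, state, noise, w_signal=0.5, rng=random):
    """One round of DeGroot-style averaging with a noisy private signal.

    estimates[i] is agent i's current estimate of the hidden state;
    neighbors[i] lists the indices of agent i's peers.
    """
    new = []
    for i in range(len(estimates)):
        signal = state + rng.gauss(0, noise)  # weak signal about the true state
        peer_avg = sum(estimates[j] for j in neighbors[i]) / len(neighbors[i])
        # fixed linear combination of own signal and peers' previous estimates
        new.append(w_signal * signal + (1 - w_signal) * peer_avg)
    return new
```

In a dynamic environment the true `state` would itself drift between rounds, which is what separates this setting from the static DeGroot model.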

We show that regardless of the initial configuration, fixed response dynamics converge to a steady state, and that the same holds for best response on the complete graph. We show that best response dynamics can, in the long term, lead to estimators with higher variance than is achievable using well chosen fixed responses.

The penultimate prediction model is an elaboration of the best response model. While this model only slightly complicates the computations required of the agents, we show that in some cases it greatly increases the efficiency of learning, and on complete graphs is in fact optimal, in a strong sense.

Rafael M. Frongillo, Grant Schoenebeck, Omer Tamuz
Budget-Balanced and Nearly Efficient Randomized Mechanisms: Public Goods and beyond

Many scenarios where participants hold private information require payments to encourage truthful revelation. Some of these scenarios have no natural residual claimant who would absorb the budget surplus or cover the deficit. Faltings [7] proposed the idea of excluding one agent uniformly at random and making him the residual claimant. Based on this idea, we propose two classes of public good mechanisms and derive optimal ones within each class: Faltings’ mechanism is optimal in one of the classes. We then move on to general mechanism design settings, where we prove guarantees on the social welfare achieved by Faltings’ mechanism. Finally, we analyze a modification of the mechanism where budget balance is achieved without designating any agent as the residual claimant.

Mingyu Guo, Victor Naroditskiy, Vincent Conitzer, Amy Greenwald, Nicholas R. Jennings
Online Stochastic Weighted Matching: Improved Approximation Algorithms

Motivated by the display ad allocation problem on the Internet, we study the online stochastic weighted matching problem. In this problem, given an edge-weighted bipartite graph, nodes of one side arrive online i.i.d. according to a known probability distribution. Recently, a sequence of results by Feldman et al. [14] and Manshadi et al. [20] resulted in a 0.702-approximation algorithm for the unweighted version of this problem, aka online stochastic matching, breaking the 1 − 1/e barrier. Those results, however, do not hold for the more general online stochastic weighted matching problem. Moreover, all of these results employ the idea of the power of two choices.

In this paper, we present the first approximation (0.667-competitive) algorithm for the online stochastic weighted matching problem beating the 1 − 1/e barrier. Moreover, we improve the approximation factor of online stochastic matching by analyzing the more general framework of the power of multiple choices. In particular, by computing a careful third pseudo-matching along with the two offline solutions, and using it in the online algorithm, we improve the approximation factor of online stochastic matching for any bipartite graph to 0.7036.
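For reference, the barrier in question is 1 − 1/e ≈ 0.632; the snippet below is just a numeric check that the ratios quoted in this abstract clear it, nothing from the paper's algorithm itself.

```python
import math

barrier = 1 - 1 / math.e  # the classic online-matching bound
print(round(barrier, 4))  # 0.6321

assert 0.667 > barrier            # this paper, weighted case
assert 0.7036 > 0.702 > barrier   # improved and prior unweighted ratios
```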

Bernhard Haeupler, Vahab S. Mirrokni, Morteza Zadimoghaddam
On Strategy-Proof Allocation without Payments or Priors

In this paper we study the problem of allocating divisible items to agents without payments. We assume no prior knowledge about the agents. The utility of an agent is additive. The social welfare of a mechanism is defined as the overall utility of all agents. This model was first defined by Guo and Conitzer [7]. Here we are interested in strategy-proof mechanisms that have a good competitive ratio, that is, those that are able to achieve social welfare close to the maximal social welfare in all cases. First, for the setting of n agents and m items, we prove that there is no (1/m + ε)-competitive strategy-proof mechanism, for any ε > 0, and that no mechanism can achieve a competitive ratio better than $4/\sqrt{n}$ when $m \ge \sqrt{n}$. Next we study the setting of two agents and m items, which is also the focus of [7]. We prove that the competitive ratio of any swap-dictatorial mechanism is no greater than $1/2 + 1/\sqrt{\left\lbrack\log{m}\right\rbrack}$. Then we give a characterization result: for the case of 2 items, if the mechanism is strategy-proof, symmetric and second order continuously differentiable, then it is always swap-dictatorial. In the end we consider a setting where an agent’s valuation of each item is bounded by C/m, where C is an arbitrary constant. We show a mechanism that is (1/2 + ε(C))-competitive, where ε(C) > 0.

Li Han, Chunzhi Su, Linpeng Tang, Hongyang Zhang
Demand Allocation Games: Integrating Discrete and Continuous Strategy Spaces

In this paper, we introduce a class of games which we term demand allocation games, combining the characteristics of finite games such as congestion games and continuous games such as Cournot oligopolies. In a strategy profile each player may choose both an action out of a finite set and a non-negative demand out of a convex and compact interval. The utility of each player is assumed to depend solely on the action, the chosen demand, and the aggregated demand on the action chosen. We show that this general class of games possesses a pure Nash equilibrium whenever the players’ utility functions satisfy the assumptions of negative externality, decreasing marginal returns and homogeneity. If one of the assumptions is violated, then a pure Nash equilibrium may fail to exist. We demonstrate the applicability of our results by giving several concrete examples of games that fit into our model.

Tobias Harks, Max Klimm
Controlling Infection by Blocking Nodes and Links Simultaneously

In this paper we study the problem of controlling the spread of undesirable things (viruses, epidemics, rumors, etc.) in a network. We present a model called the mixed generalized network security model, denoted by MGNS(d), which unifies and generalizes several well-studied infection control models in the literature. Intuitively speaking, our goal under this model is to secure a subset of nodes and links in a network so as to minimize the expected total loss caused by a possible infection (with a spreading limit of d hops) plus the cost spent on the preventive actions. Our model has wide applications since it incorporates both node-deletion and edge-removal operations. Our main results are as follows:

1. For all 1 ≤ d < ∞, we present a polynomial time (d + 1)-approximation algorithm for computing the optimal solution of MGNS(d). This improves the approximation factor of 2d obtained in [19] for a special case of our model. We derive an O(log n)-approximation for the case d = ∞. Moreover, we give a polynomial time $\frac{3}{2}$-approximation for MGNS(1) on bipartite graphs.

2. We prove that for all d ∈ ℕ ∪ { ∞ }, it is $\mathcal{APX}$-hard to compute the optimum cost of MGNS(d) even on 3-regular graphs. We also show that, assuming the Unique Games Conjecture [13], we cannot obtain a $(\frac{3}{2}-\epsilon)$-approximation for MGNS(d) in polynomial time. Our hardness results hold for the special case GNS(d) in [19] as well.

3. We show that an optimal solution of MGNS(d) can be found in polynomial time for every fixed d ∈ ℕ ∪ { ∞ } if the underlying graph is a tree, and the infection cost and attack probability are both uniform. Our algorithm also works for the case where there are budget constraints on the number of secured nodes and edges in a solution. This in particular settles an open question from [21] that asks whether there exists an efficient algorithm for the minimum average contamination problem on trees.

Jing He, Hongyu Liang, Hao Yuan
A General Framework for Computing Optimal Correlated Equilibria in Compact Games
(Extended Abstract)

We analyze the problem of computing a correlated equilibrium that optimizes some objective (e.g., social welfare). Papadimitriou and Roughgarden [2008] gave a sufficient condition for the tractability of this problem; however, this condition only applies to a subset of existing representations. We propose a different algorithmic approach for the optimal CE problem that applies to all compact representations, and give a sufficient condition that generalizes that of Papadimitriou and Roughgarden [2008]. In particular, we reduce the optimal CE problem to the deviation-adjusted social welfare problem, a combinatorial optimization problem closely related to the optimal social welfare problem. This framework allows us to identify new classes of games for which the optimal CE problem is tractable; we show that graphical polymatrix games on tree graphs are one example. We also study the problem of computing the optimal coarse correlated equilibrium, a solution concept closely related to CE. Using a similar approach we derive a sufficient condition for this problem, and use it to prove that the problem is tractable for singleton congestion games.

Albert Xin Jiang, Kevin Leyton-Brown
Buy-Sell Auction Mechanisms in Market Equilibrium

In this paper we consider the problem of computing market equilibrium when utilities are homothetic concave functions. We use the Fisher market model. The problem of finding a tâtonnement process for equilibrium in this case has been the subject of recent papers, and determining an approximation is of considerable interest. Our buy-sell algorithm starts with an arbitrary price vector and converges to an ε-equilibrium price vector in time proportional to O(1/ε²). This process attempts to closely mimic the convergence process of real-life markets.

Sanjiv Kapoor
Behavioral Conflict and Fairness in Social Networks

We report on a series of behavioral experiments in social networks in which human subjects continuously choose to play either a dominant role (called a King) or a submissive one (called a Pawn). Kings receive a higher payoff rate, but only if all their network neighbors are Pawns, and thus the maximum social welfare states correspond to maximum independent sets. We document that fairness is of vital importance in driving interactions between players. First, we find that payoff disparities between network neighbors give rise to conflict, and the specifics depend on the network topology. However, allowing Kings to offer “tips” or side payments to their neighbors substantially reduces conflict, and consistently increases social welfare. Finally, we observe that tip reductions lead to increased conflict. We describe these and a broad set of related findings.

Stephen Judd, Michael Kearns, Yevgeniy Vorobeychik
Efficient Ranking in Sponsored Search

In the standard model of sponsored search auctions, an ad is ranked according to the product of its bid and its estimated click-through rate (known as the quality score), where the estimates are taken as exact. This paper re-examines the form of the efficient ranking rule when uncertainty in click-through rates is taken into account. We provide a sufficient condition under which applying an exponent—strictly less than one—to the quality score improves expected efficiency. The condition holds for a large class of distributions known as natural exponential families, and for the lognormal distribution. An empirical analysis of Yahoo’s sponsored search logs reveals that exponent settings substantially smaller than one can be efficient for both high and low volume keywords, implying substantial deviations from the traditional ranking rule.
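To see what such a squashed ranking rule does, here is a toy comparison; the ads and the exponent value are invented for illustration, and the paper determines the appropriate exponent from the click-through-rate distribution rather than fixing it by hand.

```python
def rank(ads, alpha):
    """Rank ads by bid * ctr**alpha; alpha = 1 is the standard rule."""
    return sorted(ads, key=lambda ad: ad[1] * ad[2] ** alpha, reverse=True)

ads = [("high_ctr_low_bid", 1.0, 0.20),   # (name, bid, estimated CTR)
       ("low_ctr_high_bid", 3.0, 0.05)]

print(rank(ads, 1.0)[0][0])  # high_ctr_low_bid wins under the standard rule
print(rank(ads, 0.5)[0][0])  # low_ctr_high_bid wins once the CTR is squashed
```

Squashing the quality score discounts high CTR estimates, which is exactly the hedge against CTR uncertainty that the sufficient condition in the paper formalizes.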

Sébastien Lahaie, R. Preston McAfee
The Complexity of Approximate Nash Equilibrium in Congestion Games with Negative Delays

We extend the study of the complexity of computing an ε-approximate Nash equilibrium in symmetric congestion games from the case of positive delay functions to delays of arbitrary sign. Our results show that with this extension the complexity has a richer structure, and it depends on the exact nature of the signs allowed. We first prove that in symmetric games with increasing delay functions and with α-bounded jump the ε-Nash dynamic converges in polynomial time when all delays are negative, similarly to the case of positive delays. We are able to extend this result to monotone delay functions. We then establish a hardness result for symmetric games with increasing delay functions and with α-bounded jump when the delays can be both positive and negative: in that case computing an ε-approximate Nash equilibrium becomes PLS-complete, even if each delay function is of constant sign or of constant absolute value.

Frédéric Magniez, Michel de Rougemont, Miklos Santha, Xavier Zeitoun
On Worst-Case Allocations in the Presence of Indivisible Goods

We study a fair division problem, where a set of indivisible goods is to be allocated to a set of n agents. In the continuous case, where goods are infinitely divisible, it is well known that proportional allocations always exist, i.e., allocations where every agent receives a bundle of goods worth to him at least $\frac{1}{n}$. With indivisible goods however, this is not the case and one would like to find worst case guarantees on the value that every agent can have. We focus on algorithmic and mechanism design aspects of this problem.

An explicit lower bound was identified by Hill [5], depending on n and the maximum value of any agent for a single good, such that for any instance, there exists an allocation that provides at least this guarantee to every agent. The proof however did not imply an efficient algorithm for finding such allocations. Following upon the work of [5], we first provide a slight strengthening of the guarantee we can make for every agent, as well as a polynomial time algorithm for computing such allocations. We then move to the design of truthful mechanisms. For deterministic mechanisms, we obtain a negative result showing that a truthful $\frac{2}{3}$-approximation of these guarantees is impossible. We complement this by exhibiting a simple truthful algorithm that can achieve a constant approximation when the number of goods is bounded. Regarding randomized mechanisms, we also provide a negative result, under the restrictions that they are Pareto-efficient and satisfy certain symmetry requirements.

Evangelos Markakis, Christos-Alexandros Psomas
Natural Models for Evolution on Networks

Evolutionary dynamics have been traditionally studied in the context of homogeneous populations, mainly described by the Moran process [15]. Recently, this approach has been generalized in [13] by arranging individuals on the nodes of a network (in general, directed). In this setting, the existence of directed arcs enables the simulation of extreme phenomena, where the fixation probability of a randomly placed mutant (i.e. the probability that the offspring of the mutant eventually spread over the whole population) is arbitrarily small or large. On the other hand, undirected networks (i.e. undirected graphs) seem to have a smoother behavior, and thus it is more challenging to find suppressors/amplifiers of selection, that is, graphs with smaller/greater fixation probability than the complete graph (i.e. the homogeneous population). In this paper we focus on undirected graphs. We present the first class of undirected graphs which act as suppressors of selection, by achieving a fixation probability that is at most one half of that of the complete graph, as the number of vertices increases. Moreover, we provide some generic upper and lower bounds for the fixation probability of general undirected graphs. As our main contribution, we introduce a natural alternative to the model proposed in [13]. In our new evolutionary model, all individuals interact simultaneously and the result is a compromise between aggressive and non-aggressive individuals. That is, the behavior of the individuals in our new model and in the model of [13] can be interpreted as an “aggregation” vs. an “all-or-nothing” strategy, respectively. We prove that our new model of mutual influences admits a potential function, which guarantees the convergence of the system for any graph topology and any initial fitness vector of the individuals. Furthermore, we prove fast convergence to the stable state for the case of the complete graph, and we provide almost tight bounds on the limit fitness of the individuals. Apart from being important on its own, this new evolutionary model appears to be useful also in the abstract modeling of control mechanisms over invading populations in networks. We demonstrate this by introducing and analyzing two alternative control approaches, for which we bound the time needed to stabilize to the “healthy” state of the system.

George B. Mertzios, Sotiris Nikoletseas, Christoforos Raptopoulos, Paul G. Spirakis
Approximate Judgement Aggregation

In this paper we analyze judgement aggregation problems in which a group of agents independently votes on a set of complex propositions that has some interdependency constraint between them (e.g., transitivity when describing preferences). We consider the issue of judgement aggregation from the perspective of approximation. That is, we generalize the previous results by studying approximate judgement aggregation. We relax the two main constraints assumed in the current literature, Consistency and Independence, and consider mechanisms that only approximately satisfy these constraints, that is, satisfy them up to a small portion of the inputs. The main question we raise is whether the relaxation of these notions significantly alters the class of satisfying aggregation mechanisms. The recent works for preference aggregation of Kalai, Mossel, and Keller fit into this framework. The main result of this paper is that, as in the case of preference aggregation, for a subclass of a natural class of aggregation problems termed ‘truth-functional agendas’, the set of satisfying aggregation mechanisms does not extend non-trivially when relaxing the constraints. Our proof techniques involve boolean Fourier transform and analysis of voter influences for voting protocols.

The question we raise for Approximate Aggregation can also be stated in terms of Property Testing. For instance, as a corollary of our result we obtain a generalization of the classic property-testing result on linearity of Boolean functions.

Ilan Nehama
Liquidity-Sensitive Automated Market Makers via Homogeneous Risk Measures

Automated market makers are algorithmic agents that provide liquidity in electronic markets. A recent stream of research in automated market making is the design of liquidity-sensitive automated market makers, which are able to adjust their price response to the level of active interest in the market. In this paper, we introduce homogeneous risk measures, the general class of liquidity-sensitive automated market makers, and show that members of this class are (necessarily and sufficiently) the convex conjugates of compact convex sets in the non-negative orthant. We discuss the relation between features of this convex conjugate set and features of the corresponding automated market maker in detail, and prove that it is the curvature of the convex conjugate set that is responsible for implicitly regularizing the price response of the market maker. We use our insights into the dual space to develop a new family of liquidity-sensitive automated market makers with desirable properties.

Abraham Othman, Tuomas Sandholm
Manipulating Stochastically Generated Single-Elimination Tournaments for Nearly All Players

We study the power of a tournament organizer in manipulating the outcome of a balanced single-elimination tournament by fixing the initial seeding. This problem is known as agenda control for balanced voting trees. It is not known whether there is a polynomial-time algorithm that computes a seeding for which a given player can win the tournament, even if the match outcomes for all pairwise player match-ups are known in advance. We approach the problem by giving a sufficient condition under which the organizer can always efficiently find a tournament seeding for which the given player will win the tournament. We then use this result to show that for most match outcomes generated by a natural random model attributed to Condorcet, the tournament organizer can very efficiently make a large constant fraction of the players win by manipulating the initial seeding.

Isabelle Stanton, Virginia Vassilevska Williams
Computing Nash Equilibria of Action-Graph Games via Support Enumeration

The support-enumeration method (SEM) for computation of Nash equilibrium has been shown to achieve state-of-the-art empirical performance on normal-form games. Action-graph games (AGGs) are exponentially smaller than the normal form on many important classes of games. We show how SEM can be extended to the AGG representation, yielding an exponential improvement in worst-case runtime. Empirically, we demonstrate that our AGG-optimized SEM algorithm substantially outperforms the original SEM, and also outperforms state-of-the-art AGG-optimized algorithms on most problem distributions.

David R. M. Thompson, Samantha Leung, Kevin Leyton-Brown
Heavy Traffic Approximation of Equilibria in Resource Sharing Games

We consider a model of priced resource sharing that combines both queueing behavior and strategic behavior. We study a priority service model where a single server allocates its capacity to agents in proportion to their payments to the system, and users from different classes act to minimize the sum of their cost for processing delay and payment. As the exact processing time of this system is hard to compute, we introduce the notion of heavy traffic equilibrium as an approximation of the Nash equilibrium, derived by considering the asymptotic regime where the system load approaches capacity. We discuss efficiency and revenue, and in particular provide a bound for the price of anarchy of the heavy traffic equilibrium.
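The proportional-priority allocation described above can be read as a simple rule; a minimal sketch (illustrative only, not code from the paper — the function name and interface are our own):

```python
# Illustrative sketch: a server of capacity C serves agents in proportion
# to their payments, as in the priority service model described above.
def allocate(capacity: float, payments: list[float]) -> list[float]:
    """Return each agent's share of capacity, proportional to payment."""
    total = sum(payments)
    return [capacity * p / total for p in payments]

# Example: capacity 10, payments (1, 3, 1) -> shares (2, 6, 2).
print(allocate(10.0, [1.0, 3.0, 1.0]))
```

The strategic element of the model enters through the payments themselves, which agents choose to trade off delay cost against money.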

Yu Wu, Loc Bui, Ramesh Johari
An NTU Cooperative Game Theoretic View of Manipulating Elections

Social choice theory and cooperative (coalitional) game theory have become important foundations for the design and analysis of multiagent systems. In this paper, we use cooperative game theory tools in order to explore the coalition formation process in the coalitional manipulation problem. Unlike earlier work on a cooperative-game-theoretic approach to the manipulation problem [2], we consider a model where utilities are not transferable. We investigate the issue of stability in coalitional manipulation voting games; we define two notions of the core in these domains, the α-core and the β-core. For each type of core, we investigate how hard it is to determine whether a given candidate is in the core. We prove that for both types of core, this determination is at least as hard as the coalitional manipulation problem. On the other hand, we show that for some voting rules, the α- and the β-core problems are no harder than the coalitional manipulation problem. We also show that some prominent voting rules, when applied to the truthful preferences of voters, may produce an outcome not in the core, even when the core is not empty.

Michael Zuckerman, Piotr Faliszewski, Vincent Conitzer, Jeffrey S. Rosenschein

Short Papers

The Price of Civil Society

Most work in algorithmic game theory assumes that players ignore costs incurred by their fellow players. In this paper, we consider superimposing a social network over a game, where players are concerned with minimizing not only their own costs, but also the costs of their neighbors in the network. We aim to understand how properties of the underlying game are affected by this alteration to the standard model. The new social game has its own equilibria, and the price of civil society denotes the ratio of the social cost of the worst such equilibrium to that of the worst Nash equilibrium under standard selfish play. We initiate the study of the price of civil society in the context of a simple class of games. Counterintuitively, we show that when players become less selfish (optimizing over both themselves and their friends), the resulting outcomes may be worse than they would have been in the base game. We give tight bounds on this phenomenon in a simple class of load-balancing games over arbitrary social networks, and present some extensions.
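One common way to instantiate the social-network cost model above — an assumption for illustration, not necessarily the paper's exact definition — is to let each player's perceived cost be his own cost plus the costs of his network neighbors:

```python
# Illustrative sketch: a player's perceived cost in the "social game" is
# assumed here to be his own direct cost plus his neighbors' direct costs.
def perceived_cost(player, costs, neighbors):
    """costs: player -> direct cost; neighbors: player -> set of players."""
    return costs[player] + sum(costs[n] for n in neighbors[player])

costs = {"a": 2, "b": 5, "c": 1}
neighbors = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(perceived_cost("b", costs, neighbors))  # own 5 + neighbors 2 + 1 = 8
```

Equilibria of the game with these perceived costs are exactly the "social" equilibria whose worst-case social cost the price of civil society compares against selfish play.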

Russell Buehler, Zachary Goldman, David Liben-Nowell, Yuechao Pei, Jamie Quadri, Alexa Sharp, Sam Taggart, Tom Wexler, Kevin Woods
The Robust Price of Anarchy of Altruistic Games

We study the inefficiency of equilibria for several classes of games when players are (partially) altruistic. We model altruistic behavior by assuming that player i’s perceived cost is a convex combination of 1 − α_i times his direct cost and α_i times the social cost. Tuning the parameters α_i allows smooth interpolation between purely selfish and purely altruistic behavior. Within this framework, we study altruistic extensions of cost-sharing games, utility games, and linear congestion games. Our main contribution is an adaptation of Roughgarden’s smoothness notion to altruistic extensions of games. We show that this extension captures the essential properties to determine the robust price of anarchy of these games, and use it to derive mostly tight bounds.
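The perceived-cost formula in the abstract can be written out directly; a minimal sketch (the function name is ours):

```python
# The altruistic perceived cost from the abstract: a convex combination of
# the player's direct cost and the social cost, weighted by alpha_i.
def altruistic_cost(direct: float, social: float, alpha: float) -> float:
    return (1 - alpha) * direct + alpha * social

print(altruistic_cost(4.0, 10.0, 0.0))  # alpha = 0: purely selfish -> 4.0
print(altruistic_cost(4.0, 10.0, 1.0))  # alpha = 1: purely altruistic -> 10.0
print(altruistic_cost(4.0, 10.0, 0.5))  # halfway between the two -> 7.0
```

Sweeping alpha from 0 to 1 is the "smooth interpolation between purely selfish and purely altruistic behavior" that the abstract refers to.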

Po-An Chen, Bart de Keijzer, David Kempe, Guido Schäfer
Revenue Enhancement in Ad Auctions

We consider the revenue of the Generalized Second Price (GSP) auction, which is one of the most widely used mechanisms for ad auctions. While the standard model of ad auctions implies that the revenue of GSP in equilibrium is at least as high as the revenue of VCG, the literature suggests that it is not strictly higher, due to the selection of a natural equilibrium that coincides with the VCG outcome. We propose a randomized modification of the GSP mechanism, which eliminates the low-revenue equilibria of the GSP mechanism under some natural restrictions. The proposed mechanism leads to higher revenue for the seller.

Michal Feldman, Reshef Meir, Moshe Tennenholtz
Bilinear Games: Polynomial Time Algorithms for Rank Based Subclasses

Motivated by the sequence form formulation of Koller et al. [16], this paper considers bilinear games, represented by two payoff matrices (A, B) and compact polytopal strategy sets. Bilinear games are very general and capture many interesting classes of games, including bimatrix games, two-player Bayesian games, polymatrix games, and two-player extensive form games with perfect recall as special cases, and hence are hard to solve in general. For a bilinear game, we define its best response polytopes (BRPs) and characterize its Nash equilibria as the fully-labeled pairs of the BRPs. The rank of a game (A, B) is defined as rank(A + B). In this paper, we give (i) a polynomial-time algorithm for computing Nash equilibria of rank-1 games, (ii) an FPTAS for constant-rank games, and (iii) a polynomial-time algorithm for games where rank(A) or rank(B) is constant.
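The rank notion used above, rank(A + B), is straightforward to compute numerically; a small sketch with illustrative matrices (not taken from the paper):

```python
# The "rank" of a bilinear game (A, B) is defined above as rank(A + B):
# zero-sum games have rank 0, and rank-1 games are the next simplest class.
import numpy as np

A = np.array([[3.0, 1.0], [0.0, 2.0]])

B0 = -A                                  # zero-sum: A + B0 is the zero matrix
print(np.linalg.matrix_rank(A + B0))     # 0

u = np.array([[1.0], [2.0]])
v = np.array([[3.0, 1.0]])
B1 = u @ v - A                           # chosen so that A + B1 = u v^T
print(np.linalg.matrix_rank(A + B1))     # 1
```

Rank 0 recovers von Neumann's zero-sum case, solvable by linear programming; the paper's algorithms push tractability to the low-rank classes listed in (i)–(iii).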

Jugal Garg, Albert Xin Jiang, Ruta Mehta
Extending Characterizations of Truthful Mechanisms from Subdomains to Domains

The already extensive literature on Combinatorial Auctions, Public Projects, and Scheduling demands a more systematic classification of the domains and a clear comparison of the known results. Connecting characterization results for different settings, and providing a characterization proof that uses another characterization result as a black box without having to repeat a tediously similar proof, is not only elegant and desirable, but also greatly enhances our intuition, classifies different results, and provides a unified and deeper understanding. We consider whether one can extend a characterization of a subdomain to a domain in a black-box manner. We show that this is possible for n-player stable mechanisms if the only truthful mechanisms for the subdomain are the affine maximizers. We further show that if the characterization of the subdomain involves a combination of affine maximizers and threshold mechanisms, the threshold mechanisms for the subdomain cannot be extended to truthful mechanisms for the union of the subdomain with a (very slight) affine transformation of it. We also show that for every truthful mechanism in a domain there exists a corresponding truthful mechanism for any affine transformation of the domain. Finally, we plug into our theorems, as a black box, the characterization of additive 2-player combinatorial auctions that are decisive and allocate all items (which essentially is the domain for scheduling unrelated machines), and show that the 2-player truthful mechanisms of any domain that is strictly a superdomain of it are only the affine maximizers. This gives a unique characterization proof of the decisive 2-player mechanisms for Combinatorial Public Projects, Unrestricted domains, as well as for Submodular and Subadditive Combinatorial Auctions that allocate all items.

Angelina Vidali

Working Papers

Optimal Multi-period Pricing with Service Guarantees
Working Paper

We consider the multi-period pricing problem of a service firm facing time-varying capacity levels. Customers are assumed to be fully strategic with respect to their purchasing decisions, and heterogeneous with respect to their valuations and arrival-departure periods. The firm’s objective is to set a sequence of prices that maximizes its revenue while guaranteeing service to all paying customers. Although the corresponding optimization problem is non-convex, we provide a polynomial-time algorithm that computes the optimal sequence of prices. We show that due to the presence of strategic customers, available service capacity at one time period may bind the price offered at another time period. Consequently, when customers are more patient for service, the firm offers higher prices. This leads to underutilization of capacity, lower revenues, and reduced customer welfare. Variants of the pricing algorithm we propose can be used in more general settings, such as a robust optimization formulation of the pricing problem.

Christian Borgs, Ozan Candogan, Jennifer Chayes, Ilan Lobel, Hamid Nazerzadeh
Pricing and Efficiency in the Market for IP Addresses
Working Paper

We consider market rules for the transfer of IP addresses, numeric identifiers required by all computers connected to the Internet. Excessive fragmentation of IP address blocks causes growth in the Internet’s routing table, which is socially costly, so an IP address market should discourage subdividing IP address blocks more than necessary. Yet IP address transfer rules also need to facilitate purchase by the networks that need the addresses most, from the networks who value them least. We propose a market rule that avoids excessive fragmentation while almost achieving social efficiency, and we argue that implementation of this rule is feasible despite the limited powers of central authorities. We also offer a framework for the price trajectory of IPv4 addresses. In a world without uncertainty, the unit price of IPv4 is constant before the first time when all blocks of IPv4 addresses are in use and decreasing after that time. With uncertainty, the price before that time is a martingale, and the price trajectory afterwards is a supermartingale.

Benjamin Edelman, Michael Schwarz
A Note on the Incompatibility of Strategy-Proofness and Pareto-Optimality in Quasi-Linear Settings with Public Budgets
Working Paper

We study the problem of allocating multiple identical items that may be complements to budget-constrained bidders with private values. We show that there does not exist a deterministic mechanism that is individually rational, strategy-proof, Pareto-efficient, and that does not make positive transfers. This is true even if there are only two players, two items, and the budgets are common knowledge. The same impossibility naturally extends to more abstract social choice settings with an arbitrary outcome set, assuming players with quasi-linear utilities and public budget limits. Thus, the case of infinite budgets (in which the VCG mechanism satisfies all these properties) is really the exception.

Ron Lavi, Marina May
Backmatter
Metadata
Title
Internet and Network Economics
Edited by
Ning Chen
Edith Elkind
Elias Koutsoupias
Copyright year
2011
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-25510-6
Print ISBN
978-3-642-25509-0
DOI
https://doi.org/10.1007/978-3-642-25510-6