
2008 | Book

Internet and Network Economics

4th International Workshop, WINE 2008, Shanghai, China, December 17-20, 2008. Proceedings

Edited by: Christos Papadimitriou, Shuzhong Zhang

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 4th International Workshop on Internet and Network Economics, WINE 2008, held in Shanghai, China, in December 2008. The 68 revised full papers presented together with 10 invited talks were carefully reviewed and selected from 126 submissions. The papers are organized in topical sections on market equilibrium, congestion games, information markets, Nash equilibrium, network games, solution concepts, algorithms and optimization, mechanism design, equilibrium, online advertisement, sponsored search auctions, and voting problems.

Table of Contents

Frontmatter

Invited Talks 1: Special Session

Mechanism Design Theory: How to Implement Social Goals

The theory of mechanism design can be thought of as the engineering side of economic theory. One begins by identifying a social or economic goal. The theory then addresses the question of whether or not an appropriate institution or procedure (that is, a mechanism) could be designed to attain that goal.

Eric Maskin
Thirty Years of Chinese Economic Reform: Reasons for Its Success and Future Directions

What are the principal reasons for the highly successful Chinese economic reform that began in 1978? One may say that they are the strong Chinese economic fundamentals: surplus labor, abundant savings, a huge domestic market, and so on. But the strong fundamentals have always been there, at least since the 1950s. Why did the Chinese economy not take off earlier?

The introduction of the market system, first in the rural area, and then in the urban area, must be regarded as the primary reason for the success of the economic reform. But the former Soviet Union and subsequently Russia also introduced the market system, with disastrous economic results for the entire first decade. Why was China able to do it while others failed?

Three important reasons can be identified. First, Chinese economic reform is characterized by openness: China welcomed international trade with, and direct investment from, all countries and regions, including Hong Kong, Taiwan, and the United States, and with trade and direct investment came technology, business models, and ideas that were new to China. Second, the Chinese economic reformers are characterized by their pragmatism: they are willing to try almost anything, whatever works, but they will just as readily abandon whatever proves not to work. Third, Chinese economic reform has been implemented in such a way that it is mostly Pareto-improving; that is, almost everyone is made better off by the economic reform and no one is made worse off, which maximizes support, minimizes opposition, and preserves social harmony.

What are some future directions of reform? They should consist of various ways to perfect the market mechanism in China. First, China has reached a stage of development at which it needs to make and keep the markets truly competitive, through anti-monopoly laws and other means, and this applies to both the goods market and the factor (including capital) markets. When markets are not competitive, they may produce outcomes that are worse than those under central planning. Second, the markets can also be made more competitive, and hence more efficient, if information asymmetry can be reduced or eliminated. Thus, the Chinese Government can set standards for goods and services and assure quality through government-mandated and -operated testing agencies. Third, markets frequently fail when there is moral hazard. The Chinese Government can reduce the incidence of moral hazard by limiting leverage and requiring bonding. Fourth, the Chinese Government can also make the market system more complete by establishing and maintaining socially desirable markets that do not arise naturally without government intervention, for example, a long-term market for bonds backed by qualified long-term owner-occupied residential mortgages. Finally, the market system is not equipped to redistribute, but redistribution is often necessary on grounds of fairness and social harmony. The Chinese Government should design an equitable tax system as well as undertake public investments in education, health care, environmental protection, and mass transportation so that the benefits of the continuing economic reform can be shared by the majority of the people.

Lawrence J. Lau

Invited Talks 2: Plenary Session

Average Distance, Diameter, and Clustering in Social Networks with Homophily

I examine a random network model where nodes are categorized by type and linking probabilities can differ across types. I show that as homophily increases (so that the probability to link to other nodes of the same type increases and the probability of linking to nodes of some other types decreases) the average distance and diameter of the network are unchanged, while the average clustering in the network increases.
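As an illustrative sketch only (the sampler below is a generic multi-type random network, not necessarily the paper's exact model or parameters), homophily can be dialed up by making the link probability depend on whether two nodes share a type:

```python
import random

def sample_typed_network(types, p_same, p_diff):
    """Multi-type random network: the probability of a link depends only on
    whether the two endpoints share a type. Raising p_same while lowering
    p_diff increases homophily."""
    n = len(types)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            p = p_same if types[i] == types[j] else p_diff
            if random.random() < p:
                edges.add((i, j))
    return edges

# Two types, strong homophily: same-type links likely, cross-type links rare.
net = sample_typed_network([0, 0, 0, 1, 1, 1], p_same=0.8, p_diff=0.1)
```

Clustering and average distance of such samples can then be measured empirically to mirror the comparative statics the abstract describes.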

Matthew O. Jackson
Assignment Exchanges

We analyze “assignment exchanges”: auction and exchange mechanisms that are tight simplifications of direct Walrasian exchange mechanisms. These simplifications are distinguished by their use of assignment messages, which parameterize certain substitutable preferences. The “basic” assignment exchanges respect integer constraints, generalizing the Shapley-Shubik mechanism for indivisible goods. Connections are reported between the assignment exchanges, ascending multi-product clock auctions, uniform price auctions for a single product, and Vickrey auctions. The exchange mechanisms accommodate bids by buyers, sellers, and swappers and can support trading for certain kinds of complementary goods.

Paul Milgrom
Search Engine Ad Auctions

Auctions for search engine advertising have been one of the most successful examples of economic mechanism design, at least in the private sector. This talk will review some of the history, theory, and practical issues surrounding these auctions.

Hal R. Varian
Computational Economy Equilibrium and Application

The rise of the Internet and the emerging e-commerce applications have created new economic markets of unprecedented scale. They have introduced many cross-disciplinary challenges in mathematics, computer science, and engineering, one of which is the algorithmic and complexity issues of market equilibrium theory. In this talk, we examine the mathematical connections as well as the computational equivalences between equilibrium and optimization, between game equilibrium and market equilibrium, between existence and NP-hardness, and between exact computation and approximation. Being able to compute equilibria numerically also significantly expands the applicability of game/economic equilibrium theory to a wide range of decision problems. We present applications of computational equilibrium ranging from developing communication network protocols in spectrum management and resource allocation to adopting free trade policies in international trade between nations.

Yinyu Ye

Invited Talks 3: Tutorial Session

Four Graph Partitioning Algorithms

We will discuss four partitioning algorithms using eigenvectors, random walks, PageRank and their variations. In particular, we will examine local partitioning algorithms, which find a cut near a specified starting vertex, with a running time that depends on the size of the small side of the cut, rather than on the size of the input graph (which can be prohibitively large). Three of the four partitioning algorithms are local algorithms and are particularly appropriate for applications arising in connection with Webgraphs and Internet economics.
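As a minimal sketch of the PageRank ingredient behind such methods (a plain power iteration on an adjacency-list graph, assuming simple data structures rather than the tutorial's specific local algorithms), a personalized PageRank vector concentrated near a starting vertex can be computed as:

```python
def personalized_pagerank(adj, seed, alpha=0.15, iters=100):
    """Personalized PageRank by power iteration: a random walk that restarts
    at `seed` with probability alpha at every step. adj[u] is the list of
    neighbors of node u; dangling nodes send their mass back to the seed."""
    n = len(adj)
    pr = [0.0] * n
    pr[seed] = 1.0
    for _ in range(iters):
        nxt = [0.0] * n
        nxt[seed] = alpha  # restart mass (total probability is always 1)
        for u, nbrs in enumerate(adj):
            if nbrs:
                share = (1 - alpha) * pr[u] / len(nbrs)
                for v in nbrs:
                    nxt[v] += share
            else:
                nxt[seed] += (1 - alpha) * pr[u]
        pr = nxt
    return pr
```

Local partitioning methods then sweep over the vertices in decreasing order of (degree-normalized) PageRank to find a low-conductance cut near the seed.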

Fan Chung Graham
Dynamic Spectrum Management: Optimization and Game Theoretic Formulations

Achieving efficient spectrum usage is a major challenge in the management of a complex communication system. When multiple users with conflicting objectives, some of whom may be hostile, share a common spectrum, careful resource allocation is essential for effective utilization of the available frequencies. Conventionally, spectrum sharing is achieved via orthogonal transmission schemes whereby the available frequency band is divided into multiple tones (or bands) that are pre-assigned to all the users on a non-overlapping basis. However, such a “static orthogonal spectrum sharing” approach can lead to low bandwidth utilization. In fact, various recent spectrum occupancy studies have demonstrated that a typical geographical region has wide swathes of frequencies (up to 2/3 of the allocated radio spectrum) that are not used at any given time. While the utilization of spectrum varies with time, a significant amount of spectrum is available for opportunistic wireless applications among secondary users.

Spectrum-sensing cognitive radio technology allows devices to dynamically and automatically seek out and use the optimum frequencies and bandwidth. To take advantage of the unused spectrum capacity, the users dynamically adapt to the spectral environment and change transmission or reception parameters on the fly. This allows for more efficient wireless communication without causing harmful interference with legacy systems or other devices using the same frequency bands. In these systems all users are allowed to use all the tones simultaneously. In comparison with the static spectrum sharing policies, this setup offers significantly greater freedom in utilizing the spectrum. A major challenge in the development of opportunistic spectrum sharing technology is to devise efficient algorithms for the distributed management of frequency slots and transmit power.

This tutorial will describe various optimization and game theoretic formulations of the dynamic spectrum management and present some recent results on its complexity, duality and approximation.

Zhi-Quan (Tom) Luo
Some Recent Results in Algorithmic Game Theory

There are three major trends in the field of Algorithmic Game Theory: computational mechanism design, the price of anarchy, and the computation of equilibria; this talk describes one recent result in each. We show computational complexity lower bounds on truthful and approximately efficient mechanisms; we revisit the Roughgarden-Tardos result on selfish routing when routing decisions are made by the nodes, not the flows; and we show that Nash equilibria can be approximated well in several broad, unexpected, and useful classes of games. (Joint work with Costis Daskalakis, Michael Schapira, Yaron Singer, and Greg Valiant).

Christos Papadimitriou
The Elements of General Equilibrium Theory

The lecture will be an introduction to the model of economic equilibrium. The basic concepts (preferences, initial endowments, and market-clearing prices) will be discussed, both in general and by means of examples. I will indicate how fixed point theorems are used to demonstrate the existence of equilibrium prices and sketch an algorithm for Brouwer's theorem. If time permits, there will be some remarks on equilibrium models with production.

Herbert E. Scarf

Session A.1: Market Equilibrium

A Fast and Simple Algorithm for Computing Market Equilibria

We give a new mathematical formulation of market equilibria using an indirect utility function: the function of prices and income that gives the maximum utility achievable. The formulation is a convex program and can be solved when the indirect utility function is convex in prices. We illustrate that many economies satisfy this condition and can be efficiently solved, including homogeneous utilities of degree α ∈ [0,1] in Fisher economies (this includes linear, Leontief, and Cobb-Douglas utilities) and resource allocation utilities like multi-commodity flows.

Further, we give a natural and decentralized price-adjusting algorithm in these economies. Our algorithm mimics the natural tâtonnement dynamics for markets as suggested by Walras: it iteratively adjusts a good's price upward when the demand for that good under current prices exceeds its supply, and downward when its supply exceeds its demand. The algorithm computes an approximate equilibrium in a number of iterations that is independent of the number of traders and is almost linear in the number of goods. Interestingly, our algorithm applies to certain classes of utility functions that are not weak gross substitutes.
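The tâtonnement idea itself is simple to sketch. The following is a generic multiplicative price update on a toy single-buyer Cobb-Douglas Fisher market (an illustration of the Walrasian dynamics, not the paper's specific algorithm, step rule, or convergence analysis):

```python
def tatonnement(demand, supply, prices, step=0.05, tol=1e-6, max_iter=100000):
    """Generic tatonnement: nudge each good's price up when its excess
    demand is positive, down when it is negative, until markets roughly clear."""
    for _ in range(max_iter):
        d = demand(prices)
        excess = [di - si for di, si in zip(d, supply)]
        if max(abs(e) for e in excess) < tol:
            break
        # multiplicative update: price moves in the direction of excess demand
        prices = [p * (1 + step * e) for p, e in zip(prices, excess)]
    return prices

# Toy market: one buyer with budget 1 and Cobb-Douglas weights a_j, so the
# demand for good j at prices p is a_j / p_j; the supply of each good is 1.
# Equilibrium prices are then p_j = a_j.
weights = [0.5, 0.3, 0.2]
demand = lambda p: [a / pj for a, pj in zip(weights, p)]
prices = tatonnement(demand, [1.0, 1.0, 1.0], [1.0, 1.0, 1.0])
```

At the fixed point, demand equals supply for every good, so the returned prices approximate the market-clearing prices.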

Lisa Fleischer, Rahul Garg, Sanjiv Kapoor, Rohit Khandekar, Amin Saberi
A FPTAS for Computing a Symmetric Leontief Competitive Economy Equilibrium

We consider a linear complementarity problem (LCP) arising from the Arrow-Debreu-Leontief competitive economy equilibrium where the LCP coefficient matrix is symmetric. We prove that the decision problem, to decide whether or not there exists a complementary solution, is NP-complete. Under certain conditions, an LCP solution is guaranteed to exist and we present a fully polynomial-time approximation scheme (FPTAS) for computing such a solution, although the LCP solution set can be non-convex or non-connected. Our method is based on solving a quadratic social utility optimization problem (QP) and showing that a certain KKT point of the QP problem is an LCP solution. We further show that such a KKT point can be approximated to accuracy ε ∈ (0,1) in running time $\mathcal{O}\big(\frac{1}{\epsilon}\log\frac{1}{\epsilon}\log\log\frac{1}{\epsilon}\big)$ times a polynomial in the problem dimensions. We also report preliminary computational results which show that the method is highly effective.

Zhisu Zhu, Chuangyin Dang, Yinyu Ye
Online and Offline Selling in Limit Order Markets

Completely automated electronic securities exchanges and algorithms for trading in these exchanges have become very important for modern finance. In [4], Kakade et al. introduced the limit order market model, which is a prevalent paradigm in electronic markets. In this paper, we consider both online and offline algorithms for maximizing revenue when selling in limit order markets. We first prove that the standard reservation price algorithm has an optimal competitive ratio for this problem. This ratio is not constant, and so we consider computing solutions offline. We show that the offline optimization problem is NP-hard, even for very restricted instances. We complement the hardness result by presenting an approximation scheme that runs in polynomial time for a wide class of instances.
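For intuition, the classic randomized reservation-price idea can be sketched in a much simpler setting than the paper's limit-order-book model: a seller watches a price sequence known to stay in [p_min, p_max], draws a random power-of-two threshold, and sells at the first price meeting it (this is the textbook one-way-trading heuristic, offered only as an illustrative assumption about what "reservation price algorithm" means here):

```python
import math
import random

def reservation_price_sell(prices, p_min, p_max):
    """Draw a threshold p_min * 2^j with j uniform over the
    O(log(p_max / p_min)) levels, then sell at the first observed
    price reaching it (forced sale at the end otherwise)."""
    levels = max(1, math.ceil(math.log2(p_max / p_min)))
    j = random.randrange(levels)
    threshold = p_min * (2 ** j)
    for t, price in enumerate(prices):
        if price >= threshold:
            return threshold, t, price  # sold at the first qualifying price
    return threshold, len(prices) - 1, prices[-1]  # forced sale at the end
```

Randomizing over exponentially spaced thresholds is what yields the logarithmic (non-constant) competitive ratio in such settings.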

Kevin L. Chang, Aaron Johnson
Predictive Pricing and Revenue Sharing

Predictive pricing (e.g., Google’s “Smart Pricing” and Yahoo’s “Quality Based Pricing”) and revenue sharing are two important tools that online advertising networks can use in order to attract content publishers and advertisers. We develop a simple model of the pay-per-click advertising market to study the market effects of these tools. We then present an algorithm, PricingPolicy, for computing an advertising network’s best response; i.e., given the predictive pricing and revenue sharing policies used by its competitors, what policy should an advertising network use in response? Using PricingPolicy, we gain insight into the structure of optimal predictive pricing and revenue sharing policies.

Bobji Mungamuru, Hector Garcia-Molina
Dual Payoffs, Core and a Collaboration Mechanism Based on Capacity Exchange Prices in Multicommodity Flow Games

Given a network in which the edge capacities and the commodities are owned by the players, a cooperative multicommodity flow (MCF) game (N, v) can be defined such that v(S), the value of a sub-coalition S, is the maximum profit achievable within S by shipping its commodities through the sub-network owned by its members. In this paper, we study MCF games under a partially decentralized setting where the players make their own routing and resource exchange decisions given a set of capacity prices determined by a central authority.

Luyi Gui, Özlem Ergun

Session B.1: Congestion Games

Graphical Congestion Games

We consider congestion games with linear latency functions in which each player is aware only of a subset of all the other players. This is modeled by means of a social knowledge graph G in which nodes represent players and there is an edge from i to j if i knows j. Under the assumption that the payoff of each player is affected only by the strategies of the adjacent ones, we first give a complete characterization of the games possessing pure Nash equilibria. We then investigate the impact of the limited knowledge of the players on the performance of the game. More precisely, given a bound on the maximum degree of G, for the convergent cases we provide tight lower and upper bounds on the price of stability and asymptotically tight bounds on the price of anarchy. All the results are then extended to load balancing games.

Vittorio Bilò, Angelo Fanelli, Michele Flammini, Luca Moscardelli
How Hard Is It to Find Extreme Nash Equilibria in Network Congestion Games?

We study the complexity of finding extreme pure Nash equilibria in symmetric (unweighted) network congestion games. In our context, best and worst equilibria are those with minimum and maximum makespan, respectively. On series-parallel graphs a worst Nash equilibrium can be found by a greedy approach, while finding a best equilibrium is NP-hard. For a fixed number of users we give a pseudo-polynomial algorithm to find the best equilibrium in series-parallel networks. For general network topologies, finding a worst equilibrium is also NP-hard.

Elisabeth Gassner, Johannes Hatzl, Sven O. Krumke, Heike Sperber, Gerhard J. Woeginger
On the Road to $\mathcal{PLS}$-Completeness: 8 Agents in a Singleton Congestion Game

In this paper, we investigate the complexity of computing locally optimal solutions for Singleton Congestion Games (SCG) in the framework of $\mathcal{PLS}$, as defined in Johnson et al. [25]. Here, in an instance, weighted agents choose links from a set of identical links. The cost of an agent is the load (the sum of the weights of the agents) on the link it chooses. The agents are selfish and try to minimize their individual cost. Agents may form arbitrary, non-fixed coalitions. The cost of a coalition is defined to be the maximum cost of its members. The potential function is defined as the lexicographical order of the agents’ costs. In each selfish step of a coalition, the potential function decreases. Thus, a local minimum is a Nash equilibrium among coalitions of size at most k: an assignment where no coalition of size at most k has an incentive to unilaterally decrease its cost by switching to different links. The neighborhood of a feasible assignment (every agent chooses a link) consists of all assignments where the cost of some arbitrary non-fixed coalition of at most k reallocating agents decreases. We call this problem SCG-(k) and show that SCG-(k) is $\mathcal{PLS}$-complete for k ≥ 8. On the other hand, for k = 1, it is well known that the solution computed by Graham’s LPT algorithm [14,16,22] is locally optimal for SCG-(k).

We show our result by a tight reduction from the MaxConstraintAssignment problem (p,q,r)-MCA, which is an extension of Generalized Satisfiability to higher-valued variables. Here, p is the maximum number of variables occurring in a constraint, q is the maximum number of appearances of a variable, and r is the valuedness of the variables.

To the best of our knowledge, SCG-(k) is the first problem known to be solvable in polynomial time for a small neighborhood and $\mathcal{PLS}$-complete for a larger, but still constant, neighborhood.
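Graham's LPT rule, which settles the k = 1 case, is easy to sketch for the singleton setting above (identical links, weighted agents; a minimal version for illustration, not the paper's construction):

```python
import heapq

def lpt_assignment(weights, m):
    """Graham's LPT: consider agents in non-increasing order of weight and
    place each on the currently least-loaded of m identical links."""
    loads = [(0.0, link) for link in range(m)]  # (current load, link id)
    heapq.heapify(loads)
    assignment = {link: [] for link in range(m)}
    for w in sorted(weights, reverse=True):
        load, link = heapq.heappop(loads)       # least-loaded link
        assignment[link].append(w)
        heapq.heappush(loads, (load + w, link))
    return assignment

# Seven weighted agents on three identical links.
schedule = lpt_assignment([7, 6, 5, 4, 3, 2, 1], 3)
```

The resulting assignment is locally optimal for single-agent (k = 1) deviations: no agent can lower the load it experiences by moving to another link.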

Dominic Dumrauf, Burkhard Monien
Conflicting Congestion Effects in Resource Allocation Games

We consider resource allocation games with heterogeneous users and identical resources. Most of the previous work considered cost structures with either negative or positive congestion effects. We study a cost structure that encompasses both the resource’s load and the job’s share in the resource’s activation cost.

We consider the proportional sharing rule, where the resource’s activation cost is shared among its users proportionally to their lengths. We also challenge the assumption regarding the existence of a fixed set of resources, and consider settings with an unlimited supply of resources.

We provide results with respect to equilibrium existence, computation, convergence and quality. We show that if the resource’s activation cost is shared equally among its users, a pure Nash equilibrium (NE) might not exist. In contrast, under the proportional sharing rule, a pure NE always exists, and can be computed in polynomial time. Yet, starting at an arbitrary profile of actions, best-response dynamics might not converge to a NE. Finally, we prove that the price of anarchy is unbounded and the price of stability is between 18/17 and 5/4.
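The two sharing rules compared above can be sketched on a single resource, under the stated cost structure in which a job's cost combines the resource's load with its share of the activation cost (the exact combination used here is an illustrative assumption):

```python
def job_costs(lengths, activation_cost, rule="proportional"):
    """Per-job cost on one resource: the total load plus the job's share of
    the activation cost, split either equally among the users or in
    proportion to each job's length."""
    load = sum(lengths)
    costs = []
    for w in lengths:
        if rule == "proportional":
            share = activation_cost * w / load
        else:  # equal sharing
            share = activation_cost / len(lengths)
        costs.append(load + share)
    return costs

# Two jobs of lengths 1 and 3 sharing a resource with activation cost 8.
proportional = job_costs([1, 3], 8, "proportional")  # shares 2 and 6
equal = job_costs([1, 3], 8, "equal")                # shares 4 and 4
```

Under equal sharing the short job subsidizes the long one, which is exactly the kind of asymmetry that can destroy pure equilibria; proportional sharing removes it.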

Michal Feldman, Tami Tamir
The Price of Malice in Linear Congestion Games

We study the price of malice in linear congestion games using the technique of no-regret analysis in the presence of Byzantine players. Our assumptions about the behavior of both rational and malicious players are strictly weaker than those previously used to study the price of malice. Rather than assuming that rational players route their flow according to a Nash equilibrium, we assume only that they play so as to have no regret. Rather than assuming that malicious players myopically seek to maximize the social cost of the game, we study Byzantine players about whom we make no assumptions, who may be seeking to optimize any utility function, and who may engage in an arbitrary degree of counter-speculation. Because our assumptions are strictly weaker than in previous work, the bounds we prove on two measures of the price of malice hold also for the quantities studied by Babaioff et al. [2] and Moscibroda et al. [15]. We prove tight bounds both for the special case of parallel link routing games and for general congestion games.

Aaron Roth

Session C.1: Information Markets

Parimutuel Betting on Permutations

We focus on a permutation betting market under the parimutuel call auction model, where traders bet on the final rankings of n candidates. We present a Proportional Betting mechanism for this market. Our mechanism allows traders to bet on any subset of the n² ‘candidate-rank’ pairs, and rewards them proportionally to the number of pairs that appear in the final outcome. We show that the market organizer’s decision problem for this mechanism can be formulated as a convex program of polynomial size. Further, the formulation yields a set of n² unique marginal prices that are sufficient to price the bets in this mechanism, and are computable in polynomial time. These marginal prices reflect the traders’ beliefs about the marginal distributions over outcomes. More importantly, we propose techniques to compute the joint distribution over the n! permutations from these marginal distributions. We show that using a maximum entropy criterion, we can obtain a concise parametric form (with only n² parameters) for the joint distribution, which is defined over an exponentially large state space. We then present an approximation algorithm for computing the parameters of this distribution. In fact, our algorithm addresses the generic problem of finding the maximum entropy distribution over permutations that has a given mean, and is of independent interest.
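The payoff side of Proportional Betting admits a one-line sketch: a bet on a set of (candidate, rank) pairs is rewarded in proportion to how many of its pairs the realized ranking contains. (The per-pair normalization below is an illustrative assumption; pricing the bets is the convex-programming part of the mechanism and is not shown.)

```python
def proportional_reward(bet_pairs, final_rank, stake=1.0):
    """Reward proportional to the number of bet (candidate, rank) pairs
    realized by the final outcome; `final_rank` maps candidate -> rank."""
    hits = sum(1 for cand, rank in bet_pairs if final_rank[cand] == rank)
    return stake * hits / len(bet_pairs)

# Final ranking: a first, b second, c third.
outcome = {"a": 1, "b": 2, "c": 3}
payout = proportional_reward([("a", 1), ("b", 3)], outcome)  # one of two pairs hit
```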

Shipra Agrawal, Zizhuo Wang, Yinyu Ye
Strategies in Dynamic Pari-Mutuel Markets

We present a strategic model for pari-mutuel markets in which traders use a cumulative utility function. Under this model, we derive guidelines for the traders on how much to buy or sell. Those guidelines can be implemented with three action combinations, called strategies. We prove that those strategies are payoff-equivalent for both the involved trader and the others in the current transaction. However, in the long run, their payoffs can be quite different.

We show that the buy-only strategy (BOS) achieves the highest market capitalization for the current transaction. In addition, simulation results show that BOS always yields the fastest growth of market capitalization even when multiple stages are taken into consideration. Simulation results also show that BOS is a better revelation of the traders’ personal beliefs, though it exhibits a higher risk in traders’ payoffs.

Tian-Ming Bu, Xiaotie Deng, Qianya Lin, Qi Qi
Truthful Surveys

We consider the problem of truthfully sampling opinions of a population for statistical analysis purposes, such as estimating the population distribution of opinions. To obtain accurate results, the surveyor must incentivize individuals to report unbiased opinions. We present a rewarding scheme to elicit opinions that are representative of the population. In contrast with the related literature, we do not assume a specific information structure. In particular, our method does not rely on a common prior assumption.

Nicolas Lambert, Yoav Shoham
Correlated Equilibrium of Bertrand Competition

This paper explores the relation between equilibrium coarsenings and equilibrium refinements via Bertrand competition example and similar situations, it shows that the typical equilibrium coarsening -— a unique correlated equilibrium -— is equivalent to the unique Nash equilibrium itself, is also equivalent to the equilibrium refinement, for the standard n-firms Bertrand competition model with linear demand and symmetric, linear costs in the most special and simplest case, and compares some wonderful and remarkable differences of the existence, uniqueness, stability, connectivity, and strategic property of Nash equilibrium and correlated equilibrium between Cournot and Bertrand model. We also propose some open questions.

John Wu
Diffusion of Innovations on Random Networks: Understanding the Chasm

We analyze diffusion models on sparse random networks with neighborhood effects. We show how large cascades can be triggered by small initial shocks and compute critical parameters: the contagion threshold for a random network and the phase transition in the size of the cascade.
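One common diffusion model with neighborhood effects is the Watts-style fractional threshold cascade, which can be simulated directly (the threshold rule below is an illustrative choice, not necessarily the paper's exact model):

```python
def cascade(adj, seeds, theta):
    """Threshold diffusion: a node adopts once the fraction of its neighbors
    that have adopted reaches theta; iterate until no node changes."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for u, nbrs in enumerate(adj):
            if u not in active and nbrs:
                if sum(v in active for v in nbrs) / len(nbrs) >= theta:
                    active.add(u)
                    changed = True
    return active

# A small seed can take over a path graph when the threshold is low enough.
chain = [[1], [0, 2], [1, 3], [2]]
final = cascade(chain, {0}, theta=0.5)
```

Sweeping theta against the network's degree distribution is how contagion thresholds and cascade-size phase transitions are located empirically.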

Marc Lelarge

Session A.2: Nash Equilibrium I

An Efficient PTAS for Two-Strategy Anonymous Games

We present a novel polynomial time approximation scheme for two-strategy anonymous games, in which the players’ utility functions, although potentially different, do not differentiate among the identities of the other players. Our algorithm computes an ε-approximate Nash equilibrium of an n-player 2-strategy anonymous game in time $\text{poly}(n) \cdot (1/\epsilon)^{O(1/\epsilon^2)}$, which significantly improves upon the running time $n^{O(1/\epsilon^2)}$ required by the algorithm of Daskalakis & Papadimitriou, 2007. The improved running time is based on a new structural understanding of approximate Nash equilibria: we show that, for any ε, there exists an ε-approximate Nash equilibrium in which either only O(1/ε³) players randomize, or all players who randomize use the same mixed strategy. To show this result we employ tools from the literature on Stein’s method.

Constantinos Daskalakis
Equilibria of Graphical Games with Symmetries
(Extended Abstract)

We study graphical games where the payoff function of each player satisfies one of four types of symmetry in the actions of his neighbors. We establish that deciding the existence of a pure Nash equilibrium is NP-hard in general for all four types. Using a characterization of games with pure equilibria in terms of even cycles in the neighborhood graph, as well as a connection to a generalized satisfiability problem, we identify tractable subclasses of the games satisfying the most restrictive type of symmetry. Hardness for a different subclass is obtained via a satisfiability problem that remains NP-hard in the presence of a matching, a result that may be of independent interest. Finally, games with symmetries of two of the four types are shown to possess a symmetric mixed equilibrium which can be computed in polynomial time. We thus obtain a class of games where the pure equilibrium problem is computationally harder than the mixed equilibrium problem, unless P=NP.

Felix Brandt, Felix Fischer, Markus Holzer
Equilibrium Points in Fear of Correlated Threats

The present work considers the following computational problem: given any finite game in normal form G and the corresponding infinitely repeated game G^∞, determine in polynomial time (with respect to the representation of G) a profile of strategies for the players in G^∞ that is an equilibrium point with respect to the limit-of-means payoff. The problem has been solved for two players [10], based mainly on the implementability of the threats for this case. Nevertheless, [4] demonstrated that the traditional notion of threats is a computationally hard problem for games with at least 3 players (see also [8]). Our results are the following: (i) We propose an alternative notion of correlated threats, which is polynomial-time computable (and therefore credible). Our correlated threats are also more severe than the traditional notion of threats, but not overwhelming for any individual player. (ii) When for the underlying game G there is a correlated strategy with payoff vector strictly larger than the correlated threats vector, we efficiently compute a polynomial-size (with respect to the description of G) equilibrium point for G^∞, for any constant number of players. (iii) Otherwise, we demonstrate the construction of an equilibrium point for an arbitrary number of players and up to 2 concurrently positive payoff coordinates in any payoff vector of G. This completely resolves the cases of 3 players, and provides a direction towards handling the cases of more than 3 players. We note that our construction is not a Nash equilibrium point, because the correlated threats we use are implemented not only via full synchrony (as in [10]) but also via coordination of the other players’ actions. But this seems to be a fair trade-off between efficiency of the construction and players’ coordination, in particular because it only affects the punishments (which are anticipated never to be used).

Spyros C. Kontogiannis, Paul G. Spirakis
Performance Evaluation of a Descent Algorithm for Bi-matrix Games

In this paper we present an implementation and performance evaluation of a descent algorithm that was proposed in [1] for the computation of approximate Nash equilibria of non-cooperative bi-matrix games. This algorithm, which achieves the best polynomially computable ε-approximate equilibria to date, is applied here to several problem instances designed so as to avoid the existence of easy solutions. Its performance is analyzed in terms of quality of approximation and speed of convergence. The results demonstrate significantly better performance than the theoretical worst-case bounds, both for the quality of approximation and for the speed of convergence. This motivates further investigation into the intrinsic characteristics of descent algorithms applied to bi-matrix games. We discuss these issues and provide some insights about possible variations and extensions of the algorithmic concept that could lead to further understanding of the complexity of computing equilibria. We also prove a new, significantly better bound on the number of loops required for convergence of the descent algorithm.

Haralampos Tsaknakis, Paul G. Spirakis, Dimitrios Kanoulas
Worst-Case Nash Equilibria in Restricted Routing

We study a restricted related model of the network routing problem. There are m parallel links, with possibly different speeds, between a source and a sink, and there are n users; each user i has a traffic of weight w_i to assign to one of the links from a subset of all the links, called his/her allowable set. We analyze the Price of Anarchy (PoA) of the system, which is the ratio of the maximum delay in the worst-case Nash equilibrium to that in an optimal solution. In order to better understand this model, we introduce a parameter λ for the system, and define an instance to be λ-good if for every user there exists a link with speed at least $\frac{s_{max}}{\lambda}$ in his/her allowable set. In this paper, we prove that for λ-good instances, the Price of Anarchy is $\Theta\big(\min\{\frac{\log \lambda m}{\log \log \lambda m}, m\}\big)$. We also show an important application of our result in coordination mechanism design for the task scheduling game. We propose a new coordination mechanism, Group-Makespan, for the unrelated selfish task scheduling game. Our new mechanism ensures the existence of a pure Nash equilibrium, and its PoA is $O\big(\frac{\log^2 m}{\log \log m}\big)$. This result improves the best known bound of O(log² m) by Azar, Jain and Mirrokni [2].

Pinyan Lu, Changyuan Yu
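The Price of Anarchy defined above can be made concrete on a toy instance. The brute-force sketch below is only an illustration (not the paper's analysis, and the instance is hypothetical): it enumerates all assignments of weighted users to parallel links, keeps those that are pure Nash equilibria, and reports the ratio of the worst equilibrium makespan to the optimum.

```python
from itertools import product

def delay(assign, weights, speeds, link):
    # total weight assigned to a link, divided by its speed
    return sum(w for w, l in zip(weights, assign) if l == link) / speeds[link]

def is_pure_ne(assign, weights, speeds, allowed):
    # no user can strictly reduce her delay by unilaterally switching links
    for i, cur in enumerate(assign):
        cur_delay = delay(assign, weights, speeds, cur)
        for alt in allowed[i]:
            if alt == cur:
                continue
            # delay user i would experience after moving to link `alt`
            moved = delay(assign, weights, speeds, alt) + weights[i] / speeds[alt]
            if moved < cur_delay:
                return False
    return True

def price_of_anarchy(weights, speeds, allowed):
    m = len(speeds)
    makespan = lambda a: max(delay(a, weights, speeds, l) for l in range(m))
    assigns = [a for a in product(range(m), repeat=len(weights))
               if all(l in allowed[i] for i, l in enumerate(a))]
    opt = min(makespan(a) for a in assigns)
    worst_ne = max(makespan(a) for a in assigns
                   if is_pure_ne(a, weights, speeds, allowed))
    return worst_ne / opt

# five users of weights 3,3,2,2,2 on two identical links, no restrictions
print(price_of_anarchy([3, 3, 2, 2, 2], [1, 1], [{0, 1}] * 5))
# → 1.1666...: worst equilibrium makespan 7 versus optimum 6
```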

Session B.2: Network Games I

Stackelberg Routing in Arbitrary Networks

We investigate the impact of Stackelberg routing to reduce the price of anarchy in network routing games. In this setting, an α fraction of the entire demand is first routed centrally according to a predefined Stackelberg strategy and the remaining demand is then routed selfishly by (nonatomic) players. Although several advances have been made recently in proving that Stackelberg routing can in fact significantly reduce the price of anarchy for certain network topologies, the central question of whether this holds true in general is still open. We answer this question negatively. We prove that the price of anarchy achievable via Stackelberg routing can be unbounded even for single-commodity networks. In light of this negative result, we consider bicriteria bounds. We develop an efficiently computable Stackelberg strategy that induces a flow whose cost is at most the cost of an optimal flow with respect to demands scaled by a factor of $1 + \sqrt{1-\alpha}$. Finally, we analyze the effectiveness of an easy-to-implement Stackelberg strategy, called SCALE. We prove bounds for a general class of latency functions that includes polynomial latency functions as a special case. Our analysis is based on an approach which is simple, yet powerful enough to obtain (almost) tight bounds for SCALE in general networks.

Vincenzo Bonifaci, Tobias Harks, Guido Schäfer
Computational Aspects of a 2-Player Stackelberg Shortest Paths Tree Game

Let a communication network be modelled by a directed graph G = (V, E) of n nodes and m edges. We consider a one-round two-player network pricing game, the Stackelberg Shortest Paths Tree (StackSPT) game. This is played on G, by assuming that edges in E are partitioned into two sets: a set $E_F$ of edges with a fixed positive real weight, and a set $E_P$ of edges that should be priced by one of the two players (the leader). Given a distinguished node r ∈ V, the StackSPT game is then as follows: the leader prices the edges in $E_P$ in such a way that he will maximize his revenue, knowing that the other player (the follower) will build a shortest paths tree of G rooted at r, say S(r), by running a publicly available algorithm. Quite naturally, for each edge selected in the solution, the leader’s revenue is assumed to be equal to the loaded price of an edge, namely the product of the edge price times the number of paths from r in S(r) that use it. First, we show that the problem of maximizing the leader’s revenue is NP-hard as soon as $|E_P| = \Theta(n)$. Then, in search of an effective method for solving the problem when the size of $E_P$ is constant, we focus on the basic case in which $|E_P| = 2$, and we provide an efficient $O(n^2 \log n)$ time algorithm. Afterwards, we generalize the approach to the case $|E_P| = k$, and we show that it can be solved in polynomial time whenever k = O(1).

Davide Bilò, Luciano Gualà, Guido Proietti, Peter Widmayer
Local Two-Stage Myopic Dynamics for Network Formation Games

Network formation games capture two conflicting objectives of self-interested nodes in a network. On one hand, such a node wishes to be able to reach all other nodes in the network; on the other hand, it wishes to minimize its cost of participation. We focus on myopic dynamics in a class of such games inspired by transportation and communication models. A key property of the dynamics we study is that they are local: nodes can only deviate to form links with others in a restricted neighborhood. Despite this locality, we find that our dynamics converge to efficient or nearly efficient outcomes in a range of settings of interest.

Esteban Arcaute, Ramesh Johari, Shie Mannor
Interference Games in Wireless Networks

We present a game-theoretic approach to the study of scheduling communications in wireless networks and introduce and study a class of games that we call Interference Games. In our setting, a player can successfully transmit if it “shouts strongly enough”; that is, if her transmission power is sufficiently higher than all other (simultaneous) transmissions plus the environmental noise. This physical phenomenon is commonly known as the Signal-to-Interference-plus-Noise-Ratio (SINR).

Vincenzo Auletta, Luca Moscardelli, Paolo Penna, Giuseppe Persiano
Taxing Subnetworks

We study taxes in the well-known game theoretic traffic model due to Wardrop. Given a network and a subset of edges, on which we can impose taxes, the problem is to find taxes inducing an equilibrium flow of minimal network-wide latency cost. If all edges are taxable, then marginal cost pricing is known to induce the socially optimal flow for arbitrary multi-commodity networks. In contrast, if only a strict subset of edges is taxable, we show NP-hardness of finding optimal taxes for general networks with linear latency functions and two commodities. On the positive side, for single-commodity networks with parallel links and linear latency function, we provide a polynomial time algorithm for finding optimal taxes.

Martin Hoefer, Lars Olbrich, Alexander Skopalik
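Marginal cost pricing, mentioned in the abstract, can be seen on the classic Pigou network. The sketch below uses a hypothetical two-link example (not taken from the paper): with latencies l1(x) = x and l2(x) = 1 and one unit of demand, the untaxed equilibrium sends all flow over the first link, while the marginal-cost tax τ(x) = x·l1'(x) = x moves the equilibrium to the social optimum.

```python
def social_cost(x):
    # x units of flow on the link with latency l1(x) = x, the remaining
    # 1 - x units on the constant-latency link l2(x) = 1
    return x * x + (1 - x)

# Untaxed Wardrop equilibrium: l1(1) = 1 <= l2, so all flow takes link 1.
untaxed = social_cost(1.0)

# With the marginal-cost tax t(x) = x * l1'(x) = x, selfish users perceive
# cost x + t(x) = 2x on link 1, so the taxed equilibrium satisfies 2x = 1.
taxed = social_cost(0.5)

# The taxed equilibrium matches the optimum found by a simple grid search.
optimum = min(social_cost(i / 1000) for i in range(1001))
print(untaxed, taxed, optimum)   # → 1.0 0.75 0.75
```

The gap 1.0 versus 0.75 is exactly the 4/3 price of anarchy of linear latencies, eliminated here by taxing the single congestible edge.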

Session C.2: Solution Concepts

Anonymity-Proof Voting Rules

A (randomized, anonymous) voting rule maps any multiset of total orders (a.k.a. votes) over a fixed set of alternatives to a probability distribution over these alternatives. A voting rule f is false-name-proof if no voter ever benefits from casting more than one vote. It is anonymity-proof if it satisfies voluntary participation and it is false-name-proof. We show that the class of anonymity-proof neutral voting rules consists exactly of the rules of the following form. With some probability $k_f \in [0,1]$, the rule chooses an alternative uniformly at random. With probability $1 - k_f$, the rule first draws a pair of alternatives uniformly at random. If every vote prefers the same alternative between the two (and there is at least one vote), then the rule chooses that alternative. Otherwise, the rule flips a fair coin to decide between the two alternatives. We also show how the characterization changes if group strategy-proofness is added as a requirement.

Vincent Conitzer
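Since the characterization fully specifies the rule, it is easy to sketch. The snippet below is an illustrative reading of that characterization (votes encoded as tuples from most to least preferred are an assumption of this sketch); it computes the exact winning distribution rather than sampling it.

```python
from itertools import combinations

def anonymity_proof_rule(alternatives, votes, k):
    """Exact winning distribution of the anonymity-proof neutral rule:
    with probability k pick an alternative uniformly at random; otherwise
    draw a pair of alternatives uniformly and pick the one unanimously
    preferred (a fair coin if the votes disagree or there are no votes)."""
    m = len(alternatives)
    dist = {a: k / m for a in alternatives}          # uniform component
    pairs = list(combinations(alternatives, 2))
    for a, b in pairs:
        pr_pair = (1 - k) / len(pairs)               # this pair is drawn
        if votes and all(v.index(a) < v.index(b) for v in votes):
            dist[a] += pr_pair                       # unanimous for a
        elif votes and all(v.index(b) < v.index(a) for v in votes):
            dist[b] += pr_pair                       # unanimous for b
        else:
            dist[a] += pr_pair / 2                   # fair coin
            dist[b] += pr_pair / 2
    return dist

votes = [('a', 'b', 'c'), ('b', 'a', 'c')]  # unanimous only on a > c and b > c
dist = anonymity_proof_rule(['a', 'b', 'c'], votes, k=0.3)
print(dist)   # a and b each end up with 0.45; c keeps only its uniform 0.1
```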
Overlapping Coalition Formation

In multiagent domains, agents form coalitions to perform tasks. The usual models of cooperative game theory assume that the desired outcome is either the grand coalition or a coalition structure that consists of disjoint coalitions (i.e., a partition of the set of agents). However, in practice an agent may be involved in executing more than one task, and distributing his resources between several (not necessarily disjoint) coalitions. To tackle such scenarios, we introduce a model for cooperative games with overlapping coalitions. We then focus on concepts of stability in this setting. In particular, we define and study a notion of the core, which is a generalization of the corresponding notion in the traditional models of cooperative game theory. Under some quite general conditions, we characterize the elements of the core. As a corollary, we also show that any element of the core maximizes the social welfare. We then introduce a concept of balancedness for overlapping coalitional games, and use it to characterize coalition structures that can be extended to elements of the core. Furthermore, we generalize the notion of convexity to our setting, and show that under some natural assumptions convex games have a non-empty core. To the best of our knowledge, this is the first paper to provide a generic model for overlapping coalition formation, along with a theoretical treatment of stability in this setting.

Georgios Chalkiadakis, Edith Elkind, Evangelos Markakis, Nicholas R. Jennings
A Network-Based Asymmetric Nash Bargaining Solution

This paper presents an evolutionary bargaining model between two groups of buyers and sellers. One buyer and one seller are randomly matched to play the Nash demand game: they choose a best reply based on information about past bargains coming from other members of their group. Information arrival is modeled as a Poisson process, and the rates of these processes form a weighted communication network. Over the long run, the stochastically stable division is the asymmetric Nash bargaining solution (ANB) with weights determined by the structure of the communication network in each group. The optimal networks for a group are (quasi)-regular networks without weak links.

Edoardo Gallo
How Public Opinion Forms

No aspect of the massive participation in content creation that the web enables is more evident than in the countless number of opinions, news and product reviews that are constantly posted on the Internet. Given their importance we have analyzed their temporal evolution in a number of scenarios. We have found that while ignorance of previous views leads to a uniform sampling of the range of opinions among a community, exposure of previous opinions to potential reviewers induces a trend following process which leads to the expression of increasingly extreme views. Moreover, when the expression of an opinion is costly and previous views are known, a selection bias softens the extreme views, as people exhibit a tendency to speak out differently from previous opinions. These findings are not only robust but also suggest simple procedures to extract given types of opinions from the population at large.

Fang Wu, Bernardo A. Huberman
A Game-Theoretic Analysis of Games with a Purpose

We present a simple game-theoretic model for the ESP game, an interactive game devised to label images on the web, and characterize the equilibrium behavior of the model. We show that a simple change in the incentive structure can lead to different equilibrium structure and suggest the possibility of formal incentive design in achieving desirable system-wide outcomes, complementing existing considerations of robustness against cheating and human factors.

Shaili Jain, David C. Parkes

Session A.3: Algorithms and Optimization I

Inapproximability of Combinatorial Public Projects

We study the Combinatorial Public Project Problem (CPPP) in which n agents are assigned a subset of m resources of size k so as to maximize the social welfare. Combinatorial public projects are an abstraction of many resource-assignment problems (Internet-related network design, elections, etc.). It is known that if all agents have submodular valuations then a constant approximation is achievable in polynomial time. However, submodularity is a strong assumption that does not always hold in practice. We show that (unlike similar problems such as combinatorial auctions) even slight relaxations of the submodularity assumption result in non-constant lower bounds for approximation.

Michael Schapira, Yaron Singer
Algorithms for Optimal Price Regulations

Since summer 2007, mobile phone users in the European Union (EU) are protected by a ceiling on the roaming tariff when calling or receiving a call abroad. We analyze the effects of this price regulative policy, and compare it to alternative implementations of price regulations. The problem is a three-level mathematical program: The EU determines the price regulative policy, the telephone operator sets profit-maximizing prices, and customers choose to accept or decline the operator’s offer. The first part of this paper contains a polynomial time algorithm to solve such a three-level program. The crucial idea is to partition the polyhedron of feasible price regulative parameters into a polynomial number of smaller polyhedra such that a certain primitive decision problem can be written as an LP on each of those. Then the problem can be solved by a combination of enumeration and linear programming. In the second part, we analyze more specifically an instance of this problem, namely the price regulation problem that the EU encounters. Using customer-data from a large telephone operator, we compare different price regulative policies with respect to their social welfare. On the basis of the specific social welfare function, we observe that other price regulative policies or different ceilings can improve the total social welfare.

Alexander Grigoriev, Joyce van Loon, Marc Uetz
Improving the Efficiency of Load Balancing Games through Taxes

In load balancing games, there is a set of available servers and a set of clients; each client wishes to run her job on some server. Clients are selfish and each of them selects a server that, given an assignment of the other clients to servers, minimizes the latency she experiences with no regard to the global optimum. In order to mitigate the effect of selfishness on the efficiency, we assign taxes to the servers. In this way, we obtain a new game where each client aims to minimize the sum of the latency she experiences and the tax she pays. Our objective is to find taxes so that the worst equilibrium of the new game is as efficient as possible. We present new results concerning the impact of taxes on the efficiency of equilibria, with respect to the total latency of all clients and the maximum latency (makespan).

Ioannis Caragiannis, Christos Kaklamanis, Panagiotis Kanellopoulos
Network Formation and Routing by Strategic Agents Using Local Contracts

In the Internet, Autonomous Systems (ASes) make contracts called Service Level Agreements (SLAs) between each other to transit one another’s traffic. ASes also try to control the routing of traffic to and from their networks in order to achieve efficient use of their infrastructure and to attempt to meet some level of quality of service globally. We introduce a game theoretic model in order to gain understanding of this interplay between network formation and routing. Player strategies allow them to make contracts with one another to forward traffic, and to re-route traffic that is currently routed through them. This model extends earlier work of [3] that only considered the network formation aspect of the problem. We study the structure and quality of Nash equilibria and quantify the prices of anarchy and stability, that is, the relative quality of a centralized optimal solution versus that of the Nash equilibria.

Elliot Anshelevich, Gordon Wilfong
Network Creation Games with Disconnected Equilibria

In this paper we extend a popular non-cooperative network creation game (NCG) [11] to allow for disconnected equilibrium networks. There are n players, each is a vertex in a graph, and a strategy is a subset of players to build edges to. For each edge a player must pay a cost α, and the individual cost for a player represents a trade-off between edge costs and shortest path lengths to all other players. We extend the model to a penalized game (PCG), for which we reduce the penalty for a pair of disconnected players to a finite value β. We prove that the PCG is not a potential game, but pure Nash equilibria always exist, and pure strong equilibria exist in many cases. We provide tight conditions under which disconnected (strong) Nash equilibria can evolve. Components of these equilibria must be (strong) Nash equilibria of a smaller NCG. But in contrast to the NCG, for the vast majority of parameter values no tree is a stable component. Finally, we show that the price of anarchy is Θ(n), several orders of magnitude larger than in the NCG. Perhaps surprisingly, the price of anarchy for strong equilibria increases only to at most 4.

Ulrik Brandes, Martin Hoefer, Bobo Nick

Session B.3: Mechanism Design I

Randomized Truthful Mechanisms for Scheduling Unrelated Machines

In this paper, we consider randomized truthful mechanisms for scheduling tasks to unrelated machines, where each machine is controlled by a selfish agent. Some previous work on this topic focused on a special case, scheduling two machines, for which the best approximation ratio is 1.6737 [5] and the best lower bound is 1.5 [6]. For this case, we give a unified framework for designing universally truthful mechanisms, which includes all the known mechanisms, and also a tight analysis method for their approximation ratios. Based on this, we give an improved randomized truthful mechanism, whose approximation ratio is 1.5963. For the general case, when there are m machines, the only known technique is to obtain a $\frac{\gamma m}{2}$-approximation truthful mechanism by generalizing a γ-approximation truthful mechanism for two machines [6]. There is a barrier of 0.75m for this technique due to the lower bound of 1.5 for two machines. We break this 0.75m barrier with a new design technique, rounding a fractional solution. We propose a randomized truthful-in-expectation mechanism that achieves an approximation ratio of $\frac{m+5}{2}$ for m machines.

On the lower bound side, we focus on an interesting family of mechanisms, namely task-independent truthful mechanisms. We prove a lower bound of 11/7 for two machines and a lower bound of $\frac{m+1}{2}$ for m machines with respect to this family. They almost match our upper bounds in both cases.

Pinyan Lu, Changyuan Yu
Optimal Mechanisms for Single Machine Scheduling

We study the design of optimal mechanisms in a setting where job-agents compete for being processed by a service provider that can handle one job at a time. Each job has a processing time and incurs a waiting cost. Jobs need to be compensated for waiting. We consider two models, one where only the waiting costs of jobs are private information (1-d), and another where both waiting costs and processing times are private (2-d). An optimal mechanism minimizes the total expected expenses to compensate all jobs, while it has to be Bayes-Nash incentive compatible. We derive closed formulae for the optimal mechanism in the 1-d case and show that it is efficient for symmetric jobs. For non-symmetric jobs, we show that efficient mechanisms perform arbitrarily badly. For the 2-d case, we prove that the optimal mechanism in general does not even satisfy IIA, the ‘independence of irrelevant alternatives’ condition. We also show that the optimal mechanism is not even efficient for symmetric agents in the 2-d case.

Birgit Heydenreich, Debasis Mishra, Rudolf Müller, Marc Uetz
Welfare Undominated Groves Mechanisms

A common objective in mechanism design is to choose the outcome (for example, allocation of resources) that maximizes the sum of the agents’ valuations, without introducing incentives for agents to misreport their preferences. The class of Groves mechanisms achieves this; however, these mechanisms require the agents to make payments, thereby reducing the agents’ total welfare.

In this paper we introduce a measure for comparing two mechanisms with respect to the final welfare they generate. This measure induces a partial order on mechanisms and we study the question of finding minimal elements with respect to this partial order. In particular, we say a non-deficit Groves mechanism is

welfare undominated

if there exists no other non-deficit Groves mechanism that always has a smaller or equal sum of payments. We focus on two domains: (i) auctions with multiple identical units and unit-demand bidders, and (ii) mechanisms for public project problems. In the first domain we analytically characterize all welfare undominated Groves mechanisms that are anonymous and have linear payment functions, by showing that the family of optimal-in-expectation linear redistribution mechanisms, which were introduced in [6] and include the Bailey-Cavallo mechanism [1,2], coincides with the family of welfare undominated Groves mechanisms that are anonymous and linear in the setting we study. In the second domain we show that the classic VCG (Clarke) mechanism is welfare undominated for the class of public project problems with equal participation costs, but is not undominated for a more general class.

Krzysztof Apt, Vincent Conitzer, Mingyu Guo, Evangelos Markakis
Redistribution of VCG Payments in Assignment of Heterogeneous Objects

In this paper, we seek to design a Groves mechanism for assigning p heterogeneous objects among n competing agents (n > p) with unit demand, satisfying weak budget balance, individual rationality, and minimizing the budget imbalance. This calls for designing an appropriate rebate function. When the objects are identical, this problem has been solved by Moulin [1] and Guo and Conitzer [2]. However, it remains an open problem to design such a rebate function when the objects are heterogeneous. We propose a mechanism, HETERO, and conjecture that HETERO is individually rational and weakly budget balanced. We provide empirical evidence for our conjecture through experimental simulations.

Sujit Gujar, Narahari Yadati
Bin Packing of Selfish Items

We study a bin packing game in which any item to be packed is handled by a selfish agent. Each agent aims at minimizing his sharing cost with the other items staying in the same bin, where the social cost is the number of bins used. We first show that computing a pure Nash equilibrium can be done in polynomial time. We then prove that the price of anarchy for the game is in between 1.6416 and 1.6575, improving the previous bounds.

Guosong Yu, Guochuan Zhang
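One standard way to make such a game concrete is proportional cost sharing, which is assumed here since the abstract leaves the sharing rule implicit: an item of size s in a bin with total load Y pays s/Y, so every item prefers the fullest bin it fits into. A hedged best-response sketch (an illustration, not the paper's polynomial-time algorithm):

```python
def selfish_bin_packing(sizes, cap=1.0):
    """Best-response dynamics for bin packing under proportional cost
    sharing: an item of size s in a bin with load Y pays s / Y, so a move
    is profitable exactly when the destination bin ends up fuller."""
    bins = [[s] for s in sizes]            # start with one item per bin
    improved = True
    while improved:
        improved = False
        for b in bins:
            for item in list(b):
                load_here = sum(b)
                for other in bins:
                    load_there = sum(other)
                    if (other is not b and load_there + item <= cap
                            and load_there + item > load_here):
                        b.remove(item)     # strictly profitable deviation
                        other.append(item)
                        improved = True
                        break
    return [b for b in bins if b]          # drop emptied bins

packing = selfish_bin_packing([0.5, 0.3, 0.2, 0.4, 0.6])
print(len(packing))   # → 3 bins at equilibrium, while the optimum needs 2
```

The gap between the 3-bin equilibrium and the 2-bin optimum on this hypothetical instance is a small-scale instance of the price of anarchy the paper bounds.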

Session C.3: Network Games II

Restricted Core Stability of Flow Games

In this paper, we introduce a kind of restricted core stability for flow games, which is a generalization of the core stability of simple flow games. We first give a characterization on the restricted core, and then propose a sufficient and necessary condition on the restricted core stability for flow games associated with general networks. This condition yields that testing the restricted core stability can be done in polynomial time.

Mao-cheng Cai, Qizhi Fang
Three Selfish Spanning Tree Games

We study a problem in a network. The input is an edge-weighted graph G = (V, E) such that V contains a specific source node r. Every v ∈ V ∖ {r} is an entity which wants to be connected to r either directly or via other entities. The main question is how the entities deviate from a socially optimal network if they are not monitored by a central authority. We provide theoretical bounds on the (strong) price of anarchy of this game. In particular, three variants, each of them motivated by a practical situation, are studied.

Laurent Gourvès, Jérôme Monnot
Stochastic Submodular Maximization

We study the stochastic submodular maximization problem with respect to a cardinality constraint. Our model can capture the effect of uncertainty in different problems, such as cascade effects in social networks, capital budgeting, sensor placement, etc. We study non-adaptive and adaptive policies and give optimal constant approximation algorithms for both cases. We also bound the adaptivity gap of the problem between 1.21 and 1.59.

Arash Asadpour, Hamid Nazerzadeh, Amin Saberi
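For background, the deterministic building block behind such results is the classic greedy algorithm for submodular maximization under a cardinality constraint. The sketch below applies it to a coverage function, a standard submodular example; it is an illustration of the general technique, not the paper's stochastic policy.

```python
def greedy_max_coverage(universe_sets, k):
    """Classic greedy for max coverage (a submodular objective) under a
    cardinality constraint k: repeatedly add the set with the largest
    marginal gain. This guarantees a (1 - 1/e)-approximation."""
    chosen, covered = [], set()
    for _ in range(k):
        # the set covering the most not-yet-covered elements
        best = max(universe_sets, key=lambda s: len(set(s) - covered))
        if not set(best) - covered:
            break                          # no remaining marginal gain
        chosen.append(best)
        covered |= set(best)
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
chosen, covered = greedy_max_coverage(sets, k=2)
print(covered)   # → {1, 2, 3, 4, 5, 6}: two greedy picks cover everything
```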
On Pure and (Approximate) Strong Equilibria of Facility Location Games

We study social cost losses in Facility Location games, where n selfish agents install facilities over a network and connect to them, so as to forward their local demand (expressed by a non-negative weight per agent). Agents using the same facility share fairly its installation cost, but every agent pays individually a (weighted) connection cost to the chosen location. We study the Price of Stability (PoS) of pure Nash equilibria and the Price of Anarchy of strong equilibria (SPoA), which generalize pure equilibria by being resilient to coalitional deviations. For unweighted agents on metric networks we prove upper and lower bounds on PoS, while an O(ln n) upper bound implied by previous work is tight for non-metric networks. We also prove a constant upper bound for the SPoA of metric networks when strong equilibria exist. For the weighted game on general networks we prove existence of e-approximate (e = 2.718...) strong equilibria and an upper bound of O(ln W) on SPoA (W is the sum of agents’ weights), which becomes tight, Θ(ln n), for unweighted agents.

Thomas Dueholm Hansen, Orestis A. Telelis
Efficiency, Fairness and Competitiveness in Nash Bargaining Games

Recently, [8] defined the class of Linear Nash Bargaining Games (LNB) and obtained combinatorial, polynomial time algorithms for several games in this class. [8] also defines two natural subclasses within LNB, UNB and SNB, which contain a number of natural Nash bargaining games. In this paper we define three basic game theoretic properties of Nash bargaining games: price of bargaining, fairness and full competitiveness. We show that for each of these properties, a game in UNB has this property iff it is in SNB.

Deeparnab Chakrabarty, Gagan Goel, Vijay V. Vazirani, Lei Wang, Changyuan Yu

Session A.4: Equilibrium

Computing an Extensive-Form Correlated Equilibrium in Polynomial Time

We present a polynomial-time algorithm for finding one extensive-form correlated equilibrium (EFCE) for multiplayer extensive games with perfect recall. This is the first such algorithm for an equilibrium notion for games of this generality. The EFCE concept was defined by von Stengel and Forges [1]. Our algorithm extends the constructive existence proof and polynomial-time algorithm for finding a correlated equilibrium in succinctly representable games by Papadimitriou and Roughgarden [2,3]. We describe the set of EFCE with a polynomial number of consistency and incentive constraints, and exponentially many variables. The algorithm employs linear programming duality, the ellipsoid algorithm, and Markov chain steady-state computations. We also sketch a possible interpretation of the variables in the dual system.

Wan Huang, Bernhard von Stengel
Homogeneous Interference Game in Wireless Networks

We consider the problem of joint usage of a shared wireless channel in an interference-bound environment, and focus on a distributed setting where there is no central entity managing the various transmissions. In such systems, unlike other multiple access environments, several transmissions may succeed simultaneously, depending on spatial interferences between the different stations. We use a game-theoretic view to model the problem, where the stations are selfish agents aiming at maximizing their success probability. We show that when interferences are homogeneous, system performance suffers an exponential degradation at equilibrium, due to the selfishness of the stations. However, when using a proper penalization scheme for aggressive stations, we can ensure the system’s performance is at least 1/e of the optimal value, while still being at equilibrium.

Joseph (Seffi) Naor, Danny Raz, Gabriel Scalosub
A Network Coloring Game

We analyze a network coloring game which was first proposed by Michael Kearns and others in their experimental study of dynamics and behavior in social networks. In each round of the game, each player, as a node in a network G, uses a simple, greedy and selfish strategy by choosing randomly one of the available colors that is different from all colors played by its neighbors in the previous round. We show that the coloring game converges to its Nash equilibrium if the number of colors is at least two more than the maximum degree. Examples are given for which convergence does not happen with one fewer color. We also show that with probability at least 1 − δ, the number of rounds required is O(log(n/δ)).

Kamalika Chaudhuri, Fan Chung Graham, Mohammad Shoaib Jamall
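The dynamics are simple enough to simulate directly. The sketch below is only an illustration with hypothetical parameters (a 5-cycle with Δ + 2 = 4 colors): each conflicted node resamples uniformly from the colors its neighbors did not play in the previous round, exactly the greedy strategy described above.

```python
import random

def coloring_game(adj, num_colors, rng, max_rounds=10_000):
    """Each round, every node whose color clashed with a neighbor in the
    previous round resamples uniformly from the colors its neighbors did
    not play; a proper coloring is a Nash equilibrium of the game."""
    n = len(adj)
    colors = [rng.randrange(num_colors) for _ in range(n)]
    for rnd in range(1, max_rounds + 1):
        prev = colors[:]
        if all(prev[v] != prev[u] for v in range(n) for u in adj[v]):
            return colors, rnd - 1         # already proper: equilibrium
        for v in range(n):
            taken = {prev[u] for u in adj[v]}
            if prev[v] in taken:           # conflicted node resamples
                free = sorted(set(range(num_colors)) - taken)
                if free:
                    colors[v] = rng.choice(free)
    return colors, None                    # did not converge in time

# 5-cycle with maximum degree 2, so 4 colors guarantee convergence
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
colors, rounds = coloring_game(adj, num_colors=4, rng=random.Random(1))
print(rounds is not None)   # → True: the dynamics reached a proper coloring
```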

Session B.4: Mechanism Design II

Asynchronous Best-Reply Dynamics

In many real-world settings (e.g., interdomain routing in the Internet) strategic agents are instructed to follow best-reply dynamics in asynchronous environments. In such settings players learn of each other’s actions via update messages that can be delayed or even lost. In particular, several players might update their actions

simultaneously

, or make choices based on outdated information. In this paper we analyze the convergence of best- (and better-)reply dynamics in asynchronous environments. We provide sufficient conditions, and necessary conditions for convergence in such settings, and also study the convergence-rate of these natural dynamics.

Noam Nisan, Michael Schapira, Aviv Zohar
Fault Tolerance in Distributed Mechanism Design

We argue that in distributed mechanism design frameworks it is important to consider not only rational manipulation by players, but also malicious, faulty behavior. To this end, we show that in some instances it is possible to take a centralized mechanism and implement it in a distributed setting in a fault tolerant manner. More specifically, we examine two distinct models of distributed mechanism design – a Nash implementation with the planner as a node on the network, and an ex post Nash implementation with the planner only acting as a “bank”. For each model we show that the implementation can be made resilient to faults.

Ronen Gradwohl
Bargaining Solutions in a Social Network

We study the concept of bargaining solutions, which has been studied extensively in two-party settings, in a generalized setting involving arbitrary number of players and bilateral trade agreements over a social network. We define bargaining solutions in this setting, and show the existence of such solutions on all networks under some natural assumptions on the utility functions of the players. We also investigate the influence of network structure on equilibrium in our model, and note that approximate solutions can be computed efficiently when the networks are trees of bounded degree and the parties have nice utility functions.

Tanmoy Chakraborty, Michael Kearns

Session C.4: Online Advertisement

Sharing Online Advertising Revenue with Consumers

Online service providers generate much of their revenue by monetizing user attention through online advertising. In this paper, we investigate revenue sharing, where the user is rewarded with a portion of the surplus generated from the advertising transaction, in a cost-per-conversion advertising system. While revenue sharing can potentially lead to an increased user base, and correspondingly larger revenues in the long term, we are interested in the effect of cashback in the short term, in particular for a single auction. We capture the effect of cashback on the auction’s outcome via price-dependent conversion probabilities, derived from a model of rational user behavior: this trades off the direct loss in per-conversion revenue against an increase in conversion rate. We analyze equilibrium behavior under two natural schemes for specifying cashback: as a fraction of the search engine’s revenue per conversion, and as a fraction of the posted item price. This leads to some interesting conclusions: first, while there is an equivalence between the search engine and the advertiser providing the cashback specified as a fraction of search engine profit, this equivalence no longer holds when cashback is specified as a fraction of item price. Second, cashback can indeed lead to a short-term increase in search engine revenue; however, this depends strongly on the scheme used for implementing cashback as a function of the input. Specifically, given a particular set of input values (user parameters and advertiser posted prices), one scheme can lead to an increase in revenue for the search engine, while the others may not. Thus, an accurate model of the marketplace and the target user population is essential for implementing cashback.

Yiling Chen, Arpita Ghosh, Randolph Preston McAfee, David Pennock
Budget Constrained Bidding in Keyword Auctions and Online Knapsack Problems

We consider the budget-constrained bidding optimization problem for sponsored search auctions, and model it as an online (multiple-choice) knapsack problem. We design both deterministic and randomized algorithms for the online (multiple-choice) knapsack problems achieving a provably optimal competitive ratio. This translates back to fully automatic bidding strategies maximizing either profit or revenue for the budget-constrained advertiser. Our bidding strategy for revenue maximization is oblivious (i.e., without knowledge) of other bidders’ prices and/or clickthrough-rates for those positions. We evaluate our bidding algorithms using both synthetic data and real bidding data gathered manually, and also discuss a sniping heuristic that strictly improves bidding performance. With sniping and parameter tuning enabled, our bidding algorithms can achieve a performance ratio above 90% of the optimum achieved by an omniscient bidder.
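A standard way to realize such an online knapsack bidding rule is an efficiency threshold that rises as the budget is consumed: an item is admitted only if its value-to-weight ratio beats the current threshold. The sketch below is an illustration under assumptions, not the paper's exact algorithm; the threshold shape psi(z) = (Ue/L)^z · (L/e), for ratios known to lie in [L, U], is one form analyzed in this line of work.

```python
import math

def make_threshold(L, U):
    """Rising efficiency threshold (an assumed form for this sketch):
    psi(z) = (U*e/L)**z * (L/e), where z is the fraction of budget spent."""
    def psi(z):
        return (U * math.e / L) ** z * (L / math.e)
    return psi

def online_knapsack(items, capacity, L, U):
    """items: iterable of (value, weight) pairs arriving online.
    Admit an item iff it fits and its value/weight ratio clears psi(used fraction)."""
    psi = make_threshold(L, U)
    used, taken = 0.0, []
    for value, weight in items:
        if used + weight <= capacity and value / weight >= psi(used / capacity):
            taken.append((value, weight))
            used += weight
    return taken
```

Early on the threshold is low (psi(0) = L/e), so moderately efficient items are accepted; as the budget fills, only items near the maximum ratio U clear the bar.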

Yunhong Zhou, Deeparnab Chakrabarty, Rajan Lukose
Position Auctions with Bidder-Specific Minimum Prices

Position auctions such as the Generalized Second Price (GSP) auction are in wide use for sponsored search, e.g., by Yahoo! and Google. We now have an understanding of the equilibria of these auctions, via game-theoretic concepts like Generalized English Auctions and the “locally envy-free” property, as well as through a relationship to the well-known, truthful Vickrey-Clarke-Groves (VCG) mechanism. In practice, however, position auctions are implemented with additional constraints, in particular, bidder-specific minimum prices are enforced by all major search engines. The minimum prices are used to control the quality of the ads that appear on the page.

We study the effect of bidder-specific minimum prices in position auctions with an emphasis on GSP. Some properties proved for standard GSP no longer hold in this setting. For example, we show that the GSP allocation is now not always efficient (in terms of advertiser value). Also, the property of “envy-locality” enjoyed by GSP—which is essential in the prior analysis of strategies and equilibria—no longer holds. Our main result is to show that despite losing envy locality, GSP with bidder-specific minimum prices still has an envy-free equilibrium. We conclude by studying the effect of bidder-specific minimum prices on VCG auctions.

Eyal Even-Dar, Jon Feldman, Yishay Mansour, S. Muthukrishnan

Session A.5: Sponsored Search Auctions

A Cascade Model for Externalities in Sponsored Search

One of the most important yet insufficiently studied issues in online advertising is the externality effect among ads: the value of an ad impression on a page is affected not just by the location that the ad is placed in, but also by the set of other ads displayed on the page. For instance, a high-quality competing ad can draw users away from another ad, while a low-quality ad could cause the viewer to abandon the page altogether.

In this paper, we propose and analyze a model for externalities in sponsored search ads. Our model is based on the assumption that users will visually scan the list of ads from the top to the bottom. After each ad, they make independent random decisions with ad-specific probabilities on whether to continue scanning. We then generalize the model in two ways: allowing for multiple separate blocks of ads, and allowing click probabilities to explicitly depend on ad positions as well. For the most basic model, we present a polynomial-time incentive-compatible auction mechanism for allocating and pricing ad slots. For the generalizations, we give approximation algorithms for the allocation of ads.
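The top-to-bottom scan model described above admits a simple one-pass computation of expected clicks per ad; the sketch below is an illustration with assumed parameter names, not the paper's notation.

```python
def expected_clicks(click, cont):
    """Cascade scan model: the user views ads top to bottom; after viewing ad i
    they continue scanning with ad-specific probability cont[i].
    Returns the expected number of clicks each ad receives."""
    reach = 1.0  # probability the user is still scanning when reaching this ad
    out = []
    for c, q in zip(click, cont):
        out.append(reach * c)  # seen with prob `reach`, clicked with prob c
        reach *= q             # user continues past this ad with prob q
    return out

# Two ads: the first is clicked with prob 0.2 and retains only half the users,
# so the second ad's 0.3 click rate is cut to 0.5 * 0.3 = 0.15.
print(expected_clicks([0.2, 0.3], [0.5, 0.9]))
```

This captures the externality: a high-up ad with a low continuation probability depresses the expected clicks of everything below it.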

David Kempe, Mohammad Mahdian
Sponsored Search Auctions with Reserve Prices: Going Beyond Separability

The original analysis of sponsored search auctions by Varian and independently by Aggarwal et al. did not take into account the notion of reserve prices, which are common across all major search engines. We investigate this further and show that the separability assumption derived by Aggarwal et al. is not sufficient for aligning the greedy allocation employed by GSP and the efficient allocation in the presence of reserve prices. We extend separability and derive the condition under which the greedy ranking allocation is an efficient truthful mechanism. We call this generalization the extended separability condition.

To complement the analysis of the extended separability condition we present an extension of the laddered auction in the presence of reserve prices, which we call the bi-laddered auction. We show that the bi-laddered auction is the unique truthful auction for advertisers that provides a price vector support for an extended GSP SNE scheme. Nevertheless, the bi-laddered auction is shown to allow a budget deficit.

Building on our model of reserve prices we continue by depicting advertising networks as double sided sponsored search markets with advertisers on one side, syndicators on the other, and the search engine as the market maker. For the latter model we provide a truthful scheme for the seller and show that by assuming separability one can design a SNE, individually rational, and nearly efficient syndicated market that allows the market maker (search engine) to run the market with a surplus/budget balance. The uniqueness of our bi-laddered auction scheme implies that without the separability condition no truthful syndicated market can run without a deficit.

Rica Gonen, Sergei Vassilvitskii
Auctions for Share-Averse Bidders

We introduce and study share-averse auctions, a class of auctions with allocation externalities, in which items can be allocated to arbitrarily many bidders, but the valuation of each individual bidder decreases as the items get allocated to more other bidders. For single-item auctions where players have incomplete information about each others’ valuation, we characterize the truthful mechanism that maximizes the auctioneer’s revenue, and analyze it for some interesting cases.

We then move beyond single-item auctions, and analyze single-minded combinatorial auctions. We derive sufficient conditions for a truthful allocation in this setting. We also obtain a $\sqrt{m}$-approximation algorithm for maximizing social welfare, which is essentially tight unless P=NP.

Mahyar Salek, David Kempe
Sponsored Search Auctions with Markovian Users

Sponsored search involves running an auction among advertisers who bid in order to have their ad shown next to search results for specific keywords. The most popular auction for sponsored search is the “Generalized Second Price” (GSP) auction where advertisers are assigned to slots in the decreasing order of their score, which is defined as the product of their bid and click-through rate. One of the main advantages of this simple ranking is that bidding strategy is intuitive: to move up to a more prominent slot on the results page, bid more. This makes it simple for advertisers to strategize. However, this ranking only maximizes efficiency under the assumption that the probability of a user clicking on an ad is independent of the other ads shown on the page. We study a Markovian user model that does not make this assumption. Under this model, the most efficient assignment is no longer a simple ranking function as in GSP. We show that the optimal assignment can be found efficiently (even in near-linear time). As a result of the more sophisticated structure of the optimal assignment, bidding dynamics become more complex: indeed it is no longer clear that bidding more moves one higher on the page. Our main technical result is that despite the added complexity of the bidding dynamics, the optimal assignment has the property that ad position is still monotone in bid. Thus even in this richer user model, our mechanism retains the core bidding dynamics of the GSP auction that make it useful for advertisers.
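The standard GSP ranking and pricing described above can be sketched as follows. This is a simplified illustration with assumed names of the baseline bid-times-CTR ranking, not the paper's Markovian mechanism: each winner pays, per click, the minimum bid needed to keep their slot.

```python
def gsp_allocate(bids, ctrs, num_slots):
    """Generalized Second Price sketch: rank advertisers by score = bid * ctr;
    the winner of each slot pays score of the next-ranked advertiser
    divided by their own ctr. Returns a list of (advertiser, price) pairs."""
    order = sorted(range(len(bids)), key=lambda i: bids[i] * ctrs[i], reverse=True)
    results = []
    for slot in range(min(num_slots, len(order))):
        i = order[slot]
        if slot + 1 < len(order):
            j = order[slot + 1]
            price = bids[j] * ctrs[j] / ctrs[i]  # minimum bid to stay above j
        else:
            price = 0.0  # no competitor below; assume a zero reserve price
        results.append((i, price))
    return results

# Advertiser 1 has the highest score (3 * 0.2 = 0.6) and wins the top slot.
print(gsp_allocate([4, 3, 5], [0.1, 0.2, 0.1], 2))
```

Under this ranking, raising your bid weakly raises your score and hence your position, which is the intuitive bidding dynamic the abstract refers to.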

Gagan Aggarwal, Jon Feldman, S. Muthukrishnan, Martin Pál
On the Equilibria and Efficiency of the GSP Mechanism in Keyword Auctions with Externalities

In the increasingly important market of online search advertising, a multitude of parameters affect the performance of advertising campaigns and their ability to attract users’ attention enough to produce clicks. Thus far, the majority of the relevant literature assumed an advertisement’s probability of receiving a click to be dependent on the advertisement’s quality and its position in the sponsored search list, but independent of the other advertisements shown on the same webpage.

We examine a promising new model [1, 16] that incorporates the externalities effect based on the probabilistic behavior of a typical user. We focus on the Generalized Second Price mechanism used in practice and examine the Nash equilibria of the model. We also investigate the performance of this mechanism under the new model by comparing the efficiency of its equilibria to the optimal efficiency.

Ioannis Giotis, Anna R. Karlin

Session B.5: Voting Problem

Biased Voting and the Democratic Primary Problem

Inspired by the recent Democratic National Primary, we consider settings in which the members of a distributed population must balance their individual preferences over candidates with a requirement to quickly achieve collective unity. We formalize such settings as the “Democratic Primary Problem” (DPP) over an undirected graph, whose local structure models the social influences acting on individual voters.

After contrasting our model with the extensive literature on diffusion in social networks (in which a force towards collective unity is usually absent), we present the following series of technical results:

An impossibility result establishing exponential convergence time for the DPP for a broad class of local stochastic updating rules, which includes natural generalizations of the well-studied “voter model” from the diffusion literature (and which is known to converge in polynomial time in the absence of differing individual preferences).

A new simple and local stochastic updating protocol whose convergence time is provably polynomial on any instance of the DPP. This new protocol allows voters to declare themselves “undecided”, and has a temporal structure reminiscent of periodic polling or primaries.

An extension of the new protocol that we prove is an approximate Nash equilibrium for a game-theoretic version of the DPP.

Michael Kearns, Jinsong Tan
Frequent Manipulability of Elections: The Case of Two Voters

The recent result of Friedgut, Kalai and Nisan [9] gives a quantitative version of the Gibbard-Satterthwaite Theorem regarding manipulation in elections, but holds only for neutral social choice functions and three alternatives. We complement their theorem by proving a similar result regarding Pareto-Optimal social choice functions when the number of voters is two. We discuss the implications of our results with respect to the agenda of precluding manipulation in elections by means of computational hardness.

Shahar Dobzinski, Ariel D. Procaccia
The Power of Small Coalitions in Cost Sharing

In a cost-sharing problem, finitely many players have an unknown preference for some public excludable good (service), and the task is to determine which players to serve and how to distribute the incurred cost. Therefore, incentive-compatible mechanisms are sought that elicit truthful bids, charge prices that recover the cost, and are economically efficient in that they reasonably balance cost and valuations. A commonplace notion of incentive-compatibility in cost sharing is group-strategyproofness (GSP), meaning that not even coordinated deceit is profitable. However, GSP makes strong assumptions about players’ coordination abilities and is known to impose severe limitations on other goals in cost sharing. There is hence good reason to seek a weaker axiom. In this work, we study the following question: does relaxing GSP to resilience only against coalitions of bounded size yield a richer set of possible mechanisms? Surprisingly, the answer is essentially “no”: we prove that already a mechanism resilient to coalitions of size only two (“2-GSP”) is GSP, once we require that cost shares must only depend on the service allocation (and not directly on the bids). Moreover, we show that even without additional requirements, 2-GSP implies weak group-strategyproofness (WGSP). Consequently, our results give some justification that GSP may, after all, still be desirable in various scenarios. As another benefit, we believe that our characterizations will facilitate devising and understanding new GSP cost-sharing mechanisms. Finally, we relate our findings to other concepts of non-manipulability such as (outcome) non-bossiness [19] and weak utility non-bossiness [13].

Florian Schoppmann
Social Context Games

We introduce social context games. A social context game is defined by an underlying game in strategic form, and a social context consisting of an undirected graph of neighborhood among players and aggregation functions. The players and strategies in a social context game are as in the underlying game, while the players’ utilities in a social context game are computed from their payoffs in the underlying game based on the graph of neighborhood and the aggregation functions. Examples of social context games are ranking games and coalitional congestion games. In this paper we consider resource selection games as the underlying games, and four basic social contexts. An important property of resource selection games is the existence of pure strategy equilibrium. We study the existence of pure strategy Nash equilibrium in the corresponding social context games. We also show that the social context games possessing pure strategy Nash equilibria are not potential games, and therefore are distinguished from congestion games.

Itai Ashlagi, Piotr Krysta, Moshe Tennenholtz

Session C.5: Algorithms and Optimization II

Approximability and Parameterized Complexity of Minmax Values

We consider approximating the minmax value of a multi-player game in strategic form. Tightening recent bounds by Borgs et al., we observe that approximating the value with a precision of ε log n digits (for any constant ε > 0) is NP-hard, where n is the size of the game. On the other hand, approximating the value with a precision of c log log n digits (for any constant c ≥ 1) can be done in quasi-polynomial time. We consider the parameterized complexity of the problem, with the parameter being the number of pure strategies k of the player for which the minmax value is computed. We show that if there are three players, k = 2, and there are only two possible rational payoffs, the minmax value is a rational number and can be computed exactly in linear time. In the general case, we show that the value can be approximated with any polynomial number of digits of accuracy in time n^O(k). On the other hand, we show that minmax value approximation is W[1]-hard and hence not likely to be fixed-parameter tractable. Concretely, we show that if k-Clique requires time n^Ω(k), then so does minmax value computation.

Kristoffer Arnsfelt Hansen, Thomas Dueholm Hansen, Peter Bro Miltersen, Troels Bjerre Sørensen
An “Ethical” Game-Theoretic Solution Concept for Two-Player Perfect-Information Games

The standard solution concept for perfect-information extensive form games is subgame perfect Nash equilibrium. However, humans do not always play according to a subgame perfect Nash equilibrium, especially in games where it is possible for all the players to obtain much higher payoffs if they place some trust in each other (and this trust is not violated). In this paper, we introduce a new solution concept for two-player perfect-information games that attempts to model this type of trusting behavior (together with the “ethical” behavior of not violating that trust). The concept takes subgame perfect equilibrium as a starting point, but then repeatedly resolves the game based on the players being able to trust each other. We give two distinct algorithmic definitions of the concept and show that they are equivalent. Finally, we give a fast implementation of one of the algorithms for solving the game, and show that it runs in time O(n log n + nh log(n/h)).

Joshua Letchford, Vincent Conitzer, Kamal Jain
The Secretary Problem with a Hazard Rate Condition

In the classical secretary problem, the objective is to select the candidate of maximum value among a set of n candidates arriving one by one. The values of the candidates come from an unknown distribution and are revealed at the time the candidate arrives, at which point an irrevocable decision on whether to select the candidate must be made. The well-known solution to this problem, due to Dynkin, waits for n/e steps to set an “aspiration level” equal to the maximum value of the candidates seen, and then accepts the first candidate whose value exceeds this level. This guarantees a probability of at least 1/e of selecting the maximum value candidate, and there are distributions for which this is essentially the best possible. One feature of this algorithm that seems at odds with reality is that it prescribes a long waiting period before selecting a candidate. In this paper, we show that if a standard hazard rate condition is imposed on the distribution of values, the waiting period falls from n/e to n/log(n), meaning that it is enough to observe a diminishingly small sample to set the optimal aspiration level. This result is tight, as both the hazard condition and the optimal sampling period bind exactly for the exponential distribution.
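Dynkin's rule described above is easy to simulate. The sketch below (function names are our own) runs the classical n/e-cutoff rule on random permutations and estimates its success probability, which should land near the 1/e guarantee.

```python
import math
import random

def secretary(values, sample_frac):
    """Dynkin-style stopping rule: observe the first floor(sample_frac * n)
    candidates only to set an aspiration level, then accept the first later
    candidate whose value exceeds it."""
    n = len(values)
    cutoff = int(sample_frac * n)
    aspiration = max(values[:cutoff], default=float("-inf"))
    for v in values[cutoff:]:
        if v > aspiration:
            return v
    return values[-1]  # never exceeded the aspiration: forced to take the last

# Monte Carlo estimate of the probability of picking the best candidate
# with the classical 1/e sampling fraction.
random.seed(0)
n, trials = 100, 2000
wins = sum(
    secretary(random.sample(range(n), n), 1 / math.e) == n - 1
    for _ in range(trials)
)
print(wins / trials)  # empirically close to the 1/e ≈ 0.368 guarantee
```

Replacing the cutoff 1/e with roughly 1/log(n) is the change the paper analyzes under the hazard rate condition; the simulation harness stays the same.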

Mohammad Mahdian, Randolph Preston McAfee, David Pennock
Impact of QoS on Internet User Welfare

In this paper, we investigate the welfare effects of transition from a single-service class to two-service classes in the Internet. We consider an ISP who offers network access to a fixed user base, consisting of users who differ in their quality requirements and willingness to pay for the access. We model user-ISP interactions as a game in which the ISP makes capacity and pricing decisions to maximize his profits and the users only decide which service to buy, if any. Our model provides robust pricing for networks with single- and two-service classes. Our results indicate that transition to multiple service classes is socially desirable, but could be blocked due to the unfavorable distributional consequences that it inflicts on the existing network users. To facilitate the transition, we propose a simple regulatory tool that alleviates the political economic constraints and thus makes the transition feasible.

Galina Schwartz, Nikhil Shetty, Jean Walrand
Nonlinear Pricing with Network Externalities

This paper considers the screening problem faced by a monopolist of a network good in a general setting. We fully characterize the optimal contracts in the joint presence of network externalities and asymmetric information about agents’ types. We find that the pattern of consumption distortion crucially depends on the degree of network congestibility. It is shown that an optimal consumption scheme exhibits a two-way distortion, no distortion at the top, or a one-way distortion if and only if the network is congestible, neutral-congestible, or discongestible, respectively.

Dawen Meng, Guoqiang Tian, Lei Sun
Backmatter
Metadata
Title
Internet and Network Economics
Edited by
Christos Papadimitriou
Shuzhong Zhang
Copyright year
2008
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-92185-1
Print ISBN
978-3-540-92184-4
DOI
https://doi.org/10.1007/978-3-540-92185-1