
2006 | Book

Internet and Network Economics

Second International Workshop, WINE 2006, Patras, Greece, December 15-17, 2006. Proceedings

Edited by: Paul Spirakis, Marios Mavronicolas, Spyros Kontogiannis

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


Table of Contents

Frontmatter
Recent Developments in Learning and Competition with Finite Automata (Extended Abstract)

Consider a repeated two-person game. The question is how much smarter should a player be to effectively predict the moves of the other player. The answer depends on the formal definition of effective prediction, the number of actions each player has in the stage game, as well as on the measure of smartness. Effective prediction means that, no matter what the stage-game payoff function, the player can play (with high probability) a best reply in most stages. Neyman and Spencer [4] provide a complete asymptotic solution when smartness is measured by the size of the automata that implement the strategies.

Abraham Neyman
Dynamic Mechanism Design

In this paper we address the question of designing truthful mechanisms for solving optimization problems on dynamic graphs. More precisely, we are given a graph G of n nodes, and we assume that each edge of G is owned by a selfish agent. The strategy of an agent consists in revealing to the system the cost for using its edge, but this cost is not constant and can change over time. Additionally, edges can enter into and exit from G. Among the various possible assumptions which can be made to model how these edge-cost modifications take place, we focus on two settings: (i) the dynamic, in which modifications are unpredictable and time-independent, and for a given optimization problem on G, the mechanism has to maintain efficiently the output specification and the payment scheme for the agents; (ii) the time-sequenced, in which modifications happen at fixed time steps, and the mechanism has to minimize an objective function which takes into consideration both the quality and the set-up cost of a new solution. In both settings, we investigate the existence of exact and approximate truthful mechanisms. In particular, for the dynamic setting, we analyze the minimum spanning tree problem, and we show that if edge costs can only decrease, then there exists an efficient dynamic truthful mechanism for handling a sequence of k edge-cost reductions having runtime ${\cal O}(h \log n + k \log^4 n)$, where h is the overall number of payment changes.

Davide Bilò, Luciano Gualà, Guido Proietti
Unconditional Competitive Auctions with Copy and Budget Constraints

This paper investigates a new auction model in which bidders have both copy and budget constraints. The new model has extensive and interesting applications in auctions of online ad-words, software licenses, etc. We consider the following problem: supposing all participants are rational, how should the objects be allocated, and at what price, so as to guarantee the auctioneer a high revenue, and how high is that revenue?

We introduce a new kind of mechanism called win-win mechanisms and present the notion of unconditional competitive auctions. A notably interesting property of win-win mechanisms is that each bidder’s self-interested strategy brings better utility not only to himself but also to the auctioneer. We then present win-win mechanisms for multi-unit auctions with copy and budget constraints. We prove that these auctions are unconditionally competitive under both limited and unlimited supply.

Tian-Ming Bu, Qi Qi, Aries Wei Sun
Truthful Auctions with Optimal Profit

We study the design of truthful auction mechanisms for maximizing the seller’s profit. We focus on the case when the auction mechanism does not have any knowledge of bidders’ valuations, especially of their upper bound. For the Single-Item auction, we obtain an “asymptotically” optimal scheme: for any $k \in \mathbb{Z}^+$ and ε > 0, we give a randomized truthful auction that guarantees an expected profit of $\Omega(\frac{OPT}{\ln OPT \ln\ln OPT \cdots (\ln^{(k)}OPT)^{1+\epsilon}})$, where OPT is the maximum social utility of the auction. Moreover, we show that no truthful auction can guarantee an expected profit of $\Omega(\frac{OPT}{\ln OPT \ln\ln OPT\cdots \ln^{(k)}OPT})$.

In addition, we extend our results and techniques to Multi-unit, Unit-Demand, and Combinatorial auctions.

Pinyan Lu, Shang-Hua Teng, Changyuan Yu
Mechanisms with Verification for Any Finite Domain

In this work we study mechanisms with verification, as introduced by Nisan and Ronen [STOC 1999], to solve problems involving selfish agents. We provide a technique for designing truthful mechanisms with verification that optimally solve the underlying optimization problem. Problems (optimally) solved with our technique belong to a rich class that includes, as special cases, utilitarian problems and many others considered in the literature for so-called one-parameter agents (e.g., the makespan). Our technique extends the one recently presented by Auletta et al. [ICALP 2006] as it works for any finite multi-dimensional valuation domain. As a special case we obtain an alternative technique to optimally solve (though not in polynomial time) Scheduling Unrelated Machines, studied (and solved) by Nisan and Ronen. Interestingly enough, our technique also solves the case of compound agents (i.e., agents declaring more than one value). As an application we provide the first optimal truthful mechanism with verification for Scheduling Unrelated Machines in which every selfish agent controls more than one (unrelated) machine. We also provide methods leading to approximate solutions obtained with polynomial-time truthful mechanisms with verification. With such methods we obtain polynomial-time truthful mechanisms with verification for smooth problems involving compound agents composed of one-parameter valuations. Finally, we investigate the construction of mechanisms (with verification) for infinite domains. We show that existing techniques to obtain truthful mechanisms (for the case in which verification is not allowed), dealing with infinite domains, could completely annul the advantages that verification implies.

Carmine Ventre
Pure Nash Equilibria in Player-Specific and Weighted Congestion Games

Unlike standard congestion games, weighted congestion games and congestion games with player-specific delay functions do not necessarily possess pure Nash equilibria. It is known, however, that there exist pure equilibria for both of these variants in the case of singleton congestion games, i.e., if the players’ strategy spaces contain only sets of cardinality one. In this paper, we investigate how far such a property on the players’ strategy spaces guaranteeing the existence of pure equilibria can be extended. We show that both weighted and player-specific congestion games admit pure equilibria in the case of matroid congestion games, i.e., if the strategy space of each player consists of the bases of a matroid on the set of resources. We also show that the matroid property is the maximal property that guarantees pure equilibria without taking into account how the strategy spaces of different players are interweaved. In the case of player-specific congestion games, our analysis of matroid games also yields a polynomial time algorithm for computing pure equilibria.

Heiner Ackermann, Heiko Röglin, Berthold Vöcking
On the Complexity of Pure-Strategy Nash Equilibria in Congestion and Local-Effect Games
–Extended Abstract–

Congestion games are a fundamental class of noncooperative games possessing pure-strategy Nash equilibria. In the network version, each player wants to route one unit of flow on a path from her origin to her destination at minimum cost, and the cost of using an arc only depends on the total number of players using that arc. A natural extension is to allow for players sending different amounts of flow, which results in so-called weighted congestion games. While examples have been exhibited showing that pure-strategy Nash equilibria need not exist, we prove that it actually is strongly NP-hard to determine whether a given weighted network congestion game has a pure-strategy Nash equilibrium. This is true regardless of whether flow is unsplittable (has to be routed on a single path for each player) or not.

A related family of games are local-effect games, where the disutility of a player taking a particular action depends on the number of players taking the same action and on the number of players choosing related actions. We show that the problem of deciding whether a bidirectional local-effect game has a pure-strategy Nash equilibrium is NP-complete, and that the problem of finding a pure-strategy Nash equilibrium in a bidirectional local-effect game with linear local-effect functions (for which the existence of a pure-strategy Nash equilibrium is guaranteed) is PLS-complete. The latter proof uses a tight PLS-reduction, which implies the existence of instances and initial states for which any sequence of selfish improvement steps needs exponential time to reach a pure-strategy Nash equilibrium.

Juliane Dunkel, Andreas S. Schulz
Strong and Correlated Strong Equilibria in Monotone Congestion Games

The study of congestion games is central to the interplay between computer science and game theory. However, most work in this context does not deal with possible deviations by coalitions of players, a significant issue one may wish to consider. In order to deal with this issue we study the existence of strong and correlated strong equilibria in monotone congestion games. Our study of strong equilibrium deals with monotone-increasing congestion games, complementing the results obtained by Holzman and Law-Yone on monotone-decreasing congestion games. We then present a study of correlated-strong equilibrium for both decreasing and increasing monotone congestion games.

Ola Rozenfeld, Moshe Tennenholtz
The Equilibrium Existence Problem in Finite Network Congestion Games

An open problem is presented regarding the existence of pure strategy Nash equilibrium (PNE) in network congestion games with a finite number of non-identical players, in which the strategy set of each player is the collection of all paths in a given network that link the player’s origin and destination vertices, and congestion increases the costs of edges. A network congestion game in which the players differ only in their origin–destination pairs is a potential game, which implies that, regardless of the exact functional form of the cost functions, it has a PNE. A PNE does not necessarily exist if (i) the dependence of the cost of each edge on the number of users is player- as well as edge-specific or (ii) the (possibly, edge-specific) cost is the same for all players but it is a function (not of the number but) of the total weight of the players using the edge, with each player i having a different weight w_i. In a parallel two-terminal network, in which the origin and the destination are the only vertices different edges have in common, a PNE always exists even if the players differ in either their cost functions or weights, but not in both. However, for general two-terminal networks this is not so. The problem is to characterize the class of all two-terminal network topologies for which the existence of a PNE is guaranteed even with player-specific costs, and the corresponding class for player-specific weights. Some progress in solving this problem is reported.
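The potential-game claim above (identical cost functions imply a PNE) can be sketched with Rosenthal’s potential on a hypothetical toy instance (two players, two parallel links; not an example from the paper): better-response dynamics strictly decreases the potential, so it must terminate in a pure equilibrium.

```python
# Toy unweighted network congestion game (hypothetical instance): two players
# route from s to t over one of two parallel links; the cost of a link depends
# only on the number of players using it. Rosenthal's potential
# Phi(state) = sum over edges e of cost_e(1) + ... + cost_e(load_e)
# decreases by exactly the deviating player's cost improvement.

paths = {"top": ("e1",), "bottom": ("e2",)}        # strategy sets (paths)
cost = {"e1": lambda k: k, "e2": lambda k: 2 * k}  # edge latency functions

def loads(state):
    ld = {}
    for p in state:
        for e in paths[p]:
            ld[e] = ld.get(e, 0) + 1
    return ld

def player_cost(state, i):
    ld = loads(state)
    return sum(cost[e](ld[e]) for e in paths[state[i]])

def potential(state):
    ld = loads(state)
    return sum(sum(cost[e](k) for k in range(1, n + 1)) for e, n in ld.items())

def better_response_dynamics(state):
    # Let any player deviate to a cheaper path while possible; each step
    # strictly decreases Rosenthal's potential, so this reaches a PNE.
    state = list(state)
    improved = True
    while improved:
        improved = False
        for i in range(len(state)):
            for p in paths:
                new = state[:i] + [p] + state[i + 1:]
                if player_cost(new, i) < player_cost(state, i):
                    assert potential(new) < potential(state)
                    state, improved = new, True
    return state

eq = better_response_dynamics(["bottom", "bottom"])
```

Starting from both players on the slower link, one player defects and the dynamics settles into the split assignment, which is a pure Nash equilibrium.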

Igal Milchtaich
First-Passage Percolation on a Width-2 Strip and the Path Cost in a VCG Auction

We study both the time constant for first-passage percolation, and the Vickrey-Clarke-Groves (VCG) payment for the shortest path, on a width-2 strip with random edge costs. These statistics attempt to describe two seemingly unrelated phenomena, arising in physics and economics respectively: the first-passage percolation time predicts how long it takes for a fluid to spread through a random medium, while the VCG payment for the shortest path is the cost of maximizing social welfare among selfish agents. However, our analyses of the two are quite similar, and require solving (slightly different) recursive distributional equations. Using Harris chains, we can characterize distributions, not just expectations.
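The VCG payment for a shortest-path edge can be illustrated on a tiny toy graph (an assumed illustration of the standard payment rule, not the paper’s width-2 strip analysis): each edge on the winning path is paid its declared cost plus the marginal harm its removal would cause to the path length.

```python
import heapq

def dijkstra(edges, s, t):
    # edges: dict {(u, v): cost}, treated as undirected.
    # Returns (distance, list of path edges) or (inf, None) if t is unreachable.
    adj = {}
    for (u, v), c in edges.items():
        adj.setdefault(u, []).append((v, c, (u, v)))
        adj.setdefault(v, []).append((u, c, (u, v)))
    dist, prev = {s: 0}, {}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, c, e in adj.get(u, []):
            if d + c < dist.get(v, float("inf")):
                dist[v] = d + c
                prev[v] = (u, e)
                heapq.heappush(pq, (dist[v], v))
    if t not in dist:
        return float("inf"), None
    path, node = [], t
    while node != s:
        node, e = prev[node]
        path.append(e)
    return dist[t], path

def vcg_payments(edges, s, t):
    # VCG pays each shortest-path edge e its cost plus the marginal harm of
    # removing it: payment(e) = d(G - e) - (d(G) - c_e).
    d, path = dijkstra(edges, s, t)
    pay = {}
    for e in path:
        rest = {f: c for f, c in edges.items() if f != e}
        d_without, _ = dijkstra(rest, s, t)
        pay[e] = d_without - (d - edges[e])
    return pay

# Hypothetical example: shortest path s-a-t costs 2, the detour s-t costs 3,
# so each winning edge of cost 1 is paid 2 (total payment 4 > path cost 2).
pay = vcg_payments({("s", "a"): 1, ("a", "t"): 1, ("s", "t"): 3}, "s", "t")
```

The gap between the total payment and the true path cost is exactly the VCG overpayment the abstract refers to.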

Abraham Flaxman, David Gamarnik, Gregory B. Sorkin
Optimal Cost-Sharing Mechanisms for Steiner Forest Problems

Könemann, Leonardi, and Schäfer [14] gave a 2-budget-balanced and group-strategyproof mechanism for Steiner forest cost-sharing problems. We prove that this mechanism also achieves an $O(\log^2 k)$-approximation of the social cost, where k is the number of players. As a consequence, the KLS mechanism has the smallest-possible worst-case efficiency loss, up to constant factors, among all $O(1)$-budget-balanced Moulin mechanisms for such cost functions. We also extend our results to a more general network design problem.

Shuchi Chawla, Tim Roughgarden, Mukund Sundararajan
Mechanisms to Induce Random Choice

Media access protocols in wireless networks require each contending node to wait for a backoff time chosen randomly from a fixed range, before attempting to transmit on a shared channel. However, nodes acting in their own selfish interest may not follow the protocol. In this paper, we use a mechanism design approach to study how nodes might be induced to adhere to the protocol. In particular, a static version of the problem is modeled as a strategic game (the protocol) played by non-cooperating, rational players (the nodes). We present a game which exhibits a unique mixed-strategy Nash equilibrium that corresponds to nodes choosing backoff times randomly from a given range of values, according to any a priori given distribution. We extend this result to the situation when each player can choose a backoff value from a different range, provided there are at least two players choosing from the largest range. In contrast, we show that if there are exactly two players with different backoff ranges, then it becomes impossible to design a strategic game with a unique such Nash equilibrium. Finally, we show an impossibility result under certain natural limitations on the network authority.

Antoniy Ganchev, Lata Narayanan, Sunil Shende
Bayesian Optimal No-Deficit Mechanism Design

One of the most fundamental problems in mechanism design is that of designing the auction that gives the optimal profit to the auctioneer. For the case that the probability distributions on the valuations of the bidders are known and independent, Myerson [15] reduces the problem to that of maximizing the common welfare by considering the virtual valuations in place of the bidders’ actual valuations. The Myerson auction maximizes the seller’s profit over the class of all mechanisms that are truthful and individually rational for all the bidders; however, the mechanism does not satisfy ex post individual rationality for the seller. In other words, there are examples in which for certain sets of bidder valuations, the mechanism incurs a loss.

We consider the problem of merging the worst-case no-deficit (or ex post seller individual rationality) condition with this average-case Bayesian expected profit maximization problem. When restricting our attention to ex post incentive compatible mechanisms for this problem, we find that the Myerson mechanism is the optimal no-deficit mechanism for supermodular costs, that Myerson merged with a simple thresholding mechanism is optimal for all-or-nothing costs, and that neither mechanism is optimal for general submodular costs. Addressing the computational side of the problem, we note that for supermodular costs the Myerson mechanism is NP-hard to compute. Furthermore, we show that for all-or-nothing costs the optimal thresholding mechanism is NP-hard to compute. Finally, we consider relaxing the ex post incentive compatibility constraint and show that there is a Bayesian incentive compatible mechanism that achieves the same expected profit as Myerson, but never incurs a loss.

Shuchi Chawla, Jason D. Hartline, Uday Rajan, R. Ravi
Succinct Approximation of Trade-Off Curves

When evaluating different solutions from a design space, it is often the case that more than one criterion comes into play. The trade-off between the different criteria is captured by the so-called Pareto curve. The Pareto curve typically has an exponential number of points. However, it turns out that, under general conditions, there is a polynomially succinct curve that approximates the Pareto curve within any desired accuracy.
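One concrete way such a succinct approximation can be realized, sketched here under the simplifying assumption of two minimization objectives given as an explicit point set (the paper’s setting is far more general), is to thin the exact Pareto front so that every point is covered within a factor 1 + ε in each coordinate:

```python
def pareto_front(points):
    # Exact Pareto-minimal points for two minimization objectives:
    # sweep by increasing x and keep points that improve the best y seen.
    pts = sorted(set(points))
    front, best_y = [], float("inf")
    for x, y in pts:
        if y < best_y:
            front.append((x, y))
            best_y = y
    return front

def eps_pareto(points, eps):
    # Greedy thinning: keep a subset S such that every point (x, y) is
    # (1+eps)-covered by some (x', y') in S, i.e. x' <= (1+eps)*x and
    # y' <= (1+eps)*y. Any point dominated by a front point is then
    # covered transitively by whichever kept point covers that front point.
    kept = []
    for x, y in pareto_front(points):
        if not any(kx <= (1 + eps) * x and ky <= (1 + eps) * y
                   for kx, ky in kept):
            kept.append((x, y))
    return kept

# Hypothetical data: four Pareto-optimal points collapse to three for eps = 1.
kept = eps_pareto([(1, 10), (2, 5), (4, 4), (5, 5), (10, 1)], eps=1.0)
```

On well-behaved instances the kept set grows only logarithmically in the value range, which is the intuition behind the polynomial succinctness result.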

Mihalis Yannakakis
Game-Theoretic Aspects of Designing Hyperlink Structures

We study the problem of designing the hyperlink structure between the web pages of a web site in order to maximize the revenue generated from the traffic on the web site. We show this problem is equivalent to the well-studied setting of infinite horizon discounted Markov Decision Processes (MDPs). Thus existing results from that literature imply the existence of polynomial-time algorithms for finding the optimal hyperlink structure, as well as a linear program to describe the optimal structure. We use a similar linear program to address our problem (and, by extension, all infinite horizon discounted MDPs) from the perspective of cooperative game theory: if each web page is controlled by an autonomous agent, is it possible to give the individuals and coalitions incentive to cooperate and build the optimal hyperlink design? We study this question in the settings of transferable utility (TU) and non-transferable utility (NTU) games. In the TU setting, we use linear programming duality to show that the core of the game is non-empty and that the optimal structure is in the core. For the NTU setting, we show that if we allow “mixed” strategies, the core of the game is non-empty, but there are examples that show that the core can be highly inefficient.

Nicole Immorlica, Kamal Jain, Mohammad Mahdian
Competing for Customers in a Social Network: The Quasi-linear Case

There are many situations in which a customer’s proclivity to buy the product of any firm depends not only on the classical attributes of the product such as its price and quality, but also on who else is buying the same product. We model these situations as games in which firms compete for customers located in a “social network”. Nash equilibria (NE) in pure strategies exist and are unique. Indeed there are closed-form formulae for the NE in terms of the exogenous parameters of the model, which enables us to compute NE in polynomial time.

An important structural feature of NE is that, if there are no a priori biases between customers and firms, then there is a cut-off level above which high cost firms are blockaded at an NE, while the rest compete uniformly throughout the network.

We finally explore the relation between the connectivity of a customer and the money firms spend on him. This relation becomes particularly transparent when externalities are dominant: NE can be characterized in terms of the invariant measures on the recurrent classes of the Markov chain underlying the social network.

Pradeep Dubey, Rahul Garg, Bernard De Meyer
Selfish Service Installation in Networks
(Extended Abstract)

We consider a scenario of distributed service installation in privately owned networks. Our model is a non-cooperative vertex cover game for k players. Each player owns a set of edges in a graph G and strives to cover each edge by an incident vertex. Vertices have costs and must be purchased to be available for the cover. Vertex costs can be shared arbitrarily by players. Once a vertex is bought, it can be used by any player to fulfill the covering requirement of her incident edges. Despite its simplicity, the model exhibits a surprisingly rich set of properties. We present a cumulative set of results including tight characterizations for prices of anarchy and stability, NP-hardness of equilibrium existence, and polynomial time solvability for important subclasses of the game. In addition, we consider the task of finding approximate Nash equilibria purchasing an approximation to the optimum social cost, in which each player can improve her contribution by selfish defection only by at most a certain factor. A variation of the primal-dual algorithm for minimum weighted vertex cover yields a guarantee of 2, which is shown to be tight.
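The primal-dual scheme the last sentence builds on is the classical Bar-Yehuda/Even 2-approximation for minimum weighted vertex cover; here is a minimal sketch of that classical algorithm (not the paper’s game-theoretic variation):

```python
def primal_dual_vertex_cover(edges, weight):
    # Classical primal-dual 2-approximation for weighted vertex cover:
    # for each uncovered edge, raise its dual payment until one endpoint's
    # residual weight hits zero; all tight vertices form the cover.
    # Since each edge pays both endpoints equally, the cover's weight is at
    # most twice the total dual, which lower-bounds the optimum.
    residual = dict(weight)
    cover = set()
    for u, v in edges:
        if u in cover or v in cover:
            continue  # edge already covered
        delta = min(residual[u], residual[v])
        residual[u] -= delta
        residual[v] -= delta
        if residual[u] == 0:
            cover.add(u)
        if residual[v] == 0:
            cover.add(v)
    return cover

# Hypothetical example: the path a-b-c with a cheap middle vertex.
cover = primal_dual_vertex_cover([("a", "b"), ("b", "c")],
                                 {"a": 3, "b": 2, "c": 3})
```

On this instance the middle vertex alone is selected, which happens to be optimal; in general the guarantee is only a factor of 2.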

Jean Cardinal, Martin Hoefer
Games of Connectivity

We consider a communications network in which users transmit beneficial information to each other at a cost. We pinpoint conditions under which the induced cooperative game is supermodular (convex). Our analysis is in a lattice-theoretic framework, which is at once simple and able to encompass a wide variety of seemingly disparate models.

JEL Classification: C71, D82, L96.

Pradeep Dubey, Rahul Garg
Assignment Problems in Rental Markets

Motivated by the dynamics of the ever-popular online movie rental business, we study a range of assignment problems in rental markets. The assignment problems associated with rental markets possess a rich mathematical structure and are closely related to many well-studied one-sided matching problems. We formalize and characterize the assignment problems in rental markets in terms of one-sided matching problems, and consider several solution concepts for these problems. In order to evaluate and compare these solution concepts (and the corresponding algorithms), we define some “value” functions to capture our objectives, which include fairness, efficiency and social welfare. Then, we bound the value of the output of these algorithms in terms of the chosen value functions.

We also consider models of rental markets corresponding to static, online, and dynamic customer valuations. We provide several constant-factor approximation algorithms for the assignment problem, as well as hardness of approximation results for the different models. Finally, we describe some experiments with a discrete event simulator to compare the various algorithms in a practical setting, and present some interesting experimental results.

David Abraham, Ning Chen, Vijay Kumar, Vahab S. Mirrokni
On Portfolio’s Default-Risk-Adjusted Duration and Value: Model and Algorithm Based on Copulas

In this paper, we propose a new approach based on copulas to calculating the default-risk-adjusted duration and present value for a portfolio of bonds vulnerable to default risk. A copula function is used to determine the default dependence structure and simulate correlated default times from each individual obligor’s default distribution. This approach is verified to be effective and applicable by a numerical example, in which we demonstrate how to calculate the default-risk-adjusted duration and present value for a given portfolio. In the process we take into account the settlement time when default happens, the choice of copula function, and the correlation between obligors, and perform a sensitivity analysis of the influence of Kendall’s tau and copula functions on the default-risk-adjusted duration and present value. Results show that the duration and present value simulated from the Gaussian copula fluctuate more than those from the Clayton and Gumbel copulas when Kendall’s tau varies from zero to one.
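The simulation step can be sketched in a minimal form, assuming a two-obligor Gaussian copula with exponential default-time marginals (the paper’s calibration, copula choices, and marginals are its own; everything here is an illustrative assumption):

```python
import math
import random

def gaussian_copula_default_times(rho, lambdas, n_samples, seed=0):
    # Draw correlated standard normals via a 2x2 Cholesky factor, map each
    # through the standard normal CDF to a uniform (the Gaussian copula),
    # then invert the assumed exponential marginal F(t) = 1 - exp(-lambda*t)
    # to obtain a pair of correlated default times.
    rng = random.Random(seed)
    normal_cdf = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    samples = []
    for _ in range(n_samples):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
        u1, u2 = normal_cdf(z1), normal_cdf(z2)
        t1 = -math.log(1 - u1) / lambdas[0]  # inverse exponential CDF
        t2 = -math.log(1 - u2) / lambdas[1]
        samples.append((t1, t2))
    return samples

# Hypothetical hazard rates of 5% and 8% per year, correlation 0.6.
times = gaussian_copula_default_times(rho=0.6, lambdas=(0.05, 0.08),
                                      n_samples=1000)
```

The resulting default-time pairs would then feed into the duration and present-value calculations the abstract describes.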

Ping Li, Hou-Sheng Chen, Guang-Dong Huang, Xiao-Jun Shi
Price Roll-Backs and Path Auctions: An Approximation Scheme for Computing the Market Equilibrium

In this paper we investigate the structure of prices in approximate solutions to the market equilibrium problem. The bounds achieved allow a scaling approach for computing market equilibrium in the Fisher model. Our algorithm computes an exact solution and improves the complexity of previously known combinatorial algorithms for the problem. It consists of a price roll-back step combined with the auction steps of [11]. Our approach also leads to an efficient polynomial time approximation scheme. We also show a reduction from a flow problem to the market equilibrium problem, illustrating its inherent complexity.

Rahul Garg, Sanjiv Kapoor
New Results on Rationality and Strongly Polynomial Time Solvability in Eisenberg-Gale Markets

We study the structure of EG[2], the class of Eisenberg-Gale markets with two agents. We prove that all markets in this class are rational and they admit strongly polynomial algorithms whenever the polytope containing the set of feasible utilities of the two agents can be described via a combinatorial LP. This helps resolve positively the status of two markets left as open problems by [JV]: the capacity allocation market in a directed graph with two source-sink pairs and the network coding market in a directed network with two sources.

Our algorithms for solving the corresponding nonlinear convex programs are fundamentally different from those obtained by [JV]; whereas they use the primal-dual schema, we use a carefully constructed binary search.

Deeparnab Chakrabarty, Nikhil Devanur, Vijay V. Vazirani
Making Economic Theory Operational

We focus on the opportunity the Internet has provided for a fully operational Economic Theory. We review a few of the current developments of the algorithmic approach in its study and postulate on the important issues that lie ahead of us.

Xiaotie Deng
Sparse Games Are Hard

A two-player game is sparse if most of its payoff entries are zeros. We show that the problem of computing a Nash equilibrium remains PPAD-hard to approximate in fully polynomial time for sparse games. On the algorithmic side, we give a simple and polynomial-time algorithm for finding exact Nash equilibria in a class of sparse win-lose games.

Xi Chen, Xiaotie Deng, Shang-Hua Teng
Market Equilibria with Hybrid Linear-Leontief Utilities

We introduce a new family of utility functions for exchange markets. This family provides a natural and “continuous” hybridization of the traditional linear and Leontief utilities and might be useful in understanding the complexity of computing and approximating market equilibria. Because this family of utility functions contains Leontief utility functions as special cases, finding approximate Arrow-Debreu equilibria with hybrid linear-Leontief utilities is PPAD-hard in general. In contrast, we show that, when the Leontief components are grouped, finite and well-conditioned, we can efficiently compute an approximate Arrow-Debreu equilibrium.
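To make the hybridization concrete, here is one plausible reading of a grouped linear-Leontief utility (the grouping, weights, and normalization here are illustrative assumptions; the exact definition is in the paper): Leontief within each group of goods, combined linearly across groups.

```python
def hybrid_utility(bundle, groups):
    # bundle: {good: amount x_j}
    # groups: list of (b_k, {good: a_j}) pairs -- within group k the agent
    # has Leontief utility min_j x_j / a_j, and the group utilities are
    # combined linearly with weights b_k. Singleton groups recover a linear
    # utility; a single group recovers a pure Leontief utility.
    total = 0.0
    for b, coeffs in groups:
        total += b * min(bundle.get(g, 0.0) / a for g, a in coeffs.items())
    return total

# Hypothetical bundle: goods x and y are Leontief complements (ratio 1:2),
# good z enters linearly with weight 2.
u = hybrid_utility({"x": 2.0, "y": 4.0, "z": 3.0},
                   [(1.0, {"x": 1.0, "y": 2.0}), (2.0, {"z": 1.0})])
```

Varying the group structure continuously interpolates between the two classical families, which is the sense in which the family is a “continuous” hybridization.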

Xi Chen, Li-Sha Huang, Shang-Hua Teng
Polynomial Algorithms for Approximating Nash Equilibria of Bimatrix Games

We focus on the problem of computing an ε-Nash equilibrium of a bimatrix game, when ε is an absolute constant. We present a simple algorithm for computing a $\frac{3}{4}$-Nash equilibrium for any bimatrix game in strongly polynomial time, and we next show how to extend this algorithm so as to obtain a (potentially stronger) parameterized approximation. Namely, we present an algorithm that computes a $\frac{2+\lambda}{4}$-Nash equilibrium, where λ is the minimum, among all Nash equilibria, expected payoff of either player. The suggested algorithm runs in time polynomial in the number of strategies available to the players.

Spyros C. Kontogiannis, Panagiota N. Panagopoulou, Paul G. Spirakis
A Note on Approximate Nash Equilibria

In view of the intractability of finding a Nash equilibrium, it is important to understand the limits of approximation in this context. A subexponential approximation scheme is known [LMM03], and no approximation better than $1\over 4$ is possible by any algorithm that examines equilibria involving fewer than log n strategies [Alt94]. We give a simple, linear-time algorithm examining just two strategies per player and resulting in a $1\over 2$-approximate Nash equilibrium in any 2-player game. For the more demanding notion of well-supported approximate equilibrium due to [DGP06] no nontrivial bound is known; we show that the problem can be reduced to the case of win-lose games (games with all utilities 0–1), and that an approximation of $5\over 6$ is possible contingent upon a graph-theoretic conjecture.
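The two-strategies-per-player construction described above is simple enough to sketch. The following is a minimal rendering of the well-known argument (payoffs assumed normalized to [0, 1]; the choice of initial row is arbitrary):

```python
def half_approx_nash(R, C):
    # Row player picks an arbitrary row i; column player best-responds with
    # column j; row player best-responds to j with row k and then mixes i and
    # k with probability 1/2 each, while the column player plays j. For
    # payoffs in [0, 1] this is a 1/2-approximate Nash equilibrium.
    n, m = len(R), len(R[0])
    i = 0
    j = max(range(m), key=lambda col: C[i][col])   # best response to row i
    k = max(range(n), key=lambda row: R[row][j])   # best response to column j
    x = [0.0] * n
    x[i] += 0.5
    x[k] += 0.5
    y = [0.0] * m
    y[j] = 1.0
    return x, y

def regret(R, C, x, y):
    # Maximum gain either player could obtain by a unilateral deviation.
    n, m = len(R), len(R[0])
    row_pay = sum(x[a] * y[b] * R[a][b] for a in range(n) for b in range(m))
    col_pay = sum(x[a] * y[b] * C[a][b] for a in range(n) for b in range(m))
    row_best = max(sum(y[b] * R[a][b] for b in range(m)) for a in range(n))
    col_best = max(sum(x[a] * C[a][b] for a in range(n)) for b in range(m))
    return max(row_best - row_pay, col_best - col_pay)

# Matching pennies: the construction attains its worst-case bound of 1/2.
R = [[1, 0], [0, 1]]
C = [[0, 1], [1, 0]]
x, y = half_approx_nash(R, C)
```

The intuition: the row player’s mix puts half its weight on a best response to y, so its regret is at most 1/2; the column player already best-responds to half of the row player’s mix.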

Constantinos Daskalakis, Aranyak Mehta, Christos Papadimitriou
Ranking Sports Teams and the Inverse Equal Paths Problem

The problem of rank aggregation has been studied in contexts varying from sports, to multi-criteria decision making, to machine learning, to academic citations, to ranking web pages, and to descriptive decision theory. Rank aggregation is the mapping of inputs that rank subsets of a set of objects into a consistent ranking that represents in some meaningful way the various inputs. In the ranking of sports competitors, academic citations, or web pages, the inputs are in the form of pairwise comparisons. We present here a new paradigm using an optimization framework that addresses major shortcomings in current models of aggregate ranking. Ranking methods are often criticized for being subjective and ignoring some factors or emphasizing others. In the ranking scheme here, subjective considerations can be easily incorporated while their contributions to the overall ranking are made explicit.

The inverse equal paths problem is introduced here, and is shown to be tightly linked to the problem of aggregate ranking “optimally”. This framework is useful in making an optimization framework available and in introducing specific performance measures for the quality of the aggregate ranking as per its deviations from the input rankings provided. Presented as an inverse equal paths problem, we devise for the aggregate ranking problem polynomial-time combinatorial algorithms for convex penalty functions of the deviations, and show the NP-hardness of some forms of nonlinear penalty functions. Interestingly, the algorithmic setup of the problem is that of a network flow problem.

We compare the equal paths scheme here to the eigenvector method, Google PageRank for ranking web sites, and the academic citation method for ranking academic papers.

Dorit S. Hochbaum
Price of Anarchy for Polynomial Wardrop Games

In this work, we consider Wardrop games where traffic has to be routed through a shared network. Traffic is allowed to be split into arbitrary pieces and can be modeled as network flow. For each edge in the network there is a latency function that specifies the time needed to traverse the edge given its congestion. In a Wardrop equilibrium, all used paths between a given source-destination pair have equal and minimal latency.

In this paper, we allow for polynomial latency functions with an upper bound d and a lower bound s on the degree of all monomials that appear in the polynomials. For this environment, we prove upper and lower bounds on the price of anarchy.
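For intuition on why the degree bound d matters, the classical Pigou example already shows the price of anarchy growing with d (an illustration of the standard example, not the paper’s bounds): one unit of flow chooses between a constant-latency link and a link with latency x^d.

```python
def pigou_poa(d, steps=10**5):
    # Pigou network: one unit of splittable flow over two parallel links with
    # latencies l1(x) = x**d and l2(x) = 1. At Wardrop equilibrium all flow
    # uses link 1 (latency 1 there, never more than the alternative), so the
    # equilibrium cost is 1. The social optimum routes a fraction x on link 1,
    # minimizing x * x**d + (1 - x); we approximate the minimum on a grid.
    best = min(x / steps * (x / steps) ** d + (1 - x / steps)
               for x in range(steps + 1))
    return 1.0 / best
```

For d = 1 this recovers the familiar ratio 4/3, and the ratio grows without bound (roughly like d / ln d) as the degree d increases.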

Dominic Dumrauf, Martin Gairing
Wardrop Equilibria and Price of Stability for Bottleneck Games with Splittable Traffic

We look at the scenario of having to route a continuous rate of traffic from a source node to a sink node in a network, where the objective is to maximize throughput. This is of interest, e.g., for providers of streaming content in communication networks. The overall path latency, which was relevant in other non-cooperative network routing games such as the classic Wardrop model, is of lesser concern here.

To that end, we define bottleneck games with splittable traffic where the throughput on a path is inversely proportional to the maximum latency of an edge on that very path—the bottleneck latency. Therefore, we define a Wardrop equilibrium as a traffic distribution where this bottleneck latency is at minimum on all used paths. As a measure for the overall system well-being—called social cost—we take the weighted sum of the bottleneck latencies of all paths.

Our main findings are as follows: First, we prove the social cost of Wardrop equilibria on series-parallel graphs to be unique. Even more, for any graph whose subgraph induced by all simple start-destination paths is not series-parallel, there exist games having equilibria with different social cost. For the price of stability, we give an independence result with regard to the network topology. Finally, our main result is giving a new exact price of stability for Wardrop/bottleneck games on parallel links with M/M/1 latency functions. This result is at the same time the exact price of stability for bottleneck games on general graphs.

Vladimir Mazalov, Burkhard Monien, Florian Schoppmann, Karsten Tiemann
A Worm Propagation Model Based on People’s Email Acquaintance Profiles

One propagation vector frequently exploited by worms is the victim's contact book. The contact book, which reflects people's acquaintance profiles, is used as a "hit-list" to which the worm can send itself in order to spread fast. In this paper we propose a discrete worm propagation model that relies on the combined email and Instant Messaging (IM) communication behaviour of users. We also model user reaction to infected email as well as the rate at which antivirus software is installed. User acquaintance is perceived as a "network" connecting users through their contact-book links. We then propose a worm propagation formulation based on a token propagation algorithm, further analyzed using a system of continuous differential equations, as dictated by Wormald's theorem on approximating "well-behaving" random processes with deterministic functions.
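The flavour of such a deterministic approximation can be conveyed by a minimal logistic-style system. This is our own stripped-down sketch with hypothetical parameters, not the authors' actual equations: the infected fraction i grows through contact-book links at rate beta and shrinks at rate gamma as users react and install antivirus software.

```python
# Euler integration of di/dt = beta * i * (1 - i) - gamma * i,
# a minimal deterministic worm-propagation approximation
# (hypothetical parameters, not the paper's model).

def simulate(beta=0.8, gamma=0.2, i0=0.01, dt=0.01, steps=5000):
    i = i0
    for _ in range(steps):
        di = beta * i * (1.0 - i) - gamma * i   # infection minus cleanup
        i += dt * di
    return i

final = simulate()
print(final)   # converges to the endemic level 1 - gamma/beta = 0.75
```

Setting the right-hand side to zero gives the endemic fixed point i = 1 - gamma/beta, which the trajectory approaches.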

T. Komninos, Y. C. Stamatiou, G. Vavitsas
Mixed Strategies in Combinatorial Agency
(Extended Abstract)

We study a setting where a principal needs to motivate a team of agents whose combination of hidden efforts stochastically determines an outcome. In a companion paper we devise and study a basic “combinatorial agency” model for this setting, where the principal is restricted to inducing a pure Nash equilibrium. Here, we show that the principal may possibly gain from inducing a mixed equilibrium, but this gain can be bounded for various families of technologies (in particular if a technology has symmetric combinatorial structure). In addition, we present a sufficient condition under which mixed strategies yield no gain to the principal.
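To make the incentive calculation concrete, here is a numeric sketch of a heavily simplified two-agent AND-technology variant (our own stripped-down example with made-up parameters, not the authors' model): each agent succeeds with probability gamma when exerting costly effort and delta otherwise, the project succeeds only if both do, and the principal pays each agent p contingent on success.

```python
# Simplified combinatorial-agency example (hypothetical parameters).
gamma, delta = 0.9, 0.5   # per-agent success prob with / without effort
c, v = 1.0, 20.0          # effort cost and principal's value for success

# Incentive constraint for agent i when the other agent exerts effort:
#   gamma * gamma * p - c >= delta * gamma * p
# so the minimal success-contingent payment inducing pure effort is:
p_min = c / (gamma * (gamma - delta))

# Principal's expected utility at the minimal payment, vs. paying nothing:
utility_effort = gamma ** 2 * (v - 2 * p_min)
utility_no_pay = delta ** 2 * v
print(p_min, utility_effort, utility_no_pay)
```

In this instance inducing effort is profitable for the principal; the paper's question is when mixing over effort levels can do even better.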

Moshe Babaioff, Michal Feldman, Noam Nisan
The Sound of Silence: Mining Implicit Feedbacks to Compute Reputation

A reliable mechanism for scoring the reputation of sellers is crucial for the development of a successful environment for customer-to-customer e-commerce. Unfortunately, most C2C environments utilize simple feedback-based reputation systems that not only offer insufficient protection from fraud, but also tend to overestimate the reputation of sellers by introducing a strong bias toward maximizing the volume of sales at the expense of the quality of service.

In this paper we present a method that avoids the unfavorable phenomenon of overestimating sellers' reputation by using implicit feedback. We introduce the notion of implicit feedback and propose two strategies for discovering it. We perform a twofold evaluation of our proposal. First, to demonstrate the existence of implicit feedback and to develop an advanced method for discovering it, we conduct experiments on a large volume of real-world data acquired from an online auction site. Next, we present a game-theoretic approach that uses simulation to show that implicit feedback can improve a simple reputation system such as the one used by eBay. Both the simulation results and the experimental results demonstrate the validity and importance of using implicit feedback in reputation scoring.
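The upward bias of explicit-feedback-only scoring is easy to reproduce in a toy simulation (our own illustration with made-up response rates, not the paper's experiment): dissatisfied buyers mostly stay silent instead of leaving negative feedback, so a score based only on explicit feedback overshoots the seller's true quality, while counting post-transaction silence as an implicit negative signal corrects it.

```python
import random

random.seed(1)
q = 0.6                    # true probability a transaction satisfies the buyer
pos = neg = silent = 0
for _ in range(10000):
    if random.random() < q:          # satisfied -> leaves positive feedback
        pos += 1
    elif random.random() < 0.2:      # dissatisfied -> complains only 20% of the time
        neg += 1
    else:                            # dissatisfied -> stays silent
        silent += 1

naive = pos / (pos + neg)            # explicit feedback only (eBay-style)
implicit = pos / (pos + neg + silent)  # silence counted as implicit negative
print(naive, implicit)   # naive score overshoots the true quality q
```

The naive score lands near 0.88 for a seller whose true quality is 0.6, while the implicit-feedback score tracks the true quality.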

Mikołaj Morzy, Adam Wierzbicki
Strongly Polynomial-Time Truthful Mechanisms in One Shot

One of the main challenges in algorithmic mechanism design is to turn (existing) efficient algorithmic solutions into efficient truthful mechanisms. Building a truthful mechanism is indeed a difficult process since the underlying algorithm must obey certain "monotonicity" properties and suitable payment functions need to be computed (this task usually represents the bottleneck in the overall time complexity).

We provide a general technique for building truthful mechanisms that provide optimal solutions in strongly polynomial time. We show that the entire mechanism can be obtained if one is able to express/write a strongly polynomial-time algorithm (for the corresponding optimization problem) as a "suitable combination" of simpler algorithms. This approach applies to a wide class of mechanism design graph problems, where each selfish agent corresponds to a weighted edge in a graph (the weight of the edge is the cost of using that edge). Our technique can be applied to several optimization problems which prior results cannot handle (e.g., MIN-MAX optimization problems).

As an application, we design the first (strongly polynomial-time) truthful mechanism for the minimum diameter spanning tree problem, by obtaining it directly from an existing algorithm for solving this problem. For this non-utilitarian MIN-MAX problem, no truthful mechanism was known, even considering those running in exponential time (indeed, exact algorithms do not necessarily yield truthful mechanisms). Also, standard techniques for payment computations may result in a running time which is not polynomial in the size of the input graph. The overall running time of our mechanism, instead, is polynomial in the number n of nodes and m of edges, and it is only a factor O(α(n, n)) away from the best-known canonical centralized algorithm.
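The paper's MIN-MAX construction is beyond a short sketch, but its underlying ingredient, a monotone algorithm plus threshold-based payments for single-parameter edge agents, can be illustrated on the classic truthful shortest-path mechanism (our own example, not the paper's algorithm): each winning edge is paid the largest cost it could have declared and still been selected.

```python
import heapq

def dijkstra(graph, s, t, excluded=frozenset()):
    """Return (cost, edge set) of a shortest s-t path, skipping excluded edges."""
    dist, prev = {s: 0.0}, {}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            if (u, v) in excluded:
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if t not in dist:
        return float("inf"), set()
    path, v = set(), t
    while v != s:                      # walk predecessors back to s
        path.add((prev[v], v))
        v = prev[v]
    return dist[t], path

# Hypothetical 4-node instance; edge weights are the agents' declared costs.
graph = {"s": [("a", 2.0), ("b", 5.0)], "a": [("t", 3.0)], "b": [("t", 1.0)]}
cost, winners = dijkstra(graph, "s", "t")

# Threshold payment for a winning edge e: cost of the best path avoiding e,
# minus the winning path's cost excluding e's own declared cost.
weights = {(u, v): w for u, edges in graph.items() for v, w in edges}
payments = {e: dijkstra(graph, "s", "t", {e})[0] - (cost - weights[e])
            for e in winners}
print(cost, payments)
```

Here the winning path s-a-t costs 5, the best alternative costs 6, and each winning edge is paid its threshold (3 for (s, a), 4 for (a, t)), which exceeds its declared cost, so truth-telling is a dominant strategy.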

Paolo Penna, Guido Proietti, Peter Widmayer
Secretary Problems with Competing Employers

In many decentralized labor markets, job candidates are offered positions at very early stages in the hiring process. It has been argued that these early offers are an effect of the competition between employers for the best candidate. This work studies the timing of offers in a theoretical model based on the classical secretary problem. We consider a secretary problem with multiple employers and study the equilibria of the induced game. Our results confirm the observation of early offers in labor markets: for several classes of strategies based on optimal stopping theory, as the number of employers grows, the timing of the earliest offer decreases.
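As background for the classical single-employer baseline (our own illustration, not the paper's multi-employer game), the optimal stopping rule rejects the first n/e candidates and then hires the first candidate better than everyone seen so far; it hires the best candidate with probability tending to 1/e.

```python
import math
import random

def hires_best(n, rng):
    """One trial of the 1/e stopping rule on a random permutation of n ranks."""
    ranks = list(range(n))          # 0 = worst candidate, n - 1 = best
    rng.shuffle(ranks)
    cutoff = int(n / math.e)
    best_seen = max(ranks[:cutoff], default=-1)
    for r in ranks[cutoff:]:
        if r > best_seen:
            return r == n - 1       # hired this candidate: was it the best?
    return False                     # never made an offer

rng = random.Random(0)
trials = 20000
success = sum(hires_best(50, rng) for _ in range(trials)) / trials
print(success)   # close to 1/e ~ 0.368
```

The competing-employer setting studied in the paper pushes offers earlier than this single-employer cutoff.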

Nicole Immorlica, Robert Kleinberg, Mohammad Mahdian
Backmatter
Metadata
Title
Internet and Network Economics
edited by
Paul Spirakis
Marios Mavronicolas
Spyros Kontogiannis
Copyright Year
2006
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-68141-0
Print ISBN
978-3-540-68138-0
DOI
https://doi.org/10.1007/11944874
