
About this Book

This book constitutes the thoroughly refereed proceedings of the 12th International Conference on Web and Internet Economics, WINE 2016, held in Montreal, QC, Canada, in December 2016. The 35 regular papers presented together with 3 invited talks were carefully reviewed and selected from 88 submissions.
The Conference on Web and Internet Economics (WINE) is an interdisciplinary forum for the exchange of ideas and results on incentives and computation arising from the following fields: Theoretical Computer Science, Artificial Intelligence, and Microeconomics.



Computing Equilibria with Partial Commitment

In security games, the solution concept commonly used is that of a Stackelberg equilibrium where the defender gets to commit to a mixed strategy. The motivation for this is that the attacker can repeatedly observe the defender’s actions and learn her distribution over actions, before acting himself. If the actions were not observable, Nash (or perhaps correlated) equilibrium would arguably be a more natural solution concept. But what if some, but not all, aspects of the defender’s actions are observable? In this paper, we introduce solution concepts corresponding to this case, both with and without correlation. We study their basic properties, whether these solutions can be efficiently computed, and the impact of additional observability on the utility obtained.

Vincent Conitzer

Distributed Methods for Computing Approximate Equilibria

We present a new, distributed method to compute approximate Nash equilibria in bimatrix games. In contrast to previous approaches that analyze the two payoff matrices at the same time (for example, by solving a single LP that combines the two players’ payoffs), our algorithm first solves two independent LPs, each of which is derived from one of the two payoff matrices, and then computes an approximate Nash equilibrium using only limited communication between the players. Our method gives improved bounds on the complexity of computing approximate Nash equilibria in a number of different settings. Firstly, it gives a polynomial-time algorithm for computing approximate well-supported Nash equilibria (WSNE) that always finds a 0.6528-WSNE, beating the previous best guarantee of 0.6608. Secondly, since our algorithm solves the two LPs separately, it can be applied to give an improved bound in the limited-communication setting: a randomized expected-polynomial-time algorithm that uses poly-logarithmic communication and finds a 0.6528-WSNE, which beats the previous best known guarantee of 0.732. It can also be applied to the case of approximate Nash equilibria, where we obtain a randomized expected-polynomial-time algorithm that uses poly-logarithmic communication and always finds a 0.382-approximate Nash equilibrium, which improves the previous best guarantee of 0.438. Finally, the method can also be applied in the query-complexity setting to give an algorithm that makes $O(n \log n)$ payoff queries and always finds a 0.6528-WSNE, which improves the previous best known guarantee of 2/3.

Artur Czumaj, Argyrios Deligkas, Michail Fasoulakis, John Fearnley, Marcin Jurdziński, Rahul Savani
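As a concrete illustration of the solution concept involved (not of the authors’ algorithm), the sketch below computes the smallest $\epsilon$ for which a given pair of mixed strategies is an $\epsilon$-WSNE of a bimatrix game: every pure strategy played with positive probability must be within $\epsilon$ of a best response. The function name and the pure-Python representation are illustrative choices.

```python
def wsne_epsilon(R, C, x, y, tol=1e-9):
    """Smallest eps for which (x, y) is an eps-well-supported NE of the
    bimatrix game (R, C).  R[i][j] is the row player's payoff and C[i][j]
    the column player's payoff when row i meets column j."""
    n, m = len(R), len(R[0])
    # Expected payoff of each pure row strategy against the mixed strategy y.
    row_payoffs = [sum(R[i][j] * y[j] for j in range(m)) for i in range(n)]
    # Expected payoff of each pure column strategy against x.
    col_payoffs = [sum(C[i][j] * x[i] for i in range(n)) for j in range(m)]
    best_row, best_col = max(row_payoffs), max(col_payoffs)
    # In a WSNE, every pure strategy in the support must be near-optimal.
    eps_row = max((best_row - row_payoffs[i] for i in range(n) if x[i] > tol),
                  default=0.0)
    eps_col = max((best_col - col_payoffs[j] for j in range(m) if y[j] > tol),
                  default=0.0)
    return max(eps_row, eps_col)
```

For matching pennies, uniform mixing by both players yields $\epsilon = 0$, i.e., an exact well-supported equilibrium, while a pure-strategy profile is far from one.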

Inapproximability Results for Approximate Nash Equilibria

We study the problem of finding approximate Nash equilibria that satisfy certain conditions, such as providing good social welfare. In particular, we study the problem $\epsilon$-NE $\delta$-SW: find an $\epsilon$-approximate Nash equilibrium ($\epsilon$-NE) that is within $\delta$ of the best social welfare achievable by an $\epsilon$-NE. Our main result is that, if the randomized exponential-time hypothesis (RETH) is true, then solving $\left( \frac{1}{8} - \mathrm{O}(\delta )\right)$-NE $\mathrm{O}(\delta)$-SW for an $n \times n$ bimatrix game requires $n^{\widetilde{\Omega}(\delta^{\varLambda} \log n)}$ time, where $\varLambda$ is a constant. Building on this result, we show similar conditional running-time lower bounds on a number of decision problems for approximate Nash equilibria that do not involve social welfare, including maximizing or minimizing a certain player’s payoff, or finding approximate equilibria contained in a given pair of supports. We show quasi-polynomial lower bounds for these problems assuming that RETH holds, and these lower bounds apply to $\epsilon$-Nash equilibria for all $\epsilon < \frac{1}{8}$. The hardness of these other decision problems has so far only been studied in the context of exact equilibria.

Argyrios Deligkas, John Fearnley, Rahul Savani

Multilinear Games

In many games, players’ decisions consist of multiple sub-decisions, and hence can give rise to an exponential number of pure strategies. However, this set of pure strategies is often structured, allowing it to be represented compactly, as in network congestion games, security games, and extensive-form games. Reduction to the standard normal form generally introduces an exponential blow-up in the strategy space and is therefore inefficient for computational purposes. Although individual classes of such games have been studied, there currently exist no general-purpose algorithms for computing equilibria. To address this, we define multilinear games, which generalize all of the above. Informally, a game is multilinear if its utility functions are linear in each player’s strategy, while fixing the other players’ strategies. Thus, if pure strategies, even if exponentially many, are vectors in polynomial dimension, then we show that mixed strategies have an equivalent representation in terms of marginals forming a polytope in polynomial dimension. The canonical representation for multilinear games can still be exponential in the number of players, a typical obstacle in multi-player games. Therefore, it is necessary to assume additional structure that allows computation of certain sub-problems in polynomial time. Towards this, we identify two key subproblems: computation of utility gradients, and optimization of linear functions over the strategy polytope. Given a multilinear game with polynomial-time subroutines for these two tasks, we show the following: (a) We construct a polynomially computable and continuous fixed-point formulation, and show that its approximate fixed points are approximate Nash equilibria (NE). This places approximate NE computation in PPAD, and settles its complexity as PPAD-complete.
(b) Even though a coarse correlated equilibrium can potentially have an exponential representation, through LP duality and a carefully designed separation oracle, we provide a polynomial-time algorithm to compute one with a polynomial representation. (c) We show the existence of an approximate NE with support size logarithmic in the strategy-polytope dimension.

Hau Chan, Albert Xin Jiang, Kevin Leyton-Brown, Ruta Mehta

Power-Law Distributions in a Two-Sided Market and Net Neutrality

“Net neutrality” often refers to the policy dictating that an Internet service provider (ISP) cannot charge content providers (CPs) for delivering their content to consumers. Many past quantitative models designed to determine whether net neutrality is a good idea have been rather equivocal in their conclusions. Here we propose a very simple two-sided market model, in which the types of the consumers and the CPs are power-law distributed — a kind of distribution known to often arise precisely in connection with Internet-related phenomena. We derive mostly analytical, closed-form results for several regimes: (a) Net neutrality, (b) social optimum, (c) maximum revenue by the ISP, or (d) maximum ISP revenue under quality differentiation. One unexpected conclusion is that (a) and (b) will differ significantly, unless average CP productivity is very high.

Xiaotie Deng, Zhe Feng, Christos H. Papadimitriou

On-Demand or Spot? Selling the Cloud to Risk-Averse Customers

In Amazon EC2, cloud resources are sold through a combination of an on-demand market, in which customers buy resources at a fixed price, and a spot market, in which customers bid for an uncertain supply of excess resources. Standard models of market environments suggest that an optimal design uses just one type of market. We show that the prevalence of a dual market system can be explained by the heterogeneous risk attitudes of customers. In our stylized model, we consider unit-demand risk-averse bidders. We show that the model admits a unique equilibrium, with higher revenue and higher welfare than using only spot markets. Furthermore, as risk aversion increases, the usage of the on-demand market increases. We conclude that risk attitudes are an important factor in cloud resource allocation and should be incorporated into models of cloud markets.

Darrell Hoy, Nicole Immorlica, Brendan Lucier

Buying Data from Privacy-Aware Individuals: The Effect of Negative Payments

We study a market model where a data analyst uses monetary incentives to procure private data from strategic data subjects/individuals. To characterize individuals’ privacy concerns, we consider a local model of differential privacy, where the individuals do not trust the analyst, so privacy costs are incurred when the data is reported to the data analyst. We investigate a basic model where the private data are bits that represent the individuals’ knowledge about an underlying state, and the analyst pays each individual according to all the reported data. The data analyst’s goal is to design a payment mechanism such that, at an equilibrium, she can learn the state to a target accuracy while minimizing the corresponding total expected payment. What makes the mechanism design more challenging is that not only the data but also the privacy costs are unknown to the data analyst; these costs reflect individuals’ valuations of privacy and are modeled by “cost coefficients.” To cope with the uncertainty in the cost coefficients and drive down the data analyst’s cost, we utilize possible negative payments (which penalize individuals with “unacceptably” high valuations of privacy) and explore interim individual rationality. We design a family of payment mechanisms, each of which has a Bayesian Nash equilibrium where the individuals exhibit a threshold behavior: the individuals with cost coefficients above a threshold choose not to participate, and the individuals with cost coefficients below the threshold participate and report data with a quality guarantee. By choosing appropriate parameters, we obtain a sequence of mechanisms as the number of individuals grows large. Each mechanism in this sequence fulfills the accuracy goal at a Bayesian Nash equilibrium, and the total expected payment at the equilibrium goes to zero; i.e., this sequence of mechanisms is asymptotically optimal.

Weina Wang, Lei Ying, Junshan Zhang

Bidding Strategies for Fantasy-Sports Auctions

Fantasy sports is a fast-growing, multi-billion dollar industry [10] in which competitors assemble virtual teams of athletes from real professional sports leagues and obtain points based on the statistical performance of those athletes in actual games. Users (team managers) can add, drop, and trade players throughout the season, but the pivotal event is the player draft that initiates the competition. One common drafting mechanism is the so-called auction draft: managers bid on athletes in rounds until all positions on each roster have been filled. Managers start with the same initial virtual budget and take turns successively nominating athletes to be auctioned, with the winner of each round making a virtual payment that diminishes his budget for future rounds. Each manager tries to obtain players that maximize the expected performance of his own team. In this paper we initiate the study of bidding strategies for fantasy sports auction drafts, focusing on the design and analysis of simple strategies that achieve good worst-case performance, obtaining a constant fraction of the best value possible, regardless of competing managers’ bids. Our findings may be useful in guiding bidding behavior of fantasy sports participants, and perhaps more importantly may provide the basis for a competitive auto-draft mechanism to be used as a bidding proxy for participants who are absent from their league’s draft.

Aris Anagnostopoulos, Ruggiero Cavallo, Stefano Leonardi, Maxim Sviridenko

Competitive Equilibria for Non-quasilinear Bidders in Combinatorial Auctions

Quasilinearity is a ubiquitous and questionable assumption in the standard study of Walrasian equilibria. Quasilinearity implies that a buyer’s value for goods purchased in a Walrasian equilibrium is always additive with goods purchased with unspent money. It is a particularly suspect assumption in combinatorial auctions, where buyers’ complex preferences over goods would naturally extend beyond the items obtained in the Walrasian equilibrium. We study Walrasian equilibria in combinatorial auctions when quasilinearity is not assumed. We show that existence can be reduced to an Arrow-Debreu style market with one divisible good and many indivisible goods, and that a “fractional” Walrasian equilibrium always exists. We also show that standard integral Walrasian equilibria are related to integral solutions of an induced configuration LP associated with a fractional Walrasian equilibrium, generalizing known results for both quasilinear and non-quasilinear settings.

Rad Niazadeh, Christopher A. Wilkens

Correlated and Coarse Equilibria of Single-Item Auctions

We study correlated equilibria and coarse equilibria of simple first-price single-item auctions in the simplest auction model of full information. Nash equilibria are known to always yield full efficiency and a revenue that is at least the second-highest value. We prove that the same is true for all correlated equilibria, even those in which agents overbid – i.e., bid above their values. Coarse equilibria, in contrast, may yield lower efficiency and revenue. We show that the revenue can be as low as $26\%$ of the second-highest value in a coarse equilibrium, even if agents are assumed not to overbid, and this is tight. We also show that when players do not overbid, the worst-case bound on social welfare at coarse equilibrium improves from $63\%$ of the highest value to $81\%$, and this bound is tight as well.

Michal Feldman, Brendan Lucier, Noam Nisan

Pricing to Maximize Revenue and Welfare Simultaneously in Large Markets

We study large markets with a single seller who can produce many types of goods, and many multi-minded buyers. The seller chooses posted prices for its many items, and the buyers purchase bundles to maximize their utility. For this setting, we consider the following questions: what fraction of the optimum social welfare does a revenue-maximizing solution achieve? Are there pricing mechanisms which achieve both good revenue and good welfare simultaneously? To address these questions, we give envy-free pricing schemes which are guaranteed to result in both good revenue and welfare, as long as the buyer valuations for the goods they desire have some reasonable structure, e.g., the aggregate buyer demand has a monotone hazard rate or is not too convex. We also show that our pricing schemes have implications for any solution which achieves high revenue: specifically, in many settings, prices that (approximately) maximize profit also result in high social welfare. Our results hold for general multi-minded buyers in large markets with production costs; we also provide improved guarantees for the important special case of unit-demand buyers.

Elliot Anshelevich, Koushik Kar, Shreyas Sekar

A Prior-Independent Revenue-Maximizing Auction for Multiple Additive Bidders

Recent work by Babaioff et al. [1], Yao [30], and Cai et al. [7] shows how to construct an approximately optimal auction for additive bidders, given access to the priors from which the bidders’ values are drawn. In this paper, building on the single sample approach of Dhangwatnotai et al. [15], we show how the auctioneer can obtain approximately optimal expected revenue in this setting without knowing the priors, as long as the item distributions are regular.

Kira Goldner, Anna R. Karlin

Optimal Mechanism for Selling Two Items to a Single Buyer Having Uniformly Distributed Valuations

We consider the design of an optimal mechanism for a seller selling two items to a single buyer so that the expected revenue to the seller is maximized. The buyer’s valuation of the two items is assumed to be the uniform distribution over an arbitrary rectangle $[c_1,c_1+b_1]\times [c_2,c_2+b_2]$ in the positive quadrant. The solution to the case when $(c_1,c_2)=(0,0)$ was already known. We provide an explicit solution for arbitrary nonnegative values of $(c_1,c_2,b_1,b_2)$. We prove that the optimal mechanism is to sell the two items according to one of eight simple menus. We also prove that the solution is deterministic when either $c_1$ or $c_2$ is beyond a threshold. Finally, we conjecture that our methodology can be extended to a wider class of distributions. We also provide some preliminary results to support the conjecture.

Thirumulanathan D., Rajesh Sundaresan, Y. Narahari

Anonymous Auctions Maximizing Revenue

Auctions like sponsored search often implicitly or explicitly require that bidders are treated fairly. This may be because large bidders have market power to negotiate equal treatment, because small bidders are difficult to identify, or for many other reasons. We study so-called anonymous auctions to understand the revenue tradeoffs and to develop simple anonymous auctions that are approximately optimal. We begin with the canonical digital goods setting and show that the optimal anonymous, ex-post incentive compatible auction has an intuitive structure — imagine that bidders are randomly permuted before the auction, then infer a posterior belief about bidder i’s valuation from the values of other bidders and set a posted price that maximizes revenue given this posterior. We prove that no anonymous mechanism can guarantee an approximation better than $\varTheta(n)$ to the optimal revenue in the worst case (or $\varTheta(\log n)$ for regular distributions) and that even posted price mechanisms match those guarantees. Understanding that the real power of anonymous mechanisms comes when the auctioneer can infer the bidder identities accurately, we show a tight $\varTheta(k)$ approximation guarantee when each bidder can be confused with at most k “higher types”. Moreover, we introduce a simple mechanism based on n target prices that is asymptotically optimal. Finally, we return to our original motivation and build on this mechanism to extend our results to m-unit auctions and sponsored search.

Christos Tzamos, Christopher A. Wilkens

Revenue Maximizing Envy-Free Pricing in Matching Markets with Budgets

We study envy-free pricing mechanisms in matching markets with m items and n budget-constrained buyers. Each buyer is interested in a subset of the items on sale, and she values every item in her preference set at the same single value. Moreover, each buyer has a budget that constrains her maximum affordable payment, and she aims to obtain as many items from her preference set as possible. Our goal is to compute an envy-free pricing allocation that maximizes the revenue. This pricing problem is hard to approximate better than $\varOmega(\mathrm{min}\{n,m\}^{1/2-\epsilon})$ for any $\epsilon >0$, unless $P=NP$ [7]. The goal of this paper is to circumvent the hardness result by restricting ourselves to specific settings of valuations and budgets. Two particularly significant scenarios are: each buyer has a budget that is greater than her single-value valuation, and each buyer has a budget that is lower than her single-value valuation. Surprisingly, in both scenarios we are able to achieve a 1/4-approximation to the optimal envy-free revenue.

Riccardo Colini-Baldeschi, Stefano Leonardi, Qiang Zhang

Conference Program Design with Single-Peaked and Single-Crossing Preferences

We consider the Conference Program Design (CPD) problem, a multi-round generalization of (the maximization versions of) q-Facility Location and the Chamberlin-Courant multi-winner election, introduced by (Caragiannis, Gourvès and Monnot, IJCAI 2016). CPD asks for the selection of kq items and their assignment to k disjoint sets of size q each. The agents receive utility only from their best item in each set, and we want to maximize the total utility derived by all agents from all sets. Given that CPD is $\mathbf{NP}$-hard for general utilities, we focus on utility functions that are either single-peaked or single-crossing. For general single-peaked utilities, we show that CPD is solvable in polynomial time and that Percentile Mechanisms are truthful. If the agent utilities are given by distances in the unit interval, we show that a Percentile Mechanism achieves an approximation ratio of 1/3 if $q=1$, and at least $(2q-3)/(2q-1)$ for any $q \ge 2$. On the negative side, we show that a generalization of CPD, where some items must be assigned to specific sets in the solution, is $\mathbf{NP}$-hard for dichotomous single-peaked preferences. For single-crossing preferences, we present a dynamic programming exact algorithm that runs in polynomial time if k is constant.

Dimitris Fotakis, Laurent Gourvès, Jérôme Monnot

Truthful Facility Assignment with Resource Augmentation: An Exact Analysis of Serial Dictatorship

We study the truthful facility assignment problem, where a set of agents with private most-preferred points on a metric space are assigned to facilities that lie on the metric space, under capacity constraints on the facilities. The goal is to produce an assignment that minimizes the social cost, i.e., the total distance between the most-preferred points of the agents and their corresponding facilities in the assignment, under the constraint of truthfulness, which ensures that agents do not misreport their most-preferred points. We propose a resource augmentation framework, where a truthful mechanism is evaluated by its worst-case performance on an instance with enhanced facility capacities against the optimal mechanism on the same instance with the original capacities. We study a well-known mechanism, Serial Dictatorship, and provide an exact analysis of its performance. Among other results, we prove that Serial Dictatorship has approximation ratio $g/(g-2)$ when the capacities are multiplied by any integer $g \ge 3$. Our results suggest that even a limited augmentation of the resources can have remarkable effects on the performance of the mechanism; in particular, the approximation ratio goes to 1 as the augmentation factor becomes large. We complement our results with bounds on the approximation ratio of Random Serial Dictatorship, the randomized version of Serial Dictatorship, when there is no resource augmentation.

Ioannis Caragiannis, Aris Filos-Ratsikas, Søren Kristoffer Stiil Frederiksen, Kristoffer Arnsfelt Hansen, Zihan Tan
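A minimal sketch of the Serial Dictatorship mechanism in this setting, specialized (as an illustrative assumption) to points on the real line: agents are processed in a fixed order, and each is assigned to the nearest facility that still has spare capacity. Intuitively, truthfulness follows because an agent’s report influences only her own greedy choice, never the options left by earlier agents.

```python
def serial_dictatorship(agents, facilities, capacities):
    """Assign each agent (a point on the line) to the nearest facility
    with remaining capacity, processing agents in the given order.
    Returns the assignment and the social cost (total distance)."""
    remaining = list(capacities)
    assignment, cost = [], 0.0
    for a in agents:
        # Nearest facility among those that are not yet full.
        f = min((i for i in range(len(facilities)) if remaining[i] > 0),
                key=lambda i: abs(a - facilities[i]))
        remaining[f] -= 1
        assignment.append(f)
        cost += abs(a - facilities[f])
    return assignment, cost
```

For example, with agents at 0, 1, 2 and unit-capacity pressure at the left facility, the second agent is forced to the farther facility, which is exactly the kind of loss that the paper’s resource augmentation (multiplying capacities by g) alleviates.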

Putting Peer Prediction Under the Micro(economic)scope and Making Truth-Telling Focal

Peer prediction [19] is a (meta-)mechanism which, given any proper scoring rule, produces a mechanism to elicit private information from self-interested agents. Formally, truth-telling is a strict Nash equilibrium of the mechanism. Unfortunately, there may be other equilibria as well (including uninformative equilibria where all players simply report the same fixed signal, regardless of their true signal) and, typically, the truth-telling equilibrium does not have the highest expected payoff. The main result of this paper is to show that, in the symmetric binary setting, by tweaking peer prediction, in part by carefully selecting the proper scoring rule it is based on, we can make the truth-telling equilibrium focal—that is, truth-telling has a higher expected payoff than any other equilibrium. Along the way, we prove the following: in the setting where agents receive binary signals we (1) classify all equilibria of the peer-prediction mechanism; (2) introduce a new technical tool for understanding scoring rules, which allows us to make truth-telling pay better than any other informative equilibrium; (3) leverage this tool to provide an optimal version of the previous result, that is, we optimize the gap between the expected payoff of truth-telling and other informative equilibria; and (4) show that with a slight modification to the peer-prediction framework, we can, in general, make the truth-telling equilibrium focal—that is, truth-telling pays more than any other equilibrium (including the uninformative equilibria).

Yuqing Kong, Katrina Ligett, Grant Schoenebeck

Truthful Mechanisms for Matching and Clustering in an Ordinal World

We study truthful mechanisms for matching and related problems in a partial information setting, where the agents’ true utilities are hidden, and the algorithm only has access to ordinal preference information. Our model is motivated by the fact that in many settings, agents cannot express the numerical values of their utility for different outcomes, but are still able to rank the outcomes in their order of preference. Specifically, we study problems where the ground truth exists in the form of a weighted graph of agent utilities, but the algorithm can only elicit the agents’ private information in the form of a preference ordering for each agent induced by the underlying weights. Against this backdrop, we design truthful algorithms to approximate the true optimum solution with respect to the hidden weights. Our techniques yield universally truthful algorithms for a number of graph problems: a 1.76-approximation algorithm for Max-Weight Matching, a 2-approximation algorithm for Max k-matching, a 6-approximation algorithm for Densest k-subgraph, and a 2-approximation algorithm for Max Traveling Salesman as long as the hidden weights constitute a metric. Our results are the first non-trivial truthful approximation algorithms for these problems, and indicate that in many situations, we can design robust algorithms even when the agents may lie and only provide ordinal information instead of precise utilities.

Elliot Anshelevich, Shreyas Sekar

Computer-Aided Verification for Mechanism Design

We explore techniques from computer-aided verification to construct formal proofs of incentive properties. Because formal proofs can be automatically checked, agents do not need to manually check the properties, or even understand the proof. To demonstrate, we present the verification of a sophisticated mechanism: the generic reduction from Bayesian incentive compatible mechanism design to algorithm design given by Hartline, Kleinberg, and Malekian. This mechanism presents new challenges for formal verification, including essential use of randomness from both the execution of the mechanism and from the prior type distributions.

Gilles Barthe, Marco Gaboardi, Emilio Jesús Gallego Arias, Justin Hsu, Aaron Roth, Pierre-Yves Strub

Smoothness for Simultaneous Composition of Mechanisms with Admission

We study social welfare of learning outcomes in mechanisms with admission. In our repeated game there are n bidders and m mechanisms, and in each round each mechanism is available for each bidder only with a certain probability. Our scenario is an elementary case of simple mechanism design with incomplete information, where availabilities are bidder types. It captures natural applications in online markets with limited supply and can be used to model access of unreliable channels in wireless networks. If mechanisms satisfy a smoothness guarantee, existing results show that learning outcomes recover a significant fraction of the optimal social welfare. These approaches, however, have serious drawbacks in terms of plausibility and computational complexity. Also, the guarantees apply only when availabilities are stochastically independent among bidders. In contrast, we propose an alternative approach where each bidder uses a single no-regret learning algorithm and applies it in all rounds. This results in what we call availability-oblivious coarse correlated equilibria. It exponentially decreases the learning burden, simplifies implementation (e.g., as a method for channel access in wireless devices), and thereby addresses some of the concerns about Bayes-Nash equilibria and learning outcomes in Bayesian settings. Our main results are general composition theorems for smooth mechanisms when valuation functions of bidders are lattice-submodular. They rely on an interesting connection to the notion of correlation gap of submodular functions over product lattices.

Martin Hoefer, Thomas Kesselheim, Bojana Kodric

Motivating Time-Inconsistent Agents: A Computational Approach

We study the complexity of motivating time-inconsistent agents to complete long-term projects in a graph-based planning model as proposed by Kleinberg and Oren [5]. Given a task graph G with n nodes, our objective is to guide an agent towards a target node t under certain budget constraints. The crux is that the agent may change its strategy over time due to its present bias. We consider two strategies to guide the agent. First, a single reward is placed at t and arbitrary edges can be removed from G. Second, rewards can be placed at arbitrary nodes of G but no edges may be deleted. In both cases we show that it is NP-complete to decide if a given budget is sufficient to guide the agent. For the first setting, we give complementing upper and lower bounds on the approximability of the minimum required budget. In particular, we devise a $(1+\sqrt{n})$-approximation algorithm and prove NP-hardness for ratios greater than $\sqrt{n}/3$. Finally, we argue that the second setting does not permit any efficient approximation unless $\mathrm{P}=\mathrm{NP}$.

Susanne Albers, Dennis Kraft
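To make the agent model concrete, the sketch below simulates one common formalization of present bias, in the spirit of the Kleinberg-Oren model but with conventions chosen for illustration: at every node the agent pays the next edge at face value, discounts all remaining cost by a factor beta < 1, and re-plans. The re-planning at each step is what produces time-inconsistent behavior; beta = 1 recovers the cost-optimal walk. The graph encoding and function names are assumptions of this sketch.

```python
import heapq

def shortest_dist(graph, target):
    """Exact distance from every node to target (Dijkstra on reversed edges).
    graph maps a node to a list of (cost, successor) pairs."""
    rev = {}
    for v, edges in graph.items():
        for cost, w in edges:
            rev.setdefault(w, []).append((cost, v))
    dist, heap = {target: 0.0}, [(0.0, target)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float('inf')):
            continue
        for cost, u in rev.get(v, []):
            nd = d + cost
            if nd < dist.get(u, float('inf')):
                dist[u] = nd
                heapq.heappush(heap, (nd, u))
    return dist

def present_biased_walk(graph, source, target, beta):
    """Walk of an agent who, at each node, pays the next edge in full,
    discounts all future cost by beta, and re-plans from scratch."""
    dist = shortest_dist(graph, target)
    path, total, v = [source], 0.0, source
    while v != target:
        # Perceived cost of edge (v, w): pay c now, discount the rest.
        cost, w = min(graph[v], key=lambda e: e[0] + beta * dist[e[1]])
        total += cost
        path.append(w)
        v = w
    return path, total
```

On a tiny graph where s has a direct edge to t of cost 4 and a detour s-v-t of cost 1 + 3.5, a biased agent (beta = 0.5) takes the detour because the future 3.5 looks like 1.75 from s, and ends up paying 4.5 instead of the optimal 4.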

FPT Approximation Schemes for Maximizing Submodular Functions

We investigate the existence of approximation algorithms, running in fixed-parameter tractable (FPT) time, for maximization of submodular functions. Given a non-decreasing submodular set function $v: 2^X \rightarrow \mathbb{R}$, the goal is to select a subset S of K elements from X such that v(S) is maximized. We identify two properties of set functions, referred to as p-separability properties, and we argue that many real-life problems can be expressed as maximization of submodular, p-separable functions, with low values of the parameter p. We present FPT approximation schemes for the minimization and maximization variants of the problem, for several parameters that depend on characteristics of the optimized set function, such as p and K. We confirm that our algorithms are applicable to a broad class of problems, in particular to problems from computational social choice, such as item selection or winner determination under several multiwinner election systems.

Piotr Skowron
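The paper’s FPT schemes are more involved, but the problem itself is easy to state in code. The sketch below runs the classic greedy heuristic (not the authors’ algorithms) on a coverage function, a canonical non-decreasing submodular function that also underlies many of the social-choice applications mentioned above.

```python
def greedy_max_coverage(sets, K):
    """Greedily pick K sets maximizing the size of their union.
    Coverage v(S) = |union of S| is non-decreasing and submodular, so
    the classic argument gives a (1 - 1/e) guarantee for this heuristic."""
    chosen, covered = [], set()
    for _ in range(K):
        # Pick the set with the largest marginal gain over what is covered.
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        if not sets[best] - covered:
            break  # no remaining set adds any new element
        chosen.append(best)
        covered |= sets[best]
    return chosen, len(covered)
```

With sets {1,2,3}, {3,4}, {4,5,6,7} and K = 2, greedy first takes the 4-element set, then the 3-element one, covering 7 elements in total.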

Bounds for the Convergence Time of Local Search in Scheduling Problems

We study the convergence time of local search for a standard machine scheduling problem in which jobs are assigned to identical or related machines. Local search corresponds to the best response dynamics that arises when jobs selfishly try to minimize their costs. We assume that each machine runs a coordination mechanism that determines the order of execution of jobs assigned to it. We obtain various new polynomial and pseudo-polynomial bounds for the well-studied coordination mechanisms Makespan and Shortest-Job-First, using worst-case and smoothed analysis. We also introduce a natural coordination mechanism FIFO, which takes into account the order in which jobs arrive at a machine, and study both its impact on the convergence time and its price of anarchy.

Tobias Brunsch, Michael Etscheid, Heiko Röglin

On the Price of Stability of Undirected Multicast Games

In multicast network design games, a set of agents choose paths from their source locations to a common sink with the goal of minimizing their individual costs, where the cost of an edge is divided equally among the agents using it. Since the work of Anshelevich et al. (FOCS 2004) that introduced network design games, the main open problem in this field has been the price of stability (PoS) of multicast games. For the special case of broadcast games (every vertex is a terminal, i.e., has an agent), a series of works has culminated in a constant upper bound on the PoS (Bilò et al., FOCS 2013). However, no significantly sub-logarithmic bound is known for multicast games. In this paper, we make progress toward resolving this question by showing a constant upper bound on the PoS of multicast games for quasi-bipartite graphs. These are graphs where all edges are between two terminals (as in broadcast games) or between a terminal and a nonterminal, but there is no edge between nonterminals. This represents a natural class of intermediate generality between broadcast and multicast games. In addition to the result itself, our techniques overcome some of the fundamental difficulties of analyzing the PoS of general multicast games, and are a promising step toward resolving this major open problem.

Rupert Freeman, Samuel Haney, Debmalya Panigrahi

Efficiency and Budget Balance

We study efficiency and budget balance for designing mechanisms in general quasi-linear domains. Green and Laffont [13] proved that one cannot generically achieve both. We consider strategyproof budget-balanced mechanisms that are approximately efficient. For deterministic mechanisms, we show that a strategyproof and budget-balanced mechanism must have a sink agent whose valuation function is ignored in selecting an alternative, and she is compensated with the payments made by the other agents. We assume the valuations of the agents come from a bounded open interval. This result strengthens Green and Laffont’s impossibility result by showing that even in a restricted domain of valuations, there does not exist a mechanism that is strategyproof, budget balanced, and takes every agent’s valuation into consideration—a corollary of which is that it cannot be efficient. Using this result, we find a tight lower bound on the inefficiencies of strategyproof, budget-balanced mechanisms in this domain. The bound shows that the inefficiency asymptotically disappears when the number of agents is large—a result close in spirit to Green and Laffont [13, Theorem 9.4]. However, our results provide worst-case bounds and the best possible rate of convergence. Next, we consider minimizing any convex combination of inefficiency and budget imbalance. We show that if the valuations are unrestricted, no deterministic mechanism can do asymptotically better than minimizing inefficiency alone. Finally, we investigate randomized mechanisms and provide improved lower bounds on expected inefficiency. We give a tight lower bound for an interesting class of strategyproof, budget-balanced, randomized mechanisms. We also use an optimization-based approach—in the spirit of automated mechanism design—to provide a lower bound on the minimum achievable inefficiency of any randomized mechanism. 
Experiments with real data from two applications show that the inefficiency for a simple randomized mechanism is 5–100 times smaller than the worst case. This relative difference increases with the number of agents.

Swaprava Nath, Tuomas Sandholm

The Core of the Participatory Budgeting Problem

In participatory budgeting, communities collectively decide on the allocation of public tax dollars for local public projects. In this work, we consider the question of fairly aggregating preferences to determine an allocation of funds to projects. We argue that the classic game-theoretic notion of the core captures fairness in this setting. To compute the core, we first develop a novel characterization of a public goods market equilibrium called the Lindahl equilibrium. We then provide the first polynomial time algorithm for computing such an equilibrium for a broad set of utility functions. We empirically show that the core can be efficiently computed for utility functions that naturally model data from real participatory budgeting instances, and examine the relation of the core with the welfare objective. Finally, we address concerns of incentives and mechanism design by developing a randomized approximately dominant-strategy truthful mechanism building on the Exponential Mechanism from differential privacy.

Brandon Fain, Ashish Goel, Kamesh Munagala

Approximating Gains-from-Trade in Bilateral Trading

We consider the design of platforms that facilitate trade between a single seller and a single buyer. The most efficient mechanisms for such settings are complex and sometimes even intractable, and we therefore aim to design simple mechanisms that perform approximately well. We devise a mechanism that always guarantees at least 1/e of the optimal expected gain-from-trade for every set of distributions (assuming a monotone hazard rate of the buyer’s distribution). Our main mechanism is extremely simple, and achieves this approximation in Bayes-Nash equilibrium. Moreover, our mechanism approximates the optimal gain-from-trade, which is a strictly harder task than approximating efficiency. Our main impossibility result shows that no Bayes-Nash incentive compatible mechanism can achieve a better approximation than 2/e to the optimal gain-from-trade. We also bound the power of Bayes-Nash incentive compatible mechanisms for approximating the expected efficiency.

Liad Blumrosen, Yehonatan Mizrahi

Coverage, Matching, and Beyond: New Results on Budgeted Mechanism Design

We study a type of reverse (procurement) auction problems in the presence of budget constraints. The general algorithmic problem is to purchase a set of resources, which come at a cost, so as not to exceed a given budget and at the same time maximize a given valuation function. This framework captures the budgeted version of several well known optimization problems, and when the resources are owned by strategic agents the goal is to design truthful and budget feasible mechanisms. We first obtain mechanisms with an improved approximation ratio for weighted coverage valuations, a special class of submodular functions. We then provide a general scheme for designing randomized and deterministic polynomial time mechanisms for a class of XOS problems. This class contains problems whose feasible set forms an independence system (a more general structure than matroids), and some representative problems include, among others, finding maximum weighted matchings and maximum weighted matroid members. For most of these problems, only randomized mechanisms with very high approximation ratios were known prior to our results.

Georgios Amanatidis, Georgios Birmpas, Evangelos Markakis

Strategic Network Formation with Attack and Immunization

Strategic network formation arises in settings where agents receive some benefit from their connectedness to other agents, but also incur costs for forming these links. We consider a new network formation game that incorporates an adversarial attack, as well as immunization or protection against the attack. An agent’s network benefit is the expected size of her connected component post-attack, and agents may also choose to immunize themselves from attack at some additional cost. Our framework can be viewed as a stylized model of settings where reachability rather than centrality is the primary interest (as in many technological networks such as the Internet), and vertices may be vulnerable to attacks (such as viruses), but may also reduce risk via potentially costly measures (such as anti-virus software). Our main theoretical contributions include a strong bound on the edge density at equilibrium. In particular, we show that under a very mild assumption on the adversary’s attack model, every equilibrium network contains at most $$2n-4$$ edges for $$n \ge 4$$, where n denotes the number of agents, and this upper bound is tight. We also show that social welfare does not significantly erode: every non-trivial equilibrium with respect to several adversarial attack models asymptotically has social welfare at least as high as that of any equilibrium in the original attack-free model. We complement our sharp theoretical results by a behavioral experiment on our game with over 100 participants, where despite the complexity of the game, the resulting network was surprisingly close to equilibrium.

Sanjeev Goyal, Shahin Jabbari, Michael Kearns, Sanjeev Khanna, Jamie Morgenstern

Opinion Formation Games with Dynamic Social Influences

We introduce opinion formation games with dynamic social influences, where opinion formation and social relationships co-evolve in a cross-influencing manner. We show that these games always admit an ordinal potential, and hence pure Nash equilibria, and we design a polynomial time algorithm for computing the set of all pure Nash equilibria and the set of all social optima of a given game. We also derive non-tight upper and lower bounds on the price of anarchy and stability which only depend on the players’ stubbornness, that is, on the scaling factor used to counterbalance the cost that a player incurs for disagreeing with the society and the cost she incurs for disagreeing with her innate beliefs.
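As a point of reference, the static-influence counterpart of such a game is the Friedkin–Johnsen style model; the Python sketch below iterates best responses there (an assumption for illustration: in the paper the influence weights additionally co-evolve with the opinions, whereas here they are fixed, and alpha denotes the common stubbornness factor):

```python
def fj_best_response(s, w, alpha, iters=200):
    """Repeated best responses in a Friedkin-Johnsen style opinion game:
    player i pays alpha * (x_i - s_i)^2 for disagreeing with her innate
    belief s_i, plus sum_j w[i][j] * (x_i - x_j)^2 for disagreeing with
    society. The quadratic best response is the weighted average below."""
    n = len(s)
    x = list(s)                      # start from the innate beliefs
    for _ in range(iters):
        for i in range(n):
            tot = alpha + sum(w[i])
            x[i] = (alpha * s[i]
                    + sum(w[i][j] * x[j] for j in range(n))) / tot
    return x
```

Because each player's cost is strictly convex in her own opinion, this Gauss–Seidel sweep converges to the unique equilibrium of the static game; the paper's dynamic-influence setting changes w along the way.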

Vittorio Bilò, Angelo Fanelli, Luca Moscardelli

Complex Contagions on Configuration Model Graphs with a Power-Law Degree Distribution

In this paper we analyze k-complex contagions (sometimes called bootstrap percolation) on configuration model graphs with a power-law degree distribution. Our main result is that if the power-law exponent $$\alpha \in (2, 3)$$, then with high probability, a single seed at the highest-degree node will infect a constant fraction of the graph within time $$O\left(\log^{\frac{\alpha-2}{3-\alpha}}(n)\right)$$. This complements prior work which shows that for $$\alpha > 3$$ bootstrap percolation does not spread to a constant fraction of the graph unless a constant fraction of nodes is initially infected, and it establishes a threshold at $$\alpha = 3$$. The case $$\alpha \in (2, 3)$$ is especially interesting because it captures the exponent parameters often observed in social networks (with approximate power-law degree distributions). Thus, such networks will spread complex contagions even lacking any other structure. We additionally show that our theorem implies that $$\omega\left(n^{\frac{\alpha-2}{\alpha-1}}\right)$$ random seeds will infect a constant fraction of the graph within time $$O\left(\log^{\frac{\alpha-2}{3-\alpha}}(n)\right)$$ with high probability. This complements prior work which shows that $$o\left(n^{\frac{\alpha-2}{\alpha-1}}\right)$$ random seeds have no effect with high probability, and it also establishes a threshold at $$n^{\frac{\alpha-2}{\alpha-1}}$$.
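The contagion process itself is simple to simulate; here is a minimal Python sketch (the adjacency encoding is an illustrative assumption): a node becomes infected as soon as at least k of its neighbors are infected, and rounds proceed synchronously.

```python
def k_complex_contagion(adj, seeds, k):
    """Simulate a k-complex contagion (bootstrap percolation).
    adj: {node: set of neighbors}; seeds: initially infected nodes.
    In each synchronous round, every uninfected node with at least k
    infected neighbors becomes infected. Returns (infected, rounds)."""
    infected = set(seeds)
    rounds = 0
    while True:
        newly = {v for v in adj
                 if v not in infected and len(adj[v] & infected) >= k}
        if not newly:
            return infected, rounds
        infected |= newly
        rounds += 1
```

The quantities in the theorem map onto this loop directly: the seed set is `seeds`, the spreading time is `rounds`, and the question is what fraction of the nodes ends up in `infected`.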

Grant Schoenebeck, Fang-Yi Yu

