
2005 | Book

Internet and Network Economics

First International Workshop, WINE 2005, Hong Kong, China, December 15-17, 2005. Proceedings


About this book

WINE 2005, the First Workshop on Internet and Network Economics, took place in Hong Kong, China, December 15-17, 2005. The workshop aims to provide a forum for researchers from all over the world working on algorithms for Internet and network economics. The final count of electronic submissions was 372, of which 108 were accepted. The main program consists of 31 papers, whose submitters' email accounts break down as follows: 10 from edu (USA) accounts, 3 from hk (Hong Kong), 2 each from il (Israel), cn (China), ch (Switzerland), de (Germany), jp (Japan), and gr (Greece), and 1 each from hp.com, sohu.com, pl (Poland), fr (France), ca (Canada), and in (India). In addition, 77 papers from 20 countries or regions and 6 dot-coms were selected for 16 special focus tracks in the areas of Internet and Algorithmic Economics; E-Commerce Protocols; Security; Collaboration, Reputation and Social Networks; Algorithmic Mechanism; Financial Computing; Auction Algorithms; Online Algorithms; Collective Rationality; Pricing Policies; Web Mining Strategies; Network Economics; Coalition Strategies; Internet Protocols; Price Sequence; and Equilibrium. We had one best student paper nomination: “Walrasian Equilibrium: Hardness, Approximations and Tractable Instances” by Ning Chen and Atri Rudra. We would like to thank Andrew Yao for serving as Conference Chair, with inspiring encouragement and far-sighted leadership. We would also like to thank the International Program Committee for spending their valuable time and effort in the review process.

Table of Contents

Frontmatter
Recent Developments in Equilibria Algorithms

Nash proved in 1951 that every game has a mixed Nash equilibrium [6]; whether such an equilibrium can be found in polynomial time has been open since that time. We review here certain recent results which shed some light on this old problem.

Even though a mixed Nash equilibrium is a continuous object, the problem is essentially combinatorial, since it suffices to identify the support of a mixed strategy for each player; however, no algorithm better than exponential for doing so is known. For the case of two players we have a simplex-like pivoting algorithm due to Lemke and Howson that is guaranteed to converge to a Nash equilibrium; this algorithm has an exponential worst case [9]. But even such algorithms seem unlikely for three or more players: in his original paper Nash supplied an example of a 3-player game, an abstraction of poker, with only irrational Nash equilibria.

Christos Papadimitriou
Partially-Specified Large Games

The sensitivity of Nash equilibrium to strategic and informational details presents a difficulty in applying it to games which are not fully specified. Structurally-robust Nash equilibria are less sensitive to such details. Moreover, they arise naturally in important classes of games that have many semi-anonymous players. The paper describes this condition and its implications.

Ehud Kalai
Exchange Market Equilibria with Leontief’s Utility: Freedom of Pricing Leads to Rationality

This paper studies the equilibrium properties and algorithmic complexity of the exchange market equilibrium problem with more general utility functions: piecewise linear functions, which include Leontief’s utility functions. We show that the Fisher model again reduces to the general analytic center problem, and the same linear programming complexity bound applies to approximating its equilibrium. However, the story for the Arrow-Debreu model with Leontief’s utility becomes quite different. We show, for the first time, that solving this class of Leontief exchange economies is equivalent to solving a known linear complementarity problem whose algorithmic complexity status remains open.

Yinyu Ye
A Primal-Dual Algorithm for Computing Fisher Equilibrium in the Absence of Gross Substitutability Property

We provide the first strongly polynomial time exact combinatorial algorithm to compute the Fisher equilibrium for the case when utility functions do not satisfy the Gross substitutability property. The motivation for this comes from the work of Kelly, Maulloo, and Tan [15] and Kelly and Vazirani [16] on rate control in communication networks. We consider a tree-like network in which the root is the source and all the leaf nodes are sinks. Each sink has a fixed amount of money which it can use to buy the capacities of the edges in the network. The edges of the network sell their capacities at certain prices. The objective of each edge is to fix a price which fetches the maximum money for it, and the objective of each sink is to buy capacities on edges so as to pull the maximum flow from the source. In this problem, the edges and the sinks play precisely the roles of sellers and buyers, respectively, in Fisher’s market model. The utility of a buyer (or sink) takes the form of a Leontief function, which is known for not satisfying the Gross substitutability property. We develop an $O(m^3)$ exact combinatorial algorithm for computing equilibrium prices of the edges. The time taken by our algorithm is independent of the values of sink money and edge capacities. A corollary of our algorithm is that equilibrium prices and flows are rational numbers. Although there are algorithms to solve this problem, they are all based on convex programming techniques. To the best of our knowledge, ours is the first strongly polynomial time exact combinatorial algorithm for computing equilibrium prices of the Fisher model when buyers’ utility functions do not satisfy the Gross substitutability property.

Dinesh Garg, Kamal Jain, Kunal Talwar, Vijay V. Vazirani
Click Fraud Resistant Methods for Learning Click-Through Rates

In pay-per-click online advertising systems like Google, Overture, or MSN, advertisers are charged for their ads only when a user clicks on the ad. While these systems have many advantages over other methods of selling online ads, they suffer from one major drawback: they are highly susceptible to a particular style of fraudulent attack called click fraud. Click fraud happens when an advertiser or service provider generates clicks on an ad with the sole intent of increasing the payment of the advertiser. Leaders in the pay-per-click marketplace have identified click fraud as the most significant threat to their business model. We demonstrate that a particular class of learning algorithms, called click-based algorithms, are resistant to click fraud in some sense. We focus on a simple situation in which there is just one ad slot, and show that fraudulent clicks cannot increase the expected payment per impression by more than o(1) in a click-based algorithm. Conversely, we show that other common learning algorithms are vulnerable to fraudulent attacks.

Nicole Immorlica, Kamal Jain, Mohammad Mahdian, Kunal Talwar
Experiments with an Economic Model of the Worldwide Web

We present a simple model in which the worldwide web (www) is created by the interaction of selfish agents, namely document authors, users, and search engines. We show experimentally that power law statistics emerge very naturally in this context, and that the efficiency of the system has certain monotonicity properties.

Georgios Kouroupas, Elias Koutsoupias, Christos H. Papadimitriou, Martha Sideri
Coordination Mechanisms for Selfish Scheduling

In machine scheduling, a set of n jobs must be scheduled on a set of m machines. Each job i incurs a processing time of p_ij on machine j, and the goal is to schedule jobs so as to minimize some global objective function, such as the makespan of the schedule, which is the objective considered in this paper. Often in practice, each job is controlled by an independent selfish agent who chooses to schedule his job on the machine which minimizes the (expected) completion time of his job. This scenario can be formalized as a game in which the players are job owners; the strategies are machines; and the disutility to each player in a strategy profile is the completion time of his job in the corresponding schedule (a player’s objective is to minimize his disutility). The equilibria of these games may result in larger-than-optimal overall makespan. The ratio of the worst-case equilibrium makespan to the optimal makespan is called the price of anarchy of the game. In this paper, we design and analyze scheduling policies, or coordination mechanisms, for machines which aim to minimize the price of anarchy (restricted to pure Nash equilibria) of the corresponding game. We study coordination mechanisms for four classes of multiprocessor machine scheduling problems and derive upper and lower bounds for the price of anarchy of these mechanisms. For several of the proposed mechanisms, we are also able to prove that the system converges to a pure Nash equilibrium in a linear number of rounds. Finally, we note that our results are applicable to several practical problems arising in networking.

Nicole Immorlica, Li Li, Vahab S. Mirrokni, Andreas Schulz
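The price-of-anarchy notion above can be made concrete with a brute-force check on a tiny instance. The sketch below is our own toy example (identical machines under a Makespan-style policy, not code from the paper): it enumerates all pure strategy profiles, keeps those that are Nash equilibria, and compares the worst equilibrium makespan to the optimum.

```python
from itertools import product

# Toy instance (ours, not from the paper): 4 jobs with the given
# processing times, scheduled on 2 identical machines.
jobs = [2.0, 2.0, 1.0, 1.0]
m = 2

def loads(profile):
    out = [0.0] * m
    for size, mach in zip(jobs, profile):
        out[mach] += size
    return out

def makespan(profile):
    return max(loads(profile))

def is_pure_nash(profile):
    # Under a Makespan-style policy a job finishes when its machine
    # finishes, so a player's disutility is the load of its machine.
    cur = loads(profile)
    for i, mach in enumerate(profile):
        for alt in range(m):
            # Deviating to `alt` gives job i a completion time equal to
            # alt's current load plus its own processing time.
            if alt != mach and cur[alt] + jobs[i] < cur[mach]:
                return False
    return True

profiles = list(product(range(m), repeat=len(jobs)))
opt = min(makespan(p) for p in profiles)
worst_eq = max(makespan(p) for p in profiles if is_pure_nash(p))
print(opt, worst_eq, worst_eq / opt)  # pure price of anarchy on this instance
```

On this instance the schedule {2, 2} vs. {1, 1} is a pure Nash equilibrium with makespan 4, while the optimum is 3, so the pure price of anarchy is 4/3.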
Economic Mechanisms for Shortest Path Cooperative Games with Incomplete Information

In this paper we present a cooperative game-theoretic interpretation of the shortest path problem. We consider a buying agent who has a budget to go from a specified source node s to a specified target node t in a directed acyclic network. The budget may reflect the level of utility that he associates with going from node s to node t. The edges in the network are owned by individual utility-maximizing agents, each of whom incurs some cost in allowing its use. We investigate the design of economic mechanisms to obtain a least-cost path from s to t and to share the surplus (the difference between the budget and the cost of the shortest path) generated among the participating agents in a fair manner. Previous work related to this problem assumes that cost and budget information is common knowledge. This assumption can be severely restrictive in many common applications. We relax this assumption and allow both budget and cost information to be private, hence known only to the respective agents. We first develop the structure of the shortest path cooperative game with incomplete information. We then show the non-emptiness of the incentive compatible core of this game and the existence of a surplus sharing mechanism that is incentive efficient and individually rational in virtual utilities, and strongly budget balanced.

T. S. Chandrashekar, Yadati Narahari
Truth-Telling Reservations

We present a mechanism for reservations of bursty resources that is both truthful and robust. It consists of option contracts whose pricing structure induces users to reveal the true likelihoods that they will purchase a given resource. Users are also allowed to adjust their options as their likelihoods change. This scheme helps users save costs and helps providers plan ahead so as to reduce the risk of under-utilization and overbooking. The mechanism extracts revenue similar to that of a monopoly provider practicing temporal price discrimination with a user population whose preference distribution is known in advance.

Fang Wu, Li Zhang, Bernardo A. Huberman
Inapproximability Results for Combinatorial Auctions with Submodular Utility Functions

We consider the following allocation problem arising in the setting of combinatorial auctions: a set of goods is to be allocated to a set of players so as to maximize the sum of the utilities of the players (i.e., the social welfare). In the case when the utility of each player is a monotone submodular function, we prove that there is no polynomial time approximation algorithm which approximates the maximum social welfare by a factor better than 1–1/e ≃ 0.632, unless P = NP. Our result is based on a reduction from a multi-prover proof system for MAX-3-COLORING.

Subhash Khot, Richard J. Lipton, Evangelos Markakis, Aranyak Mehta
An Auction-Based Market Equilibrium Algorithm for a Production Model

We present an auction-based algorithm for computing market equilibrium prices in a production model, in which producers have a single linear production constraint and consumers have linear utility functions. We provide algorithms for both the Fisher and Arrow-Debreu versions of the problem.

Sanjiv Kapoor, Aranyak Mehta, Vijay Vazirani
New Algorithms for Mining the Reputation of Participants of Online Auctions

The assessment of the credibility and reputation of contractors in online auctions is the key issue in providing a reliable environment for customer-to-customer e-commerce. A trustworthy reputation rating system is an important factor in managing risk and building customer satisfaction. Unfortunately, most online auction sites employ a very simple reputation rating scheme that utilizes user feedback and comments issued after completed auctions. Such schemes are easy to deceive and do not provide satisfactory protection against several types of fraud. In this paper we propose two novel measures of trustworthiness, namely credibility and density. We draw inspiration from social network analysis and present two algorithms for reputation rating estimation. Our first algorithm computes the credibility of participants by an iterative search of inter-participant connections. Our second algorithm discovers clusters of participants who are densely connected through completed auctions. We test both measures on a large body of real-world data and experimentally compare them with existing solutions.

Mikołaj Morzy
A Simple Characterization for Truth-Revealing Single-Item Auctions

We give a simple characterization of all single-item truth-revealing auctions under some mild (and natural) assumptions about the auctions. Our work opens up the possibility of using variational calculus to design auctions having desired properties.

Kamal Jain, Aranyak Mehta, Kunal Talwar, Vijay Vazirani
Prediction Games
(Extended Abstract)

We introduce a new class of mechanism design problems called prediction games. There are n self-interested agents. Each agent i has a private input x_i and a cost of accessing it. Given are a function f(x_1, ..., x_n) which predicts whether a certain event will occur, and an independent distribution on the agents’ inputs. Agents can be rewarded in order to motivate them to access their inputs. The goal is to design a mechanism (protocol) which in equilibrium predicts f(.) and pays in expectation as little as possible.

We investigate both exact and approximate versions of the problem and provide several upper and lower bounds. In particular, we construct an optimal mechanism for every anonymous function and show that any function can be approximated with a relatively small expected payment.

Amir Ronen, Lyron Wahrmann
Walrasian Equilibrium: Hardness, Approximations and Tractable Instances

We study the complexity issues for Walrasian equilibrium in a special case of combinatorial auctions, called single-minded auctions, in which every participant is interested in only one subset of commodities. Chen et al. [5] showed that it is NP-hard to decide the existence of a Walrasian equilibrium for a single-minded auction and proposed a notion of approximate Walrasian equilibrium called relaxed Walrasian equilibrium. We show that every single-minded auction has a $\frac{2}{3}$-relaxed Walrasian equilibrium, proving a conjecture posed in [5]. Motivated by practical considerations, we introduce another concept of approximate Walrasian equilibrium called weak Walrasian equilibrium. We show that it is strongly NP-complete to determine the existence of a δ-weak Walrasian equilibrium, for any 0 < δ ≤ 1.

In search of positive results, we restrict our attention to the tollbooth problem [15], where every participant is interested in a single path in some underlying graph. We give a polynomial time algorithm to determine the existence of a Walrasian equilibrium and compute one (if it exists), when the graph is a tree. However, the problem is still hard for general graphs.

Ning Chen, Atri Rudra
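To make the equilibrium notion concrete, here is a small checker for the standard single-minded setting (our illustration; the paper's exact formalism may differ): bidder i wants a bundle S_i at value v_i, prices are per item, and a Walrasian equilibrium requires disjoint winning bundles, non-negative winner utilities, no envious losers, and zero prices on unsold items.

```python
# Hypothetical Walrasian-equilibrium checker for a single-minded auction
# (illustrative sketch, not code from the paper).

def is_walrasian_eq(bundles, values, prices, winners):
    sold = set()
    for i in winners:
        if bundles[i] & sold:              # winners' bundles must be disjoint
            return False
        sold |= bundles[i]
    for i, (S, v) in enumerate(zip(bundles, values)):
        cost = sum(prices[g] for g in S)
        if i in winners and cost > v:      # winners must afford their bundle
            return False
        if i not in winners and cost < v:  # losers must not prefer to buy
            return False
    # market clearing: unsold items are priced at zero
    return all(prices[g] == 0 for g in prices if g not in sold)

bundles = [{'a', 'b'}, {'b', 'c'}, {'c'}]
values = [4.0, 3.0, 2.0]
prices = {'a': 1.0, 'b': 2.0, 'c': 2.0}
winners = {0, 2}
print(is_walrasian_eq(bundles, values, prices, winners))
```

With these (made-up) numbers the allocation {0, 2} at the given prices passes all three conditions, while allocating the overlapping bundles of bidders 0 and 1 would fail the disjointness check.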
On the Structure and Complexity of Worst-Case Equilibria

We study a well-known resource allocation game introduced by Koutsoupias and Papadimitriou in which n weighted jobs are allocated to m identical machines. It was conjectured by Gairing et al. that the fully mixed Nash equilibrium is the worst Nash equilibrium for this game w.r.t. the expected maximum load over all machines. The known algorithms for approximating the so-called “price of anarchy” rely on this conjecture. We present a counterexample to the conjecture showing that fully mixed equilibria cannot be used to approximate the price of anarchy within reasonable factors. In addition, we present an algorithm that constructs so-called concentrated equilibria that approximate the worst-case Nash equilibrium within constant factors.

Simon Fischer, Berthold Vöcking
Club Formation by Rational Sharing: Content, Viability and Community Structure

A sharing community prospers when participation and contribution are both high. We suggest that these two related decisions, both of which every peer makes, should be given separate rational bases. Considered as such, a basic issue is the viability of club formation, which necessitates modelling two major sources of heterogeneity, namely peers and shared content. This viability perspective clearly explains why rational peers contribute (or free-ride when they don’t) and how their collective action determines the viability as well as the size of the club formed. It also exposes another fundamental source of limitation to club formation apart from free-riding: the community structure, in terms of the relation between peers’ interest (demand) and sharing (supply).

W. -Y. Ng, D. M. Chiu, W. K. Lin
Subjective-Cost Policy Routing

We study a model of interdomain routing in which autonomous systems’ (ASes’) routing policies are based on subjective cost assessments of alternative routes. The routes are constrained by the requirement that all routes to a given destination must be confluent. We show that it is NP-hard to determine whether there is a set of stable routes. We also show that it is NP-hard to find a set of confluent routes that minimizes the total subjective cost; it is hard even to approximate the minimum cost closely. These hardness results hold even for very restricted classes of subjective costs.

We then consider a model in which the subjective costs are based on the relative importance ASes place on a small number of objective cost measures. We show that a small number of confluent routing trees is sufficient for each AS to have a route that nearly minimizes its subjective cost. We show that this scheme is trivially strategyproof and that it can be computed easily with a distributed algorithm that does not require major changes to the Border Gateway Protocol. Furthermore, we prove a lower bound on the number of trees required to contain a (1 + ε)-approximately optimal route for each node and show that our scheme is nearly optimal in this respect.

Joan Feigenbaum, David R. Karger, Vahab S. Mirrokni, Rahul Sami
Economic Analysis of Networking Technologies for Rural Developing Regions

Providing network connectivity to rural regions in the developing world is an economically challenging problem, especially given the low income levels and low population densities in such regions. Many existing connectivity technologies incur a high deployment cost that limits their affordability. Leveraging several emerging wireless technologies, this paper presents the case for economically viable networks in rural developing regions. We use the Akshaya Network located in Kerala, India as a specific case study and show that a wireless network using WiFi for the backhaul, CDMA450 for the access network, and shared PCs for end-user devices has the lowest deployment cost. However, if we include the expected spectrum licensing cost for CDMA450, a network with license-exempt spectrum using WiFi for the backhaul and WiMax for access is the most economically attractive option. Even with license exemption, regulatory costs comprise nearly half the total cost in the WiFi/WiMax case, suggesting the possibility of significant improvement in network economics with more favorable regulatory policies. Finally, we also demonstrate the business case for a WiFi/CDMA450 network with nearly fully subsidized cellular handsets as end-user devices.

Shridhar Mubaraq Mishra, John Hwang, Dick Filippini, Reza Moazzami, Lakshminarayanan Subramanian, Tom Du
A Simple Graph-Theoretic Model for Selfish Restricted Scheduling

In this work, we introduce and study a simple, graph-theoretic model for selfish scheduling among m non-cooperative users over a collection of n machines, where each user is restricted to assign its unsplittable load to one of a pair of machines that are allowed for the user. We model these bounded interactions using an interaction graph, whose vertices and edges are the machines and the users, respectively. We study the impact of our modeling assumptions on the properties of Nash equilibria in this new model. The main findings of our study are outlined as follows:

– We prove, as our main result, that the parallel links graph is the best-case interaction graph – the one that minimizes the expected makespan of the standard fully mixed Nash equilibrium – among all 3-regular interaction graphs. The proof employs a graph-theoretic lemma about orientations in 3-regular graphs, which may be of independent interest.

– We prove a lower bound on the Coordination Ratio [16] – a measure of the cost incurred by the system due to the selfish behavior of the users. In particular, we prove that there is an interaction graph incurring Coordination Ratio ${\it \Omega} \left( \frac{\log n} {\log \log n} \right)$. This bound is shown for pure Nash equilibria.

– We present counterexample interaction graphs to prove that a fully mixed Nash equilibrium may sometimes not exist at all. Moreover, we prove properties of the fully mixed Nash equilibrium for complete bipartite graphs and hypercube graphs.

Robert Elsässer, Martin Gairing, Thomas Lücking, Marios Mavronicolas, Burkhard Monien
A Cost Mechanism for Fair Pricing of Resource Usage

We propose a simple and intuitive cost mechanism which assigns costs for the competitive usage of m resources by n selfish agents. Each agent has an individual demand; demands are drawn according to some probability distribution. The cost paid by an agent for a resource she chooses is the total demand put on the resource divided by the number of agents who chose that same resource. So, resources charge costs in an equitable, fair way, while each resource makes no profit out of the agents.

We call our model the Fair Pricing model. Its fair cost mechanism induces a non-cooperative game among the agents. To evaluate the Nash equilibria of this game, we introduce the Diffuse Price of Anarchy, as an extension of the Price of Anarchy that takes into account the probability distribution on the demands. We prove:

Pure Nash equilibria may not exist, unless all chosen demands are identical; in contrast, a fully mixed Nash equilibrium exists for all possible choices of the demands. Further on, the fully mixed Nash equilibrium is the unique Nash equilibrium in case there are only two agents.

In the worst-case choice of demands, the Price of Anarchy is Θ(n); for the special case of two agents, the Price of Anarchy is less than $2 - \frac{1}{m}$.

Assume now that demands are drawn from a bounded, independent probability distribution, where all demands are identically distributed and each is at most a (universal for the class) constant times its expectation. Then, the Diffuse Price of Anarchy is at most that same constant, which is just 2 when each demand is distributed symmetrically around its expectation.

Marios Mavronicolas, Panagiota N. Panagopoulou, Paul G. Spirakis
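The fair cost rule described in the abstract above can be written compactly. In the notation below (ours, not the paper's), d_k is agent k's demand and r_k the resource she chooses, so agent j pays:

```latex
c_j \;=\; \frac{\sum_{k \,:\, r_k = r_j} d_k}{\bigl|\{\, k : r_k = r_j \,\}\bigr|}
```

Summing c_j over all agents sharing a resource recovers exactly the total demand placed on it, which is why each resource makes no profit.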
A Delay Pricing Scheme for Real-Time Delivery in Deadline-Based Networks

We introduce a novel delay pricing and charging scheme in deadline-based networks to support real-time data delivery. We study the delay performance observed by individual users when different levels of deadline urgency are specified and take a market-based approach to determining a delay price. Our pricing and charging scheme can be used to prevent greedy users from gaining an advantage by specifying arbitrarily urgent deadlines, and can also aid in network load control when equipped with user budget constraints.

Yanni Ellen Liu, Xiao Huan Liu
Game-Theoretic Analysis of Internet Switching with Selfish Users

We consider the problem of Internet switching, where traffic is generated by selfish users. We study a packetized (TCP-like) traffic model, which is more realistic than the widely used fluid model. We assume that routers have First-In-First-Out (FIFO) buffers of bounded capacity managed by the drop-tail policy. The utility of each user depends on its transmission rate and the congestion level. Since selfish users try to maximize their own utility disregarding the system objectives, we study Nash equilibria that correspond to a steady state of the system. We quantify the degradation in the network performance called the price of anarchy resulting from such selfish behavior. We show that for a single bottleneck buffer, the price of anarchy is proportional to the number of users. Then we propose a simple modification of the Random Early Detection (RED) drop policy, which reduces the price of anarchy to a constant.

Alex Kesselman, Stefano Leonardi, Vincenzo Bonifaci
The Price of Anarchy of Cournot Oligopoly

Cournot oligopoly is typically inefficient in maximizing social welfare, which is the total surplus of consumers and producers. This paper quantifies the inefficiency of Cournot oligopoly with the “price of anarchy”, i.e., the worst-case ratio of the maximum possible social welfare to the social welfare at equilibrium. With a parameterization of the equilibrium market share distribution, the inefficiency bounds depend on equilibrium market shares as well as market demand and the number of firms. Equilibrium market share parameters are practically observable and analytically manageable. As a result, the price of anarchy bounds for Cournot oligopoly established in this paper are applicable to both practical estimation and theoretical analysis.

Xiaolei Guo, Hai Yang
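The welfare ratio in question is easy to compute in the textbook symmetric case. The sketch below is our own toy calculation (linear inverse demand P(Q) = a - bQ and constant marginal cost c; all parameter values are made up, and the paper's parameterization is more general): it compares welfare at the symmetric Cournot equilibrium with the welfare optimum.

```python
# Illustrative computation (toy example, not from the paper): welfare at
# the symmetric Cournot equilibrium vs. the welfare optimum for linear
# inverse demand P(Q) = a - b*Q and constant marginal cost c.

def welfare(Q, a, b, c):
    # consumer + producer surplus = integral of (P(x) - c) dx from 0 to Q
    return (a - c) * Q - 0.5 * b * Q * Q

def cournot_ratio(n, a=10.0, b=1.0, c=2.0):
    q_eq = n * (a - c) / ((n + 1) * b)  # total equilibrium quantity
    q_opt = (a - c) / b                 # welfare-maximizing quantity
    return welfare(q_opt, a, b, c) / welfare(q_eq, a, b, c)

for n in (1, 2, 10):
    print(n, round(cournot_ratio(n), 4))
```

With these numbers the ratio is 4/3 for a monopoly, 1.125 for a duopoly, and approaches 1 as the number of firms grows, matching the intuition that Cournot inefficiency shrinks with competition.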
Incentives in Some Coalition Formation Games

The idea of using the core as a model for predicting the formation of coalitions and the sharing of benefits from coordinated activities has been studied extensively. Basic to the concept of the core is the idea of group rationality as embodied by the blocking condition. The predictions given by the core may run into difficulties if some individuals or coalitions may benefit from not blocking “truthfully”. This paper investigates this question in some games that generalize assignment games. Some positive results are given, and relationships with Vickrey-Clarke-Groves mechanisms are drawn.

Gabrielle Demange
Enforcing Truthful Strategies in Incentive Compatible Reputation Mechanisms

We commonly use the experience of others when taking decisions. Reputation mechanisms aggregate in a formal way the feedback collected from peers and compute the reputation of products, services, or providers. The success of reputation mechanisms is, however, conditioned on obtaining true feedback. Side-payments (i.e., agents get paid for submitting feedback) can make honest reporting rational (i.e., a Nash equilibrium). Unfortunately, known schemes also have other Nash equilibria that imply lying. In this paper we analyze the equilibria of two incentive-compatible reputation mechanisms and investigate how undesired equilibrium points can be eliminated by using trusted reports.

Radu Jurca, Boi Faltings
Strategy/False-Name Proof Protocols for Combinatorial Multi-attribute Procurement Auction: Handling Arbitrary Utility of the Buyer

In this paper, we develop new protocols for a combinatorial, multi-attribute procurement auction in which each sales item (task) is defined by several attributes called quality, the buyer is the auctioneer (e.g., a government), and the sellers are the bidders. Furthermore, there exist multiple tasks, and both buyer and sellers can have arbitrary (e.g., complementary/substitutable) preferences on a bundle of tasks.

In this setting, there is a chance that a VCG protocol cannot satisfy Individual Rationality (IR) for the buyer, i.e., the buyer’s utility can be negative. We show that if the surplus function is concave, then the VCG protocol satisfies IR and is also false-name-proof, i.e., using multiple identifiers provides no advantage. Furthermore, we present a modification of the VCG protocol that satisfies IR even if the concavity condition is not satisfied. The key idea of this protocol is to introduce a special type of bidder called the reference bidder. We assume that the auctioneer knows an upper bound on the reference bidder’s cost. Introducing such a reference bidder is similar to setting reservation prices in standard combinatorial auctions. Furthermore, we develop a new false-name-proof protocol that is based on the idea of the Leveled Division Set (LDS) protocol.

Takayuki Suyama, Makoto Yokoo
Practical Zero-Knowledge Arguments from Σ-Protocols

Zero-knowledge (ZK) plays a central role in modern cryptography and is a very powerful tool for constructing various cryptographic protocols, especially cryptographic protocols in e-commerce. Unfortunately, most ZK protocols are for general $\mathcal{NP}$ languages and go through general $\mathcal{NP}$-reductions, and thus cannot be directly employed in practice. On the other hand, a large number of protocols, named Σ-protocols, have been developed in industry and in the field of applied cryptography for specific number-theoretic languages (e.g., DLP and RSA); these preserve the ZK property only with respect to honest verifiers (i.e., they are not real ZK) but are highly practical. In this work, we show a generic yet practical transformation from Σ-protocols to practical (real) ZK arguments without general $\mathcal{NP}$-reductions, under either the DLP or RSA assumption.

Yunlei Zhao, Robert H. Deng, Binyu Zang, Yiming Zhao
Bayesian Communication Leading to a Nash Equilibrium in Belief

A Bayesian communication in the p-belief system is presented which leads to a Nash equilibrium of a strategic-form game through messages, as a Bayesian updating process. In the communication process each player predicts the other players’ actions under his/her private information with probability at least his/her belief. The players privately communicate their conjectures through messages according to the communication graph, where each player receiving a message learns and revises his/her conjecture. We emphasize that neither topological assumptions on the communication graph nor common-knowledge assumptions on the structure of communication are required.

Takashi Matsuhisa, Paul Strokan
The Bahncard Problem with Interest Rate and Risk

This paper investigates a new framework for the competitive analysis of the Bahncard problem. In contrast to the earlier approach, we introduce the interest rate i and the risk tolerance t into the model, so that the traveller can develop optimal trading strategies based on his risk preference. Set $\alpha=\frac{1}{1+i}$. We prove that the Bahncard problem with the interest rate is $1+(1-\beta)\alpha^{{m^*}+1}\,$-competitive, where m* is the critical point. We then design a t-tolerance strategy and present a surprisingly flexible competitive ratio of $1+\frac{(1-\beta)\alpha^{m^*}}{tr^*-(1-\beta\alpha^{m^*})}$, where r* is the optimal competitive ratio for the Bahncard problem with the interest rate and β is the percentage of discount.

Lili Ding, Yinfeng Xu, Shuhua Hu
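As a small numeric illustration of the competitive ratio stated in the abstract above, the following sketch evaluates $1+(1-\beta)\alpha^{m^*+1}$ with $\alpha = 1/(1+i)$; the parameter values chosen below (interest rate, discount, critical point) are illustrative only and are not taken from the paper.

```python
def bahncard_ratio(i, beta, m_star):
    """Competitive ratio of the Bahncard problem with interest rate i,
    discount percentage beta, and critical point m_star, as stated in
    the abstract: 1 + (1 - beta) * alpha**(m_star + 1), alpha = 1/(1+i)."""
    alpha = 1.0 / (1.0 + i)
    return 1.0 + (1.0 - beta) * alpha ** (m_star + 1)

# With zero interest (i = 0, so alpha = 1) the expression reduces to
# 2 - beta, the interest-free form of the same formula.
print(bahncard_ratio(0.0, 0.5, 3))   # 1.5
```

Note that a positive interest rate strictly shrinks the ratio toward 1, since $\alpha < 1$ discounts the second term.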
Optimal Starting Price in Online Auctions

Reserve price auctions are one of the hot research topics in traditional auction theory. Here we study the starting price in an online auction, the counterpart of the public reserve price in a traditional auction. Considering three features of online auctions (the stochastic entry of bidders, subject to a Poisson process; the insertion fee proportional to the starting price; and time discounting), we analyze the extremum points of the starting price that maximize the seller's expected revenue. We find that, under certain conditions, the optimal starting price should be at the lowest allowable level, which is contrary to results from classic auction theory but finds its optimality in reality. We also develop a general extended model of multistage auctions and analyze its properties. Finally, some directions for further research are put forward.

Hai Yu, Shouyang Wang, Chuangyin Dang
Time Interval-Based Prepaid Charging of QoS-Enabled IP Services

Driven by an analysis of existing prepaid charging solutions and their limitations, a novel approach to prepaid charging for Internet Protocol (IP) services based on time intervals has been developed. This approach combines key advantages of hot billing with tariff flexibility and a guarantee of minimal fraud for providers. The technical effort of real-time charging is reduced to a minimum, which ensures economic efficiency as well as scalability in large-scale environments for providers, who achieve better customer binding in prepaid cases.

Pascal Kurtansky, Burkhard Stiller
Mining Stock Market Tendency Using GA-Based Support Vector Machines

In this study, a hybrid intelligent data mining methodology, genetic algorithm based support vector machine (GASVM) model, is proposed to explore stock market tendency. In this hybrid data mining approach, GA is used for variable selection in order to reduce the model complexity of SVM and improve the speed of SVM, and then the SVM is used to identify stock market movement direction based on the historical data. To evaluate the forecasting ability of GASVM, we compare its performance with that of conventional methods (e.g., statistical models and time series models) and neural network models. The empirical results reveal that GASVM outperforms other forecasting models, implying that the proposed approach is a promising alternative to stock market tendency exploration.

Lean Yu, Shouyang Wang, Kin Keung Lai
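The GA-based variable selection described in the abstract above can be sketched as follows. This is a minimal illustration only: the fitness function here is a stand-in that rewards a hidden "true" feature subset, whereas in the paper's GASVM approach the fitness would be the accuracy of an SVM trained on the selected variables.

```python
import random

def evolve(n_features, fitness, pop_size=20, generations=30, seed=0):
    """Tiny GA over 0/1 feature masks: truncation selection,
    one-point crossover, and single-bit mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_features)   # one-point crossover
            child = a[:cut] + b[cut:]
            j = rng.randrange(n_features)        # single-bit mutation
            child[j] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Stand-in fitness: recover a hidden target mask (features 0, 2, 5).
target = [1, 0, 1, 0, 0, 1, 0, 0]
fit = lambda mask: sum(m == t for m, t in zip(mask, target))
best = evolve(len(target), fit)
print(best)
```

Replacing `fit` with a cross-validated classifier score turns this skeleton into the kind of wrapper-style variable selection the abstract describes.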
Model-Based Analysis of Money Accountability in Electronic Purses

The Common Electronic Purse Specifications (CEPS) define requirements for all components needed by an organization to implement a globally interoperable electronic purse program. In this paper we describe how we model the purchase transaction protocol in CEPS using a formal specification language. We define and verify the money accountability property of CEPS and describe a scenario in which it is violated in the presence of communication network failures. Using a model checking technique, we find that the transaction record stored at the trusted third party plays an essential role in satisfying the accountability property.

Il-Gon Kim, Young-Joo Moon, Inhye Kang, Ji-Yeon Lee, Keun-Hee Han, Jin-Young Choi
Applying Modified Fuzzy Neural Network to Customer Classification of E-Business

With the increasing interest in and emphasis on customer demands in e-commerce, customer classification is crucial for the development of e-commerce in response to the growing complexity of Internet commerce logistical markets. It is therefore highly desirable to have a systematic approach for extracting customer features effectively and, subsequently, analyzing customer orientations quantitatively. This paper presents a new approach that employs a modified fuzzy neural network based on adaptive resonance theory to group users dynamically according to their Web access patterns. Such customer clustering should be performed by Internet bookstores as the basis for providing personalized service. The experimental results of this clustering technique show the promise of our system.

Yukun Cao, Yunfeng Li, Xiaofeng Liao
Fuzzy Comprehensive Evaluation of E-commerce and Process Improvement

This paper presents an evaluation-factor system for electronic commerce (e-commerce) and provides a fuzzy comprehensive evaluation method to analyze and estimate the development of enterprise e-commerce. In addition, an optimized fuzzy evaluation model utilizing a genetic algorithm is applied to improve the evaluation process. Based on a recent survey on the e-commerce of some enterprises in Guangzhou, the evaluation results show that the evaluation-factor system is reasonable, the fuzzy comprehensive evaluation model is effective, and the optimized fuzzy comprehensive evaluation method makes the evaluation more accurate.

Qin Liao, Huimin Gu, Zhifeng Hao, Zhonghua Tang
Application of Integrated Web Services-Based E-business and Web Services-Based Business Process Monitoring

In this paper, a solution that aims to help develop web services-based e-business and web services-based business process monitoring is provided. Some issues, such as security models and access control policy, asynchronous transaction support, and reliability of transactions in the process of developing e-business, are addressed in this solution. Moreover, we provide a mechanism to define the web services provided by supplier organizations and to record their business objectives to enable performance measurement.

Jichang Dong, Wuyi Yue
A Note on the Cramer-Damgård Identification Scheme

In light of the recent work of Micali and Reyzin showing the subtleties and complexities of the soundness notions of zero-knowledge (ZK) protocols when the verifier has his own public key, we re-investigate the Cramer-Damgård intended-verifier identification scheme and show two man-in-the-middle attacks in some reasonable settings: a simple replaying attack and an ingenious interleaving attack. Our attacks are independent of the underlying hardness assumptions.

Yunlei Zhao, Shirley H. C. Cheung, Binyu Zang, Bin Zhu
Formal Analysis and Improvement of the State Transition Model for Intrusion Tolerant System

Intrusion tolerance is an emerging network security technique that enables victim server systems to continue offering services (or degraded services) after being attacked. A state transition model has been presented to describe the dynamic behaviors of intrusion tolerant systems. In this paper, we build an attack finite state system based on recent network attacks and use SMV, a model checking tool, to analyze intrusion tolerant systems through the interaction of the system model and the attack model. The analysis results demonstrate that not all types of attacks can be mapped to the system model. We improve the state transition model and prove its correctness with SMV. In addition, we give two attack instances mapped to our improved model.

Kaile Su, Congxin Zhao, Guanfeng Lv, Han Lin, Qingliang Chen
Secure Fingerprint-Based Remote User Authentication Scheme Using Smartcards

Biometrics and smartcards have the potential to be a very useful combination of technology. Among the various biometrics technological advances in use today, fingerprint recognition seems to be particularly suitable for smartcard systems. In 2002, Lee-Ryu-Yoo proposed a fingerprint-based remote user authentication scheme using smartcards. This scheme, however, was found to be potentially vulnerable to some forgery attacks and is not easily repairable. The current paper presents an improved scheme that is more secure and efficient.

Eun-Jun Yoon, Kee-Young Yoo
New Authentication Protocol Providing User Anonymity in Open Network

User authentication is an operation whereby one user becomes aware of the identity of another user involved in a protocol. In 2004, Park presented an authentication protocol providing user anonymity based on secret-key certificates and error-correcting codes, called the PA protocol. In this paper, we argue that the PA protocol is vulnerable to a man-in-the-middle attack and does not provide a sufficient level of security. We then propose an improved protocol that fixes this problem.

Woo-Hun Kim, Eun-Jun Yoon, Kee-Young Yoo
Effective Filtering for Collaborative Publishing

In little over the last decade, the World Wide Web has established itself as a medium of interaction, communication, content delivery, and collaboration, opening doors of opportunity never before available to humanity, and on a scale unprecedented in human history. At the same time, information overload, due to the democratization of content creation and delivery, remains a major problem. In this paper, we postulate that the problems of democracy are solved by democracy itself: harnessing the people power of the World Wide Web through collaborative filtering of content is the natural solution to the information overload problem; and we present approaches to promote such collaboration.

We show that the standard PageRank Algorithm, inspired by the effectiveness of citation-structure analysis (“all links are good, and the more the better”) to estimate the relative importance of articles in scientific literature, is becoming less effective in this increasingly democratized world of online content. As long as uniformly edited content produced by media companies and other corporate entities dominated online content, the topological similarity of the web to the world of scientific literature was maintained sufficiently well. The explosion of unedited blogs, discussion fora, and wikis, with their “messier” hyperlink structure, is rapidly reducing this similarity, and also the effectiveness of standard PageRank-based filtering methods.

We assume a slightly modified Web infrastructure in which links have positive and negative weights, and show that this enables radically different and more effective approaches to page ranking and collaborative content filtering, leading to a vastly improved environment to incentivize content creation and co-operation on the World Wide Web, helping realize, in essence, a vastly more efficient information economy in today's online global village.

Arindam Chakrabarti
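One simple way to realize the signed-link idea sketched in the abstract above is a power iteration in which a negatively weighted (disapproving) link lowers the target's score. The following is an illustrative sketch only, not the authors' specific algorithm; the clamp-and-renormalize step is one of several possible ways to keep scores a distribution.

```python
def signed_rank(edges, n, damping=0.85, iters=50):
    """Power-iteration ranking over a graph whose edges carry signed
    weights. edges maps a source node to a list of (target, weight)."""
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - damping) / n] * n
        for src in range(n):
            out = edges.get(src, [])
            total = sum(abs(w) for _, w in out)
            if total == 0:
                continue                       # dangling node
            for dst, w in out:
                # negative weight subtracts rank mass from the target
                new[dst] += damping * rank[src] * w / total
        # clamp at zero and renormalize so scores stay a distribution
        new = [max(x, 0.0) for x in new]
        s = sum(new)
        rank = [x / s for x in new]
    return rank

# Pages 0 and 1 endorse page 2; page 2 disapproves of page 3.
edges = {0: [(2, 1.0)], 1: [(2, 1.0)], 2: [(3, -1.0)]}
r = signed_rank(edges, 4)
print(r[2] > r[3])   # the disapproved page ranks lower: True
```

Under standard (all-positive) PageRank, the link from the highly ranked page 2 would instead raise page 3's score, which is exactly the failure mode the abstract attributes to "all links are good".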
Experimental Evaluation of an eBay-Style Self-reporting Reputation Mechanism

We experimentally studied the effects of an eBay-style self-reporting reputation mechanism in a double-sided exchange economy in which participants have the option of not fulfilling their contracts. We found that submitted reports quite accurately reflected the underlying transactions and that the mechanism maintained a high contract fulfillment rate. The inaccurate reports, about 5% of the total, were heavily biased towards bad ratings when the transaction was successful. This is strong evidence that the inaccurate reports did not result from random errors, but derived from an underlying behavioral effect. Our experimental design allowed us to identify the effect of the reputation mechanism on endogenous market behavior.

Kay-Yut Chen, Tad Hogg
An Architecture for Evolutionary Adaptive Web Systems

This paper presents an architecture based on evolutionary genetic algorithms for generating online adaptive services. Online adaptive systems provide flexible services to a mass of clients/users in order to maximize some system goal; they dynamically adapt the form and content of the issued services as the population of clients evolves over time. The idea of online genetic algorithms (online GAs) is to use the online clients' response behaviour as a fitness function in order to produce the next generation of services. The principle implemented in online GAs, "the application environment is the fitness", allows modeling highly evolutionary domains where both service providers and clients change and evolve over time. The flexibility and adaptive behaviour of this approach seem very relevant and promising for applications characterized by highly dynamic features, such as the web domain (online newspapers, e-markets, websites, and advertising engines). Nevertheless, the proposed technique has a more general aim: application environments characterized by a massive number of anonymous clients/users who require personalized services, as in the case of many new IT applications.

Alfredo Milani, Stefano Marcugini
A Collaborative E-learning System Based on Multi-agent

Based on an analysis of the process of collaborative e-learning and the characteristics of multi-agent technology, this paper proposes a collaborative e-learning framework founded on multi-agent technology. It also provides the key technology to realize e-learning and the algorithm to search for collaborators, and further analyzes the system's operating mechanism, the multi-agent logical structure, agent capability, and time complexity. With this framework, it is easy to realize Web-based e-learning among several individuals and the transfer of knowledge between them, which improves e-learning efficiency and lifts the knowledge level of the whole Internet organization.

Jun Wang, Yong-Hong Sun, Zhi-Ping Fan, Yan Liu
A Class of Possibilistic Portfolio Selection Models and Algorithms

In this paper, a crisp possibilistic variance and a crisp possibilistic covariance of fuzzy numbers are defined, which differ from the ones introduced by Carlsson and Fullér. A possibilistic portfolio selection model is presented on the basis of the possibilistic mean and variance, under the assumption that the returns of assets are fuzzy numbers. In particular, Markowitz's probabilistic mean-variance model is replaced by a linear programming model when the returns of assets are symmetric fuzzy numbers. The possibilistic efficient frontier can be derived explicitly when short sales are not allowed on all risky assets and a risk-free asset exists.

Wei-Guo Zhang, Wen-An Liu, Ying-Luo Wang
An Empirical Study of Volatility Predictions: Stock Market Analysis Using Neural Networks

Volatility is one of the major factors that cause uncertainty in short-term stock market movement. Empirical studies based on stock market data were conducted to forecast volatility for the implementation and evaluation of statistical models with neural network analysis. The model for short-term stock exchange prediction uses neural networks for digital signal processing of filter bank computation. Our study shows that, for the set of four stocks monitored, the model based on moving average analysis provides reasonably accurate volatility forecasts over a range of fifteen to twenty trading days.

Bernard Fong, A. C. M. Fong, G. Y. Hong, Louisa Wong
A New Algorithm Based on Copulas for Financial Risk Calculation with Applications to Chinese Stock Markets

This paper concerns the application of copula functions in calculating financial market risk. The copula function is used to model the dependence structure of multivariate financial assets. After introducing the traditional Monte Carlo simulation method and the pure copula method, we present a new algorithm, named the mixture method, based on copula properties and the dependence measure Spearman's rho. This new method is used to simulate daily returns of two stock market indices in China, the Shanghai Stock Composite Index and the Shenzhen Stock Composite Index, and then to calculate six risk measures including VaR and conditional VaR. The results are compared with those derived from the traditional Monte Carlo method and the pure copula method. The comparison shows that for lower confidence levels the traditional Monte Carlo and pure copula methods perform better than the mixture method, while for higher confidence levels the mixture method is the better choice.

Ping Li, Peng Shi, Guang-Dong Huang
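The "traditional Monte Carlo" baseline that the abstract above compares against can be sketched as follows: simulate correlated normal returns for a two-asset portfolio and read VaR off the empirical loss quantile. This is a sketch of the baseline only (the copula mixture method itself is not reproduced here), and all parameter values are illustrative rather than fitted to the Chinese index data used in the paper.

```python
import math
import random

def mc_var(mu, sigma, rho, level=0.99, n=100_000, seed=1):
    """Empirical VaR at the given confidence level for an equally
    weighted two-asset portfolio with correlated normal returns."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        # Cholesky-style construction of correlation rho
        r1 = mu[0] + sigma[0] * z1
        r2 = mu[1] + sigma[1] * (rho * z1 + math.sqrt(1 - rho**2) * z2)
        losses.append(-(r1 + r2) / 2)          # portfolio loss
    losses.sort()
    return losses[int(level * n)]              # empirical quantile

# Illustrative daily return parameters for two equity indices.
var99 = mc_var(mu=(0.0005, 0.0004), sigma=(0.015, 0.017), rho=0.8)
print(round(var99, 4))
```

Conditional VaR, one of the paper's other risk measures, would simply be the mean of the losses beyond this quantile.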
Design of Anonymity-Preserving User Authentication and Key Agreement Protocol for Ubiquitous Computing Environments

The spread of mobile devices, PDAs, and sensors has enabled the construction of ubiquitous computing environments, transforming regular physical spaces into "smart spaces" augmented with intelligence and enhanced with services. However, the deployment of this computing paradigm in real life is hindered by poor security, particularly the lack of proper authentication, authorization, and key management techniques. It is also very important not only to provide security measures but also to preserve user privacy in ubiquitous computing environments. In this paper, we propose an efficient user authentication and key agreement protocol with anonymity for privacy preservation in ubiquitous computing environments. Our scheme is suitable for distributed environments with computationally constrained devices through the use of a MAC-based security association token.

Myung-hee Kang, Hwang-bin Ryou, WoongChul Choi
An Efficient Identity-Based Key Exchange Protocol with KGS Forward Secrecy for Low-Power Devices

For an ID-based key exchange (KE) protocol, KGS forward secrecy is about the protection of previously established session keys after the master secret key of the Key Generation Server (KGS) is compromised. This is the strongest notion of forward secrecy that one can provide for an ID-based KE protocol. Among all the comparable protocols, there are only a few of them providing this level of forward secrecy and all of these protocols require expensive bilinear pairing operations and map-to-point hash operations that may not be suitable for implementation on low-power devices such as sensors. In this paper, we propose a new ID-based KE protocol which does not need any pairing or map-to-point hash operation. It also supports the strongest KGS forward secrecy. On its performance, we show that it is faster than previously proposed protocols in this category. Our protocol is signature-based in which the signature scheme is a variant of a scheme proposed by Bellare et al. in Eurocrypt 2004. We show that the variant we proposed is secure and also requires either less storage space or runtime computation than the original scheme.

Robert W. Zhu, Guomin Yang, Duncan S. Wong
To Build a Blocklist Based on the Cost of Spam

We study the objective of spam based on a financial profit truth: the cost of sending spam against anti-spam techniques must be less than the return from the negligible response of the recipients. Spammers have to leave some real contact information in the spam so that recipients can contact them easily, no matter what methods they use to fight anti-spam techniques. In this paper, we present a method to automatically identify such contact information entities in spam, and to build an online blocklist for spam filters to classify spam, especially unsolicited commercial email (UCE).

Hung Chim
Total Dominating Set Games

In this paper, we consider cooperative games arising from the total domination problem on graphs. We introduce two games, the rigid total dominating set game and the relaxed total dominating set game, and focus on their cores. First, a common necessary and sufficient condition for the balancedness of the two total dominating set games is obtained. Next, computational issues concerning the cores are discussed. We prove that, in general, the problems of testing balancedness and testing membership of the core are $\cal {NP}$-hard for both total dominating set games.

Qizhi Fang, Hye Kyung Kim, Dae Sik Lee
Local Flow Betweenness Centrality for Clustering Community Graphs

The problem of information flow is studied to identify de facto communities of practice from tacit knowledge sources that reflect the underlying community structure, using a collection of instant message logs. We characterize and model the community detection problem using a combination of graph theory and ideas of centrality from social network analysis. We propose, validate, and develop a novel algorithm to detect communities based on the computation of Local Flow Betweenness Centrality (LFBC). Using LFBC, we model the weights on the edges of the graph so that we can extract communities. We also show how to compute LFBC efficiently on relevant edges without having to recalculate the measure for each edge in the graph during the process. We validate our algorithms on a corpus of instant messages that we call MLogs. Our results demonstrate that MLogs are a useful source for community detection and can augment the study of collaborative behavior.

Franco Salvetti, Savitha Srinivasan
Computerized Collaborative Support for Enhancing Human’s Creativity for Networked Community

The rapid development of computers and the Internet enables people who have common interests to build loosely coupled networked communities. Stimulating individual creativity to augment a community's communicative and collaborative abilities for acquiring new ideas and knowledge is becoming a research highlight and attracting more and more experts' attention. In this paper, we focus on exploring effective computerized collaborative support for enhancing human creativity in networked communities. Versatile aids are explored, such as visualization of the expert opinion structure and clustering of contributed opinions for concept formation and idea/knowledge detection and growth, all integrated into a group argumentation environment (GAE) and illustrated with a simple example.

Yijun Liu, Xijin Tang
New Results on Online Replacement Problem

Replacement problems are extensively studied in a number of disciplines, including economics, finance, operations research, and decision theory. Much of the previous theoretical work is "Bayesian". In this paper, we restudy the online replacement problem using competitive analysis. The goal is to minimize the total cost of the cumulative payment flow plus changeover costs. First, a refusal strategy is proposed and the competitive ratio for k = 1 is obtained. Furthermore, a new time-independent strategy S_new is presented and we prove that it is r-competitive when M ∈ [c, d]. Finally, weights are introduced into the original model and some results are achieved.

Yinfeng Xu, Chunlin Xin, Fanglei Yi
Online Bin Packing of Fragile Objects with Application in Cellular Networks

We study a specific bin packing problem which arises from channel assignment problems in cellular networks. In cellular communications, frequency channels are a limited resource that may need to be shared by various users. However, in order to avoid signal interference among users, each user specifies the maximum number of other users that may share the channel, depending on the user's application. Under this setting, the problem of minimizing the total number of channels used to support all users can be modeled as the following bin packing problem: given a set of items, each with two attributes, weight and fragility, pack the items into bins such that, for each bin, the sum of the weights in the bin is at most the smallest fragility of all the items packed into the bin. The goal is to minimize the total number of bins (i.e., channels in the cellular network) used. We consider the online version of this problem, where items arrive one by one; the next item arrives only after the current item has been packed, and the decision cannot be changed. We show that the asymptotic competitive ratio is at least 2. We also consider the case where the ratio of the maximum fragility to the minimum fragility is bounded by a constant. In this case, we present a class of online algorithms with asymptotic competitive ratio at most 1.7r, for any r > 1.

Wun-Tat Chan, Francis Y. L. Chin, Deshi Ye, Guochuan Zhang, Yong Zhang
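The feasibility constraint in the abstract above (a bin's total weight may not exceed the minimum fragility of its items) can be illustrated with a plain first-fit baseline. This is a natural baseline for the online setting, not the paper's 1.7r-competitive algorithm.

```python
def first_fit_fragile(items):
    """Online first-fit for fragile objects. items is a sequence of
    (weight, fragility) pairs, packed in arrival order. An item joins
    the first bin where the new total weight stays at or below the
    minimum fragility of the bin including the new item."""
    bins = []                     # each bin: [total_weight, min_fragility]
    for weight, fragility in items:
        for b in bins:
            if b[0] + weight <= min(b[1], fragility):
                b[0] += weight
                b[1] = min(b[1], fragility)
                break
        else:                     # no feasible bin: open a new one
            bins.append([weight, fragility])
    return bins

# Two light, robust items share a bin; the heavy fragile item cannot
# join them (2 + 3 + 4 = 9 > fragility 4) and opens its own bin.
items = [(2, 10), (3, 10), (4, 4)]
print(len(first_fit_fragile(items)))   # 2
```

Note how fragility plays the role of a per-item bin capacity: adding a fragile item can shrink the effective capacity of the whole bin, which is what makes this variant harder than classical bin packing.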
Online Algorithms for the Vehicle Scheduling Problem with Time Objective

Confronted with the problem of unforeseen congestion events in logistic transportation, we build an online vehicle scheduling model with recoverable congested vertices in this paper. Our optimization objective is to minimize the total time of online scheduling. Considering that congested vertices occur dynamically, one by one, we first introduce three online scheduling strategies, i.e., the Greedy Strategy, the Reposition Strategy, and the Waiting Strategy, and analyze the advantages and disadvantages of their competitive performance. We then propose the Simple Choice Strategy. By competitive performance analysis, we conclude that the Simple Choice Strategy can be an optimal scheduling scheme for online vehicle transportation.

Maolin Hu, Yinfeng Xu, Weijun Xu
On Solving Coverage Problems in a Wireless Sensor Network Using Voronoi Diagrams

Owing to numerous potential applications, wireless sensor networks have been the focus of a lot of research efforts lately. In this note we study one fundamental issue in such networks, namely the coverage problem, in which we would like to determine whether a region of interest is sufficiently covered by a given set of sensors. This problem is motivated by monitoring applications in sensor networks, as well as robustness concerns and protocol requirements. We show that the coverage problem and some of its variants can be treated in a unified manner using suitable generalizations of the Voronoi diagram. As a result, we are able to give algorithms that have better runtimes than those proposed in previous works (see, e.g., [5, 6]). Our approach also yields efficient algorithms for coverage problems where the sensing region of a sensor is an ellipse or an L_p-ball, where p ≥ 1.

Anthony Man-Cho So, Yinyu Ye
A Computational Model of Mortgage Prepayment Options

This paper develops a computational model of residential mortgage prepayment over discrete time. The prepayment options at every payment date are embedded in the model using a real options approach and according to the Chinese residential mortgage loan payment schedule. The computational approach of the model is shown in the paper. As applications of the model, a forecast of the bank's cash flow and a penalty strategy designed against prepayment behavior under the prepayment option are given.

Ming Yang, Renfeng Xiao
Efficient Algorithms for the Electric Power Transaction Problem

We present two efficient algorithms for solving the electric power transaction problem. The electric power transaction problem appears when maximizing the social benefit on electric power transactions among some private companies. The problem is a special case of the minimum cost flow problem defined on a network with many leaves, where each leaf corresponds to a (private) company who wants to sell or buy electric power.

Our first algorithm is based on the minimum mean cycle canceling algorithm, and the second algorithm uses a linear-time median finding algorithm. The first algorithm finds an optimal solution in O(n log n k^5 log(kC)) time, where n is the number of leaves, k is the number of non-leaf vertices, and C is the highest electric power price per unit that companies may offer. The time complexity of the second algorithm is bounded by O((n + k^3) 2^k k!), which is linear in n. In many practical instances, k is small and n is very large, hence these algorithms solve the problem more efficiently than ordinary network flow algorithms.

Masashi Kiyomi, Takeaki Uno, Tomomi Matsui
Looking for Arbitrage or Term Structures in Frictional Markets

In this paper we consider a frictional market with finitely many securities and finite, discrete future times. The frictions under consideration include fixed and proportional transaction costs, bid-ask spreads, and taxes. In such a market, we find that the existence of an arbitrage opportunity does not depend on the fixed transaction costs. Under a reasonable assumption, no-arbitrage is equivalent to the condition that the optimal value of a certain linear programming problem is zero, and to the existence of a so-called consistent term structure. These results permit us to identify and find arbitrage and consistent term structures in polynomial time. Two linear programming problems are proposed, each of which can identify and find the arbitrage opportunity or the consistent term structure if either exists.

Zhongfei Li, Kai W. Ng
A Framework on Compound Knowledge Push System Oriented to Organizational Employees

Organizational employees have different knowledge demands, and the knowledge is compound, so how to push the right compound knowledge to the right organizational employees becomes important. This paper proposes a framework for a compound knowledge push system for organizational employees to solve this problem. First, the compound push mechanism is given based on an analysis of the knowledge needs of organizational employees. Second, after introducing the key information technologies, the framework is presented along with the function of each component of the system. Finally, an application case is studied to illustrate the operating mechanism of the compound knowledge push system based on the framework. Applying the system can give employees all-around knowledge backing and will enhance the knowledge management level of organizations.

Zhi-Ping Fan, Yong Feng, Yong-Hong Sun, Bo Feng, Tian-Hui You
Effective Decision Making by Self-evaluation in the Multi-agent Environment

Generally, in multi-agent systems, there are close relations between the behavior of each individual agent and the group of agents as a whole, so certain information about the relative state of each agent in the group may be hidden in each agent's behavior. If this information can be extracted, each agent has the possibility of improving its state by observing only its own behavior, without observing other agents' behaviors. In this paper, we focus on the "power law", an interesting characteristic seen in the behavior of each node of various kinds of networks, as one piece of such information. We have already found that the power law can be seen in efficiently behaving agents in the Minority Game, a competitive multi-agent simulation environment. In this paper we therefore verify whether it is possible for each agent in the game to improve its state by observing only its own behavior, and confirm that this performance gain is actually possible.

Keywords: Minority game, indirect coordination, power law.

Satoshi Kurihara, Kensuke Fukuda, Shihya Sato, Toshiharu Sugawara
Overlay Based Mapping Egress Service Path Between MPLS Domains

Mapping the egress service path is a critical issue for interdomain service routing across MPLS domains. Despite being the de facto interdomain routing standard, BGP cannot effectively fulfil interdomain traffic engineering. To solve the problem, we set up an overlay model and formulate the problem as a Markov Decision Problem. According to the model, we obtain the relevant network resource state through service node measurement on a common measurement platform. Using the service state, we design a related cost function. To simplify the problem, we decompose the Markov Decision Problem and design a heuristic proportional mapping algorithm.

Chongying Cao, Jing Yang, Guoqing Zhang
An Adaptive Group-Based Reputation System in Peer-to-Peer Networks

As more and more P2P applications become popular on the Internet, one important problem to be solved is inspiring users to cooperate with each other actively and honestly; reputation mechanisms, a hot spot of P2P research, have been proposed to achieve this. Because of the virtual and anonymous nature of the network, it is very easy for users with bad reputations to reenter the system under new identities and regain fresh reputations. To eliminate the impact of such whitewashers and improve system performance and efficiency, we propose a new probability-based adaptive initial reputation mechanism, in which newcomers are trusted based on the system's trust probability, which is adjusted according to the newcomers' actions. To prevent the system from fluctuating due to the actions of a few whitewashers, we realize the new reputation mechanism in a system with a group-based architecture, which localizes the impact of whitewashers to their own groups. Both performance analysis and simulation show that this new adaptive reputation mechanism is more effective.

Liang Sun, Li Jiao, Yufeng Wang, Shiduan Cheng, Wendong Wang
A Comparative Study on Marketing Mix Models for Digital Products

The rise of the Internet and electronic commerce provides a huge marketspace and a unique transaction process for digital products. It is significant to discuss whether established marketing models can be revised for digital products. First, the unique features of digital products are systematically reviewed; then three typical digital products, namely e-books, anti-virus software, and online translation services, are analyzed and compared using three established marketing models: 4P, 4C, and 4S. We find that these marketing mix models differ in their suitability for the three typical digital products. The intention of this paper is to provide a reference for enterprises in selecting a marketing mix model according to product category, and to provide a marketing strategy tool kit.

KanLiang Wang, Yuan Wang, JingTao Yao
Optimal Pricing for Web Search Engines

Research and development on web search engines has been one of the hottest topics since the Internet came into popular use. Accordingly, how to price such software is an important problem that remains unsatisfactorily solved. Using the principal-agent approach from economics, a pricing model for web search engines is established based on the theory of Brownian motion with drift. The stopping time is defined for the model, and the expected benefit of the web-search-engine provider over the rental horizon is derived, with outright sale as a special case. By maximizing this benefit, the optimal price for an outright sale, the optimal monthly rental, and the optimal selling price after the search engine has been rented for a period of time are discussed. Finally, simulations are run for given sets of parameters, and the results show that the optimal prices are reasonable.

Xun Liang, Yang-Bo He
Pricing Strategy of Mixed Traditional and Online Distribution Channels Based on Stackelberg Game

Pricing strategies based on Stackelberg competition between a manufacturer and a distributor are discussed under the condition that both traditional and online channels exist. A system consisting of a manufacturer, a distributor, and customers is considered, in which the manufacturer can sell products to the distributor, who in turn sells them to customers through the traditional channel, or the manufacturer can transact directly with customers electronically. The manufacturer and the distributor each build models with the objective of maximizing expected profit. The manufacturer regards the prices of products sold to the distributor through the traditional and online channels as decision variables, while the distributor’s decision variable is the price of products sold to customers; the distributor’s decision variable is a function of the manufacturer’s decision variables. Finally, numerical examples are used to show the application of the pricing models.
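As a toy illustration of the backward-induction logic behind Stackelberg pricing (not the authors’ actual model), the following sketch assumes a single channel with linear demand q = a − b·p: the distributor’s retail price is a best response to the manufacturer’s wholesale price, and the manufacturer optimizes anticipating that response. All functional forms and parameter values here are invented for illustration.

```python
# Toy Stackelberg pricing: manufacturer (leader) sets wholesale price w,
# distributor (follower) sets retail price p; demand is q = a - b*p.
# Functional forms and numbers are illustrative assumptions only.

def follower_price(w, a, b):
    # Distributor maximizes (p - w) * (a - b*p); the first-order condition gives:
    return (a + b * w) / (2 * b)

def leader_profit(w, a, b, c):
    # Manufacturer earns (w - c) per unit, anticipating the follower's response.
    p = follower_price(w, a, b)
    return (w - c) * (a - b * p)

def stackelberg_prices(a=100.0, b=1.0, c=10.0):
    # Grid-search the leader's wholesale price over [c, a/b]
    # (the closed form is w* = (a + b*c) / (2*b)).
    best_w = max((c + i * (a / b - c) / 10000 for i in range(10001)),
                 key=lambda w: leader_profit(w, a, b, c))
    return best_w, follower_price(best_w, a, b)

w_star, p_star = stackelberg_prices()
print(w_star, p_star)  # wholesale ~55.0, retail ~77.5 for these toy parameters
```

Note how the follower’s decision variable is a function of the leader’s, exactly the structure the abstract describes.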

Li-jun Li, Xiao-yuan Huang, Li-ping Yu, Bo Xu
Packing Trees in Communication Networks

Given an undirected edge-capacitated graph and a collection of subsets of vertices, we consider the problem of selecting a maximum (weighted) set of Steiner trees, each spanning a given subset of vertices, without violating the capacity constraints. We give an integer linear programming (ILP) formulation, and observe that its linear programming (LP-) relaxation is a fractional packing problem with exponentially many variables and with a block (sub-)problem that cannot be solved in polynomial time. To this end, we take an r-approximate block solver to develop a (1 − ε)/r approximation algorithm for the LP-relaxation. The algorithm has a polynomial coordination complexity for any ε ∈ (0,1). To the best of our knowledge, this is the first approximation result for fractional packing problems with only approximate block solvers and a coordination complexity that is polynomial in the input size and 1/ε. This leads to an approximation algorithm for the underlying tree packing problem. Finally, we extend our results to an important multicast routing and wavelength assignment problem in optical networks, where each Steiner tree is also to be assigned one of a limited set of given wavelengths, so that trees crossing the same fiber are assigned different wavelengths.

Mohamed Saad, Tamás Terlaky, Anthony Vannelli, Hu Zhang
Truthful Algorithms for Scheduling Selfish Tasks on Parallel Machines

We consider the problem of designing truthful mechanisms for scheduling selfish tasks (or agents), whose objective is the minimization of their completion times, on parallel identical machines in order to minimize the makespan. A truthful mechanism can easily be obtained in this context (assuming, of course, that the tasks cannot shrink their lengths) by scheduling the tasks in increasing order of their lengths. The quality of a mechanism is measured by its approximation factor (price of anarchy, in a distributed system) w.r.t. the social optimum. The previous mechanism, known as SPT, produces a (2 − 1/m)-approximate schedule, where m is the number of machines. The central question in this paper is the following:

“Are there other truthful mechanisms with a better approximation guarantee (price of anarchy) for the considered scheduling problem?”

This question was raised by Christodoulou et al. [1] in the context of coordination mechanisms, but it is also relevant in centrally controlled systems. We present (randomized) truthful mechanisms for both the centralized and the distributed settings that improve the (expected) approximation guarantee (price of anarchy) of the SPT mechanism. Our centralized mechanism holds for any number of machines and arbitrary schedule lengths, while the coordination mechanism holds only for two machines and schedule lengths that are powers of a certain constant.

Eric Angel, Evripidis Bampis, Fanny Pascual
Incentive Compatible Multiagent Constraint Optimization

We present in this paper an incentive-compatible distributed optimization method applied to social choice problems. The method works by computing and collecting VCG taxes in a distributed fashion, which introduces a certain resilience to manipulation by the problem-solving agents. An extension of this method sacrifices Pareto-optimality in favor of budget balance: the solutions chosen are no longer optimal, but the advantage is that the self-interested agents pay the taxes among themselves, thus producing no tax surplus. This eliminates unwanted incentives for the problem-solving agents, ensuring their faithfulness.

Adrian Petcu, Boi Faltings
Design of Incentive Compatible Mechanisms for Stackelberg Problems

This paper takes the first steps towards designing incentive compatible mechanisms for hierarchical decision making problems involving selfish agents. We call these Stackelberg problems. These are problems where the decisions or actions in successive layers of the hierarchy are taken in a sequential way while decisions or actions within each layer are taken in a simultaneous manner. There are many immediate applications of these problems in distributed computing, grid computing, network routing, ad hoc networks, electronic commerce, and distributed artificial intelligence. We consider a special class of Stackelberg problems called SLRF (Single Leader Rest Followers) problems and investigate the design of incentive compatible mechanisms for these problems. In developing our approach, we are guided by the classical theory of mechanism design. To illustrate the design of incentive compatible mechanisms for Stackelberg problems, we consider first-price and second-price electronic procurement auctions with reserve prices. Using the proposed framework, we derive some interesting results regarding incentive compatibility of these two mechanisms.

Dinesh Garg, Yadati Narahari
Proportional QoS in Differentiated Services Networks: Capacity Management, Equilibrium Analysis and Elastic Demands

The Differentiated Services (Diffserv) architecture is a scalable solution for providing Quality of Service (QoS) over packet-switched networks. By its very definition, Diffserv is not intended to provide strict performance guarantees to its subscribers. We propose in this paper a particular form of relative performance guarantees. Specifically, the network manager’s goal is to maintain pre-defined ratios between common congestion measures over the different service classes. We assume that each service class is advertised at a constant price. Thus, in order to induce its goal, the manager dynamically allocates available capacity between the service classes. This scheme is studied within a network flow model with self-optimizing users, where each user can choose the amount of flow to ship on each service class according to its service utility and QoS requirements. We pose the entire problem as a non-cooperative game. Concentrating on a simplified single-link model with multiple service classes, we establish the existence and uniqueness of the Nash equilibrium at which the relative performance goal is obtained. Accordingly, we show how to compute and sustain the required capacity assignment. The extension to a general network topology is briefly outlined.

Ishai Menache, Nahum Shimkin
Can “Bill-and-Keep” Peering Be Mutually Beneficial?

We analyze “Bill-and-Keep” peering between two providers, where no money changes hands. We assume that each provider incurs costs from its traffic traversing both its own and its peer’s links, and compute the traffic levels in Nash equilibrium. We show that Nash strategies are not blind, i.e., they are neither pure hot-potato nor pure cold-potato strategies. Rather, the Nash strategies involve strategically splitting traffic between a provider’s own links and its peer’s. We derive necessary and sufficient conditions for both providers to be better (or worse) off in Nash equilibrium compared to the blind strategies. We also analyze the performance of society as a whole and derive necessary and sufficient conditions for society to be better (or worse) off. In particular, we establish that, under Bill-and-Keep peering, while it is not possible for two asymmetric providers to both be worse off, it is certainly possible for both to be better off.

Gireesh Shrimali, Sunil Kumar
Design of P2P Grid Networking Architecture Using k-Redundancy Scheme Based Group Peer Concept

Currently, there are many system requirements to guarantee scalability, fault tolerance, and high performance in rapidly evolving research fields such as Grid and P2P technologies. Due to the dynamic nature of Grid computing and P2P networks, the behavior of the system can be very unpredictable with respect to reliability. In this paper, we propose a Group Peer based P2P Grid system architecture with a k-Redundancy Scheme that can satisfy these requirements. With respect to load balancing, this P2P Grid system clearly shows that the group-peer redundancy scheme guarantees reliability for resource and service discovery. Through theoretical analysis and simulation, we discuss the performance of the proposed scheme.

Yong-Hyuk Moon, Byung-Sang Kim, Chan-Hyun Youn
An Integrated Classification Method: Combination of LP and LDA

Behavior analysis of credit cardholders is one of the main research topics in credit card portfolio management. Usually, cardholder behavior, especially bankruptcy, is measured by a score over aggregate attributes that describe the cardholder’s spending history. In real-life practice, statistics and neural networks are the major tools for calculating such a scoring system for prediction. Recently, various multiple-linear-programming-based classification methods have been promoted for analyzing credit cardholders’ behavior. As a continuation of this research direction, this paper proposes an integrated classification method using fuzzy linear programming (FLP) with a moving boundary and Fisher linear discriminant analysis (LDA). This method can improve the accuracy rate in theory. In the end, a real-life credit database from a major US bank is used as an example to explain the idea.

Aihua Li, Yong Shi
Cooperation Evolution of Indirect Reciprocity by Discrimination

Cooperation is an important characteristic of human behaviour. Unlike most creatures, people frequently cooperate with genetically unrelated strangers, often in large groups, with people they will never meet again. These patterns of cooperation can be explained by the theory of indirect reciprocity: cooperation appears because it confers the image of a valuable community member. In a sense, discriminating a valuable member is a prerequisite for cooperation. Using an analytic model and computer simulations, we show that the essence of the cooperation mechanism consists of discriminators and punishment. In particular, we show that discriminators of different grades have dissimilar effects.

Peng Du, Yanmei Li, Weixuan Xu
Fixed-Point Model and Schedule Reliability of Morning Commuting in Stochastic and Time-Dependent Transport Networks

This paper presents a fixed-point model for studying morning commuting behavior in stochastic and time-dependent transport networks. The model considers the network uncertainty caused by supply and demand variations as well as the commuters’ perception errors of travel disutility. A new performance index called schedule reliability, i.e., the probability that commuters can arrive at the destination on time or within a given early or late time threshold, is introduced to evaluate the network service level. A solution algorithm that combines Monte Carlo simulation with the method of successive averages is proposed, together with a numerical example for demonstration.
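The method of successive averages (MSA) combined with Monte Carlo sampling, which the abstract names as the solution approach, can be sketched generically: at each step, one noisy evaluation of the mapping is averaged into the current iterate with a decaying weight. The toy mapping below (expected value 0.5x + 2, hence fixed point x* = 4) is an invented stand-in for the stochastic network-loading step, not the paper’s model.

```python
import random

def msa_fixed_point(sample_map, x0=0.0, iters=20000, seed=42):
    # Method of successive averages: x_{k} = x_{k-1} + (y_k - x_{k-1}) / k,
    # where y_k is one Monte Carlo sample of the stochastic mapping at x_{k-1}.
    rng = random.Random(seed)
    x = x0
    for k in range(1, iters + 1):
        y = sample_map(x, rng)
        x += (y - x) / k
    return x

# Toy stochastic mapping with expectation 0.5*x + 2 (true fixed point x* = 4).
toy = lambda x, rng: 0.5 * x + 2.0 + rng.uniform(-1.0, 1.0)
x_star = msa_fixed_point(toy)  # converges close to 4.0
```

The decaying 1/k step size is what lets the iteration average out the simulation noise while still converging to the fixed point.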

Zhi-Chun Li, Hai-Jun Huang
Computing Equilibria in a Fisher Market with Linear Single-Constraint Production Units

We study the problem of computing equilibrium prices in a Fisher market with linear utilities and linear single-constraint production units. This setting naturally appears in ad pricing where the sum of the lengths of the displayed ads is constrained not to exceed the available ad space. There are three approaches to solve market equilibrium problems: convex programming, auction-based algorithms, and primal-dual. Jain, Vazirani, and Ye recently proposed a solution using convex programming for the problem with an arbitrary number of production constraints. A recent paper by Kapoor, Mehta, and Vazirani proposes an auction-based solution. No primal-dual algorithm is proposed for this problem.

In this paper we propose a simple reduction from this problem to the classical Fisher setting with linear utilities and without any production units. Our reduction not only imports the primal-dual algorithm of Devanur et al. to the single-constraint production setting, but also: i) imports other simple algorithms, like the auction-based algorithm of Garg and Kapoor, thereby providing a simple insight behind the recent sophisticated algorithm of Kapoor, Mehta, and Vazirani, and ii) imports all the nice properties of the Fisher setting, for example, the existence of an equilibrium in rational numbers, and the uniqueness of the utilities of the agents at the equilibrium.

Kamal Jain, Mohammad Mahdian
Majority Equilibrium of Distribution Centers Allocation in Supply Chain Management

In this paper, we consider the distribution center allocation problem decided through an optimal utility value under the majority rule in supply chain management. A location of the distribution center is a majority-rule winner with optimal utility value if there is no other location in the network that more than half of the retailers would prefer, i.e., that has a better utility value than the winner. We define a weight function and establish network models for the cases with one or more distribution centers to be located. We show that there exists a modified weak quasi-Condorcet winner if the distribution center allocation graph model is a tree. Based on the above discussion, we propose a practical majority equilibrium method for general distribution center allocation problems.

Lihua Chen, Qiwen Wang
Traversal Pattern Mining in Web Environment

There has been research on analyzing the information retrieval patterns in log files to obtain users’ information search patterns in the web environment. Mainly, algorithms that find frequently traversed path patterns from search path inputs have been suggested. But one problem with the existing work is that it provides inadequate solutions for complex, that is, general topological patterns. This paper suggests an efficient algorithm for deriving maximal frequent traversal patterns from general paths.

MinSeok Jang, Weon-Goo Kim, Yon-sik Lee, Jongwook Woo
An Analysis of Search Engine Switching Behavior Using Click Streams

In this paper, we propose a simple framework to characterize the switching behavior between search engines based on user click stream data. We cluster users into a number of categories based on their search engine usage pattern during two adjacent time periods and construct the transition probability matrix across these usage categories. The principal eigenvector of the transposed transition probability matrix represents the limiting probabilities, i.e., the proportions of users in each usage category at steady state. We experiment with this framework using real click stream data focusing on two search engines: one with a large market share and another with a small market share. The results offer interesting insights into search engine switching. The limiting probabilities provide empirical evidence that small engines can still retain their fair share of users over time.
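The limiting probabilities the abstract refers to can, for an ergodic chain, be obtained by power iteration on the transition matrix, which converges to the principal eigenvector of its transpose. The 2-category matrix below is an invented toy example, not the paper’s data.

```python
def limiting_probabilities(P, iters=500):
    # Repeatedly apply pi <- pi * P (row-vector form); for an ergodic chain
    # this converges to the stationary distribution, i.e. the principal
    # eigenvector of P^T normalized to sum to one.
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Toy 2-category usage-transition matrix (rows sum to 1; numbers invented):
# category 0 = "mostly engine A", category 1 = "mostly engine B".
P = [[0.9, 0.1],
     [0.5, 0.5]]
pi = limiting_probabilities(P)  # ~[0.833, 0.167]: the small engine keeps a share
```

Even though category 1 loses half its users each period, inflow from category 0 sustains a nonzero steady-state share, which is the qualitative point the abstract makes about small engines.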

Yun-Fang Juan, Chi-Chao Chang
Study on Improving Efficiency of Knowledge Sharing in Knowledge-Intensive Organization

Knowledge sharing is an important part of knowledge management. The value of knowledge can only be realized in its transfer, sharing, and utilization. The transfer of individual tacit knowledge into organizational capacity can improve the competitiveness of an organization. However, there are many barriers to knowledge sharing, especially in knowledge-intensive organizations, where knowledge is very important for individuals to keep their advantage, so they are usually unwilling to share it with others or to ‘contribute’ their personal knowledge to the organization. At present, most studies emphasize how to create a good physical environment and platform to improve sharing. In this paper, we analyze the problem using game theory. We argue that to improve the efficiency of knowledge sharing, the motivation mechanism and the physical platform must be improved simultaneously; in knowledge-intensive organizations in particular, the motivation mechanism is the more important of the two. Then, addressing the ‘prisoner’s dilemma’ between individuals in knowledge sharing, this paper shows how to design motivation methods to improve its efficiency.

Lingling Zhang, Jun Li, Yong Shi
Traffic Models for Community-Based Ranking and Navigation

We investigate multinomial mixture traffic models for community-based ranking and navigation. A “highway” model of source-destination traffic is formulated which aggregates the traffic through an underlying network of highways, onramps, and offramps. The model extracts community structure from source-destination traffic information, and in addition captures the aggregate “highway” traffic between the communities. This important distinction extends the highway traffic analysis beyond clustering. The analysis discovers communities, rankings of the communities, rankings of destinations within each community, and transition probabilities between the communities. The model can be used for community-based collaborative filtering when applied to similarity graphs for music or movies.

Juan K. Lin
On the Efficacy of Detecting and Punishing Selfish Peers

We study the phenomenon of free-riding in peer-to-peer (P2P) systems via an abstract model of a public good provision game with incomplete information. Each user in the model has an intrinsic contribution cost and decides whether to contribute or free-ride based on the expected benefit derived from the system. We first consider the impact of positive network externalities—common in P2P settings—on the equilibrium level of free riding and show that such network effects reduce free riding slightly but are insufficient to prevent it. We then consider the use of an incentive mechanism based on the detection and punishment of selfish peers, explicitly modelling key design tradeoffs inherent in any realistic detection mechanism. We show that detection and punishment can reduce free riding, but that the risk of falsely accusing cooperative peers can diminish their effectiveness. Finally, we consider what level of detection would maximize the social welfare of the network. We find that a surprisingly low level of detection can be optimal and that the residual level of free riding at optimum depends critically on the overhead of detecting selfishness and the probability of falsely identifying cooperative peers.

Byung-Cheol Kim, Jonathan K. Shapiro
Semantic Web Recommender System Based Personalization Service for User XQuery Pattern

Semantic Web recommender systems are more complex than traditional recommender systems in that they raise many new issues, such as user profiling and navigation patterns. Semantic Web based recommender service aims at combining the two fast-developing research areas of the Semantic Web and user XQuery. Nevertheless, as the number of web pages increases rapidly, the problem of information overload becomes increasingly severe when browsing and searching the World Wide Web. To solve this problem, personalization has become a popular way to customize the World Wide Web environment towards a user’s preference. The idea is to improve recommender service on the Web by analyzing user query patterns and to make use of them in building up the Semantic Web. In this paper, we present a user XQuery method for personalization service using the Semantic Web.

JinHong Kim, EunSeok Lee
Multi-unit Combinatorial Reverse Auctions with Transformability Relationships Among Goods

In this paper we extend the notion of multi-unit combinatorial reverse auctions by adding a new dimension to the goods at auction. In this new type of combinatorial auction, a buyer can express transformability relationships among goods: some goods can be transformed into others at a transformation cost. Transformability relationships allow a buyer to encode his information as to whether it is more convenient to buy some goods or others. We introduce this information into the winner determination problem (WDP) so that the auction not only helps allocate the optimal set of offers, taking transformability relationships into account, but also assesses the transformability relationships that apply. In this way, the buyer finds out what goods to buy, from whom, and what transformations to apply to the acquired goods in order to obtain the required ones.

Andrea Giovannucci, Juan A. Rodríguez-Aguilar, Jesús Cerquides
Winner Determination in Discount Auctions

Discount auctions are a market mechanism for buying heterogeneous items in a single auction. The bidders are suppliers, and a bid consists of an individual cost for each of the items and a non-decreasing discount function defined over the number of items. The winner determination problem faced by the buyer is to determine the winning suppliers and their corresponding winning items. We show that this problem is NP-hard via a reduction from the set covering problem. The problem has an embedded network structure, which is exploited to develop heuristics and an exact branch-and-bound algorithm. Computational experiments were performed to evaluate the proposed algorithms.

Sampath Kameshwaran, L. Benyoucef, X. Xie
On the Competitive Ratio of the Random Sampling Auction

We give a simple analysis of the competitive ratio of the random sampling auction from [10]. The random sampling auction was first shown to be worst-case competitive in [9] (with a bound of 7600 on its competitive ratio); our analysis improves the bound to 15. In support of the conjecture that random sampling auction is in fact 4-competitive, we show that on the equal revenue input, where any sale price gives the same revenue, random sampling is exactly a factor of four from optimal.

Uriel Feige, Abraham Flaxman, Jason D. Hartline, Robert Kleinberg
The Pricing Strategies for Agents in Real E-commerce

The paper presents pricing strategies for automatic agents designed for real-world e-commerce. Given the characteristics of e-commerce in practice, the paper first models the bargaining process; second, it classifies the bargaining scenarios and presents the optimal bargaining strategy under sub-incomplete information for the corresponding scenarios, then extends these to optimal pricing strategies under incomplete information; third, it discusses the conditions for convergence of the optimal strategy. Finally, the analysis shows that the bargaining strategy profiles form a sequential equilibrium and that the agreement is unique under certain conditions.

Wang Xianjia, Chen Yalin, Zhu Wenqi
Why Do Information Gatekeepers Charge Zero Subscription Fees?

In the Internet market, the revenue of an information gatekeeper comes in two forms: advertising fees paid by firms who post their prices at the gatekeeper’s site, and subscription fees paid by consumers who obtain the list of prices charged by different firms from the gatekeeper’s site. By extending Varian’s (1980) Model of Sales, this paper establishes conditions for an information gatekeeper to offer totally free subscriptions to consumers.

Qianqin Chen, Wei-Guo Zhang, Guoliang Kuang
Cost-Driven Web Service Selection Using Genetic Algorithm

Web services composition has been one of the hottest research topics. But with the ever-increasing number of functionally similar web services being made available on the Internet, there is a need to distinguish them using a set of well-defined Quality of Service (QoS) criteria, and cost is the primary concern of many business processes. In this paper, we propose a new solution using a Genetic Algorithm (GA) for cost-driven web service selection. The GA is utilized to optimize a business process composed of many service agents (SAg), each corresponding to a collection of available web services provided by multiple service providers to perform a specific function. Service selection is an optimization process that takes into account the relationships among the services. Better performance is obtained using the GA than using a local service selection strategy, and the global optimal solution can also be achieved with proper GA parameters.
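A minimal GA sketch of this kind of cost-driven selection is given below, under the simplifying (invented) assumption that fitness is just the summed cost of one chosen provider per service agent, with no inter-service constraints; the cost matrix and GA parameters are illustrative only.

```python
import random

def select_services(cost, pop_size=20, gens=60, mut=0.2, seed=1):
    # cost[t][p]: price of provider p for service agent t (toy fitness = total cost).
    rng = random.Random(seed)
    n_tasks, n_prov = len(cost), len(cost[0])
    fitness = lambda ch: sum(cost[t][p] for t, p in enumerate(ch))
    # Chromosome: one provider index per service agent.
    pop = [[rng.randrange(n_prov) for _ in range(n_tasks)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        next_pop = pop[:2]                      # elitism: keep the two best
        while len(next_pop) < pop_size:
            a, b = rng.sample(pop[:10], 2)      # parents from the fitter half
            cut = rng.randrange(1, n_tasks)     # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < mut:              # point mutation
                child[rng.randrange(n_tasks)] = rng.randrange(n_prov)
            next_pop.append(child)
        pop = next_pop
    best = min(pop, key=fitness)
    return best, fitness(best)

cost = [[5, 2, 8],
        [3, 9, 4],
        [6, 1, 7]]          # invented prices; optimum is 2 + 3 + 1 = 6
best, total = select_services(cost)
```

In a realistic setting the fitness function would additionally encode the inter-service relationships the abstract mentions, which is what makes global GA selection outperform picking the locally cheapest provider per agent.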

Lei Cao, Minglu Li, Jian Cao
On Modeling Internet QoS Provisioning from Economic Models

The modeling of Internet quality of service (QoS) provisioning is a multidisciplinary research subject. From the viewpoint of game theory, we propose a model that combines a QoS index with a price factor. We use the MultiNomial Logit (MNL) model to describe the choice behavior of users. Each service class is considered an independent competing entity that aims at maximizing its own utility. Based on a noncooperative game, we demonstrate the existence and uniqueness of equilibria between QoS levels and prices among the various service classes, and establish the properties of these equilibria. We also verify these results via numerical analysis.

Yufeng Wang, Wendong Wang
Metadata and Information Asset for Infomediary Business Model on Primary Product Market

In a marketplace, classic product-based intermediaries are becoming less profitable, while the infomediary, an intermediary based on information, plays an important new role. To mediate meaningful and timely information among participants, it has to maintain a data warehouse with an information asset based on a new attribute, the metadata of physical transactions, as an alternative to conventional attributes in the relational data model. This metadata aggregates raw transactions more pertinently than current attributes do, and the corresponding information asset serves substantially in decision making. In this paper, we describe the constitution of metadata in the primary product market and the value of the multidimensional information asset for market participants. The application shows that each participant can increase its profit by using the information asset, and so can the infomediary by vending it.

Taehan Kim, Sang Chan Park
On Protection of Threatened Unstructured Overlays: An Economic Defense Model and Its Applications

The presence of power-law connectivity distributions and small-world characteristics in current unstructured overlay networks, so useful for speeding up communication, ironically also exposes some fatal topological weaknesses, e.g., extreme vulnerability to intentionally targeted intrusions, which seriously reduces their intrusion survivability. As a remedy, we propose in this paper a novel, generalized, and practical analytical formulation called the Economic Defense Model, to characterize intrusion spreading in these networks and provide guidelines for controlling epidemic outbreaks. Based on (but quite different from) existing methods, our model focuses on the two key concepts of efficiency and cost, giving deep insight into the role of topological properties such as scale-free behavior, small-world phenomena, and the statistical significance of both nodes and links during dynamic topology evolution over time. Moreover, we propose a novel economic defense strategy and then perform a case study to examine how efficiency and economy principles combine to shape epidemics and immunization in these overlays.

Xinli Huang, Fanyuan Ma, Wenju Zhang, Yin Li
Outsourcing Internet Security: Economic Analysis of Incentives for Managed Security Service Providers

Firms hesitate to outsource their network security to outside security providers (called Managed Security Service Providers, or MSSPs) because an MSSP may shirk secretly to increase profits. In economics this secret shirking behavior is commonly referred to as the moral hazard problem. There is a counter-argument that this moral hazard problem is not as significant for the Internet security outsourcing market because MSSPs work hard to build and maintain their reputations, which are crucial to surviving competition. Both arguments make sense and should be considered when writing a successful contract. This paper studies the characteristics of optimal contracts (payments to MSSPs) for the security outsourcing market by setting up an economic framework that combines both effects. It is shown that an optimal contract should be performance-based, and that the degree of performance dependence decreases as the reputation effect becomes more significant. We also show that if serving a large group of customers helps the provider improve service quality significantly (which is observed in the Internet security outsourcing market), an optimal contract should always be performance-based even if a strong reputation effect exists.

Wen Ding, William Yurcik, Xiaoxin Yin
Secure Construction of Virtual Organizations in Grid Computing Systems

Virtual organization (VO) is an important abstraction for designing large-scale distributed applications involving extensive resource sharing. Existing work on VOs mostly assumes that the VO already exists or is created by mechanisms outside of the system model. VO construction is challenging and critical due to its dynamic and distributed nature. This paper presents a VO construction model and an implementation algorithm based on a threshold approach; it is secure and robust in that events such as member admission, member revocation, and VO splitting and merging can be handled without centralized administration. In addition, authentication and communication among VO members are efficient, without the tedious key exchanges and key management usually needed in VOs built upon the Grid Security Infrastructure (GSI).

Bangyu Wu, Kwok-Yan Lam, Siu-Leung Chung, Huaxiong Wang
A Graph-Theoretic Network Security Game

Consider a network vulnerable to viral infection. The system security software can guarantee safety only to a limited part of the network. We model this practical network scenario as a non-cooperative multi-player game on a graph, with two kinds of players: a set of attackers and a protector player, representing the viruses and the system security software, respectively. Each attacker player chooses a node of the graph (or a set of them, via a probability distribution) to infect. The protector player chooses independently, in a basic case of the problem, a simple path or an edge of the graph (or a set of them, via a probability distribution) and cleans this part of the network of attackers. Each attacker wishes to maximize the probability of escaping its cleaning by the protector. In contrast, the protector aims at maximizing the expected number of cleaned attackers. We call the two games obtained from the two basic cases the Path model and the Edge model, respectively.

We are interested in the associated Nash equilibria of these games, where no network entity can unilaterally improve its local objective. We obtain the following results:

– The problem of the existence of a pure Nash equilibrium is $\cal NP$-complete for the Path model. This is in contrast to the Edge model, no instance of which possesses a pure Nash equilibrium, as proved in [4].

– We compute, in polynomial time, mixed Nash equilibria for several graph families: regular graphs, graphs that can be decomposed in polynomial time into vertex-disjoint r-regular subgraphs, graphs with perfect matchings, and trees.

– We utilize the notion of social cost [3] for measuring system performance in this scenario; here it is defined to be the utility of the protector. We prove that the corresponding Price of Anarchy in any mixed Nash equilibrium of the game is upper and lower bounded by a linear function of the number of vertices of the graph.

Marios Mavronicolas, Vicky Papadopoulou, Anna Philippou, Paul Spirakis
Nash Equilibria and Dominant Strategies in Routing

Nash equilibria and dominant strategies are two of the major approaches to dealing with selfishness in an autonomous system (AS), where each agent is a selfish entity.

In this paper, we consider the scenario in which the receiver(s) and the relay links are both selfish, which generalizes the previous scenarios in which either only the relay links or only the receivers are selfish. This also advances previous studies of routing by taking the budget balance ratio into account. We prove that no mechanism can achieve a budget balance ratio greater than $\frac{1}{n}$ when truthful revelation is a dominant strategy for each of the relay links and receivers; here, n is the number of vertices in the network. We also present a mechanism that achieves the budget balance ratio $\frac{1}{n}$ and is truthful for both the receivers and the relay links, which closes the bound. When we relax the truthfulness requirement to Nash equilibrium for relay links, we present a mechanism that achieves an asymptotically optimal budget balance ratio.

Weizhao Wang, Xiang-Yang Li, Xiaowen Chu
Atomic Selfish Routing in Networks: A Survey

In this survey we present some recent advances in the literature on atomic (mainly network) congestion games. The algorithmic questions that we are interested in have to do with the existence of pure Nash equilibria, the efficiency of their construction when they exist, as well as the gap of the worst/best (in general, mixed) Nash equilibria from the social optimum in such games, typically called the Price of Anarchy and the Price of Stability,
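One reason pure Nash equilibria exist in atomic congestion games is Rosenthal's potential-function argument: every improving move strictly decreases a potential, so best-response dynamics must terminate. As a minimal illustration (a generic toy instance, not code from the survey), the sketch below runs best-response dynamics on identical linear-latency resources:

```python
# Best-response dynamics in a toy atomic congestion game.
# Rosenthal's potential sum_e sum_{i=1}^{load(e)} c_e(i) strictly decreases
# with every improving move, so the dynamics terminate in a pure Nash
# equilibrium.
from itertools import count

cost = lambda load: load  # identical linear latency c_e(x) = x

def loads(profile, n_resources):
    ld = [0] * n_resources
    for r in profile:
        ld[r] += 1
    return ld

def best_response_dynamics(n_players=4, n_resources=2):
    profile = [0] * n_players          # everyone starts on resource 0
    for _ in count():
        improved = False
        for i in range(n_players):
            ld = loads(profile, n_resources)
            current = cost(ld[profile[i]])
            # cost player i would face after moving to resource r
            move_cost = lambda r: cost(ld[r] + (0 if r == profile[i] else 1))
            best_r = min(range(n_resources), key=move_cost)
            if move_cost(best_r) < current:
                profile[i] = best_r
                improved = True
        if not improved:
            return profile             # pure Nash equilibrium reached

eq = best_response_dynamics()
print(sorted(loads(eq, 2)))            # → [2, 2]: loads balance at equilibrium
```

With four players and two identical resources, the dynamics converge to the balanced profile with two players per resource, which is the unique pure-equilibrium load vector here.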

Spyros Kontogiannis, Paul Spirakis
Heuristic Approaches to Service Level Agreements in Packet Networks

Real-time multimedia applications on the Internet such as video and audio streaming, video conferencing, and online collaborations are expected to become increasingly popular. In order to guarantee effective support of these applications, we must be able to provide Quality of Service (QoS) guarantees such as network bandwidth and end-to-end delay, by incorporating session routing and resource reservation in admission control. In this work, we present the Utility Model, which provides an optimal allocation of resources for admission control while meeting the QoS requirements of admitted users’ sessions. We describe previous heuristics for solving the Utility Model. These heuristics, though, are not suitable for larger networks due to their computational complexity, which causes real-time decision-making and scalability issues. We propose a new approach to solving the Utility Model problem using a modified version of the Multicommodity Flow algorithm. This heuristic has improved computational complexity, and hence is capable of solving the Utility Model problem in lower overall execution time. This implies that admission control, which requires a real-time solution of the Model, can be extended to larger networks.

Louis L. Yu, Eric G. Manning, Kin F. Li
A Fixed Point Approach for the Computation of Market Equilibria

In proposing an open problem, Codenotti et al. [3, 5] conjectured that the welfare adjustment scheme can approximate the general market equilibrium by iteratively using an oracle for Fisher’s model. In this work, we analyze the scheme for a large class of market models. We show that the iterative step is in fact a Lipschitz continuous function and that a residue approximation of its fixed point is a good approximation of the market equilibrium price.

Li-Sha Huang
New Results on the Complexity of Uniformly Mixed Nash Equilibria

We are interested in the complexity of finding Nash equilibria with one uniformly mixed strategy (that is, equilibria in which at least one of the players plays a uniform probability distribution over some set of pure strategies). We show that, even in imitation bimatrix games, where one player has a positive payoff if he plays the same pure strategy as the opponent, deciding the existence of such an equilibrium is an NP-complete problem. We derive this result from the NP-completeness of graph-theoretic problems closely related to this class of equilibria.

Vincenzo Bonifaci, Ugo Di Iorio, Luigi Laura
Nash Equilibria in All-Optical Networks
(Extended Abstract)

We consider the problem of routing a number of communication requests in WDM (wavelength division multiplexing) all-optical networks from the standpoint of game theory. If we view each routing request (pair of source-target nodes) as a player, then a strategy consists of a path from the source to the target and a frequency (color). To reflect the restriction that two requests must not use the same frequency on the same edge, conflicting strategies are assigned a prohibitively high cost.

Under this formulation, we consider several natural cost functions focusing on the existence of Nash equilibria and on the complexity of recognizing and computing them.

George F. Georgakopoulos, Dimitris J. Kavvadias, Leonidas G. Sioutis
Price of Anarchy, Locality Gap, and a Network Service Provider Game

In this paper, we define a network service provider game. We show that the price of anarchy of the defined game can be bounded by analyzing a local search heuristic for a related facility location problem called the k-facility location problem. As a result, we show that the k-facility location problem has a locality gap of 5. This result is of interest in its own right. Our result gives evidence to the belief that the price of anarchy of certain games is related to the analysis of local search heuristics.

Nikhil Devanur, Naveen Garg, Rohit Khandekar, Vinayaka Pandit, Amin Saberi, Vijay Vazirani
Network Traffic Analysis and Modeling for Games

A number of papers analyzing the traffic data of network games and providing traffic models have recently been proposed. The traffic characteristics of network games differ across game genres, and the traffic is affected by the number of players and by in-game behaviors. In this paper, we develop a dedicated tool, NetGame Sniffer, for the measurement and analysis of network game traffic, and measure the traffic of Quake and World of Warcraft. We analyze and compare the traffic characteristics according to the number of players and user actions, and propose game traffic models and practical uses of those models. The analysis and models of network game traffic can be used for effective network simulation, performance evaluation, and the design of network games.

HyoJoo Park, TaeYong Kim, SaJoong Kim
Price of Anarchy of Network Routing Games with Incomplete Information

We consider a class of networks where n agents need to send their traffic from a given source to a given destination over m identical, non-intersecting, and parallel links. For such networks, our interest is in computing the worst-case loss in social welfare when a distributed routing scheme is used instead of a centralized one. For this, we use a noncooperative game model with the price of anarchy as the index of comparison. Previous work in this area makes the complete information assumption, that is, every agent knows deterministically the amount of traffic injected by every other agent. Our work relaxes this by assuming that the amount of traffic each agent wishes to send is known to the agent itself but not to the rest of the agents; each agent has a belief about the traffic loads of all other agents, expressed in terms of a probability distribution. In this paper, we first set up a model for such network situations; the model is a noncooperative Bayesian game with incomplete information. We study the resulting games using the solution concept of Bayesian Nash equilibrium and a representation called the type agent representation. We derive an upper bound on the price of anarchy for these games, taking the total expected delay experienced by all the agents as the social cost. It turns out that these bounds are independent of the belief probability distributions of the agents. This fact, in particular, implies that the same bounds must hold for the complete information case, which is confirmed by existing results in the literature on complete-information routing games.
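In the complete-information special case of this parallel-links model, the social cost is the expected maximum link load, and the gap to the optimum can be computed by brute force on small instances. A minimal sketch (a toy two-agent, two-link instance in the fully mixed equilibrium, not the paper's Bayesian setting):

```python
# Social cost (expected maximum load) of the fully mixed Nash equilibrium
# on two identical parallel links with two unit-traffic agents; the optimum
# assigns one agent per link for cost 1.
from itertools import product

n_agents, n_links = 2, 2
p = [[0.5, 0.5]] * n_agents   # fully mixed: each agent uniform over links

def expected_max_load(p):
    total = 0.0
    for choice in product(range(n_links), repeat=n_agents):
        prob = 1.0
        for i, link in enumerate(choice):
            prob *= p[i][link]
        link_loads = [choice.count(l) for l in range(n_links)]
        total += prob * max(link_loads)
    return total

nash_cost = expected_max_load(p)   # both collide with prob 1/2 -> cost 1.5
opt_cost = 1.0                     # one agent per link
print(nash_cost / opt_cost)        # → 1.5, the classic two-link ratio
```

The two agents collide on a common link with probability 1/2, giving expected maximum load 3/2 against an optimum of 1 — the well-known worst-case ratio of 3/2 for two identical links.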

Dinesh Garg, Yadati Narahari
General Equilibrium for Economies with Harmful Overconsumption

This paper studies general equilibrium models for economies with overconsumption goods. We consider two kinds of goods: a usual commodity (for usual consumption) and a harmful overconsumption commodity. Consumption of the usual commodity always increases an agent’s utility. For the harmful overconsumption commodity, however, once consumption reaches a critical point there is disutility from consuming more: overconsumption of this commodity is harmful, and the utility function is no longer monotonically increasing when overconsumption occurs. The existence of general equilibrium is established via truncated economies and Arrow-Debreu economies with usual utility functions. We provide a few examples to show general equilibria for various cases.

Hui Huang, Shunming Zhang
Expectations, Asymmetries, and Contributions

This paper analyzes voluntary contributions to a joint project among several asymmetric agents, together with the mechanism designers who interact with them. The equilibrium pattern of contributions may explain why the completion of a joint project proceeds step by step. We show that whenever the project is desirable, the project is completed, and in equilibrium each agent makes a large contribution. Recursive expected utility allows an agent to care about the timing of the resolution of uncertainty; it achieves this flexibility by building up preferences that depend on unrealized contingencies. We show that if an agent prefers resolution of uncertainty, then the most commonly used forms of such dependence each collapse to recursive expected utility; otherwise the agent must be quite inconsistent in her preferences. We consider a class of problems in which a single individual, the agent, controls a decision that has consequences for many individuals with differing preferences. The mechanism designers influence the agent’s decisions by actions chosen under imperfect information.

Jaroslav Zajac
A Quantile-Data Mapping Model for Value-at-Risk Based on BP and Support Vector Regression

A novel “Quantile predicting Return” method (Q → R) is presented to predict VaR (Value-at-Risk). In the paper, a nonlinear mapping between quantile and VaR is constructed using neural networks instead of traditional statistical methods. Two related models are proposed: QDMN (Quantile-Data Mapping Network), based on BP, and SVR-QD (Support Vector Regression Quantile-Data mapping), based on SVM. Neither model makes any distributional assumption. The operation mode and the reasonableness of measuring VaR using the two models are analyzed. QDMN and SVR-QD are applied to predict VaR on the Shanghai Stock Exchange, and the proposed methods are compared with a Monte Carlo approach. Numerical experiments based on Basle traffic lights, Proportion of Failures, and exception computing show that the two new models are superior in accuracy to the Monte Carlo approach, which rests on certain distributional assumptions.
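For context, the Monte Carlo baseline such methods are compared against amounts to estimating a return quantile from simulated returns. A minimal sketch (the normal return distribution and its parameters below are illustrative assumptions, not the paper's data):

```python
# Monte Carlo VaR baseline: simulate returns under an assumed distribution,
# then take the empirical quantile. VaR at level alpha is the loss threshold
# exceeded with probability alpha.
import random

random.seed(0)

def monte_carlo_var(alpha=0.05, n=100_000, mu=0.0, sigma=0.02):
    returns = sorted(random.gauss(mu, sigma) for _ in range(n))
    # alpha-quantile of the return distribution; VaR is its negation (a loss)
    return -returns[int(alpha * n)]

var_95 = monte_carlo_var()
print(round(var_95, 3))   # ≈ 1.645 * sigma ≈ 0.033 for N(0, 0.02) returns
```

The distributional assumption baked into the simulation is exactly what the quantile-data mapping models avoid: they learn the quantile-to-VaR relation from data rather than from an assumed return law.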

Xiaodao Wu, Yanfeng Sun, Yanchun Liang
Backmatter
Metadata
Title
Internet and Network Economics
Editors
Xiaotie Deng
Yinyu Ye
Copyright Year
2005
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-32293-1
Print ISBN
978-3-540-30900-0
DOI
https://doi.org/10.1007/11600930
