
About This Book

The papers in this volume were presented at the 12th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS), held September 20–22, 2010 at Columbia University, NYC, USA. The SSS symposium is an international forum for researchers and practitioners in the design and development of distributed systems with self-* properties: (the classical) self-stabilizing, self-configuring, self-organizing, self-managing, self-repairing, self-healing, self-optimizing, self-adaptive, and self-protecting. Research in distributed systems is now at a crucial point in its evolution, marked by the importance of dynamic systems such as peer-to-peer networks, large-scale wireless sensor networks, mobile ad hoc networks, cloud computing, robotic networks, etc. Moreover, new applications such as grid and web services, banking and e-commerce, e-health and robotics, aerospace and avionics, automotive, industrial process control, etc., have joined the traditional applications of distributed systems.

SSS started as the Workshop on Self-Stabilizing Systems (WSS), the first two of which were held in Austin in 1989 and in Las Vegas in 1995. Starting in 1995, the workshop began to be held biennially; it was held in Santa Barbara (1997), Austin (1999), and Lisbon (2001). As interest grew and the community expanded, the title of the forum was changed in 2003 to the Symposium on Self-Stabilizing Systems (SSS). SSS was organized in San Francisco in 2003 and in Barcelona in 2005. As SSS broadened its scope and attracted researchers from other communities, a couple of changes were made in 2006. It became an annual event, and the name of the conference was changed to the International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS).

Table of Contents

Frontmatter

Invited Talks Abstracts

Arcane Information, Solving Relations, and Church Censorship

The Church-Turing Thesis fails for problems that allow multiple answers: many easily solvable problems allow only non-recursive solutions. Its corrected version is: Physical and Mathematical Sequences have Little Common Information. This requires extending Kolmogorov’s concept of mutual information to infinite strings. This is tricky; the talk will survey these and other related issues. Related information can be found at http://arxiv.org/abs/cs.CC/0203029.

Leonid A. Levin

Computation of Equilibria and Stable Solutions

Many models from a variety of areas involve the computation of an equilibrium or stable solution of some kind. Examples include Nash equilibria in games; price equilibria in markets; optimal strategies and the values of competitive dynamic games (stochastic and other games); stable configurations of neural networks; analysis of stochastic models like branching processes and recursive Markov chains. It is not known whether these problems can be solved in polynomial time. Despite their broad diversity, there are certain common computational principles that underlie different types of equilibria and connect many of these problems to each other. In this talk we will discuss some of these common principles, the corresponding complexity classes that capture them, and their relationship with other open questions in computation.

Mihalis Yannakakis

A Geometry of Networks

This presentation will describe a coordinate geometry of networks and its applications to mobility, security, overlays and traffic management. Given a network graph, one can construct an abstract group and associate elements of the group with graph nodes to provide “coordinates”. Much like Cartesian coordinates, a route may be simply computed from the coordinates of the source and destination. Given a metric of link “length” (e.g., delay, utilization), one may easily select shortest-distance routes. Furthermore, this route selection may vary for each source-destination stream, or even for each packet. For example, unlike current networks, one may pursue dispersion routing, where a stream of packets is routed over multiple paths to load-balance traffic. The coordinate geometry admits dynamic topology changes and may thus be used to support various forms of mobility, including mobile ad-hoc networks, or dynamic deployment of virtual machines through cloud infrastructures. A given network may admit multiple independent geometry overlays. Each such overlay can serve as a VPN to protect access to respective nodes. Finally, geometry overlays may be flexibly layered over each other, permitting simple scalability, application-specific networks and flexible private networks.

Yechiam Yemini

Contributed Papers

Systematic Correct Construction of Self-stabilizing Systems: A Case Study

Design and implementation of distributed algorithms often involve many subtleties due to their complex structure, non-determinism, and low atomicity, as well as occurrence of unanticipated physical events such as faults. Thus, constructing correct distributed systems has always been a challenge and often subject to serious errors. We present a methodology for component-based modeling, verification, and performance evaluation of self-stabilizing systems based on the BIP framework. In BIP, a system is modeled as the composition of a set of atomic components by using two types of operators: interactions describing synchronization constraints between components, and priorities to specify scheduling constraints. The methodology involves three steps, illustrated using the distributed reset algorithm due to Arora and Gouda. First, a high-level model of the algorithm is built in BIP from the set of its processes by using powerful primitives for multi-party interactions and scheduling. Then, we use this model for verification of properties of a self-stabilizing algorithm. Finally, a distributed model which is observationally equivalent to the high-level model is generated.

Ananda Basu, Borzoo Bonakdarpour, Marius Bozga, Joseph Sifakis

A Fault-Resistant Asynchronous Clock Function

Consider an asynchronous network in a shared-memory environment consisting of n nodes. Assume that up to f of the nodes might be Byzantine (n > 12f), where the adversary is full-information and dynamic (sometimes called adaptive). In addition, the non-Byzantine nodes may undergo transient failures. Nodes advance in atomic steps, which consist of reading all registers, performing some calculation and writing to all registers.

The three main contributions of the paper are: first, the clock-function problem is defined, which is a generalization of the clock synchronization problem. This generalization encapsulates previous clock synchronization problem definitions while extending them to the current paper’s model. Second, a randomized asynchronous self-stabilizing Byzantine-tolerant clock synchronization algorithm is presented.

In the construction of the clock synchronization algorithm, a building block that ensures different nodes advance at similar rates is developed. This feature is the third contribution of the paper. It is self-stabilizing and Byzantine tolerant and can be used as a building block for different algorithms that operate in an asynchronous self-stabilizing Byzantine model.

The convergence time of the presented algorithm is exponential. Observe that in the asynchronous setting the best known full-information dynamic Byzantine agreement also has an expected exponential convergence time.

Ezra N. Hoch, Michael Ben-Or, Danny Dolev

Self-stabilizing Leader Election in Dynamic Networks

Three silent self-stabilizing asynchronous distributed algorithms are given for the leader election problem in a dynamic network with unique IDs, using the composite model of computation. A leader is elected for each connected component of the network. A BFS tree is also constructed in each component, rooted at the leader. This election takes O(Diam) rounds, where Diam is the maximum diameter of any component. Links and processes can be added or deleted, and data can be corrupted. After each such topological change or data corruption, the leader and BFS tree are recomputed if necessary. All three algorithms work under the unfair daemon.

The three algorithms differ in their leadership stability. The first algorithm, which is the fastest in the worst case, chooses an arbitrary process as the leader. The second algorithm chooses the process of highest priority in each component, where priority can be defined in a variety of ways. The third algorithm has the strictest leadership stability. If the configuration is legitimate, and then any number of topological faults occur at the same time but no variables are corrupted, the third algorithm will converge to a new legitimate state in such a manner that no process changes its choice of leader more than once, and each component will elect a process which was a leader before the fault, provided there is at least one former leader in that component.

Ajoy K. Datta, Lawrence L. Larmore, Hema Piniganti
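The stable outcome these algorithms converge to can be illustrated by a small, purely centralized sketch; it is not the paper's distributed protocol, and the function name and the use of minimum IDs as the priority are our own illustrative assumptions. Per connected component, the minimum-ID node becomes leader and a BFS tree is rooted at it:

```python
from collections import deque

def elect_and_bfs(adj):
    """For each connected component of an undirected graph (dict: node -> set
    of neighbours), elect the minimum-ID node as leader and return, for every
    node, its leader and its BFS parent in the tree rooted at that leader."""
    leader, parent = {}, {}
    for start in sorted(adj):
        if start in leader:
            continue
        # first unvisited node in sorted order is its component's minimum ID
        leader[start], parent[start] = start, None
        q = deque([start])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in leader:
                    leader[v], parent[v] = start, u
                    q.append(v)
    return leader, parent
```

For the two-component graph {1-2-3, 10-11}, nodes 1 and 10 are elected and the parent pointers form the two BFS trees.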

Loop-Free Super-Stabilizing Spanning Tree Construction

We propose a universal scheme to design loop-free and super-stabilizing protocols for constructing spanning trees that optimize any tree metric (not only those that are isomorphic to a shortest-path tree).

Our scheme combines a novel super-stabilizing loop-free BFS with an existing self-stabilizing spanning tree that optimizes a given metric. The composition result preserves the best properties of both worlds: super-stabilization, loop-freedom, and optimization of the original metric without any stabilization time penalty. As a case study we apply our composition mechanism to two well-known metric-dependent spanning trees: the maximum-flow tree and the minimum-degree spanning tree.

Lélia Blin, Maria Gradinariu Potop-Butucaru, Stephane Rovedakis, Sébastien Tixeuil

A New Technique for Proving Self-stabilizing under the Distributed Scheduler

Proving stabilization of a complex algorithm under the distributed scheduler is a non-trivial task. This paper introduces a new method which allows proofs for the central scheduler to be extended to the distributed scheduler. The practicability of the method is shown by applying it to two existing algorithms, for which stabilization under the distributed scheduler was an open problem.

Sven Köhler, Volker Turau

A Transformational Approach for Designing Scheduler-Oblivious Self-stabilizing Algorithms

The complexity of designing self-stabilizing systems is often compounded by the assumptions about the underlying schedulers. This paper presents a method to transform a self-stabilizing algorithm working under a given arbitrary, but potentially very restrictive, scheduler to a self-stabilizing algorithm under any weakly fair scheduler. The method presented here implements a progress monitor by exploiting the knowledge of a ranking function, used for proving convergence under the original scheduler, to carry out the transformation.

Abhishek Dhama, Oliver Theel

On Byzantine Containment Properties of the min + 1 Protocol

Self-stabilization is a versatile approach to fault tolerance, since it permits a distributed system to recover from any transient fault that arbitrarily corrupts the contents of all memories in the system. Byzantine tolerance is an attractive feature of distributed systems that makes it possible to cope with arbitrary malicious behaviors.

We consider the well-known problem of constructing a breadth-first spanning tree in this context. Combining these two properties proves difficult: we demonstrate that it is impossible to contain the impact of Byzantine processes in a strictly or strongly stabilizing manner. We then adopt the weaker scheme of topology-aware strict stabilization and we present a similar weakening of strong stabilization. We prove that the classical min + 1 protocol has optimal Byzantine containment properties with respect to these criteria.

Swan Dubois, Toshimitsu Masuzawa, Sébastien Tixeuil

Efficient Self-stabilizing Graph Searching in Tree Networks

The graph search problem asks for a strategy that enables a minimum sized team of searchers to capture a “fugitive” while it evades and potentially multiplies through a network. It is motivated by the need to eliminate fast spreading viruses and other malicious software agents in computer networks.

The current work improves on previous results with a self-stabilizing algorithm that clears an n-node tree network using only 1 + log n searchers and O(n log n) moves after initialization. Since Θ(log n) searchers are required to clear some tree networks even in the sequential case, this is the best that any self-stabilizing algorithm can do. The algorithm is based on a novel multi-layer traversal of the network.

Jean Blair, Fredrik Manne, Rodica Mihai

Adaptive Containment of Time-Bounded Byzantine Faults

In this paper, we introduce a novel Byzantine fault model called time-bounded Byzantine fault that imposes an upper bound on the number of malicious actions of a Byzantine faulty process. We also propose a new method for adaptive fault containment against time-bounded Byzantine faults that guarantees that the number of perturbed processes depends on the number of malicious actions at Byzantine processes. The proposed information diffusion method imposes k consecutive state changes on a process so that the process diffuses information to processes at distance k. We present an example of a leader election protocol to show the adaptive containment of the proposed method.

Yukiko Yamauchi, Toshimitsu Masuzawa, Doina Bein

Brief Announcement: Fast Convergence in Route-Preservation

Optimal-routing in a computer network consists of building a spanning-tree such that two conditions hold: a) the root of the tree is a distinguished node, and b) weights are assigned to the network links, and each path along the tree to the root is optimal with respect to these weights [4]. This differs from spanning-tree protocols used in leader election, in which any node can be the root, and usually the tree need not be optimal with respect to link weights.

Jorge A. Cobb

Authenticated Broadcast with a Partially Compromised Public-Key Infrastructure

Given a public-key infrastructure (PKI) and digital signatures, it is possible to construct broadcast protocols tolerating any number of corrupted parties. Almost all existing protocols, however, do not distinguish between corrupted parties (who do not follow the protocol) and honest parties whose secret (signing) keys have been compromised (but who continue to behave honestly). We explore conditions under which it is possible to construct broadcast protocols that still provide the usual guarantees (i.e., validity/agreement) to the latter.

Consider a network of n parties, where an adversary has compromised the secret keys of up to t_c honest parties and, in addition, fully controls the behavior of up to t_a other parties. We show that for any fixed t_c > 0, and any fixed t_a, there exists an efficient protocol for broadcast if and only if 2t_a + min(t_a, t_c) < n. (When t_c = 0, standard results imply feasibility.) We also show that if t_c, t_a are not fixed, but are only guaranteed to satisfy the bound above, then broadcast is impossible to achieve except for a few specific values of n; for these “exceptional” values of n, we demonstrate a broadcast protocol. Taken together, our results give a complete characterization of this problem.

S. Dov Gordon, Jonathan Katz, Ranjit Kumaresan, Arkady Yerukhimovich
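The feasibility characterization above reduces to a single arithmetic condition, which can be sanity-checked with a trivial helper (the function name is ours):

```python
def broadcast_feasible(n, t_a, t_c):
    """Check the paper's condition for fixed t_c > 0 and fixed t_a:
    an efficient broadcast protocol exists iff 2*t_a + min(t_a, t_c) < n."""
    return 2 * t_a + min(t_a, t_c) < n
```

For example, with n = 10, t_a = 3, t_c = 2 the bound 2·3 + min(3, 2) = 8 < 10 holds, so broadcast is feasible.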

On Applicability of Random Graphs for Modeling Random Key Predistribution for Wireless Sensor Networks

We study the applicability of random graph theory in modeling secure connectivity of wireless sensor networks. Specifically, our work focuses on the highly influential random key predistribution scheme by Eschenauer and Gligor to examine the appropriateness of the modeling in finding system parameters for desired connectivity. We use extensive simulation and theoretical results to identify ranges of the parameters where i) random graph theory is not applicable, ii) random graph theory may lead to estimates with excessive errors, and iii) random graph theory gives very accurate results. We also investigate the similarities and dissimilarities in the structure of random graphs and key graphs (i.e., graphs describing key sharing information between sensor nodes). Our results provide insights into research relying on random graph modeling to examine behaviors of key graphs.

Tuan Manh Vu, Reihaneh Safavi-Naini, Carey Williamson

“Slow Is Fast” for Wireless Sensor Networks in the Presence of Message Losses

Transformations from the shared memory model to wireless sensor networks (WSNs) quickly become inefficient in the presence of the prevalent message losses in WSNs, and this prohibits their wider adoption. To address this problem, we propose a variation of the shared memory model, the SF shared memory model, where the actions of each node are partitioned into slow actions and fast actions. The traditional shared memory model consists only of fast actions, and a lost message can disable the nodes from execution. Slow actions, on the other hand, enable the nodes to use slightly stale state from other nodes, so a message loss does not prevent the nodes from execution. We quantify the advantages of using slow actions under environments with varying message loss probabilities, and find that a slow action has an asymptotically better chance of getting executed than a fast action as the message loss probability increases. We also present guidelines to help the protocol designer identify which actions can be marked as slow, so as to enable the transformed program to be more loosely coupled and to tolerate communication problems (latency, loss) better.

Mahesh Arumugam, Murat Demirbas, Sandeep S. Kulkarni
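The qualitative advantage of slow actions can be seen under a toy independence assumption of our own (not the paper's analysis): each message from a neighbour is lost independently with probability p, and a slow action may run on state up to w rounds stale. A fast action is enabled only if this round's message arrived; a slow action is blocked only if all of the last w messages were lost.

```python
def fast_action_enabled_prob(p):
    """A fast action needs this round's message: enabled with probability
    1 - p when each message is lost independently with probability p."""
    return 1.0 - p

def slow_action_enabled_prob(p, w):
    """A slow action may run on state up to w rounds stale: it is blocked
    only if all of the last w messages were lost."""
    return 1.0 - p ** w
```

At p = 0.9 and w = 5, a fast action is enabled with probability 0.1 while a slow action is enabled with probability 1 − 0.9⁵ ≈ 0.41, illustrating the asymptotic gap as p grows.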

Modeling and Analyzing Periodic Distributed Computations

The earlier work on predicate detection has assumed that the given computation is finite. Detecting violation of a liveness predicate requires that the predicate be evaluated on an infinite computation. In this work, we develop the theory and associated algorithms for predicate detection in infinite runs. In practice, an infinite run can be determined in finite time only if it consists of a recurrent behavior with some finite prefix. Therefore, our study is restricted to such runs. We introduce the concept of a d-diagram, which is a finite representation of infinite directed graphs. Given a d-diagram that represents an infinite distributed computation, we solve the problem of determining if a global predicate ever became true in the computation. The crucial aspect of this problem is the stopping rule that tells us when to conclude that the predicate can never become true in the future. We also provide an algorithm to assign vector timestamps to events in the computation for determining the dependency relationship between any two events in the infinite run.

Anurag Agarwal, Vijay K. Garg, Vinit Ogale

Complexity Issues in Automated Model Revision without Explicit Legitimate State

Existing algorithms for automated model revision require that designers identify the legitimate states of the original model. Experience suggests that, of the inputs required for model revision, identifying such legitimate states is the most difficult. In this paper, we consider the problem of automated model revision without explicit legitimate states. We note that without the explicit legitimate states, in some instances, the complexity of model revision increases substantially (from P to NP-hard). In spite of this, we find that this formulation is relatively complete, i.e., if it is possible to perform model revision with explicit legitimate states then it is also possible to do so without the explicit identification of the legitimate states. Finally, we show that if the problem of model revision can be solved with explicit legitimate states then the increased cost of solving it without explicit legitimate states is very small.

In summary, the results in this paper identify instances of model revision where explicit knowledge of the legitimate states is beneficial and where it is not crucial.

Fuad Abujarad, Sandeep S. Kulkarni

Algorithmic Verification of Population Protocols

In this work, we study the Population Protocol model of Angluin et al. from the perspective of protocol verification. In particular, we are interested in algorithmically solving the problem of determining whether a given population protocol conforms to its specifications. Since this is the first work on verification of population protocols, we redefine most notions of population protocols in order to make them suitable for algorithmic verification. Moreover, we formally define the general verification problem and some interesting special cases. All these problems are shown to be NP-hard. We next propose some first algorithmic solutions for a natural special case. Finally, we conduct experiments and algorithmic engineering in order to improve our verifiers’ running times.

Ioannis Chatzigiannakis, Othon Michail, Paul G. Spirakis

Energy Management for Time-Critical Energy Harvesting Wireless Sensor Networks

As Cyber-Physical Systems (CPSs) evolve, they will be increasingly relied on to support time-critical monitoring and control activities. Further, many CPSs that utilize Wireless Sensor Networking (WSN) technologies require energy harvesting methods to extend their lifetimes. For this important system class, there are currently no effective approaches that balance system lifetime with system performance under both normal and emergency situations. To address this problem, we present a set of Harvesting Aware Speed Selection (HASS) algorithms. We use an epoch-based architecture to dynamically adjust the CPU frequencies and radio transmit speeds of sensor nodes, and hence regulate their power consumption. The objective is to maximize the minimum energy reserve over all nodes in the network, while meeting the application’s end-to-end deadlines. Through this objective we ensure highly resilient performance under emergency or fault-driven situations. Through extensive simulations, we show that our algorithms yield significantly higher energy reserves than approaches without speed and power control.

Bo Zhang, Robert Simon, Hakan Aydin

Stably Decidable Graph Languages by Mediated Population Protocols

We work on an extension of the Population Protocol model of Angluin et al. that allows edges of the communication graph, G, to have states that belong to a constant-size set. In this extension, the so-called Mediated Population Protocol model (MPP), both uniformity and anonymity are preserved. We study here a simplified version of MPP in order to capture MPP’s ability to stably compute graph properties. Understanding the properties of the communication graph is an important step in almost any distributed system. We prove that no graph property is computable if we allow disconnected communication graphs. As a result, we focus on studying (at least) weakly connected communication graphs only, and give several examples of computable properties in this case. To do so, we also prove that the class of computable properties is closed under complement, union and intersection operations. Node and edge parity, out-degree bounded by a constant, existence of a node with more incoming than outgoing neighbors, and existence of some directed path of length at least $k=\mathcal{O}(1)$ are some examples of properties whose computability is proven. Finally, we prove the existence of symmetry in two specific communication graphs and, by exploiting this, we prove that there exists no protocol, whose states eventually stabilize, to determine whether G contains some directed cycle of length 2.

Ioannis Chatzigiannakis, Othon Michail, Paul G. Spirakis

Broadcasting in Sensor Networks of Unknown Topology in the Presence of Swamping

In this paper, we address the problem of broadcasting in a wireless sensor network under a novel communication model: the swamping communication model. In this model, nodes communicate only with those nodes at distance greater than s and at most r from them. We consider networks of unknown topology of diameter D, with a lower bound α on the geometric distance between nodes and a parameter g = 1/α (granularity). We present broadcast algorithms for networks of nodes placed on the line and on the plane with respective time complexities O(D/l + g^2) and O(Dg/l + g^4), where l ∈ Θ(max{(1 − s), α}).

Evangelos Kranakis, Michel Paquette

Brief Announcement: Configuration of Actuated Camera Networks for Multi-target Coverage

In various domains, including public safety, first-responder, and security applications, an important task is monitoring a public space for events of interest, using sensors of various types, including e.g. networks of cameras installed on the ground or unmanned aerial vehicles (UAVs), which may be either autonomous or not. In the settings we consider here, cameras are characterized by a family of parameters, some fixed, some settable, including location, viewing range, and so on. The task is to deploy the sensors, i.e., set the applicable parameters, in order to optimize an objective function capturing the quality with which we observe a set of targets. In a dynamic scenario, we may wish to observe a large set of targets (or a continuous region) with observation quality passing some low threshold, sufficient for surveillance; upon request (i.e., when events are detected), we may then be obliged to observe a small number of distinguished targets to a higher level of quality, sufficient for identification and localization. An example objective function might maximize the total observation quality of all targets, conditioned on the hard constraint that each target is observed to the appropriate minimum quality threshold.

Matthew P. Johnson, Amotz Bar-Noy, Mani Srivastava

Brief Announcement: On the Hardness of Topology Inference

Many systems require information about the topology of networks on the Internet, for purposes like management, efficiency, testing of new protocols and so on. However, ISPs usually do not share the actual topology maps with outsiders. Consequently, many systems have been devised to reconstruct the topology of networks on the Internet from publicly observable data. Such systems rely on traceroute to provide path information, and attempt to compute the network topology from these paths. However, traceroute has the problem that some routers refuse to reveal their addresses, and appear as anonymous nodes in traces. Previous research on the problem of topology inference with anonymous nodes has demonstrated that it is at best NP-complete. We prove a stronger result. There exists no algorithm that, given an arbitrary trace set with anonymous nodes, can determine the topology of the network that generated the trace set. Even the weak version of the problem, which allows an algorithm to output a “small” set of topologies such that the correct topology is included in the solution set, is not solvable: there exist trace sets such that any algorithm guaranteed to output the correct topology outputs at least an exponential number of networks. We show how to construct such a pathological case even when the network is known to have exactly two anonymous nodes.

H. B. Acharya, Mohamed Gouda

Self-stabilizing Algorithm of Two-Hop Conflict Resolution

Ad hoc networks are increasingly used by civil protection forces to coordinate their actions in emergency situations. To enable them to work properly, the satisfaction of the demanded quality of service (QoS) is crucial. One of the most effective methods of assuring QoS is to use multiplexing modes based on resource reservation, like TDMA or FDMA. The principal demands in terms of QoS concern a guarantee of connectivity and resource availability. Our idea consists in separating the interference detection mechanism in order to make it independent of the pure resource allocation algorithm. We present an algorithm which detects and corrects conflicts of assignment. Our algorithm is proved to be self-stabilizing and reaches a stable state in at most five rounds.

Stéphane Pomportes, Joanna Tomasik, Anthony Busson, Véronique Vèque

Low Memory Distributed Protocols for 2-Coloring

In this paper we present new distributed protocols to color even rings and general bipartite graphs. Our motivation is to provide an algorithmic explanation for human subject experiments which show that human subjects can achieve distributed coordination, in the form of 2-coloring over networks, with a simple communication protocol. All our protocols use low (often constant) memory and reach a solution in feasible (polynomially many rounds) and sometimes optimal time. All the protocols also have short message length and use a broadcast communication strategy. Our contributions include two simple protocols, RingCoalescing and GraphCoalescing, for rings and general bipartite graphs, which can be viewed as candidates for natural human strategies. We present two other protocols, RingElect and GraphElect, which are optimal or nearly optimal in terms of the number of rounds (proportional to the diameter of the graph) but require somewhat more complex strategies. The question of finding simple protocols in the style of RingCoalescing and GraphCoalescing that run in time proportional to the diameter is open.

Amos Israeli, Mathew D. McCubbins, Ramamohan Paturi, Andrea Vattani
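For contrast with the distributed protocols above, the centralized baseline they emulate, 2-coloring a bipartite graph by BFS, fits in a few lines. This sketch is ours and is not one of the paper's protocols:

```python
from collections import deque

def two_color(adj):
    """Return a proper 2-coloring (dict node -> 0/1) of a graph given as
    dict node -> iterable of neighbours, or None if an odd cycle makes
    the graph non-bipartite."""
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        q = deque([start])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]  # opposite side of the bipartition
                    q.append(v)
                elif color[v] == color[u]:
                    return None  # odd cycle: not 2-colorable
    return color
```

On an even ring the colors alternate around the cycle; on a triangle the function reports failure.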

Connectivity-Preserving Scattering of Mobile Robots with Limited Visibility

The

scattering

problem is a fundamental task for mobile robots, which requires that no two robots share the same position. We investigate the scattering problem in the limited-visibility model. In particular, we focus on

connectivity-preservation

property. That is, the scattering must be achieved so that the disconnection of the visibility graph never occurs (in the visibility graph robots are the nodes of the graph and the edges are their visibility relationship). The algorithm we propose assumes ATOM (

i.e.

semi-synchronous) model. In these settings our algorithm guarantees the connectivity-preserving property, and reaches a scattered configuration within

O

( min {

n

,

D

2

 + log

n

}) asynchronous rounds in expectation, where

D

is the diameter of the initial visibility graph. Note that the complexity analysis is

adaptive

since it depends on

D

. This implies that our algorithm quickly scatters all robots crowded in a small-diameter visibility graph. We also provide a lower bound of Ω(

n

) for connectivity-preserving scattering. It follows that our algorithm is optimal in the sense of the non-adaptive analysis.

Taisuke Izumi, Maria Gradinariu Potop-Butucaru, Sébastien Tixeuil

Computing in Social Networks

This paper defines the problem of Scalable Secure Computing in a Social network: we call it the S^3 problem. In short, nodes, directly reflecting on associated users, need to compute a function $f:\ V \rightarrow U$ of their inputs in a set of constant size, in a scalable and secure way. Scalability means that the message and computational complexity of the distributed computation is at most $\mathcal{O}(\sqrt{n}\cdot {\rm polylog}{n})$. Security encompasses (1) accuracy and (2) privacy: accuracy holds when the distance from the output to the ideal result is negligible with respect to the maximum distance between any two possible results; privacy is characterized by how the information disclosed by the computation helps faulty nodes infer inputs of non-faulty nodes.

We present AG-S3, a protocol that S^3-computes a class of aggregation functions, namely those that can be expressed as a commutative monoid operation on U: f(x_1, ..., x_n) = x_1 ⊕ ... ⊕ x_n, assuming the number of faulty participants is at most $\sqrt{n}/{\rm log}^2n$. Key to our protocol is a dedicated overlay structure that enables secret sharing and distributed verifications which leverage the social aspect of the network: nodes care about their reputation and do not want to be tagged as misbehaving.

Andrei Giurgiu, Rachid Guerraoui, Kévin Huguenin, Anne-Marie Kermarrec
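Because AG-S3 targets functions expressible as a commutative monoid operation, partial aggregates can be combined in any order or grouping. The toy sketch below (illustrative only; the names and the modular-sum monoid are assumptions for this example, not part of the protocol) shows that order-independence:

```python
# Toy example: an aggregation function f(x1, ..., xn) = x1 (+) ... (+) xn over
# a commutative monoid, illustrated with addition modulo a constant-size set
# U = {0, ..., M-1}. All names here are illustrative, not from AG-S3.
from functools import reduce

M = 16  # size of the constant-size output set U (assumption)

def op(a, b):
    """Commutative, associative monoid operation on U (here: addition mod M)."""
    return (a + b) % M

def aggregate(inputs):
    """Fold the inputs with the monoid operation; order does not matter."""
    return reduce(op, inputs, 0)  # 0 is the identity element of (U, op)

inputs = [3, 7, 5, 14]
# Commutativity/associativity means any grouping of partial aggregates agrees:
assert aggregate(inputs) == op(aggregate(inputs[:2]), aggregate(inputs[2:]))
print(aggregate(inputs))  # (3 + 7 + 5 + 14) % 16 = 13
```

This grouping freedom is what lets partial sums be computed inside the overlay and verified independently.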

On Transactional Scheduling in Distributed Transactional Memory Systems

We present a distributed transactional memory (TM) scheduler called Bi-interval that optimizes the execution order of transactional operations to minimize conflicts. Bi-interval categorizes concurrent requests for a shared object into read and write intervals to maximize the parallelism of reading transactions. This allows an object to be sent simultaneously to the nodes of reading transactions (in a data-flow TM model), improving transactional makespan. We show that Bi-interval improves the makespan competitive ratio of the Relay distributed TM cache-coherence protocol to O(log n) for the worst case and Θ(log(n − k)) for the average case, for n nodes and k reading transactions. Our implementation studies confirm Bi-interval's throughput improvement of 30% to as much as 200% over cache-coherence-protocol-only distributed TM.

Junwhan Kim, Binoy Ravindran
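The interval idea can be illustrated with a toy scheduler (a hypothetical sketch; the function name and the batching policy are illustrative, not the paper's algorithm): queued read requests for an object are grouped into a single interval that can be served concurrently, while writes stay serial.

```python
# Hypothetical sketch of grouping requests into read/write intervals.
# Not Bi-interval itself; only illustrates why batching reads helps.
def build_intervals(requests):
    """requests: list of ('R', node) / ('W', node) pairs.
    Reads are batched into one interval so the object can be sent to all
    reading nodes simultaneously; each write forms its own serial interval."""
    intervals = []
    reads = [n for kind, n in requests if kind == 'R']
    if reads:
        intervals.append(('R', reads))  # one concurrent read interval
    intervals += [('W', [n]) for kind, n in requests if kind == 'W']
    return intervals

queue = [('R', 1), ('W', 2), ('R', 3), ('R', 4), ('W', 5)]
print(build_intervals(queue))
# [('R', [1, 3, 4]), ('W', [2]), ('W', [5])]
```

Serving the three reads as one interval takes a single object transfer instead of three serialized ones, which is the source of the makespan improvement.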

Recursion in Distributed Computing

The benefits of developing algorithms via recursion are well known. However, little use of recursion has been done in distributed algorithms, in spite of the fact that recursive structuring principles for distributed systems have been advocated since the beginning of the field. We present several distributed algorithms in a recursive form, which makes them easier to understand and analyze. Also, we expose several interesting issues arising in recursive distributed algorithms. Our goal is to promote the use and study of recursion in distributed algorithms.

Eli Gafni, Sergio Rajsbaum

On Adaptive Renaming under Eventually Limited Contention

The adaptive M-renaming problem consists in designing an algorithm that allows a set of p ≤ n participating asynchronous processes (where n is the total number of processes), not known in advance, to acquire pairwise different new names in a name space whose size M depends on p (and not on n). Adaptive (2p − 1)-renaming algorithms for read/write shared memory systems have been designed. These algorithms, which are optimal with respect to the value of M, consider the wait-freedom progress condition, which means that any correct participant has to acquire a new name whatever the behavior of the other processes (which can be very slow or even crash).

This paper addresses the design of an adaptive M-renaming algorithm under the k-obstruction-freedom progress condition. This condition, which is weaker than wait-freedom, requires that every correct participating process acquires a new name in all runs where, during "long enough periods", at most k processes execute steps (p-obstruction-freedom and wait-freedom are actually equivalent). The paper presents an optimal adaptive (p + k − 1)-renaming algorithm and, consequently, contributes to a better understanding of synchronization and concurrency by showing that weakening the liveness condition from wait-freedom to k-obstruction-freedom allows the new name space to be reduced from 2p − 1 to min(2p − 1, p + k − 1). Last but not least, the proposed algorithm is particularly simple, a first-class property. This establishes an interesting tradeoff linking progress conditions with the size of the new name space.

Damien Imbs, Michel Raynal
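The stated bounds are easy to tabulate. The snippet below (illustrative only, not the renaming algorithm itself) compares the wait-free name space 2p − 1 with the k-obstruction-free one min(2p − 1, p + k − 1):

```python
# Compare name-space sizes from the abstract: wait-freedom needs 2p - 1 names,
# k-obstruction-freedom needs only min(2p - 1, p + k - 1).
def wait_free_space(p):
    return 2 * p - 1

def k_obstruction_free_space(p, k):
    return min(2 * p - 1, p + k - 1)

p = 10  # number of participating processes (example value)
for k in (1, 5, 10):
    print(k, wait_free_space(p), k_obstruction_free_space(p, k))
# k = 1 gives p names (10); k = p recovers the wait-free bound 2p - 1 = 19.
```

The table makes the tradeoff concrete: the smaller the eventual contention k, the smaller the name space the algorithm needs.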

RobuSTM: A Robust Software Transactional Memory

For software transactional memory (STM) to be usable in large applications such as databases, it needs to be robust, i.e., live, efficient, tolerant of crashed and non-terminating transactions, and practical. In this paper, we study the question of whether one can implement a robust software transactional memory in an asynchronous system. To that end, we introduce a system model, the multicore system model (MSM), which captures the properties provided by mainstream multicore systems. We show how to implement a robust software transactional memory (RobuSTM) in MSM. Our experimental evaluation indicates that RobuSTM compares well against existing blocking and nonblocking software transactional memories in terms of performance while providing a much higher degree of robustness.

Jons-Tobias Wamhoff, Torvald Riegel, Christof Fetzer, Pascal Felber

A Provably Starvation-Free Distributed Directory Protocol

This paper presents Combine, a distributed directory protocol for shared objects, designed for large-scale distributed systems. Directory protocols support move requests, which allow writing the object locally, as well as lookup requests, which provide a read-only copy of the object. They have been used in distributed shared memory implementations and in data-flow implementations of distributed software transactional memory in large-scale systems.

The protocol runs on an overlay tree whose leaves are the nodes of the system; it ensures that the cost of serving a request is proportional to the cost of the shortest path between the requesting node and the serving node in the overlay tree. The correctness of the protocol, including starvation freedom, is proved despite asynchrony and concurrent requests. The protocol avoids race conditions by combining requests that overtake each other as they pass through the same node. Using an overlay tree with a good stretch factor yields an efficient protocol, even when requests are concurrent.

Hagit Attiya, Vincent Gramoli, Alessia Milani

Lightweight Live Migration for High Availability Cluster Service

High availability is a critical feature for service clusters and cloud computing, and is often considered more valuable than performance. One commonly used technique to enhance the availability is live migration, which replicates services based on virtualization technology. However, continuous live migration with checkpointing will introduce significant overhead. In this paper, we present a lightweight live migration (LLM) mechanism to integrate whole-system migration and input replay efforts, which aims at reducing the overhead while providing comparable availability. LLM migrates service requests from network clients at high frequency during the interval of checkpointing system updates. Once a failure happens to the primary machine, the backup machine will continue the service based on the virtual machine image and network inputs at their respective last migration rounds. We implemented LLM based on Xen and compared it with Remus—a state-of-the-art effort that enhances the availability by checkpointing system status updates. Our experimental evaluations show that LLM clearly outperforms Remus in terms of network delay and overhead. For certain types of applications, LLM may also be a better alternative in terms of downtime than Remus. In addition, LLM achieves transaction level consistency like Remus.

Bo Jiang, Binoy Ravindran, Changsoo Kim

Approximation of δ-Timeliness

In asynchronous message-passing distributed systems prone to process crashes, a communication link is said to be δ-timely if the communication delay on this link is bounded by some constant δ. We study how processes may approximate and find structural properties based on δ-timeliness (e.g., find δ-timely paths between processes, or build a ring between correct processes using only δ-timely links).

To that end, we define a notion of approximation of predicates. Then, with the help of such approximations, we give a general algorithm that enables processes to choose and eventually agree on one of these predicates. Finally, applying this approach to δ-timeliness, we give conditions and algorithms to approximate δ-timeliness and to dynamically find structural properties using δ-timeliness.

Carole Delporte-Gallet, Stéphane Devismes, Hugues Fauconnier

A Framework for Adaptive Optimization of Remote Synchronous CSCW in the Cloud Computing Era

Systems for remote synchronous computer-supported cooperative work (CSCW) are important for facilitating communication and improving productivity. On the Internet, however, such systems often suffer from relatively large latency, low bandwidth, and the relatively high cost of wide-area networking. Previous work tried to improve various communication mechanisms, but these problems persist due to the nature of Internet data transmission. Rather than optimizing within the traditional CSCW computing style as previous work did, this paper proposes moving appropriate collaborative instances to suitable computing nodes that have recently emerged in Cloud computing environments. Moreover, the paper presents a formal framework, AORS, to optimally organize collaborative computing on these emerging computational resources from the perspectives of both performance and cost. The framework is formalized and an analytic theory is developed. Directly solving the modeled problem requires exhaustive search, which has exponential computational complexity, so we develop two heuristics. Experimental evaluations demonstrate the high efficiency and effectiveness of the heuristics. Furthermore, we conduct extensive simulation experiments on the current collaborative computing style and on AORS; they show that AORS gives CSCW applications better communication quality at lower cost.

Ji Lu, Yaoxue Zhang, Yuezhi Zhou

Chameleon-MAC: Adaptive and Self-⋆ Algorithms for Media Access Control in Mobile Ad Hoc Networks

In mobile ad hoc networks (MANETs), mobile nodes do not have access to a fixed network infrastructure and set up a communication network by themselves. MANETs require the implementation of a wireless Medium Access Control (MAC) layer. Existing MAC algorithms that do not consider mobility solve the problem of eventually guaranteeing every node a share of the communication bandwidth. In the context of MANETs, we ask: is there an efficient MAC algorithm when mobility is considered?

MANETs are subject to transient faults, from which self-stabilizing systems can recover. The self-stabilization design criteria, and related concepts of self-⋆, liberate the application designer from dealing with low-level complications, and provide an important level of abstraction. Whereas stabilization criteria are important for the development of autonomous systems, adaptation is imperative for coping with a variable environment. Adapting to a variable environment requires dealing with a wide range of practical issues, such as relocation of mobile nodes and changes to the motion patterns.

This work proposes the design and a proof-of-concept implementation of an adaptive MAC algorithm named Chameleon-MAC, which is based on a self-stabilizing algorithm by Leone et al. and uses self-⋆ methods to further adapt its behavior to the mobility characteristics of the environment. Moreover, we give an extensive treatment of the aspects and parameters that can bring the algorithm into the practical realm, and we demonstrate documented behavior in real network studies (MICAz 2.4 GHz motes) as well as in simulation (TOSSIM [32]), showing smaller overhead and shorter fault-recovery periods than existing algorithms.

We expect that these advantages, beyond the algorithmic contribution, will enable quicker adoption by practitioners and faster deployment.

Pierre Leone, Marina Papatriantafilou, Elad M. Schiller, Gongxi Zhu

A Comparative Study of Rateless Codes for P2P Persistent Storage

This paper evaluates the performance of two seminal rateless erasure codes, LT Codes and Online Codes. Their properties make them appropriate for coping with communication channels having an unbounded loss rate; they are therefore very well suited to peer-to-peer systems. This evaluation targets two goals. First, it compares the performance of both codes in different adversarial environments and in different application contexts. Second, it helps in understanding how the parameters that drive the behavior of the coding impact its complexity. To the best of our knowledge, this is the first comprehensive study that helps application designers set the optimal values for the coding parameters to best fit their P2P context.

Heverson B. Ribeiro, Emmanuelle Anceaume
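Both LT and Online codes are decoded by iterative peeling: repeatedly find an encoded symbol whose unresolved neighbor set has size one, release that source block, and subtract it from the remaining symbols. A minimal sketch follows (with hand-picked encoded symbols so the example is deterministic; a real LT encoder draws each symbol's degree at random from the robust soliton distribution):

```python
# Minimal peeling (belief-propagation) decoder for a rateless erasure code.
# Encoded symbols are hand-picked here for determinism; in LT codes each
# symbol XORs d source blocks, with d drawn from the robust soliton
# distribution.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def peel_decode(n_blocks, symbols):
    """symbols: list of (set_of_block_indices, xor_of_those_blocks)."""
    decoded = {}
    work = [[set(s), v] for s, v in symbols]  # mutable working copies
    progress = True
    while progress and len(decoded) < n_blocks:
        progress = False
        for sym in work:
            idxs, val = sym
            # Subtract already-decoded blocks from this symbol.
            for i in list(idxs):
                if i in decoded:
                    idxs.discard(i)
                    val = xor_bytes(val, decoded[i])
            sym[1] = val
            if len(idxs) == 1:  # degree-1 symbol releases one source block
                (i,) = idxs
                if i not in decoded:
                    decoded[i] = val
                    progress = True
    return [decoded.get(i) for i in range(n_blocks)]

blocks = [b"ab", b"cd", b"ef", b"gh"]
symbols = [
    ({0}, blocks[0]),
    ({0, 1}, xor_bytes(blocks[0], blocks[1])),
    ({1, 2}, xor_bytes(blocks[1], blocks[2])),
    ({2, 3}, xor_bytes(blocks[2], blocks[3])),
]
assert peel_decode(4, symbols) == blocks
```

The degree distribution is exactly the kind of parameter whose tuning the paper studies: too few degree-1 symbols and peeling stalls, too many and overhead grows.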

Dynamically Reconfigurable Filtering Architectures

Distributed R-trees (DR-trees) are appealing infrastructures for implementing range queries, content-based filtering, or k-NN structures, since they inherit the features of R-trees such as logarithmic height, a bounded number of neighbors, and a balanced shape. Interestingly, the mapping between DR-tree logical nodes and physical nodes has not yet received sufficient attention. In previous works, this mapping was naively defined either by the order in which physical nodes join/leave the system or by their semantics. Therefore, an important gap in load and latency can be observed when comparing the theoretical work with simulation/experimental results. This gap is partially due to the placement of virtual nodes: a naive placement that totally ignores the heterogeneity of the network may generate an unbalanced load on the physical system. In order to improve overall system performance, this paper proposes mechanisms for the placement and dynamic migration of virtual nodes that balance the load of the network without modifying the DR-tree virtual structure. That is, we reduce the gap between the theoretical results and the practical ones by injecting (at the middleware level) placement and migration strategies for virtual nodes that directly exploit the physical characteristics of the network. Extensive simulation results show that significant performance gains can be obtained with our mechanisms. Moreover, due to its generality, our approach can easily be extended to other overlays or P2P applications (e.g., multi-layer overlays or efficient P2P streaming).

Mathieu Valero, Luciana Arantes, Maria Gradinariu, Pierre Sens

A Quantitative Analysis of Redundancy Schemes for Peer-to-Peer Storage Systems

Fully decentralized peer-to-peer (P2P) storage systems lack the reliability guarantees that centralized systems can give; they must rely solely on the system's statistical properties. Nevertheless, such probabilistic guarantees can lead to highly reliable systems. Moreover, their statistical nature makes P2P storage systems an ideal supplement to centralized storage systems, because they fail in entirely different circumstances than centralized systems.

In this paper, we investigate the behavior of different replication and erasure code schemes as peers fail. We calculate the data loss probability and the repair delay, which is caused by the peers' limited bandwidth. Using a Weibull model to describe peer behavior, we show that there are four different loss processes that affect the availability and durability of the data: initial loss, diurnal loss, early loss, and long-term loss. They need to be treated differently to obtain optimal results. Based on this insight, we give general recommendations for the design of redundancy schemes in P2P storage systems.

Yaser Houri, Bernhard Amann, Thomas Fuhrmann
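To see why the choice of redundancy scheme matters, consider a deliberately simplified independent-failure model (an assumption for illustration; the paper itself uses a Weibull model of peer behavior). With the same 3x storage overhead, an erasure code loses data far less often than plain replication:

```python
# Simplified comparison of replication versus (n, k) erasure coding under
# independent peer failures with probability f each. Illustrative only; the
# paper's analysis uses a Weibull lifetime model, not this i.i.d. model.
from math import comb

def loss_replication(f, r):
    """Data is lost only if all r replicas fail."""
    return f ** r

def loss_erasure(f, n, k):
    """(n, k) code: data is lost if fewer than k of the n fragments survive."""
    return sum(comb(n, i) * (1 - f) ** i * f ** (n - i) for i in range(k))

f = 0.1  # per-peer failure probability (example value)
# Both schemes below use 3x storage overhead:
print(loss_replication(f, 3))   # 3 replicas: f^3 = 1e-3
print(loss_erasure(f, 12, 4))   # 12 fragments, any 4 reconstruct: ~1.7e-7
```

The erasure code spreads the same overhead across more independent failure events, which is why it dominates replication in this simple model.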

A Framework for Secure and Private P2P Publish/Subscribe

We propose a novel and fully decentralized strategy for private and secure data exchange in peer-to-peer systems. Our scheme is particularly appealing for point-to-point exchanges and uses zero-knowledge mechanisms to preserve privacy. Furthermore, we show how to plug our private and secure data exchange module into existing publish/subscribe architectures. Our proposal enriches the original system with security and privacy, making it resilient to a broad class of attacks (e.g., brute force, eavesdroppers, man-in-the-middle, or malicious insiders). Additionally, the original properties of the publish/subscribe system are preserved without any degradation. A nice feature of our proposal is its reduced message cost: only one extra message is sent for every message sent in the original system. Note that our contribution is more conceptual than experimental and can easily be exploited by new emerging areas such as P2P Internet games or social networks, where a major trend is to achieve secure and private communication without relying on any fixed infrastructure or centralized authority.

Samuel Bernard, Maria Gradinariu Potop-Butucaru, Sébastien Tixeuil

Snap-Stabilizing Linear Message Forwarding

In this paper, we present the first snap-stabilizing message forwarding protocol that uses a number of buffers per node independent of any global parameter, namely 4 buffers per link. The protocol works on a linear chain of nodes, which is possibly an overlay on a large-scale and dynamic system, e.g., peer-to-peer systems, grids, etc. Provided that the topology remains a linear chain and that nodes join and leave "neatly", the protocol tolerates topology changes. We expect that this protocol will be the basis for similar results on more general topologies.

Alain Cournier, Swan Dubois, Anissa Lamani, Franck Petit, Vincent Villain

Vulnerability Analysis of High Dimensional Complex Systems

Complex systems experience dramatic changes in behavior and can undergo transitions from functional to dysfunctional states. An unstable system is prone to dysfunctional collective cascades that result from self-reinforcing behaviors within the system. Because many human and technological civilian and military systems today are complex systems, understanding their susceptibility to collective failure is a critical problem. Understanding vulnerability in complex systems requires an approach that characterizes the coupled behaviors at multiple scales of cascading failures. We used neuromorphic methods, which are modeled on the pattern-recognition circuitry of the brain and can find patterns in high-dimensional data at multiple scales, to develop a procedure for identifying the vulnerabilities of complex systems. This procedure was tested on microdynamic Internet2 network data. The result was a generic pipeline for identifying extreme events in high dimensional datasets.

Vedant Misra, Dion Harmon, Yaneer Bar-Yam

Storage Capacity of Labeled Graphs

We consider the question of how much information can be stored by labeling the vertices of a connected undirected graph G using a constant-size set of labels, when isomorphic labelings are not distinguishable. An exact information-theoretic bound is easily obtained by counting the number of isomorphism classes of labelings of G, which we call the information-theoretic capacity of the graph. More interesting is the effective capacity of members of some class of graphs: the number of states distinguishable by a Turing machine that uses the labeled graph itself in place of the usual linear tape. We show that the effective capacity equals the information-theoretic capacity up to constant factors for trees, random graphs with polynomial edge probabilities, and bounded-degree graphs.

Dana Angluin, James Aspnes, Rida A. Bazzi, Jiang Chen, David Eisenstat, Goran Konjevod
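The information-theoretic capacity can be computed by brute force for tiny graphs. The sketch below (a toy assumption: a path graph, whose only nontrivial automorphism is reversal, rather than the paper's Turing-machine model) counts labelings up to isomorphism and checks the count against Burnside's lemma:

```python
# Brute-force count of isomorphism classes of k-labelings of the n-vertex
# path graph, whose automorphism group is {identity, reversal}. Illustrative
# of the "information-theoretic capacity" definition only.
from itertools import product

def path_capacity(n, k):
    """Number of k-labelings of the n-vertex path, up to reversal."""
    seen = set()
    for labels in product(range(k), repeat=n):
        # Canonical representative of the class {labels, reversed(labels)}.
        seen.add(min(labels, labels[::-1]))
    return len(seen)

# Burnside's lemma predicts (k^n + k^ceil(n/2)) / 2 classes:
assert path_capacity(3, 2) == (2**3 + 2**2) // 2  # = 6
assert path_capacity(4, 3) == (3**4 + 3**2) // 2  # = 45
print(path_capacity(3, 2))  # 6
```

For graphs with a richer automorphism group the same orbit-counting idea applies, but enumerating the group is itself nontrivial.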

Safe Flocking in Spite of Actuator Faults

The safe flocking problem requires a collection of N mobile agents to (a) converge to and maintain an equi-spaced lattice formation, (b) arrive at a destination, and (c) always maintain a minimum safe separation. Safe flocking in Euclidean spaces is a well-studied and difficult coordination problem. Motivated by real-world deployment of multi-agent systems, this paper studies one-dimensional safe flocking, where agents are afflicted by actuator faults. An actuator fault is a new type of failure that causes an affected agent to be stuck moving with an arbitrary velocity. In this setting, first, a self-stabilizing solution for the problem is presented; it relies on a failure detector for actuator faults. Next, it is shown that certain actuator faults cannot be detected, while others may require O(N) time for detection. Finally, a simple failure detector that achieves the latter bound is presented. Several simulation results illustrate the effects of failures on progress towards flocking.

Taylor Johnson, Sayan Mitra

Backmatter
