
About this Book

This book constitutes the refereed proceedings of the 14th International Conference on Distributed Computing and Networking, ICDCN 2013, held in Mumbai, India, during January 3-6, 2013. The 27 revised full papers and 5 short papers presented together with 7 poster papers were carefully reviewed and selected from 149 submissions. The papers cover topics such as distributed algorithms and concurrent data structures; integration of heterogeneous wireless and wired networks; distributed operating systems; internetworking protocols and internet applications; distributed database systems; mobile and pervasive computing and context-aware distributed systems; embedded distributed systems; next-generation and converged network architectures; experiments and performance evaluation of distributed systems; overlay and peer-to-peer networks and services; fault tolerance, reliability, and availability; home networking and services; multiprocessor and multi-core architectures and algorithms; resource management and quality of service; self-organization, self-stabilization, and autonomic computing; network security and privacy; high-performance computing, grid computing, and cloud computing; energy-efficient networking and smart grids; security, cryptography, and game theory in distributed systems; sensor, PAN, and ad-hoc networks; and traffic engineering, pricing, and network management.

Table of Contents

Frontmatter

Verifying High-Confidence Interactive Systems: Electronic Voting and Beyond

Human interaction is central to many computing systems that require a high level of assurance; we term such systems high-confidence interactive systems. Examples include aircraft control systems (interacting with a pilot), automobiles with self-driving features (interacting with a driver), medical devices (interacting with a doctor), and electronic voting machines (interacting with a voter). A major challenge in verifying the correct operation of such systems is that it is difficult to formally specify the human user's view of correct operation and perception of the input/output interface. In this paper, we describe a promising approach to this challenge that combines formal verification with systematic testing by humans. We describe an illustrative application of this approach to electronic voting in the U.S., and outline directions for future work.

Sanjit A. Seshia

Fast Distributed PageRank Computation

Over the last decade, PageRank has gained importance in a wide range of applications and domains, ever since it first proved to be effective in determining node importance in large graphs (and was a pioneering idea behind Google's search engine). In distributed computing alone, PageRank vectors or, more generally, random-walk-based quantities have been used for several different applications, ranging from determining important nodes and load balancing to search and identifying connectivity structures. Surprisingly, however, there has been little work towards designing provably efficient fully-distributed algorithms for computing PageRank. The difficulty is that traditional matrix-vector multiplication style iterative methods may not always adapt well to the distributed setting owing to communication bandwidth restrictions and convergence rates.

In this paper, we present fast random-walk-based distributed algorithms for computing PageRank in general graphs and prove strong bounds on the round complexity. We first present an algorithm that takes O(log n / ε) rounds with high probability on any graph (directed or undirected), where n is the network size and ε is the reset probability used in the PageRank computation (typically ε is a fixed constant). We then present a faster algorithm that takes $O(\sqrt{\log n}/{\epsilon})$ rounds in undirected graphs. Both of the above algorithms are scalable, as each node processes and sends only a small (polylogarithmic in n, the network size) number of bits per round, and hence they work in the CONGEST distributed computing model. For directed graphs, we present an algorithm that has a running time of $O(\sqrt{\log n/{\epsilon}})$, but it requires a polynomial number of bits to be processed and sent per node in a round. To the best of our knowledge, these are the first fully distributed algorithms for computing PageRank vectors with provably efficient running time.
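The random-walk view of PageRank used above can be illustrated with a small centralized simulation: a walk that stops with probability ε at each step, started from a uniformly random node, terminates at a node with probability proportional to that node's PageRank, so counting walk endpoints yields an estimate. This is only a sketch of the random-walk principle, not the paper's distributed algorithm; the graph and parameter values are invented for the example.

```python
import random
from collections import Counter

def pagerank_by_walks(adj, eps=0.15, walks_per_node=200, max_steps=100):
    """Estimate PageRank by simulating reset-terminated random walks.

    Each walk stops at every step with probability eps (or at a dangling
    node); a node's PageRank is estimated by how often walks stop there.
    """
    stops = Counter()
    total = 0
    for start in adj:
        for _ in range(walks_per_node):
            v = start
            for _ in range(max_steps):
                if random.random() < eps or not adj[v]:
                    break
                v = random.choice(adj[v])
            stops[v] += 1
            total += 1
    return {v: stops[v] / total for v in adj}

# Tiny example: a star graph; the hub should rank highest.
random.seed(0)
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
pr = pagerank_by_walks(adj)
```

The distributed algorithms in the paper obtain the same estimates by having the nodes themselves forward walk tokens, rather than simulating all walks at one site.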

Atish Das Sarma, Anisur Rahaman Molla, Gopal Pandurangan, Eli Upfal

Dealing with Undependable Workers in Decentralized Network Supercomputing

Internet supercomputing is an approach to solving partitionable, computation-intensive problems by harnessing the power of a vast number of interconnected computers. This paper presents a new algorithm for the problem of using network supercomputing to perform a large collection of independent tasks while dealing with undependable processors. The adversary may cause the processors to return bogus results for tasks with certain probabilities, and may cause a subset F of the initial set of processors P to crash. The adversary is constrained in two ways. First, for the set of non-crashed processors P − F, the average probability of a processor returning a bogus result is less than $\frac{1}{2}$. Second, the adversary may crash a subset of processors F, provided the size of P − F is bounded from below. We consider two models: the first bounds the size of P − F by a fractional polynomial, the second bounds this size by a poly-logarithm. Both models yield adversaries that are much stronger than those previously studied. Our randomized synchronous algorithm is formulated for n processors and t tasks, with n ≤ t, where, depending on the number of crashes, each live processor is able to terminate dynamically with the knowledge that the problem is solved with high probability. For the adversary constrained by a fractional polynomial, the time complexity of the algorithm is $O(\frac{t}{n^\varepsilon}\log{n}\log{\log{n}})$, its work is O(t log n log log n), and its message complexity is O(n log n log log n). For the poly-log constrained adversary, the time complexity is O(n), the work is O(t polylog n), and the message complexity is O(n polylog n). All bounds are shown to hold with high probability.

Seda Davtyan, Kishori Konwar, Alexander Russell, Alexander Shvartsman

Decentralized Erasure Coding for Efficient Data Archival in Distributed Storage Systems

Distributed storage systems usually achieve fault tolerance by replicating data across different nodes. However, redundancy schemes based on erasure codes can provide a storage-efficient alternative to replication. This is particularly suited for data archival, since archived data is rarely accessed. Typically, the migration to erasure-encoded storage does not leverage the existing replication-based redundancy, and simply discards (garbage collects) the excess replicas. In this paper we propose a new decentralized erasure coding process that achieves the migration in a network-efficient manner, in contrast to traditional coding processes. The proposed approach exploits the presence of data that is already replicated across the system and distributes the redundancy generation among the nodes that store part of this replicated data, which in turn reduces the overall amount of data transferred during the encoding process. By storing additional replicated blocks at the nodes executing the distributed encoding tasks, the network traffic necessary for archiving can be further reduced. We analyze the problem using symbolic computation and show that the proposed decentralized encoding process can reduce the traffic by up to 56% for typical system configurations.
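The storage argument for erasure codes over replication shows up even in the simplest possible code. The sketch below is a hypothetical single-parity (XOR) code, far simpler than the codes analyzed in the paper: it stores k data blocks plus one parity block, tolerating the loss of any one block at 1/k storage overhead instead of the 100% overhead of a full extra replica.

```python
def xor_bytes(a, b):
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(blocks):
    """Single-parity erasure code: k data blocks + 1 XOR parity block."""
    parity = blocks[0]
    for b in blocks[1:]:
        parity = xor_bytes(parity, b)
    return blocks + [parity]

def recover(stored, lost_index):
    """Rebuild the block at lost_index by XOR-ing all surviving blocks."""
    survivors = [b for i, b in enumerate(stored) if i != lost_index]
    out = survivors[0]
    for b in survivors[1:]:
        out = xor_bytes(out, b)
    return out

# Three data blocks: 33% storage overhead instead of 100% for a replica.
blocks = [b'abcd', b'efgh', b'ijkl']
stored = encode(blocks)
```

The decentralized process in the paper goes further: nodes that already hold replicas compute parity fragments locally, so most of the data never crosses the network during migration.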

Lluis Pamies-Juarez, Frederique Oggier, Anwitaman Datta

Transport Protocol with Acknowledgement-Assisted Storage Management for Intermittently Connected Wireless Sensor Networks

The benefits of hop-by-hop transport protocols and in-network storage in intermittently connected networks are well known. However, due to the extreme limitation on the storage capability of wireless sensor networks (WSNs), hop-by-hop transport protocols that rely on in-network storage without storage management cannot be applied directly to intermittently connected WSNs. The lack of storage management leads to mote storage overflow when using in-network storage, which in turn degrades the performance of the network.

In this paper, we propose a hop-by-hop transport protocol that provides not only end-to-end reliability and congestion control but also an innovative storage management mechanism. The proposed protocol enhances the network delivery rate without sacrificing the delivery speed even in high contention scenarios.

Ying Li, Radim Bartos, James Swan

Iterative Approximate Byzantine Consensus under a Generalized Fault Model

In this work, we consider a generalized fault model [7,9,5] that can be used to represent a wide range of failure scenarios, including correlated failures and non-uniform node reliabilities. Under the generalized fault model, we explore iterative approximate Byzantine consensus (IABC) algorithms [15] in arbitrary directed networks. We prove a tight necessary and sufficient condition on the underlying communication graph for the existence of IABC algorithms.

We propose a new IABC algorithm for the generalized fault model, and present a transition-matrix-based proof of the correctness of the proposed algorithm. While transition matrices have been used to prove the correctness of non-fault-tolerant consensus [6], this paper is the first to extend the technique to Byzantine fault-tolerant algorithms.

Lewis Tseng, Nitin Vaidya

A Scalable Byzantine Grid

Modern networks assemble an ever-growing number of nodes. However, it remains difficult to increase the number of channels per node, so the maximal degree of the network may be bounded. This is typically the case in grid topology networks, where each node has at most four neighbors. In this paper, we address the following issue: if each node is likely to fail in an unpredictable manner, how can we preserve some global reliability guarantees when the number of nodes keeps increasing unboundedly?

To be more specific, we consider the problem of reliably broadcasting information on an asynchronous grid in the presence of Byzantine failures – that is, some nodes may have an arbitrary and potentially malicious behavior. Our requirement is that a constant fraction of correct nodes remain able to achieve reliable communication. Existing solutions can only tolerate a fixed number of Byzantine failures if they adopt a worst-case placement scheme. Besides, if we assume a constant Byzantine ratio (each node has the same probability to be Byzantine), the probability of a fatal placement approaches 1 as the number of nodes increases, and the reliability guarantees collapse.

In this paper, we propose the first broadcast protocol that overcomes these difficulties. First, the number of Byzantine failures that can be tolerated (if they adopt the worst-case placement) now increases with the number of nodes. Second, we are able to tolerate a constant Byzantine ratio, however large the grid may be. In other words, the grid becomes scalable. This result has important security applications in ultra-large networks, where each node has a given probability to misbehave.

Alexandre Maurer, Sébastien Tixeuil

Collaborative Detection of Coordinated Port Scans

In this paper we analyze the coordinated port scan attack, where a single adversary coordinates a Group of Attackers (GoA) in order to obtain information on a set of target networks. Such orchestration aims at evading local Intrusion Detection System checks by allowing each host of the GoA to send only a very small number of probes to hosts of the target network. In order to detect this complex attack we propose a collaborative architecture where each target network deploys local sensors that send alarms to a collaborative layer. This layer, in turn, correlates the data with the aim of (i) identifying coordinated attacks, (ii) reducing false positive alarms, and (iii) correctly separating GoAs that act concurrently on overlapping targets. The soundness of our approach is tested on real network traces. The tests show that collaboration among network domains is mandatory to achieve accurate detection of coordinated attacks and sharp separation between GoAs that execute concurrent attacks on the same targets.

Roberto Baldoni, Giuseppe Antonio Di Luna, Leonardo Querzoni

Exploiting Partial-Packet Information for Reactive Jamming Detection: Studies in UWSN Environment

Reactive jamming in an underwater sensor network (UWSN) environment is a realistic and very harmful threat. It typically affects only a small part of a packet (not the entire one) in order to maintain a low detection probability. Prior works on reactive jamming detection focused on terrestrial wireless sensor networks (TWSNs), and are limited in their ability to (a) detect it correctly, (b) distinguish the small corrupted part of a packet from the uncorrupted part, and (c) adapt to a dynamic environment. Further, there is currently a need for a generalized framework for jamming detection that outlines the basic operations governing it. In this paper, we address these research lacunae by broadly designing such a framework for jamming detection, and specifically a detection scheme for reactive jamming. A key characteristic of this work is introducing the concept of the partial packet (PP) in jamming detection. This approach is unique – the existing works rely on holistic packet analysis, which degrades their performance, a fundamental issue that substantially affects achieving real-time performance. We estimate the probability of high deviation in received signal strength (RSS) using a weak estimation learning scheme, which helps absorb the impact of a dynamic environment. Finally, we perform a CUSUM test for reactive jamming detection. We evaluate the performance of the proposed scheme through simulation studies in a UWSN environment. The results show that, as envisioned, the proposed scheme is capable of accurately detecting reactive jamming in UWSNs with 100% true detection, while keeping the average detection delay small.
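The final step mentioned above is a CUSUM test. The snippet below is a generic one-sided CUSUM change detector applied to a stream of RSS-deviation scores; the threshold, drift, and sample values are illustrative assumptions, not parameters taken from the paper.

```python
def cusum_detect(samples, mu0, drift, threshold):
    """One-sided CUSUM test: accumulate positive deviations of the
    samples from the in-control mean mu0 (minus a drift allowance)
    and flag the first index where the sum crosses the threshold."""
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - mu0 - drift))
        if s > threshold:
            return i          # change (e.g. jamming onset) detected here
    return None               # no change detected

# Quiet channel followed by a sudden sustained RSS deviation:
readings = [0.0] * 20 + [5.0] * 5
alarm_at = cusum_detect(readings, mu0=0.0, drift=0.5, threshold=10.0)
```

The drift term suppresses alarms from small random fluctuations, while a sustained shift accumulates quickly, which is what makes CUSUM attractive for low-latency onset detection.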

Manas Khatua, Sudip Misra

Fault-Tolerant Design of Wireless Sensor Networks with Directional Antennas

A tree structure is often used in wireless sensor networks to deliver collected sensor data to a sink node. Such a tree can be built using directional antennas, as they offer considerable advantages over omni-directional ones. A tree is adequate for gathering data from all sensor nodes as long as no node in the tree fails. Since the connectivity of a tree is one, the failure of any single node disconnects the tree and may prevent the sink node from collecting data from some of the sensor nodes. In this paper we study the problem of enhancing the fault-tolerance capability of a data gathering tree by adding a few additional links, so that the failure of any one sensor does not disconnect the tree. Assuming that the addition of each link to the tree involves some cost, we study the problem of least-cost augmentation of the tree, so that even after the failure of a single node, all surviving nodes remain connected to the sink node. We prove that the least-cost tree augmentation problem is NP-complete. Moreover, we provide an approximation algorithm with a performance bound of two. Experimental evaluations demonstrate that the approximation algorithm performs even better in practice and almost always produces a near-optimal solution.

Shahrzad Shirazipourazad, Arunabha Sen, Subir Bandyopadhyay

Self-stabilizing Silent Disjunction in an Anonymous Network

Given a fixed input bit at each process of a connected network of processes, the disjunction problem is for each process to compute an output bit whose value is 0 if all input bits in the network are 0, and 1 if at least one input bit in the network is 1. A uniform asynchronous distributed algorithm DISJ is given for the disjunction problem in an anonymous network. DISJ is self-stabilizing, meaning that the correct output is computed from an arbitrary initial configuration, and silent, meaning that every computation of DISJ is finite. The time complexity of DISJ is O(n) rounds, where n is the size of the network. DISJ works under the unfair daemon.
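A common way to obtain a silent, self-stabilizing disjunction is to have every process maintain its distance to the nearest process with input 1, capped at n, and output 1 iff that distance is below the cap. The synchronous simulation below illustrates only that distance-based idea; it is a sketch under that assumption, not the DISJ algorithm itself (which is asynchronous and works under the unfair daemon).

```python
def disjunction(inputs, adj):
    """Synchronous sketch of distance-based self-stabilizing disjunction.

    inputs: dict node -> input bit; adj: dict node -> neighbor list.
    Each node repeatedly recomputes its (capped at n) distance to the
    nearest 1-input; after at most n rounds the configuration is stable
    and every node's output bit equals the OR of all input bits.
    """
    n = len(inputs)
    dist = {v: 0 for v in adj}  # arbitrary (here adversarial) start state
    for _ in range(n):
        dist = {
            v: 0 if inputs[v] == 1
            else min([n] + [dist[u] + 1 for u in adj[v]])
            for v in adj
        }
    return {v: int(d < n) for v, d in dist.items()}

# Example: a path 0-1-2-3 with a single 1-input at node 3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
out = disjunction({0: 0, 1: 0, 2: 0, 3: 1}, adj)
```

Silence comes for free here: once the distances match the true capped distances, no further update changes any variable.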

Ajoy K. Datta, Stéphane Devismes, Lawrence L. Larmore

Uniform Consensus with Homonyms and Omission Failures

We consider the consensus problem in synchronous message passing models in which some processes may be homonyms, i.e. may share the same id. Many results have already been proved concerning Byzantine failures in models with homonyms [10]; here we complete the picture with crash and omission failures.

Let n be the number of processes, t the number of processes that may be faulty (t < n), and l (1 ≤ l ≤ n) the number of identifiers. We prove that for crash failures and send-omission failures, uniform consensus is solvable even if l = 1, that is, with fully anonymous processes, for any number of faulty processes.

Concerning omission failures, when the processes are numerate, i.e. able to count the number of copies of identical messages they receive in each round, uniform consensus is solvable even for fully anonymous processes when n > 2t. If processes are not numerate, uniform consensus is solvable if and only if l > 2t.

All the proposed protocols are optimal both in the number of communication steps needed, and in the number of processes that can be faulty.

All these results show (1) that identifiers are not useful for crash and send-omission failures, or when processes are numerate, and (2) that for general omission or Byzantine failures the number of distinct ids becomes significant.

Carole Delporte-Gallet, Hugues Fauconnier, Hung Tran-The

Democratic Elections in Faulty Distributed Systems

In this paper, we show that for elections in distributed systems, converting non-binary choices to binary choices does not always yield optimal results when the preferences of nodes are not identical. With this observation, we study the problem of conducting democratic elections in distributed systems in the form of social choice and social welfare functions with three or more candidates. We present some impossibility and possibility results for distributed democratic elections in the presence of Byzantine behavior. We also discuss some existing election schemes, and present a new approach that attempts to mitigate the effects of Byzantine votes. We analyze the performance of these schemes through simulations to compare their efficacy in producing the most desirable social welfare rankings.

Himanshu Chauhan, Vijay K. Garg

Robust Deployment of Wireless Sensor Networks Using Gene Regulatory Networks

Sensor nodes in a Wireless Sensor Network (WSN) are responsible for sensing the environment and propagating the collected data in the network. The communication between sensor nodes may fail due to different factors, such as hardware failures, energy depletion, temporal variations of the wireless channel and interference. To maximize efficiency, the sensor network deployment must be robust and resilient to such failures. One effective solution to this problem has been inspired by Gene Regulatory Networks (GRNs). Owing to millions of years of evolution, GRNs display intrinsic properties of adaptation and robustness, thus making them suitable for dynamic network environments. In this paper, we exploit real biological gene structures to deploy wireless sensor networks, called bio-inspired WSNs. Exhaustive structural analysis of the network and experimental results demonstrate that the topology of bio-inspired WSNs is robust, energy-efficient, and resilient to node and link failures.

Azade Nazi, Mayank Raj, Mario Di Francesco, Preetam Ghosh, Sajal K. Das

Cellular Pulse Switching: An Architecture for Event Sensing and Localization in Sensor Networks

This paper presents a novel energy-efficient pulse switching protocol for ultra-light-weight wireless cellular network applications. The key idea of pulse switching is to abstract a single pulse, as opposed to multi-bit packets, as the information exchange mechanism. Event monitoring with conventional packet transport can be prohibitively energy-inefficient due to the communication, processing, and buffering overheads of the large number of bits within a packet’s data, header, and preambles. Pulse switching, on the other hand, is shown to be sufficient for event monitoring applications that require binary sensing. This paper presents a joint MAC and Routing architecture for pulse switching with a novel cellular event localization framework. Through analytical modeling and simulation experiments, it is shown that pulse switching can be an effective means for event based networking, which can potentially replace packet transport when the information is binary in nature.

Qiong Huo, Bo Dong, Subir Biswas

Asynchrony from Synchrony

A synchronous message-passing complete network with an adversary that may purge messages is used to precisely model the tasks that are read-write wait-free computable.

In the past, adversaries that reduce the computational power of a system as they purge messages were studied in the context of their ability to foil consensus. This paper considers the other extreme. It characterizes the limits on the power of a message adversary so that it cannot foil the solution of tasks that are read-write wait-free solvable, but can foil the solution of any task that is not read-write wait-free solvable. Put another way, we study the weakest message adversary that allows for solving any task that is solvable wait-free in the read-write model.

A remarkable side-benefit of this characterization is a simple, as simple as can be, derivation of the Herlihy-Shavit condition that equates the wait-free read-write model with a subdivided-simplex. We show how each step in the computation inductively takes a subdivided-simplex and further subdivides it in the simplest way possible, making the characterization of read-write wait-free widely accessible.

Yehuda Afek, Eli Gafni

Maximal Antichain Lattice Algorithms for Distributed Computations

The lattice of maximal antichains of a distributed computation is generally much smaller than its lattice of consistent global states. We show that a useful class of predicates can be detected on the lattice of maximal antichains instead of the lattice of consistent cuts, yielding significant (in many cases exponential) savings. We then propose new online and offline algorithms to construct and enumerate the lattice of maximal antichains. The previously known algorithm by Nourine and Raynaud [NR99, NR02] to construct the lattice takes $O(n^2 m)$ time, where n is the number of events in the computation and m is the size of the lattice of maximal antichains. The algorithm by Jourdan, Rampon and Jard [JRJ94] takes $O((n + w^2)wm)$ time, where w is the width of the computation. All these algorithms assume as input the lattice of maximal antichains prior to the arrival of a new event. We present a new online incremental algorithm, OLMA, that computes the newly added elements of the lattice without requiring the prior lattice. Since the lattice may be exponential in the size of the computation, we get a significant reduction in the space complexity. The OLMA algorithm takes $O(m w^2 \log w_L)$ time and $O(w_L w \log n)$ space, where $w_L$ is the width of the lattice of maximal antichains. The lower space complexity makes our algorithm applicable for online global predicate detection in a distributed system. For the purpose of analyzing offline traces, we also propose new enumeration algorithms to traverse the lattice.

Vijay K. Garg

On the Analysis of a Label Propagation Algorithm for Community Detection

This paper initiates the formal analysis of a simple, distributed algorithm for community detection on networks. We analyze an algorithm that we call Max-LPA, both in terms of its convergence time and in terms of the “quality” of the communities detected. Max-LPA is an instance of a class of community detection algorithms called label propagation algorithms. As far as we know, most analysis of label propagation algorithms thus far has been empirical in nature; in this paper we seek a theoretical understanding of label propagation algorithms. In our main result, we define a clustered version of Erdös-Rényi random graphs with clusters $V_1, V_2, \ldots, V_k$, where the probability p of an edge connecting nodes within a cluster $V_i$ is higher than p′, the probability of an edge connecting nodes in distinct clusters. We show that even with fairly general restrictions on p and p′ ($p = \Omega\left(\frac{1}{n^{1/4-\epsilon}}\right)$ for any ε > 0 and $p' = O(p^2)$, where n is the number of nodes), Max-LPA detects the clusters $V_1, V_2, \ldots, V_k$ in just two rounds. Based on this and on empirical results, we conjecture that Max-LPA can correctly and quickly identify communities on clustered Erdös-Rényi graphs even when the clusters are much sparser, i.e., with $p = \frac{c\log n}{n}$ for some c > 1.
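The label propagation idea behind Max-LPA can be sketched in a few lines: every node starts with a unique label and, in synchronous rounds, adopts the most frequent label in its closed neighborhood, breaking ties in favor of the larger label. This is an illustrative reading of the Max-LPA-style rule, not the paper's exact algorithm or analysis; the toy graph below is invented for the example.

```python
from collections import Counter

def max_lpa(adj, rounds=10):
    """Synchronous label propagation sketch (Max-LPA-style rule):
    every node adopts the most frequent label in its closed
    neighborhood, breaking ties in favor of the larger label."""
    labels = {v: v for v in adj}          # unique initial labels
    for _ in range(rounds):
        new = {}
        for v in adj:
            counts = Counter([labels[v]] + [labels[u] for u in adj[v]])
            # Pick by (frequency, label value) so ties go to the max label.
            new[v] = max(counts.items(), key=lambda kv: (kv[1], kv[0]))[0]
        labels = new
    return labels

# Two triangles joined by a single bridge edge: two communities emerge.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
labels = max_lpa(adj, rounds=5)
```

On this toy input the dense intra-cluster edges let each triangle converge on a single internal label within a couple of rounds, mirroring the two-round behavior the abstract proves for clustered random graphs.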

Kishore Kothapalli, Sriram V. Pemmaraju, Vivek Sardeshmukh

How to Survive and Thrive in a Private BitTorrent Community

Many private BitTorrent communities employ Sharing Ratio Enforcement (SRE) schemes to incentivize users to contribute. It has been demonstrated that communities that adopt SRE are greatly oversupplied, i.e., they have much higher seeder-to-leecher ratios than communities in which SRE is not employed. Most previous studies focus on the positive effect of SRE in achieving high download speeds. However, in this paper we show through measurements that SRE also induces severe side effects. Under SRE, users are forced to seed for excessively long times to maintain sharing ratios adequate to start new downloads, even though most of the time their seeding is not very productive (in terms of upload speed). We also observe that many users who seed for very long times still have low sharing ratios. We find that this is due to the counter-intuitive phenomenon that long seeding times do not necessarily lead to large upload amounts. Based on these observations, we discuss possible strategies for users to gain sharing ratio efficiently, which help them to survive and thrive in private communities.

Adele Lu Jia, Xiaowei Chen, Xiaowen Chu, Johan A. Pouwelse, Dick H. J. Epema

Optimal Migration Contracts in Virtual Networks: Pay-as-You-Come vs Pay-as-You-Go Pricing

Network virtualization realizes the vision of an Internet where resources offered by different stakeholders are used and shared by multiple co-existing virtual networks. The abstraction introduced by network virtualization opens new business opportunities. We expect that in the near future, infrastructure providers (or resource brokers and resellers) will offer flexibly specifiable and on-demand virtual networks over the Internet, similarly to the elastic resources in today’s clouds.

This paper initiates the discussion of optimal resource allocation in such an economic environment. We consider a scenario where a flexible service (such as a web service or an SAP database) is implemented over a virtual network. This service can be seamlessly migrated closer to the current locations of the (mobile) users. We assume that a virtual network provider offers different contracts to the service provider, and we distinguish between two fundamentally different pricing models: (1) a Pay-as-You-Come model, where the service provider needs to decide in advance which time-based contracts to buy in order to implement the service, and (2) a Pay-as-You-Go model, where the service provider is charged only when the service terminates, and only for the amount of resources actually used. In both cases, the virtual network provider may offer a discount if more resources are bought, e.g., buying a resource contract of double duration or of twice as much bandwidth costs only fifty percent more than a simple contract. We describe two optimal migration algorithms, PAYC (for the Pay-as-You-Come model) and PAYG (for the Pay-as-You-Go model), provide a quantitative comparison of the two pricing models, and discuss their implications. Finally, extensions to online algorithms are discussed.
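The discount structure described above (doubling a contract's duration costs fifty percent more) can be made concrete with a toy cost comparison. All prices and the single-contract purchasing policy below are assumptions for illustration; they are not the paper's PAYC/PAYG algorithms, which also account for migration.

```python
import math

def pay_as_you_come_cost(duration, base=1.0):
    """Buy one advance contract long enough to cover `duration` time
    units, assuming a duration-1 contract costs `base` and doubling a
    contract's duration multiplies its price by 1.5 (the discount
    structure from the abstract, with invented unit prices)."""
    k = max(0, math.ceil(math.log2(duration)))  # smallest 2^k >= duration
    return base * 1.5 ** k

def pay_as_you_go_cost(duration, unit=1.0):
    """Pay per time unit actually used, settled when the service ends."""
    return duration * unit

# With these assumed prices, advance contracts win for long, predictable
# runs, while pay-as-you-go avoids over-buying when the run ends early.
long_run = (pay_as_you_come_cost(8), pay_as_you_go_cost(8))
```

Note how the advance-contract cost grows only polynomially in log(duration) under this discount, which is why committed long-running services favor the Pay-as-You-Come style.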

Xinhui Hu, Stefan Schmid, Andrea Richa, Anja Feldmann

Parallel Scalar Multiplication on Elliptic Curves in Wireless Sensor Networks

In event-driven sensor networks, when a critical event occurs, sensors should quickly and securely transmit messages back to the base station. We choose Elliptic Curve Cryptography to secure the network, since it offers faster computation and good security with shorter keys than RSA. In order to minimize the computation time, we propose to distribute the computation of scalar multiplications on elliptic curves by involving neighbor nodes in this operation. The results of performance tests show that while parallel computing consumes more resources overall, it considerably reduces the computation time of scalar multiplications. This method is applicable in event-driven applications where execution time is the most critical factor.
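One natural way to parallelize a scalar multiplication k·P, consistent with the distribution idea in the abstract though not necessarily the authors' exact scheme, is to split k into w-bit windows, let each neighbor compute its window's partial product from a precomputed base point, and combine the partial results with point additions. The sketch below uses placeholder add/double callbacks, here instantiated with a toy additive group of integers instead of real elliptic-curve arithmetic.

```python
def split_scalar(k, chunks, w):
    """Split k into `chunks` windows of w bits: k = sum k_i * 2^(i*w)."""
    return [(k >> (i * w)) & ((1 << w) - 1) for i in range(chunks)]

def parallel_scalar_mult(k, P, add, double, chunks=4, bits=32):
    """Distribute k*P across `chunks` workers: worker i computes
    k_i * (2^(i*w) P) independently; the results are then combined.
    `add`/`double` stand in for elliptic-curve point operations."""
    w = (bits + chunks - 1) // chunks
    parts = split_scalar(k, chunks, w)
    # Precompute the base points 2^(i*w) P (shareable with neighbors).
    bases, B = [], P
    for _ in range(chunks):
        bases.append(B)
        for _ in range(w):
            B = double(B)
    def scal(m, Q):
        # Standard double-and-add on one window (one worker's job).
        R = None
        while m:
            if m & 1:
                R = Q if R is None else add(R, Q)
            Q = double(Q)
            m >>= 1
        return R
    partials = [scal(m, B) for m, B in zip(parts, bases) if m]
    if not partials:
        return None  # k == 0: nothing to compute
    out = partials[0]
    for p in partials[1:]:
        out = add(out, p)
    return out

# Toy instantiation: integers under addition stand in for curve points.
add = lambda a, b: a + b
double = lambda a: 2 * a
result = parallel_scalar_mult(123456789, 7, add, double)
```

The window computations are independent, so in a sensor network each can be shipped to a different neighbor, at the price of distributing the precomputed base points and collecting the partial sums.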

Yanbo Shou, Herve Guyennet, Mohamed Lehsaini

PeerVault: A Distributed Peer-to-Peer Platform for Reliable Data Backup

Large-scale peer-to-peer (P2P) systems are envisioned as a way to provide online storage services. For a reliable storage service, the participating peers are required to maintain strict commitments regarding their online duration. On the other hand, recent results show that users participating in volunteer computing collectively exhibit certain patterns in terms of their long-term availability, a metric that denotes periodic online durations over a considerably long time interval. In this article we introduce PeerVault, a P2P platform that leverages the long-term availability of computer users to form a distributed, reliable storage service targeted at the backup of personal data. We further present a distributed monitoring scheme that assists PeerVault in detecting peer churn and ensuring the reliability of the proposed backup service. To the best of our knowledge, this is the first effort to describe the architecture of a reliable P2P backup service exploiting the long-term availability and idle resources of computing devices. We conduct experiments based on the availability traces of hundreds of thousands of hosts from the SETI@home computing project. The results show that the proposed approach is effective in terms of both the availability and the reliability of the offered backup service.

Adnan Khan, Mehrab Shahriar, Sk Kajal Arefin Imon, Mario Di Francesco, Sajal K. Das

Distributed Verification Using Mobile Agents

We study the problem of distributed verification in the mobile agent model. The problem of distributed verification in a network using local checking has been studied previously. In the local verification model, each node of the network must decide on a yes or no answer based on knowledge of its immediate neighborhood, and the global answer is obtained by a conjunction of the local answers. The efficiency of such a verification process is determined by the sizes of the proofs, i.e., the labels that must be assigned to the nodes to enable local verification of some global property. In the mobile agent model, on the other hand, verification is performed by an agent that is allowed to move from node to node of the graph, reading the labels of visited nodes in order to verify the required property. In this case, minimizing the memory of the agent is the primary objective. We study the space complexity of mobile verification in terms of the memory of each agent as well as the number of agents required globally in networks of size n. In the case of a solitary agent, logarithmic memory is both necessary and sufficient for solving certain graph-based verification problems (even in the family of trees). For a team of at least two agents, the space complexity of most verification problems (including the well-studied MST verification) is reduced to O(log log n), while a team of at least three agents, even with constant-size memory each, is sufficient to solve all graph-based verification problems. We also study the effect of randomization and show that one agent with O(log log n) bits of memory and the ability to flip coins is as powerful as two deterministic agents with similar memory limitations.

Shantanu Das, Shay Kutten, Zvi Lotker

Sublinear Bounds for Randomized Leader Election

This paper concerns randomized leader election in synchronous distributed networks. A distributed leader election algorithm is presented for complete n-node networks that runs in O(1) rounds and, with high probability, uses only $O(\sqrt{n}\log^{3/2} n)$ messages to elect a unique leader. This algorithm is then extended to solve leader election on any connected non-bipartite n-node graph G in O(τ(G)) time and $O(\tau(G)\sqrt{n}\log^{3/2} n)$ messages, where τ(G) is the mixing time of a random walk on G. The above result implies highly efficient (sublinear running time and messages) leader election algorithms for networks with small mixing times, such as expanders and hypercubes. In contrast, previous leader election algorithms had at least linear message complexity even in complete graphs. Moreover, super-linear message lower bounds are known for time-efficient deterministic leader election algorithms. Finally, an almost-tight lower bound is presented for randomized leader election, showing that $\Omega(\sqrt n)$ messages are needed for any O(1)-time leader election algorithm that succeeds with high probability. It is also shown that Ω(n^{1/3}) messages are needed by any leader election algorithm that succeeds with high probability, regardless of the number of rounds. We view our results as a step towards understanding the randomized complexity of leader election in distributed networks.
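The abstract does not spell out the algorithm, but the flavor of sublinear randomized election can be conveyed with a toy simulation: a few nodes self-nominate at random, each announces a random rank to a small random set of "referee" nodes, and the smallest rank wins. All probabilities and sample sizes below are illustrative choices, not the paper's.

```python
import math
import random

def elect_leader(n, seed=0):
    """Toy simulation of candidate sampling in a complete n-node network.

    Each node self-nominates with probability ~ 2 ln(n)/sqrt(n), draws a
    random rank, and announces (rank, id) to ~ sqrt(n) log n random
    referee nodes; the candidate with the globally smallest rank wins.
    The parameters are illustrative, not taken from the paper.
    """
    rng = random.Random(seed)
    p = min(1.0, 2 * math.log(n) / math.sqrt(n))
    candidates = [v for v in range(n) if rng.random() < p]
    if not candidates:                      # unlikely; retry with a new seed
        return elect_leader(n, seed + 1)
    sample_size = min(n, int(math.sqrt(n) * math.log(n)) + 1)
    best = {}                               # referee -> smallest (rank, id) heard
    messages = 0
    for c in candidates:
        rank = rng.random()
        for ref in rng.sample(range(n), sample_size):
            messages += 1
            if ref not in best or (rank, c) < best[ref]:
                best[ref] = (rank, c)
    leader = min(best.values())[1]          # smallest rank seen by any referee
    return leader, messages
```

Even in this crude model, the message count stays well below the roughly n² messages a naive all-to-all election would exchange, mirroring the sublinear bound the paper proves.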

Shay Kutten, Gopal Pandurangan, David Peleg, Peter Robinson, Amitabh Trehan

Linear Space Bootstrap Communication Schemes

We consider a system of n processes with ids not known a priori, drawn from a large, potentially unbounded space. How can these n processes communicate to solve a task? We show that n a priori allocated Multi-Writer Multi-Reader (MWMR) registers are both necessary and sufficient to solve any read-write wait-free solvable task. This contrasts with the existing possible solution borrowed from adaptive algorithms, which requires Θ(n²) MWMR registers.

To obtain these results, the paper shows how the processes can emulate, in a non-blocking manner, a system of n Single-Writer Multi-Reader (SWMR) registers on top of n MWMR registers. Such an emulation is impossible with n − 1 MWMR registers.

Furthermore, we want to solve a sequence of tasks (potentially infinite) that are sequentially dependent (processes need the previous task's outputs in order to proceed to the next task). A non-blocking emulation might starve a process forever. By doubling the space complexity, i.e., using 2n − 1 rather than just n registers, the computation becomes wait-free rather than non-blocking.

Carole Delporte-Gallet, Hugues Fauconnier, Eli Gafni, Sergio Rajsbaum

An Analysis Framework for Distributed Hierarchical Directories

We provide a novel analysis framework for distributed hierarchical directories for an arbitrary set of dynamic (online) requests. We prove a general ${\cal O}(\eta\cdot \varphi \cdot \sigma^3 \cdot h)$ competitive ratio for any distributed hierarchical directory, where η is a parameter related to the write-set size, ϕ and σ are parameters related to stretch and growth, and h is the number of levels in the hierarchy. Through this framework, we give bounds for several known distributed directory protocols. In general network topologies, we obtain an ${\cal O}(\log^2 n\cdot\log D)$ competitive ratio, where n and D are the number of nodes and the diameter, respectively, of the network. Moreover, we obtain an ${\cal O}(\log D)$ competitive ratio in constant-doubling metric topologies. To the best of our knowledge, this is the first (competitive) dynamic analysis for distributed hierarchical directories.

Gokarna Sharma, Costas Busch

SMT-Based Model Checking for Stabilizing Programs

We focus on the verification of stabilizing programs using SMT solvers, which convert the verification problem into the satisfiability problem of a Boolean formula and utilize efficient techniques to determine whether it is satisfiable. We utilize techniques from bounded model checking to determine whether a given program is stabilizing. We illustrate our approach using three case studies, and identify tradeoffs between verification with SMT solvers and existing approaches.

Jingshu Chen, Sandeep Kulkarni

Deployment and Evaluation of a Decentralised Runtime for Concurrent Rule-Based Programming Models

With the emergence of highly heterogeneous, dynamic, and large distributed platforms, declarative programming, whose goal is to ease the programmer's task by separating the control from the logic of a computation, has recently regained a lot of interest as a means of programming such platforms. In particular, rule-based programming is regarded as a promising model in this quest for adequate programming abstractions for these platforms. However, while these models are gaining a lot of attention, there is a demand for generic tools able to run such models at large scale.

The chemical programming model, which was designed following the chemical metaphor, is a rule-based programming model, with a non-deterministic execution model, where rules are applied concurrently on a multiset of data. In this paper, we explore the experimental side of concurrent rule-based models, by deploying a distributed chemical runtime at large scale.

The architecture proposed combines a peer-to-peer communication layer with an adaptive protocol for atomically capturing objects on which rules should be applied, and an efficient termination-detection scheme. We describe the software prototype implementing this architecture. Based on its deployment over a real-world test-bed, we present its performance results, which confirm analytically obtained complexities, and experimentally show the sustainability of such a programming model.

Marko Obrovac, Cédric Tedeschi

Weak Read/Write Registers

In [2], Lamport defined three classes of shared registers supporting read and write operations, called safe, regular, and atomic, depending on their properties when several reads and/or writes are executed concurrently. We consider generalizations of Lamport's notions, called k-safe, k-regular, and k-atomic. First, we provide constructions for implementing 1-atomic registers (the strongest type) in terms of k-safe registers (the weakest type). Then, we demonstrate how the constructions enable classical synchronization problems, such as mutual exclusion and ℓ-exclusion, to be solved easily and efficiently using single-writer multi-reader k-safe bits, for any k ≥ 1. We also explain how, by using k-registers, it is possible to provide some level of resiliency against memory reordering.
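The paper's k-safe constructions are not detailed in this abstract, but the first rung of Lamport's original ladder, building a regular bit from a safe bit, gives the flavor of such register transformations: the writer caches the last value and skips the physical write when nothing changed, so every low-level write is a real change. The sketch below is of that classic construction, not the k-generalized one.

```python
class SafeBit:
    """Single-writer safe bit: a read overlapping a write may return
    either 0 or 1; a read with no overlapping write returns the last
    value written. (Modeled here as a plain field.)"""
    def __init__(self):
        self.value = 0
    def read(self):
        return self.value
    def write(self, v):
        self.value = v

class RegularBit:
    """Lamport's safe-to-regular construction: skip the physical write
    when the value is unchanged. Then every actual write flips the bit,
    so a read overlapping a write can only return the old or the new
    value -- exactly the regularity condition."""
    def __init__(self):
        self._safe = SafeBit()
        self._last = 0        # writer-local copy of the last value written
    def read(self):
        return self._safe.read()
    def write(self, v):
        if v != self._last:   # rewriting the same value touches nothing shared
            self._safe.write(v)
            self._last = v
```

The design point is that correctness is bought entirely with writer-local state; the shared memory itself stays as weak as before.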

Gadi Taubenfeld

Fast Leader (Full) Recovery Despite Dynamic Faults

We give a leader recovery protocol that recovers a legitimate configuration, in which a single leader exists, after at most k arbitrary memory corruptions hit the system. That is, if a leader is elected before the state corruptions, the same leader is elected after recovery. Our protocol works in any anonymous bidirectional, yet oriented, ring of size n, and does not require that processes know n, although knowledge of k is assumed. If n ≥ 18k + 1, our protocol recovers the leader in $O(k^2)$ rounds using O(log k) bits per process, assuming unfair scheduling. Our protocol handles dynamic faults in the sense that memory corruption may still occur while the network has started recovering the leader.

Ajoy K. Datta, Stéphane Devismes, Lawrence L. Larmore, Sébastien Tixeuil

Addressing the ZooKeeper Synchronization Inefficiency

ZooKeeper provides an event-like synchronization mechanism, which notifies clients upon a state change on the server. This mechanism leads to very inefficient implementations of synchronization objects. We propose a new solution to this problem: handle a sequence of client operations completely on the server through a generic API. We have developed a prototype that allows very efficient implementation of synchronization objects. The solution requires a deterministic multi-threaded server. Experiments show the significant efficiency gains of our solution on producer-consumer queues and synchronization barriers.

Babak Kalantari, André Schiper

Compact TCAM: Flow Entry Compaction in TCAM for Power Aware SDN

Low-latency lookup (typically single-cycle) has made Ternary Content Addressable Memory (TCAM) indispensable in high-performance network switching devices. However, the high power dissipation of TCAM makes it incongruous in switches for today's power-sensitive emerging network framework, viz., Software Defined Networking (SDN). In this paper we propose Compact TCAM, an approach that reduces the size of the flow entries in TCAM. We identify flows using tags that are shorter than the original number of bits used to store the flow entries in SDN switches, and leverage the dynamic programming capability of SDN to route packets using these tags. We show that our approach can be easily implemented within the SDN framework while optimizing TCAM space. Our experiments with real-world and synthetic traffic show an average reduction in TCAM power of 80% in SDN switching devices for a given number of flows.
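The core idea, matching on short edge-assigned tags instead of wide flow keys, can be sketched with a hypothetical tag table; the tag width, key width, and `TagTable` API below are illustrative assumptions, not the paper's design.

```python
class TagTable:
    """Hypothetical sketch of flow-entry compaction: each wide flow key
    is mapped to a short sequential tag at the network edge, so core
    TCAM rules need only tag_bits of ternary storage per flow."""
    def __init__(self, tag_bits=16):
        self.tag_bits = tag_bits
        self.tags = {}                       # flow key -> short tag
    def assign(self, flow_key):
        tag = self.tags.get(flow_key)
        if tag is None:
            if len(self.tags) >= 2 ** self.tag_bits:
                raise OverflowError("tag space exhausted")
            tag = len(self.tags)             # next unused tag
            self.tags[flow_key] = tag
        return tag
    def entry_savings(self, key_bits=356):
        # OpenFlow-style matches are several hundred bits wide; 356 is
        # an illustrative figure, not a number from the paper
        return 1 - self.tag_bits / key_bits
```

Packets of the same flow receive the same tag, and under these illustrative widths the per-entry TCAM storage shrinks by over 90%, which is what drives the power reduction.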

Kalapriya Kannan, Subhasis Banerjee

A Media Access and Feedback Protocol for Reliable Multicast over Wireless Channel

A link-level reliable multicast requires a channel access protocol to resolve collisions among the feedback messages sent by multicast data receivers. In this paper, we propose a virtual token based channel access and feedback protocol (VTCAF), which can trade off reliability against access delay. The protocol uses a virtual (implicit) token passing mechanism based on carrier sensing to avoid collisions of feedback messages. We have simulated our protocol using the Castalia network simulator to evaluate its performance. Simulation results show that our protocol considerably reduces the average access delay while ensuring very high reliability.
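The scheduling effect of an implicit token can be illustrated with a toy model; the id-ordered token, slot length, and sensing gap below are assumptions for illustration, not VTCAF's actual parameters.

```python
def feedback_schedule(n_receivers, silent, slot=1.0, sense=0.1):
    """Toy model of virtual (implicit) token passing for multicast feedback.

    Receivers transmit ACK/NACK in id order, one per slot, so feedback
    never collides; when a receiver stays silent, the others detect an
    idle channel after a short carrier-sensing period and the token
    implicitly moves on. Returns (ids that transmitted, total time).
    """
    t, order = 0.0, []
    for r in range(n_receivers):
        if r in silent:
            t += sense        # idle sensing gap: token passes implicitly
        else:
            order.append(r)
            t += slot         # collision-free feedback transmission
    return order, t
```

The model shows the trade-off the abstract mentions: a longer sensing gap makes token hand-off more robust but inflates the access delay contributed by each silent receiver.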

Ashutosh Bhatia, R. C. Hansdah

POSTER: Distributed Lagrangean Clustering Protocol

Heterogeneity in sensor nodes may arise from differences in transmission capabilities, different terrains, or different distributions of event occurrences. A distributed, energy-efficient clustering protocol, the Distributed Lagrangean Clustering Protocol (DLCP), based on Lagrangean surrogate optimization, is proposed for heterogeneous networks. DLCP uses the residual energy and position of the sensor nodes for the election of cluster heads. Cluster head election is modeled as a facility location problem. A simulation study reveals that DLCP forms better clusters than HEED (Hybrid Energy Efficient Distributed Clustering) and LEACH. DLCP prolongs network lifetime by distributing energy usage in a more uniform manner.

Ravi Tandon, Biswanath Dey, Sukumar Nandi

POSTER: Broadcasting in Delay Tolerant Networks Using Periodic Contacts

Delay Tolerant Networks (DTNs) are characterized by intermittent connectivity between nodes, causing parts of an end-to-end path to be formed at different times, though a complete end-to-end path may not exist at any time. In such networks, messages are buffered at nodes and forwarded as and when a contact with the next hop is made. Typically, broadcasting in DTNs uses some variation of flooding. Such schemes generate a large number of extra messages that may need to be buffered, and may not work well in resource-constrained scenarios such as low buffer size at nodes or nodes with energy constraints.

Prosenjit Dhole, Arobinda Gupta, Arindam Sharma

POSTER: Cryptanalysis and Security Enhancement of Anil K Sarje’s Authentication Scheme Using Smart Cards

In 2010, Anil K. Sarje et al. proposed an improved remote user authentication scheme using smart cards, based on Wang et al.'s authentication scheme. In this paper, we show that in Sarje et al.'s scheme, the ID and password can be computed by an adversary; hence, it is vulnerable to all traditional attacks. We propose an efficient and secure authentication scheme with smart cards that requires minimal computational cost.

Chandra Sekhar Vorugunti, Mrudula Sarvabhatla

POSTER: A New Approach to Impairment-Aware Static RWA in Optical WDM Networks

In order to set up a number of lightpaths to satisfy user requirements for data communication, the static (also called offline) Route and Wavelength Assignment (RWA) problem must be solved. The quality of transmission (QoT) of an optical signal propagating through an optical network degrades due to physical layer considerations such as optical noise, chromatic and polarization mode dispersion, four-wave mixing, cross-phase modulation, and cross-talk [4]. This leads to an increase in the Bit Error Rate (BER) of the optical signal, and the corresponding lightpath becomes infeasible for communication if the BER value crosses a certain threshold. We have used an analytical model proposed by Pereira et al. [3] to estimate the BER. The interdependence between the physical and network layers makes the RWA problem in the presence of impairments a cross-layer optimization problem [1]. To address this problem, a number of approaches are emerging, usually referred to as impairment-aware RWA (IA-RWA) algorithms, that take into account the interaction between the network and physical layers. Our objective is to design a transparent network, where the IA-RWA algorithm must provision lightpaths so that the BER value of each lightpath never exceeds a given threshold.

Sebastian Zawada, Shrestharth Ghosh, Fangyun Luo, Sriharsha Varanasi, Arunita Jaekel, Subir Bandyopadhyay

POSTER: Using Directional Antennas for Epidemic Routing in DTNs in Practice

In this paper, we investigate the effect of using directional antennas (DAs) for routing in Delay Tolerant Networks (DTNs) where agents move following realistic mobility patterns. In particular, the performance of classical SIRS epidemic dynamics [1] for routing in DTNs using a combination of omnidirectional antennas (OAs) and DAs is investigated using the SLAW (Self-similar Least Action Walk) model [2] for human daily-life mobility patterns. We analyze the performance by placing DAs on randomly chosen agents and orienting the DAs in randomly chosen directions. The broad goal of this paper is to initiate a study of using directional antennas for routing in DTNs in practice.

Rajib Ranjan Maiti, Niloy Ganguly, Arobinda Gupta

POSTER: A Secure and Efficient Cross Authentication Protocol in VANET Hierarchical Model

In 2011, Abhijith Das et al. [1] proposed a protocol based on a hierarchical model for node authentication in group communication in VANETs, and claimed that their protocol is robust against conventional security attacks. In this paper we show that the scheme of Abhijith Das et al. [1] cannot withstand various conventional security attacks and fails to provide authentication. We then present our improved scheme.

Chandra Sekhar Vorugunti, Mrudula Sarvabhatla

POSTER: Approximation Algorithm for Minimizing the Size of Coverage Hole in Wireless Sensor Networks

Covering a bounded region with a minimum number of homogeneous sensor nodes is an NP-complete problem [1]. In this article we introduce a variation of the area coverage problem in which the boundary sensor nodes of a coverage hole are allowed to move towards the hole in order to minimize its size. We show that this problem is NP-hard. A ρ-approximation algorithm is proposed to solve this problem, where 2 ≤ ρ ≤ 3, with time complexity O(Δ log Δ + k²).

Barun Gorain, Partha Sarathi Mandal, Sandip Das

Backmatter
