
About this book

This book constitutes the thoroughly refereed conference proceedings of the 6th International Conference on Networked Systems, NETYS 2018, held in Essaouira, Morocco, in May 2018.

The 22 full and 6 short papers presented together with 11 keynotes and 2 invited papers were carefully reviewed and selected from 85 submissions. They are organized in the following topics: distribution; concurrency; verification; networking; self-stabilization; security; graph; and middleware.

Table of Contents


Invited Paper


Program Analyses Using Newton’s Method (Invited Paper)

Esparza et al. generalized Newton’s method—a numerical-analysis algorithm for finding roots of real-valued functions—to a method for finding fixed-points of systems of equations over semirings. Their method provides a new way to solve interprocedural dataflow-analysis problems. As in its real-valued counterpart, each iteration of their method solves a simpler “linearized” problem.
Because essentially all fast iterative numerical methods are forms of Newton’s method, this advance is exciting because it may provide the key to creating faster program-analysis algorithms. However, there is an important difference between the dataflow-analysis and numerical-analysis contexts: when Newton’s method is used in numerical problems, commutativity of multiplication is relied on to rearrange an expression of the form “\(a * Y * b + c * Y * d\)” into “\((a * b + c * d) * Y\).” Equations with such expressions correspond to path problems described by regular languages. In contrast, when Newton’s method is used for interprocedural dataflow analysis, the “multiplication” operation involves function composition, and hence is non-commutative: “\(a * Y * b + c * Y * d\)” cannot be rearranged into “\((a * b + c * d) * Y\).” Equations with the former expressions correspond to path problems described by linear context-free languages (LCFLs).
The invited talk that this paper accompanies presented a method that we developed in 2015 for solving the LCFL sub-problems produced during successive rounds of Newton’s method. It uses some algebraic sleight of hand to turn a class of LCFL path problems into regular-language path problems. This result is surprising because a reasonable sanity check—formal-language theory—suggests that it should be impossible: after all, the LCFL languages are a strict superset of the regular languages.
The talk summarized several concepts and prior results on which that result is based. The method described applies to predicate abstraction, on which most of today’s software model checkers rely, as well as to other abstract domains used in program analysis.
Thomas Reps
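The commutativity obstacle described in the abstract can be illustrated numerically. The following sketch (an illustration of the general algebraic point, not taken from the paper) checks that the rearrangement of "a * Y * b + c * Y * d" into "(a * b + c * d) * Y" holds for real numbers but fails for 2x2 matrices, whose non-commutative product stands in for function composition:

```python
# With commutative multiplication (reals), a*Y*b + c*Y*d = (a*b + c*d)*Y:
a, b, c, d, Y = 2.0, 3.0, 5.0, 7.0, 11.0
lhs = a * Y * b + c * Y * d
rhs = (a * b + c * d) * Y
assert lhs == rhs  # rearrangement is valid for scalars

# With a non-commutative product (2x2 matrix multiplication), the same
# rearrangement is invalid in general.
def mat_mul(X, Z):
    """Product of 2x2 matrices given as tuples of rows."""
    return tuple(
        tuple(sum(X[i][k] * Z[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def mat_add(X, Z):
    """Entrywise sum of 2x2 matrices."""
    return tuple(tuple(X[i][j] + Z[i][j] for j in range(2)) for i in range(2))

A = ((1, 2), (3, 4))
B = ((0, 1), (1, 0))
C = ((2, 0), (0, 3))
D = ((1, 1), (0, 1))
M = ((1, 0), (2, 1))

lhs_m = mat_add(mat_mul(mat_mul(A, M), B), mat_mul(mat_mul(C, M), D))
rhs_m = mat_mul(mat_add(mat_mul(A, B), mat_mul(C, D)), M)
assert lhs_m != rhs_m  # a*Y*b + c*Y*d != (a*b + c*d)*Y here
```

This is exactly why the dataflow-analysis setting leads to linear context-free rather than regular path problems.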



Formalizing and Implementing Distributed Ledger Objects

Despite the hype about blockchains and distributed ledgers, no formal abstraction of these objects has been proposed (an observation also made by Maurice Herlihy in his PODC 2017 keynote talk). To address this issue, in this paper we provide a proper formulation of a distributed ledger object. In brief, we define a ledger object as a sequence of records, and we provide the operations and the properties that such an object should support. Implementing a ledger object on top of multiple (possibly geographically dispersed) computing devices gives rise to the distributed ledger object. In contrast to the centralized object, distribution allows operations to be applied concurrently on the ledger, introducing challenges to the consistency of the ledger at each participant. We provide the definitions of three well-known consistency guarantees in terms of the operations supported by the ledger object: (1) atomic consistency (linearizability), (2) sequential consistency, and (3) eventual consistency. We then provide implementations of distributed ledgers on asynchronous, crash-prone message-passing systems using an Atomic Broadcast service, and show that they provide eventual, sequential, or atomic consistency semantics. We conclude with a variation of the ledger – the validated ledger – which requires that each record in the ledger satisfy a particular validation rule.
Antonio Fernández Anta, Chryssis Georgiou, Kishori Konwar, Nicolas Nicolaou
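The sequential interface sketched in the abstract — an append-only sequence of records, optionally guarded by a validation rule — can be written down in a few lines. This is a minimal sketch of the centralized object only (names and the `valid` hook are illustrative, not the paper's API); the paper's contribution is the distributed, consistency-preserving implementations on top of Atomic Broadcast:

```python
class Ledger:
    """Sequential sketch of a (validated) ledger object:
    an append-only sequence of records."""

    def __init__(self, valid=lambda record: True):
        self._records = []
        self._valid = valid  # validation rule of the validated-ledger variant

    def append(self, record):
        if not self._valid(record):
            return False  # a validated ledger rejects records failing the rule
        self._records.append(record)
        return True

    def get(self):
        return tuple(self._records)  # the whole sequence, oldest record first

# A validated ledger that only accepts positive-amount records:
ledger = Ledger(valid=lambda r: r.get("amount", 0) > 0)
assert ledger.append({"amount": 5}) is True
assert ledger.append({"amount": -1}) is False
assert ledger.get() == ({"amount": 5},)
```

In the distributed versions, the consistency level (eventual, sequential, or atomic) determines which prefixes of this sequence different participants may observe.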

On the Unfairness of Blockchain

The success of Bitcoin relies on the perception of a fair underlying peer-to-peer protocol: blockchain. Fairness here means that the reward (in bitcoins) given to any participant that helps maintain the consistency of the protocol by mining, is proportional to the computational power devoted by that participant to the mining task. Without such perception of fairness, honest miners might be disincentivized to maintain the protocol, leaving the space for dishonest miners to reach a majority and jeopardize the consistency of the entire system.
We prove that blockchain is unfair, even in a distributed system of only two honest miners. In a realistic setting where message delivery is not instantaneous, the ratio between the (expected) number of blocks committed by two miners is actually lower bounded by a term exponential in the product of the message delay and the difference between the two miners’ hashrates. To obtain our result, we model the growth of blockchain, which may be of independent interest. We also apply our result to explain recent empirical observations and vulnerabilities.
Rachid Guerraoui, Jingjing Wang

Weak Failures: Definitions, Algorithms and Impossibility Results

The notion of weak failures, which should be viewed as fractions of traditional failures, is introduced and studied. It is known that there is no consensus algorithm using registers that can tolerate even a single crash failure. Is there a consensus algorithm using registers that can tolerate a “fraction” of a crash failure, i.e., a weak failure? It is known that there is no k-set consensus algorithm for \(n>k\) processes using registers that can tolerate k crash failures. How many weak failures can a k-set consensus algorithm which uses registers tolerate? Answers to these questions follow from our general possibility and impossibility results regarding the ability to tolerate weak failures.
Gadi Taubenfeld

Complete Visibility for Oblivious Robots in Time

We consider the distributed setting of N autonomous mobile robots that operate in Look-Compute-Move cycles following the classic oblivious robots model. We study the fundamental problem where starting from an arbitrary initial configuration, N autonomous robots reposition themselves to a convex hull formation on the plane where each robot is visible to all others (the Complete Visibility problem). We assume obstructed visibility, where a robot cannot see another robot if a third robot is positioned between them on the straight line connecting them. We provide the first \(\mathcal{O}(N)\) time algorithm for this problem in the fully synchronous setting. Our contribution is a significant improvement over the runtime of the only previously known algorithm for this problem which has a lower bound of \(\varOmega (N^2)\) in the fully synchronous setting. The proposed algorithm is collision-free – robots do not share positions and their paths do not cross.
Gokarna Sharma, Costas Busch, Supratik Mukhopadhyay

Gathering of Mobile Agents in Asynchronous Byzantine Environments with Authenticated Whiteboards

We propose two algorithms for the gathering problem of k mobile agents in asynchronous Byzantine environments. For both algorithms, we assume that graph topology is arbitrary, each node is equipped with an authenticated whiteboard, agents have unique IDs, and at most f weakly Byzantine agents exist. Under these assumptions, the first algorithm achieves the gathering without termination in \(O(m+fn)\) moves per agent (m is the number of edges and n is the number of nodes). The second algorithm achieves the gathering with termination in \(O(m+fn)\) moves per agent by additionally assuming that agents on the same node are synchronized, \(f<\lceil \frac{1}{3} k \rceil \) holds, and agents know k. To the best of our knowledge, this is the first work to address the gathering problem of mobile agents for arbitrary topology networks in asynchronous Byzantine environments.
Masashi Tsuchida, Fukuhito Ooshita, Michiko Inoue

Short Paper: BPMN Process Analysis: A Formal Validation and Verification Eclipse Plugin for BPMN Process Models

Process model analysis is a critical step in the Business Process Management life cycle. Its main goal is to detect technical and functional errors made in process models. Since the latter are widely used for software specification, the quality of the produced software depends on the soundness and correctness of these process models. In this paper we present “BPMN Process Analysis”: a formal validation and verification Eclipse plugin for BPMN process models. It allows us to perform three types of formal analyses, namely control-flow, data-flow, and business-rules analyses. Each analysis reports a number of errors and violations. These anomalies are diagnosed and corrected in order to obtain a BPMN model free of certain control-flow errors, data-flow anomalies, and business-rule violations.
Anass Rachdi, Abdeslam En-Nouaary, Mohamed Dahchour



On Helping and Stacks

A concurrent algorithm exhibits helping when one process performs work on behalf of other processes. More formally, helping is observed when the order of some operation in a linearization is fixed by a step of another process. In this paper, we show that no wait-free linearizable implementation of a stack using read, write, compare&swap and fetch&add operations can be help-free, correcting a mistake in an earlier proof by Censor-Hillel et al.
Vitaly Aksenov, Petr Kuznetsov, Anatoly Shalyto

Anonymity in Distributed Read/Write Systems: An Introductory Survey

This paper is an algorithmic introduction to anonymity in asynchronous systems where processes communicate by reading and writing atomic read/write registers. Two types of anonymity are investigated: process-anonymity and memory-anonymity. Process-anonymity is when the processes cannot be distinguished from one another (among other features, they do not have identifiers). Memory-anonymity is when the same memory locations can have different names at different processes (e.g., the location name A used by process \(p_i\) and the location name B used by another process \(p_j\) can correspond to the very same memory location X, and similarly for the names B at \(p_i\) and A at \(p_j\) which correspond to the same memory location \(Y\ne X\)). The paper shows how algorithms can cope with the uncertainty created by these two types of anonymity. To this end, taking examples from the literature, it presents anonymity-tolerant implementations of several concurrent objects, such as snapshot, consensus, and lock, each implementation satisfying a well-defined progress condition (obstruction-freedom, non-blocking, or wait-freedom). This paper must be considered as a short example-driven introductory tutorial on anonymous asynchronous read/write systems.
Michel Raynal, Jiannong Cao

An Anonymous Wait-Free Weak-Set Object Implementation

We consider a system of n anonymous processes communicating through multi-writer/multi-reader (MWMR) registers. A weak-set object is a particularly interesting communication abstraction for anonymous processes; it may be seen as the equivalent of an atomic snapshot object in an anonymous system. It can be accessed through two operations: \(\textsc {add}()\) and \(\textsc {get}()\). Intuitively, an \(\textsc {add}(v)\) operation puts value v in the set represented by the object, while a \(\textsc {get}()\) operation returns the contents of the set. The paper describes a wait-free atomic implementation of a weak-set object shared by n anonymous processes using 3n MWMR registers. The description of the algorithm is incremental. The paper first presents an implementation that is wait-free only for the \(\textsc {Get}()\) operations, using 2n MWMR registers. Then it describes an implementation that is wait-free for the \(\textsc {Get}()\) and the \(\textsc {Add}()\) operations, using \(3n+1\) MWMR registers, and finally it is improved to an implementation using 3n MWMR registers. In addition, a lower-bound of n registers for the implementation of a wait-free atomic weak-set is proved.
Carole Delporte-Gallet, Hugues Fauconnier, Sergio Rajsbaum, Nayuta Yanagisawa
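The weak-set interface described above is small enough to sketch sequentially. The following is an illustrative, single-process sketch of the two operations only (class and method names are mine); the paper's actual contribution is making this object wait-free and atomic for n anonymous processes over MWMR registers:

```python
class WeakSet:
    """Sequential sketch of the weak-set interface."""

    def __init__(self):
        self._values = set()

    def add(self, v):        # ADD(v): put value v into the set
        self._values.add(v)

    def get(self):           # GET(): return the current contents of the set
        return frozenset(self._values)

ws = WeakSet()
ws.add(1)
ws.add(2)
ws.add(1)                    # adding the same value twice has no further effect
assert ws.get() == frozenset({1, 2})
```

The difficulty in the anonymous shared setting is that processes cannot sign their writes, so even agreeing on "the contents of the set" requires the register constructions the paper develops.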

Efficient Means of Achieving Composability Using Object Based Semantics in Transactional Memory Systems

Composing the individual atomic methods of concurrent data structures (cds) poses multiple design and consistency challenges. In this context, the composition provided by transactions in software transactional memory (STM) can be handy. However, most STMs offer only read/write primitives to access shared cds. These read/write primitives result in unnecessary aborts. Instead, semantically richer higher-level methods of the underlying cds, such as lookup, insert, or delete (in the case of hash tables or lists), help ignore unimportant lower-level read/write conflicts and allow better concurrency.
In this paper, we adapt the transaction tree model from databases to propose OSTM, which enables efficient composition of cds. We extend the traditional notions of conflicts and legality to the higher-level methods of cds using STMs and lay down a detailed correctness proof to show that it is co-opaque. We implement OSTM with a concurrent closed-addressed hash table (HT-OSTM) and a list (list-OSTM), which export the higher-level operations as the transaction interface.
In our experiments with varying workloads and randomly generated transaction operations, HT-OSTM shows a speedup of 3 to 6 times, and with respect to aborts HT-OSTM is 3 to 7 times better than ESTM and a read/write-based STM, respectively. list-OSTM outperforms the state-of-the-art lock-free transactional list, NOrec STM list, and boosted list by 30% to 80% across all workloads and scenarios. Further, list-OSTM incurred negligible aborts in comparison to the other techniques considered in the paper.
Sathya Peri, Ajay Singh, Archit Somani

Unleashing and Speeding Up Readers in Atomic Object Implementations

Providing efficient emulations of atomic read/write objects in asynchronous, crash-prone, message-passing systems is an important problem in distributed computing. Communication latency is a factor that typically dominates the performance of message-passing systems; consequently, the efficiency of algorithms implementing atomic objects is measured in terms of the number of communication exchanges involved in each read and write operation. The seminal result of Attiya, Bar-Noy, and Dolev established that two pairs of communication exchanges, or equivalently two round-trip communications, are sufficient. Subsequent research examined the possibility of implementations that involve fewer than four exchanges. The work of Dutta et al. showed that for single-writer/multiple-reader (SWMR) settings two exchanges are sufficient, provided that the number of readers is severely constrained with respect to the number of object replicas in the system and the number of replica failures, and also showed that no two-exchange implementations of multiple-writer/multiple-reader (MWMR) objects are possible. Later research focused on providing implementations that remove the constraint on the number of readers, while having read and write operations that use a variable number of communication exchanges, specifically two, three, or four exchanges.
This work presents two advances in the state-of-the-art in this area. Specifically, for SWMR and MWMR systems algorithms are given in which read operations take two or three exchanges. This improves on prior works where read operations took either (a) three exchanges, or (b) two or four exchanges. The number of readers in the new algorithms is unconstrained, and write operations take the same number of exchanges as in prior work (two for SWMR and four for MWMR settings). The correctness of algorithms is rigorously argued. The paper presents an empirical study using the NS3 simulator that compares the performance of relevant algorithms, demonstrates the practicality of the new algorithms, and identifies settings in which their performance is clearly superior.
Chryssis Georgiou, Theophanis Hadjistasi, Nicolas Nicolaou, Alexander A. Schwarzmann

Optimal Recoverable Mutual Exclusion Using only FASAS

Recent research has focused on designing concurrent algorithms that are resilient to process crashes. The idea is to leverage non-volatile memory so that processes can recover from crashes with as little disruption to the normal behavior of the system as possible. We present the first Recoverable Mutual Exclusion algorithm whose Remote Memory Reference (RMR) complexity is optimal for both Cache-Coherent (CC) and Distributed Shared Memory (DSM) machines. If a process fails f times during its attempt to acquire the Critical Section, our algorithm ensures that the process incurs O(1) RMRs on a DSM machine and O(f) RMRs on a CC machine, which we prove is an optimal bound. Our algorithm improves on a recent algorithm by Golab and Hendler in three ways: it has a provably optimal RMR complexity, a wait-free Exit section, and less reliance on instructions that are not commonly supported on multiprocessors. In particular, Golab and Hendler’s algorithm relies on hardware support for both Fetch-And-Store-And-Store (FASAS) and Double-Word Compare-And-Swap (DCAS), while our algorithm relies only on FASAS. (If X and Y are shared variables and v is a value, FASAS(X, Y, v) writes X’s value in Y and writes v in X, all in a single atomic action.)
Prasad Jayanti, Siddhartha Jayanti, Anup Joshi
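The FASAS semantics defined in the abstract can be modeled in a few lines. This is an illustrative sketch only: a lock stands in for the hardware's atomicity guarantee, and representing shared memory as a dict of named variables is my simplification, not the paper's machine model:

```python
import threading

_atomic = threading.Lock()  # stands in for hardware atomicity

def fasas(shared, x, y, v):
    """FASAS(X, Y, v): write X's current value into Y and v into X,
    all in one atomic action."""
    with _atomic:
        shared[y] = shared[x]
        shared[x] = v

shared = {"X": 1, "Y": 0}
fasas(shared, "X", "Y", 42)
assert shared == {"X": 42, "Y": 1}  # Y received X's old value; X received v
```

Note that FASAS combines a fetch-and-store with a second store in one step, which is strictly stronger than a plain swap but weaker than DCAS, since both writes touch a fixed pair of locations.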



Declarative Parameterized Verification of Topology-Sensitive Distributed Protocols

We show that Cubicle [9], an SMT-based infinite-state model checker, can be applied as a verification engine for GLog, a logic-based specification language for topology-sensitive distributed protocols with asynchronous communication. Existential coverability queries in GLog can be translated into verification judgements in Cubicle by encoding relational update rules as unbounded array transitions. We apply the resulting framework to automatically verify a distributed version of the Dining Philosophers mutual exclusion protocol formulated for an arbitrary number of nodes and communication buffers.
Sylvain Conchon, Giorgio Delzanno, Angelo Ferrando

On Verifying TSO Robustness for Event-Driven Asynchronous Programs

We present a method for checking whether an event-driven asynchronous program running under the Total Store Ordering (TSO) memory model is robust, i.e., all its TSO computations are equivalent to computations under the Sequential Consistency (SC) semantics. We show that this verification problem can be reduced in polynomial time to a reachability problem in a program with two threads, provided that the original program satisfies a criterion called robustness against concurrency, introduced recently in the literature. This result allows us to avoid explicit handling of all concurrent executions in the analysis, which leads to an important gain in complexity.
Ahmed Bouajjani, Constantin Enea, Madhavan Mukund, Rajarshi Roy

Model Checking Dynamic Pushdown Networks with Locks and Priorities

A dynamic pushdown network (DPN) is a set of pushdown systems (PDSs) where each process can dynamically create new instances of PDSs. DPNs are a natural model of multi-threaded programs with (possibly recursive) procedure calls and thread creation. A PL-DPN is an extension of DPNs that allows threads to synchronize using locks and priorities. Transitions in a PL-DPN can have different priorities and acquire/release locks. We consider in this work model checking PL-DPNs against single-indexed LTL properties. We show that this model checking problem is decidable. We propose automata-based approaches for computing the set of configurations of a PL-DPN that satisfy the corresponding single-indexed LTL formula.
Marcio Diaz, Tayssir Touili



OSM-GKM Optimal Shared Multicast-Based Solution for Group Key Management in Mobile IPv6

In the last few years, multicasting has increasingly been used as an efficient communication mechanism for group-oriented applications on the Internet. This progress has motivated the Internet research community to propose many multicast routing protocols to efficiently support multimedia applications such as IPTV, videoconferencing, and group games. However, these multicast routing protocols were not designed for mobile members and sources, and have not been tested in wireless and mobile environments, since they were introduced for multicast parties whose members and sources are topologically stationary. In addition, multicast applications require confidentiality for transmitted data. A traffic encryption key is used to ensure this confidentiality; it has to be changed and distributed to all valid members whenever a membership change (join or leave) occurs in the group or a member moves from one network to another. Our goal is to support secure group communications in mobile environments. This paper presents OSM-GKM, a new scheme for securing transparent multicast communication in mobile environments based on the Optimal Shared Multicast tree protocol. Its contribution is twofold: first, we address transparent multicast routing in mobile IPv6; second, we present an architecture topology for transmitting keys to multicast members. The paper concludes with simulation studies, which show that our architecture achieves better performance in terms of delay, delay variation, and tree cost for the rekeying process.
Youssef Baddi, Mohamed Dafir Ech-cherif El Kettani

A Game-Theoretic Approach for the Internet Content Distribution Chain

Currently, commercial CDN providers have become major actors in the Internet content distribution chain. They serve a large portion of Internet traffic, since they allow efficient user-perceived response times and availability of content. In this paper, we consider an ecosystem that contains content providers (CPs) as customers of content distribution network providers (CDNs). A CDN seeks to attract more content providers by offering them prices to store and distribute their contents to end users with better QoS. Thus, the quality and price of the content, which are considered in this study as decision parameters for content providers, have an indirect impact on the revenue of the CDN. Once the content of a CP is stored in the CDN's content replication servers, the CDN manages the delivery of this content for all end-user requests; a further common parameter in our model determines the share of end-user requests for this content that the CDN wins for the CP. After formulating non-cooperative games, we demonstrate the existence and uniqueness of the Nash equilibrium and use the best-response dynamics algorithm for a numerical analysis of the problems. We find that when the game between the CDNs is socially optimal, the CPs win more, and vice versa when the game becomes a monopoly.
Driss Ait Omar, Mohamed El Amrani, Mohamed Baslam, Mohamed Fakir

New Competition-Based Approach for Caching Popular Content in ICN

Improving the performance of the Internet, as part of new-generation networks, requires support for caching and multicast content delivery at each network node. This new concept is called Information-Centric Networking (ICN). In the present paper, we consider a duopoly model of rational Internet service providers (ISPs) competing in the ICN model, where each ISP is motivated to cache content. Using a generalized Zipf distribution to model content popularity, we devise a game-theoretic approach to determine caching and pricing strategies for each ISP. In turn, the subscribers’ demand for the service of an ISP depends not only on the price and QoS of that ISP but also upon those proposed by all of its competitors. Through rigorous mathematical analysis, we prove the existence and uniqueness of the Nash equilibrium. An iterative and distributed algorithm based on best-response dynamics is proposed to achieve the equilibrium point. Finally, extensive simulations show convergence of the proposed scheme to the Nash equilibrium and give some insights on how the game parameters may affect the price and QoS at the Nash equilibrium.
Hamid Garmani, M’hamed Outanoute, Mohamed Baslam, Mostafa Jourhmane
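The generalized Zipf law mentioned in the abstract assigns the k-th most popular content a request probability proportional to \(1/k^{\alpha}\). A minimal sketch of such a popularity distribution (the function name and parameter choices are illustrative, not taken from the paper):

```python
def zipf_popularity(n, alpha):
    """Request probability of the k-th most popular of n contents under a
    generalized Zipf law: proportional to 1 / k**alpha, normalized to sum to 1."""
    weights = [1.0 / k ** alpha for k in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

probs = zipf_popularity(1000, alpha=0.8)
assert abs(sum(probs) - 1.0) < 1e-9                   # a proper distribution
assert all(p >= q for p, q in zip(probs, probs[1:]))  # popularity decreases with rank
```

The skew parameter alpha controls how concentrated demand is on the most popular items, which in turn drives how much an ISP gains from caching the head of the catalog.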



Churn Possibilities and Impossibilities

Churn refers to processes joining or leaving the peer-to-peer overlay network. We study the handling of various churn variants. Cooperative churn requires leaving processes to participate in the churn algorithm, while adversarial churn allows processes to simply quit. Infinite churn considers an unbounded number of churning processes throughout a single computation. Unlimited churn does not place a bound on the number of concurrently churning processes. Fair churn handling requires that each churn request is eventually satisfied. A local solution involves only a limited part of the network in handling a churn request.
We prove that it is impossible to handle adversarial unlimited churn. We sketch a global solution to all variants of cooperative churn and focus on local churn handling. We prove that a local fair solution to infinite churn, whether limited or unlimited, is impossible. On the constructive side, we present an algorithm that maintains a linear topology and handles the least restrictive unfair churn: infinite and unlimited. We extend this solution to a 1-2 skip list, and describe enhancements for generalized skip lists and skip graphs.
Dianne Foreback, Mikhail Nesterenko, Sébastien Tixeuil

Practically-Self-stabilizing Vector Clocks in the Absence of Execution Fairness

Vector clock algorithms are basic wait-free building blocks that facilitate causal ordering of events. As wait-free algorithms, they are guaranteed to complete their operations within a finite number of steps. Stabilizing algorithms allow the system to recover after the occurrence of transient faults, such as soft errors and arbitrary violations of the assumptions according to which the system was designed to behave.
We present the first, to the best of our knowledge, stabilizing vector clock algorithm for asynchronous crash-prone message-passing systems that can recover in a wait-free manner after the occurrence of transient faults (as well as communication and crash failures) in the absence of execution fairness. We use bounded message and storage sizes and do not rely on any means of synchronization.
The proposed algorithm provides bounded time recovery during fair executions that follow the last transient fault. The novelty is for the case of more challenging settings that consider no execution fairness. The proposed algorithm guarantees a bound on the number of times in which the system might violate safety (while existing algorithms might block forever due to the presence of both transient faults and crash failures).
Iosif Salem, Elad Michael Schiller
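For readers unfamiliar with the building block, the textbook (fault-free, non-stabilizing) vector clock works as sketched below. This illustrates only the standard increment/merge rules; the paper's contribution — recovery from transient faults and crashes without execution fairness, with bounded message and storage sizes — is not modeled here:

```python
class VectorClock:
    """Textbook vector clock for a system of n processes."""

    def __init__(self, n, pid):
        self.clock = [0] * n
        self.pid = pid

    def tick(self):                  # local event: increment own entry
        self.clock[self.pid] += 1

    def send(self):                  # timestamp attached to an outgoing message
        self.tick()
        return list(self.clock)

    def receive(self, ts):           # merge: entrywise maximum, then tick
        self.clock = [max(a, b) for a, b in zip(self.clock, ts)]
        self.tick()

p0, p1 = VectorClock(2, 0), VectorClock(2, 1)
ts = p0.send()                       # p0's send happens-before p1's receive
p1.receive(ts)
assert p0.clock == [1, 0]
assert p1.clock == [1, 1]            # p1's clock dominates p0's: causal order captured
```

A transient fault can corrupt these counters arbitrarily, which is why the stabilizing version must bound the counters and recover without assuming fair scheduling.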

Short Paper: Tight Bounds for Universal and Cautious Self-stabilizing 1-Maximal Matching

We consider the problem of constructing a matching in an n-node graph in a distributed and self-stabilizing manner. We prove a space lower bound of \(\varOmega (n\log n)\) bits for universal maximal matching algorithms, and a time lower bound of \(\varOmega (e)\) moves for universal and cautious 1-maximal matching algorithms. A side contribution of our result is the optimality in both time and space of the self-stabilizing 1-maximal matching algorithm of Inoue et al. [8].
Michiko Inoue, Sébastien Tixeuil



Automata-Based Bottom-Up Design of Conflict-Free Security Policies Specified as Policy Expressions

Security policies (or more briefly: policies) are used to filter accesses to computing resources. A policy is usually specified by a table of rules, where each rule specifies conditions to accept or reject an access request. Since the acceptance of malicious requests or the rejection of legitimate requests may lead to serious consequences, the correct design of policies is very important. The present paper is inspired by two works: the first one uses an automata-based method to design policies, while the second one suggests a bottom-up design method of policies specified as policy expressions. A policy expression looks like a boolean expression, where policies are composed using three operators: \(\lnot \), \(\wedge \), \(\vee \). In this paper, we generalize the automata-based method for the bottom-up design of policies specified as policy expressions. In our context, designing a policy specified as a policy expression \( PE \) amounts to constructing an automaton \(\varGamma _{ PE }\) that models the access control specified in \( PE \). To respect the essence of bottom-up design, the automaton \(\varGamma _{ PE }\) is constructed incrementally, by first constructing the automata that model the basic policies that compose \( PE \), and then constructing incrementally the automata that model the subexpressions that compose \( PE \), until we obtain \(\varGamma _{ PE }\). Then we show how to use \(\varGamma _{ PE }\) to determine whether \( PE \) verifies several properties, namely adequacy, implication, and equivalence. Also, we study the problem of conflicting rules, i.e. policy rules that do not agree on whether some request must be accepted or rejected. We show that our bottom-up design supports any strategy of conflict resolution.
Ahmed Khoumsi, Mohammed Erradi
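The boolean semantics of policy expressions — composing accept/reject policies with \(\lnot\), \(\wedge\), \(\vee\) — can be sketched directly, modeling each policy as a predicate over requests. This illustrates only the meaning of the operators (all names are mine); it does not reflect the paper's automata-based construction:

```python
# Policies as predicates: request -> True (accept) / False (reject).
def p_not(p):
    return lambda req: not p(req)

def p_and(p, q):
    return lambda req: p(req) and q(req)

def p_or(p, q):
    return lambda req: p(req) or q(req)

# Two toy basic policies over requests represented as dicts:
is_admin = lambda req: req.get("role") == "admin"
in_hours = lambda req: 9 <= req.get("hour", 0) < 17

# The policy expression (admin AND in_hours) OR (NOT admin):
pe = p_or(p_and(is_admin, in_hours), p_not(is_admin))
assert pe({"role": "admin", "hour": 10}) is True
assert pe({"role": "admin", "hour": 20}) is False
assert pe({"role": "guest", "hour": 20}) is True
```

The automata-based design in the paper goes further: by compiling each sub-expression into an automaton, properties such as adequacy, implication, equivalence, and the presence of conflicting rules become checkable on the automaton itself.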

Short Paper: Stress-SGX: Load and Stress Your Enclaves for Fun and Profit

The latest generation of Intel processors supports Software Guard Extensions (SGX), a set of instructions that implements a Trusted Execution Environment (TEE) right inside the CPU, by means of so-called enclaves. This paper presents Stress-SGX, an easy-to-use stress-test tool to evaluate the performance of SGX-enabled nodes. We build on top of the popular Stress-ng tool, while only keeping the workload injectors (stressors) that are meaningful in the SGX context. We report on several insights and lessons learned about porting legacy code to run inside an SGX enclave, as well as the limitations introduced by this process. Finally, we use Stress-SGX to conduct a study comparing the performance of different SGX-enabled machines.
Sébastien Vaucher, Valerio Schiavoni, Pascal Felber

Short Paper: Application of Noisy Attacks on Image Steganography

Data hiding techniques have attracted a lot of attention in recent years, mainly with the intensive growth of multimedia and its possibilities for covert communication. Steganography is an information hiding method that uses a multimedia carrier to exchange secret information between two end-points imperceptibly, thus avoiding the detection of the hidden information. The secret information can be embedded in several multimedia carriers, such as image, audio, or video files. It works by embedding the message in a cover source so that an observer perceives only the cover itself. The carrier type considered here is an image. However, this technique suffers from the problem of carrier distortion. In this paper, we investigate the impact of some distortion types on carrier images and discuss the possibility of using distraction images in steganography to protect the stego-image. Furthermore, we highlight the current challenges of image steganography. The experiments show very interesting results.
Ayidh Alharbi, Tahar M. Kechadi



A Measure for Quantifying the Topological Structure of Some Networks

Determining and quantifying the topological structure of networks is an exciting research topic in theoretical network science. For this purpose, a large number of topological indices have been studied. They function as effective measures for improving the performance of existing networks and designing new robust networks. In this paper, we focus on a distance-based graph invariant named the Terminal Wiener index. We use this measure to analyze the structure of two well-known hierarchical networks: the Dendrimer tree \(\mathcal{T}_{d,h}\) and the Dendrimer graph \(\mathcal{D}_{d,h}\). We also investigate two methods of calculation in order to show that the proposed method reduces the computational complexity of the Terminal Wiener index.
Meryam Zeryouh, Mohamed El Marraki, Mohamed Essalih
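The Terminal Wiener index sums the shortest-path distances over all pairs of pendant (degree-1) vertices. The brute-force BFS computation below is a baseline sketch for illustration (the paper studies methods that reduce this computational cost for the dendrimer families):

```python
from collections import deque
from itertools import combinations

def terminal_wiener(adj):
    """Terminal Wiener index: sum of shortest-path distances over all pairs
    of pendant (degree-1) vertices. `adj` maps vertex -> list of neighbours."""
    def dists_from(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return dist

    pendants = [v for v in adj if len(adj[v]) == 1]
    return sum(dists_from(u)[v] for u, v in combinations(pendants, 2))

# The star K_{1,3}: three pendant vertices, each pair at distance 2, so 3 * 2 = 6.
star = {"c": ["a", "b", "d"], "a": ["c"], "b": ["c"], "d": ["c"]}
assert terminal_wiener(star) == 6
```

For a tree with p pendant vertices this baseline costs one BFS per pendant vertex; exploiting the recursive structure of dendrimers is what makes the index cheap to evaluate in closed form.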

Short Paper: Maintenance of Strongly Connected Component in Shared-Memory Graph

In this paper, we present an on-line fully dynamic algorithm for maintaining the strongly connected components of a directed graph in a shared-memory architecture. Edges and vertices are added or deleted concurrently by a fixed number of threads. To the best of our knowledge, this is the first work to propose using a linearizable concurrent directed graph, built from both ordered and unordered list-based sets. We provide an empirical comparison against sequential and coarse-grained implementations. The results show that our algorithm's throughput improves by 3 to 6x depending on the workload distribution and application. We believe there are many applications for on-line graphs. Finally, we show how the algorithm can be extended to community detection in on-line graphs.
Muktikanta Sa



Comparative Analysis of Our Association Rules Based Approach and a Genetic Approach for OLAP Partitioning

OLAP databases remain the first choice of enterprises to store and analyze huge amounts of data. To further enhance query performance and minimize maintenance cost, many techniques exist, among which data partitioning is considered an efficient technique for this purpose. Although most business intelligence tools support this feature, defining an appropriate partitioning strategy remains a big challenge; hence, many approaches have been proposed in the literature. Nevertheless, most of them have been evaluated only on the relational model. We therefore propose in this paper a comparative study between our partitioning approach, based on an association rules algorithm, and a genetic one. The study aims to compare the results of the aforementioned approaches in the case of OLAP partitioning.
Khadija Letrache, Omar El Beggar, Mohammed Ramdani

Short Paper: IoT Context-Driven Architecture: Characterization of the Behavioral Aspect

The contribution of this paper deals with IoT coordination. After a thorough review of the literature, we conclude that the challenge of coordination has not yet been met despite all the contributions already suggested. To address this challenge, we proposed an IoT contextual architecture in a previous work; this article focuses on the behavior of the proposed multilayer IoT architecture by describing a flood case study. Three aspects are considered: the functional, informational, and behavioral aspects. When modeling the behavioral aspect, we focus mainly on the control flow throughout the workflow and on the relative ordering of sub-workflows.
Radia Belkeziz, Zahi Jarir

