
About this Book

This book constitutes the revised selected papers of the 7th International Conference on Networked Systems, NETYS 2019, held in Marrakech, Morocco, in June 2019. The 23 revised full papers and 3 short papers presented were carefully reviewed and selected from 60 submissions. The papers are organized in the following topical sections: formal verification, distributed systems, security, concurrency, and networks.

Table of Contents

Frontmatter

Invited Papers

Frontmatter

Dynamic Partial Order Reduction Under the Release-Acquire Semantics (Tutorial)

Abstract
We describe at a high-level the main concepts in the Release-Acquire (RA) semantics that is part of the C11 language. Furthermore, we describe the ideas behind an optimal dynamic partial order reduction technique that can be used for systematic analysis of concurrent programs running under RA.
This tutorial is based on the material presented in [5], which also contains the formal definitions of all the models, concepts, and algorithms.
Parosh Aziz Abdulla, Mohamed Faouzi Atig, Bengt Jonsson, Tuan Phong Ngo

On the Complexity of Fault-Tolerant Consensus

Abstract
We consider the problem of reaching agreement in a distributed message-passing system prone to crash failures. Crashes are generated by Constrained adversaries: a Weakly-Adaptive adversary, who has to fix in advance the set of f crash-prone processes, and a k-Chain-Ordered adversary, who orders all the processes into k disjoint chains and has to follow this order when crashing them. Apart from these constraints, both may crash processes adaptively at any time. While the commonly used Strongly-Adaptive adversaries model attacks and Non-Adaptive ones model pre-defined faults, Constrained adversaries model more realistic scenarios in which there are fault-prone dependent processes, e.g., in hierarchical or dependable software/hardware systems. In this view, our approach helps to better understand crash-tolerant consensus in more realistic executions. We propose time-efficient consensus algorithms against such adversaries. We complement our algorithmic results with (almost) tight lower bounds, and extend the one for Weakly-Adaptive adversaries to hold also for (syntactically) weaker Non-Adaptive adversaries. Together with the consensus algorithm against Weakly-Adaptive adversaries (which automatically translates to Non-Adaptive adversaries), these results extend the state of the art for the popular class of Non-Adaptive adversaries, in particular the result of Chor, Merritt and Shmoys [7], and prove a separation gap between Constrained adversaries (including Non-Adaptive ones) and Strongly-Adaptive adversaries, analyzed by Bar-Joseph and Ben-Or [3] and others.
Dariusz R. Kowalski, Jarosław Mirek

Formal Verification

Frontmatter

Checking Causal Consistency of Distributed Databases

Abstract
Causal consistency is one of the strongest models that can be implemented to ensure availability and partition tolerance in distributed systems. In this paper, we propose a tool to automatically check the conformance of distributed/concurrent system executions to causal consistency models. Our approach consists in reducing the problem of checking whether an execution is causally consistent to solving Datalog queries. The reduction is based on complete characterizations of the executions violating causal consistency in terms of the existence of cycles in suitably defined relations between the operations occurring in these executions. We have implemented the reduction in a testing tool for distributed databases and carried out several experiments on real case studies, showing the efficiency of the suggested approach.
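The reduction above ultimately rests on detecting cycles in relations between operations. As a rough illustration of that final step (a generic sketch of ours, not the paper's tool, which goes through Datalog queries), a directed relation over operations can be checked for cycles as follows:

```python
def has_cycle(edges):
    """Detect a cycle in a directed relation given as {node: set(successors)}.

    In the paper, a causal-consistency violation is characterized by such a
    cycle in suitably defined relations between operations; this generic
    three-color DFS is only an illustration of that check.
    """
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in edges}

    def dfs(v):
        color[v] = GRAY  # v is on the current DFS path
        for w in edges.get(v, ()):
            if color.get(w, WHITE) == GRAY:
                return True  # back edge: cycle found
            if color.get(w, WHITE) == WHITE and dfs(w):
                return True
        color[v] = BLACK  # fully explored
        return False

    return any(color[v] == WHITE and dfs(v) for v in list(edges))
```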
Rachid Zennou, Ranadeep Biswas, Ahmed Bouajjani, Constantin Enea, Mohammed Erradi

Liveness in Broadcast Networks

Abstract
We study two liveness verification problems for broadcast networks, a system model of identical clients communicating via message passing. The first problem is liveness verification. It asks whether there is a computation such that one of the clients visits a final state infinitely often. The complexity of the problem has been open since 2010, when it was shown to be \(\mathsf{P}\)-hard and solvable in \(\mathsf{EXPSPACE}\). We close the gap with a polynomial-time algorithm. The algorithm relies on a characterization of live computations in terms of paths in a suitable graph, combined with a fixed-point iteration to efficiently check the existence of such paths. The second problem is fair liveness verification. It asks for a computation where all participating clients visit a final state infinitely often. We adjust the algorithm to also solve fair liveness in polynomial time.
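The flavor of such a fixed-point iteration can be sketched generically: iterate until stable to compute the set of states from which a final state is reachable. This is a simplification of our own; the paper's algorithm operates on a more refined graph of client configurations.

```python
def can_reach_final(edges, finals):
    """Fixed-point computation of the states that can reach a final state.

    edges: {node: set(successors)}; finals: set of final nodes.
    Starting from the finals, repeatedly add any node with a successor
    already in the set, until no more nodes can be added.
    """
    reach = set(finals)
    changed = True
    while changed:
        changed = False
        for v, succs in edges.items():
            if v not in reach and succs & reach:
                reach.add(v)
                changed = True
    return reach
```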
Peter Chini, Roland Meyer, Prakash Saivasan

Formal Verification of UML State Machine Diagrams Using Petri Nets

Abstract
UML State Machine diagrams are widely used for behavioral modeling. They describe all the possible states of a system and show its lifetime behavior. Nevertheless, they lack formal semantics: a State Machine diagram may be interpreted in different ways, which can lead to unwanted situations. In this paper, we propose a formal verification phase for UML State Machine diagrams using a formal language. The aim is to guarantee properties of UML State Machine diagrams to designers and to highlight errors. Petri nets, a formal notation for concurrent systems, are suitable for modeling system behavior and are well supported by analysis tools. Based on Model-Driven Engineering, we define a transformation from UML State Machine diagrams to Petri nets. The resulting Petri net models are formally verified against the desired properties. We also define a post-interpretation of the verification results in terms of the original UML State Machine diagrams.
Achraf Lyazidi, Salma Mouline

Synthesize Models for Quantitative Analysis Using Automata Learning

Abstract
We apply a probably approximately correct learning algorithm for multiplicity automata to generate quantitative models of system behaviors with a statistical guarantee. Using the generated model, we give two analysis algorithms to estimate the minimum and average values of system behaviors. We show how to apply the learning algorithm even when the alphabet is not fixed. The experimental result is encouraging; the estimation made by our approach is almost as precise as the exact reference answer obtained by a brute-force enumeration.
Yu-Fang Chen, Hsiao-Chen Chung, Wen-Chi Hung, Ming-Hsien Tsai, Bow-Yaw Wang, Farn Wang

Continuous vs. Discrete Asynchronous Moves: A Certified Approach for Mobile Robots

Abstract
Oblivious Mobile Robots have been studied both in continuous Euclidean spaces and in discrete spaces (that is, graphs). However, the literature forms two distinct sets of results for the two settings. In our view, the continuous model reflects well the physicality of robots operating in a real environment, while the discrete model reflects well the digital nature of autonomous robots, whose sensors and computing capabilities are inherently finite.
We explore the possibility of bridging results between the two models. Our approach is certified using the Coq proof assistant and the Pactole framework, which we extend to the most general asynchronous model without compromising its genericity. Our extended framework is then used to formally prove the equivalence between atomic moves in a discrete space (the classical “robots on graphs” model) and non-atomic moves in a continuous unidimensional space when robot vision sensors are discrete (robots move in straight lines between positions, but their observations are at source and destination positions only), irrespective of the problem being solved. Our effort consolidates the integration between the model, the problem specification, and its proof that is advocated by the Pactole framework.
Thibaut Balabonski, Pierre Courtieu, Robin Pelle, Lionel Rieg, Sébastien Tixeuil, Xavier Urbain

Distributed Systems

Frontmatter

Self-stabilizing Snapshot Objects for Asynchronous Failure-Prone Networked Systems

Abstract
A snapshot object simulates the behavior of an array of single-writer/multi-reader shared registers that can be read atomically. Delporte-Gallet et al. proposed two fault-tolerant algorithms for snapshot objects in asynchronous crash-prone message-passing systems. Their first algorithm is non-blocking; it allows snapshot operations to terminate once all write operations have ceased. It uses \(\mathcal {O}(n)\) messages of \(\mathcal {O}(n \cdot \nu )\) bits, where n is the number of nodes and \(\nu \) is the number of bits it takes to represent the object. Their second algorithm allows snapshot operations to always terminate, independently of write operations. It incurs \(\mathcal {O}(n^2)\) messages. The fault model of Delporte-Gallet et al. considers node failures (crashes). We aim at the design of even more robust snapshot objects. We do so through the lenses of self-stabilization, a very strong notion of fault-tolerance. In addition to Delporte-Gallet et al.'s fault model, a self-stabilizing algorithm can recover after the occurrence of transient faults; these faults represent arbitrary violations of the assumptions according to which the system was designed to operate (as long as the code stays intact). In particular, in this work, we propose self-stabilizing variations of Delporte-Gallet et al.'s non-blocking algorithm and always-terminating algorithm. Our algorithms have communication costs similar to the ones by Delporte-Gallet et al. and \(\mathcal {O}(1)\) recovery time (in terms of asynchronous cycles) from transient faults. The main differences are that our proposal considers repeated gossiping of \(\mathcal {O}(\nu )\)-bit messages and deals with bounded space, which is a prerequisite for self-stabilization.
Chryssis Georgiou, Oskar Lundström, Elad Michael Schiller

Self-stabilization Overhead: A Case Study on Coded Atomic Storage

Abstract
Shared memory emulation on distributed message-passing systems can be used as a fault-tolerant and highly available distributed storage solution or as a low-level synchronization primitive. Cadambe et al. proposed the Coded Atomic Storage (CAS) algorithm, which uses erasure coding to achieve data redundancy with much lower communication cost than previous algorithmic solutions. Recently, Dolev et al. introduced a version of CAS where transient faults are included in the fault model, making it self-stabilizing. But self-stabilization comes at a cost, so in this work we examine the overhead of the algorithm by implementing a system we call CASSS (CAS Self-Stabilizing). Our system builds on the self-stabilizing version of CAS, along with several other self-stabilizing building blocks. This provides us with a powerful platform to evaluate the overhead and other aspects of the real-world applicability of the algorithm.
In our case study, we evaluated the system performance by running it on the world-wide distributed platform PlanetLab. Our study shows that CASSS scales very well in terms of the number of servers, the number of concurrent clients, and the size of the replicated object. More importantly, it shows that (a) CASSS has only a constant overhead compared to the traditional CAS algorithm and (b) its recovery period (after the last occurrence of a transient fault) is no longer than the time it takes to perform a few client (read/write) operations. Our results suggest that the self-stabilizing variation of CAS, namely CASSS, does not significantly impact efficiency while providing automatic recovery from transient faults.
Chryssis Georgiou, Robert Gustafsson, Andreas Lindhé, Elad Michael Schiller

StakeCube: Combining Sharding and Proof-of-Stake to Build Fork-Free Secure Permissionless Distributed Ledgers

Abstract
Our work focuses on the design of a scalable permissionless blockchain in the proof-of-stake setting. In particular, we use a distributed hash table as a building block to set up randomized shards, and then leverage the sharded architecture to validate blocks in an efficient manner. We combine verifiable Byzantine agreements run by shards of stakeholders and a block validation protocol to guarantee that forks occur with negligible probability. We impose induced churn to make shards robust to eclipse attacks, and we rely on the UTXO coin model to guarantee that any stakeholder action is securely verifiable by anyone. Our protocol works against an adaptive adversary and makes no synchrony assumption beyond what is required for the Byzantine agreement.
Antoine Durand, Emmanuelle Anceaume, Romaric Ludinard

Dissecting Tendermint

Abstract
In this paper we analyze Tendermint, proposed in [12], one of the most popular blockchains based on PBFT consensus. Our methodology consists in identifying the algorithmic principles of Tendermint that are necessary for a specific system model. The current paper dissects Tendermint under two communication models: the synchronous and the eventually synchronous one. This methodology allowed us to identify bugs in preliminary versions of the protocol and to prove its correctness under the most adversarial conditions: an eventually synchronous communication model with Byzantine faults. The message complexity of Tendermint is \(O(n^3)\).
Yackolley Amoussou-Guenou, Antonella Del Pozzo, Maria Potop-Butucaru, Sara Tucci-Piergiovanni

CUDA-DTM: Distributed Transactional Memory for GPU Clusters

Abstract
We present CUDA-DTM, the first ever Distributed Transactional Memory framework written in CUDA for large scale GPU clusters. Transactional Memory has become an attractive auto-coherence scheme for GPU applications with irregular memory access patterns due to its ability to avoid serializing threads while still maintaining programmability. We extend GPU Software Transactional Memory to allow threads across many GPUs to access a coherent distributed shared memory space and propose a scheme for GPU-to-GPU communication using CUDA-Aware MPI. The performance of CUDA-DTM is evaluated using a suite of seven irregular memory access benchmarks with varying degrees of compute intensity, contention, and node-to-node communication frequency. Using a cluster of 256 devices, our experiments show that GPU clusters using CUDA-DTM can be up to 115x faster than CPU clusters.
Samuel Irving, Sui Chen, Lu Peng, Costas Busch, Maurice Herlihy, Christopher J. Michael

Towards Synthesis of Distributed Algorithms with SMT Solvers

Abstract
We consider the problem of synthesizing distributed algorithms that work in a specific execution context. We show that linear-time temporal logic can be used to specify both the correctness of algorithms and their execution contexts. We then provide a method that reduces the synthesis problem for finite-state algorithms to a set of model-checking problems. Finally, we apply our technique to automatically generate algorithms for consensus and epsilon-agreement in the case of two processes, using the SMT solver Z3.
Carole Delporte-Gallet, Hugues Fauconnier, Yan Jurski, François Laroussinie, Arnaud Sangnier

Recoverable Mutual Exclusion with Abortability

Abstract
In light of recent advances in non-volatile main memory technology, there has been a flurry of research in designing algorithms that are resilient to process crashes. As a result of main memory non-volatility, a process is allowed to crash any time during the execution, without affecting the state of the data stored in the main memory. With the assumption that a process eventually restarts after a crash, prior works have focused on designing mutual exclusion algorithms that use the non-volatile main memory to recover from such crashes. Such mutual exclusion algorithms that provide multiple processes with a mutually exclusive access to a shared resource in the presence of process crashes are called Recoverable Mutual Exclusion (RME) algorithms. We present the first RME algorithm where a process has the ability to abort executing the algorithm, if it decides to give up its request for a shared resource before being granted access to that resource. With n being the maximum number of processes for which the algorithm is designed, in the absence of a crash our algorithm guarantees a worst-case remote memory references (RMR) complexity of \(O(\log n)\) per passage on the Distributed Shared Memory (DSM) machines, and a complexity of \(O(\log n)\) or O(n) on Cache Coherent (CC) machines, depending on how caches are managed.
Prasad Jayanti, Anup Joshi

Security

Frontmatter

An Efficient Network IDS for Cloud Environments Based on a Combination of Deep Learning and an Optimized Self-adaptive Heuristic Search Algorithm

Abstract
Nowadays, Cloud Computing (CC) is one of the fastest-emerging core technologies of the current information era, leading a revolution in the way data is stored and processed. CC continues to gain traction among organizations thanks to appealing features such as its pay-per-use billing model, elasticity, ubiquity, scalability, and availability of resources. Hence, many organizations are moving their workloads to the cloud because of its inherent advantages. Nevertheless, the transition to this computing paradigm raises several security issues, including intrusion detection: attackers have developed sophisticated tools that defeat traditional Intrusion Detection Systems (IDS) through huge volumes of network traffic and dynamic behaviors, and existing Cloud IDSs suffer from low detection accuracy and high false-positive rates. To overcome this issue, we propose a smart approach using a self-adaptive heuristic search algorithm called “Improved Self-Adaptive Genetic Algorithm” (ISAGA) to automatically build a Deep Neural Network (DNN) based Anomaly Network Intrusion Detection System (ANIDS). ISAGA is a variant of the standard Genetic Algorithm (GA), improved through an Adaptive Mutation Algorithm (AMA) and two optimization strategies, Parallel Processing and Fitness Value Hashing, which reduce execution time and convergence time and save processing power. Our approach uses ISAGA to search for the optimal or near-optimal combination of the parameters that shape a DNN-based IDS or affect its performance, such as feature selection, data normalization, DNN architecture, activation function, learning rate, and momentum term, so as to ensure a high detection rate, high accuracy, and a low false alarm rate.
The CloudSim 4.0 simulator platform and the CICIDS2017 dataset were used for simulation and validation of the proposed system. The implementation results demonstrate the ability of our ANIDS to detect intrusions with high accuracy and a low false alarm rate, and indicate its superiority over state-of-the-art methods.
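Among the optimization strategies mentioned, Fitness Value Hashing is simple to picture: cache fitness evaluations keyed by the genome, so that individuals recurring across generations are never re-evaluated. A minimal sketch of the idea (the interface and names are ours, not the paper's):

```python
def hashed_fitness(fitness):
    """Wrap a fitness function with a cache keyed by the (hashable) genome.

    Repeated individuals across GA generations are evaluated only once,
    trading a little memory for a large reduction in fitness computations.
    """
    cache = {}
    stats = {'evaluations': 0}

    def wrapped(genome):
        if genome not in cache:
            stats['evaluations'] += 1
            cache[genome] = fitness(genome)
        return cache[genome]

    wrapped.stats = stats  # expose evaluation count for inspection
    return wrapped
```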
Zouhair Chiba, Noreddine Abghour, Khalid Moussaid, Amina El Omri, Mohamed Rida

Efficient Security Policy Management Using Suspicious Rules Through Access Log Analysis

Abstract
Logs record the events and actions performed within an organization’s systems and networks. Usually, log data should conform with the security policy in use. However, access logs may show the occurrence of unauthorized accesses, which may be due to security breaches such as intrusions or to conflicting rules in security policies. Due to the huge amount of log data generated every day, and presumed to grow over time, analyzing access logs becomes a hard task that requires enormous computational resources. In this paper, we suggest a method that analyzes an access log and uses the obtained results to determine whether an Attribute-Based Access Control (ABAC) security policy contains conflicting rules. This access-log-based approach yields an efficient conflict detection method, since conflicts are searched for among suspicious rules, identified by analyzing the access log, instead of among all the rules of the policy. To improve efficiency further, the access log is decomposed into clusters that are analyzed separately. Furthermore, cluster representatives make the proposed approach scalable to the continuous access log case. The scalability is confirmed by experimental results, and our approach identifies conflicts effectively, with an average recall of 95.65%.
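To make the notion of conflicting rules concrete, consider a toy ABAC model (a simplification of ours, not the paper's rule language) in which a rule maps attribute names to sets of allowed values and carries a decision. Two rules conflict when they can match the same request yet decide differently:

```python
def conflicting(rule_a, rule_b):
    """Check whether two toy ABAC rules conflict.

    A rule is (conditions, decision), where conditions maps an attribute
    name to the set of values it accepts. Rules conflict iff their
    decisions differ and no shared attribute makes them disjoint (an
    attribute constrained by only one rule leaves the other unconstrained,
    so it cannot rule out an overlap).
    """
    conds_a, dec_a = rule_a
    conds_b, dec_b = rule_b
    if dec_a == dec_b:
        return False
    for attr in set(conds_a) & set(conds_b):
        if not (conds_a[attr] & conds_b[attr]):
            return False  # disjoint on this attribute: no common request
    return True
```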
Maryem Ait El Hadj, Ahmed Khoumsi, Yahya Benkaouz, Mohammed Erradi

A Vaccination Game for Mitigation Active Worms Propagation in P2P Networks

Abstract
The spread of active computer worms is usually modeled by epidemic diffusion processes and widely studied in peer-to-peer computing and social networks. Many protective interventions are recommended to restrain such electronic epidemics, for example immunization strategies or the installation of anti-virus software. In real-world networks, a natural setting for game theory arises, where each player (Internet user) decides on his own strategy: secure his host by paying the cost of antivirus software, or remain unsecured and risk being infected later. We address this issue by presenting an agent-based model for simulating a vaccination game. In this work, we study the neighbors’ impact, including the effects of imitation behavior on vaccination decisions, which may help to relieve the severity of active worms in peer-to-peer networks. The simulation results show that imitation behavior works well only when the network initially has more than 20% vaccinated peers. Moreover, the higher the cost of vaccination, the more players tend to imitate the strategy of their neighbors.
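Imitation dynamics of this kind are commonly modeled with a Fermi-style update rule, in which a peer copies a neighbor's strategy with a probability that grows with the neighbor's payoff advantage. A generic sketch of that rule (our illustration; the paper's exact update dynamics may differ):

```python
import math

def imitation_probability(payoff_self, payoff_neighbor, beta=1.0):
    """Fermi imitation rule, common in vaccination-game models.

    Returns the probability that a player adopts a neighbor's strategy:
    1 / (1 + exp(-beta * (payoff_neighbor - payoff_self))), where beta
    controls how sensitive imitation is to the payoff difference.
    """
    return 1.0 / (1.0 + math.exp(-beta * (payoff_neighbor - payoff_self)))
```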
Mohamed Amine Rguibi, Najem Moussa

How to Choose Its Parents in the Tangle

Abstract
The Tangle is a data structure mainly used to store transactions in the IOTA cryptocurrency. It has similarities with the blockchain structure of Bitcoin, but in the Tangle a block contains only one transaction and has not one but two parents. The security and stability of this distributed data structure depend heavily on the algorithm used to select the parents of a new block.
Previous work showed that the existing parent selection algorithms are insecure, unstable, or perform poorly. We propose a new algorithm that combines security, stability, and good performance.
Vidal Attias, Quentin Bramas

Bitcoin Security with Post Quantum Cryptography

Abstract
In a future quantum world with a large quantum computer, the security of the digital signatures used for Bitcoin transactions will be broken by Shor’s algorithm, so Bitcoin has to switch to post-quantum cryptography. In this paper, we show that post-quantum signatures based on LWE and ring-LWE are the most promising to use in the presence of large quantum computers running Shor’s algorithm.
Meryem Cherkaoui Semmouni, Abderrahmane Nitaj, Mostafa Belkasmi

Concurrency

Frontmatter

Achieving Starvation-Freedom in Multi-version Transactional Memory Systems

Abstract
Software Transactional Memory systems (STMs) have garnered significant interest as an elegant alternative for addressing synchronization and concurrency issues in multi-threaded programming on multi-core systems. Client programs use STMs by issuing transactions, and the STM ensures that each transaction either commits or aborts. A transaction aborted due to conflicts is typically re-issued with the expectation that it will complete successfully in a subsequent incarnation. However, many existing STMs fail to provide starvation freedom, i.e., in these systems concurrency conflicts may forever prevent a re-issued transaction from committing. To overcome this limitation, we systematically derive a novel starvation-free algorithm for multi-version STM, KSFTM. Our algorithm can be used either when the number of versions is unbounded and garbage collection is employed, or when only the latest K versions are maintained. We demonstrate that the proposed algorithm performs better than existing state-of-the-art STMs.
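The bounded-version setting can be pictured with a toy register that retains only the latest K versions. This shows retention only; all of KSFTM's concurrency control and timestamp ordering logic is omitted, and the interface is our own, not the paper's:

```python
from collections import deque

class KVersionRegister:
    """Toy multi-version register keeping only the latest K versions.

    Writes append (timestamp, value) pairs; a bounded deque silently
    evicts the oldest version once K versions exist. Reads return the
    newest retained value with timestamp <= ts, or None if every such
    version has been evicted.
    """

    def __init__(self, k):
        self.versions = deque(maxlen=k)  # (timestamp, value), oldest first

    def write(self, ts, value):
        self.versions.append((ts, value))

    def read_latest_before(self, ts):
        for t, v in reversed(self.versions):
            if t <= ts:
                return v
        return None
```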
Ved P. Chaudhary, Chirag Juyal, Sandeep Kulkarni, Sweta Kumari, Sathya Peri

Mutex-Based De-anonymization of an Anonymous Read/Write Memory

Abstract
Anonymous shared memory is a memory in which processes use different names for the same shared read/write register. As an example, a shared register named A by a process p and a shared register named B by another process q can correspond to the very same register X, and similarly for the names B at p and A at q which can correspond to the same register \(Y\ne X\). Hence, there is a permanent disagreement on the register names among the processes. This new notion of anonymity was recently introduced by G. Taubenfeld (PODC 2017), who presented several memory-anonymous algorithms and impossibility results.
This paper introduces a new problem that consists in “de-anonymizing” an anonymous shared memory. To this end, it presents an algorithm that, starting with a shared memory made up of m anonymous read/write atomic registers (i.e., with no a priori agreement on their names), allows each process to compute a local addressing mapping such that all processes agree on the name of each register. The proposed construction is based on an underlying deadlock-free mutex algorithm for \(n\ge 2\) processes (recently proposed in a paper co-authored by some of the authors of this paper), and consequently inherits its necessary and sufficient condition on the size m of the anonymous memory, namely that m must belong to the set \(M(n)=\{m : \forall \ell,\ 1<\ell \le n : \mathsf{gcd}(\ell ,m)=1\}\setminus \{1\}\). The algorithm, which is also symmetric in the sense that process identities can only be compared for equality, requires the participation of all the processes; hence it can be part of the system initialization. Last but not least, the proposed algorithm has a noteworthy first-class property, namely its simplicity.
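The condition on the memory size m is easy to evaluate concretely. The helper below (our own, not from the paper) enumerates the admissible sizes, i.e., the m > 1 that are coprime with every ℓ in 2..n:

```python
from math import gcd

def valid_memory_sizes(n, upper):
    """Enumerate memory sizes m with 1 < m <= upper in the set M(n).

    M(n) from the abstract: m such that gcd(l, m) = 1 for every l with
    1 < l <= n, excluding m = 1 (the range already starts at 2).
    """
    return [m for m in range(2, upper + 1)
            if all(gcd(l, m) == 1 for l in range(2, n + 1))]
```

For n = 3, the admissible sizes are exactly the integers coprime with both 2 and 3, i.e., 5, 7, 11, 13, ...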
Emmanuel Godard, Damien Imbs, Michel Raynal, Gadi Taubenfeld

A Pragmatic Non-blocking Concurrent Directed Acyclic Graph

Abstract
In this paper, we develop two non-blocking algorithms for maintaining acyclicity in a concurrent directed graph. The first algorithm is based on a wait-free reachability query and the second on a partial-snapshot-based obstruction-free reachability query. Interestingly, we achieve the acyclicity property in a dynamic setting without (1) making use of helping descriptors by other threads, or (2) a clean double-collect mechanism. We present a proof that the graph remains acyclic at all times in the concurrent setting, and we prove that the acyclic-graph data structure operations are linearizable. We implement both algorithms in C++ and test them through several micro-benchmarks. Our experimental results show an average 7x improvement over sequential and global-lock implementations.
Sathya Peri, Muktikanta Sa, Nandini Singhal

Networks

Frontmatter

The Fake News Vaccine

A Content-Agnostic System for Preventing Fake News from Becoming Viral
Abstract
While spreading fake news is an old phenomenon, today social media enables misinformation to reach millions of people instantaneously. Content-based approaches to detecting fake news, typically based on automatic text checking, are limited: it is difficult to come up with general checking criteria, and once the criteria are known to an adversary, the checking can easily be bypassed. On the other hand, it is practically impossible for humans to check every news item, let alone prevent items from going viral.
We present Credulix, the first content-agnostic system to prevent fake news from going viral. Credulix is implemented as a plugin on top of a social media platform and acts as a vaccine: human fact-checkers review a small number of popular news items, which helps us estimate the inclination of each user to share fake news. Using the resulting information, we automatically estimate the probability that an unchecked news item is fake. We use a Bayesian approach that resembles Condorcet’s Theorem to compute this probability, and we show how the computation can be performed in an incremental, and hence fast, manner.
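The Bayesian estimate can be pictured with a naive sketch (a simplification of ours, not Credulix's actual computation): given each user's estimated probabilities of sharing fake and true items, combine the observed share/no-share decisions into a posterior probability that the item is fake:

```python
def posterior_fake(shares, p_share_if_fake, p_share_if_true, prior):
    """Naive-Bayes-style posterior probability that an item is fake.

    shares: {user: bool} observed share decisions for the item.
    p_share_if_fake / p_share_if_true: per-user estimated probabilities of
    sharing fake / true items (in Credulix, learned from the small
    human-checked sample). prior: prior probability that an item is fake.
    Assumes users decide independently given the item's truthfulness.
    """
    like_fake, like_true = prior, 1.0 - prior
    for user, shared in shares.items():
        pf, pt = p_share_if_fake[user], p_share_if_true[user]
        like_fake *= pf if shared else (1.0 - pf)
        like_true *= pt if shared else (1.0 - pt)
    return like_fake / (like_fake + like_true)
```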
Oana Balmau, Rachid Guerraoui, Anne-Marie Kermarrec, Alexandre Maurer, Matej Pavlovic, Willy Zwaenepoel

Distributed Online Data Aggregation in Dynamic Graphs

Abstract
We consider the problem of aggregating data in a dynamic graph, that is, aggregating the data that originates from all nodes in the graph to a specific node, the sink. We are interested in giving lower bounds for this problem, under different kinds of adversaries.
In our model, nodes are endowed with unlimited memory and unlimited computational power. Yet, we assume that communication between nodes is carried out through pairwise interactions, in which nodes can exchange control information before deciding whether or not to transmit their data, given that each node is allowed to transmit its data at most once. When a node receives data from a neighbor, it may aggregate it with its own data.
We consider three possible adversaries: the online adaptive adversary, the oblivious adversary, and the randomized adversary that chooses the pairwise interactions uniformly at random. For the online adaptive and the oblivious adversaries, we give impossibility results when nodes have no knowledge about the graph and are not aware of the future. We also give several tight bounds depending on the knowledge (be it topology-related or time-related) of the nodes. For the randomized adversary, we show that the Gathering algorithm, which always commands a node to transmit, is optimal if nodes have no knowledge at all. We also propose an algorithm called Waiting Greedy, in which a node either waits or transmits depending on some parameter; it is optimal when each node knows its future pairwise interactions with the sink.
Quentin Bramas, Toshimitsu Masuzawa, Sébastien Tixeuil

A Multi-criteria Group Decision Making Method for Big Data Storage Selection

Abstract
The terms Data Lake and Data Warehouse are commonly used when discussing Big Data storage. The two concepts provide opportunities for businesses to strengthen data management and achieve competitive advantages. Evaluating and selecting the most suitable approach is, however, challenging. These two types of data storage are often confused, whereas they have many more differences than similarities; in fact, the only real similarity between them is their ability to store data. To deal effectively with this issue, this paper analyzes these emerging Big Data technologies and presents a comparison of the selected data storage concepts. The main aim is then to propose and demonstrate the use of an AHP model for Big Data storage selection, which may be used by businesses, public-sector institutions, and citizens to solve multiple-criteria decision-making problems. This multi-criteria classification approach has been applied to determine which of the two models is better suited for data management.
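The core of an AHP model is turning a pairwise-comparison matrix of criteria judgments into priority weights. A common approximation of the principal eigenvector is the row geometric mean method, sketched below (illustrative only; the paper's concrete criteria and judgments are not reproduced here):

```python
def ahp_weights(matrix):
    """Approximate AHP priority weights from a pairwise-comparison matrix.

    matrix[i][j] states how strongly criterion i is preferred over j
    (with matrix[j][i] = 1 / matrix[i][j]). The row geometric mean,
    normalized to sum to 1, approximates the principal eigenvector that
    AHP uses as the priority vector.
    """
    n = len(matrix)
    gms = []
    for row in matrix:
        prod = 1.0
        for x in row:
            prod *= x
        gms.append(prod ** (1.0 / n))  # geometric mean of the row
    total = sum(gms)
    return [g / total for g in gms]
```

For a 2x2 matrix where the first criterion is judged 3 times as important, the weights come out as 3:1.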
Jabrane Kachaoui, Abdessamad Belangour

Backmatter
