
About this book

This book constitutes the refereed proceedings of the 8th International Conference on Model and Data Engineering, MEDI 2018, held in Marrakesh, Morocco, in October 2018.
The 23 full papers and 4 short papers presented together with 2 invited talks were carefully reviewed and selected from 86 submissions. The papers cover recent and relevant topics in the areas of databases; ontology and model-driven engineering; data fusion, classification and learning; communication and information technologies; safety and security; algorithms and text processing; and specification, verification and validation.



Invited Paper


Data Science with Vadalog: Bridging Machine Learning and Reasoning

Following the recent successful examples of large technology companies, many modern enterprises seek to build knowledge graphs to provide a unified view of corporate knowledge and to draw deep insights using machine learning and logical reasoning. There is currently a perceived disconnect between the traditional approaches for data science, typically based on machine learning and statistical modelling, and systems for reasoning with domain knowledge. In this paper we present a state-of-the-art Knowledge Graph Management System, Vadalog, which delivers highly expressive and efficient logical reasoning and provides seamless integration with modern data science toolkits, such as the Jupyter platform. We demonstrate how to use Vadalog to perform traditional data wrangling tasks, as well as complex logical and probabilistic reasoning. We argue that this is a significant step forward towards combining machine learning and reasoning in data science.
Luigi Bellomarini, Ruslan R. Fayzrakhmanov, Georg Gottlob, Andrey Kravchenko, Eleonora Laurenza, Yavor Nenov, Stéphane Reissfelder, Emanuel Sallinger, Evgeny Sherkhonov, Lianlong Wu



NoSQL Databases – Seek for a Design Methodology

NoSQL has emerged as a novel approach to bypass the rigidity and limits that traditional databases present when modeling real-world features. Its heterogeneity, the variety of models it introduced and its several technical advantages helped NoSQL conquer the industrial and business world. NoSQL databases are mostly conceived at the physical design level, following a set of storage and structural rules regulated by each specific database. As NoSQL thrived, so did NoSQL data modeling. Research into unified approaches for NoSQL databases at all design levels has been widely pursued over the last decade or so. This paper presents a survey of the various proposals aiming to unify all or most NoSQL databases under a uniform design methodology. We also present the different data models of each NoSQL database type and the numerous approaches in the literature to their design and modeling, in addition to an evaluation system for NoSQL design methodologies.
Chaimae Asaad, Karim Baïna

Mortadelo: A Model-Driven Framework for NoSQL Database Design

In big data contexts, the performance of relational databases can be overwhelmed, usually by numerous concurrent connections over large volumes of data. In these cases, the support of ACID transactions is dropped in favour of NoSQL data stores, which offer quick responses and high data availability. Although NoSQL systems solve this concrete performance problem, they also present some issues. For instance, the NoSQL spectrum covers a wide range of database paradigms, such as key-value, column-oriented or document stores. These paradigms differ significantly from the relational model, making it impossible to reuse existing, well-known practices from relational database design. Moreover, this paradigm heterogeneity makes it difficult to define general design practices for NoSQL data stores. We present Mortadelo, a framework devised for the automatic design of NoSQL databases. Mortadelo offers a model-driven transformation process, which starts from a technology-agnostic data model and provides an automatically generated design and implementation for the desired NoSQL data store. The main strength of our framework is its generality, i.e., Mortadelo can be extended to support any kind of NoSQL database. The validity of our approach has been checked through the implementation of a tool, which currently supports the generation of column family data stores and offers preliminary support for document-based ones.
Alfonso de la Vega, Diego García-Saiz, Carlos Blanco, Marta Zorrilla, Pablo Sánchez
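As a toy illustration of the model-driven idea the abstract describes, the sketch below mechanically transforms a technology-agnostic entity model into a column-family style schema. The input format and the generated DDL are assumptions for illustration only, not Mortadelo's actual modeling language or output.

```python
# Toy model-to-column-family transformation: each entity in a
# technology-agnostic model becomes one column-family table.
# The dict-based model format and text-typed columns are assumptions.
def to_column_family(model):
    """model: {entity: {"key": attr, "attrs": [attr, ...]}}"""
    ddl = []
    for entity, spec in model.items():
        cols = ", ".join(f"{a} text" for a in spec["attrs"])
        ddl.append(f"CREATE TABLE {entity.lower()} "
                   f"({cols}, PRIMARY KEY ({spec['key']}));")
    return ddl

model = {"User": {"key": "email", "attrs": ["email", "name"]}}
print(to_column_family(model)[0])
# CREATE TABLE user (email text, name text, PRIMARY KEY (email));
```

A real transformation would also derive denormalized tables from the queries the application needs, which is where column-family design departs most from relational practice.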

Towards OntoUML for Software Engineering: Experimental Evaluation of Exclusivity Constraints in Relational Databases

The model-driven development approach to software engineering requires precise models that define as much of the system as possible. OntoUML is a conceptual modelling language based on the Unified Foundational Ontology, which provides constructs to create ontologically well-founded and precise conceptual models. In the approach we utilize, OntoUML is used to make conceptual models of software application data, and this model is then transformed into its proper realization in a relational database. In these transformations, the implicit constraints defined by various OntoUML universal types and relations are realized by database views and triggers. In this paper, we specifically discuss the realization of phase partitions of Phase types from the OntoUML model by exclusive associations and provide an experimental evaluation of this approach.
Zdeněk Rybola, Michal Valenta

Ontology and Model Driven Engineering


Scrum and V Lifecycle Combined with Model-Based Testing and Model Driven Architecture to Deal with Evolutionary System Issues

Model Driven Engineering (MDE) and Agile Methods (AM) are two principal domains that are constantly improving and evolving in order to facilitate the realisation of IT projects. However, these areas have evolved separately, despite the great number of studies that focus on improving project realisation techniques. Our work therefore provides an approach that combines two variants of MDE, the Model Driven Architecture approach and Model-Based Testing, with the V development lifecycle used in every sprint of the Scrum agile methodology, in order to deal with system evolution. To illustrate this approach, we apply it to the realisation of a Rental Car Agency System using the Scrum methodology with some requirements evolution.
Imane Essebaa, Salima Chantit

Adaptive Algorithms for Computing Ontologies Metrics Through Processing of RDF Graphs

The advent of the Semantic Web has increased the number of ontologies representing domain knowledge on the internet. These ontologies are available in the form of thousands of Resource Description Framework (RDF) or Web Ontology Language (OWL) statements, which are difficult for a human being to read and understand. On the one hand, several tools have been developed to enable the semi-automatic or automatic access and exploitation of RDF and OWL ontologies. On the other hand, several metrics have been defined to theoretically measure the complexity of RDF and OWL ontology graphs. However, implementing computer programs for the empirical analysis of ontology complexity metrics through the automatic exploration of RDF or OWL graphs remains challenging. This study proposes a set of generic algorithms for computing ontology complexity metrics through the processing of RDF graphs. The algorithms are applied to a set of 25 biomedical ontologies and provide promising results.
Jean Vincent Fonou-Dombeu, Yannick Kazela Kazadi
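To make the kind of metric computation the abstract describes concrete, here is a minimal sketch that derives two common ontology complexity metrics (class count and maximum inheritance depth) from RDF-style triples. The plain-tuple triple encoding and the choice of metrics are illustrative assumptions, not the paper's exact algorithms.

```python
# Computing two simple ontology complexity metrics from a set of
# (subject, predicate, object) triples. Only rdfs:subClassOf axioms
# are considered in this sketch.
SUBCLASS = "rdfs:subClassOf"

def classes(triples):
    """All classes mentioned in subclass axioms."""
    return ({s for s, p, o in triples if p == SUBCLASS}
            | {o for s, p, o in triples if p == SUBCLASS})

def depth(cls, triples, seen=frozenset()):
    """Longest rdfs:subClassOf path from cls up to a root class."""
    if cls in seen:                      # guard against cycles
        return 0
    parents = [o for s, p, o in triples if p == SUBCLASS and s == cls]
    if not parents:
        return 0
    return 1 + max(depth(q, triples, seen | {cls}) for q in parents)

def max_depth(triples):
    return max((depth(c, triples) for c in classes(triples)), default=0)

triples = [
    ("Dog", SUBCLASS, "Mammal"),
    ("Mammal", SUBCLASS, "Animal"),
    ("Cat", SUBCLASS, "Mammal"),
]
print(len(classes(triples)))  # 4 classes
print(max_depth(triples))     # 2
```

Real biomedical ontologies would be parsed from RDF/XML or Turtle first; the metric logic over the resulting triples stays the same.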

CRank: A Novel Framework for Ranking Semantic Web Ontologies

To support the reuse of ontologies on the Semantic Web (SW), various approaches have been proposed to rank these ontologies to help users or ontology engineers choose the appropriate ones that suit their needs. However, although some of the existing approaches for ranking ontologies are very effective, they have all been designed with a complete disregard of the degree or level of complexity of the ontologies on the SW. In fact, it is argued that the study of the complexity of ontologies in a domain provides useful information for the selection of the appropriate ones for reuse. This study proposes the CRank framework for ranking ontologies on the SW based on their degree or level of complexity. The CRank framework consists of two phases, namely pre-processing and ranking. In the pre-processing phase, the graph of each ontology in the dataset is processed to compute seven complexity metrics that measure the design complexity of ontologies. Thereafter, a decision-making algorithm is applied in the ranking phase to rank the ontologies in the dataset by aggregation of their complexity metrics. The CRank framework was applied to a set of 100 ontologies from the biomedical domain and displayed promising results.
Jean Vincent Fonou-Dombeu, Serestina Viriri
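The ranking phase described above can be illustrated with a small sketch: normalize each complexity metric across the ontology set, aggregate per ontology, and sort by the aggregate score. The equal-weight sum below is an assumption for illustration; the paper applies a dedicated decision-making algorithm for the aggregation step.

```python
# Rank ontologies by an aggregate of min-max normalized complexity
# metrics. Metric values and ontology names are made up for the demo.
def normalize(column):
    lo, hi = min(column), max(column)
    return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in column]

def rank_ontologies(metrics):
    """metrics: {name: [m1, m2, ...]}, all lists the same length."""
    names = list(metrics)
    cols = list(zip(*(metrics[n] for n in names)))       # per-metric columns
    rows = list(zip(*(normalize(list(c)) for c in cols)))  # back to per-ontology rows
    scores = {n: sum(row) for n, row in zip(names, rows)}
    return sorted(names, key=scores.get, reverse=True)

metrics = {"GO": [10, 200], "DOID": [4, 50], "CHEBI": [8, 120]}
print(rank_ontologies(metrics))  # ['GO', 'CHEBI', 'DOID']
```

Normalizing first matters: without it, a metric with a large numeric range (e.g. axiom count) would dominate the aggregate.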

Data Fusion, Classification and Learning


A New Way of Handling Missing Data in Multi-source Classification Based on Adaptive Imputation

Data fusion is an interesting methodology for improving classification performance. It consists in combining data acquired from multiple sources to reach more informative decisions and better decision making. This is a challenging task for many reasons, the main one arising from the data to be fused. Missing data is one such issue: its presence affects the performance of the algorithms and results in misleading predictions. Appropriately handling missing data is crucial for accurate inference. Several approaches have been proposed in the literature to deal with multi-source classification problems; however, they neglect the presence of missingness in the data and assume that the data are complete, which is not the case in real life. Other approaches directly apply simple data imputation before the learning process, which is not always enough to obtain a reliable learning and prediction model. In this paper, we propose a new approach to deal with missing data in multi-source classification problems. In our approach, we avoid direct imputation when the concerned feature is not important, and we also adjust the prediction fusion process based on the missing data rate in each data source and in the new instance to classify. This approach is used with Random Forests as an ensemble classifier, and it has shown improved classification performance compared to existing approaches.
Ikram Abdelkhalek, Afef Ben Brahim, Nadia Essousi
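The fusion-adjustment idea in the abstract can be sketched as follows: each source's vote is down-weighted by its missing-data rate, so less complete sources contribute less to the final class decision. The specific weighting scheme (1 − missing rate) is an assumption for illustration, not the paper's exact rule.

```python
# Weighted prediction fusion driven by per-source missing-data rates.
# Each source votes for a label; votes from sources with more missing
# data count for less.
from collections import defaultdict

def missing_rate(rows):
    """Fraction of None entries across all features of a source."""
    cells = [v for row in rows for v in row]
    return cells.count(None) / len(cells)

def fuse_predictions(preds, rates):
    """preds[i]: label predicted from source i; rates[i]: its missing rate."""
    scores = defaultdict(float)
    for label, rate in zip(preds, rates):
        scores[label] += 1.0 - rate
    return max(scores, key=scores.get)

source_a = [[1.0, None], [2.0, 3.0]]    # 25% of cells missing
source_b = [[None, None], [None, 4.0]]  # 75% of cells missing
rates = [missing_rate(source_a), missing_rate(source_b)]
print(rates)                                      # [0.25, 0.75]
print(fuse_predictions(["spam", "ham"], rates))   # spam (0.75 vs 0.25)
```

In the paper each source's prediction would come from a Random Forest trained on that source; the fusion step above is where the missing-data rates enter.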

Feedback-Oriented Assessor Model

Application: Allocation of Submissions in Online Peer Assessment
Ensuring effective feedback for learners is an important factor in the success of the learning experience. In the context of MOOCs, instructors are unable to provide feedback to a large, heterogeneous community of participants. Different platforms and tools have adopted peer assessment to solve this problem. However, they have been faced with a large number of learners who lack the capacity to generate accurate assessments and meaningful feedback. This finding leads to relying on the intelligence of the crowd in order to generate more valid and effective feedback. At this level, one limitation of most tools and platforms is that they create random groups of assessors without considering the individual characteristics of their members. For this reason, this article proposes an updated assessor model that focuses on the characteristics of learners related to assessment capacity and their ability to provide correct, objective and useful feedback for their peers. Based on this feedback-oriented assessor model, we consider the aforementioned characteristics in the context of an algorithm that creates groups of assessors and allocates submissions in order to optimize peer feedback.
Mohamed-Amine Abrache, Khalid Megder, Chihab Cherkaoui

Communication and Information Technologies


A Gamification and Objectivity Based Approach to Improve Users Motivation in Mobile Crowd Sensing

The advent of new communication and information technologies offers great potential for capturing and transmitting information related to mobility. These technologies make it possible to collect information and transmit it, in a participative-production (crowdsourcing) perspective, to government services for tasks such as suspect investigation. The objective of this work is to improve the process of identifying suspects by combining collective intelligence with mobile devices. To this end, this article proposes an approach for developing a framework based on gathering information from the crowd (crowd sensing), filtering it and analysing it. This framework increases user participation by integrating gamification as a motivation technique. The reliability of the crowd-sensed information, in turn, is ensured by an objectivity analysis algorithm. The experimental results of the case study, carried out through AnyLogic simulations, show that the methods and technologies incorporated in the suspect identification procedures accelerated the search and location process by ensuring high system performance as well as by improving the quality of the sensed data.
Hasna El Alaoui El Abdallaoui, Abdelaziz El Fazziki, Fatima Zohra Ennaji, Mohamed Sadgal

Modeling and Evaluating Cross-layer Elasticity Strategies in Cloud Systems

Clouds are complex systems that provide computing resources in an elastic way. The elasticity property allows them to adapt to the input workload by (de)provisioning resources as demand rises and drops. However, due to the numerous overlapping factors that impact their elasticity and the unpredictable nature of the workload, providing accurate action plans to manage cloud systems’ elastic adaptations is a particularly challenging task. In this paper, we propose an approach based on Bigraphical Reactive Systems (BRS) to model cloud structures and their elastic behavior. We design elasticity strategies that operate at the service and infrastructure cloud levels to manage the elastic adaptations. Besides, we provide a Maude encoding to permit generic executability and formal verification of the elastic behaviors. Going one step further, we show how the strategies can be combined at both levels to provide different high-level elastic behaviors. Finally, we evaluate the different cross-layer combinations using queuing theory.
Khaled Khebbeb, Nabil Hameurlain, Faiza Belala

Thing Federation as a Service: Foundations and Demonstration

This paper presents the design and implementation guidelines of thing federation-as-a-service. The large and growing number of things compliant with the Internet-of-Things (IoT) principles need to be “harnessed” so that things’ collective behaviors prevail over individual ones. A federation gathers the necessary things together according to the needs and requirements of the situation that this federation is tasked to handle. Two types of federations exist: planned, whose things are all known at design-time, and ad-hoc, whose things are known after a competitive selection at run-time. In this paper, federations handle situations about emergency services that involve different stakeholders with different backgrounds, raising the complexity of ensuring a successful delivery of these services. A system for patient emergency transfer following a tunnel closure is implemented, demonstrating the technical feasibility of thing federation-as-a-service.
Zakaria Maamar, Khouloud Boukadi, Emir Ugljanin, Thar Baker, Muhammad Asim, Mohammed Al-Khafajiy, Djamal Benslimane, Hasna El Alaoui El Abdallaoui

Formalizing Reusable Communication Models for Distributed Systems Architecture

Building distributed computing systems involves complex concerns integrating a multitude of communication styles, technologies (IoT, cloud, big data, etc.), stakeholders (architects, developers, integrators, etc.) and application domains (smart cities, health, mobility, etc.). Existing architectural description languages fail to rigorously bridge the gap between the abstract representation of communication styles and those supported by existing execution infrastructures. In this paper, we aim at specifying the software architecture of distributed systems using an approach combining semi-formal and formal languages to build reusable model libraries representing communication solutions. Our contribution is twofold. First, we propose a metamodel to describe high-level architectural concepts in a component-port-connector fashion, focusing on communication styles. Second, we formalize those concepts and their semantics following a set of properties (specifications) to check architectural conformance. To validate our work, we provide a set of reusable connector libraries, together with a set of properties, to define architectures for systems with explicit communication models, such as message passing and remote procedure calls, that are common to most distributed systems.
Quentin Rouland, Brahim Hamid, Jason Jaskolka

Safety and Security


A Valid BPMN Extension for Supporting Security Requirements Based on Cyber Security Ontology

Business Process Model and Notation (BPMN) is the de facto standard for business process modeling. One of the most important aspects of business process models is security. Since most business processes revolve around the exchange of information, the security of such information assets becomes a critical factor for the success of the overall business process. Therefore, it is very important to capture security requirements at the conceptual level in order to identify the security needs in the first place. There is a need for integrated tools and methodologies that allow specifying and enforcing compliance and security requirements for business process-driven enterprise systems. Furthermore, BPMN does not support the specification of security requirements during business process modelling. This increases the vulnerability of the system and makes the future development of security for the system more difficult. In this paper, we extend the BPMN language to explicitly support the specification of security requirements derived from a cyber security ontology. We incorporate visual constructs for modeling security requirements. In order to provide a commonly usable extension, these enhancements were implemented as a valid BPMN extension. An application example from the healthcare domain is used to demonstrate our approach. The experimentation shows that our approach achieves better results in terms of comprehensive understanding of the incorporated security requirements.
Mohamed El Amine Chergui, Sidi Mohamed Benslimane

A Correct-by-Construction Model for Attribute-Based Access Control

In this paper, a formal specification approach for Attribute-Based Access Control (ABAC) is proposed using the Event-B method. We apply a-priori formal verification to build a correct model in a stepwise manner. The correctness of the specification model is ensured during the construction steps. The model is composed of abstraction levels that are generated through refinement operations. A set of ABAC properties is defined at each level of refinement, starting from the most abstract level down to the most concrete one. These properties are proved to be preserved by the behavior specification.
Hania Gadouche, Zoubeyr Farah, Abdelkamel Tari

Algorithmics and Text Processing


Voronoi-Diagram Based Partitioning for Distance Join Query Processing in SpatialHadoop

SpatialHadoop is an extended MapReduce framework supporting global indexing techniques that partition spatial data across several machines and improve query processing performance compared to traditional Hadoop systems. SpatialHadoop supports several spatial operations efficiently (e.g. k Nearest Neighbor search, spatial intersection join, etc.). Distance Join Queries (DJQs), e.g. k Nearest Neighbors Join Query, k Closest Pairs Query, etc., are important and common operations used in numerous spatial applications. DJQs are costly operations, since they combine joins with distance-based search. Therefore, performing DJQs efficiently is a challenging task. In this paper, a new partitioning technique based on Voronoi Diagrams is designed and implemented in SpatialHadoop. A new kNNJQ MapReduce algorithm and an improved kCPQ MapReduce algorithm, using the new partitioning mechanism, are also developed for SpatialHadoop. Finally, the results of an extensive set of experiments are presented, demonstrating that the new partitioning technique and the new DJQ MapReduce algorithms are efficient, scalable and robust in SpatialHadoop.
Francisco García-García, Antonio Corral, Luis Iribarne, Michael Vassilakopoulos
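The core of the partitioning technique in the abstract can be illustrated compactly: each point is assigned to the cell of its nearest pivot, so a distance join can be restricted to nearby partitions. The pivot-selection strategy and all MapReduce plumbing from the paper are omitted; this sketch only shows the Voronoi assignment step.

```python
# Voronoi-based partitioning: assign each point to its nearest pivot's
# cell. Pivots and points here are 2D tuples chosen for the demo.
import math

def nearest_pivot(point, pivots):
    return min(range(len(pivots)),
               key=lambda i: math.dist(point, pivots[i]))

def voronoi_partition(points, pivots):
    cells = {i: [] for i in range(len(pivots))}
    for p in points:
        cells[nearest_pivot(p, pivots)].append(p)
    return cells

pivots = [(0.0, 0.0), (10.0, 10.0)]
points = [(1.0, 1.0), (9.0, 9.0), (2.0, 0.0)]
cells = voronoi_partition(points, pivots)
print(cells[0])  # [(1.0, 1.0), (2.0, 0.0)]
print(cells[1])  # [(9.0, 9.0)]
```

In a kNN join, a query point's k nearest neighbors may also lie in adjacent cells, which is why the full algorithm must replicate border objects across partitions.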

Graph Pattern Matching Preserving Label-Repetition Constraints

Graph pattern matching is a routine process for a wide variety of applications such as social network analysis. It is typically defined in terms of subgraph isomorphism, which is NP-complete. To lower its complexity, many extensions of graph simulation have been proposed that focus on topological constraints of pattern graphs that can be preserved in polynomial time over data graphs. We discuss the satisfaction of a new topological constraint, called the Label-Repetition constraint. To the best of our knowledge, existing polynomial approaches fail to preserve this constraint, and one can adopt only subgraph isomorphism for this end, which is cost-prohibitive. We first present a necessary and sufficient condition that a data subgraph must satisfy to preserve the Label-Repetition constraints of the pattern graph. Furthermore, we define matching based on a notion of triple simulation, an extension of graph simulation that considers the new topological constraint. We show that with this extension, graph pattern matching can be performed in polynomial time, and we provide such an algorithm. We extend our solution to deal with edges that have counting quantifiers of the form “\(\ge p\)”.
Houari Mahfoud
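For readers unfamiliar with the polynomial-time baseline that triple simulation extends, here is a compact sketch of plain graph simulation: nodes match on label, and a data node stays in sim(u) only while every pattern edge out of u can be followed from it. This is the standard fixpoint algorithm, not the paper's triple-simulation extension.

```python
# Plain graph simulation between a pattern graph and a data graph.
# Nodes are given as {node: label}; edges as sets of (src, dst) pairs.
def simulation(p_nodes, p_edges, d_nodes, d_edges):
    # start from label-compatible candidates
    sim = {u: {v for v in d_nodes if d_nodes[v] == p_nodes[u]}
           for u in p_nodes}
    changed = True
    while changed:          # refine until a fixpoint is reached
        changed = False
        for (u, u2) in p_edges:
            for v in list(sim[u]):
                # v must have some successor matching u2
                if not any((v, w) in d_edges and w in sim[u2]
                           for w in d_nodes):
                    sim[u].remove(v)
                    changed = True
    return sim

p_nodes = {"a": "A", "b": "B"}
p_edges = {("a", "b")}
d_nodes = {1: "A", 2: "B", 3: "A"}   # node 3 has no B-successor
d_edges = {(1, 2)}
print(simulation(p_nodes, p_edges, d_nodes, d_edges))
# {'a': {1}, 'b': {2}}
```

Unlike subgraph isomorphism, simulation returns a relation (sets of candidate matches per pattern node) rather than injective mappings, which is what keeps it polynomial.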

Standard and Dialectal Arabic Text Classification for Sentiment Analysis

On social networks, users tend to express themselves more, publicly sharing their opinions, emotions and sentiments. The benefits of analyzing such data are evident; however, the process of extracting and transforming these raw data can be a very challenging task, particularly when the sentiments are expressed in Arabic. Two main categories of Arabic are massively used on social networks, namely modern standard Arabic, which is the official language, and dialectal Arabic, which is itself subdivided into several categories depending on countries and regions. In this paper, we focus on analyzing Facebook comments expressed in modern standard or in Moroccan dialectal Arabic; we therefore put these two language categories under the scope by testing and comparing two approaches. The first is the classical approach, which considers all Arabic text as homogeneous. The second, which we propose, requires a text classification into language categories (standard and dialectal Arabic) before the sentiment classification. The idea behind this approach is to adapt the text preprocessing to each language category with more precision. For supervised classification, we applied two of the most reputed classifiers in sentiment analysis applications, Naive Bayes and SVM. The results of this study are promising, since good performance was obtained.
Mohcine Maghfour, Abdeljalil Elouardighi
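The two-stage architecture proposed above can be sketched as a routing pipeline: classify each comment's language category first, then apply a category-specific sentiment step. Everything concrete below (the transliterated marker words, the toy lexicon classifiers) is a hypothetical stand-in for the paper's trained Naive Bayes/SVM models, purely to show the control flow.

```python
# Two-stage classification: language category first, then a
# category-specific sentiment classifier. Marker words and lexicons
# are invented placeholders for trained models.
DIALECT_MARKERS = {"bzzaf", "wakha"}   # hypothetical Moroccan-dialect cues

def language_category(tokens):
    return "dialectal" if DIALECT_MARKERS & set(tokens) else "standard"

def sentiment(tokens, category):
    # separate per-category lexicons, so preprocessing and features
    # can differ by language category
    positive = {"standard": {"jamil"}, "dialectal": {"wakha"}}[category]
    return "pos" if positive & set(tokens) else "neg"

def classify(comment):
    tokens = comment.lower().split()
    cat = language_category(tokens)
    return cat, sentiment(tokens, cat)

print(classify("wakha bzzaf"))   # ('dialectal', 'pos')
print(classify("jamil"))         # ('standard', 'pos')
```

The point of the routing step is that each downstream model only ever sees text preprocessed with rules tuned to its own language category.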

A Graph-Based Model for Tag Recommendations in Clinical Decision Support System

Healthcare providers use clinical decision support systems to manage patients’ electronic health records. In this paper, we aim to enhance computer-aided diagnosis in medical imaging. We developed a graph-based tag recommendation approach that suggests relevant diseases and pathologies by analysing tagged medical images. Healthcare providers can rapidly get an improved diagnostic value from radiographs using the graph-based tag recommendations, which enable discovering common and relevant diseases used within the patient’s community, related images and semantically tied tags. The ChestX-Ray14 dataset was used to evaluate the accuracy and effectiveness of our proposal. Future work will address the online evaluation of the suggested tags by exploiting healthcare providers’ feedback.
Sara Qassimi, El Hassan Abdelwahed, Meriem Hafidi, Rachid Lamrani

Spatial Batch-Queries Processing Using xBR-trees in Solid-State Drives

Efficient query processing in spatial databases is of vital importance for numerous modern applications. In most cases, such processing is accomplished by taking advantage of spatial indexes. The xBR\(^+\)-tree is an index for point data which has been shown to outperform indexes belonging to the R-tree family. On the other hand, Solid-State Drives (SSDs) are secondary storage devices that exhibit higher (especially read) performance than Hard Disk Drives and nowadays are being used in database systems. Regarding query processing, the higher performance of SSDs is maximized when large sequences of queries (batch queries) are executed by exploiting the massive I/O advantages of SSDs. In this paper, we present algorithms for processing common spatial (point-location, window and distance-range) batch queries using xBR\(^+\)-trees in SSDs. Moreover, utilizing small and large datasets, we experimentally study the performance of these new algorithms against processing of batch queries by repeatedly applying existing algorithms for these queries. Our experiments show that, even when the existing algorithms take advantage of LRU buffering that minimizes disk accesses, the new algorithms prevail performance-wise.
George Roumelis, Michael Vassilakopoulos, Antonio Corral, Athanasios Fevgas, Yannis Manolopoulos

Specification, Verification and Validation


Formalizing Railway Signaling System ERTMS/ETCS Using UML/Event-B

Critical systems like railway signaling systems need to guarantee important properties such as safety. Formal methods have achieved considerable success in designing critical systems with verified desirable properties. In this paper, we propose a formal model of ERTMS/ETCS (European Rail Traffic Management System/European Train Control System) which is an innovative railway signaling system. This work focuses on Hybrid ERTMS/ETCS Level 3 which is currently under design, by studying and modeling the functionalities and relations of its different sub-systems. The proposed model is based on model transformation from UML (Unified Modeling Language) class diagrams to the Event-B formal language. UML is used as the primary modeling notation to describe the structure and the main characteristics of the studied system. The generated Event-B model is enriched by the formalization of safety properties. We verify and validate the correctness of the proposed formalization using the ProB model-checker and animator.
Abderrahim Ait Wakrime, Rahma Ben Ayed, Simon Collart-Dutilleul, Yves Ledru, Akram Idani

A Dynamic Analysis for Reverse Engineering of Sequence Diagram Using CPN

Reverse engineering is a very efficient way to automatically extract behavioral models from legacy systems. This paper proposes a new approach to detect and decipher dynamic information from these systems in order to recover the corresponding sequence diagram. The approach is composed of three steps: trace generation and collection, trace merging using Colored Petri Nets, and sequence diagram extraction. Our results show that this approach can produce a more accurate high-level sequence diagram with the main operators: “seq”, “alt”, “opt” and “par”.
Chafik Baidada, El Mahi Bouziane, Abdeslam Jakimi

A Formalized Procedure for Database Horizontal Fragmentation in Isabelle/HOL Proof Assistant

We propose a logical procedure for the horizontal fragmentation problem based on predicate abstraction over the entire domain of database relations. The set of minterm predicates is constructed using rewriting rules similar to the well-known semantic tableau algorithm. The procedure starts from an initial set of simple predicates and builds the set of minterm predicates until the rules are no longer applicable. We give a formal proof of the procedure's correctness, namely its soundness, completeness and termination, with the Isabelle proof assistant. The main contributions of this work are: refining the minterm approach by adding a semantic layer to predicates, and minimizing the set of minterm predicates by automatically eliminating contradictory ones and by detecting and handling subsumptions between them. This leads to the best construction time for the final partitioning schema. Finally, source code for the procedure is generated automatically by the Isabelle proof assistant.
Cheikh Salmi, Mohamed Chaabani, Mohamed Mezghiche
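The minterm construction that the abstract refines can be sketched briefly: every conjunction of the simple predicates and their negations forms a candidate minterm, and contradictory ones are pruned. The predicates below are simple numeric threshold tests and the pruning is a brute-force satisfiability check over a sample domain; the paper's semantic pruning and subsumption handling are richer than this.

```python
# Minterm predicate construction for horizontal fragmentation:
# enumerate all sign combinations of the simple predicates, then
# drop combinations no domain value can satisfy.
from itertools import product

# simple predicates over attribute "salary"
P = [("salary", "<", 1000), ("salary", ">=", 5000)]

def holds(pred, value):
    attr, op, k = pred
    return value < k if op == "<" else value >= k

def satisfiable(minterm, domain=range(0, 10001, 500)):
    """Keep a minterm only if some sampled domain value satisfies it."""
    return any(all(holds(p, v) == sign for p, sign in minterm)
               for v in domain)

minterms = [tuple(zip(P, signs))
            for signs in product([True, False], repeat=len(P))]
valid = [m for m in minterms if satisfiable(m)]
print(len(minterms))  # 4 candidate minterms
print(len(valid))     # 3 (salary<1000 AND salary>=5000 is contradictory)
```

Each surviving minterm defines one horizontal fragment: the set of tuples satisfying that conjunction. Eliminating contradictions early is exactly what keeps the fragmentation schema small.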

Domain-Oriented Verification Management

V. Basili stated twenty years ago that a software organization that manages quality should have a corporate infrastructure that links together and transcends single projects by capitalizing on successes and learning from failures. For critical systems design, verification tasks play a crucial role; when an unexpected situation is detected, the engineer analyzes the cause, performing a diagnosis activity. To improve the quality of the design, diagnosis information has to be managed through a well-defined method and with a suitable system. In this paper we present how a Verification Organizing System, together with a problem-oriented method, can address these issues. The key aspect of the approach is a step-wise construction of the solution, reusing known problems that are relevant to the system under study.
Vincent Leildé, Vincent Ribaud, Ciprian Teodorov, Philippe Dhaussy

A Formal Model for Interaction Specification and Analysis in IoT Applications

The Internet of Things (IoT) is a concept in which connected entities can work and interact with each other in order to facilitate daily life. Although many research efforts in the IoT realm have to date been devoted to device, networking and application service perspectives, the formalization and analysis of IoT systems are still in their infancy. This paper introduces a new BRS-based approach aiming to support the specification and verification of interaction and interoperability aspects in IoT systems. The proposed approach is based on a bigraphical-agent model that captures both the spatial structure of the IoT system and its logical structure, defining the behaviour and interactions of its different entities. The Tree Query Logic (TQL) is used to formally express and verify some properties inherent to IoT systems.
Souad Marir, Faiza Belala, Nabil Hameurlain

Mechanizing the Denotational Semantics of the Clock Constraint Specification Language

Domain Specific Modelling Languages provide designers with languages appropriate for the task they must conduct. These dedicated languages play a key role in popular Model Driven Engineering (MDE) approaches. Their semantics are usually written in a semi-formal manner, mixing natural language and mathematical notation. The mechanization of these semantics relies on formal specification languages and is usually conducted in order to assess the correctness of verification and transformation tools for such languages. This contribution illustrates such a mechanization for the Clock Constraint Specification Language (CCSL). This language allows modelling the timed concurrency concern in the MARTE UML profile and was designed to be easier for system engineers to master than temporal logics. Its semantics has been defined in the usual semi-formal manner and implemented in the TimeSquare simulation tool. We discuss the interest of this mechanization and show how it allowed us to prove properties about this language and eased the definition of a refinement relation for such models. This work relies on the Agda proof assistant and is presented accordingly.
Mathieu Montin, Marc Pantel

Ensuring the Functional Correctness of IoT through Formal Modeling and Verification

Recent research initiatives dedicated to the formal modeling, functional correctness and security analysis of IoT systems are generally limited to modeling abstract behavioral patterns and looking for possible attacks, without gauging and providing feasible attacks. This research considers the complementary problem by looking for more accurate attacks in IoT, capturing richer behaviors (technical, physical and social), including their quantitative features. We propose the IoT-SEC framework, which establishes an adequate semantics for the IoT's components and their interactions, including social actors that behave differently than automated processes. For security analysis, we develop a general approach based on a library of attack trees, from which we automatically generate the monitor, the security policies and the requirements to harden the IoT model and to check how well the model is secured. We use the PRISM model checker to analyze the functionality and check the security of the IoT model. In precise terms, this contribution ensures the functionality of IoT systems by analyzing their functional correctness.
Samir Ouchani

Extensions to Hybrid Event-B to Support Concurrency in Cyber-Physical Systems

Event-B is one of the most commonly used rigorous methods and has proven its value in many applications. To support the development of cyber-physical systems (CPS), continuous extensions to the method have already been proposed, and extensions to supporting tools are under development. In this paper, further extensions are proposed to address the need to support the asynchronous behaviour of autonomous components in CPS. This can be accomplished by multiple Event-B machines with a semantics defined by concurrent runs, which preserves the semantics of single Event-B machines. This only makes sense if shared locations are supported as well. A third extension covers partial updates, which avoid conflicting updates to shared locations holding the bulk data values, such as sets or relations, that are predominant in Event-B.
Klaus-Dieter Schewe

