
About This Book

This book constitutes the thoroughly refereed proceedings of the 15th International Conference on Software Technologies, ICSOFT 2020, which was held virtually due to the Covid-19 pandemic.
The 12 revised full papers were carefully reviewed and selected from 95 submissions. The papers deal with the following topics: business process modelling; IT service management; interoperability and service-oriented architecture; project management software; scheduling and estimating; software metrics; requirements elicitation and specification; and software and systems integration, among others.



Shared Autonomous Mobility on Demand: A Fuzzy-Based Approach and Its Performance in the Presence of Uncertainty

Ride-sharing Autonomous Mobility-on-Demand (RAMoD) systems, in which self-driving vehicles provide coordinated on-demand travel services and potentially allow multiple passengers to share a trip, have recently emerged as a promising solution to several problems such as low vehicle utilization rates, pollution, and scarce parking space. The uncertain travel demand expected on such systems, with the resulting fleet imbalance and insufficient charging resources, requires an efficient fleet management strategy. This paper focuses on designing and testing an integrated strategy for dispatching, rebalancing, and charging that accounts for uncertain travel demands. Specifically, we first devise a novel multi-objective possibilistic mixed-integer linear programming (MILP) model, which captures the variability and uncertainty affecting travel demands in RAMoD systems. The main target is to centralize the various decisions in order to keep vehicle availability balanced over the planning horizon and the transportation network, so that travel requests are satisfied at minimum cost. Second, leveraging appropriate strategies, we transform this fuzzy formulation into an equivalent auxiliary crisp multi-objective model. Due to the conflicting nature of the considered objectives, a goal programming approach with a specific weight for each goal is used to compute an efficient compromise solution. Results show the applicability and usefulness of the proposed fuzzy approach as well as its merits compared to other schemes.
Rihab Khemiri, Mohamed Naija, Ernesto Exposito
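The weighted compromise among conflicting fuzzy goals described in this abstract can be illustrated with a small sketch (the goal profiles, weights, and candidate solutions below are hypothetical, not the paper's actual model): each candidate is scored by the weighted sum of its fuzzy goal-satisfaction degrees, and the best-scoring one is kept.

```python
def triangular_membership(x, a, m, b):
    """Degree in [0, 1] to which value x satisfies a fuzzy goal with a
    triangular profile: 0 at and beyond a and b, 1 at the ideal point m."""
    if x <= a or x >= b:
        return 0.0
    if x <= m:
        return (x - a) / (m - a)
    return (b - x) / (b - m)

def weighted_compromise(solutions, goal_profiles, weights):
    """Pick the solution maximizing the weighted sum of goal memberships.

    solutions:     tuples of objective values (e.g. cost, fleet imbalance)
    goal_profiles: one (a, m, b) triangle per objective
    weights:       relative importance of each goal
    """
    def score(sol):
        return sum(w * triangular_membership(x, *p)
                   for x, p, w in zip(sol, goal_profiles, weights))
    return max(solutions, key=score)
```

With minimization-style profiles peaking at 0, a solution that is moderately good on both goals can beat one that excels on only one, which is the point of the compromise scheme.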

Efficient Scheduling of Periodic, Aperiodic, and Sporadic Real-Time Tasks with Deadline Constraints

A real-time system is an operating system that guarantees certain functionality within specified time constraints. Such a system is composed of tasks of various types: periodic, sporadic, and aperiodic. These tasks can be subject to a variety of temporal constraints, the most important of which is the deadline: a reaction occurring too late may be useless or even dangerous. In this context, the main problem of this study is how to configure a feasible real-time system comprising periodic, aperiodic, and sporadic tasks. This paper presents an approach for computing deadlines in uniprocessor real-time systems that guarantees feasibility for hard-deadline periodic and sporadic tasks while providing good responsiveness for soft-deadline aperiodic tasks. An application to a case study and a performance evaluation show the effectiveness of the proposed approach.
Aicha Goubaa, Mohamed Khalgui, Georg Frey, Zhiwu Li
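As a minimal illustration of deadline-driven feasibility analysis on a uniprocessor (the classical EDF utilization test for implicit-deadline periodic tasks, not the paper's own deadline-computation method):

```python
def edf_feasible(tasks):
    """Classical EDF schedulability test for implicit-deadline periodic
    tasks on one processor: the task set is feasible if and only if the
    total utilization sum(C_i / T_i) does not exceed 1."""
    return sum(wcet / period for wcet, period in tasks) <= 1.0
```

Hard-deadline sporadic tasks can be treated pessimistically by using their minimum inter-arrival time as the period; aperiodic tasks are then served in the remaining slack.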

R-TNCES State Space Generation Using Ontology-Based Method on a Distributed Cloud-Based Architecture

Reconfigurable discrete event control systems (RDECSs) are complex and critical systems, which makes their formal verification expensive in terms of complexity and memory occupation. This paper deals with the formal verification (accessibility graph generation and state space analysis) of RDECSs modeled with reconfigurable timed net condition/event systems (R-TNCESs), where the properties ensuring correct system behavior are expressed in computation tree logic (CTL). We aim to improve model checking for the formal verification of RDECSs by proposing a new approach to state space generation that exploits similarities and verifies CTL properties in parallel. In this approach, we introduce the modularity concept for verifying systems by constructing their accessibility graphs incrementally. Furthermore, we set up an ontology-based history to deal with similarities between two or several systems by reusing the state spaces of similar components computed during previous verifications. A distributed cloud-based architecture is proposed to perform the parallel computation and thereby control verification time and memory occupation. The paper's contribution is applied to a benchmark production system. The proposed approach is evaluated by measuring the temporal complexity of verifying several large-scale systems. The results show the relevance of this approach.
Chams Eddine Choucha, Mohamed Oussama Ben Salem, Mohamed Khalgui, Laid Kahloul, Naima Souad Ougouti
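The reuse of previously computed state spaces for similar components can be sketched as a cache keyed by a component's structural signature; the ontology-based matching of the paper is simplified here to exact signature equality, and the component fields are illustrative.

```python
state_space_cache = {}

def state_space(component, explore):
    """Return the component's state space, reusing a previously computed
    one when an identical structural signature is found in the history.
    `explore` is the (expensive) accessibility-graph generation function."""
    signature = (component["type"], tuple(sorted(component["transitions"])))
    if signature not in state_space_cache:
        state_space_cache[signature] = explore(component)
    return state_space_cache[signature]
```

In a distributed setting, the cache would live in a shared store so that workers verifying different systems benefit from each other's earlier computations.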

MLCA: A Model-Learning-Checking Approach for IoT Systems

The Internet of Things (IoT) is a broad concept comprising a wide ecosystem of interconnected services and devices connected to the Internet. The IoT concept holds great promise, but security concerns tend to be significant barriers to the adoption of large-scale IoT deployments. This paper proposes an approach to assist companies or organisations in the security audit of IoT systems. This approach, called the Model Learning and Checking Approach (MLCA), combines model learning, for automatically extracting models from event logs, with model checking, for verifying whether security properties, given in the form of generic LTL formulas, hold on the models. The originality of MLCA lies in the fact that auditors do not have to craft models or be expert LTL users. The LTL formula instantiation, which makes security properties concrete, is semi-automatically performed by means of an expert system composed of inference rules. The latter encode expert knowledge that can be applied again to the same kind of systems with less effort. We evaluated MLCA on five IoT systems with security measures provided by ENISA, the European Union Agency for Cybersecurity. We show that MLCA is very effective in detecting security issues and provides results within reasonable time.
Sébastien Salva, Elliott Blot
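Checking an LTL-style property over models learned from event logs can be illustrated on finite traces. The sketch below (event names are hypothetical) checks the common response pattern G(trigger -> F response): every occurrence of the trigger event must eventually be followed by the response event.

```python
def response_holds(trace, trigger, response):
    """Finite-trace check of the LTL response pattern
    G(trigger -> F response): every trigger event in the log is
    eventually followed by a response event later in the log."""
    for i, event in enumerate(trace):
        if event == trigger and response not in trace[i + 1:]:
            return False
    return True
```

A real model checker evaluates such formulas on the learned model's full state graph rather than on individual traces, but the semantics per trace is the same.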

A Real-Time Integration of Semantic Annotations into Air Quality Monitoring Sensor Data

Air pollution is nowadays one of the most serious problems in the world, so real-time air quality monitoring is considered a necessity. Internet of Things (IoT) devices such as sensors enable real-time air quality monitoring: they continuously produce sensed data as streams and transmit these data to a centralized server. Raw sensor stream data are useless unless properly annotated. Hence, researchers have proposed the Semantic Sensor Web (SSW), which combines the Sensor Web with Semantic Web technologies. However, how to integrate semantic annotations in real time remains an open issue. This research focuses on the real-time integration of semantics into heterogeneous sensor stream data with context in the IoT. In this setting, an IoT real-time air quality monitoring system is developed, together with different semantic annotations for sensor stream data in the format of the Sensor Observation Service (SOS).
Besmir Sejdiu, Florije Ismaili, Lule Ahmedi
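The idea of annotating a raw reading before it leaves the stream pipeline can be sketched as wrapping the value in a JSON-LD observation using SOSA/SSN-style terms (the sensor and property identifiers below are illustrative, and this is a simplified stand-in for the paper's SOS-based annotations):

```python
import datetime

def annotate_observation(sensor_id, property_uri, value, unit):
    """Wrap a raw sensor reading in a SOSA/SSN-style JSON-LD observation
    so downstream consumers can interpret it semantically."""
    return {
        "@context": {"sosa": "http://www.w3.org/ns/sosa/"},
        "@type": "sosa:Observation",
        "sosa:madeBySensor": sensor_id,
        "sosa:observedProperty": property_uri,
        "sosa:hasSimpleResult": f"{value} {unit}",
        "sosa:resultTime":
            datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
```

Attaching the annotation at ingestion time keeps the stream self-describing, at the cost of a per-message size overhead.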

On Improvement of Formal Verification of Reconfigurable Real-Time Systems Using TCTL and CTL-Based Properties on IaaS Cloud Environment

The verification of reconfigurable real-time systems, which dynamically change their structure in response to changes in the environment or user requirements, continues to challenge experts, who face issues such as fault tolerance, timely response, flexibility, and modularity. Moreover, such systems are subject to real-time constraints, their generated state spaces are much larger, and the properties to be verified are consequently more complex, which makes formal verification harder. For modeling, this paper uses Reconfigurable Timed Net Condition/Event Systems (R-TNCESs) for optimal functional and temporal specification. To control the complexity and reduce verification time, a new method for verifying properties on a cloud-based architecture is proposed. The novelty consists of a new method for state space generation and the decomposition of complex properties for efficient verification. Moreover, an algorithm is proposed for incremental state space generation. The contribution is applied to a case study to illustrate the impact of the technique, and the current results show its benefits.
Chams Eddine Choucha, Mohamed Ramdani, Mohamed Khalgui, Laid Kahloul

A Genetic Algorithm with Tournament Selection for Automated Testing of Satellite On-board Image Processing

In the satellite domain, on-board image processing technologies are subject to extremely strict requirements with respect to reliability and accuracy in hard real-time. In this paper, we address the problem of automatically selecting test cases from a huge input domain that are specifically tailored to provoke mission-critical behavior of satellite on-board image processing applications. Due to the large input domain of such applications, it is infeasible to exhaustively execute all possible test cases. Moreover, the high number of input parameters and complex computations make it difficult to find specific test cases that cause mission-critical behavior. To overcome this problem, we define a test approach that is based on a genetic algorithm combined with input parameter partitioning. We partition the input parameters into equivalence classes to automatically generate a reduced search space with complete coverage of the input domain. Based on the reduced search space, we run a genetic algorithm to automatically select test cases that provoke worst-case execution times and inaccurate results of the satellite on-board image processing application. For this purpose, we define a two-criteria fitness function and evaluate two different selection methods with a case study from the satellite domain. We show the efficiency of our test approach using experimental results from the Fine Guidance System of the ESA medium-class mission PLATO.
Ulrike Witteck, Denis Grießbach, Paula Herber
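The tournament selection named in the title can be sketched in a few lines (the fitness function here is a placeholder, not the paper's two-criteria fitness): k individuals are drawn at random and the fittest of them survives, so a larger tournament size k increases selection pressure while a smaller k preserves diversity.

```python
import random

def tournament_select(population, fitness, k=3, rng=random):
    """Tournament selection: draw k individuals uniformly at random
    and return the fittest contender."""
    contenders = rng.sample(population, k)
    return max(contenders, key=fitness)
```

In the test-generation setting of the paper, each individual would encode one test case drawn from the equivalence-class-reduced search space, and fitness would combine execution time and result accuracy.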

Model-Based Threat Modeling for Cyber-Physical Systems: A Computer-Aided Approach

Compromising the security of a Cyber-Physical System (CPS) can lead to substantial damage and endanger lives, because such a system includes many devices that interact with the physical world. Following the principle of security-by-design, security should be considered as early as possible during software development. However, the current state of the art often lacks systematic documentation of possible threats, and identifying all relevant threats is not a trivial task.
In previous work, we presented a taxonomy of relevant attack actions for CPSs. The distinguishing feature of the taxonomy is its two-dimensional structure: we map typical attack actions to the attack surface. The attack surface is described by the component's interfaces, which can be misused by attackers to gain access to a component and thus potentially harm the security of the system. On top of this taxonomy, we described an example of an attack action catalog. Applying our taxonomy and the attack action catalog still requires manual effort from practitioners, e.g. when looking up relevant attack actions.
Therefore, we developed a tool based on our taxonomy, which we present in this paper. As a first step, we formalized our taxonomy in the form of a metamodel; each threat model is an instance of that metamodel. The metamodel reflects the way in which the taxonomy links attack actions with parts of the system. Furthermore, we created a graphical editor that assists practitioners in creating the threat model. Based on the taxonomy's metamodel and attack action catalogs, the tool pre-filters relevant attack actions and allows practitioners to document them systematically in the threat model. Our tool provides different views on the threat model, thus helping users focus on the aspects relevant to a specific task.
Monika Maidl, Gerhard Münz, Stefan Seltzsam, Marvin Wagner, Roman Wirtz, Maritta Heisel
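The pre-filtering of attack actions by attack surface can be sketched as a tiny metamodel instance (the classes, interface kinds, and catalog entries below are invented for illustration, not the paper's actual metamodel or catalog): each component exposes interfaces, and the catalog is indexed by interface kind.

```python
from dataclasses import dataclass, field

@dataclass
class Interface:
    name: str
    kind: str          # e.g. "network", "physical"

@dataclass
class Component:
    name: str
    interfaces: list = field(default_factory=list)

# hypothetical excerpt of an attack action catalog, indexed by interface kind
CATALOG = {
    "network": ["eavesdropping", "message injection"],
    "physical": ["tampering"],
}

def relevant_attack_actions(component):
    """Pre-filter catalog actions by the component's attack surface,
    i.e. keep only actions applicable to its interface kinds."""
    return {i.name: CATALOG.get(i.kind, []) for i in component.interfaces}
```

A graphical editor on top of such a model would then let practitioners mark each pre-filtered action as mitigated, accepted, or open.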

A Machine Learning Based Methodology for Web Systems Codeless Testing with Selenium

Testing is a crucial phase of the web system development cycle. However, although there is a real need to test these complex systems, doing so often requires specific testing and/or programming skills. Moreover, the lifecycles of web systems are today very dynamic: the systems are often modified and updated, integrating new data, links, widgets, etc. Therefore, the testing processes and scripts for these systems have to be modified as well, which can be very costly in terms of time and resources. In that context, this paper aims at reducing these prerequisites and constraints for testers by proposing a codeless testing automation framework. Our approach is based on Selenium and a machine learning technique, and proposes generic testing scripts that can be automatically tuned to the tested use cases. Experiments lead to relevant results demonstrating the success of our methodology.
Phuc Nguyen, Stephane Maag
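One ingredient of tuning a generic script to a changed page can be sketched as re-binding an expected widget label to the closest visible label on the current page. The string-similarity matcher below is a simple stand-in for the paper's machine learning technique, and the labels are invented:

```python
from difflib import SequenceMatcher

def best_matching_element(target_label, candidate_labels):
    """Pick the page element whose visible label is most similar to the
    label the generic test script expects, so the script survives small
    UI wording changes without hand-edited locators."""
    def similarity(label):
        return SequenceMatcher(None, target_label.lower(),
                               label.lower()).ratio()
    return max(candidate_labels, key=similarity)
```

In a Selenium-based pipeline, the candidate labels would be harvested from the live DOM and the chosen element clicked through the usual WebDriver API.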

Multilevel Readability Interpretation Against Software Properties: A Data-Centric Approach

Given the wide adoption of the agile software development paradigm, where efficient collaboration as well as effective maintenance are of utmost importance, the need to produce readable source code is evident. To that end, several research efforts aspire to assess the extent to which a software component is readable. Several metrics and evaluation criteria have been proposed; however, they are mostly empirical or rely on experts who determine the ground truth and/or set custom thresholds, leading to results that are context-dependent and subjective. In this work, we employ a large set of static analysis metrics along with various coding violations towards interpreting readability as perceived by developers. Unlike existing approaches, we refrain from using experts and provide a fully automated and extensible methodology built upon data residing in online code hosting facilities. We perform static analysis at two levels (method and class) and construct a benchmark dataset that includes more than one million methods and classes covering diverse development scenarios. After clustering based on source code size, we employ Support Vector Regression in order to interpret the extent to which a software component is readable against the source code properties cohesion, inheritance, complexity, coupling, and documentation. The evaluation of our methodology indicates that our models effectively interpret readability as perceived by developers against the above-mentioned source code properties.
Thomas Karanikiotis, Michail D. Papamichail, Andreas L. Symeonidis
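The size-based clustering step that precedes the per-cluster regression can be sketched as bucketing components by lines of code (the thresholds and labels below are illustrative, not the paper's actual clustering):

```python
from bisect import bisect_right

SIZE_BOUNDARIES = [20, 100, 500]   # illustrative LOC thresholds
SIZE_LABELS = ["small", "medium", "large", "very large"]

def size_cluster(loc):
    """Assign a component to a size cluster so that a separate
    readability model is trained per cluster rather than one global
    model over components of very different sizes."""
    return SIZE_LABELS[bisect_right(SIZE_BOUNDARIES, loc)]
```

Each cluster's components would then be fed, with their static-analysis metric vectors, to a regressor such as Support Vector Regression.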

Efficient Verification of Reconfigurable Discrete-Event System Using Isabelle/HOL Theorem Prover and Hadoop

This paper deals with the modelling and verification of reconfigurable discrete event systems using the Isabelle/HOL theorem prover and Hadoop. Hadoop is a platform for distributing computation over several machines; its objective is to overcome limitations in hard disk size and computing power. Isabelle/HOL is an interactive/automated theorem prover that combines the functional programming paradigm with higher-order logic (HOL), which makes it efficient for developing solid formalizations. In this paper, we are interested in reconfigurable discrete event systems, which we formalise using Isabelle/HOL; using Hadoop, we then apply distributed verification to verify such systems efficiently. The reason for choosing theorem proving is that it can deal with the verification of infinite systems, whereas model checking deals with finite systems and suffers from the well-known state space explosion problem. Furthermore, Hadoop enables distributed verification, which further reduces verification time. We implement the contributions of this paper using the Hadoop platform and the Isabelle tool. Finally, we illustrate the proposed method through the FESTO MPS case study.
Sohaib Soualah, Yousra Hafidi, Mohamed Khalgui, Allaoua Chaoui, Laid Kahloul

A Method for the Joint Analysis of Numerical and Textual IT-System Data to Predict Critical System States

We present a method for the joint analysis of textual and numerical IT-system data that can be used to predict potentially critical system states. Towards a comparative discussion culminating in a justified choice of model and method, we apply logistic regression, random forests, and neural networks to the prediction of critical system states. Our models consume a set of monitoring performance metrics and log file events. To ease the analysis of IT systems, our models judge the future system state using a single binary outcome variable for the state's criticality: "alarm" or "no alarm". Moreover, we use feature importance measures to give IT operators guidance on which system parameters, i.e., features, to consider primarily when responding to an alarm. We evaluate our models using different configurations, including (among others) the demanded lead-time window for incident response, and a set of common performance measures. This paper extends previous work by adding details on how to jointly process textual and numerical data.
Patrick Kubiak, Stefan Rass, Martin Pinzger, Stephan Schneider
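The binary alarm decision of the logistic-regression variant can be sketched as follows (the feature weights and threshold are illustrative; in the paper they would be learned from the joint numerical and textual data):

```python
import math

def alarm_probability(features, weights, bias):
    """Logistic-regression score mapping a vector of monitoring
    features to the probability of a critical state within the
    demanded lead-time window."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def predict(features, weights, bias, threshold=0.5):
    """Collapse the probability into the binary outcome variable."""
    p = alarm_probability(features, weights, bias)
    return "alarm" if p >= threshold else "no alarm"
```

The magnitudes of the learned weights then serve directly as a simple feature importance measure, pointing operators to the metrics most responsible for a raised alarm.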

