
2020 | Book

Evaluation of Novel Approaches to Software Engineering

14th International Conference, ENASE 2019, Heraklion, Crete, Greece, May 4–5, 2019, Revised Selected Papers


About this book

This book constitutes the refereed proceedings of the 14th International Conference on Evaluation of Novel Approaches to Software Engineering, ENASE 2019, held in Heraklion, Crete, Greece, in May 2019.

The 19 revised full papers presented were carefully reviewed and selected from 102 submissions. The papers included in this book contribute to the understanding of relevant trends of current research on novel approaches to software engineering for the development and maintenance of systems and applications, specifically in relation to: model-driven software engineering, requirements engineering, empirical software engineering, service-oriented software engineering, business process management and engineering, knowledge management and engineering, reverse software engineering, software process improvement, software change and configuration management, software metrics, software patterns and refactoring, application integration, software architecture, cloud computing, and formal methods.

Table of Contents

Frontmatter
Using Stanford CoreNLP Capabilities for Semantic Information Extraction from Textual Descriptions
Abstract
Automated extraction of semantic information from textual descriptions can be implemented by processing the results of applying Stanford CoreNLP tools. This paper presents a sequence of processing steps and the initial results of their application to two example descriptions of a system's functionality. The processing steps allow identifying the main functional characteristics of the system and its operational domain. The results obtained by applying the steps are compared with data obtained from a developer's analysis. In certain cases, the Stanford CoreNLP parsers can produce errors that influence the results of further processing. The comparison of the two result sets showed that the variability of language constructs in descriptions affects the amount of implicitly expressed knowledge. Nevertheless, the results of this research can be used as a starting point for automated text processing aimed at the creation of analysis models.
Erika Nazaruka, Jānis Osis, Viktorija Griberman
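The kind of pipeline described in the abstract can be approximated directly with the CoreNLP annotators. The sketch below is illustrative only and is not the authors' pipeline: it assumes a local Stanford CoreNLP installation driven through the stanza client (with CORENLP_HOME set), and the example sentence is invented.

    # Illustrative sketch, not the paper's pipeline: annotate a short functional
    # description and print open-IE triples as actor/action/object candidates.
    # Assumes CoreNLP is installed locally and CORENLP_HOME points to it.
    from stanza.server import CoreNLPClient

    TEXT = "The librarian registers a new reader and issues a library card."

    with CoreNLPClient(annotators=["tokenize", "ssplit", "pos", "lemma",
                                   "depparse", "natlog", "openie"],
                       output_format="json", memory="4G", be_quiet=True) as client:
        ann = client.annotate(TEXT)
        for sentence in ann["sentences"]:
            for triple in sentence.get("openie", []):
                print(triple["subject"], "|", triple["relation"], "|", triple["object"])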
An Overview of Ways of Discovering Cause-Effect Relations in Text by Using Natural Language Processing
Abstract
Understanding cause-effect relations is vital for constructing a valid model of a system under development. Discovering cause-effect relations in text is one of the difficult tasks in Natural Language Processing (NLP). This paper presents a survey of trends in this field, focused on understanding how causal dependencies can be expressed linguistically in text, what patterns and models exist, and which of them are more or less successful and why. The results show that causal dependencies in text can be described using a wide range of lexical expressions as well as linguistic and syntactic patterns. Moreover, the same constructs can also express non-causal dependencies. Solutions that combine patterns, ontologies, temporal models and the use of machine learning demonstrate more accurate results in extracting and selecting cause-effect pairs. However, not all lexical expressions are well studied, and there is little research on multi-cause and multi-effect domains. The results of the survey are to be used for the construction of a Topological Functioning Model (TFM) of a system, where cause-effect relations are one of the key elements; however, they can also be used for the construction of other behavioral models.
Erika Nazaruka
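To give a flavour of the lexical patterns surveyed, the sketch below matches a few explicit causal cues with regular expressions. The cue list and the example sentence are invented; as the abstract notes, real solutions combine such patterns with ontologies, temporal models and machine learning.

    # Illustrative sketch: surface-level causal cues only, no disambiguation.
    import re

    CAUSAL_CUES = [
        r"(?P<effect>.+?)\s+because\s+(?P<cause>.+)",
        r"(?P<cause>.+?)\s+leads to\s+(?P<effect>.+)",
        r"(?P<effect>.+?)\s+is caused by\s+(?P<cause>.+)",
    ]

    def extract_cause_effect(sentence):
        """Return (cause, effect) pairs suggested by surface patterns."""
        pairs = []
        for cue in CAUSAL_CUES:
            match = re.search(cue, sentence, flags=re.IGNORECASE)
            if match:
                pairs.append((match.group("cause").strip(" ."),
                              match.group("effect").strip(" .")))
        return pairs

    print(extract_cause_effect("The order is rejected because the card has expired."))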
From Requirements to Automated Acceptance Tests with the RSL Language
Abstract
Software testing can promote software quality. However, this activity is often performed at the end of projects, where failures are most difficult to correct. Combining requirements specification activities with test design at an early stage of the software development process can be beneficial. One way to do this is to use a more structured requirements specification language. This helps to reduce typical problems such as ambiguity, inconsistency, and incorrectness in requirements and may allow the automatic generation of (parts of) acceptance test cases, reducing the test design effort. In this paper we discuss an approach that promotes the practice of requirements specification combined with test specification. It is a model-based approach that promotes alignment between requirements and tests, namely test cases as well as low-level automated test scripts. To show the applicability of this approach, we integrate two complementary languages: (i) the ITLingo RSL (Requirements Specification Language), which is specifically designed to support the rigorous and consistent specification of both requirements and tests; and (ii) the Robot language, a low-level keyword-based language for specifying test scripts. The approach includes model-to-model transformation processes, namely a transformation from requirements (defined in RSL) into test cases (defined in RSL), and a second transformation from test cases (in RSL) into test scripts (defined according to the Robot Framework). The approach was applied to a fictitious online store that illustrates the various phases of the proposal.
Ana C. R. Paiva, Daniel Maciel, Alberto Rodrigues da Silva
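To give a concrete feel for the second transformation step, the sketch below renders a structured test-case description into a Robot Framework script. The dictionary merely stands in for an RSL test case, and the keyword names are SeleniumLibrary-style placeholders; this is not the ITLingo tooling itself.

    # Illustrative sketch: a test case as data, rendered to Robot syntax.
    test_case = {
        "name": "Checkout with valid card",
        "steps": [
            ("Open Browser", ["${STORE_URL}", "chrome"]),
            ("Click Element", ["id=checkout"]),
            ("Input Text", ["id=card-number", "4111111111111111"]),
            ("Page Should Contain", ["Order confirmed"]),
        ],
    }

    def to_robot(tc):
        lines = ["*** Test Cases ***", tc["name"]]
        for keyword, args in tc["steps"]:
            lines.append("    " + "    ".join([keyword, *args]))
        return "\n".join(lines)

    print(to_robot(test_case))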
Experimenting with Liveness in Cloud Infrastructure Management
Abstract
Cloud computing has been playing a significant role in the provisioning of services over the Internet since its birth. However, developers still face several challenges limiting its full potential. The difficulties are mostly due to the large, ever-growing, and ever-changing catalog of services offered by cloud providers. As a consequence, developers must deal with different cloud services in their systems, each managed almost individually and continually growing in complexity. This heterogeneity may limit the view developers have over their system architectures and make the task of managing these resources more complex. This work explores the use of liveness as a way to shorten the feedback loop between developers and their systems in an interactive and immersive way, as they develop and integrate cloud-based systems. The designed approach allows real-time visualization of cloud infrastructures using a visual city metaphor. To assess the viability of this approach, the authors built a proof-of-concept and carried out experiments with developers to evaluate its feasibility.
Pedro Lourenço, João Pedro Dias, Ademar Aguiar, Hugo Sereno Ferreira, André Restivo
Live Software Development Environment Using Virtual Reality: A Prototype and Experiment
Abstract
Successful software systems tend to grow considerably, ending up suffering from essential complexity and becoming very hard to understand as a whole. Software visualization techniques have been explored as one approach to ease software understanding. This work presents a novel approach and environment for software development that explores the use of liveness and virtual reality (VR) as a way to shorten the feedback loop between developers and their software systems in an interactive and immersive way. As a proof-of-concept, the authors developed a prototype that uses a visual city metaphor and allows developers to visit and dive into the system in a live way. To assess the usability and viability of the approach, the authors carried out experiments to evaluate its effectiveness and how best to support a live approach to software development.
Diogo Amaral, Gil Domingues, João Pedro Dias, Hugo Sereno Ferreira, Ademar Aguiar, Rui Nóbrega, Filipe Figueiredo Correia
Model-Based Risk Analysis and Evaluation Using CORAS and CVSS
Abstract
The consideration of security during software development is an important factor for deploying high-quality software. The later security is considered in the software development lifecycle, the higher the effort required to address security-related incident scenarios. Following the principle of security-by-design, we aim at providing methods to develop secure software right from the beginning, i.e. methods to be applied during requirements engineering.
The level of risk can be used to prioritize the treatment of scenarios, thus spending the required effort in an efficient manner. It is defined in terms of the likelihood of a scenario and its consequence for an asset. The higher the risk level, the higher the priority of addressing the corresponding incident scenario. In previous work, we proposed a method that allows risks to be semi-automatically estimated and evaluated based on the Common Vulnerability Scoring System, using a template-based description of scenarios. In the present paper, we show how the method can be integrated into an existing risk management process such as CORAS. To relate the CORAS diagrams to the template, we provide a metamodel. Our model-based approach ensures consistency and traceability between the different steps of the risk management process.
Furthermore, we enhance the existing method with a questionnaire to improve the assessment of an incident scenario’s likelihood.
Roman Wirtz, Maritta Heisel
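To illustrate the prioritisation idea (risk as a combination of likelihood and consequence), the sketch below combines a CVSS-derived likelihood class with a consequence rating in a small risk matrix. The scales, thresholds and matrix are invented for illustration and are not the scales used by the authors or by CORAS.

    # Illustrative sketch with invented scales: CVSS base score -> likelihood
    # class, combined with a consequence rating in a simple risk matrix.
    LIKELIHOOD_FROM_CVSS = [(9.0, "high"), (7.0, "medium"), (0.0, "low")]
    RISK_MATRIX = {
        ("low", "minor"): "low",       ("low", "major"): "medium",
        ("medium", "minor"): "medium", ("medium", "major"): "high",
        ("high", "minor"): "high",     ("high", "major"): "critical",
    }

    def likelihood(cvss_base_score):
        for threshold, label in LIKELIHOOD_FROM_CVSS:
            if cvss_base_score >= threshold:
                return label
        return "low"

    def risk_level(cvss_base_score, consequence):
        return RISK_MATRIX[(likelihood(cvss_base_score), consequence)]

    print(risk_level(8.1, "major"))  # -> "high"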
Towards GDPR Compliant Software Design: A Formal Framework for Analyzing System Models
Abstract
Software systems nowadays store and process large amounts of personal data of individuals, rendering privacy protection a major issue of concern during their development. The EU General Data Protection Regulation addresses this issue with several provisions for protecting the personal data of individuals and makes it compulsory for companies and individuals to comply with the regulation. However, few methodologies have been considered to date to support GDPR compliance during system development. In this paper, we propose a process-calculus framework for formal modeling of software systems during the design phase, and validation of properties relating to the GDPR notion of Consent, the Right to Erasure, the Right to Access, and the Right to Rectification. Moreover, the framework enables the treatment of the notion of purpose through privacy policy satisfaction. Validation is performed with static analysis using type checking. Our work is the first step towards a framework that will implement Privacy-by-Design and GDPR compliance throughout the development cycle of a software system.
Evangelia Vanezi, Dimitrios Kouzapas, Georgia M. Kapitsaki, Anna Philippou
Evaluation of Software Product Quality Metrics
Abstract
Computing devices and their associated software govern everyday life and form the backbone of safety-critical systems in banking, healthcare, automotive and other fields. Increasing system complexity, quickly evolving technologies and paradigm shifts have kept software quality research at the forefront. Standards such as ISO 25010 express quality in terms of sub-characteristics such as maintainability, reliability and security. A significant body of literature attempts to link these sub-characteristics with software metric values, with the end goal of creating a metric-based model of software product quality. However, research also identifies the most important existing barriers, among them the diversity of software application types, development platforms and languages. Additionally, unified definitions that would make software metrics truly language-agnostic do not exist and would be difficult to implement given the variety across programming languages. This is compounded by the fact that many existing studies do not detail their methodology and tooling, which precludes researchers from carrying out surveys that enable data analysis on a larger scale. In this paper, we propose a comprehensive study of metric values in the context of three complex, open-source applications. We align our methodology and tooling with those of existing research and present them in detail in order to facilitate comparative evaluation. We study metric values across the entire 18-year development history of our target applications in order to capture the longitudinal view we found lacking in the existing literature. We identify metric dependencies and check their consistency across applications and their versions. At each step, we carry out a comparative evaluation against existing research and present our results.
Arthur-Jozsef Molnar, Alexandra Neamţu, Simona Motogna
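One of the analyses mentioned, checking metric dependencies, can be illustrated with a rank correlation over per-class metric values. The values below are invented and the metric set (LOC, WMC, CBO) is only an example; the study's own methodology and tooling are detailed in the paper.

    # Illustrative sketch with invented values: rank correlation between
    # class-level metrics, a simple way to surface metric dependencies.
    import pandas as pd

    metrics = pd.DataFrame({
        "loc": [120, 340, 150, 410, 180, 500],   # lines of code
        "wmc": [8, 25, 10, 30, 12, 38],          # weighted methods per class
        "cbo": [3, 9, 4, 11, 4, 13],             # coupling between objects
    })

    # A strong correlation that stays stable across releases suggests a
    # dependency between two metrics (e.g. size and complexity).
    print(metrics.corr(method="spearman").round(2))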
Model-Driven Development Applied to Mobile Health and Clinical Scores
Abstract
Clinical scores are a widely discussed topic in health care and part of modern clinical practice. In general, these tools predict clinical outcomes, perform risk stratification, aid clinical decision making, assess disease severity or assist diagnosis. The problem is that clinical score data are traditionally collected manually, which can lead to incorrect data and results. Moreover, current mobile health (mHealth) solutions that address this problem computationally by collecting biological/health data from humans in real time are limited, because those systems are developed around the specificities of a single clinical score. This work addresses productivity in developing mHealth solutions for clinical scores through the use of Model-Driven Development concepts. The paper focuses on describing DSML4ClinicalScores, a high-level domain-specific modeling language that uses the Ecore metamodel to describe a clinical score specification. To propose DSML4ClinicalScores, we analyzed 89 clinical scores in order to define the artifacts of the proposed metamodel. From the concrete model created with DSML4ClinicalScores, we apply model transformation techniques to automatically generate software components in the domains of mHealth and clinical scores. Finally, we evaluate the proposed approach through the modeling of eight different clinical scores to validate the DSML4ClinicalScores metamodel, and through the development of a practical case study using a specific clinical score to illustrate how to use the proposal in a clinical scenario.
Allan Fábio de Aguiar Barbosa
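The model-driven idea, a declarative score specification from which evaluation logic is derived, can be sketched as follows. The score, its criteria and its thresholds are hypothetical, and the Python dictionary is only a stand-in for a DSML4ClinicalScores model, not the language itself.

    # Illustrative sketch, not DSML4ClinicalScores: a hypothetical clinical
    # score captured as data and evaluated generically.
    SCORE_SPEC = {
        "name": "Demo Risk Score",
        "criteria": [
            {"item": "age_over_65",      "points": 1},
            {"item": "systolic_bp_high", "points": 2},
            {"item": "diabetes",         "points": 1},
        ],
        # (minimum total, interpretation), in ascending order
        "interpretation": [(0, "low risk"), (2, "moderate risk"), (3, "high risk")],
    }

    def evaluate(spec, findings):
        total = sum(c["points"] for c in spec["criteria"] if findings.get(c["item"]))
        label = spec["interpretation"][0][1]
        for cutoff, text in spec["interpretation"]:
            if total >= cutoff:
                label = text
        return total, label

    print(evaluate(SCORE_SPEC, {"age_over_65": True, "diabetes": True}))  # (2, 'moderate risk')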
Model-Driven Software Development Combined with Semantic Mutation of UML State Machines
Abstract
The paper presents an approach to the semantic mutation of state machines that specify class behavior in Model-Driven Software Development. The mutations target different variants of UML state machine behavior. Mutation testing of a target application allows comparing different semantic interpretations and verifying a set of test cases. We present a notation for a process combining model-driven development with semantic mutation and semantic consequence-oriented mutations. The origin and details of the proposed mutation operators are discussed. The approach is supported by the Framework for eXecutable UML (FXU), which creates a C# application from UML classes and state machines. The tool architecture has been reengineered in order to introduce semantic mutation operators into the model-driven development process and to perform testing on a set of semantic mutants. The tool and the implemented mutation operators have been verified in a case study on a status service for a social network.
Anna Derezinska, Łukasz Zaremba
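The notion of a semantic mutation, changing how a model is interpreted rather than the model itself, can be sketched on a toy state machine. The machine, the events and the operator below (switching how a conflict between two transitions enabled by the same event is resolved) are invented for illustration and are not the FXU operators.

    # Illustrative sketch, not the FXU operators: a toy state machine and a
    # "semantic" mutation that flips the conflict-resolution rule.
    MACHINE = {
        "Idle":    [("start", "Running"), ("start", "Diagnostics")],  # conflicting
        "Running": [("stop", "Idle")],
    }

    def step(machine, state, event, pick_last=False):
        enabled = [target for ev, target in machine.get(state, []) if ev == event]
        if not enabled:
            return state
        return enabled[-1] if pick_last else enabled[0]

    print(step(MACHINE, "Idle", "start"))                  # Running (original semantics)
    print(step(MACHINE, "Idle", "start", pick_last=True))  # Diagnostics (mutant semantics)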
Model-Driven Automatic Question Generation for a Gamified Clinical Guideline Training System
Abstract
Clinical practice guidelines (CPGs) are a cornerstone of modern medical practice since they summarize the vast medical literature and provide care recommendations based on the current best evidence. However, there are barriers to CPG utilization such as lack of awareness and lack of familiarity of the CPGs by clinicians due to ineffective CPG dissemination and implementation. This calls for research into effective and scalable CPG dissemination strategies that will improve CPG awareness and familiarity. We describe a model-driven approach to design and develop a gamified e-learning system for clinical guidelines where the training questions are generated automatically. We also present the prototype developed using this approach. We use models for different aspects of the system, an entity model for the clinical domain, a workflow model for the clinical processes and a game engine to generate and manage the training sessions. We employ gamification to increase user motivation and engagement in the training of guideline content. We conducted a limited formative evaluation of the prototype system and the users agreed that the system would be a useful addition to their training. Our proposed approach is flexible and adaptive as it allows for easy updates of the guidelines, integration with different device interfaces and representation of any guideline.
Job N. Nyameino, Ben-Richard Ebbesvik, Fazle Rabbi, Martin C. Were, Yngve Lamo
New Method to Reduce Verification Time of Reconfigurable Real-Time Systems Using R-TNCESs Formalism
Abstract
Nowadays, several kinds of systems, such as manufacturing, aerospace, medical, and telecommunication systems, face new challenges such as fault tolerance, timely response, flexibility, and modularity. To meet these requirements, systems have had to include new abilities. Consequently, systems become more complex, and their verification becomes expensive in terms of computation time and memory. Reconfigurable real-time systems are among these complex systems; they encompass reconfigurability constraints and are subject to real-time requirements. Their verification is often a hard task due to their complex behavior. In this paper, we formally model these systems using the reconfigurable timed net condition/event systems (R-TNCESs) formalism, a Petri net extension for reconfigurable systems. We then propose a new methodology to efficiently verify real-time properties by avoiding redundant computations. The paper's contributions are applied to a benchmark manufacturing system, and a performance evaluation is carried out to demonstrate them and to compare them with related work.
Yousra Hafidi, Laid Kahloul, Mohamed Khalgui, Mohamed Ramdani
On Improving R-TNCES Rebuilding for Reconfigurable Real-Time Systems
Abstract
This paper deals with the improved verification of real-time systems, extending classical formal verification with the rebuilding of reconfigurable timed net condition/event systems (R-TNCESs). Previous computation tree logic (CTL) model repair approaches make model checking capable of generating a new correct model from a faulty one at the debugging level. We propose an R-TNCES rebuilding method that allows both verification and modification of real-time system models. The proposed approach generates, from an incorrect model, a new one that satisfies a given computation tree logic / timed computation tree logic formula. Temporal logic formulas (CTL/TCTL) are defined to specify the system's functional and temporal properties, respectively. The paper provides an efficient algorithm with a set of transformation rules to achieve the rebuilding phase. Finally, the FESTO MPS platform is used as a case study to demonstrate the proposed rebuilding method for real-time system models. The obtained results show the efficiency of our contribution and its scalability to large, complex systems.
Mohamed Ramdani, Laid Kahloul, Mohamed Khalgui, Yousra Hafidi
Towards the Efficient Use of Dynamic Call Graph Generators of Node.js Applications
Abstract
JavaScript is the most popular programming language these days and is used in many environments such as node.js. The node.js ecosystem allows sharing JavaScript code easily, and the shared code can be reused as building blocks to create new applications. However, this ever-growing environment has its own challenges as well. One of them is security: even simple applications can have many dependencies, and these dependencies might contain malware. Another challenge is fault localization: finding the cause of a fault can be difficult in software with many dependencies. Dynamic program analysis can help solve these problems; in particular, dynamic call graphs have been used successfully in both cases before. Since no call graph generators were previously available for node.js, we created our own. In this paper, we compare the call graphs constructed by our generator tools. We show that a large amount of engine-specific information is present in the call graphs and that filtering can efficiently remove it. We also discuss how the asynchronous nature of JavaScript affects call graphs. Finally, we show the performance overhead of call graph generation and its side effects on module testing.
Zoltán Herczeg, Gábor Lóki, Ákos Kiss
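The filtering step mentioned in the abstract can be sketched on a call graph kept as an edge list. The node naming convention ("module:function") and the filter prefixes below are assumptions made for illustration; they are not the authors' generators or their actual output format.

    # Illustrative sketch: drop edges that touch engine-internal frames so
    # that only application- and library-level calls remain.
    EDGES = [
        ("app.js:main", "node:internal/modules/cjs/loader:require"),
        ("app.js:main", "lib/orders.js:createOrder"),
        ("lib/orders.js:createOrder", "node:internal/timers:setTimeout"),
        ("lib/orders.js:createOrder", "lib/db.js:insert"),
    ]

    ENGINE_PREFIXES = ("node:internal/", "node:events", "v8:")

    def filter_engine_nodes(edges):
        return [(caller, callee) for caller, callee in edges
                if not caller.startswith(ENGINE_PREFIXES)
                and not callee.startswith(ENGINE_PREFIXES)]

    for caller, callee in filter_engine_nodes(EDGES):
        print(caller, "->", callee)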
Comparison of Computer Vision Approaches in Application to the Electricity and Gas Meter Reading
Abstract
This chapter presents a comparison of computer vision approaches applied to the meter reading process for standard (non-smart) electricity and gas meters. In this work, we analyse four techniques: Google Cloud Vision, AWS Rekognition, Tesseract OCR, and Azure's Computer Vision. Electricity and gas meter reading is a time-consuming task, which in most cases is done manually. Some approaches propose the use of smart meters that report their readings automatically. However, this solution is expensive and requires both the replacement of existing meters, even when they are functional and new, and extensive changes to the whole meter reading system.
Maria Spichkova, Johan van Zyl, Siddharth Sachdev, Ashish Bhardwaj, Nirav Desai
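Of the four techniques compared, Tesseract OCR is the one that runs locally; the sketch below shows a minimal digit-reading call through pytesseract. The image path is a placeholder, and in practice meter photos need cropping and thresholding before OCR becomes reliable; whitelist handling also depends on the Tesseract engine version.

    # Illustrative sketch: read a line of digits from a cropped meter display.
    from PIL import Image
    import pytesseract

    # --psm 7 treats the image as a single text line; the whitelist limits
    # recognition to digits.
    CONFIG = "--psm 7 -c tessedit_char_whitelist=0123456789"

    def read_meter(image_path):
        digits = pytesseract.image_to_string(Image.open(image_path), config=CONFIG)
        return digits.strip()

    print(read_meter("meter_display.png"))  # placeholder path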
Expanding Tracing Capabilities Using Dynamic Tracing Data
Abstract
Software traceability enables gaining insight into artifact relationships and dependencies throughout software development. This information can be used to support project maintenance and to reduce costs, e.g. by estimating the impact of artifact changes. Many traceability applications require manual effort for creating and managing the necessary data. Current approaches aim at reducing this effort by automating various involved tasks. To support this, we propose an enrichment of tracing data by capturing interactions that influence the artifacts’ life-cycle, which we refer to as dynamic tracing data. Its purpose is to expand capabilities of traceability applications and to enable assistance in development tasks. In this paper, we present our research methodology and current results, most importantly a flexible and modular framework for capturing and using dynamic tracing data, as well as an example scenario to demonstrate a possible implementation and usage of the framework.
Dennis Ziegenhagen, Andreas Speck, Elke Pulvermueller
Automated Software Measurement Strategies Elaboration Using Unsupervised Learning Data Analysis
Abstract
Software measurement becomes more complex along with the software systems themselves. Indeed, the supervision of such systems requires managing a lot of data. Measurement plans are heavy and consume time and resources due to the number of software properties to analyze. Moreover, the design of measurement processes depends on the software project, the language used, the hardware used, etc. Thus, to evaluate a software system one needs to know the context of the measured object, and likewise, to analyze a software evaluation one needs to know its context. That is what makes it difficult to automate software measurement analysis. Formal models and standards have been defined to facilitate some of these aspects. However, the maintenance of measurement activities still involves complex tasks.
In previous work, we conducted research to fully automate the generation of software measurement plans at runtime, in order to obtain more flexible measurement processes adapted to the software's needs. In this paper we aim to improve on that work. The idea is to learn from historical measurements in order to generate an analysis model corresponding to the context. To this end, we propose to use a learning technique that learns from a measurement dataset of the evaluated software, as an expert would, and generates the corresponding analysis model.
The purpose is to use an unsupervised learning algorithm to automatically generate an analysis model, so as to make efficient use of the experts' effort, time and resources.
The approach has been implemented and integrated into an industrial platform, and experiments have been carried out to show its scalability and effectiveness. A discussion of the results is provided.
Sarah A. Dahab, Stephane Maag
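As one possible instantiation of the unsupervised step, the sketch below clusters historical measurements with k-means so that the resulting centroids can act as a rough analysis model (e.g. "nominal" vs. "degraded" behaviour). The metrics, values and choice of k are invented; the paper's industrial implementation is not shown here.

    # Illustrative sketch: cluster measurement snapshots and keep the centroids
    # as reference points for later analysis.
    import numpy as np
    from sklearn.cluster import KMeans

    # Rows: measurement snapshots; columns: example metrics
    # (response time in ms, memory in MB, error rate in %).
    history = np.array([
        [120, 512, 0.1], [130, 530, 0.2], [125, 520, 0.1],
        [480, 900, 2.5], [510, 950, 3.0], [495, 910, 2.8],
    ])

    model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(history)
    print("centroids:", model.cluster_centers_.round(1))
    print("labels:", model.labels_)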
Agile Scaled Steps of Doneness: A Standardized Procedure to Conceptualizing and Completing User Stories Across Scrum Teams and Industries
Abstract
Agile software development (ASD) requires a shift in culture compared to traditional waterfall software development. Traditional methods concentrate on project scope, using it to determine cost and time schedule, whereas agile concentrates on business values, using them to determine quality levels and possible technology constraints; waterfall methods are suited to well-arranged and predictable environments. For an organization, one of the most important differences between agile and waterfall is the return on investment. Organizations are created to generate revenue and, moreover, profit for their stakeholders. Agile can help produce an earlier return on that investment, enabling an organization to get the maximum returns before its competitors start penetrating its market share. Agile also has scaling frameworks to assist an organization's transition to ASD. This paper builds on the original ENASE paper titled "Scaling A Standardized Procedure To Conceptualizing And Completing User Stories Across Scrum Teams And Industries" by extending the application of the Scaled Steps of Doneness procedure from four teams to six teams and analyzing the results.
Matthew Ormsby, Curtis Busby-Earle
Indoor Localization Techniques Within a Home Monitoring Platform
Abstract
This paper details a number of indoor localization techniques developed for real-time monitoring of older adults. These were developed within the framework of the i-Light research project that was funded by the European Union. The project targeted the development and initial evaluation of a configurable and cost-effective cyber-physical system for monitoring the safety of older adults who are living in their own homes. Localization hardware consists of a number of custom-developed devices that replace existing luminaires. In addition to lighting capabilities, they measure the strength of a Bluetooth Low Energy signal emitted by a wearable device on the user. Readings are recorded in real time and sent to a software server for analysis. We present a comparative evaluation of the accuracy achieved by several server-side algorithms, including Kalman filtering, a look-back heuristic as well as a neural network-based approach. It is known that approaches based on measuring signal strength are sensitive to the placement of walls, construction materials used, the presence of doors as well as existing furniture. As such, we evaluate the proposed approaches in two separate locations having distinct building characteristics. We show that the proposed techniques improve the accuracy of localization. As the final step, we evaluate our results against comparable existing approaches.
Iuliana Marin, Maria-Iuliana Bocicor, Arthur-Jozsef Molnar
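Among the server-side algorithms mentioned, Kalman filtering is easy to sketch in one dimension: the snippet below smooths a noisy RSSI series before any distance or position estimate is derived from it. The noise parameters and readings are invented; this is not the i-Light implementation.

    # Illustrative sketch: one-dimensional Kalman filter over RSSI samples.
    def kalman_smooth(readings, process_var=1e-3, measurement_var=4.0):
        estimate, error = readings[0], 1.0
        smoothed = [estimate]
        for z in readings[1:]:
            error += process_var                      # predict
            gain = error / (error + measurement_var)  # update
            estimate += gain * (z - estimate)
            error *= (1 - gain)
            smoothed.append(estimate)
        return smoothed

    rssi = [-62, -71, -65, -80, -64, -66, -63]
    print([round(v, 1) for v in kalman_smooth(rssi)])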
Backmatter
Metadata
Title
Evaluation of Novel Approaches to Software Engineering
Editors
Ernesto Damiani
George Spanoudakis
Prof. Leszek A. Maciaszek
Copyright Year
2020
Electronic ISBN
978-3-030-40223-5
Print ISBN
978-3-030-40222-8
DOI
https://doi.org/10.1007/978-3-030-40223-5
