
2018 | Book

Software Quality: Methods and Tools for Better Software and Systems

10th International Conference, SWQD 2018, Vienna, Austria, January 16–19, 2018, Proceedings

About this book

This book constitutes the refereed proceedings of the 10th Software Quality Days Conference, SWQD 2018, held in Vienna, Austria, in January 2018.

The Software Quality Days (SWQD) conference started in 2009 and has grown into one of the biggest software quality conferences in Europe, with a strong community. The program of the SWQD conference is designed to encompass a stimulating mixture of practical presentations and new research topics in scientific presentations. The guiding topic of SWQD 2018 is “Software Quality 4.0: Methods and Tools for better Software and Systems”, as novel technologies bring new challenges and may require new and adapted methods and tools to support quality assurance activities early.

The 6 full papers and 2 short papers presented in this volume were carefully reviewed and selected from 16 submissions. The volume also contains 2 invited talks. The contributions were organized in topical sections named: safety and security; requirements engineering and requirements-based testing; crowdsourcing in software engineering; software and systems architecture; experimentation in software engineering; and smart environments.

Table of Contents

Frontmatter

Safety and Security

Frontmatter
Security Challenges in Cyber-Physical Production Systems
Abstract
Within the last decade, security became a major focus in the traditional IT industry, mainly through the interconnection of systems and especially through the connection to the Internet. This opened up a huge new attack surface, which resulted in major takedowns of legitimate services and new forms of crime and destruction. It also led to the development of a multitude of new defense mechanisms and strategies, as well as the establishment of security procedures on both the organizational and the technical level. Production systems have mostly remained in isolation during these past years, with security typically focused on the perimeter. Now, with the introduction of new paradigms like Industry 4.0, this isolation is heavily questioned: physical production systems (PPSs) are connected to the IT world, resulting in cyber-physical systems that share the attack surface of traditional web-based interfaces while featuring completely different goals, construction, and parameters such as lifetime and safety. In this work, we present an outline of the major security challenges faced by cyber-physical production systems. While many of these challenges harken back to issues also present in traditional web-based IT, we thoroughly analyze the differences. Moreover, many new attack vectors have appeared in the past, either in practical attacks like Stuxnet or in theoretical work. These attack vectors exploit specific features or design elements of cyber-physical systems and are unparalleled in traditional IT. Furthermore, many mitigation strategies prevalent in traditional IT systems, e.g., patching, are not applicable in the industrial world, rendering traditional IT security strategies unfeasible. A thorough discussion of the major challenges in CPPS security is thus required in order to focus research on the most important targets.
Peter Kieseberg, Edgar Weippl
Monitoring of Access Control Policy for Refinement and Improvements
Abstract
Access control is among the most important security mechanisms to put in place in order to secure applications, and XACML is the de facto standard for defining access control policies. As systems and resource utilization evolve, access control policies become increasingly difficult to manage and update according to contextual behaviour. This paper proposes a policy monitoring infrastructure able to identify abnormal policy behaviour and prevent misuse in granting or denying further accesses. The proposal relies on coverage adequacy criteria as well as KPI definitions for assessing the most common usage behaviours and providing feedback for refinement and maintenance of the current access control policy. It integrates a flexible and adaptable event-based monitoring facility for run-time validation of policy execution. A first validation on an example shows the effectiveness of the proposed approach.
Antonello Calabró, Francesca Lonetti, Eda Marchetti

Requirements Engineering and Requirements-Based Testing

Frontmatter
On Evidence-Based Risk Management in Requirements Engineering
Abstract
Background: The sensitivity of Requirements Engineering (RE) to its context makes it difficult to efficiently control problems therein, thus hampering an effective risk management intended to allow for early corrective or even preventive measures.
Problem: There is still little empirical knowledge about context-specific RE phenomena which would be necessary for an effective context-sensitive risk management in RE.
Goal: We propose and validate an evidence-based approach to assess risks in RE using cross-company data about problems, causes and effects.
Research Method: We use survey data from 228 companies and build a probabilistic network that supports the forecast of context-specific RE phenomena. We implement this approach using spreadsheets to support a lightweight risk assessment.
Results: Our results from an initial validation in 6 companies strengthen our confidence that the approach increases awareness of individual risk factors in RE, and the feedback further allows us to disseminate our approach into practice.
Daniel Méndez Fernández, Michaela Tießler, Marcos Kalinowski, Michael Felderer, Marco Kuhrmann
Requirement-Based Testing - Extracting Logical Test Cases from Requirement Documents
Abstract
Much has been written on the subject of model-based testing, i.e. deriving the test cases from the system model. The prerequisite is that the testers have access to a design model, preferably in UML. Less has been published on the subject of requirement-based testing, i.e. taking the test cases directly from the requirement document, which is usually structured text. Model-based testing has, to a certain extent, already been automated: tools process the model's XMI schema to extract logical test cases, or test conditions as they are referred to in ISO Standard 29119. No tools yet exist to extract test cases from requirement documents, since this entails natural language processing and presupposes the marking up of the documents. The author began his research in this field as early as 2003, while working as a tester in a large-scale financial services project in Vienna. There he was able to generate several thousand logical test cases and store them in the project test case database. His first internationally published paper on the subject appeared at the QSIC Conference in 2007. Since then he has published another five papers on the subject, alone and with others. At the heart of this approach is a text analysis tool named TestSpec, which processes both English and German language requirement texts. The tool has been used in several projects to set up a logical test case database and to determine the extent of the planned system test. By counting the number of test conditions contained in the requirements text, it is possible to better estimate test effort. In all cases it has proven helpful to testers in recognizing and defining test cases. It saves the effort involved in manually scanning through large text documents, often several hundred pages long, in search of features to be tested. What used to take many days can now be done within a few minutes with the help of the tool.
Not only that, but the tool also generates a test plan and a test specification in accordance with the ISO/IEEE standard. In this paper the automated approach to extracting test conditions from natural language text is explained, along with how it is implemented by the tool. In this practice-oriented paper, four test projects dealing with very different types of requirement documents are cited.
Harry M. Sneed

Crowdsourcing in Software Engineering

Frontmatter
Expert Sourcing to Support the Identification of Model Elements in System Descriptions
Abstract
Context. Expert sourcing is a novel approach to support quality assurance: it relies on methods and tooling from crowdsourcing research to split model quality assurance tasks and parallelize task execution across several expert users. Typical quality assurance tasks focus on checking an inspection object, e.g., a model, against a reference document, e.g., a requirements specification, that is considered to be correct. For example, given a text-based system description and a corresponding model such as an Extended Entity Relationship (EER) diagram, experts are guided towards inspecting the model based on so-called expected model elements (EMEs). EMEs are entities, attributes and relations that appear in the text and are reflected in the corresponding model. In common inspection tasks, EMEs are not explicitly expressed but only implicitly available via textual descriptions. Thus, a main improvement is to make EMEs explicit by using crowdsourcing mechanisms to drive model quality assurance among experts. Objective and Method. In this paper, we investigate the effectiveness of identifying EMEs through expert sourcing. To that end, we perform a feasibility study in which we compare EMEs identified through expert sourcing with EMEs provided by a task owner who has deep knowledge of the entire system specification text. Conclusions. Results of the data analysis show that the effectiveness of the crowdsourcing-style EME acquisition is influenced by the complexity of these EMEs: entity EMEs can be harvested with high recall and precision, but the lexical and semantic variations of attribute EMEs hamper their automatic aggregation and reaching consensus (these EMEs are harvested with high precision but limited recall). Based on these lessons learned, we propose a new task design for expert sourcing EMEs.
Marta Sabou, Dietmar Winkler, Sanja Petrovic

Software and Systems Architecture

Frontmatter
Are Your Requirements Covered?
Abstract
The coverage of requirements is a fundamental need throughout the software life cycle. It gives project managers an indication of how well the software meets the expected requirements. A precondition for the process is to link requirements with project artifacts, such as test cases. There are various (semi-)automated methods for deriving traceable relations between requirements and test scenarios, aiming to counteract time-consuming and error-prone manual approaches. However, even if traceability links are correctly established, coverage is calculated based on passed test scenarios without taking into account the overall code base written to realize the requirement in the first place.
In this paper the “Requirements-Testing-Coverage” (ReTeCo) approach is described, which establishes links between requirements and test cases by making use of knowledge available in the tools that support the software engineering process and are part of the software engineering tool environment. In contrast to traditional approaches, ReTeCo generates traceability links indirectly by gathering and analyzing information from the version control system, the ticketing system, and test coverage tools. Since the approach takes into account a larger information base, it is able to calculate coverage reports on a fine-grained contextual level rather than on the result of high-level artifacts.
Richard Mordinyi
High Quality at Short Time-to-Market: Challenges Towards This Goal and Guidelines for the Realization
Abstract
High quality and short time-to-market are business goals that have been relevant for almost every company for decades. However, the duration between new releases has decreased heavily, and the level of quality that customers expect has increased drastically in recent years. Achieving both business goals implies investments that have to be considered. In this article, we sketch 22 best practices that help companies strive towards a shorter time-to-market while providing high-quality software products. We furthermore share experiences from a practical environment where a selected set of guidelines was applied. Especially DevOps, including an automated deployment pipeline, was an essential step towards high quality at short time-to-market.
Frank Elberzhager, Matthias Naab
Prioritizing Corrective Maintenance Activities for Android Applications: An Industrial Case Study on Android Crash Reports
Abstract
Context: Unhandled code exceptions are often the cause of a drop in the number of users. In the highly competitive market of Android apps, users commonly stop using an application when they encounter a problem generated by unhandled exceptions. This is often reflected in a negative comment in the Google Play Store, and developers are usually not able to reproduce the issue reported by the end users because of a lack of information.
Objective: In this work, we present an industrial case study aimed at prioritizing the removal of bugs related to uncaught exceptions. Therefore, we (1) analyzed crash reports of an Android application developed by a public transportation company, (2) classified the uncaught exceptions that caused the crashes, and (3) prioritized the exceptions according to their impact on users.
Results: The analysis of the exceptions showed that seven exceptions generated 70% of the overall errors and that it was possible to solve more than 50% of the exception-related issues by fixing just six Java classes. Moreover, as a side result, we discovered that the exceptions were highly correlated with two code smells, namely “Spaghetti Code” and “Swiss Army Knife”. The results of this study helped the company understand how to better focus their limited maintenance effort. Additionally, the adopted process can be beneficial for any Android developer in understanding how to prioritize the maintenance effort.
Valentina Lenarduzzi, Alexandru Cristian Stan, Davide Taibi, Gustavs Venters, Markus Windegger

Experimentation in Software Engineering

Frontmatter
Evaluation of an Integrated Tool Environment for Experimentation in DSL Engineering
Abstract
Domain-specific languages (DSLs) are a popular means for providing customized solutions to a certain problem domain. So far, however, language workbenches lack sufficient built-in features for providing decision support when it comes to language design and improvement. Controlled experiments can provide data-driven decision support for both researchers and language engineers when comparing different languages or language features. This paper provides an evaluation of an integrated end-to-end tool environment for performing controlled experiments in DSL engineering. The experimentation environment is presented by means of a running example from engineering domain-specific languages for acceptance testing. The tool is built on and integrated into the Meta Programming System (MPS) language workbench. For each step of an experiment, the language engineer is supported by suitable DSLs and tools, all within the MPS platform. The evaluation, from the viewpoint of the experiment subjects, is based on the technology acceptance model (TAM). Results reveal that the subjects found the DSL experimentation environment intuitive and easy to use.
Florian Häser, Michael Felderer, Ruth Breu

Smart Environments

Frontmatter
Leveraging Smart Environments for Runtime Resources Management
Abstract
Smart environments (SEs) have gained widespread attention due to their flexible integration into everyday life. Applications leveraging smart environments rely on the regular exchange of critical information and need accurate models for monitoring and controlling SE behavior. Different rules are usually specified and centralized for correlating sensor data, as well as for managing resources and regulating access to them, thus avoiding security flaws. In this paper, we propose a dynamic and flexible infrastructure able to perform runtime resource management by decoupling the different levels of SE control rules. This simplifies their continuous updating and improvement, thus reducing the maintenance effort. The proposed solution integrates low-cost wireless technologies and can be easily extended to include other existing equipment. A first validation of the proposed infrastructure on a case study is also presented.
Paolo Barsocchi, Antonello Calabró, Francesca Lonetti, Eda Marchetti, Filippo Palumbo
Backmatter
Metadata
Title
Software Quality: Methods and Tools for Better Software and Systems
Editors
Dietmar Winkler
Stefan Biffl
Johannes Bergsmann
Copyright Year
2018
Electronic ISBN
978-3-319-71440-0
Print ISBN
978-3-319-71439-4
DOI
https://doi.org/10.1007/978-3-319-71440-0
