Rapid quality assurance with Requirements Smells
Introduction
Defects in requirements, such as ambiguities or incomplete requirements, can lead to time and cost overruns in a project (Méndez Fernández and Wagner, 2015). Some of these issues require specific domain knowledge to be uncovered. For example, it is very difficult to decide whether a requirements artifact is complete without domain knowledge. Other issues, however, can be detected more easily: if a requirement states that a sensor should work with sufficient accuracy without detailing what sufficient means in that context, the requirement is vague and consequently not testable. The same holds for other pitfalls such as loopholes: phrasing that a certain property of the software under development should be fulfilled as far as possible leaves room for subjective (mis-)interpretation and, thus, can have severe consequences during the acceptance phase of a product (Femmer et al., 2014; ISO/IEC/IEEE).
To detect such quality defects, quality assurance processes often rely on reviews. Reviews of requirements artifacts, however, need to involve all relevant stakeholders (Salger, 2013), who must manually read and understand each requirements artifact. Moreover, reviews are difficult to perform: they require a high level of domain knowledge and expertise from the reviewers (Salger, 2013), and the quality of their outcome depends on the quality of the reviewer (Zelkowitz et al., 1983). On top of all this, reviewers can be distracted by superficial quality defects such as the aforementioned vague formulations or loopholes. Reviews are therefore time-consuming and costly.
Therefore, quality assurance processes would benefit from faster feedback cycles in requirements engineering (RE), which support requirements engineers and project participants in immediately discovering certain types of pitfalls in requirements artifacts. Such feedback cycles could enable a lightweight quality assurance, e.g., as a complement to reviews.
Since requirements in industry are written almost exclusively in natural language (Mich et al., 2004), and natural language has no formal semantics, quality defects in requirements artifacts are hard to detect automatically. To address the challenge of providing fast feedback despite imperfect knowledge of a requirement's semantics, we created an approach based on what we call Requirements (Bad) Smells: concrete symptoms of quality defects in requirements artifacts, for which we enable rapid feedback through automatic smell detection.
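The core idea of symptom-based detection can be sketched in a few lines. The word lists below are illustrative examples only, not the paper's actual smell catalogue, and the function name is hypothetical:

```python
# Minimal sketch of a dictionary-based Requirements Smell finder.
# The lexicon entries are illustrative, not the catalogue derived
# from ISO 29148 in this paper.

SMELL_LEXICON = {
    "Vague Term": ["sufficient", "appropriate", "adequate", "user-friendly"],
    "Loophole": ["as far as possible", "if feasible", "where applicable"],
}

def find_smells(requirement: str) -> list[tuple[str, str]]:
    """Return (smell_name, matched_phrase) pairs found in a requirement."""
    text = requirement.lower()
    findings = []
    for smell, phrases in SMELL_LEXICON.items():
        for phrase in phrases:
            if phrase in text:
                findings.append((smell, phrase))
    return findings

req = "The sensor shall work with sufficient accuracy as far as possible."
print(find_smells(req))
# [('Vague Term', 'sufficient'), ('Loophole', 'as far as possible')]
```

A finding does not prove a defect; like code smells, it merely flags a location that merits a closer look by a human.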
In this paper, we contribute an analysis of whether and to what extent Requirements Smell analysis can support quality assurance in RE. To this end, we
1. define the notion of Requirements Smells and integrate the Requirements Smells concept into an analysis approach to complement (constructive and analytical) quality assurance in RE,
2. present a prototypical realization of our smell detection approach, which we call Smella, and
3. conduct an empirical investigation of our approach to better understand the usefulness of a Requirements Smell analysis in quality assurance.
Our empirical evaluation involves three industrial contexts: Daimler AG as a representative of the automotive sector, Wacker Chemie AG as a representative of the chemical sector, and TechDivision GmbH as an agile-specialized company. We complement the industrial contexts with an academic one, in which we apply Smella to 51 requirements artifacts created by students. With our evaluations, we aim to determine the accuracy of our smell analysis from both a technical and a practical perspective, the latter capturing the context-specific relevance of the detected smells. We further analyze which requirements quality defects can be detected with smells, and we conclude with a discussion of how smell detection could support the (industrial) quality assurance (QA) process.
This article extends our previously published workshop paper (Femmer et al., 2014b) in the following aspects: we provide a richer discussion of the notion of Requirements Smells and give a precise definition. We introduce our (extended) tool-supported realization of our smell analysis approach and outline its integration into the QA process. We extend our first two case studies with a further industrial one as well as with an investigation in an academic context, expanding our initial empirical investigations by
1. investigating the accuracy of our smell detection, including precision, recall, and relevance from a practical perspective,
2. analyzing which quality defects can be detected with smells, and
3. gathering practitioners' feedback on how they would integrate smell detection into their QA process, considering both formal and agile process environments.
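The accuracy measures in point 1 follow the usual definitions. As a minimal sketch, assuming detector findings and reviewer annotations are represented as sets of defect locations (the function and the example sets are hypothetical):

```python
def precision_recall(detected: set, annotated: set) -> tuple[float, float]:
    """Precision: fraction of detected findings that are true defects.
    Recall: fraction of annotated defects that the detector found."""
    true_positives = len(detected & annotated)
    precision = true_positives / len(detected) if detected else 0.0
    recall = true_positives / len(annotated) if annotated else 0.0
    return precision, recall

# Hypothetical example: detector flags lines 3, 7, 9; reviewers marked 3, 9, 12.
p, r = precision_recall({3, 7, 9}, {3, 9, 12})
print(p, r)  # 0.666... 0.666...
```

The practical perspective additionally asks whether a true positive is *relevant* in its context, which no set-based metric alone can capture.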
The remainder of this paper is structured as follows. In Section 2, we describe previous work in the area. In Section 3, we define the concept of Requirements Smells and describe how we derived a set of Requirements Smells from ISO 29148. We introduce the tool realization in Section 4 and discuss the integration of smell detection in the context of quality assurance in Section 5. In Section 6, we report on the empirical study that we set up to evaluate our approach, before concluding our paper in Section 7.
Related work
In the following, we discuss work relating to the concept of natural language processing and smells in general, followed by quality assurance in RE, before critically discussing currently open research gaps.
Requirements Smells
We first introduce the terminology on Requirements Smells as used in this paper. In a second step, we define those smells we derived from ISO 29148 and which we use in our studies, before describing the tool realization in the next section.
Smella: a prototype for Requirements Smell detection
Requirements Smell detection, as presented in this paper, serves to support manual quality assurance tasks (see also the next section). The smell detection is implemented on top of the software quality analysis toolkit ConQAT, a platform for source code analysis, which we extended with the required NLP features. In the following, we introduce the process for the automatic part of the approach, i.e. the detection and reporting of Requirements Smells. To the best of our…
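The automatic part of such an approach, splitting an artifact into lines, running each detector, and reporting findings with their positions, can be illustrated with a simplified stand-in. This is not Smella's implementation (which builds on ConQAT and full NLP such as lemmatization and POS tagging); the detectors here are plain regular expressions and the smell names are illustrative:

```python
import re

# Illustrative smell-detection pipeline: scan an artifact line by line,
# apply each detector, and collect (line, smell, matched text) findings.
DETECTORS = {
    "Subjective Language": re.compile(r"\b(user[- ]friendly|easy to use)\b", re.I),
    "Open-ended Term": re.compile(r"\b(etc\.?|and so on)\b", re.I),
}

def analyze(artifact: str) -> list[tuple[int, str, str]]:
    findings = []
    for lineno, line in enumerate(artifact.splitlines(), start=1):
        for smell, pattern in DETECTORS.items():
            for match in pattern.finditer(line):
                findings.append((lineno, smell, match.group()))
    return findings

spec = "R1: The UI shall be user-friendly.\nR2: Export CSV, PDF, etc."
for finding in analyze(spec):
    print(finding)
```

Reporting findings with their exact location is what makes the output actionable for a reviewer, who can jump straight to the flagged phrase.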
Requirements Smell detection in the process of quality assurance
The Requirements Smell detection approach described in the previous sections serves the primary purpose of supporting quality assurance in RE. The detection process itself is, however, not restricted to particular quality assurance tasks, nor does it depend on a particular (software) process model, as we will show in Section 6. Hence, a smell detection, similar to the notion of quality itself, always depends on the views in a socio-economic context. Thus, how to integrate smell detection into…
Evaluation
For a better empirical understanding of smells in requirements artifacts, we conducted an exploratory multi-case study with both industrial and academic cases. We rely on case study research rather than other techniques, such as controlled experiments, because we want to evaluate our approach in practical settings under realistic conditions. For the design and reporting of the case study, we largely follow the guidelines of Runeson and Höst (2008).
Conclusion
In this paper, we defined Requirements Smells and presented an approach to the detection of Requirements Smells, which we empirically evaluated in a multi-case study. In the following, we summarize our conclusions, relate them to existing evidence on the detection of natural language quality defects in requirements artifacts, and discuss the impact and limitations of our approach and its evaluation. We close by outlining future work.
Acknowledgments
We would like to thank Elmar Juergens, Michael Klose, Ilona Zimmer, Joerg Zimmer, Heike Frank, Jonas Eckhardt as well as the software engineering students of Stuttgart University for their support during the case studies and feedback on earlier drafts of this paper.
This work was performed within the project Q-Effekt; it was partially funded by the German Federal Ministry of Education and Research (BMBF) under grant no. 01IS15003 A-B. The authors assume responsibility for the content.
References (76)
- et al.: An improved inspection technique. Commun. ACM (1993)
- et al.: Naming the pain in requirements engineering: a design for a global family of surveys and first results from Germany. Inf. Software Technol. (2015)
- et al.: A literature survey on international standards for systems requirements engineering. Proceedings of the Conference on Systems Engineering Research (2013)
- et al.: On the systematic analysis of natural language requirements with CIRCE. Autom. Software Eng. (2006)
- et al.: Towards an inspection technique for use case models. Proceedings of the 14th International Conference on Software Engineering and Knowledge Engineering (2002)
- Kanban (2010)
- et al.: Automated checking of conformance to requirements templates using natural language processing. IEEE Trans. Software Eng. (2015)
- et al.: The case for dumb requirements engineering tools. Requirements Engineering: Foundation for Software Quality (2012)
- et al.: A new quality model for natural language requirements specifications. Requirements Engineering: Foundation for Software Quality (2006)
- et al.: From contract drafting to software specification: linguistic sources of ambiguity. Technical Report (2003)
- A few billion lines of code later: using static analysis to find bugs in the real world. Commun. ACM
- Quality analysis of NL requirements: an industrial case study. 13th IEEE International Requirements Engineering Conference
- Identifying nocuous ambiguities in natural language requirements. 14th IEEE International Requirements Engineering Conference
- Supporting use-case reviews. Business Information Systems
- Writing Effective Use Cases
- User Stories Applied: For Agile Software Development
- Identifying and measuring quality in a software requirements specification. Proceedings First International Software Metrics Symposium
- Ambiguity in natural language software requirements: a case study. Requirements Engineering: Foundation for Software Quality
- Higher quality requirements specifications through natural language patterns. Software: Science, Technology and Engineering
- Refactoring test code
- An automatic quality evaluation for natural language requirements. Proceedings of the Seventh International Workshop on Requirements Engineering: Foundation for Software Quality
- The linguistic approach to the natural language requirements quality: benefit of the use of an automatic tool. Proceedings 26th Annual NASA Goddard Software Engineering Workshop
- Design and code inspections to reduce errors in program development. Software Pioneers
- Empirical principles and an industrial case study in retrieving equivalent requirements via natural language processing techniques. IEEE Trans. Software Eng.
- Application of linguistic techniques for use case analysis. Requirements Eng.
- On the impact of passive voice requirements on domain modelling. Proceedings of the 8th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement
- Rapid requirements checks with requirements smells: two case studies. Proceedings of the 1st International Workshop on Rapid Continuous Software Engineering
- It's the activities, stupid! A new perspective on RE quality. Proceedings of the 2nd International Workshop on Requirements Engineering and Testing
- Refactoring: Improving the Design of Existing Code
- A framework to measure and improve the quality of textual requirements. Requirements Eng.
- Lightweight validation of natural language requirements. Software: Pract. Exper.
- Ambiguity detection: towards a tool explaining ambiguity sources. Requirements Engineering: Foundation for Software Quality
- Hunting for smells in natural language tests. Proceedings of the International Conference on Software Engineering
- Can clone detection support quality assessments of requirements specifications? Proceedings of the International Conference on Software Engineering
Henning Femmer holds an MSc in Software Engineering with honors from Technical University of Munich, Ludwig-Maximilians University Munich, and the University of Augsburg. His main research interest is the quality of requirements specifications.
Daniel Méndez Fernández studied computer science at the Ludwig-Maximilians University Munich. He received his PhD and subsequently his habilitation in Computer Science from Technical University of Munich. His research covers empirical software engineering with a particular focus on requirements engineering.
Stefan Wagner studied computer science in Augsburg and Edinburgh and holds a PhD in computer science from Technical University of Munich. Since 2011, he is a full professor of software engineering at the University of Stuttgart. His research includes work on software quality, requirements engineering, safety & security engineering and agile/lean/continuous software development.
Sebastian Eder holds an MSc in Software Engineering with honors from Technical University of Munich, Ludwig-Maximilians University Munich, and the University of Augsburg. His main research interest is software maintenance based on software usage.