
2008 | Book

Innovations for Requirement Analysis. From Stakeholders’ Needs to Formal Designs

14th Monterey Workshop 2007, Monterey, CA, USA, September 10-13, 2007. Revised Selected Papers

Edited by: Barbara Paech, Craig Martell

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

We are pleased to present the proceedings of the 14th Monterey Workshop, which took place September 10–13, 2007 in Monterey, CA, USA. In this preface, we give the reader an overview of what took place at the workshop and introduce the contributions in this Lecture Notes in Computer Science volume. A complete introduction to the theme of the workshop, as well as to the history of the Monterey Workshop series, can be found in Luqi and Kordon’s “Advances in Requirements Engineering: Bridging the Gap between Stakeholders’ Needs and Formal Designs” in this volume. This paper also contains the case study that many participants used as a problem to frame their analyses, and a summary of the workshop’s results. The workshop consisted of three keynote talks, three panels, presentations of peer-reviewed papers, as well as presentations of various position papers by the participants. The keynote speakers at this year’s workshop were Daniel Berry, Aravind Joshi, and Lori Clarke. Each of their talks was used to set the tone for the presentations and discussions for that particular day. Daniel Berry presented an overview of the needs and challenges of natural language processing in requirements engineering, with a special focus on ambiguity, in his talk “Ambiguity in Natural Language Requirements.” Aravind Joshi provided an overview of current natural language processing research in discourse analysis in the talk “Some Recent Developments in Natural Language Processing.” Finally, Lori Clarke showed how to combine formal requirements specification with natural language processing to cope with the complex domain of medical information processes in “Getting the Details Right.”

Table of contents

Frontmatter

Abstracts

Ambiguity in Natural Language Requirements Documents
(Extended Abstract)
Abstract
This paper is an extended abstract of an invited talk at the workshop that I put together using material from other talks and from papers that I and colleagues have written. The purposes of this extended abstract are to summarize the talk and to allow the reader to find the source materials for the talk directly.
Daniel M. Berry
Towards Discourse Meaning
Abstract
The overall goal is to discuss some issues concerning the dependencies at the discourse level and at the sentence level. However, first I will briefly describe the Penn Discourse Treebank (PDTB), a corpus in which we annotate the discourse connectives (explicit and implicit) and their arguments together with “attributions” of the arguments and the relations denoted by the connectives, and also the senses of the connectives. I will then focus on the complexity of dependencies in terms of (a) the elements that bear the dependency relations, (b) graph theoretic properties of these dependencies, such as nested and crossed dependencies and dependencies with shared arguments, and (c) attributions and their relationship to the dependencies, among others. I will compare these dependencies with those at the sentence level, discuss some issues that relate to the transition from the sentence level to the level of “immediate discourse,” and propose some conjectures.
An increasing interest in moving human language technology beyond the level of the sentence in text summarization, question answering, and natural language generation, among others, has recently led to the development of several resources that are richly annotated at the discourse level. Among these is the Penn Discourse TreeBank (PDTB), a large-scale resource of annotated discourse relations and their arguments over the one-million-word Wall Street Journal (WSJ) Corpus. Since the sentence-level syntactic annotations of the Penn Treebank [2] and the predicate-argument annotations of the PropBank [4] have been done over the same target corpus, the PDTB provides a richer substrate for the development and evaluation of practical algorithms while supporting the extraction of useful features pertaining to syntax, semantics, and discourse all at once. The PDTB is the first to follow a lexically grounded approach to the annotation of discourse relations. Discourse relations, when realized explicitly in the text, are annotated by marking the necessary lexical items, called discourse connectives, that express them, thus supporting their automatic identification.
The PDTB adopts a theory-neutral approach to the annotation, making no commitments about what kinds of high-level structures may be created from the low-level annotations of relations and their arguments. This approach has the appeal of allowing the corpus to be useful for researchers working within different frameworks. This theory neutrality also permits investigation of the general question of how structure at the sentence level relates to structure at the discourse level, at least for that part of the discourse structure that is “parallel” to the sentence structure [6]. In addition to the argument structure of discourse relations, the PDTB provides sense labels for each relation following a hierarchical classification scheme. Annotation of senses highlights the polysemy of connectives, making the PDTB useful for sense disambiguation tasks [3]. Finally, the PDTB separately annotates the attribution of each discourse relation and of each of its two arguments. While attribution is a relation between agents and abstract objects and thus not a discourse relation, it has been annotated in the PDTB because (a) it is useful for applications such as subjectivity analysis and multi-perspective QA [5], and (b) it exhibits an interesting and complex interaction between sentence-level structure and discourse structure [1]. The first preliminary release of the PDTB was in April 2006. A significantly extended version was released as PDTB-2.0 in February 2008 through the Linguistic Data Consortium (LDC); see http://www.seas.upenn.edu/~pdtb for the annotation manual, published papers, tutorial slides, and a link to the LDC.
Aravind K. Joshi
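The PDTB-style annotations described in the abstract above can be pictured with a small data model: one discourse relation with its connective, two arguments, a hierarchical sense label, and per-argument attribution. The sketch below is illustrative only; the class names, field names, and sense string are hypothetical and do not reflect the PDTB's actual schema or file format.

```python
# Illustrative sketch of a PDTB-style discourse-relation record.
# Names and sense labels are hypothetical, not the PDTB's real schema.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Attribution:
    source: str                    # e.g. "writer" or a quoted speaker
    text_span: Optional[str] = None


@dataclass
class DiscourseRelation:
    connective: Optional[str]      # explicit connective, or None if implicit
    arg1: str                      # first argument (a text span)
    arg2: str                      # second argument (a text span)
    sense: str                     # label from a hierarchical sense scheme
    arg1_attr: Attribution
    arg2_attr: Attribution


# Example: an explicit "because" relation annotated with a causal sense.
rel = DiscourseRelation(
    connective="because",
    arg1="The flight was cancelled",
    arg2="the crew exceeded its duty hours",
    sense="CONTINGENCY.Cause.Reason",
    arg1_attr=Attribution(source="writer"),
    arg2_attr=Attribution(source="writer"),
)
print(rel.sense)
```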
Getting the Details Right
Abstract
Requirements engineering usually involves repeated refinement of the requirements specifications, starting with high-level system goals and constraints, moving to more precise and measurable specifications of intended behavior, and ending with detailed, focused statements that provide the basis for formal reasoning. We refer to these more detailed, mathematically rigorous specifications as property specifications. Although care must be taken when defining requirements at all these levels of abstraction, it is particularly difficult to accurately capture all the subtle details associated with property specifications. To help users make these detailed decisions, the PROPEL (PROPerty ELicitation) system [1, 2] provides templates for commonly occurring property patterns [3] in which the options that need to be considered for each pattern are explicitly represented. PROPEL currently provides three views of each template and its associated options: natural language phrases to be selected, a set of hierarchical questions to be answered, or a finite-state automaton with optional labels, transitions, and accepting states to be selected. After all the options have been selected for a template, the finite-state automaton view provides a mathematically precise property specification.
Lori A. Clarke
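To make the finite-state-automaton view mentioned in the abstract above concrete, here is a minimal sketch of the kind of property such a specification can express, assuming a hypothetical "response" property: after a request event, an acknowledgement must eventually occur before the trace ends. The function and event names are invented for illustration; this is not PROPEL's own notation or output.

```python
# Minimal sketch of an FSA-style property check (not PROPEL itself):
# "after a 'request' event, an 'ack' event must eventually occur."
def check_response_property(trace):
    state = "idle"                      # accepting state
    for event in trace:
        if state == "idle" and event == "request":
            state = "awaiting_ack"      # non-accepting until an ack arrives
        elif state == "awaiting_ack" and event == "ack":
            state = "idle"
    return state == "idle"              # property holds iff we end accepting


print(check_response_property(["request", "log", "ack"]))   # True
print(check_response_property(["request", "log"]))          # False
```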
Defect Detection and Prevention (DDP)
Abstract
The Defect Detection and Prevention (DDP) decision support process, developed at JPL, has over the last 8 years been applied to assist in making a variety of spacecraft decisions. It was originally conceived as a means to help select and plan hardware assurance activities (inspections, tests, etc.) [1], generally late in the development lifecycle. Since then, however, it has been used predominantly in the early phases of system design, when information is scarce yet many critical decisions are made. Its range of application has been extended to encompass a wide variety of kinds of systems and technologies. Its predominant role has been to assist in planning the maturation of promising new technologies, helping guide the next steps in their development as they emerge from the laboratory and seek to mature sufficiently to become acceptable to spacecraft missions [2]. Although this may at first glance seem far removed from terrestrial considerations, the factors that come into play in this kind of decision-making are universal: unclear and inconsistent perceptions about requirements and capabilities, uncertainty about which driving concerns should be addressed and how best to address them, the challenge of gathering and combining information from experts of multiple different disciplines, and, inevitably, the lack of sufficient resources (money, time, CPU, power, ...) to do everything one would wish. Other significant applications of DDP have been as the risk management tool for entire spacecraft projects in their early phases of development, as an aid to planning portfolios of mission activities (e.g., [3]), and as a means to help guide R&D decisions (e.g., [4], [5]).
Martin S. Feather

Papers

Innovative Requirements Engineering Techniques

Could an Agile Requirements Analysis Be Automated?—Lessons Learned from the Successful Overhauling of an Industrial Automation System
Abstract
This paper sketches a recent, successful requirements analysis of a complex industrial automation system that mainly required a talented expert with a beginner’s mind who was willing to dig into the domain details together with a committed customer and a motivated team. With these key factors and the application of an appropriate combination of well-established and some newer methods and tools, we were able to efficiently elicit, refine, and validate requirements. From this specific context, we try to derive implications for innovative requirements analysis. We argue that in projects that go beyond simple, well-defined, and well-understood applications, automated requirements analysis is unlikely to lead to a successful specification of a system.
Thomas Aschauer, Gerd Dauenhauer, Patricia Derler, Wolfgang Pree, Christoph Steindl
Model-Driven Prototyping Based Requirements Elicitation
Abstract
This paper presents a requirements elicitation approach that is based on model-driven prototyping. Model-driven development fits naturally in evolutionary prototyping because modeling and design are not treated merely as documents but as key parts of the development process. A novel rapid program synthesis approach is applied to speed up the prototype development. MDA, AI planning, and component-based software development techniques are seamlessly integrated together in the approach to achieve rapid prototyping. More importantly, the rapid program synthesis approach can ensure the correctness of the generated code, which is another favorable factor in enabling the development of a production quality prototype in a timely manner.
Jicheng Fu, Farokh B. Bastani, I-Ling Yen
A Case for ViewPoints and Documents
Abstract
In this contribution we consider various sorts of vague and imprecise pieces of (requirements) specification information as different viewpoints provided by different stakeholders. Usually there is an obvious “non-convergence” in the stakeholders’ views, and it is important to address the various sources of ambiguity and inconsistency between such viewpoints. We advocate addressing not only “traditional” inconsistency to drive development forward but also other forms of imprecision such as ambiguity and vagueness. The aim is to provide a path from a decentralized, viewpoint-oriented style to a document-oriented style of software requirements specification. We use parts of the airport security case study to show aspects of our approach.
Michael Goedicke, Thomas Herrmann
Towards Combining Ontologies and Model Weaving for the Evolution of Requirements Models
Abstract
Software change resulting from new requirements, environmental modifications, and error detection creates numerous challenges for the maintenance of software products. While many software evolution strategies focus on code-to-modeling language analysis, few address software evolution at higher abstraction levels. Most lack the flexibility to incorporate multiple modeling languages. Not many consider the integration and reuse of domain knowledge with design knowledge. We address these challenges by combining ontologies and model weaving to assist in software evolution of abstract artifacts. Our goals are to: recover high-level artifacts such as requirements and design models defined using a variety of software modeling languages; simplify modification of those models; reuse software design and domain knowledge contained within models; and integrate those models with enhancements via a novel combination of ontological and model weaving concepts. Additional benefits to design recovery and software evolution include detecting high-level dependencies and identifying differences between evolved software and initial specifications.
Allyson M. Hoss, Doris L. Carver
Reducing Ambiguities in Requirements Specifications Via Automatically Created Object-Oriented Models
Abstract
In industry, reviews and inspections are the primary methods to identify ambiguities, inconsistencies, and underspecification in natural language (NL) software requirements specifications (SRSs). However, humans have difficulties identifying ambiguities and tend to overlook inconsistencies in a large NL SRS. This paper presents a three-step, semi-automatic method, supported by a prototype tool, for identifying inconsistencies and ambiguities in NL SRSs. The method combines the strengths of automation and human reasoning to overcome difficulties with reviews and inspections. First, the tool parses an NL SRS according to a constraining grammar. Second, from relationships exposed in the parse, the tool creates the classes, methods, variables, and associations of an object-oriented analysis model of the specified system. Third, the model is diagrammed so that a human reviewer can use it to detect ambiguities and inconsistencies. Since a human finds the problems, the tool need have neither perfect recall nor perfect precision. The effectiveness of the approach is demonstrated by applying it and the tool to a widely published example NL SRS. A separate study evaluates the tool’s domain-specific term detection.
Daniel Popescu, Spencer Rugaber, Nenad Medvidovic, Daniel M. Berry
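As a rough illustration of the second step described in the abstract above, the sketch below assumes the parsing step has already produced subject-verb-object triples and derives candidate classes, methods, and associations from them for human review. The triples and names are hypothetical, and the actual tool works from a constraining-grammar parse rather than ready-made triples.

```python
# Illustrative sketch: derive a candidate object-oriented analysis model
# from hypothetical subject-verb-object triples (stand-in for the parse step).
from collections import defaultdict

triples = [
    ("Passenger", "presents", "BoardingPass"),
    ("Agent", "scans", "BoardingPass"),
    ("System", "flags", "Passenger"),
]

classes = set()
methods = defaultdict(set)        # class -> candidate method names
associations = []                 # (source class, verb, target class)

for subject, verb, obj in triples:
    classes.update([subject, obj])
    methods[subject].add(verb)
    associations.append((subject, verb, obj))

# A human reviewer would inspect this model (e.g. as a class diagram) for
# ambiguities, such as the same noun playing inconsistent roles.
print(sorted(classes))
print(dict(methods))
print(associations)
```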

Innovative Applications of Natural-Language Processing Techniques

Innovations in Natural Language Document Processing for Requirements Engineering
Abstract
This paper evaluates the potential contributions of natural language processing to requirements engineering. We present a selective history of the relationship between requirements engineering (RE) and natural-language processing (NLP), and briefly summarize relevant recent trends in NLP. The paper outlines basic issues in RE and how they relate to interactions between an NLP front end and system-development processes. We suggest some improvements to NLP that may be possible in the context of RE and conclude with an assessment of what should be done to improve the likelihood of practical impact in this direction.
Valdis Berzins, Craig Martell, Luqi, Paige Adams
Logic-Based Regulatory Conformance Checking
Abstract
In this paper, we describe an approach to formally assess whether an organization conforms to a body of regulation. Conformance is cast as a model checking question where the regulation is represented in a logic that is evaluated against an abstract model representing the operations of an organization. Regulatory bases are large and complex, and the long term goal of our work is to be able to use natural language processing (NLP) to assist in the translation of regulation to logic.
We argue that the translation of regulation to logic should proceed one sentence at a time. A challenge in taking this approach arises from the fact that sentences in regulation often refer to others. We motivate the need for a formal representation of regulation to accommodate references between statements. We briefly describe a logic in which statements can refer to and reason about others. We then discuss preliminary work on using NLP to assist in the translation of regulatory sentences into logic.
Nikhil Dinesh, Aravind K. Joshi, Insup Lee, Oleg Sokolsky
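A minimal sketch of the conformance-checking idea in the abstract above, under strong simplifying assumptions: each regulatory sentence becomes a named predicate over an abstract model of the organization's operations, and one rule may refer to another rule's verdict, mimicking the cross-references between sentences that the paper discusses. The rule names, model fields, and plain Boolean encoding are invented for illustration and are not the authors' logic.

```python
# Illustrative sketch: per-sentence rules evaluated against an abstract
# organization model; rules may refer to earlier rules' verdicts.
org_model = {
    "records_encrypted": True,
    "retention_years": 5,
    "audit_performed": False,
}

rules = {
    "sec_1": lambda m, v: m["records_encrypted"],
    "sec_2": lambda m, v: m["retention_years"] >= 3,
    # "Organizations subject to section 1 must also perform yearly audits."
    "sec_3": lambda m, v: (not v["sec_1"]) or m["audit_performed"],
}

verdicts = {}
for name, rule in rules.items():   # assumes references point backwards only
    verdicts[name] = rule(org_model, verdicts)

print(verdicts)   # {'sec_1': True, 'sec_2': True, 'sec_3': False}
```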
On the Identification of Goals in Stakeholders’ Dialogs
Abstract
Contradictions in requirements are inevitable in early project stages. To resolve these contradictions, it is necessary to know the rationale (goals) that lead to the particular requirements. In early project stages one stakeholder rarely knows the goals of the others. Sometimes the stakeholders cannot explicitly state even their own goals. Thus, the goals have to be elaborated in the process of requirements elicitation and negotiation.
This paper shows how the goals can be derived by systematic analysis of stakeholders’ dialogs. The derived goals have to be presented to the stakeholders for validation. Then, when the goals are explicitly stated and validated, it becomes easier to resolve requirements contradictions.
Leonid Kof
Text Classification and Machine Learning Support for Requirements Analysis Using Blogs
Abstract
Text classification and machine learning technologies are being investigated for use in supporting knowledge management requirements in military command centers. Military communities of interest are beginning to use blogs and related tools for information sharing, providing a comparable environment to the use of blogs for system requirement discussions. This paper describes the work in the area being performed under the Personalized Assistant that Learns (PAL) program sponsored by the Defense Advanced Research Projects Agency. Comparisons are then made to how the technology could provide similar capabilities for a requirements analysis environment. An additional discussion of how the task learning capabilities from PAL could also benefit requirements analysis in a rapid prototyping process is provided.
Douglas S. Lange
Profiling and Tracing Stakeholder Needs
Abstract
The first stage in transitioning from stakeholders’ needs to formal designs is the synthesis of user requirements from information elicited from the stakeholders. In this paper we show how shallow natural language techniques can be used to assist analysis of the elicited information and so inform the synthesis of the user requirements. We also show how related techniques can be used for the subsequent management of requirements and even help detect the absence of requirements’ motivation by identifying unprovenanced requirements.
Pete Sawyer, Ricardo Gacitua, Andrew Stone
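As an illustration of detecting unprovenanced requirements, as mentioned in the abstract above, the sketch below flags any requirement whose content words share nothing with the elicited stakeholder statements. This keyword-overlap heuristic is only a stand-in for the shallow natural language techniques the paper actually uses; all data, names, and the stopword list are hypothetical.

```python
# Illustrative keyword-overlap check for "unprovenanced" requirements,
# i.e. requirements with no lexical trace back to elicited statements.
STOPWORDS = {"the", "a", "an", "shall", "must", "be", "of", "to", "and", "all"}

def content_words(text):
    return {w.strip(".,").lower() for w in text.split()} - STOPWORDS

elicited = [
    "Security staff want every bag screened before boarding",
    "Airlines need passenger identity checked at the gate",
]
requirements = [
    "R1: All bags shall be screened before boarding",
    "R2: The system shall log maintenance activity",
]

elicited_vocab = set().union(*(content_words(s) for s in elicited))
for req in requirements:
    if not content_words(req) & elicited_vocab:
        print("Unprovenanced:", req)      # prints R2 only
```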
Advances in Requirements Engineering: Bridging the Gap between Stakeholders’ Needs and Formal Designs
Abstract
The lion’s share of software faults can be traced to requirements and specification errors, so improvements in requirements engineering can have a large impact on the effectiveness of the overall system development process. A weak link in the chain is the transition from the vague and informal needs of system stakeholders to the formal models that support theoretical analysis and software tools.
This paper explains the context for the 2007 Monterey workshop that was dedicated to this problem. It provides the case study that participants were asked to use to illustrate their new methods, and summarizes the discussion and conclusions of the workshop.
Luqi, Fabrice Kordon
Backmatter
Metadata
Title
Innovations for Requirement Analysis. From Stakeholders’ Needs to Formal Designs
Edited by
Barbara Paech
Craig Martell
Copyright year
2008
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-89778-1
Print ISBN
978-3-540-89777-4
DOI
https://doi.org/10.1007/978-3-540-89778-1
