
About this Book

The dependence on quality software in all areas of life is what makes software engineering a key discipline for today’s society. Thus, over the last few decades it has been increasingly recognized that it is particularly important to demonstrate the value of software engineering methods in real-world environments, a task which is the focus of empirical software engineering. One of the leading protagonists of this discipline worldwide is Prof. Dr. Dr. h.c. Dieter Rombach, who dedicated his entire career to empirical software engineering. For his many important contributions to the field he has received numerous awards and recognitions, including the U.S. National Science Foundation’s Presidential Young Investigator Award and the Cross of the Order of Merit of the Federal Republic of Germany. He is a Fellow of both the ACM and the IEEE Computer Society. This book, published in honor of his 60th birthday, is dedicated to Dieter Rombach and his contributions to software engineering in general, as well as to empirical software engineering in particular.

This book presents invited contributions from a number of internationally renowned software engineering researchers, including Victor Basili, Barry Boehm, Manfred Broy, Carlo Ghezzi, Michael Jackson, Leon Osterweil, and, of course, Dieter Rombach himself. Several key experts from the Fraunhofer IESE, the institute founded and led by Dieter Rombach, also contributed to the book. The contributions summarize some of the most important trends in software engineering today and outline a vision for the future of the field. The book is structured into three main parts. The first part focuses on the classical foundations of software engineering, such as notations, architecture, and processes, while the second addresses empirical software engineering in particular as the core field of Dieter Rombach’s contributions. Finally, the third part discusses a broad vision for the future of software engineering.

Table of Contents

Frontmatter

Empirical Software Engineering Models: Can They Become the Equivalent of Physical Laws in Traditional Engineering?

Abstract
Traditional engineering disciplines such as mechanical and electrical engineering are guided by physical laws. These laws provide the constraints for acceptable engineering solutions by enforcing regularity and thereby limiting complexity, and violations of them can be experienced instantly in the lab. Software engineering is not constrained by physical laws. Consequently, we often create software artifacts that are too complex to be understood, tested, or maintained. As overly complex software solutions may even work initially, we are tempted to believe that no laws apply; we only learn about the violation of some form of “cognitive laws” late during development or during maintenance, when overly high complexity causes follow-up defects or increases maintenance costs. Empirical models can capture such time-lagging dependencies among software artifacts and predict their effects, and innovative life cycle process models (e.g., the Spiral model) provide the basis for incrementally evaluating risks and adjusting such predictions. The proposal in this paper is therefore to work towards a scientific basis for software engineering by capturing more of these dependencies in the form of empirical models, thereby making developers aware of the “cognitive laws” that must be adhered to; a toy example of such a model follows below. The paper attempts to answer the following questions: Why do we need software engineering laws, and what might they look like? How do we have to organize our discipline in order to establish them? Which laws already exist, and how could further ones be developed? How could such laws contribute to the maturing of the science and engineering of software? And what challenges remain for teaching, research, and practice?
Dieter Rombach
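
A hedged illustration of what such an empirical model might look like in executable form: a hypothetical fitted relationship between module complexity and follow-up defects, carried with explicit uncertainty so that it can act as the kind of “cognitive law” the abstract argues for. All coefficients and names below are invented for illustration, not taken from the chapter.

```python
# Illustrative sketch only: a hypothetical empirical "law" relating module
# complexity to follow-up defects. The coefficients are invented; in practice
# they would be fitted from an organization's historical project data.
from dataclasses import dataclass

@dataclass
class EmpiricalModel:
    intercept: float
    slope: float       # expected extra defects per unit of complexity
    std_error: float   # residual spread, kept as explicit uncertainty

    def predict(self, complexity: float) -> tuple[float, float]:
        """Return (expected follow-up defects, one-standard-error band)."""
        return self.intercept + self.slope * complexity, self.std_error

# Hypothetical values, e.g., fitted from past inspection and maintenance data.
defect_law = EmpiricalModel(intercept=1.5, slope=0.08, std_error=2.0)
expected, err = defect_law.predict(complexity=45.0)
print(f"expected follow-up defects: {expected:.1f} +/- {err:.1f}")
```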

Software Development: Notation, Architecture, and Process

Frontmatter

Domain Modeling and Domain Engineering: Key Tasks in Requirements Engineering

Abstract
Requirements engineering is an essential part of software and systems development. Besides the elicitation, analysis, and specification of the intrinsic system requirements, it also involves, as a basis for these activities, the elicitation, analysis, and specification of information about the application domain (also called the problem domain, or domain for short; this includes terminology, concepts, and rules). The result of this activity is an elaborated domain model, i.e., a model of the relevant parts of the application domain.
Roughly speaking, a domain model for a system or software development task comprises the following parts:
  • The domain ontology: rules, laws, terminology, and notions describing the relevant terms, giving an ontology/taxonomy of the domain as well as its specific rules and principles:
    • Concepts, data types, and functions
    • Rules and laws
  • The context model, which describes the general properties of the system’s environment. This includes the operational context (software systems, physical systems, and actors such as users in the environment), the properties of the physical environment in the case of cyber-physical systems, and the wider business and technological context.
These aspects can be captured by adequate data models.
The domain model collects all the information about the problem domain that must be known and understood to allow capturing the requirements for the system, specifying them, and implementing and verifying the system. The detailed system requirements, however, are not part of the domain model; rather, they are based upon it.
Ultimately, the domain model is a collection of knowledge about the application domain at an adequate level of abstraction—including the use of modeling techniques where useful.
Manfred Broy
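
To make the enumerated parts concrete, here is a minimal sketch of a domain model in code, assuming an invented library domain: ontology concepts with their data types and functions, a rule stated as a checkable predicate, and a small context model. Only the structure mirrors the abstract; every name is hypothetical.

```python
# Minimal sketch of the domain-model parts named above: ontology
# (concepts, data types, functions, rules) plus a context model.
# The library domain and all names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Book:                       # a domain concept with its data type
    isbn: str
    on_loan: bool = False

def loan(book: Book) -> Book:     # a domain function
    return Book(book.isbn, on_loan=True)

def rule_no_double_loan(book: Book, loan_requested: bool) -> bool:
    """A domain rule/law, stated as a checkable predicate."""
    return not (book.on_loan and loan_requested)

@dataclass
class ContextModel:               # general properties of the environment
    actors: list[str] = field(default_factory=lambda: ["member", "librarian"])
    external_systems: list[str] = field(default_factory=lambda: ["payment service"])

book = Book("978-0-00-000000-2")
assert rule_no_double_loan(book, loan_requested=True)  # not yet on loan
```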

Towards Agile Verification

Abstract
Advances in software verification techniques have been impressive in the past decade. Formal verification of large production software is now increasingly feasible, and this is paving the way to transferring these techniques from research to practice. We argue, however, that there is still a serious mismatch between verification and modern development processes, which focus heavily on agility and on incremental, iterative development. To address this issue, verification has to become agile, and its seamless introduction into agile processes has to become feasible. We envision new approaches that will support verification-driven development in the same way that test-driven development is supported today, for example by JUnit within an IDE like Eclipse. In this paper we discuss how agile verification can be achieved, and we show some promising initial steps in this direction.
Carlo Ghezzi, Amir Molzam Sharifloo, Claudio Menghi
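
As a rough sketch of the verification-driven workflow the authors envision, assuming Python in place of the JUnit setting they mention: the property is written before the implementation and re-checked exhaustively over a bounded input domain after every increment. This toy bounded check stands in for real verification machinery and is not the chapter's technique.

```python
# Toy "verification-driven development" loop: state the property first,
# then re-verify each increment over all small inputs (a bounded check
# standing in for real verification machinery).
from itertools import product

def sort_increment(xs: list[int]) -> list[int]:   # the unit under development
    return sorted(xs)

def is_sorted(ys: list[int]) -> bool:
    return all(ys[i] <= ys[i + 1] for i in range(len(ys) - 1))

def prop(xs: list[int]) -> bool:
    """Output must be sorted and a permutation of the input."""
    ys = sort_increment(xs)
    return is_sorted(ys) and sorted(xs) == sorted(ys)

def verify_bounded(max_len: int = 3) -> bool:
    for n in range(max_len + 1):
        for xs in product(range(-2, 3), repeat=n):
            if not prop(list(xs)):
                return False
    return True

assert verify_bounded()   # rerun after every increment, like a unit test
```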

On Model-Based Software Development

Abstract
Due to its many advantages, the growing use of Model-Based Development (MBD) in software practice is a promising trend. However, major problems in MBD of software remain, for example, the failure to integrate formal system requirements models with current code synthesis methods. This chapter introduces FMBD, a formal MBD process for building software systems that addresses this problem. The goal of FMBD is to produce high-assurance software systems that are correct by construction. The chapter describes three types of models built during the FMBD process, provides examples from an avionics system to illustrate the models, and proposes three major challenges in MBD as topics for future research.
Constance L. Heitmeyer, Sandeep Shukla, Myla M. Archer, Elizabeth I. Leonard

From Software Systems to Complex Software Ecosystems: Model- and Constraint-Based Engineering of Ecosystems

Abstract
Software is not self-supporting. It is executed by hardware and interacts with its environment. So-called software systems are complicated hierarchical systems, carefully engineered by competent engineers. In contrast, complex systems, like biological ecosystems, railway systems, and the Internet itself, have never been developed and tested as a whole by a team of engineers. Nevertheless, such complex systems have the ability to evolve without explicit control by anyone, and they are more robust in dealing with problems at the level of their constituent elements than classically engineered systems. Consequently, in this article we introduce the concept of complex software ecosystems comprised of interacting adaptive software systems and human beings. Ecosystems achieve the required flexibility and dependability by means of a kind of higher-level regulatory system. Their equilibrium is continuously preserved through an appropriate balance between the self-adaptation and self-control capabilities of the ecosystem’s participants.
We outline a methodology to support the engineering of ecosystems by integrating a model- and constraint-based engineering approach and applying it at both design time and runtime. The open-world semantics of constraints establish a framework for the behavior of the participants and of the ecosystem itself. Violations of constraints can be identified at design time; the constraints also transfer this knowledge to runtime, where they are monitored and enforced, as sketched below. Thus, we propose an evolutionary engineering approach covering the whole life cycle of forever-active complex software ecosystems.
Andreas Rausch, Christian Bartelt, Sebastian Herold, Holger Klus, Dirk Niebuhr
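
A minimal sketch of the runtime half of the approach, under the assumption that constraints can be reduced to predicates over an observable state: the monitor reports violated constraints so that some enforcement action can react. The constraint names and thresholds are invented.

```python
# Illustrative runtime constraint monitor: constraints are predicates over
# the observable state of an ecosystem participant; violations are reported
# so an enforcement action can react. Names and thresholds are invented.
from typing import Callable

State = dict[str, float]
Constraint = Callable[[State], bool]

constraints: list[tuple[str, Constraint]] = [
    ("load stays under capacity", lambda s: s["load"] <= s["capacity"]),
    ("latency budget respected",  lambda s: s["latency_ms"] < 200.0),
]

def monitor(state: State) -> list[str]:
    """Return the names of all constraints violated in this state."""
    return [name for name, holds in constraints if not holds(state)]

# One observation of a hypothetical participant's state:
for name in monitor({"load": 0.9, "capacity": 0.8, "latency_ms": 150.0}):
    print("enforce:", name)   # placeholder for an actual enforcement action
```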

A Safety Roadmap to Cyber-Physical Systems

Abstract
In recent years, the term cyber-physical systems has emerged to characterize a new generation of embedded systems. In cyber-physical systems, embedded systems will be open in the sense that they will dynamically interconnect with other systems and will be able to dynamically adapt to changing runtime contexts. Such open adaptive systems provide a huge potential for society and for the economy. On the other hand, however, openness and adaptivity make it hard or even impossible for developers to predict a system’s dynamic structure and behavior. This impedes the assurance of important system quality properties, especially safety and reliability. Safety assurance of cyber-physical systems will therefore be both one of the most urgent and one of the most challenging research questions of the next decade. This chapter analyzes the state of the art in order to identify open gaps and suggests a runtime safety assurance framework for cyber-physical systems to structure ongoing and future research activities.
Mario Trapp, Daniel Schneider, Peter Liggesmeyer

Modeling Complex Information Systems

Abstract
We are living in an information society. For us it is normal to access relevant information almost immediately, and information systems play an important role in our private as well as our professional lives. When selecting or developing such systems, especially complex ones, we need to understand and model the requirements on these systems. This paper deals with the modeling of complex information systems. We show which requirements concepts can be modeled, but also argue that it is not necessary to model all of them. We present empirical studies suggesting that further empirical research is needed to determine which requirements concepts are most relevant. Current and future challenges with regard to information system modeling are outlined.
Joerg Doerr

Continuous Process Improvement

Abstract
Nowadays, a variety of processes for the development and maintenance of software-intensive systems exists, ranging from agile development processes to classical plan-based approaches. There is no ultimate process that can be applied in each and every situation; which process fits depends on the project goals and environment as well as on the required characteristics of the system under development. Development processes support organizations in developing software-intensive systems with certain quality characteristics, within a certain time span, and with a certain amount of effort. Continuous process improvement deals with establishing and maintaining high-quality processes, analyzing their performance and effectiveness, and initiating corresponding improvement actions where needed. In this chapter, we take a closer look at how to systematically define and continuously improve development processes based on documented best practices and on measurement data collected during the enactment of the development process. The chapter highlights current challenges and presents solution approaches for establishing continuous process improvement in practice.
Jens Heidrich
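
One way to picture the measurement side of such improvement, as a hedged sketch: build a quantitative baseline from past projects and flag an enactment that deviates from it. The indicator (review effectiveness), the data, and the two-sigma rule are all invented for illustration.

```python
# Sketch: a process baseline from historical measurement data, used to
# flag deviations that should trigger an improvement action. The indicator,
# numbers, and two-sigma rule are invented.
from statistics import mean, stdev

# Hypothetical review effectiveness (% of defects found in reviews), per project.
baseline = [62.0, 58.0, 65.0, 60.0, 61.0]
mu, sigma = mean(baseline), stdev(baseline)

def assess(observed: float) -> str:
    if abs(observed - mu) > 2 * sigma:
        return "deviation from baseline: analyze and initiate improvement"
    return "within baseline"

print(f"baseline: {mu:.1f} +/- {sigma:.1f}")
print(assess(48.0))   # a new project falling well below the baseline
```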

Empirical Research and Studies

Frontmatter

Paths to Software Engineering Evidence

Abstract
In recent years there has been a call from researchers in empirical software engineering to carry out more research in industrial settings. The arguments for this have been well founded and the benefits clearly enunciated. But apart from the community’s call for empirical goals to be based on business goals, there has been little consideration of the business conditions under which empirical software engineering methods may, or may not, be appropriate for a business. In this paper, the empirically derived high-level management practices that are associated with business success are used as initial decision criteria for deciding the path to follow: (a) whether empirical software engineering research will be of value to the business, and (b) if it is of value, the form that research might take. The place of theory is considered in the case of path (b).
Ross Jeffery

An Evidence Profile for Software Engineering Research and Practice

Abstract
Evidence-based software engineering has emerged as an important part of software engineering. The need for empirical evaluation, and hence evidence, when developing new models, methods, techniques, and tools has grown over the last couple of decades; furthermore, industrial decision-making ought to become more evidence-based. The objective here is to develop and present an evidence profile that can be used to divide pieces of evidence into different types and hence create an overall picture of the evidence in a specific case. The profile is constructed so that it allows evidence to be judged in context. It consists of five types of evidence and is illustrated for perspective-based reading, showing how pieces of evidence can be classified into the different types. It is concluded that this kind of approach may be useful for capturing the evidence on a specific topic in a specific context. Further work will include applying the evidence profile to evidence collected from different types of studies and contexts.
Claes Wohlin

Challenges of Evaluating the Quality of Software Engineering Experiments

Abstract
Good-quality experiments are free of bias. Bias is considered to be related to internal validity (e.g., how well experiments are planned, designed, executed, and analysed). Quality scales and expert opinion are two approaches for assessing the quality of experiments. Aim: Identify whether there is a relationship between bias and the predictions of quality scales and expert opinion in SE experiments. Method: We used a quality scale to determine the quality of 35 experiments from three systematic literature reviews, and two different procedures (effect size and response ratio) to calculate the bias in diverse response variables for these experiments. Experienced researchers assessed the quality of the experiments, and we analysed the correlations between the quality scores, bias, and expert opinion. Results: The relationship between quality scales, expert opinion, and bias depends on the technology exercised in the experiments. Quality scales and expert opinion predict bias correctly only when the technologies can be subjected to acceptable experimental control. Both correct and incorrect expert ratings are more extreme than the quality scales. Conclusions: A quality scale based on formal internal quality criteria will predict bias satisfactorily, provided that the technology can be properly controlled in the laboratory.
Oscar Dieste, Natalia Juristo
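
For readers unfamiliar with the two bias measures the abstract names, here are their textbook forms: a standardized mean difference (Cohen's d with a pooled standard deviation) and the (log) response ratio. The sample data are invented and are not the authors' dataset.

```python
# Textbook forms of the two procedures mentioned in the abstract.
from statistics import mean, stdev
from math import sqrt, log

def cohens_d(treatment: list[float], control: list[float]) -> float:
    """Standardized mean difference with pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (mean(treatment) - mean(control)) / pooled

def log_response_ratio(treatment: list[float], control: list[float]) -> float:
    return log(mean(treatment) / mean(control))

t = [14.0, 16.0, 15.0, 17.0]   # e.g., defects found with the technique
c = [11.0, 12.0, 10.0, 13.0]   # e.g., defects found without it
print(f"d = {cohens_d(t, c):.2f}, ln RR = {log_response_ratio(t, c):.2f}")
```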

Technical Debt: Showing the Way for Better Transfer of Empirical Results

Abstract
In this chapter, we discuss recent progress and opportunities in empirical software engineering by focusing on a particular technology, Technical Debt (TD), which ties together many recent developments in the field. Recent advances in TD research are providing empiricists the chance to make more sophisticated recommendations that have observable impact on practice.
TD uses a financial metaphor and provides a framework for articulating the notion of tradeoffs between the short-term benefits and the long-term costs of software development decisions. TD is seeing an explosion of interest in the practitioner community, and research in this area is quickly having an impact on practice. We argue that this is due to several strands of empirical research reaching a level of maturity that provides useful benefits to practitioners, who in turn provide excellent data to researchers. The key is providing observable benefit to practitioners, such as the ability to tie technical debt measures to business goals and the ability to articulate more sophisticated value-based propositions about how to prioritize rework; a toy version of such a value calculation is sketched below. TD is an interesting case study in how the maturing field of empirical software engineering research is paying dividends. It is only a little hyperbolic to call this a watershed moment for empirical study, where many areas of progress are coming to a head at the same time.
Forrest Shull, Davide Falessi, Carolyn Seaman, Madeline Diep, Lucas Layman
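
A back-of-the-envelope rendering of the value calculation referred to above, purely to fix ideas: pay down an item's principal (cost to fix now) when the interest (recurring extra cost) expected over the planning horizon exceeds it. The figures are invented and the model is far simpler than anything in the chapter.

```python
# Toy technical-debt tradeoff: refactor now if the interest expected to
# accrue over the planning horizon exceeds the principal. Figures invented.
def should_pay_down(principal_days: float, interest_days_per_release: float,
                    releases_in_horizon: int) -> bool:
    expected_interest = interest_days_per_release * releases_in_horizon
    return expected_interest > principal_days

# Hypothetical item: 10 days to refactor vs. 2 extra rework days per release.
print(should_pay_down(10.0, 2.0, releases_in_horizon=8))   # True: 16 > 10
```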

An Empirical Investigation of the Component-Based Performance Prediction Method Palladio

Abstract
Model-based performance prediction methods aim at evaluating the expected response time, throughput, and resource utilization of a software system at design time, before implementation, in order to make the system’s performance characteristics predictable. Existing methods use either monolithic, throw-away prediction models or component-based, reusable ones. While it is intuitively clear that developing reusable models requires more effort, this additional effort had not yet been quantified or analyzed systematically, nor had the prediction accuracy the methods achieve when applied by developers been compared. To study this, we conducted a controlled experiment in 2007 with 19 computer science students who predicted the performance of two example systems, applying an established monolithic method (Software Performance Engineering, SPE) as well as our own component-based method (Palladio). This paper summarizes two earlier papers on this study. The results show that, in our experimental setting, the effort of model creation with Palladio is approximately 1.25 times as high as with SPE, with the resulting models having comparable prediction accuracy. Therefore, in some cases, the creation of reusable prediction models can already be justified, provided they are reused at least once.
Ralf Reussner, Steffen Becker, Anne Koziolek, Heiko Koziolek
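
To give non-specialists the flavor of design-time performance prediction, here is the simplest analytic model of this kind, a single-server M/M/1 queue: utilization and mean response time follow from an estimated arrival rate and service demand. Palladio and SPE use far richer models; the numbers below are invented.

```python
# Simplest analytic performance model (M/M/1 queue), for flavor only:
# predicts utilization and mean response time from design-time estimates.
def mm1(arrival_rate: float, service_time: float) -> tuple[float, float]:
    """Return (utilization, mean response time) for one server."""
    rho = arrival_rate * service_time
    if rho >= 1.0:
        raise ValueError("unstable: demand exceeds capacity")
    return rho, service_time / (1.0 - rho)

rho, resp = mm1(arrival_rate=20.0, service_time=0.04)  # 20 req/s, 40 ms each
print(f"utilization = {rho:.0%}, mean response time = {resp * 1000:.0f} ms")
```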

Can We Trust Software Repositories?

Abstract
To acquire data for empirical studies, software engineering researchers frequently leverage software repositories as data sources. Indeed, version and bug databases contain a wealth of data on how a product came to be. Turning this data into knowledge, though, requires deep insights into the specific process and product, and careful scrutiny of the techniques used to obtain the data; a minimal example of such a mining technique, with its pitfalls, is sketched below. The central challenge of the future will thus be to combine automatic and manual empirical analysis.
Andreas Zeller
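
A minimal sketch of the kind of mining the chapter scrutinizes, assuming a local git checkout: count, per file, the commits whose messages mention "fix". Exactly as the chapter warns, the heuristic inherits every bias of commit-message conventions.

```python
# Naive repository mining: per-file counts of commits whose messages
# mention "fix". Assumes a local git checkout; inherits all the biases
# of commit-message heuristics that the chapter cautions about.
import subprocess
from collections import Counter

def fix_counts(repo: str) -> Counter:
    out = subprocess.run(
        ["git", "-C", repo, "log", "-i", "--grep=fix",
         "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True).stdout
    return Counter(line for line in out.splitlines() if line.strip())

for path, n in fix_counts(".").most_common(5):
    print(n, path)
```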

Empirical Practice in Software Engineering

Abstract
Experimental software engineering has been defined as the scientific approach to systematically evaluating software technologies by referring to predefined hypotheses using sound empirical methods.
The purpose of this chapter is to give an overview of the history, current practice, and future of empirical practice in software engineering. In particular, based on what we have learned from 20 years of research in empirical software engineering, we describe the empirical approach we currently use, both as a scientific approach to applied research and as a means of systematic evaluation.
Andreas Jedlitschka, Liliana Guzmán, Jessica Jung, Constanza Lampasona, Silke Steinbach

Visions on the Future of Software Engineering as a Discipline

Frontmatter

What Is Software? The Role of Empirical Methods in Answering the Question

Abstract
This paper explores the potentially pivotal role of Empirical Methods in addressing existential questions about the nature of software. Building upon an earlier paper that asked the question “What is software?”, this paper suggests that a key way to gain such understanding is to ponder the question of how to determine the size of a software entity. The paper notes that there have been a variety of indirect approaches to measuring software size, such as measuring the amount of time taken to produce software, and measuring the number of lines of code in a software entity. But these assume implicitly that such measures correlate positively with the inherent size of the software entity, broadly construed to include the entire panoply of code and non-code artifacts and their interconnections that comprise this entity. As in the original paper, this paper makes the case that entities such as recipes, laws, and processes are types of software, and that learning about their natures illuminates the nature of computer software—and conversely. This paper discusses possible approaches to measuring the size of these other types of artifacts, and uses observations about these approaches to suggest a possible approach to measuring the size of computer software entities. All of this is aimed at making progress in gaining understandings about the nature of software, broadly construed.
Preface: This paper is an updating of a paper previously published in Automated Software Engineering, entitled “What is Software?” [1]. That previous paper, written over 5 years ago, made a case for the importance of understanding the essence of what “software” is, noting that computer software is one of a number of different kinds of intellectual products that can and should be considered to be closely related to each other. The paper noted that laws, processes, and recipes all seem to be closely related in fundamental ways to computer software, and suggested that all might be considered to be subtypes of a type of intellectual product that might be called “software”. That being the case, the earlier paper suggested that studying any of these might well produce results of interest and value to the others, and studying the relations among these types of artifacts might ultimately provide insight into the fundamental nature of the type of thing of which all might be considered to be subtypes.
The main addition that this paper makes to the previous version is to note a potentially key contribution that Empirical Methods could make to these understandings. In the paper we argue that the understanding of an object (physical or non-physical) is greatly enhanced by the ability to measure that object. Indeed, Lord Kelvin suggested, over 100 years ago, that
… when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the state of Science, whatever the matter may be.
That being the case, Empirical Methods research should be viewed as being essential to gaining knowledge and establishing the science of the nature of software, in that it addresses issues of how to measure various aspects of software. This paper focuses as a case in point on how to define one particular basic measure of software, namely its size. This would seem to be a basic measure and yet we note that no such satisfactory measure of software size seems to exist. Grappling with this and related questions has been a focus of the Empirical Methods community. The community’s success in understanding how to establish such measures of computer software is clearly important to progress in being more effective in computer software engineering, but might indeed also have important ramifications for improvements in the engineering of other kinds of software, such as processes and laws, as well. For that reason the ongoing efforts of the Empirical Methods research community should be viewed by the entire “software” community as being of fundamental importance.
Leon J. Osterweil
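
To make the paper's point about indirect size measures tangible, here is the most naive concrete one: a line-of-code count that skips blanks and comment-only lines. Even this tiny sketch embeds contestable choices (what counts as a comment, or as a line?), which is precisely the difficulty the paper explores; it handles Python comment syntax only.

```python
# Naive lines-of-code measure: counts non-blank, non-comment lines.
# Even this embeds contestable choices about what "size" means.
def loc(path: str) -> int:
    with open(path, encoding="utf-8") as f:
        return sum(1 for line in f
                   if line.strip() and not line.strip().startswith("#"))

# Example: measure any Python source file, e.g., this one when run as a script.
print(loc(__file__))
```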

A Personal Perspective on the Evolution of Empirical Software Engineering

Abstract
This paper offers a four-decade overview of the evolution of empirical software engineering from a personal perspective. It represents what I saw as major milestones in terms of the kind of thinking that affected the nature of the work. I use examples from my own work as I feel that work followed the evolution of the field and is representative of the thinking at various points in time. I try to say where we fell short and where we need to go, in the end discussing the barriers we still need to address.
Victor R. Basili

Moving Toward Evidence-Based Software Production

Abstract
Computer software is increasingly critical to the products, infrastructure, and science upon which society depends. However, the production of society’s software is known to be problematic. Current understanding of software production, largely based on anecdotes, is inadequate. Achieving the deeper understanding needed to transform software production experiences into software production improvements requires collecting and using evidence on a large scale. This paper proposes some steps toward that outcome, with particular attention to what government can do to stimulate software engineering studies that will advance the capabilities of software production.
David M. Weiss, James Kirby, Robyn R. Lutz

Skating to Where the Puck Is Going: Future Systems and Software Engineering Opportunities and Challenges

Abstract
This paper provides an update and extension of a 2005 paper on The Future of Systems and Software Engineering Processes. Some of its challenges and opportunities are similar, such as the need to simultaneously achieve high levels of both agility and assurance. Others have emerged as increasingly important, such as the opportunities and challenges of dealing with smart systems involving ultralarge volumes of data; with multicore chips; with social networking services; and with cloud computing or software as a service. The paper is organized around eight relatively surprise-free trends and two “wild cards” whose trends and implications are harder to foresee. The eight surprise-free trends are:
1. Increasing emphasis on rapid development and adaptability;
2. Increasing software criticality and need for assurance;
3. Increased complexity, global systems of systems, and need for scalability and interoperability;
4. Increased needs to accommodate COTS, software services, and legacy systems;
5. Smart systems with increasingly large volumes of data and ways to learn from them;
6. Increased emphasis on users, social networking services, web applications, and end value;
7. Computational plenty and multicore chips;
8. Increasing integration of software and systems engineering.

The two wild-card trends are:

9. Increasing software autonomy; and
10. Combinations of biology and computing.
Barry Boehm

Formalism and Intuition in Software Engineering

Abstract
A major and so far unmet challenge in software engineering is to achieve and act upon a clear and sound understanding of the relationship between formalism and intuition in the development process. The challenge is salient in the development of cyber-physical systems, in which the computer interacts with the human and physical world to ensure a behaviour there that satisfies the requirements of the system’s stakeholders. The nature of the computer as a formally defined symbol-processing engine invites a formal mathematical approach to software development. Contrary considerations militate against excessive reliance on formalism. The non-formal nature of the human and physical world, the complexity of system function, and the need for human comprehension at every level demand application of non-formal and intuitional knowledge, of insight and technique rather than calculation. The challenge, then, is to determine how these two facets of the development process—formalism and intuition—can work together most productively. This short essay describes some origins and aspects of the challenge and offers a perspective for addressing it.
Michael Jackson

Education of Software Engineers

Abstract
The field of software engineering had its beginnings in the 1960s, almost 50 years ago. Since then, one would expect significant progress to have been made in understanding the models, methods, and techniques that lend themselves to proper software development. However, we are still making some of the same mistakes that were supposedly “solved” in the 1960s and 1970s, and industry still does not understand the critical importance of correct programs for the proper functioning of society today. In this paper, several examples are given of how we are still “reinventing the wheel,” and new challenges that will impact software engineers in the near future are described.
Marvin V. Zelkowitz

Integrated Software Process and Product Lines

Abstract
Increasing demands imposed on software-intensive systems will require more rigorous engineering and management of software artifacts and processes. Software product line engineering allows for the effective reuse of software artifacts based on the proactive organization of artifacts according to their similarities and variances. Software processes, although also variable across projects, are still not managed in a similarly systematic way. This paper motivates the need for software process lines analogous to product lines: processes within an organization would be organized according to their similarities and differences, allowing better tailoring to specific project needs (corresponding to application engineering in product lines). The vision of integrated software process and product line (SPPL) engineering is presented, in which suitable artifacts and processes can be chosen based on a set of product and process requirements and project constraints; a toy illustration of such constraint-based tailoring follows below. The paper concludes with some resulting challenges for research, practice, and teaching.
Dieter Rombach
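
A toy rendering of the tailoring step envisioned above, under the assumption that process assets carry explicit applicability conditions: given a project's characteristics, select the assets whose conditions match, the process-line analog of application engineering. All assets and conditions are invented.

```python
# Toy process-line tailoring: select process assets whose applicability
# conditions match the project context. Assets and conditions are invented.
Project = dict[str, object]

process_assets = {
    "pair programming":   lambda p: p["team_size"] >= 2,
    "formal code review": lambda p: p["safety_critical"],
    "weekly increments":  lambda p: not p["fixed_scope"],
}

def tailor(project: Project) -> list[str]:
    return [name for name, applies in process_assets.items() if applies(project)]

print(tailor({"team_size": 6, "safety_critical": True, "fixed_scope": False}))
```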