
About this Book

Software engineering research can trace its roots to a few highly influential individuals. Among that select group is Leon J. Osterweil, who has been a major force in driving software engineering from its infancy to its modern reality. For more than three decades, Prof. Osterweil's work has fundamentally defined or significantly impacted major directions in software analysis, development tools and environments, and software process--all critical parts of software engineering as it is practiced today. His exceptional contributions to the field have been recognized with numerous awards and honors throughout his career, including the ACM SIGSOFT Outstanding Research Award, in recognition of his extensive and sustained research impact, and the ACM SIGSOFT Influential Educator Award, in recognition of his career-long achievements as an educator and mentor.

In honor of Prof. Osterweil's profound accomplishments, this book was prepared for a special honorary event held during the 2011 International Conference on Software Engineering (ICSE). It contains some of his most important published works to date, together with several new articles written by leading authorities in the field, exploring the broad impact of his work in the past and how it will further impact software engineering research in the future. These papers, part of the core software engineering legacy and now available in one commented volume for the first time, are grouped into three sections: flow analysis for software dependability, the software lifecycle, and software process.

Table of Contents

Frontmatter

Introduction to “Engineering of Software: The Continuing Contributions of Leon J. Osterweil”

Software engineering research can trace its roots to a small number of highly influential individuals. Among that select group is Prof. Leon J. Osterweil, whose work has fundamentally defined or impacted major directions in software analysis, development tools and environments, and software process. His exceptional and sustained contributions to the field have been recognized with numerous awards and honors throughout his career. This section briefly reviews his exceptionally distinguished career.
Peri L. Tarr, Alexander L. Wolf

Flow Analysis for Software Dependability

Frontmatter

Data Flow Analysis for Software Dependability: The Very Idea

Data flow analysis was developed as a means for enabling the optimization of code generated by source language compilers. Over the past decade the majority of new applications of data flow analysis in published research and in tools for practicing developers have focused on software quality. The roots of the shift from optimization to quality are, perhaps surprisingly, not recent. The very idea of applying data flow analysis to detect potential errors in programs can be traced back to the work of Osterweil and Fosdick (Chapter 5) in the mid-70s. Remarkably, their work outlined the conceptual architecture of nearly all subsequent static analysis techniques targeting software quality, and the application of their approach revealed the key challenges that drive research in practical static software quality tools even today. In this paper, we identify the key insights behind their approach, relate those insights to subsequent approaches, and trace several lines of research that generalized, extended, and refined those insights.
Matthew B. Dwyer

The SAFE Experience

We present an overview of the techniques developed under the SAFE project. The goal of SAFE was to create a practical lightweight framework to verify simple properties of realistic Java applications. The work on SAFE covered a lot of ground, starting from typestate verification techniques, through inference of typestate specifications, checking for absence of null dereferences, automatic resource disposal, and an attempt at modular typestate analysis. In many ways, SAFE represents a modern incarnation of early ideas on the use of static analysis for software reliability. SAFE went a long way in making these ideas applicable to real properties of real software, but applying them at the scale of modern framework-intensive software remains a challenge. We are encouraged by our experience with SAFE, and believe that the techniques developed in SAFE can serve as a solid basis for future work on practical verification technology.
Eran Yahav, Stephen Fink
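
To make the flavor of typestate verification concrete, here is a minimal, hypothetical sketch in Python (not SAFE's actual implementation, which statically analyzes Java programs): a typestate property is a finite automaton over the operations performed on an object, and a checker reports any operation that the current state does not permit.

    # Hypothetical sketch of typestate checking in the spirit of SAFE; the
    # real system statically analyzes Java programs rather than replaying
    # event traces like this.

    # Typestate property for a file handle: it must be opened before it is
    # read, and it may not be read after it is closed.
    TRANSITIONS = {
        ("closed", "open"): "open",
        ("open", "read"): "open",
        ("open", "close"): "closed",
    }

    def check(trace):
        """Run the typestate automaton over one object's operation sequence.
        Returns the first violation found, or None if the trace conforms."""
        state = "closed"
        for i, event in enumerate(trace):
            nxt = TRANSITIONS.get((state, event))
            if nxt is None:
                return f"violation at step {i}: '{event}' in state '{state}'"
            state = nxt
        return None

    print(check(["open", "read", "close"]))   # None: trace conforms
    print(check(["open", "close", "read"]))   # violation: read after close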

Checking Concurrent Typestate with Access Permissions in Plural: A Retrospective

Objects often define usage protocols that clients must follow in order for these objects to work properly. In the presence of aliasing, however, it is difficult to check whether all the aliases of an object properly coordinate to enforce the protocol. Plural is a type-based system that can soundly enforce challenging protocols even in concurrent programs. In this paper, we discuss how Plural supports natural idioms for reasoning about programs, leveraging access permissions that express the programmer’s design intent within the code. We trace the predecessors of the design intent idioms used in Plural, discuss how we have found different forms of design intent to be complementary, and outline remaining challenges and directions for future work in the area.
Kevin Bierhoff, Nels E. Beckman, Jonathan Aldrich
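
The following Python fragment is only an intuition-level analogue of access permissions (Plural itself is a static, type-based checker for Java, and its permission lattice is richer than the two kinds shown): whether a reference may mutate an object depends on the permission the reference carries, not merely on the object it points to.

    # Hypothetical illustration of access permissions in the spirit of
    # Plural. Plural checks these rules statically; this dynamic Python
    # sketch only conveys the intuition.

    class ProtocolError(Exception):
        pass

    class Ref:
        """A reference tagged with an access permission.
        'unique'    -- the sole reference; mutation is allowed.
        'immutable' -- aliases may exist, but none may mutate."""
        def __init__(self, obj, permission):
            self.obj = obj
            self.permission = permission

        def mutate(self, field, value):
            if self.permission != "unique":
                raise ProtocolError(
                    f"cannot mutate through a '{self.permission}' reference")
            setattr(self.obj, field, value)

        def split_immutable(self):
            """Give up mutation rights in exchange for shareable aliases."""
            self.permission = "immutable"
            return Ref(self.obj, "immutable")

    class Account:
        def __init__(self):
            self.balance = 0

    owner = Ref(Account(), "unique")
    owner.mutate("balance", 100)       # allowed: unique permission
    alias = owner.split_immutable()    # both references are now immutable
    try:
        alias.mutate("balance", -1)
    except ProtocolError as e:
        print(e)                       # mutation through an alias is rejected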

Data Flow Analysis in Software Reliability

The ways that the methods of data flow analysis can be applied to improve software reliability are described. There is also a review of the basic terminology from graph theory and from data flow analysis in global program optimization. The notation of regular expressions is used to describe actions on data for sets of paths. These expressions provide the basis of a classification scheme for data flow which represents patterns of data flow along paths within subprograms and along paths which cross subprogram boundaries. Fast algorithms, originally introduced for global optimization, are described and it is shown how they can be used to implement the classification scheme. It is then shown how these same algorithms can also be used to detect the presence of data flow anomalies which are symptomatic of programming errors. Finally, some characteristics of and experience with DAVE, a data flow analysis system embodying some of these ideas, are described.
Lloyd D. Fosdick, Leon J. Osterweil
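
The anomalies at the heart of this approach are easy to state: a reference to a variable that is undefined (ur), a definition followed by another definition with no intervening reference (dd), and a definition that is undefined before ever being referenced (du). DAVE classifies the data flow along all paths of a flow graph; the hypothetical Python sketch below scans just one straight-line sequence of actions to show the pattern matching involved.

    # Toy illustration of the data flow anomalies described above:
    #   ur -- a variable is referenced while undefined,
    #   dd -- a variable is redefined with no intervening reference,
    #   du -- a definition is undefined before any reference.
    # This sketch checks one path; the real analysis covers all paths.

    def find_anomalies(actions):
        """actions: list of (op, var) with op in {'d', 'r', 'u'} for
        define / reference / undefine. Returns anomaly reports."""
        state = {}   # var -> last action seen
        reports = []
        for i, (op, var) in enumerate(actions):
            last = state.get(var, "u")   # variables start out undefined
            if op == "r" and last == "u":
                reports.append(f"{i}: ur -- '{var}' referenced while undefined")
            elif op == "d" and last == "d":
                reports.append(f"{i}: dd -- '{var}' redefined before any reference")
            elif op == "u" and last == "d":
                reports.append(f"{i}: du -- '{var}' undefined before any reference")
            state[var] = op
        return reports

    # x is defined twice in a row; y is referenced before it is defined.
    path = [("d", "x"), ("d", "x"), ("r", "y"), ("r", "x")]
    print("\n".join(find_anomalies(path)))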

Anomaly Detection in Concurrent Software by Static Data Flow Analysis

Algorithms are presented for detecting errors and anomalies in programs which use synchronization constructs to implement concurrency. The algorithms employ data flow analysis techniques. First used in compiler object code optimization, the techniques have more recently been used in the detection of variable usage errors in single process programs. By adapting these existing algorithms, the same classes of variable usage errors can be detected in concurrent process programs. Important classes of errors unique to concurrent process programs are also described, and algorithms for their detection are presented.
Richard N. Taylor, Leon J. Osterweil
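
One of the anomaly classes unique to concurrent programs is unsynchronized access: two tasks that may run concurrently both act on a shared variable, at least one action is a definition, and no synchronization orders the accesses. The paper's algorithms work on flow graphs of the concurrent program; the hypothetical Python sketch below compares flat per-task access summaries instead, just to illustrate the condition being detected.

    # Hypothetical sketch of one concurrent anomaly class: accesses to a
    # shared variable from tasks that synchronization does not serialize,
    # where at least one access is a definition.

    from itertools import combinations

    def parallel_anomalies(tasks, serialized):
        """tasks: {task: [(op, var), ...]} with op in {'d', 'r'}.
        serialized: set of task pairs that synchronization forces to run
        one after the other (in either order)."""
        reports = []
        for (ta, acc_a), (tb, acc_b) in combinations(tasks.items(), 2):
            if (ta, tb) in serialized or (tb, ta) in serialized:
                continue   # synchronization orders these tasks
            for op_a, va in acc_a:
                for op_b, vb in acc_b:
                    if va == vb and "d" in (op_a, op_b):
                        reports.append(
                            f"unsynchronized {op_a}/{op_b} on '{va}': {ta} || {tb}")
        return reports

    tasks = {"updater": [("d", "total")], "reporter": [("r", "total")]}
    print(parallel_anomalies(tasks, set()))                      # anomaly reported
    print(parallel_anomalies(tasks, {("updater", "reporter")}))  # [] -- serialized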

Cecil: A Sequencing Constraint Language for Automatic Static Analysis Generation

This paper presents a flexible and general mechanism for specifying problems relating to the sequencing of events and mechanically translating them into dataflow analysis algorithms capable of solving those problems. Dataflow analysis has been used for quite some time in compiler code optimization. It has recently gained increasing attention as a way of statically checking for the presence or absence of errors and as a way of guiding the test case selection process. Most static analyzers, however, have been custom-built to search for fixed, and often quite limited, classes of dataflow conditions. We show that the range of sequences for which it is interesting and worthwhile to search is actually quite broad and diverse. We create a formalism for specifying this diversity of conditions. We then show that these conditions can be modeled essentially as dataflow analysis problems for which effective solutions are known and further show how these solutions can be exploited to serve as the basis for mechanical creation of analyzers for these conditions.
Kurt M. Olender, Leon J. Osterweil
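
To see what a generated analyzer decides, consider a constraint such as "on each file, the events must match open read* close", encoded as a finite automaton; the question is whether every path through the flow graph satisfies it. Cecil's generated analyzers answer this with efficient dataflow (fixpoint) algorithms; the hypothetical Python sketch below instead enumerates the paths of a tiny acyclic flow graph outright, which is affordable only at toy scale.

    # Hypothetical sketch: check a sequencing constraint on all paths of a
    # small acyclic flow graph by enumeration. Real Cecil-generated
    # analyzers use dataflow algorithms rather than path enumeration.

    # Constraint as a DFA: events on a file must match  open read* close.
    DFA = {
        ("start", "open"): "opened",
        ("opened", "read"): "opened",
        ("opened", "close"): "done",
    }
    ACCEPTING = {"done"}

    # Flow graph: node -> (event or None, successors).
    CFG = {
        "entry": (None, ["a"]),
        "a": ("open", ["b", "c"]),
        "b": ("read", ["d"]),
        "c": ("close", ["d"]),   # early close: this branch closes twice
        "d": ("close", ["exit"]),
        "exit": (None, []),
    }

    def violating_paths(node="entry", state="start", path=()):
        event, succs = CFG[node]
        if event is not None:
            state = DFA.get((state, event), "error")
        path = path + (node,)
        if not succs:
            return [] if state in ACCEPTING else [path]
        return [p for s in succs for p in violating_paths(s, state, path)]

    for p in violating_paths():
        print(" -> ".join(p))   # entry -> a -> c -> d -> exit violates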

The Software Lifecycle

Frontmatter

Lifecycle Environments

A Retrospective View of the Contributions of Leon J. Osterweil
Throughout his career, Leon Osterweil has made significant contributions that have impacted both research and the state of the practice in development environments. Initially his focus was on programming environments, mostly addressing issues needed to support his work in program analysis. Later his focus expanded to software lifecycle issues, such as flexible component interaction models, efficient system regeneration, and the use of process definitions as the major coordination mechanism to orchestrate the interactions among collections of tools, hardware devices, and human agents. His current research continues to address environment issues, but the emphasis now is on supporting continuous process improvement by providing process languages, execution and simulation capabilities, and an assortment of analysis tools for evaluating the effectiveness, safety, and vulnerabilities of processes in a range of domains, from healthcare to digital government to scientific workflow.
Lori A. Clarke

Software Architecture, (In)consistency, and Integration

As other chapters in this volume demonstrate, Leon Osterweil has made critical contributions to software analysis and testing. That stream of contributions began with his work in the DAVE project, which produced a static data flow analysis tool capable of analyzing FORTRAN programs. What I am sure Lee did not recognize at the time was that this work also launched him on a path to making critical contributions in environment architectures, inconsistency management, and integration technologies. These contributions arose from his work with Toolpack, Odin, the Arcadia project, and their recent successors. This chapter traces some of these key results, highlighting not only Lee’s contributions, but places where they remain to be fully exploited.
Richard N. Taylor

Process Programming in the Service Age: Old Problems and New Challenges

Most modern software systems have a decentralized, modular, distributed, and dynamic structure. They are often composed of heterogeneous components and operate on heterogeneous infrastructures. They are increasingly built by composing services; that is, components owned (designed, deployed, maintained, and run) by remote and independent stakeholders. The quality of service perceived by the clients of such a composite application depends not only on the individual services integrated in it, but also on the way they are composed. At the same time, the world in which applications are situated (in particular, the remote services upon which they can rely) changes continuously. These requirements demand that applications be able to self-adapt to dynamic change, especially when they need to run for a long time without interruption. This, in turn, affects the way service compositions are defined in the ad-hoc process languages devised to support them. This paper discusses how the service setting has revamped the field of process (workflow) programming: which old problems identified in the past still exist now, how we can learn from past work, and where and why new challenges instead require additional research.
Gianpaolo Cugola, Carlo Ghezzi, Leandro Sales Pinto
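
A minimal Python sketch of the self-adaptation at issue, with all service names hypothetical: instead of hard-wiring one partner at design time, a composition step is bound to a set of interchangeable services and rebinds dynamically when the preferred one fails.

    # Hypothetical sketch: a composition step bound to equivalent services
    # in preference order, rebinding on failure rather than aborting.

    class ServiceUnavailable(Exception):
        pass

    def primary_quote(order):    # stands in for a remote partner that is down
        raise ServiceUnavailable("primary quoting service timed out")

    def backup_quote(order):     # an equivalent service from another provider
        return {"order": order, "price": 42.0}

    def invoke_with_rebinding(candidates, request):
        """Try equivalent services in preference order."""
        for service in candidates:
            try:
                return service(request)
            except ServiceUnavailable as e:
                print(f"rebinding: {service.__name__} failed ({e})")
        raise ServiceUnavailable("no candidate service is available")

    print(invoke_with_rebinding([primary_quote, backup_quote], "order-17"))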

Toolpack—An Experimental Software Development Environment Research Project

This paper discusses the goals and methods of the Toolpack project and in this context discusses the architecture and design of the software system being produced as the focus of the project. Toolpack is presented as an experimental activity in which a large software tool environment is being created for the purpose of general distribution and then careful study and analysis. The paper begins by explaining the motivation for building integrated tool sets. It then proceeds to explain the basic requirements that an integrated system of tools must satisfy in order to be successful and to remain useful both in practice and as an experimental object. The paper then summarizes the tool capabilities that will be incorporated into the environment. It then goes on to present a careful description of the actual architecture of the Toolpack integrated tool system. Finally the Toolpack project experimental plan is presented, and future plans and directions are summarized.
Leon J. Osterweil

A Mechanism for Environment Integration

This paper describes research associated with the development and evaluation of Odin—an environment integration system based on the idea that tools should be integrated around a centralized store of persistent software objects. The paper describes this idea in detail and then presents the Odin architecture, which features such notions as the typing of software objects, composing tools out of modular tool fragments, optimizing the storage and rederivation of software objects, and isolating tool interconnectivity information in a single centralized object. The paper then describes some projects that have used Odin to integrate tools on a large scale. Finally, it discusses the significance of this work and the conclusions that can be drawn about superior software environment architectures.
Geoffrey Clemm, Leon Osterweil
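
The central idea lends itself to a compact sketch. Odin's actual type system, derivation graphs, and storage optimizations are far richer than this hypothetical Python analogue, which shows only the pattern: tools registered as derivations between typed objects in one store, with derived objects rederived on demand and cached.

    # Hypothetical analogue of a centralized store of typed software
    # objects: tools are registered as derivations between object types,
    # and derived objects are computed on demand and cached.

    class ObjectStore:
        def __init__(self):
            self.sources = {}      # (name, type) -> contents supplied directly
            self.derivations = {}  # target type -> (source type, tool)
            self.cache = {}        # (name, type) -> derived contents

        def put(self, name, typ, contents):
            self.sources[(name, typ)] = contents
            self.cache.clear()     # a changed source invalidates derived objects

        def register(self, src_type, dst_type, tool):
            self.derivations[dst_type] = (src_type, tool)

        def get(self, name, typ):
            if (name, typ) in self.sources:
                return self.sources[(name, typ)]
            if (name, typ) not in self.cache:
                src_type, tool = self.derivations[typ]
                self.cache[(name, typ)] = tool(self.get(name, src_type))
            return self.cache[(name, typ)]

    store = ObjectStore()
    store.register("fortran", "tokens", lambda src: src.split())
    store.register("tokens", "token_count", len)
    store.put("prog", "fortran", "DO 10 I = 1, N")
    print(store.get("prog", "token_count"))   # derives tokens, then counts: 6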

Foundations for the Arcadia Environment Architecture

Early software environments have supported a narrow range of activities (programming environments) or else been restricted to a single “hard-wired” software development process. The Arcadia research project is investigating the construction of software environments that are tightly integrated, yet flexible and extensible enough to support experimentation with alternative software processes and tools. This has led us to view an environment as being composed of two distinct, cooperating parts. One is the variant part, consisting of process programs and the tools and objects used and defined by those programs. The other is the fixed part, or infrastructure, supporting creation, execution, and change to the constituents of the variant part. The major components of the infrastructure are a process programming language and interpreter, object management system, and user interface management system. Process programming facilitates precise definition and automated support of software development and maintenance activities. The object management system provides typing, relationships, persistence, distribution and concurrency control capabilities. The user interface management system mediates communication between human users and executing processes, providing pleasant and uniform access to all facilities of the environment. Research in each of these areas and the interaction among them is described.
Richard N. Taylor, Frank C. Belz, Lori A. Clarke, Leon Osterweil, Richard W. Selby, Jack C. Wileden, Alexander L. Wolf, Michal Young

Issues Encountered in Building a Flexible Software Development Environment

Lessons from the Arcadia Project
This paper presents some of the more significant technical lessons that the Arcadia project has learned about developing effective software development environments. The principal components of the Arcadia-1 architecture are capabilities for process definition and execution, object management, user interface development and management, measurement and evaluation, language processing, and analysis and testing. In simultaneously and cooperatively developing solutions in these areas we learned several key lessons. Among them: the need to combine and apply heterogeneous componentry, multiple techniques for developing components, the pervasive need for rich type models, the need for supporting dynamism (and at what granularity), the role and value of concurrency, and the role and various forms of event-based control integration mechanisms. These lessons are explored in the paper.

Software Process

Frontmatter

From Process Programming to Process Engineering

Osterweil proposed the idea of processes as a kind of software in 1986. It arose from prior work on software tools, tool integration, and development environments, and from a desire to improve the specification and control of software development activities. The vision of process programming was an inspiring one, directly leading to ideas about process languages, process environments, process science (both pure and applied), and to opportunities for process analysis and simulation. Osterweil, his colleagues, and a thriving community of researchers worldwide have worked on these and related ideas for 25 years now, with many significant results. Additionally, as Osterweil and others have shown, ideas and approaches that originated in the context of software process are applicable in other domains, such as science, government, and medicine. In light of this, the future of process programming looks as exciting and compelling as ever.
Stanley M. Sutton

The Mechatronic UML Development Process

The advanced functions of mechatronic systems today are essentially realized by software that controls complex processes and enables the communication and coordination of multiple system components. We have developed Mechatronic UML, a comprehensive technique for the model-based development of hybrid real-time component-based systems. Mechatronic UML is based on a well-defined subset of UML diagrams, formal analysis and composition methods. Vital for the successful development with Mechatronic UML, however, is a systematic development process, on which we report in this paper.
Joel Greenyer, Jan Rieke, Wilhelm Schäfer, Oliver Sudmann

Software Processes are Software Too

The major theme of this meeting is the exploration of the importance of process as a vehicle for improving both the quality of software products and the way in which we develop and evolve them. In beginning this exploration it seems important to spend at least a short time examining the nature of process and convincing ourselves that this is indeed a promising vehicle.
Leon Osterweil

Software Processes Are Software Too, Revisited

An Invited Talk on the Most Influential Paper of ICSE 9
The ICSE 9 paper, “Software Processes are Software Too,” suggests that software processes are themselves a form of software and that there are considerable benefits that will derive from basing a discipline of software process development on the more traditional discipline of application software development. This paper attempts to clarify some misconceptions about this original ICSE 9 suggestion and summarizes some research carried out over the past ten years that seems to confirm the original suggestion. The paper then goes on to map out some future research directions that seem indicated. The paper closes with some ruminations about the significance of the controversy that has continued to surround this work.
Leon J. Osterweil

Language Constructs for Managing Change in Process-Centered Environments

Change is pervasive during software development, affecting objects, processes, and environments. In process-centered environments, change management can be facilitated by software-process programming, which formalizes the representation of software products and processes using software-process programming languages (SPPLs). To fully realize this goal, SPPLs should include constructs that specifically address the problems of change management. These problems include lack of representation of inter-object relationships, weak semantics for inter-object relationships, visibility of implementations, lack of formal representation of software processes, and reliance on programmers to manage change manually.
APPL/A is a prototype SPPL that addresses these problems. APPL/A is an extension to Ada. The principal extensions include abstract, persistent relations with programmable implementations, relation attributes that may be composite and derived, triggers that react to relation operations, optionally enforceable predicates on relations, and five composite statements with transaction-like capabilities.
APPL/A relations and triggers are especially important for the problems raised here. Relations enable inter-object relationships to be represented explicitly and derivation dependencies to be maintained automatically. Relation bodies can be programmed to implement alternative storage and computation strategies without affecting users of relation specifications. Triggers can react to changes in relations, automatically propagating data, invoking tools, and performing other change management tasks. Predicates and the transaction-like statements support change management in the face of evolving standards of consistency. Together, these features mitigate many of the problems that complicate change management in software processes and process-centered environments.
Stanley M. Sutton, Dennis Heimbigner, Leon J. Osterweil
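
Since APPL/A extends Ada, the following Python fragment is only an analogue, sketching two of the constructs named above: a relation recording inter-object relationships, and a trigger that reacts to operations on a relation to propagate change automatically.

    # Hypothetical Python analogue of APPL/A relations and triggers.

    class Relation:
        """A relation of tuples, with trigger procedures run after insert."""
        def __init__(self, name):
            self.name = name
            self.tuples = set()
            self.on_insert = []

        def insert(self, tup):
            self.tuples.add(tup)
            for trigger in self.on_insert:
                trigger(tup)

    derives = Relation("derives")     # (source module, derived object)
    modified = Relation("modified")   # (source module,)

    def propagate(tup):
        """Trigger: when a source is recorded as modified, rederive its
        dependents according to the 'derives' relation."""
        (source,) = tup
        for src, target in derives.tuples:
            if src == source:
                print(f"rederiving {target} because {source} changed")

    modified.on_insert.append(propagate)
    derives.insert(("parser.adb", "parser.o"))
    modified.insert(("parser.adb",))  # -> rederiving parser.o because parser.adb changed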

Using Little-JIL to Coordinate Agents in Software Engineering

Little-JIL, a new language for programming the coordination of agents, is an executable, high-level process programming language with a formal (yet graphical) syntax and rigorously defined operational semantics. Little-JIL is based on two main hypotheses. The first is that the specification of coordination control structures is separable from other process programming language issues. Little-JIL provides a rich set of control structures while relying on separate systems for support in areas such as resource, artifact, and agenda management. The second is that processes can be executed by agents who know how to perform their tasks but can benefit from coordination support. Accordingly, each step in Little-JIL is assigned to an execution agent (human or automated): agents are responsible for initiating steps and performing the work associated with them. This approach has so far proven effective in allowing us to clearly and concisely express the agent coordination aspects of a wide variety of software, workflow, and other processes.
Alexander Wise, Aaron G. Cass, Barbara Staudt Lerner, Eric K. McCall, Leon J. Osterweil, Stanley M. Sutton
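
Little-JIL's syntax is graphical and its step semantics are far richer than anything shown here, so the following Python fragment is only a hypothetical textual sketch of the separation the abstract describes: coordination (step decomposition and ordering) belongs to the process program, while the work of each leaf step belongs to its assigned agent, human or automated.

    # Hypothetical textual sketch of agent coordination in the style of
    # Little-JIL: the interpreter owns ordering, agents own the work.

    def human_agent(step):
        print(f"[human]     performing '{step}'")

    def tool_agent(step):
        print(f"[automated] performing '{step}'")

    # A step is a leaf assigned to an agent, or a sequential composite.
    PROCESS = ("seq", "review change", [
        ("leaf", "run regression tests", tool_agent),
        ("leaf", "inspect diff", human_agent),
        ("leaf", "merge change", tool_agent),
    ])

    def execute(step):
        if step[0] == "leaf":
            _, name, agent = step
            agent(name)           # the agent performs the actual work
        else:
            _, name, substeps = step
            print(f"step '{name}': sequential")
            for sub in substeps:  # coordination lives in the process program
                execute(sub)

    execute(PROCESS)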

Analyzing Medical Processes

This paper shows how software engineering technologies used to define and analyze complex software systems can also be effective in detecting defects in human-intensive processes used to administer healthcare. The work described here builds upon earlier work demonstrating that healthcare processes can be defined precisely. This paper describes how finite-state verification can be used to help find defects in such processes as well as find errors in the process definitions and property specifications. The paper includes a detailed example, based upon a real-world process for transfusing blood, where the process defects that were found led to improvements in the process.
Bin Chen, George S. Avrunin, Elizabeth A. Henneman, Lori A. Clarke, Leon J. Osterweil, Philip L. Henneman
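
As a toy illustration of finite-state verification in this setting (the paper itself verifies precisely defined process models with mature analysis tools; the process and property below are hypothetical): exhaustively explore every ordering a small transfusion process permits, and check the safety property that blood is never administered before the patient's identity is verified.

    # Toy finite-state verification in Python: enumerate every execution
    # order the model allows and report counterexamples to a safety property.

    from itertools import permutations

    FIXED_PREFIX = ["obtain order"]
    REORDERABLE = ["verify patient id", "pick up blood unit",
                   "administer blood"]

    def violates(trace):
        """Property: 'administer blood' must follow 'verify patient id'."""
        verified = False
        for step in trace:
            if step == "verify patient id":
                verified = True
            if step == "administer blood" and not verified:
                return True
        return False

    for middle in permutations(REORDERABLE):
        trace = FIXED_PREFIX + list(middle)
        if violates(trace):
            print("counterexample:", " -> ".join(trace))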