
About this book

This book contains a most comprehensive text that presents syntax-directed and compositional methods for the formal verification of programs. The approach is not language-bounded in the sense that it covers a large variety of programming models and features that appear in most modern programming languages. It covers the classes of sequential and parallel, deterministic and nondeterministic, distributed and object-oriented programs. For each of these classes it presents the various criteria of correctness that are relevant for them, such as interference freedom, deadlock freedom, and appropriate notions of liveness for parallel programs. Special proof rules appropriate for each class of programs are also presented. In spite of the diversity due to the rich program classes considered, there exists a uniform underlying theory of verification which is syntax-oriented and promotes compositional approaches to verification, leading to scalability of the methods. The text strikes the proper balance between mathematical rigor and didactic introduction of increasingly complex rules in an incremental manner, adequately supported by state-of-the-art examples. As a result it can serve as a textbook for a variety of courses on different levels and of varying durations. It can also serve as a reference book for researchers in the theory of verification, in particular since it contains much material that never before appeared in book form. This is especially true for the treatment of object-oriented programs, which is entirely novel and strikingly elegant.

Table of Contents

Frontmatter

In the Beginning

Frontmatter

Chapter 1. Introduction

Abstract
Program verification is a systematic approach to proving the correctness of programs. Correctness means that the programs enjoy certain desirable properties. For sequential programs these properties are delivery of correct results and termination. For concurrent programs, that is, those with several active components, the properties of interference freedom, deadlock freedom and fair behavior are also important. The emphasis in this book is on verification of concurrent programs, in particular of parallel and distributed programs where the components communicate either via shared variables or explicit message passing. Such programs are usually difficult to design, and errors are more a rule than an exception. Of course, we also consider sequential programs because they occur as components of concurrent ones.

Chapter 2. Preliminaries

Abstract
In this chapter we explain the basic concepts and notations used throughout this book. We recommend that the reader move now to Chapter 3 and consult the individual sections of this chapter whenever needed. This chapter is organized as follows. In Section 2.1, we list the standard mathematical notation for sets, tuples, relations, functions, sequences, strings, proofs, induction and grammars.

Deterministic Programs

Frontmatter

Chapter 3. While Programs

Abstract
In a deterministic program there is at most one instruction to be executed “next,” so that from a given initial state only one execution sequence is generated. In classical programming languages like Pascal, only deterministic programs can be written. In this chapter we study a small class of deterministic programs, called while programs, which are included in all other classes of programs studied in this book. We start by defining the syntax (Section 3.1), then introduce an operational semantics (Section 3.2), and subsequently study program verification by introducing proof systems that allow us to prove various program properties, proving the soundness of the introduced proof systems (Section 3.3). This pattern is repeated for all classes of programs studied in this book. We introduce here two semantics: partial correctness and total correctness semantics. The former does not take into account the possibility of divergence, while the latter does.
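The distinction between the two semantics can be sketched in Python rather than the book's while-language. The program below (a hypothetical example, not one from the book) satisfies the partial correctness property "if it terminates, x = 0", but not the corresponding total correctness property, since it diverges from odd initial values:

```python
def run(x, max_steps=1000):
    """Execute the while program  `while x != 0 do x := x - 2 od`
    for at most max_steps iterations; return the final value of x,
    or None if the program (apparently) diverges."""
    steps = 0
    while x != 0:
        if steps >= max_steps:
            return None  # divergence, e.g. for odd or negative x
        x -= 2
        steps += 1
    return x

# Partial correctness of {true} S {x = 0}: every terminating run ends with x = 0.
assert run(6) == 0
# Total correctness fails: from x = 3 the program diverges.
assert run(3) is None
```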

Chapter 4. Recursive Programs

Abstract
While programs, discussed in the previous chapter, are of course extremely limited. Focusing on them allowed us to introduce two basic topics which form the subject of this book: semantics and program verification. We now proceed by gradually introducing more powerful programming concepts and extending our analysis to them. Every realistic programming language offers some way of structuring programs. Historically, the first concept allowing us to do so was procedures. To systematically study their verification we proceed in two steps. In this chapter we introduce parameterless procedures and in the next chapter procedures with parameters. This allows us to focus first on recursion, an important and powerful concept. To deal with it we need to modify appropriately the methods introduced in the previous chapter.

Chapter 5. Recursive Programs with Parameters

Abstract
Now that we understand the semantics and verification of recursive procedures without parameters, we extend our study to the case of recursive procedures with parameters. The presentation follows the one of the last chapter. In Section 5.1 we introduce the syntax of recursive procedures with parameters. We deal here with the most common parameter mechanism, namely call-by-value. To properly capture its meaning we need to introduce a block statement that allows us to distinguish between local and global variables. In Section 5.2 we introduce the operational semantics that appropriately modifies the semantics of recursive procedures from the last chapter. The block statement is used to define the meaning of procedure calls. Then, in Section 5.3 we focus on program verification. The approach is a modification of the approach from the previous chapter, where the additional difficulty consists of a satisfactory treatment of parameters. Finally, as a case study, we consider in Section 5.5 the correctness of the Quicksort program.
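The essence of call-by-value, as modeled by the block statement, is that the formal parameter becomes a local variable initialized with the actual parameter's value, so assignments to it never affect the caller. A small Python illustration (names are hypothetical):

```python
def square(v):
    # The block statement introduces a local u initialized with the
    # value of the actual parameter:  begin local u := v; ... end.
    # Assignments to u are invisible to the caller.
    u = v
    u = u * u
    return u

x = 7
y = square(x)
assert (x, y) == (7, 49)   # x is unchanged: call-by-value
```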

Chapter 6. Object-Oriented Programs

Abstract
In this chapter we study the verification of object-oriented programs. We focus on the following main characteristics of objects:
  • objects possess (and encapsulate) their own local variables,
  • objects interact via method calls,
  • objects can be dynamically created.
In contrast to the formal parameters of procedures and the local variables of block statements which only exist temporarily, the local variables of an object exist permanently. To emphasize the difference between these temporary variables and the local variables of an object, the latter are called instance variables. The local state of an object is a mapping that assigns values to its instance variables. Each object represents its local state by a pointer to it. Encapsulation means that the instance variables of an object cannot be directly accessed by other objects; they can be accessed only via method calls of the object.
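All three characteristics, and the encapsulation of instance variables behind method calls, can be sketched directly in Python (the class name and methods are illustrative, not the book's notation):

```python
class Counter:
    """An object whose local state consists of one instance
    variable, accessible only via method calls (encapsulation)."""

    def __init__(self):
        self._count = 0        # instance variable: exists as long as the object

    def inc(self):             # method call: the only way to update _count
        self._count += 1

    def value(self):           # method call: the only way to read _count
        return self._count

c = Counter()                  # dynamic object creation
c.inc()
c.inc()
assert c.value() == 2          # local state observed via a method call
```

Unlike the local variables of a block statement, `_count` persists for the lifetime of the object rather than for the duration of one call.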

Parallel Programs

Frontmatter

Chapter 7. Disjoint Parallel Programs

Abstract
As we have seen in Chapter 1, concurrent programs can be quite difficult to understand in detail. That is why we introduce and study them in several stages. In this part of the book we study parallel programs, and in this chapter we investigate disjoint parallelism, the simplest form of parallelism. Disjointness means here that the component programs have only reading access to common variables. Many phenomena of parallel programs can already be explained in this setting. In Chapter 8 we study parallelism with shared variables and in Chapter 9 we add synchronization to shared variable parallelism. Disjoint parallelism provides a good preparation for studying these extensions. Disjoint parallelism is also a good starting point for studying distributed programs in Chapter 11.
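A minimal Python sketch of disjoint parallelism using threads (variable names are illustrative): both components may read the common variable n, but each writes only to its own output, so no interference is possible and no synchronization is needed:

```python
import threading

n = 10                                   # common variable: read-only for both
out = {"double": None, "square": None}   # each component writes its own slot

def double():
    out["double"] = 2 * n    # reads n, writes only out["double"]

def square():
    out["square"] = n * n    # reads n, writes only out["square"]

t1 = threading.Thread(target=double)
t2 = threading.Thread(target=square)
t1.start(); t2.start()
t1.join(); t2.join()
assert out == {"double": 20, "square": 100}
```

Because the write sets are disjoint, the final state is the same for every interleaving.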

Chapter 8. Parallel Programs with Shared Variables

Abstract
Disjoint parallelism is a rather restricted form of concurrency. In applications, concurrently operating components often share resources, such as a common database, a line printer or a data bus. Sharing is necessary when resources are too costly to have one copy for each component, as in the case of a large database. Sharing is also useful to establish communication between different components, as in the case of a data bus. This form of concurrency can be modeled by means of parallel programs with shared variables, variables that can be changed and read by several components. Design and verification of parallel programs with shared variables are much more demanding than those of disjoint parallel programs. The reason is that the individual components of such a parallel program can interfere with each other by changing the shared variables. To restrict the points of interference, we consider so-called atomic regions, whose execution cannot be interrupted by other components.

Chapter 9. Parallel Programs with Synchronization

Abstract
For many applications the classes of parallel programs considered so far are not sufficient. We need parallel programs whose components can synchronize with each other. That is, components must be able to suspend their execution and wait, or get blocked, until the execution of the other components has changed the shared variables in such a way that a certain condition is fulfilled. To formulate such waiting conditions we extend the program syntax in Section 9.1 by a synchronization construct, the await statement introduced in Owicki and Gries [1976a]. This construct permits a very flexible way of programming, but at the same time opens the door for subtle programming errors where the program execution ends in a deadlock. This is a situation where some components of a parallel program have not terminated and all nonterminated components are blocked because they wait eternally for a certain condition to become satisfied. The formal definition is given in Section 9.2 on semantics.
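The behavior of an await statement can be approximated in Python with a condition variable: the component blocks until the waiting condition holds, then executes its body without interference. A producer/consumer sketch (names are illustrative, and `Condition.wait_for` stands in for the book's await B then S end):

```python
import threading

cond = threading.Condition()
buf = []                     # shared variable

def consumer(result):
    with cond:
        # await len(buf) > 0 then result.append(buf.pop(0)) end
        cond.wait_for(lambda: len(buf) > 0)   # suspend until condition holds
        result.append(buf.pop(0))             # body runs without interference

def producer():
    with cond:
        buf.append(42)       # change the shared variable ...
        cond.notify()        # ... and wake a blocked component

result = []
t = threading.Thread(target=consumer, args=(result,))
t.start()
producer()
t.join()
assert result == [42]
```

If no producer ever ran, the consumer would stay blocked forever: exactly the deadlock scenario the chapter's proof rules address.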

Nondeterministic and Distributed Programs

Frontmatter

Chapter 10. Nondeterministic Programs

Abstract
In the previous chapters we have seen that parallel programs introduce nondeterminism: from a given initial state several computations resulting in different final states may be possible. This nondeterminism is implicit; that is, there is no explicit programming construct for expressing it. In this chapter we introduce a class of programs that enables an explicit description of nondeterminism. This is the class of Dijkstra's [1975,1976] guarded commands; it represents a simple extension of the while programs considered in Chapter 3. Dijkstra's guarded commands are also a preparation for the study of distributed programs in Chapter 11.
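A guarded-command do-loop repeatedly selects, nondeterministically, some command whose guard holds, and terminates when no guard holds. A Python interpreter sketch, exercised on Euclid's gcd written as do x > y → x := x − y ▯ y > x → y := y − x od (the encoding of states as dictionaries is our choice, not the book's):

```python
import random

def do_od(state, guarded_commands):
    """Execute Dijkstra's do-loop: repeatedly pick one enabled
    guarded command at random; stop when no guard is enabled."""
    while True:
        enabled = [cmd for guard, cmd in guarded_commands if guard(state)]
        if not enabled:
            return state
        random.choice(enabled)(state)   # nondeterministic choice

state = {"x": 12, "y": 18}
do_od(state, [
    (lambda s: s["x"] > s["y"], lambda s: s.__setitem__("x", s["x"] - s["y"])),
    (lambda s: s["y"] > s["x"], lambda s: s.__setitem__("y", s["y"] - s["x"])),
])
assert state == {"x": 6, "y": 6}        # both variables end at gcd(12, 18)
```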

Chapter 11. Distributed Programs

Abstract
Many real systems consist of a number of physically distributed components that work independently using their private storage, but also communicate from time to time by explicit message passing. Such systems are called distributed systems. Distributed programs are abstract descriptions of distributed systems. A distributed program consists of a collection of processes that work concurrently and communicate by explicit message passing. Each process can access a set of variables which are disjoint from the variables that can be changed by any other process.
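A Python sketch of two processes with disjoint private variables that interact only through a channel (here modeled by a queue; the end-of-stream marker and all names are our assumptions, not the book's notation):

```python
import queue
import threading

channel = queue.Queue()        # the only shared object: a message channel

def sender():
    local = [1, 2, 3]          # private storage of the sender
    for v in local:
        channel.put(v)         # output command: send v on the channel
    channel.put(None)          # end-of-stream marker (an assumption)

def receiver(total):
    while True:
        v = channel.get()      # input command: receive from the channel
        if v is None:
            break
        total[0] += v          # private storage of the receiver

total = [0]
t = threading.Thread(target=receiver, args=(total,))
t.start()
sender()
t.join()
assert total[0] == 6           # 1 + 2 + 3 received via message passing
```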

Chapter 12. Fairness

Abstract
As we have seen in the zero search example of Chapter 1, fairness is an important hypothesis in the study of parallel programs. Fairness models the idea of “true parallelism,” where every component of a parallel program progresses with unknown, but positive speed. In other words, every component eventually executes its next enabled atomic instruction. Semantically, fairness can be viewed as an attempt at reducing the amount of nondeterminism in the computations of programs. Therefore fairness can be studied in any setting where nondeterminism arises. In this chapter we study fairness in the simplest possible setting of this book, the class of nondeterministic programs studied in Chapter 10.
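One way to see fairness as a restriction on the scheduler is to compare an arbitrary scheduler, which may ignore a component forever, with a round-robin one, which gives every unfinished component a step in each round and is therefore fair. A Python sketch of the fair case (the scheduler and component encoding are illustrative assumptions):

```python
def fair_run(components, max_rounds=1000):
    """Round-robin (hence fair) scheduler: in every round, each
    unfinished component executes one atomic step. A step returns
    True once its component has terminated."""
    done = [False] * len(components)
    rounds = 0
    while not all(done) and rounds < max_rounds:
        for i, step in enumerate(components):
            if not done[i]:
                done[i] = step()
        rounds += 1
    return all(done)

def make_component(steps_needed):
    """A component that terminates after a fixed number of steps."""
    state = {"n": 0}
    def step():
        state["n"] += 1
        return state["n"] >= steps_needed
    return step

# Under fair scheduling both components terminate, even though an
# unfair scheduler could run the first component forever and starve
# the second.
assert fair_run([make_component(5), make_component(1)]) is True
```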

Backmatter
