
About this book

" .. .1 always worked with programming languages because it seemed to me that until you could understand those, you really couldn't understand computers. Understanding them doesn't really mean only being able to use them. A lot of people can use them without understanding them." Christopher Strachey The development of programming languages is one of the finest intellectual achievements of the new discipline called Computer Science. And yet, there is no other subject that I know of, that has such emotionalism and mystique associated with it. Thus my attempt to write about this highly charged subject is taken with a good deal of caution. Nevertheless, in my role as Professor I have felt the need for a modern treatment of this subject. Traditional books on programming languages are like abbreviated language manuals, but this book takes a fundamentally different point of view. I believe that the best possible way to study and understand today's programming languages is by focusing on a few essential concepts. These concepts form the outline for this book and include such topics as variables, expressions, statements, typing, scope, procedures, data types, exception handling and concurrency. By understanding what these concepts are and how they are realized in different programming languages, one arrives at a level of comprehension far greater than one gets by writing some programs in a vi vB Preface few languages. Moreover, knowledge of these concepts provides a framework for understanding future language designs.

Table of contents

Frontmatter

Chapter 1. The Evolution of Programming Languages

Abstract
A programming language is a systematic notation by which we describe computational processes to others. By a computational process I mean nothing more than a set of steps which a machine can perform for solving a task. To describe the solution of a problem to a computer, we need to know a set of commands that the computer can understand and execute. Given the diversity of tasks that computers can do today, people naturally find it surprising that the computer’s built-in abilities are so primitive. When a computer comes off the assembly line, it will usually be able to do only arithmetic and logical operations, input and output, and some “control” functions. These capabilities constitute the machine language of the computer. But because this language is so far away from the way people think and want to describe solutions to problems, so-called high-level programming languages have been conceived. These languages use less primitive notations than machine language and hence they require a program which will interpret their meaning to the computer. This program is not generally part of the computer’s circuitry, but is provided as part of the system software which is included with the computer. The purpose of this book is to study how these programming languages are designed to meet the needs of the human and the machine.
Ellis Horowitz

Chapter 2. The Challenge of Programming Language Design

Abstract
What makes a programming language successful? If we study the history of this field, we quickly come to the conclusion that success is not based strictly on technical merit. In the early days of computing FORTRAN was sponsored by a major computer company, while the use of COBOL was mandated by the largest user of computers, the U.S. Department of Defense. No doubt these sponsorships helped greatly. But, though LISP has had no major industrial supporters, it has continued to have an enthusiastic group of users, especially in the artificial intelligence community. And Pascal, which was developed as a teaching tool, is fast becoming the standard language available on microprocessors and the major language taught in academic departments of computer science. On the other hand, PL/1 did not succeed to the extent that was originally hoped, despite the fact that it had influential backers. So, as in many fields, there is no one formula for success. In this chapter, rather than focusing on a formula for success, we will focus on some criteria which can be used to evaluate the quality of any programming language design.
Ellis Horowitz

Chapter 3. Defining Syntax

Abstract
Every language is based upon an alphabet of characters. The English alphabet for example contains 26 letters whose capitals are written
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
The digits are generally written in Arabic form
0 1 2 3 4 5 6 7 8 9
The character set for a programming language typically begins with these 36 characters.
Ellis Horowitz
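
As a supplement to this abstract, here is a minimal sketch in Python (not from the book) of how a language definition might start from such a character set and state a simple lexical rule; the identifier rule used here is a common convention and only an assumption.

import re

# The 36-character starting alphabet mentioned above: 26 capital letters plus 10 digits.
LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
DIGITS = "0123456789"
ALPHABET = set(LETTERS + DIGITS)

# An assumed, typical identifier rule: a letter followed by any mix of letters and digits.
IDENTIFIER = re.compile(r"[A-Z][A-Z0-9]*")

def is_identifier(token: str) -> bool:
    """Check that a token is drawn from the alphabet and matches the identifier rule."""
    return set(token) <= ALPHABET and IDENTIFIER.fullmatch(token) is not None

print(is_identifier("X25"))   # True
print(is_identifier("25X"))   # False: must start with a letter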

Chapter 4. Variables, Expressions and Statements

Abstract
The central characteristic of imperative programming languages is that they allow the creation of variables. An identifier (or name) is usually some combination of alphabetic and numeric characters of restricted length. But what is a variable and how is it distinguished from a name? As we will see a name is merely one component of a variable. Suppose we have the assignment statement
Ellis Horowitz
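
To illustrate the distinction the abstract raises between a name and a variable, the following Python sketch (purely illustrative, not the book's example) models a variable as a binding from a name to a storage location that holds a value; the Store class and the environment dictionary are hypothetical devices for this sketch.

# A name is bound to a storage location, and the location holds the value.
# Assignment changes the contents of the location, not the binding of the name.

class Store:
    def __init__(self):
        self.cells = {}        # location -> value
        self.next_loc = 0

    def allocate(self):
        loc = self.next_loc
        self.next_loc += 1
        self.cells[loc] = None
        return loc

store = Store()
environment = {}               # name -> location (the binding)

environment["X"] = store.allocate()    # declaring X creates a fresh location
store.cells[environment["X"]] = 5      # the assignment X := 5 fills that location
store.cells[environment["X"]] = 6      # X := 6 reuses the same location

print(environment["X"], store.cells[environment["X"]])   # location 0 now holds 6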

Chapter 5. Types

Abstract
A data type is a set of objects and a set of operations on those objects which create, build up, destroy, modify and pick apart instances of the objects. Every programming language begins by supplying a set of data types. In LISP, the major data type is the binary tree (called an S-expression) and the basic operations are called CAR, CDR and CONS. More on this in Chapter 12. In modern imperative programming languages the usual built-in data types include integer, real, character and Boolean. Table 5-1 lists the built-in types for some of these languages.
Ellis Horowitz
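
To make the LISP primitives named in the abstract concrete, here is a minimal sketch in Python (assumed, not the book's code) that models an S-expression as either an atom or a pair built by CONS.

def cons(head, tail):
    return (head, tail)        # a dotted pair

def car(pair):
    return pair[0]             # first component of the pair

def cdr(pair):
    return pair[1]             # second component of the pair

# The list (1 2 3) is built from pairs terminated by the empty list, here None.
lst = cons(1, cons(2, cons(3, None)))

print(car(lst))        # 1
print(car(cdr(lst)))   # 2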

Chapter 6. Scope and Extent

Abstract
Scope refers to the way in which named entities such as variables, labels, types and procedures are controlled in their ability to have an effect in a program. To be more specific, the scope of a name is that part of the program text where all uses of the name are the same. This concept should not be confused with the related but distinct concept called extent which is the time during execution that the storage used to hold a variable’s value is bound to its name. In this chapter we study both of these notions. As scope and extent have long been recognized as essential issues of programming languages, they have been discussed in many texts before. Similar and more detailed treatments can be found in [Wulf 81] and [Tennent 81].
Ellis Horowitz
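
A hedged sketch of the scope/extent distinction, using Python's nesting rules rather than the languages treated in the chapter: the name count is visible only inside make_counter and the function nested in it (its scope), yet its storage outlives each call to make_counter because the returned closure still refers to it (its extent).

def make_counter():
    count = 0                        # scope: make_counter and functions nested in it

    def increment():
        nonlocal count
        count += 1
        return count

    return increment

tick = make_counter()
print(tick())   # 1
print(tick())   # 2  -- the storage for count is still alive after make_counter returned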

Chapter 7. Procedures

Abstract
In Chapter 2 we mentioned the importance of supplying a means for abstraction in a programming language. The procedure was the earliest major form of abstraction and continues to be a central concept, so it certainly deserves a chapter of its own. A procedural abstraction is a mapping from a set of inputs to a set of outputs which can be described by a specification. The specification must show how the outputs relate to the inputs, but it need not reveal or even imply the way the outputs are to be computed. We call the procedure an abstraction because it allows a user to concentrate only on what is done and not on how it is accomplished.
Ellis Horowitz
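
The following small Python sketch (an illustration under assumed names, not the book's example) shows the sense in which a procedure is an abstraction: the specification says what the result must be, while the body is free to compute it in any way, so a caller need not know which implementation is in use.

def maximum(values):
    """Specification: return an element m of values such that m >= v for every v."""
    best = values[0]
    for v in values[1:]:
        if v > best:
            best = v
    return best

def maximum_by_sorting(values):
    """Same specification, different method: sort and take the last element."""
    return sorted(values)[-1]

print(maximum([3, 7, 2]), maximum_by_sorting([3, 7, 2]))   # 7 7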

Chapter 8. Data Abstraction

Abstract
An abstraction is a way of representing a group of related things by a single thing which expresses their similarities and suppresses their differences. The procedure is the time-honored method for abstracting out how something is done, leaving only a description of what is done. Only recently has the need for a mechanism which abstracts a data type been recognized, and now several languages have developed methods for describing a data abstraction. A data type is a set of objects and operations on those objects. The operations create, build up, destroy and pick apart instances of the set of objects. A data abstraction in a programming language is a mechanism which collects together (or encapsulates) the representation and the operations of a data type. This encapsulation forms a wall which is intended to shield the data type from improper uses. But it also provides a “window” which allows the user a well-defined means for accessing the data type. Thus a data abstraction facility is more than just a new way of defining a data type, like a record, array, set or file would be in Pascal. In addition to defining a new type and usually allowing variables to be declared as only holding values of that type, it also shields the representation of the type and allows implementors to provide some operations to the user of the type while possibly retaining others for themselves. It usually does all this by introducing new scope rules into a programming language. In this section we will examine how all or some of these features have been combined into an abstract data type facility in the latest programming languages.
Ellis Horowitz
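
As a minimal sketch of the encapsulation idea described above (in Python, using its underscore convention rather than the scope-rule mechanisms the chapter surveys), a stack's representation is hidden behind a small set of operations, which form the "window" through which users manipulate the type.

class Stack:
    def __init__(self):
        self._items = []          # representation: not part of the public interface

    def push(self, x):
        self._items.append(x)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def is_empty(self):
        return not self._items

s = Stack()
s.push(1)
s.push(2)
print(s.pop())       # 2 -- users see only push, pop and is_empty, never the list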

Chapter 9. Exception Handling

Abstract
A procedure may terminate in many ways. We may have an intentional termination caused by encountering either a return or a goto which branches out of the procedure. Or, an error may occur, such as division-by-zero or subscript-out-of-range, which causes a system interrupt which terminates execution. Or, an anticipated but rare condition may arise such as end_of_file. The return or goto mechanisms of a programming language do not allow us to handle these last two occurrences very well. On the one hand, they provide no real means for the programmer to make use of the interrupt capability of the underlying machine. On the other hand, they encourage awkward and inefficient programming, in the following sense. To handle anticipated errors, one typically adds a parameter to a procedure, which is appropriately set if the anticipated error is encountered. Then, a return is taken and the error flag must be queried to see if an abnormal event has occurred. An alternate approach once the error is encountered is to jump out of the procedure to an error routine, thereby disrupting the normal flow of control. Neither of these alternatives offers a powerful way to structure exception handling.
Ellis Horowitz
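
The sketch below (Python, with illustrative names and an assumed division-by-zero condition) contrasts the error-flag style criticized in the abstract with a structured exception handler, where the error propagates until some caller chooses to handle it.

# Style 1: an extra flag result that every caller must remember to test.
def divide_with_flag(a, b):
    if b == 0:
        return None, True        # (result, error_flag)
    return a / b, False

result, failed = divide_with_flag(10, 0)
if failed:
    print("division by zero (flag style)")

# Style 2: a structured handler; the normal flow of control is undisturbed elsewhere.
def divide(a, b):
    return a / b                 # raises ZeroDivisionError when b == 0

try:
    divide(10, 0)
except ZeroDivisionError:
    print("division by zero (handler style)")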

Chapter 10. Concurrency

Abstract
Up until now, underlying everything we have said is the basic assumption that when we run a program, only one statement is executed at any given time. This presents no problem if the model of the computer which we intend to use is a uniprocessor of the von Neumann variety. But there are now many reasons for considering this view of computing to be out of date. Similarly when we discussed procedures and coroutines, in both cases only one procedure (or coroutine) could be active at a given time. The language concept of the procedure (and the coroutine) is simply the next higher level of abstraction one gets by starting with a machine which permits only sequential execution. New hardware configurations will mean that new language features will be devised for exploiting the capabilities of these new machines.
Ellis Horowitz
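
As a purely illustrative sketch of dropping the one-statement-at-a-time assumption (using Python threads, which are only one of many concurrency mechanisms), two tasks run concurrently and their outputs may interleave.

import threading
import time

def worker(name, delay):
    for i in range(3):
        time.sleep(delay)
        print(f"{name}: step {i}")

t1 = threading.Thread(target=worker, args=("task A", 0.01))
t2 = threading.Thread(target=worker, args=("task B", 0.015))
t1.start()
t2.start()
t1.join()
t2.join()
print("both tasks finished")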

Chapter 11. Input-Output

Abstract
Of all of the aspects of language design, input-output (or i/o) is regarded as the most distasteful. One reason for this is likely that it is impossible to make the design of this part of a language entirely independent of the computer’s characteristics. Some language designers have gone so far as to put their collective heads in the sand and not define input-output at all. The ALGOL60 team was an early sinner in this regard, and more recently the Euclid design group has done the same. But most language designers have faced up to the fact that it is better to design as much of the i/o in a machine independent manner, than to have the entire facility patched on by the language implementors.
Ellis Horowitz

Chapter 12. Functional Programming

Abstract
In this chapter we will be examining the language design aspects of the major applicative programming language LISP. As such, this chapter is not meant to be a tutorial on LISP. The reader who has not had previous experience with the language would do well to read the primer by [Weissman 67] or [Winston 79] for an excellent introduction to the language. I will begin with a general discussion of functions and evaluation. This is followed by showing how programs and data can be represented via a uniform mechanism, the so-called S-expression. Then I present the LISP interpreter in LISP, to reveal both the simple semantics of this powerful language and the simplicity of writing a sophisticated program. In the later sections I examine more detailed issues of language design and implementation relating to LISP including shallow binding, the handling of FEXPRs and FUNARGs and lazy evaluation. The treatment here has been especially influenced by John Allen’s Anatomy of LISP (McGraw-Hill), which is recommended to the reader who wants a more advanced discussion of these topics.
Ellis Horowitz
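
To give a flavour of the uniform program-as-data representation the chapter builds on, here is a very small evaluator sketch in Python (not the book's LISP-in-LISP interpreter) for arithmetic S-expressions written as nested tuples; the set of primitive operators is an assumption of the sketch.

import operator

PRIMITIVES = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(expr):
    if isinstance(expr, (int, float)):        # an atom evaluates to itself
        return expr
    op, *args = expr                          # a combination: operator plus arguments
    return PRIMITIVES[op](*[evaluate(a) for a in args])

# (+ 1 (* 2 3)) written as nested tuples:
print(evaluate(("+", 1, ("*", 2, 3))))        # 7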

Chapter 13. Data Flow Programming Languages

Abstract
The advancement of the speed of computers has been phenomenal over the past two decades. There have been several orders of magnitude improvement, so that now processors exist with cycle times of a few nanoseconds that can execute several million instructions per second. Nevertheless the demand for computing cycles has outstripped even these gains. Compute-bound jobs such as weather prediction, simulation, or Monte Carlo analysis can require execution speeds in the billions of instructions per second. Therefore the search for faster computers goes on.
Ellis Horowitz

Chapter 14. Object Oriented Programming Languages

Abstract
Object oriented programming is a phrase that is beginning to catch on, just like the phrase structured programming did in the ’70s. Using this analogy, it is clear that object oriented programming will mean different things to different people, but in sum all people will subscribe to it. In this chapter I hope to clarify, somewhat, the meaning of this term so the reader will have a better idea of what it means.
Ellis Horowitz

Backmatter
