
About this Book

This book examines computer architecture, computability theory, and the history of computers from the perspective of minimalist computing, a framework in which the instruction set consists of a single instruction. This approach differs from that taken in any other computer architecture text, and it is a bold step. The audience for this book is researchers, computer hardware engineers, software engineers, and systems engineers who are looking for a fresh, unique perspective on computer architecture. Upper-division undergraduate students and early graduate students studying computer architecture, computer organization, or embedded systems will also find this book useful. A typical course title might be "Special Topics in Computer Architecture."

The organization of the book is as follows. First, the reasons for studying such an "esoteric" subject are given. Then, the history and evolution of instruction sets is studied, with an emphasis on how modern computing has features of one instruction computing. Previous computer systems are also reviewed to show how their features relate to one instruction computers. Next, the primary forms of one instruction set computing are examined. The theories of computation and of Turing machines are reviewed to examine the theoretical nature of one instruction computers. Other processor architectures and instruction sets are then mapped onto single instructions to illustrate the features of both types of one instruction computers. In doing so, the features of the processor being mapped are highlighted.

Table of Contents

Frontmatter

Chapter 1. One Instruction Set Computing

The Lure of Minimalism
Abstract
The single or one instruction set computer (OISC, pronounced "whisk") is the ultimate reduced instruction set computer (RISC). In OISC, the instruction set consists of one instruction; then, by the orthogonality of the instruction along with composition, a complete set of operations is synthesized. This approach is completely opposed to that of a complex instruction set computer (CISC), which incorporates many complex instructions as microprograms within the processor.
William F. Gilreath, Phillip A. Laplante

Chapter 2. Instruction Sets

An Overview
Abstract
An instruction set constitutes the language that describes a computer's functionality. It is also a function of the computer's organization. While an instruction set reflects its underlying processor design, all instruction sets have much in common in terms of specifying functionality.
William F. Gilreath, Phillip A. Laplante

Chapter 3. Types of Computer Architectures

A Minimalist View
Abstract
It is inappropriate here to provide a complete review of basic computer architecture principles, as the reader is assumed to be familiar with them. Instead, because nomenclature can differ slightly, a few key components are reviewed for consistency of reference. In this regard, the glossary may also be helpful.
William F. Gilreath, Phillip A. Laplante

Chapter 4. Evolution of Instruction Sets

Going Full Circle
Abstract
To understand how OISC fits into the instruction set concept, it is helpful to study the evolution of instruction sets. This synopsis is not intended to be comprehensive. Instead, the focus is on key moments in computing history that impacted the nature of instruction sets.
William F. Gilreath, Phillip A. Laplante

Chapter 5. CISC, RISC, OISC

A Tale of Three Architectures
Abstract
Complex instruction set computers (CISC) supply relatively sophisticated functions as part of the instruction set. This gives the programmer a variety of powerful instructions with which to build applications programs and even more powerful software tools, such as assemblers and compilers. In this way, CISC processors seek to reduce the programmer’s coding responsibility, increase execution speeds, and minimize memory usage.
William F. Gilreath, Phillip A. Laplante

Chapter 6. OISC Architectures

Elegance Through Simplicity
Abstract
There are several theoretical models of OISC, of which subtract and branch if negative, MOVE, and the more primitive half adder architecture are the most important. The following sections describe these paradigms in some detail.
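To make the first of these paradigms concrete, the following is a minimal sketch of a subtract-and-branch-if-negative (SBN) machine. The triple encoding, memory layout, and sample program are illustrative assumptions, not taken from the book; they show only how a complete operation such as addition can be synthesized from the single subtract instruction.

```python
# Sketch of an SBN machine: each instruction is a triple (a, b, target)
# meaning mem[b] -= mem[a]; if the result is negative, jump to target,
# otherwise fall through to the next instruction.

def run_sbn(mem, program, pc=0, max_steps=1000):
    """Execute SBN triples until pc falls off the end of the program."""
    steps = 0
    while 0 <= pc < len(program) and steps < max_steps:
        a, b, target = program[pc]
        mem[b] -= mem[a]
        pc = target if mem[b] < 0 else pc + 1
        steps += 1
    return mem

# Synthesizing ADD from subtraction alone: with temp initially 0,
#   temp -= x   gives temp = -x
#   y -= temp   gives y = y + x
mem = [5, 7, 0]        # x = 5, y = 7, temp = 0
program = [
    (0, 2, 1),         # temp = temp - x = -5 (branch target is simply the next line)
    (2, 1, 2),         # y = y - temp = 7 - (-5) = 12
]
run_sbn(mem, program)
print(mem[1])          # -> 12
```

The branch targets here are chosen so that execution always falls through; conditional control flow uses the same instruction, simply with a jump target elsewhere in the program.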
William F. Gilreath, Phillip A. Laplante

Chapter 7. Historical Review of OISC

Nothing New Under the Sun
Abstract
A one instruction computer was first described by van der Poel in his doctoral thesis, "The Logical Principles of Some Computers" [van der Poel56], and in an article, "Zebra, a simple binary computer" [van der Poel59]. van der Poel's computer, called "ZEBRA" for Zeer Eenvoudige Binaire Reken Automaat, was a machine based on subtract and branch if negative. Standard Telephones and Cables Limited of South Wales in the United Kingdom later built the computer, which was delivered in 1958. For his pioneering efforts, van der Poel was awarded the 1984 Computer Pioneer Award by the IEEE Computer Society.
William F. Gilreath, Phillip A. Laplante

Chapter 8. Instruction Set Completeness

The Completeness Theorem
Abstract
It is natural to assume that since a processor must perform many kinds of operations, having only one instruction is inadequate. This leads to the question of instruction set completeness, or "what is the minimal functionality of an instruction needed to perform all other kinds of operations in a processor?" This simple question touches upon the theory of effective computability.
William F. Gilreath, Phillip A. Laplante

Chapter 9. OISC Mappings

Simulating Real Processors with OISC
Abstract
It has been noted that one of the advantages of studying OISC is that it can be used to benchmark existing instruction sets. In this chapter such a process is demonstrated for the four architecture types. The strategy involves mapping or simulating each architecture with an OISC architecture. This is done for both MOVE and SBN OISCs.
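The MOVE half of such a mapping can be sketched as a transport-triggered machine: the only instruction copies a value between addresses, and computation occurs as a side effect of moving data through special functional-unit addresses. The port addresses and adder unit below are invented for illustration and are not the book's mapping.

```python
# Sketch of a transport-triggered (MOVE) machine. The single move
# instruction copies between addresses; an "adder" functional unit is
# exposed as three memory-mapped ports.

ADD_A, ADD_B, ADD_OUT = 100, 101, 102   # adder operand and result ports

class MoveMachine:
    def __init__(self):
        self.mem = {}

    def read(self, addr):
        # Reading the result port computes the sum of the operand ports.
        if addr == ADD_OUT:
            return self.mem.get(ADD_A, 0) + self.mem.get(ADD_B, 0)
        return self.mem.get(addr, 0)

    def move(self, src, dst):
        """The single instruction: copy mem[src] to mem[dst]."""
        self.mem[dst] = self.read(src)

m = MoveMachine()
m.mem[0], m.mem[1] = 3, 4
m.move(0, ADD_A)        # first operand into adder port A
m.move(1, ADD_B)        # second operand into adder port B
m.move(ADD_OUT, 2)      # result (3 + 4) into address 2
print(m.mem[2])         # -> 7
```

Mapping a conventional instruction such as ADD then amounts to emitting a fixed sequence of moves, which is the sense in which a register-based architecture can be simulated by a MOVE OISC.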
William F. Gilreath, Phillip A. Laplante

Chapter 10. Parallel Architectures

Multiplicity
Abstract
For all its virtues, the stored-program computer paradigm contains a significant defect: the "von Neumann bottleneck." The bottleneck arises because both data and instruction accesses use the same bus, which has limited bandwidth. The processor, therefore, has to balance the number of instructions executed against the amount of data to be accessed. Furthermore, the stored program concept is inherently sequential, which does not work easily with the various parallel models of computation. High-speed memory internal and external to the processor mitigates the problem, but also creates a cache coherence problem, necessitating that the cache be kept in sync with main memory.
William F. Gilreath, Phillip A. Laplante

Chapter 11. Applications and Implementations

And the Future
Abstract
One instruction computing is based on the principle of taking one behavior and repeatedly applying it to create more complex behaviors. This phenomenon is not confined to OISC. Other examples exist in nature and in other domains.
William F. Gilreath, Phillip A. Laplante

Backmatter
