
About this Book

Today's microprocessors are the powerful descendants of the von Neumann computer, dating back to a memo of Burks, Goldstine, and von Neumann of 1946. The so-called von Neumann architecture is characterized by a sequential control flow resulting in a sequential instruction stream. A program counter addresses the next instruction if the preceding instruction is not a control instruction such as, e.g., jump, branch, subprogram call, or return. An instruction is coded in an instruction format of fixed or variable length, where the opcode is followed by one or more operands that can be data, addresses of data, or the address of an instruction in the case of a control instruction. The opcode defines the types of the operands. Code and data are stored in a common storage that is linear and addressed in units of memory words (bytes, words, etc.). The overriding design criterion of the von Neumann computer was the minimization of hardware, and especially of storage. The simplest implementation of a von Neumann computer is characterized by a microarchitecture that defines a closely coupled control and arithmetic logic unit (ALU), a storage unit, and an I/O unit, all connected by a single connection unit. The instruction fetch by the control unit alternates with operand fetches and result stores for the ALU.
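The model described above can be sketched as a tiny interpreter. This is an illustrative toy, not a description of any real instruction set: the opcodes and the dictionary-as-memory layout are assumptions made for the example. The key features it demonstrates are the shared linear storage for code and data, the default sequential advance of the program counter, and control instructions that overwrite it.

```python
def run(memory, pc=0):
    """A minimal sketch of a von Neumann machine: code and data share one
    linear memory, and a program counter (pc) selects the next instruction
    unless a control instruction redirects it."""
    while True:
        opcode, *operands = memory[pc]      # instruction fetch
        pc += 1                             # default: sequential control flow
        if opcode == "load":                # operand fetch from memory
            dst, addr = operands
            memory[dst] = memory[addr]
        elif opcode == "add":               # ALU operation, then result store
            dst, a, b = operands
            memory[dst] = memory[a] + memory[b]
        elif opcode == "jump":              # control instruction overwrites pc
            (pc,) = operands
        elif opcode == "halt":
            return memory

# Cells 0-1 hold code, cells 4-6 hold data, in one uniform address space.
mem = {0: ("add", 6, 4, 5), 1: ("halt",), 4: 2, 5: 3, 6: 0}
result = run(mem)   # result[6] == 5
```

Note how the fetch of each instruction tuple alternates with the operand fetches and result stores performed on the same memory, mirroring the alternation described in the text.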

Table of Contents

Frontmatter

1. Basic Pipelining and Simple RISC Processors

Abstract
CISC. Conventional state-of-the-art computers in the 1960s and 1970s, exemplified by the IBM System/370, the DEC PDP-11 minicomputer series, and the VAX-11/780 superminicomputer, were rack-based machines implemented with discrete logic and only rarely with microchips. A processor of such a machine used a complex instruction set, consisting of as many as 304 instructions, 16 addressing modes, and more than 10 different instruction lengths in the case of the VAX.
Jurij Šilc, Borut Robič, Theo Ungerer

2. Dataflow Processors

Abstract
Control-Flow. The most common computing model (i.e., a description of how a program is to be evaluated) is the von Neumann control-flow computing model. This model assumes that a program is a series of addressable instructions, each of which either specifies an operation along with memory locations of the operands or it specifies (un)conditional transfer of control to some other instruction. A control-flow computing model essentially specifies the next instruction to be executed depending on what happened during the execution of the current instruction. The next instruction to be executed is pointed to and triggered by the program counter PC. This instruction is executed even if some of its operands are not available yet (e.g., uninitialized).
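By contrast, the dataflow model treated in this chapter replaces the program counter with a firing rule: an instruction is enabled only once all of its operand tokens have arrived, so an uninitialized operand can never be consumed. The following sketch is illustrative only; the `Node` class and its names are assumptions made for the example, not the book's notation.

```python
from operator import add, mul

class Node:
    """A dataflow node: it fires only when all input tokens are present,
    unlike a control-flow instruction, which the PC triggers unconditionally."""
    def __init__(self, op, arity):
        self.op, self.arity, self.tokens = op, arity, []

    def receive(self, value):
        self.tokens.append(value)
        if len(self.tokens) == self.arity:   # firing rule: all operands ready
            result = self.op(*self.tokens)
            self.tokens = []
            return result                    # result token sent onward
        return None                          # not yet enabled

adder = Node(add, 2)
adder.receive(2)        # one operand present: node not enabled, returns None
adder.receive(3)        # second token arrives: node fires, returns 5
```

The contrast with the control-flow model is the trigger: here availability of data, rather than the program counter, determines what executes next.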
Jurij Šilc, Borut Robič, Theo Ungerer

3. CISC Processors

Abstract
Modern superscalar processors, which will be covered extensively in Chap. 4, use multiple FUs. To keep these FUs as busy as possible, situations must be allowed in which instructions are executed in an order different from that of the original program. Techniques supporting such out-of-order execution were developed as early as the mid-1960s in some complex instruction set computers (CISC), which were large mainframe computers at that time. It would take too much space to describe CISC mainframes in detail. Therefore we only briefly itemize some in this chapter and point out those that made a strong impact on the microarchitecture of today’s superscalars.
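The idea of out-of-order execution can be illustrated with a deliberately simplified scheduler. This is a toy sketch under assumed conventions (it is not the scoreboarding or Tomasulo algorithm of the machines surveyed here): an instruction may issue before earlier ones as long as none of its source registers is waiting on an unfinished producer and its destination register is free.

```python
def schedule(instructions):
    """Return an issue order in which ready instructions bypass stalled ones.
    Each instruction is a tuple (name, dest_reg, src_regs, latency)."""
    pending = list(instructions)
    busy = {}                    # dest register -> cycles until result ready
    order = []
    while pending:
        # one cycle passes: finished producers free their destination registers
        busy = {r: c - 1 for r, c in busy.items() if c - 1 > 0}
        for instr in list(pending):
            name, dest, srcs, latency = instr
            # issue only if no source awaits a producer and dest is not busy
            if not any(s in busy for s in srcs) and dest not in busy:
                order.append(name)           # may bypass an older instruction
                busy[dest] = latency
                pending.remove(instr)
                break
    return order

prog = [("load", "r1", [], 3),       # long-latency producer of r1
        ("use",  "r2", ["r1"], 1),   # stalls until the load completes
        ("indep","r3", [], 1)]       # independent: issues out of order
schedule(prog)   # -> ["load", "indep", "use"]
```

The independent instruction overtakes the stalled one, which is exactly the effect that keeps multiple FUs busy.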
Jurij Šilc, Borut Robič, Theo Ungerer

4. Multiple-Issue Processors

Abstract
Superscalar processors started to conquer the microprocessor market at the beginning of the 1990s with dual-issue processors. The principal motivation was to overcome the single issue of scalar RISC processors by providing the facility to fetch, decode, issue, execute, and write back results of more than one instruction per cycle. In fact, the first commercially successful superscalar microprocessor was the Intel i960 RISC processor which hit the market in 1990. Further first-generation dual-issue superscalar RISC processors were the Motorola 88110, Alpha 21064, and the HP PA-7100. Other superscalar RISC processors of the mid-1990s era were the IBM POWER2 RISC System/6000 processor, its offspring PowerPC 601, 603, 604, 750 (G3), and 620, the DEC Alpha 21164, the Sun SuperSPARC and UltraSPARC, the HP PA-8000, and the MIPS R10000. Today’s superscalar RISC processors MIPS R12000, HP PA-8500, Sun UltraSPARC-II, IIi and III, Alpha 21264, IBM POWER2-Super-Chip (P2SC) are 4-issue or 6-issue processors.
Jurij Šilc, Borut Robič, Theo Ungerer

5. Future Processors to use Fine-Grain Parallelism

Abstract
Forecasting the implications of technology is even harder than forecasting technological trends, although even the latter can prove wildly inaccurate. This and the next two chapters describe some ideas about future microarchitectures based on technological forecasts. Principal problems and possible application domains are stated. All the architectures described are still at the research stage: up to now only simulation studies exist, and very few prototype implementations are available. Thus most of this and the following two chapters is highly speculative.
Jurij Šilc, Borut Robič, Theo Ungerer

6. Future Processors to use Coarse-Grain Parallelism

Abstract
Current superscalar microprocessors are able to issue up to six instructions per clock cycle from a conventional linear instruction stream. VLSI technology will allow future microprocessors with an issue bandwidth of 8–32 instructions per cycle.
Jurij Šilc, Borut Robič, Theo Ungerer

7. Processor-in-Memory, Reconfigurable, and Asynchronous Processors

Abstract
Processor-in-memory (PIM) or intelligent RAM (IRAM) chips integrate one or more processors with large, high-bandwidth, on-chip DRAM banks, which provide the processor(s) with sufficient bandwidth at a reasonable cost.
Jurij Šilc, Borut Robič, Theo Ungerer

Backmatter
