
2010 | Book

Power-efficient System Design

Authors: Preeti Ranjan Panda, B. V. N. Silpa, Aviral Shrivastava, Krishnaiah Gummidipudi

Publisher: Springer US


About this book

The information and communication technology (ICT) industry is said to account for 2% of the worldwide carbon emissions – a fraction that continues to grow with the relentless push for more and more sophisticated computing equipment, communications infrastructure, and mobile devices. While computers evolved in the direction of higher and higher performance for most of the latter half of the 20th century, the late 1990's and early 2000's saw a new emerging fundamental concern that has begun to shape our day-to-day thinking in system design – power dissipation. As we elaborate in Chapter 1, a variety of factors colluded to raise power-efficiency as a first-class design concern in the designer's mind, with profound consequences all over the field: semiconductor process design, circuit design, design automation tools, system and application software, all the way to large data centers.

Power-efficient System Design originated from a desire to capture and highlight the exciting developments in the rapidly evolving field of power and energy optimization in electronic and computer based systems. Tremendous progress has been made in the last two decades, and the topic continues to be a fascinating research area. To develop a clearer focus, we have concentrated on the relatively higher level of design abstraction that is loosely called the system level. In addition to the extensive coverage of traditional power reduction targets such as CPU and memory, the book is distinguished by detailed coverage of relatively modern power optimization ideas focusing on components such as compilers, operating systems, servers, data centers, and graphics processors.

Table of Contents

Frontmatter
Chapter 1. Low Power Design: An Introduction
Abstract
Through most of the evolution of the electronics and computer industry in the twentieth century, technological progress was defined in terms of the inexorable march of density and speed. Increasing density resulted from process improvements that led to smaller and smaller geometries of semiconductor devices, so the number of transistors packed into a given area kept increasing. Increasing processing speed indirectly resulted from the same causes: more processing power on the same chip meant more computation resources and more data storage on-chip, leading to higher levels of parallelism and hence, more computations completed per unit time. Circuit and architectural innovations were also responsible for the increased speed. In the case of processors, deeper pipelines and expanding parallelism available led to improvements in the effective execution speed. In the case of memories, density continued to make remarkable improvements; performance improvements were not commensurate, but nevertheless, architectural innovations led to increased bandwidth at the memory interface.
Preeti Ranjan Panda, Aviral Shrivastava, B. V. N. Silpa, Krishnaiah Gummidipudi
Chapter 2. Basic Low Power Digital Design
Abstract
Moore’s law [12], which states that the “number of transistors that can be placed inexpensively on an integrated circuit will double approximately every two years,” has often been subject to the following criticism: while it boldly states the blessing of technology scaling, it fails to expose its bane. A direct consequence of Moore’s law is that the “power density of the integrated circuit increases exponentially with every technology generation”. History is witness to the fact that this was not a benign outcome. This implicit trend has arguably brought about some of the most important changes in electronic and computer design. Since the 1970s, the most popular electronics manufacturing technologies used bipolar and nMOS transistors. However, bipolar and nMOS transistors consume energy even in a stable combinatorial state, and consequently, by the 1980s, the power density of bipolar designs was considered too high to be sustainable. IBM and Cray started developing liquid and nitrogen cooling solutions for high-performance computing systems [5, 11, 16, 19, 21, 23–25]. The 1990s saw an inevitable switch to the slower, but lower-power, CMOS technology (Fig. 2.1). CMOS transistors consume less power largely because, to a first order of approximation, power is dissipated only when they switch states, not while the state is steady. Now, in the late 2000s, we are witnessing a paradigm shift in computing: the shift to multi-core computing. The power density has once again increased so much that there is little option but to keep the hardware simple and transfer complexity to higher layers of the system design abstraction, including the software layers.
Preeti Ranjan Panda, Aviral Shrivastava, B. V. N. Silpa, Krishnaiah Gummidipudi
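The first-order switching-power relation behind the CMOS claim in this abstract can be sketched numerically. The function below encodes the standard model P = α·C·V²·f; the specific parameter values are illustrative assumptions, not figures from the book:

```python
# First-order CMOS dynamic (switching) power model:
#   P_dyn = alpha * C * V^2 * f
# alpha: activity factor (fraction of capacitance switched per cycle)
# C:     effective switched capacitance in farads
# V:     supply voltage in volts
# f:     clock frequency in hertz

def dynamic_power(alpha, c_farads, v_volts, f_hertz):
    """First-order CMOS dynamic power in watts."""
    return alpha * c_farads * v_volts ** 2 * f_hertz

# Illustrative values: 10% activity, 1 nF effective capacitance,
# 1.2 V supply, 1 GHz clock.
p = dynamic_power(0.1, 1e-9, 1.2, 1e9)
print(f"{p:.3f} W")  # prints "0.144 W"
```

The quadratic dependence on V is why voltage scaling is such a powerful lever: halving the supply voltage cuts dynamic power by roughly a factor of four, before any frequency reduction is even considered.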
Chapter 3. Power-efficient Processor Architecture
Abstract
Since the creation of the first processor/CPU in 1971, silicon technology has consistently allowed twice the number of transistors to be packed on the same die every 18 to 24 months [33]. Scaling of technology allowed the implementation of faster and larger circuits on silicon, permitting a sophisticated and powerful set of features to be integrated into the CPU. Figure 3.1 shows the evolution of processors from 4-bit scalar datapaths to 64-bit superscalar datapaths and their respective transistor counts. Processors evolved not only in terms of datapath width, but also in terms of a wide variety of architectural features such as pipelining, floating point support, on-chip memories, superscalar processing, out-of-order processing, speculative execution, multi-threading, multicore CPUs, etc.
Preeti Ranjan Panda, Aviral Shrivastava, B. V. N. Silpa, Krishnaiah Gummidipudi
Chapter 4. Power-efficient Memory and Cache
Abstract
The memory subsystem plays a dominant role in every type of modern electronic design, starting from general purpose microprocessors to customized application specific systems. Higher complexity in processors, SoCs, and applications executing on such platforms usually results from a combination of two factors: (1) larger amounts of data interacting in complex ways and (2) larger and more complex programs. Both factors have a bearing on an important class of components: memory. This is because both data and instructions need to be stored on the chip. Since every instruction results in instruction memory accesses to fetch it, and may optionally cause the data memory to be accessed, it is obvious that the memory unit must be carefully designed to accommodate and intelligently exploit the memory access patterns arising out of the very frequent accesses to instructions and data. Naturally, memory has a significant impact on most meaningful design metrics [31]:
Preeti Ranjan Panda, Aviral Shrivastava, B. V. N. Silpa, Krishnaiah Gummidipudi
Chapter 5. Power Aware Operating Systems, Compilers, and Application Software
Abstract
What does a compiler have to do with power dissipation? A compiler is a piece of system software that parses a high level language, performs optimizing transformations, and finally generates code for execution on a processor. On the surface, it seems very far removed from an electrical phenomenon like power dissipation. Yet, it was not long before the two became inextricably linked. The involvement of the compiler along with the processor architecture in the design space exploration loop of application specific processors (ASIPs) might have eased the transition. In this scenario, compiler analysis can actually influence the choice of architectural parameters of the final processor. Clearly, if a low power system consisting of an application running on a processor is desired, the selected processor architecture has to work in tandem with the compiler and application programmer – an architectural feature is useless if it is not properly exploited by the code generated by a compiler or written by a programmer. Low power instruction encoding is an example optimization that places the compiler in a central role, with the explicit goal of reducing power. In an ASIP, the opcode decisions need not be fixed, and could be tuned to the application. Since the compiler has an intimate knowledge of the application, it could anticipate the transition patterns between consecutive instructions from the program layout and suggest an encoding of instructions that reduces switching power arising out of the fetch, transmission, and storage of sequences of instructions. Modern compiler designers are investigating the development of power awareness in a more direct way in general purpose processor systems, not just ASIPs. The role of the compiler and application programmer grows along with the concomitant provision of hooks and control mechanisms introduced by the hardware to support high level decision making on power-related issues.
Preeti Ranjan Panda, Aviral Shrivastava, B. V. N. Silpa, Krishnaiah Gummidipudi
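The low-power instruction-encoding idea in this abstract can be sketched as a toy experiment: count the bit toggles between consecutive opcodes for two candidate encodings of the same instruction trace. The opcode names, encodings, and trace below are hypothetical, chosen purely to illustrate the mechanism:

```python
# Toy sketch of compiler-guided low-power instruction encoding.
# Switching power on the fetch path grows with the number of bit flips
# between consecutive instruction words, so an encoding that gives
# frequently adjacent instructions nearby codes reduces toggling.

def toggles(trace, encoding):
    """Total bit flips between consecutive instructions in a trace."""
    codes = [encoding[op] for op in trace]
    # XOR of adjacent codes has a 1 bit wherever a wire toggles.
    return sum(bin(a ^ b).count("1") for a, b in zip(codes, codes[1:]))

# Hypothetical trace in which load/add pairs dominate.
trace = ["load", "add", "load", "add", "store"]

naive = {"load": 0b000, "add": 0b011, "store": 0b101}
# Tuned encoding: the frequent load->add transition differs in one bit.
tuned = {"load": 0b000, "add": 0b001, "store": 0b011}

print(toggles(trace, naive))  # prints 8
print(toggles(trace, tuned))  # prints 4
```

A real ASIP flow would derive the transition frequencies from profile data over whole programs rather than a five-instruction trace, but the objective is the same: minimize the frequency-weighted Hamming distance between consecutive opcodes.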
Chapter 6. Power Issues in Servers and Data Centers
Abstract
Power optimization as a research topic was first studied in the context of portable and handheld systems, where saving battery life was of prime importance. However, since that time, the need for saving power has become significantly more pervasive across computing machinery, from portable devices to high-end servers and data centers. The need for power optimization in larger scale computing environments such as servers and data centers arises from the increasing maintenance costs (including the electricity charges) due to the power demands of a very large number of computers. This introduces new, conflicting requirements into the server design space; in the past, servers were designed to a different set of specifications. Performance was the primary design metric, with execution time and throughput being the main considerations. In addition, reliability and fault-tolerance related concerns were also significant, leading to designs with redundancy that offered high availability. While these concerns continue to be important, the emergence of power efficiency has led to interesting innovations in the way such systems are conceived, architected, and programmed.
Preeti Ranjan Panda, Aviral Shrivastava, B. V. N. Silpa, Krishnaiah Gummidipudi
Chapter 7. Low Power Graphics Processors
Abstract
So far, we have studied power optimizations at various levels of design abstraction, from the circuit level and the architectural level all the way up to the server and data center level. In this chapter, we present a case study that combines several of the aforementioned techniques in a reasonably complex system: a power-efficient graphics processor.
Preeti Ranjan Panda, Aviral Shrivastava, B. V. N. Silpa, Krishnaiah Gummidipudi
Backmatter
Metadata
Title
Power-efficient System Design
Authors
Preeti Ranjan Panda
B. V. N. Silpa
Aviral Shrivastava
Krishnaiah Gummidipudi
Copyright year
2010
Publisher
Springer US
Electronic ISBN
978-1-4419-6388-8
Print ISBN
978-1-4419-6387-1
DOI
https://doi.org/10.1007/978-1-4419-6388-8
