DOI: 10.1145/1383422.1383438
research-article

VPM tokens: virtual machine-aware power budgeting in datacenters

Published: 23 June 2008

ABSTRACT

Power consumption and cooling overheads are becoming increasingly significant for large scale machines, affecting overall costs and the ability to extend resource capacities and performance capabilities. To help mitigate these issues, active power management technologies are being deployed aggressively, including power budgeting, which enables improved power provisioning and can address critical periods when power delivery or cooling capabilities are temporarily reduced. Given the use of virtualization to encapsulate application components into virtual machines (VMs), however, such power management capabilities must address the interplay between budgeting physical resources and the performance of the virtual machines used to run these applications. This paper proposes a set of cluster- and datacenter-level management components and abstractions for use by power budgeting policies. The key idea is to manage power from a VM-centric point of view, where the goal is to be aware of global utility tradeoffs between different virtual machines (and their applications) when maintaining power constraints for the physical hardware on which they run. Our approach to VM-aware power budgeting uses multiple distributed managers integrated into the VirtualPower Management (VPM) framework whose actions are coordinated via a new abstraction, termed VPM tokens. An implementation with the Xen hypervisor illustrates technical benefits of VPM tokens that include up to 43% improvements in global utility, highlighting the ability to dynamically improve cluster performance while still meeting power budgets.
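To make the abstract's VM-centric idea concrete, the sketch below shows one plausible way a budgeting policy could hand out discrete "power tokens" to VMs so that a global power budget is respected while global utility is maximized. This is an illustrative assumption only: the function names, the greedy policy, and the example utility curves are all hypothetical and are not taken from the paper, whose actual VPM-token mechanism is not described in this abstract.

```python
import math

def allocate_tokens(utilities, budget):
    """Greedily hand out one power token at a time to the VM whose
    utility function gains the most from it, until the budget is spent.

    `utilities` maps a VM name to a function u(tokens) giving that VM's
    utility at a given token count (assumed non-decreasing).
    """
    alloc = {vm: 0 for vm in utilities}
    for _ in range(budget):
        # Marginal utility of one extra token for each VM.
        gains = {vm: u(alloc[vm] + 1) - u(alloc[vm])
                 for vm, u in utilities.items()}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break  # no VM benefits from additional power
        alloc[best] += 1
    return alloc

# Two hypothetical VMs with diminishing returns; "web" values power 3x more.
utilities = {
    "web":   lambda t: 3 * math.log1p(t),
    "batch": lambda t: 1 * math.log1p(t),
}
alloc = allocate_tokens(utilities, budget=10)
```

With concave (diminishing-returns) utility curves this greedy step-by-step allocation is utility-maximizing for the budget, which mirrors the abstract's goal of trading off utility across VMs while keeping the hardware within its power constraint.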


Published in

HPDC '08: Proceedings of the 17th International Symposium on High Performance Distributed Computing
June 2008, 252 pages
ISBN: 9781595939975
DOI: 10.1145/1383422

              Copyright © 2008 ACM


              Publisher

              Association for Computing Machinery

              New York, NY, United States


Acceptance Rates

Overall Acceptance Rate: 166 of 966 submissions, 17%
