DOI: 10.1145/2608020.2608026

A performance and energy analysis of I/O management approaches for exascale systems

Published: 23 June 2014

ABSTRACT

The advent of fast, unprecedentedly scalable, yet energy-hungry exascale supercomputers poses a major challenge: sustaining a high performance-per-watt ratio. While much recent work has explored new approaches to I/O management, aiming to reduce the I/O performance bottleneck exhibited by HPC applications (and hence to improve application performance), there is comparatively little work investigating the impact of I/O management approaches on energy consumption.

In this work, we explore how much energy a supercomputer consumes while running scientific simulations under various I/O management approaches. We closely examine three radically different I/O schemes: time partitioning, dedicated cores, and dedicated nodes. We implement the three approaches within the Damaris I/O middleware and perform extensive experiments with one of the target HPC applications of the Blue Waters sustained-petaflop supercomputer project: the CM1 atmospheric model. Our experimental results, obtained on the French Grid'5000 platform, highlight the differences between these three approaches and illustrate how various configurations of the application and of the system can impact performance and energy consumption.
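As a rough illustration (not taken from the paper and not the Damaris API), the sketch below contrasts two of the schemes in MPI/C: time partitioning, where every core computes and then blocks on its own writes, and dedicated cores, where one core per node is set aside to perform I/O on behalf of its neighbors. The write_dataset() and compute_step() helpers are hypothetical placeholders; the dedicated-node scheme would follow the same pattern with the I/O ranks placed on separate nodes instead of on a separate core of each node.

```c
/*
 * Illustrative sketch only: contrasts time partitioning and dedicated cores.
 * Assumes at least two MPI ranks per node for the dedicated-core scheme.
 */
#include <mpi.h>
#include <stdlib.h>

/* Hypothetical output routine standing in for the real file I/O. */
static void write_dataset(const double *data, int n, int step) {
    (void)data; (void)n; (void)step;  /* actual file output omitted */
}

/* Placeholder compute kernel. */
static void compute_step(double *data, int n) {
    for (int i = 0; i < n; i++) data[i] += 1.0;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    /* Group ranks by node; rank 0 of each node plays the dedicated I/O core. */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);
    int node_rank;
    MPI_Comm_rank(node_comm, &node_rank);
    int is_io_core = (node_rank == 0);

    enum { N = 1 << 20, STEPS = 4 };
    double *data = malloc(N * sizeof(double));

    int dedicated = (argc > 1);  /* choose the scheme from the command line */

    for (int step = 0; step < STEPS; step++) {
        if (!dedicated) {
            /* Time partitioning: every core computes, then blocks on its write. */
            compute_step(data, N);
            write_dataset(data, N, step);
        } else if (is_io_core) {
            /* Dedicated core: only receives data and writes it, never computes. */
            MPI_Recv(data, N, MPI_DOUBLE, 1, step, node_comm, MPI_STATUS_IGNORE);
            write_dataset(data, N, step);
        } else {
            /* Compute cores: hand data to the I/O core and keep computing. */
            compute_step(data, N);
            if (node_rank == 1)
                MPI_Send(data, N, MPI_DOUBLE, 0, step, node_comm);
        }
    }

    free(data);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}
```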


Published in
      DIDC '14: Proceedings of the sixth international workshop on Data intensive distributed computing
June 2014
62 pages
ISBN: 9781450329132
DOI: 10.1145/2608020

      Copyright © 2014 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

      Publisher

      Association for Computing Machinery

      New York, NY, United States



      Qualifiers

      • research-article

      Acceptance Rates

DIDC '14 Paper Acceptance Rate: 7 of 12 submissions, 58%. Overall Acceptance Rate: 7 of 12 submissions, 58%.
