DOI: 10.1145/1713072.1713080
Research article

Scalable I/O tracing and analysis

Published: 14 November 2009

ABSTRACT

As supercomputer performance has approached and then surpassed the petaflop level, I/O has become a major performance bottleneck for many scientific applications. Several tools collect I/O traces to assist in the analysis of I/O performance problems. However, these tools either produce extremely large trace files that complicate performance analysis, or sacrifice accuracy by collecting only high-level statistical information. We propose ScalaIOTrace, a multi-level trace generator that collects traces at several levels of the HPC I/O stack. ScalaIOTrace features aggressive trace compression that generates trace files of near-constant size for regular I/O patterns, and files orders of magnitude smaller than flat traces for less regular ones. This enables the collection of I/O and communication traces of applications running on thousands of processors.
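
The compression idea can be illustrated with a minimal sketch: a regular event stream collapses into a handful of (pattern, repeat count) pairs whose size is independent of how many times the pattern repeats. This is not ScalaIOTrace's actual implementation or trace format; the event names and the greedy pattern matcher are illustrative assumptions.

```python
# Hypothetical sketch of pattern-based trace compression: collapse
# immediate repeats of short event patterns into (pattern, count) pairs,
# so a regular I/O loop yields a near-constant-size compressed trace.

def compress(events, max_pattern=4):
    """Greedily collapse immediate repeats of patterns up to
    max_pattern events long into (pattern, count) pairs."""
    out = []
    i = 0
    n = len(events)
    while i < n:
        matched = False
        for w in range(1, max_pattern + 1):
            pattern = events[i:i + w]
            count = 1
            j = i + w
            # Count how many times the pattern repeats back-to-back.
            while events[j:j + w] == pattern:
                count += 1
                j += w
            if count > 1:
                out.append((pattern, count))
                i = j
                matched = True
                break
        if not matched:
            out.append((events[i:i + 1], 1))
            i += 1
    return out

# A loop of 1000 seek/write pairs followed by one close (names illustrative).
trace = ["MPI_File_seek", "MPI_File_write"] * 1000 + ["MPI_File_close"]
compressed = compress(trace)
print(compressed)
# [(['MPI_File_seek', 'MPI_File_write'], 1000), (['MPI_File_close'], 1)]
```

Here 2001 events compress to two entries; doubling the loop count changes only the repeat counter, not the trace size, which mirrors the constant-size behavior reported for regular patterns.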

Our contributions also include automated trace analysis, which collects selected statistics of I/O calls by parsing the compressed trace on the fly, and time-accurate replay of communication events including MPI-IO calls. We evaluated our approach with the Parallel Ocean Program (POP) climate simulation and the FLASH parallel I/O benchmark. POP uses NetCDF as its I/O library, while FLASH I/O uses the parallel HDF5 library, which internally maps onto MPI-IO. We collected MPI-IO and low-level POSIX I/O traces to study application I/O behavior. Our results show constant-size trace files of only 145 KB, irrespective of the number of nodes, for the FLASH I/O benchmark, which exhibits regular I/O and communication patterns. For POP, we observe up to two orders of magnitude reduction in trace file size compared to flat traces. The gathered statistics reveal insights into the number of I/O and communication calls issued by POP and FLASH I/O. Such concise traces are unprecedented for isolated I/O tracing and for combined I/O plus communication tracing.
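
On-the-fly analysis of a compressed trace can likewise be sketched in a few lines: statistics such as per-call counts follow from walking the (pattern, count) pairs and multiplying by the repeat counts, never expanding the trace. Again, the compressed format and call names are assumptions for illustration, not ScalaIOTrace's actual representation.

```python
# Hypothetical sketch of on-the-fly statistics over a compressed trace:
# tally call frequencies by multiplying each pattern's calls by its
# repeat count, instead of replaying every iteration of the trace.
from collections import Counter

def call_counts(compressed):
    """Count occurrences of each call in a list of (pattern, count) pairs."""
    stats = Counter()
    for pattern, count in compressed:
        for call in pattern:
            stats[call] += count
    return stats

# Compressed form of 1000 seek/write pairs plus one close (illustrative).
compressed = [(["MPI_File_seek", "MPI_File_write"], 1000),
              (["MPI_File_close"], 1)]
stats = call_counts(compressed)
print(stats["MPI_File_write"], stats["MPI_File_close"])
# 1000 1
```

The cost of the analysis is proportional to the compressed trace size rather than the original event count, which is what makes post-mortem statistics practical at thousands of processors.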


Published in
PDSW '09: Proceedings of the 4th Annual Workshop on Petascale Data Storage
November 2009, 58 pages
ISBN: 978-1-60558-883-4
DOI: 10.1145/1713072
Conference Chair: Garth A. Gibson

        Copyright © 2009 ACM


Publisher
Association for Computing Machinery, New York, NY, United States


Acceptance Rates
Overall acceptance rate: 17 of 41 submissions, 41%
