DOI: 10.1145/2095050.2095100
Research article

A microbenchmark case study and lessons learned

Published: 23 October 2011

ABSTRACT

The extra abstraction layer introduced by the virtual machine, the JIT compilation cycles, and the asynchronous garbage collection are the main reasons that benchmarking Java code is a delicate task. The primary weapon in battling these is replication: "billions and billions of runs" is a phrase sometimes used by practitioners. This paper describes a case study, which consumed hundreds of hours of CPU time, and tries to characterize the inconsistencies in the results we encountered.
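For readers unfamiliar with this style of measurement, the sketch below illustrates the replication idea from the abstract: warm up the JIT before timing, run the workload many times, and report summary statistics rather than a single figure. It is a minimal illustration only, not the harness used in the paper; the class name, the trivial summing workload, and all iteration counts are hypothetical.

```java
// Minimal microbenchmark sketch (illustrative only; not the paper's harness).
public class MicrobenchSketch {

    // The code under measurement; kept trivial for illustration.
    static long workload(int[] data) {
        long sum = 0;
        for (int v : data) sum += v;
        return sum;
    }

    public static void main(String[] args) {
        int[] data = new int[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;

        long sink = 0; // consume results so the JIT cannot eliminate the loop

        // Warmup: let the JIT compile the hot path before any timing starts.
        for (int i = 0; i < 10_000; i++) sink += workload(data);

        // Replication: many timed runs, since any single run may be skewed
        // by a compilation cycle or an asynchronous GC pause.
        final int runs = 30;
        long[] nanos = new long[runs];
        for (int r = 0; r < runs; r++) {
            long start = System.nanoTime();
            sink += workload(data);
            nanos[r] = System.nanoTime() - start;
        }

        // Report summary statistics rather than a single measurement.
        java.util.Arrays.sort(nanos);
        System.out.printf("min %d ns, median %d ns, max %d ns (sink=%d)%n",
                nanos[0], nanos[runs / 2], nanos[runs - 1], sink);
    }
}
```

Reporting the minimum and median of many runs, instead of one measurement, is a common way to make JIT and GC noise visible rather than letting it silently skew the result.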

References

  1. K. Arnold and J. Gosling. The Java Programming Language. Addison-Wesley, Reading, Massachusetts, 1996.
  2. M. Arnold, M. Hind, and B. G. Ryder. Online feedback-directed optimization of Java. In Proc. of the 17th Ann. Conf. on OO Prog. Sys., Lang., & Appl. (OOPSLA'02).
  3. H. Berry, D. Gracia Pérez, and O. Temam. Chaos in computer performance. Chaos, 16(1):013110, 2006.
  4. S. M. Blackburn, R. Garner, C. Hoffman, A. M. Khan, K. S. McKinley, R. Bentzur, A. Diwan, D. Feinberg, D. Frampton, S. Z. Guyer, M. Hirzel, A. Hosking, M. Jump, H. Lee, J. E. B. Moss, A. Phansalkar, D. Stefanović, T. VanDrunen, D. von Dincklage, and B. Wiedermann. The DaCapo benchmarks: Java benchmarking development and analysis. In P. L. Tarr and W. R. Cook, eds., Proc. of the 21st Ann. Conf. on OO Prog. Sys., Lang., & Appl. (OOPSLA'06).
  5. S. M. Blackburn, K. S. McKinley, R. Garner, C. Hoffmann, A. M. Khan, R. Bentzur, A. Diwan, D. Feinberg, D. Frampton, S. Z. Guyer, M. Hirzel, A. Hosking, M. Jump, H. Lee, J. E. B. Moss, A. Phansalkar, D. Stefanović, T. VanDrunen, D. von Dincklage, and B. Wiedermann. Wake up and smell the coffee: evaluation methodology for the 21st century. Communications of the ACM, 51(8):83--89, 2008.
  6. L. Eeckhout, A. Georges, and K. De Bosschere. How Java programs interact with virtual machines at the microarchitectural level. In R. Crocker and G. L. Steele Jr., eds., Proc. of the 18th Ann. Conf. on OO Prog. Sys., Lang., & Appl. (OOPSLA'03).
  7. A. Georges, D. Buytaert, and L. Eeckhout. Statistically rigorous Java performance evaluation. In R. P. Gabriel and D. Bacon, eds., Proc. of the 22nd Ann. Conf. on OO Prog. Sys., Lang., & Appl. (OOPSLA'07).
  8. A. Georges, D. Buytaert, L. Eeckhout, and K. De Bosschere. Method-level phase behavior in Java workloads. In Vlissides and Schmidt [13], pp. 270--287.
  9. S. Hazelhurst. Truth in advertising: reporting performance of computer programs, algorithms and the impact of architecture and systems environment. South African Computer Journal, 46:24--37, 2010.
  10. X. Huang, S. M. Blackburn, K. S. McKinley, J. E. B. Moss, Z. Wang, and P. Cheng. The garbage collection advantage: improving program locality. In Vlissides and Schmidt [13], pp. 69--80.
  11. D. Lea, D. F. Bacon, and D. Grove. Languages and performance engineering: method, instrumentation, and pedagogy. SIGPLAN Notices, 43:87--92, 2008.
  12. T. Mytkowicz, A. Diwan, M. Hauswirth, and P. F. Sweeney. Producing wrong data without doing anything obviously wrong! In Proc. of the 14th Int. Conf. on Architectural Support for Programming Languages and Operating Systems (ASPLOS'09).
  13. J. M. Vlissides and D. C. Schmidt, eds. Proc. of the 19th Ann. Conf. on OO Prog. Sys., Lang., & Appl. (OOPSLA'04), Vancouver, BC, Canada, Oct. 2004. ACM SIGPLAN Notices 39(10).

Published in

      SPLASH '11 Workshops: Proceedings of the compilation of the co-located workshops on DSM'11, TMC'11, AGERE! 2011, AOOPES'11, NEAT'11, & VMIL'11
      October 2011
      358 pages
      ISBN:9781450311830
      DOI:10.1145/2095050

      Copyright © 2011 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


