DOI: 10.1145/1356058.1356060

Perfdiff: a framework for performance difference analysis in a virtual machine environment

Published: 06 April 2008

ABSTRACT

Although applications running on virtual machines, such as the Java virtual machine, achieve platform independence, performance evaluation and analysis become difficult due to the extra intermediate layers and the dynamic nature of the virtual execution environment.

We present a framework for analyzing performance across multiple runs of a program, possibly in dramatically different execution environments. Our framework builds on our prior lightweight instrumentation technique for constructing a calling context tree (CCT) of methods at runtime. We first represent each run of a program by a CCT, annotating its edges and nodes with performance attributes such as call counts or elapsed times. We then identify components of the CCTs that are topologically identical but differ significantly in their performance attributes. Next, we identify the topological differences between two CCTs, ignoring the performance attributes. Finally, we report the differences in both topology and performance attributes, which can be fed back to software developers or performance analysts for further scrutiny.
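To make the data structure concrete, the following minimal Java sketch (our illustration, not the authors' code; the names CctNode and diffAttributes are hypothetical) models a CCT whose nodes are keyed by callee method name and annotated with call counts, and walks two topologically matched trees to report nodes whose counts diverge beyond a relative threshold.

import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Illustrative sketch only -- not the Perfdiff implementation.
// A CCT node is keyed by method name and annotated with a call count,
// the performance attribute used in the paper's experiments.
class CctNode {
    final String method;
    long callCount;
    final Map<String, CctNode> children = new TreeMap<>();

    CctNode(String method) { this.method = method; }

    // Return the child node for a callee, creating it on first call.
    CctNode child(String callee) {
        return children.computeIfAbsent(callee, CctNode::new);
    }

    // Walk two topologically matched subtrees and report nodes whose call
    // counts differ by more than the given relative threshold. Children
    // present in only one tree are topological differences and would be
    // handled by a separate matching pass.
    static void diffAttributes(CctNode a, CctNode b, double threshold,
                               List<String> report) {
        double denom = Math.max(1L, Math.max(a.callCount, b.callCount));
        if (Math.abs(a.callCount - b.callCount) / denom > threshold) {
            report.add(a.method + ": " + a.callCount + " vs " + b.callCount);
        }
        for (Map.Entry<String, CctNode> e : a.children.entrySet()) {
            CctNode other = b.children.get(e.getKey());
            if (other != null) {
                diffAttributes(e.getValue(), other, threshold, report);
            }
        }
    }
}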

We have applied our methodology to a number of well-known Java benchmarks and a large J2EE application, using call counts as the performance attribute. Our results indicate that this approach can efficiently and effectively localize differences to a small percentage of nodes in the CCT. We present an iterative framework for program analysis in which topological changes are applied to identify differences between CCTs. For most of the test programs, only a few topological changes (deletion, addition, and renaming of nodes) are needed to make any two CCTs from the same program identical, and less than 2% of performance-attribute changes are needed to achieve a 90% overlap between any two CCTs in performance attributes, once the two CCTs are topologically matched. We have also applied our framework to identify subtle configuration differences in complex server applications.
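The abstract does not define the overlap measure precisely. The sketch below shows one plausible scoring, an assumption on our part (reusing CctNode from the sketch above): weight each topologically matched node by its call count and report the fraction of weight on which the two runs agree within a relative tolerance.

import java.util.Map;

// Hypothetical overlap metric -- one way to quantify the abstract's
// "90% overlap" notion; the paper's exact metric may differ.
final class OverlapSketch {
    // Fraction of matched call-count weight on which the two runs agree.
    static double attributeOverlap(CctNode a, CctNode b, double tol) {
        long[] agree = {0};
        long[] total = {0};
        accumulate(a, b, tol, agree, total);
        return total[0] == 0 ? 1.0 : (double) agree[0] / total[0];
    }

    private static void accumulate(CctNode a, CctNode b, double tol,
                                   long[] agree, long[] total) {
        long w = Math.max(a.callCount, b.callCount);
        total[0] += w;
        if (w == 0 || Math.abs(a.callCount - b.callCount) <= tol * w) {
            agree[0] += w;
        }
        for (Map.Entry<String, CctNode> e : a.children.entrySet()) {
            CctNode other = b.children.get(e.getKey());
            if (other != null) {
                accumulate(e.getValue(), other, tol, agree, total);
            }
        }
    }
}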



Published in

      CGO '08: Proceedings of the 6th annual IEEE/ACM international symposium on Code generation and optimization
      April 2008
      235 pages
ISBN: 9781595939784
DOI: 10.1145/1356058

      Copyright © 2008 ACM


      Publisher

      Association for Computing Machinery

      New York, NY, United States



      Qualifiers

      • research-article

      Acceptance Rates

CGO '08 Paper Acceptance Rate: 21 of 66 submissions, 32%. Overall Acceptance Rate: 312 of 1,061 submissions, 29%.
