ABSTRACT
Although applications running on virtual machines, such as Java applications, achieve platform independence, performance evaluation and analysis become difficult due to the extra intermediate layers and the dynamic nature of the virtual execution environment.
We present a framework for analyzing performance across multiple runs of a program, possibly in dramatically different execution environments. Our framework builds on our prior lightweight instrumentation technique for constructing a calling context tree (CCT) of methods at runtime. We first represent each run of a program as a CCT, annotating its edges and nodes with performance attributes such as call counts or elapsed times. We then identify components of the CCTs that are topologically identical but differ significantly in their performance attributes. Next, we identify the topological differences between two CCTs, ignoring the performance attributes. Finally, the combined differences in topology and performance attributes can be fed back to software developers or performance analysts for further scrutiny.
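The first step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: a CCT whose nodes carry a performance attribute (here, call counts), plus a routine that walks two topologically identical CCTs and reports the calling contexts whose attribute values differ. All class and function names are hypothetical.

```python
class CCTNode:
    """One node of a calling context tree, annotated with a call count."""

    def __init__(self, method, calls=0):
        self.method = method   # method name labeling this calling context
        self.calls = calls     # performance attribute: call count
        self.children = {}     # method name -> CCTNode

    def child(self, method, calls=0):
        # Get or create the child context and accumulate its call count.
        node = self.children.setdefault(method, CCTNode(method))
        node.calls += calls
        return node


def attribute_diffs(a, b, path=()):
    """Yield (calling context, calls_a, calls_b) wherever the attribute differs.

    Assumes a and b are topologically identical, i.e. the two trees have
    already been matched structurally."""
    ctx = path + (a.method,)
    if a.calls != b.calls:
        yield ctx, a.calls, b.calls
    for name in a.children:
        yield from attribute_diffs(a.children[name], b.children[name], ctx)


# Two runs of the same program: identical tree shape, different counts.
run1 = CCTNode("main")
run1.child("parse", calls=10).child("tokenize", calls=100)
run2 = CCTNode("main")
run2.child("parse", calls=10).child("tokenize", calls=250)

diffs = list(attribute_diffs(run1, run2))
# Only the "main -> parse -> tokenize" context differs (100 vs. 250 calls).
```

In the paper's setting the trees come from instrumented runs rather than being built by hand, and a significance threshold would replace the exact-inequality test, but the traversal pattern is the same.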
We have applied our methodology to a number of well-known Java benchmarks and a large J2EE application, using call counts as the performance attribute. Our results indicate that this approach can efficiently and effectively localize differences to a small percentage of CCT nodes. We present an iterative framework for program analysis in which topological changes are applied to identify differences between CCTs. For most of the test programs, only a few topological changes (deletions, additions, and renamings of nodes) are needed to make any two CCTs from the same program identical, and fewer than 2% of the performance-attribute values need to change to achieve a 90% overlap in performance attributes between any two CCTs, once the two CCTs are topologically matched. We have also applied our framework to identify subtle configuration differences in complex server applications.
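The topological-matching step can be sketched as a simple tree comparison: count the node additions and deletions needed to make two CCTs structurally identical, matching children by method name. This is a simplification of general tree edit distance (it omits renames, which the paper also considers), and the names below are illustrative only.

```python
def tree_size(t):
    """Number of nodes in a CCT represented as a nested dict."""
    return 1 + sum(tree_size(c) for c in t.values())


def edit_ops(a, b):
    """Count node additions/deletions needed to turn tree a into tree b,
    matching children by method name at each level."""
    ops = 0
    for name in set(a) | set(b):
        if name not in b:
            ops += tree_size(a[name])   # delete a's unmatched subtree
        elif name not in a:
            ops += tree_size(b[name])   # add b's unmatched subtree
        else:
            ops += edit_ops(a[name], b[name])
    return ops


# CCTs as nested dicts: method name -> children.
run1 = {"main": {"parse": {"tokenize": {}}, "report": {}}}
run2 = {"main": {"parse": {"tokenize": {}, "optimize": {}}}}

print(edit_ops(run1, run2))  # 2: delete "report", add "optimize"
```

A small edit count here corresponds to the abstract's observation that two runs of the same program yield nearly identical CCTs; once the count reaches zero, the attribute comparison from the previous step applies.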
Perfdiff: a framework for performance difference analysis in a virtual machine environment