ABSTRACT
Understanding program behavior is at the foundation of computer architecture and program optimization. Many programs exhibit wildly different behavior even at the very largest of scales (over the complete execution of the program). This realization has ramifications for many architectural and compiler techniques, from thread scheduling, to feedback-directed optimizations, to the way programs are simulated. However, in order to take advantage of time-varying behavior, we must first develop the analytical tools necessary to automatically and efficiently analyze program behavior over large sections of execution.

Our goal is to develop automatic techniques that are capable of finding and exploiting the Large Scale Behavior of programs (behavior seen over billions of instructions). The first step towards this goal is the development of a hardware-independent metric that can concisely summarize the behavior of an arbitrary section of execution in a program. To this end we examine the use of Basic Block Vectors. We quantify the effectiveness of Basic Block Vectors in capturing program behavior across several different architectural metrics, explore the large scale behavior of several programs, and develop a set of clustering-based algorithms capable of analyzing this behavior. We then demonstrate an application of this technology to automatically determine where to simulate a program in order to help guide computer architecture research.
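The approach outlined above can be sketched in a few lines: split a basic-block trace into fixed-length intervals, summarize each interval as a normalized Basic Block Vector, and cluster the vectors so that intervals with similar behavior land in the same group. This is a minimal illustration, not the authors' implementation; the interval length, block count, toy trace, and the simple evenly-spaced k-means initialization are all assumptions made for this sketch.

```python
# Hedged sketch of Basic Block Vector (BBV) clustering; the trace,
# interval length, and k-means initialization below are illustrative
# assumptions, not the paper's actual tooling.
from collections import Counter

def bbv(trace_slice, num_blocks):
    """Normalized frequency of each basic block within one interval."""
    counts = Counter(trace_slice)
    total = len(trace_slice)
    return [counts.get(b, 0) / total for b in range(num_blocks)]

def manhattan(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))

def kmeans(vectors, k, iters=20):
    # Deterministic toy init: spread initial centers evenly across the run.
    step = max(1, len(vectors) // k)
    centers = [list(vectors[i * step]) for i in range(k)]
    labels = [0] * len(vectors)
    for _ in range(iters):
        # Assign each interval to its nearest center.
        labels = [min(range(k), key=lambda c: manhattan(v, centers[c]))
                  for v in vectors]
        # Recompute each center as the mean of its member vectors.
        for c in range(k):
            members = [v for v, lab in zip(vectors, labels) if lab == c]
            if members:
                centers[c] = [sum(col) / len(members)
                              for col in zip(*members)]
    return labels

# Toy trace: blocks 0-1 dominate early execution, blocks 2-3 dominate later.
trace = [0, 1] * 50 + [2, 3] * 50
interval = 20
vectors = [bbv(trace[i:i + interval], 4)
           for i in range(0, len(trace), interval)]
labels = kmeans(vectors, k=2)
print(labels)  # intervals in the same program phase share a label
```

A representative simulation point could then be chosen per cluster, e.g. the interval whose BBV is closest to its cluster's center, which is the intuition behind using a small number of intervals to stand in for billions of instructions.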
Automatically characterizing large scale program behavior
Special Issue: Proceedings of the 10th annual conference on Architectural Support for Programming Languages and Operating Systems