ABSTRACT
Compile-time optimization is often limited by a lack of knowledge about the target machine and the input data set. Without this information, compilers may be forced to make conservative assumptions to preserve correctness and to avoid performance degradation. To cope with this lack of information at compile time, adaptive and dynamic systems can perform optimization at runtime, when complete knowledge of input and machine parameters is available. This paper presents a compiler-supported high-level adaptive optimization system. Users describe, in a domain-specific language, optimizations performed by stand-alone optimization tools and backend compiler flags, as well as heuristics for applying these optimizations dynamically at runtime. The ADAPT compiler reads these descriptions and generates application-specific runtime systems to apply the heuristics. To facilitate the use of existing tools and compilers, overheads are minimized by decoupling optimization from execution. Our system, ADAPT, supports a range of recently proposed paradigms, including dynamic compilation, parameterization and runtime sampling. We demonstrate our system by applying several optimization techniques to a suite of benchmarks on two target machines. ADAPT is shown to consistently outperform statically generated executables, improving performance by as much as 70%.
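The runtime-sampling paradigm mentioned above can be illustrated with a minimal sketch. This is not ADAPT's actual DSL or generated code; it is a hypothetical dispatcher, in Python for brevity, that times each code variant over a few early invocations and then commits to the fastest one for the remainder of the run, the same decide-at-runtime idea the paper describes. The variant functions and the `AdaptiveDispatcher` class are illustrative names, not part of the ADAPT system.

```python
import time

def variant_baseline(data):
    # Straightforward summation loop (stands in for unoptimized code).
    total = 0
    for x in data:
        total += x
    return total

def variant_optimized(data):
    # Candidate optimized version (stands in for a transformed loop
    # produced by an optimization tool in a real system).
    return sum(data)

class AdaptiveDispatcher:
    """Runtime sampling: time each semantically equivalent variant
    during a sampling phase, then commit to the fastest for all
    remaining calls."""

    def __init__(self, variants, samples_per_variant=3):
        self.variants = list(variants)
        self.samples = samples_per_variant
        self.timings = [[] for _ in self.variants]
        self.best = None  # index of the chosen variant once sampling ends

    def __call__(self, data):
        if self.best is not None:
            # Sampling is over: always run the winning variant.
            return self.variants[self.best](data)
        # Sampling phase: pick the least-sampled variant and time it.
        idx = min(range(len(self.variants)),
                  key=lambda i: len(self.timings[i]))
        start = time.perf_counter()
        result = self.variants[idx](data)
        self.timings[idx].append(time.perf_counter() - start)
        if all(len(t) >= self.samples for t in self.timings):
            # Commit to the variant with the lowest average time.
            self.best = min(range(len(self.variants)),
                            key=lambda i: sum(self.timings[i]) / len(self.timings[i]))
        return result
```

Because every variant computes the same result, correctness is preserved regardless of which one the sampler selects; only performance differs. Decoupling this decision logic from the application's own code path is what keeps the monitoring overhead low.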