
Towards automatic construction of staged compilers

Published: 01 January 2002

ABSTRACT

Some compilation systems, such as offline partial evaluators and selective dynamic compilation systems, support staged optimizations. A staged optimization is one where a logically single optimization is broken up into stages, with the early stage(s) performing preplanning set-up work, given any available partial knowledge about the program to be compiled, and the final stage completing the optimization. The final stage can be much faster than the original optimization by having much of its work performed by the early stages. A key limitation of current staged optimizers is that they are written by hand, sometimes in an ad hoc manner. We have developed a framework called the Staged Compilation Framework (SCF) for systematically and automatically converting single-stage optimizations into staged versions. The framework is based on a combination of aggressive partial evaluation and dead-assignment elimination. We have implemented SCF in Standard ML. A preliminary evaluation shows that SCF can speed up classical optimization of some commonly used C functions by up to 12× (and typically between 4.5× and 5.5×).
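
To make the notion of staging concrete, the sketch below hand-stages a tiny constant-folding pass in Standard ML, the language SCF itself is implemented in. Splitting the pass with currying lets an early stage perform, once, all the folding enabled by the statically known bindings, while the returned closure is the fast final stage that finishes the optimization when the remaining bindings arrive. The expression datatype, the environment representation, and the names fold and foldEarly are illustrative assumptions for this sketch only, not the paper's intermediate representation or the SCF API; SCF's contribution is deriving such stagings automatically, via aggressive partial evaluation and dead-assignment elimination, rather than writing them by hand as done here.

  (* Illustrative sketch only: a hand-staged constant-folding pass.
     All names and types here are assumptions, not the SCF API. *)
  datatype expr =
      Const of int
    | Var   of string
    | Add   of expr * expr
    | Mul   of expr * expr

  (* Single-stage optimization: fold constants in e, given an environment
     mapping some variables to known integer values. *)
  fun fold env e =
    case e of
        Const n => Const n
      | Var x =>
          (case List.find (fn (y, _) => y = x) env of
               SOME (_, n) => Const n
             | NONE => Var x)
      | Add (a, b) =>
          (case (fold env a, fold env b) of
               (Const m, Const n) => Const (m + n)
             | (a', b') => Add (a', b'))
      | Mul (a, b) =>
          (case (fold env a, fold env b) of
               (Const m, Const n) => Const (m * n)
             | (a', b') => Mul (a', b'))

  (* Staged version: the early stage consumes the statically known bindings
     and precomputes as much folding as possible; the returned closure is the
     final stage, which only finishes the residual work. *)
  fun foldEarly knownEnv e =
    let
      val planned = fold knownEnv e          (* early stage: preplanning work, done once *)
    in
      fn lateEnv => fold lateEnv planned     (* final stage: cheap residual work *)
    end

  (* Usage: stage once against the known binding of "a", then run the cheap
     final stage whenever "b" becomes known. *)
  val finalStage = foldEarly [("a", 3)] (Add (Mul (Const 2, Var "a"), Var "b"))
  val result     = finalStage [("b", 4)]    (* evaluates to Const 10 *)

The payoff of the staged form is that finalStage can be applied repeatedly at low cost, which is what allows the final stage to run much faster than the original single-stage optimization.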

Published in

POPL '02: Proceedings of the 29th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages
January 2002, 351 pages
ISBN: 1581134509
DOI: 10.1145/503272
Copyright © 2002 ACM


Publisher: Association for Computing Machinery, New York, NY, United States

