ABSTRACT
We present an automated technique for generating compiler optimizations from examples of concrete programs before and after improvements have been made to them. The key technical insight of our technique is that a proof of equivalence between the original and transformed concrete programs informs us which aspects of the programs are important and which can be discarded. Our technique therefore uses these proofs, which can be produced by translation validation or a proof-carrying compiler, as a guide to generalize the original and transformed programs into broadly applicable optimization rules.
We present a category-theoretic formalization of our proof generalization technique. This abstraction makes our technique applicable to logics besides our own. In particular, we demonstrate how our technique can also be used to learn query optimizations for relational databases or to aid programmers in debugging type errors.
Finally, we show experimentally that our technique enables programmers to train a compiler with application-specific optimizations by providing concrete examples of original programs and the desired transformed programs. We also show how it enables a compiler to learn efficient-to-run optimizations from expensive-to-run superoptimizers.
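The generalization idea above can be illustrated with a toy sketch. This is not the paper's actual algorithm (which is formalized category-theoretically over proof structures); it only illustrates the key insight. Expressions are modeled as nested tuples, and the hypothetical `proof_terms` parameter stands in for the set of subterms the equivalence proof actually inspects: whatever the proof never touches is safe to abstract into a pattern variable, turning one concrete before/after pair into a broadly applicable rewrite rule.

```python
# Toy sketch of proof-guided generalization (illustrative assumption:
# a proof exposes the set of subterms it mentions as `proof_terms`).

def generalize(before, after, proof_terms):
    """Abstract every subterm the proof never mentions into a fresh
    pattern variable; subterms the proof depends on stay concrete."""
    subst, counter = {}, [0]

    def walk(t):
        if t not in proof_terms:        # the proof never inspected
            if t not in subst:          # this subterm, so generalize it
                subst[t] = f"?v{counter[0]}"
                counter[0] += 1
            return subst[t]
        if isinstance(t, tuple):        # proof-relevant operator node:
            return (t[0],) + tuple(walk(c) for c in t[1:])
        return t                        # proof-relevant leaf

    return walk(before), walk(after)

# Concrete pair: (load p) * 2  is transformed to  (load p) << 1.
# The proof only uses "multiplying by 2 equals shifting left by 1",
# never the load itself, so the load generalizes to a variable.
before = ("*", ("load", "p"), 2)
after = ("<<", ("load", "p"), 1)
lhs, rhs = generalize(before, after, {before, after, 2, 1})
# lhs == ("*", "?v0", 2) and rhs == ("<<", "?v0", 1): a rewrite
# rule applicable to any operand, not just (load p).
```

The same substitution is applied to both sides, so shared concrete subterms map to the same pattern variable, preserving the equivalence the proof established.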