
An evaluation of different modeling techniques for iterative compilation

Published: 09 October 2011

ABSTRACT

Iterative compilation techniques, which involve iterating over different sets of optimizations, have proven useful in helping compilers choose the right set of optimizations for a given program. However, compilers typically have a large number of optimizations to choose from, making it impossible to iterate over a significant fraction of the entire optimization search space. Recent research has proposed to "intelligently" iterate over the optimization search space using predictive methods. In particular, state-of-the-art methods in iterative compilation use characteristics of the code being optimized to predict good optimization sequences to evaluate. Thus, an important step in developing predictive methods for compilation is deciding how to model the problem of choosing the right optimizations.
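
As a rough illustration (our sketch, not the paper's implementation), prediction-guided iterative compilation ranks candidate optimization sequences with a learned model and only compiles and measures the top-ranked ones. The feature extractor, timing harness, candidate flag sequences, and scoring model below are all hypothetical placeholders.

```python
import random

# Hypothetical candidate optimization sequences; a real search space is far larger.
CANDIDATE_SEQUENCES = [
    ["-O2", "-funroll-loops"],
    ["-O3", "-fno-inline"],
    ["-O3", "-funroll-loops", "-ffast-math"],
]

def extract_features(source_path):
    """Placeholder: return a feature vector characterizing the code."""
    return [random.random() for _ in range(8)]

def compile_and_time(source_path, flags):
    """Placeholder: compile with the given flags and return measured runtime."""
    return random.uniform(0.5, 2.0)

def predict_speedup(features, flags):
    """Placeholder for a trained model scoring how well these flags suit this code."""
    return random.random()

def guided_search(source_path, budget=10):
    """Evaluate only the sequences the model ranks highest, within the budget."""
    feats = extract_features(source_path)
    ranked = sorted(CANDIDATE_SEQUENCES,
                    key=lambda seq: predict_speedup(feats, seq),
                    reverse=True)
    best_seq, best_time = None, float("inf")
    for seq in ranked[:budget]:
        t = compile_and_time(source_path, seq)
        if t < best_time:
            best_seq, best_time = seq, t
    return best_seq, best_time
```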

In this paper, we evaluate three different ways of modeling the problem of choosing the right optimization sequences using machine learning techniques. We evaluate a novel prediction modeling technique, namely a tournament predictor, that is able to effectively predict good optimization sequences. We show that our tournament predictor can outperform current state-of-the-art predictors and the most aggressive setting of the Open64 compiler (-Ofast) by 75% on average in just 10 iterations over a set of embedded and scientific kernels. Moreover, using our tournament predictor, we achieved an average improvement of 10% over -Ofast for a set of MiBench applications.
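
One plausible reading of the tournament predictor (a sketch under our own assumptions, not necessarily the paper's exact formulation) is a classifier that, given the program's features and two candidate optimization sequences, predicts which of the two will run faster; repeated pairwise predictions then rank the candidates, and only the top-ranked ones are actually compiled and measured.

```python
import functools
import random

def dummy_pairwise_model(features, seq_a, seq_b):
    """Stand-in for a trained classifier predicting whether seq_a beats seq_b."""
    return random.random() < 0.5

def make_tournament_ranker(pairwise_model):
    """Build a ranker that orders candidate sequences via pairwise predictions."""
    def rank(features, candidates):
        def cmp(a, b):
            # Sort a before b when the model predicts a wins the 'match'.
            return -1 if pairwise_model(features, a, b) else 1
        return sorted(candidates, key=functools.cmp_to_key(cmp))
    return rank

# Only the top-ranked sequences (e.g. the first 10) would then be compiled
# and measured, matching the 10-iteration budget mentioned above.
rank = make_tournament_ranker(dummy_pairwise_model)
ordered = rank([0.1, 0.2], [["-O2"], ["-O3"], ["-O3", "-funroll-loops"]])
```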


Published in

        CASES '11: Proceedings of the 14th international conference on Compilers, architectures and synthesis for embedded systems
        October 2011
        250 pages
        ISBN: 9781450307130
        DOI: 10.1145/2038698

        Copyright © 2011 ACM


        Publisher

        Association for Computing Machinery

        New York, NY, United States
