DOI: 10.1145/1854273.1854298

research-article

Reducing task creation and termination overhead in explicitly parallel programs

Published: 11 September 2010

ABSTRACT

There has been a proliferation of task-parallel programming systems to address the requirements of multicore programmers. Production task-parallel systems include Cilk++, Intel Threading Building Blocks, Java Concurrency, .NET Task Parallel Library, and OpenMP 3.0; research task-parallel languages include Cilk, Chapel, Fortress, X10, and Habanero-Java (HJ). It is desirable for programmers to express all the parallelism intrinsic to their algorithms, for forward scalability and portability, but the overhead incurred by doing so can be prohibitively large in today's systems. In this paper, we address the problem of reducing the total overhead a program incurs due to excessive task creation and termination. We introduce a transformation framework to optimize task-parallel programs with finish, forall, and next statements. Our approach includes elimination of redundant task creation and termination operations as well as strength reduction of termination operations (finish) to lighter-weight synchronizations (next). Experimental results were obtained on three platforms: a dual-socket 128-thread (16-core) Niagara T2 system, a quad-socket 16-way Intel Xeon SMP, and a quad-socket 32-way Power7 SMP. The results show maximum speedups of 66.7x, 11.25x, and 23.1x on the three platforms, and geometric-mean performance improvements of 4.6x, 2.1x, and 6.4x, respectively, relative to unoptimized parallel code. The original benchmarks in this study were written with medium-grained parallelism; a larger relative improvement can be expected for programs written with finer-grained parallelism. However, even for the medium-grained parallel benchmarks studied here, the significant improvement obtained by the transformation framework underscores the importance of the compiler optimizations introduced in this paper.
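The finish-to-next strength reduction described above can be sketched in plain Java. This is an illustration under stated assumptions, not the paper's actual compiler transformation: raw threads stand in for tasks, `Thread.join` for the finish construct, and `java.util.concurrent.CyclicBarrier` for HJ's lighter-weight next synchronization. The unoptimized shape pays task creation and termination cost on every outer iteration; the optimized shape creates long-lived tasks once and synchronizes iterations with a barrier.

```java
import java.util.concurrent.CyclicBarrier;

public class FinishToNext {
    static final int N = 4, ITERS = 100;

    public static void main(String[] args) throws Exception {
        // Unoptimized shape: each outer iteration creates N tasks and then
        // terminates them (finish), so task churn is paid ITERS times.
        int[] a = new int[N];
        for (int it = 0; it < ITERS; it++) {
            Thread[] ts = new Thread[N];
            for (int i = 0; i < N; i++) {
                final int id = i;
                ts[i] = new Thread(() -> a[id]++);  // each task owns index id
                ts[i].start();
            }
            for (Thread t : ts) t.join();  // per-iteration finish (heavyweight)
        }

        // Optimized shape: create N long-lived tasks once; the barrier plays
        // the role of next, so only one finish surrounds the whole loop.
        int[] b = new int[N];
        CyclicBarrier next = new CyclicBarrier(N);
        Thread[] ts = new Thread[N];
        for (int i = 0; i < N; i++) {
            final int id = i;
            ts[i] = new Thread(() -> {
                try {
                    for (int it = 0; it < ITERS; it++) {
                        b[id]++;
                        next.await();  // next: barrier step, no task churn
                    }
                } catch (Exception e) { throw new RuntimeException(e); }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();  // single finish around the loop

        System.out.println(a[0] + " " + b[0]);  // prints "100 100"
    }
}
```

Both shapes compute the same result; the optimized version replaces ITERS task create/terminate rounds with one creation plus ITERS barrier waits, which is the essence of the strength reduction (the real framework additionally handles loop-carried dependences and redundant finish elimination).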


Published in PACT '10: Proceedings of the 19th International Conference on Parallel Architectures and Compilation Techniques, September 2010, 596 pages. ISBN: 9781450301787. DOI: 10.1145/1854273.

Copyright © 2010 ACM

Publisher: Association for Computing Machinery, New York, NY, United States


Overall acceptance rate: 121 of 471 submissions, 26%
