
Stream Parallelism Annotations for Multi-Core Frameworks

Published: 22 October 2020

ABSTRACT

Data generation, collection, and processing are important workloads on modern computer architectures. Stream and high-intensity data-flow applications are commonly employed to extract and interpret the information contained in these data. Because of the computational demands of such applications, high performance must be achieved through parallel computing. However, efficiently exploiting the parallel resources available in the architecture remains a challenging task for programmers. Techniques and methodologies are needed to shift the programmer's effort from the complexity of parallelism exploitation to the algorithmic solution itself. To tackle this problem, we propose a methodology that gives the developer an abstraction layer: a clean and effective parallel programming interface that targets different multi-core parallel programming frameworks. The programmer inserts standard C++ annotations into the source code; a compiler then parses the annotated C++ code and generates calls to the desired parallel runtime API. Our experiments demonstrate the feasibility of the methodology and the performance of the abstraction layer: in four applications, the difference with respect to state-of-the-art C++ parallel programming frameworks is negligible. Moreover, the methodology can improve application performance, since developers can choose the runtime that performs best on their system.
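To make the annotation-based approach described above concrete, the sketch below shows how stream parallelism annotations can be expressed with standard C++ attributes. The attribute names (spar::ToStream, spar::Stage, spar::Input, spar::Output, spar::Replicate) are modeled on the SPar annotation style and are used here as an illustrative assumption, not the paper's verbatim interface; an annotation-aware compiler would translate them into calls to the selected multi-core runtime, while a standard compiler simply ignores the unknown attributes and builds the sequential program.

    // Minimal sketch, assuming a SPar-like attribute syntax; the attribute
    // names are illustrative assumptions, not the paper's exact interface.
    // A standard C++ compiler ignores the unknown attributes (the sequential
    // version still builds); an annotation-aware compiler would rewrite the
    // loop into calls to the chosen parallel runtime API.
    #include <algorithm>
    #include <cctype>
    #include <iostream>
    #include <string>

    int main() {
        std::string line;

        // The annotated while loop is the stream region: each line read from
        // standard input becomes one item flowing through the stages below.
        [[spar::ToStream, spar::Input(line)]]
        while (std::getline(std::cin, line)) {
            // Stage 1: CPU-bound transformation, replicated across 4 workers.
            [[spar::Stage, spar::Input(line), spar::Output(line), spar::Replicate(4)]]
            {
                std::transform(line.begin(), line.end(), line.begin(),
                               [](unsigned char c) { return std::toupper(c); });
            }

            // Stage 2: sequential sink that writes results in stream order.
            [[spar::Stage, spar::Input(line)]]
            {
                std::cout << line << '\n';
            }
        }
        return 0;
    }

With this structure, switching the target framework becomes a recompilation choice rather than a source-code change, which is the portability benefit the abstract claims.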


      • Published in

        SBLP '20: Proceedings of the 24th Brazilian Symposium on Context-Oriented Programming and Advanced Modularity
        October 2020
        81 pages
        ISBN: 9781450389433
        DOI: 10.1145/3427081

        Copyright © 2020 ACM


        Publisher

        Association for Computing Machinery

        New York, NY, United States



        Qualifiers

        • research-article
        • Research
        • Refereed limited

        Acceptance Rates

        Overall Acceptance Rate: 22 of 50 submissions, 44%
