ABSTRACT
Freeing the user from the tedious task of writing explicit communication is a primary goal of numerous research projects on compilers for distributed memory machines. When synthesizing communication, the effective use of collective communication routines offers considerable scope for improving program performance. This paper presents a methodology for determining which collective communication primitives should be used to implement the data movement at various points in a program. We introduce the notion of certain synchronous properties between array references in statements inside loops, and present tests to determine the presence of these properties. These tests enable the compiler to analyze quite precisely the communication requirements of those statements and to implement communication using appropriate primitives. These results not only lay down a framework for the synthesis of communication on multicomputers, but also form the basis of our implementation of a system that statically estimates the communication costs of programs on multicomputers.
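The mapping the abstract describes — from the access pattern of an array reference inside a loop to a collective communication primitive — can be illustrated with a deliberately simplified sketch. The pattern tags, the function name, and the primitive names below are hypothetical simplifications for illustration; they are not the paper's actual tests or terminology.

```python
# Hypothetical sketch: classify the communication needed for a statement
# like  A(i) = op(B(g(i)))  in a parallel loop over block-distributed
# arrays. Subscript forms are reduced to a few coarse pattern tags; the
# mapping to primitives is illustrative only, not the paper's method.

def classify_communication(subscript, reduction=False):
    """Map a right-hand-side subscript pattern to a primitive.

    subscript is one of:
      'identity'   -- B(i): each processor owns what it reads
      'shift'      -- B(i+c): nearest-neighbor boundary exchange
      'invariant'  -- B(k), k loop-invariant: one owner sends to all
      'affine'     -- B(c*i+d), c != 1: general strided access
    reduction is True when the statement accumulates into a scalar.
    """
    if reduction:
        # Partial results are combined across all processors.
        return 'reduce'
    return {
        'identity':  'local',      # no communication required
        'shift':     'shift',      # exchange boundary elements only
        'invariant': 'broadcast',  # single owner broadcasts the value
        'affine':    'gather',     # irregular movement, gather needed
    }[subscript]
```

For example, a statement reading a loop-invariant element would be classified as a broadcast: `classify_communication('invariant')` returns `'broadcast'`, while a sum accumulation maps to `'reduce'` regardless of the subscript form.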
Index Terms
- A methodology for high-level synthesis of communication on multicomputers