
Data parallel algorithms

Published: 1 December 1986

Abstract

Parallel computers with tens of thousands of processors are typically programmed in a data parallel style, as opposed to the control parallel style used in multiprocessing. The success of data parallel algorithms—even on problems that at first glance seem inherently serial—suggests that this style of programming has much wider applicability than was previously thought.
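The abstract's point can be made concrete with a small example. The sketch below is not from the paper; it is a serial Python simulation in which each loop iteration stands for one synchronous step taken by all processors at once. In this data parallel style, summing N numbers takes O(log N) steps of pairwise combination rather than N - 1 serial additions.

```python
# Serial simulation of a data-parallel tree sum. Each pass through the
# loop models ONE synchronous step: every active processor reads its
# partner's partial sum and adds it in simultaneously.

def parallel_sum(x):
    x = list(x)
    d = 1
    while d < len(x):
        # Step: processor i (at an even multiple of 2*d) absorbs the
        # partial sum held by processor i + d, all in parallel.
        x = [x[i] + x[i + d] if i % (2 * d) == 0 and i + d < len(x) else x[i]
             for i in range(len(x))]
        d *= 2
    return x[0]

parallel_sum([1, 2, 3, 4])  # -> 10, in 2 parallel steps
```

The loop runs ceil(log2 N) times regardless of N, which is the source of the O(log N) running times quoted for the machine's algorithms.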


Index Terms

  1. Data parallel algorithms


Reviews

Frans J. Peters

This expository paper presents a number of algorithms that have been implemented on the Connection Machine, a system consisting of tens of thousands of processors. The paper expresses the view that such fine-grained SIMD (Single Instruction stream, Multiple Data stream) systems are typically programmed in a data parallel style, as opposed to the control parallel style used in multiprocessing. Programs are described in terms of virtual processors. The programming model allows each processor to communicate directly with any other processor, while other processors also communicate concurrently.

To show the possibilities of data-parallel programming, several interesting algorithms are presented; the authors remark that most of them are not new. Beginning with some very simple examples to familiarize the reader with the model and the notation, the paper then proceeds to more elaborate and less obvious examples. The algorithms described all use O(N) processors to solve problems of size N, typically in O(log N) time. In some cases, the actual run times measured on a 65,536-processor system are given. Algorithms presented include: sum of an array of numbers, all partial sums of an array (the "sum-prefix" operation), counting and enumerating active processors, radix sort, parsing a regular language, parallel combinator reduction, finding the end of a linked list, all partial sums of a linked list, and region labeling. This lucidly written paper can be recommended to everyone with an interest in parallel programming and parallel algorithms.
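Two of the algorithms the review lists, the sum-prefix operation and the partial sums of a linked list, rest on the same recursive-doubling idea. The sketch below is illustrative rather than the paper's own code: a serial Python simulation in which each loop iteration stands for one synchronous step taken by all virtual processors together. The linked-list routine is written in a suffix-sum orientation (each node ends up with the sum from itself to the end of the list), an assumption made here for simplicity.

```python
# Serial simulations of two data-parallel doubling algorithms. Each
# loop iteration models ONE synchronous step in which every processor
# reads the old values and writes its new value simultaneously.

def prefix_sum(x):
    """Inclusive sum-prefix ("scan") in O(log N) data-parallel steps."""
    x = list(x)
    d = 1
    while d < len(x):
        # Step: every processor i >= d adds x[i - d] into x[i] at once.
        x = [x[i] + x[i - d] if i >= d else x[i] for i in range(len(x))]
        d *= 2
    return x

def list_partial_sums(nxt, val):
    """Pointer doubling on a linked list: nxt[i] is node i's successor
    (None at the end). After O(log N) steps, val[i] holds the sum from
    node i to the end of the list."""
    nxt, val = list(nxt), list(val)
    while any(p is not None for p in nxt):
        # All processors update from the OLD arrays in the same step:
        # add the successor's value, then jump the pointer one link ahead.
        val = [v + (val[p] if p is not None else 0) for v, p in zip(val, nxt)]
        nxt = [nxt[p] if p is not None else None for p in nxt]
    return val

prefix_sum([1, 2, 3, 4])                       # -> [1, 3, 6, 10]
list_partial_sums([1, 2, 3, None], [1, 2, 3, 4])  # -> [10, 9, 7, 4]
```

Both routines finish in O(log N) steps because every step doubles the distance over which information has traveled, which is why pointer doubling makes even a "serial-looking" linked-list traversal fast on such a machine.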


• Published in

Communications of the ACM, Volume 29, Issue 12
Special issue on parallelism
Dec. 1986
95 pages
ISSN: 0001-0782
EISSN: 1557-7317
DOI: 10.1145/7902

Copyright © 1986 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery
New York, NY, United States

