
1990 | Book

Hypercube Algorithms

with Applications to Image Processing and Pattern Recognition

Authors: Prof. Sanjay Ranka, Prof. Sartaj Sahni

Publisher: Springer New York

Book series: Bilkent University Lecture Series


About this book

Fundamental algorithms for SIMD and MIMD hypercubes are developed. These include algorithms for problems such as data broadcasting, data sum, prefix sum, shift, data circulation, data accumulation, sorting, random access reads and writes, and data permutation. The fundamental algorithms are then used to obtain efficient hypercube algorithms for matrix multiplication; image processing problems such as convolution, template matching, the Hough transform, clustering, and image transformations; and string editing. Most of the algorithms in this book are for hypercubes with the number of processors being a function of the problem size. However, for image processing problems, the book also includes algorithms for an MIMD hypercube with a small number of processors. Experimental results on an NCUBE/7 MIMD hypercube are also presented. The book is suitable for use in a one-semester or one-quarter course on hypercube algorithms. For students with no prior exposure to parallel algorithms, it is recommended that one week be spent on the material in Chapter 1, about six weeks on Chapter 2, and one week on Chapter 3. The remainder of the term can be spent covering topics from the rest of the book.

Table of Contents

Frontmatter
Chapter 1. Introduction
Abstract
Parallel computers may be classified by taking into account their memory organization, processor organization, and the number of instruction streams supported.
Sanjay Ranka, Sartaj Sahni
Chapter 2. Fundamental Operations
Abstract
In a data broadcast we begin with data in the A register of processor zero of the hypercube. This data is to be transmitted to the A register of each of the remaining processors in the hypercube. This may be accomplished using the binary tree transmitting scheme of Figure 2.1. This figure is for the case of a dimension 3 hypercube. The data to be broadcast is initially only in processor 0 (root of the broadcast tree). It is transmitted along bit 0 to processor 1 (001). This is denoted by the arrow from the root to its right child. Now processors 0 and 1 (level 2 processors) have the data. Both transmit along bit 1. The data is now contained in all level 3 processors (0, 2, 1, 3). These processors now send the data to their neighbor processors along bit 2. As a result, all processors in the hypercube have the data.
Sanjay Ranka, Sartaj Sahni
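The binary-tree broadcast described in the abstract can be simulated sequentially; the following is an illustrative sketch (the function name and list-based register model are assumptions, not the book's notation), with one round per hypercube dimension:

```python
# Sketch of the Chapter 2 binary-tree broadcast on a dimension-d
# hypercube, simulated as a Python list holding each processor's A register.
def hypercube_broadcast(d, value):
    """Broadcast `value` from processor 0 to all 2**d processors."""
    A = [None] * (2 ** d)      # A register of each processor
    A[0] = value               # data starts only in processor 0
    for b in range(d):         # round b: holders transmit along bit b
        for p in range(2 ** d):
            partner = p ^ (1 << b)
            if A[p] is not None and A[partner] is None:
                A[partner] = A[p]
    return A

# After round 0 processors {0, 1} hold the data, after round 1 {0, 1, 2, 3},
# and after round d - 1 every processor holds it -- d rounds in total.
print(hypercube_broadcast(3, 42))
```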
Chapter 3. SIMD Matrix Multiplication
Abstract
Suppose that two n×n matrices A [0:n-1, 0:n-1] and B[0:n-1, 0:n-1] are to be multiplied on an SIMD hypercube to get the product matrix C where
$$ C[i,j] = \sum_{k=0}^{n-1} A[i,k] * B[k,j], \quad 0 \le i,j < n $$
Sanjay Ranka, Sartaj Sahni
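As a point of reference for the product formula above, here is a minimal sequential computation of C (an illustrative sketch only; the chapter itself maps this computation onto an SIMD hypercube):

```python
# Direct evaluation of C[i,j] = sum_k A[i,k] * B[k,j] for n x n matrices,
# given as lists of rows. Sequential reference, not the hypercube algorithm.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```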
Chapter 4. One Dimensional Convolution
Abstract
The inputs to the one dimensional convolution problem are vectors I[0..N − 1] and T[0..M − 1]. The output is the vector C1D where
$$ C1D[i] = \sum_{v=0}^{M-1} I[(i+v) \bmod N] * T[v], \quad 0 \le i < N $$
Sanjay Ranka, Sartaj Sahni
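The defining formula is a circular (wrap-around) correlation; a minimal sequential sketch, with names chosen here for illustration:

```python
# C1D[i] = sum over v of I[(i + v) mod N] * T[v], 0 <= i < N.
# The index wraps modulo N, so the template slides over the vector circularly.
def conv1d(I, T):
    N, M = len(I), len(T)
    return [sum(I[(i + v) % N] * T[v] for v in range(M)) for i in range(N)]

# With T = [1, 1] each output is the sum of two circularly adjacent inputs:
print(conv1d([1, 2, 3, 4], [1, 1]))  # [3, 5, 7, 5]
```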
Chapter 5. Template Matching
Abstract
The inputs to the image template matching problem are an N×N image matrix I[0..N − 1, 0..N − 1] and an M×M template T[0..M − 1, 0..M − 1]. The output is an N×N matrix C2D where
$$ C2D[i,j] = \sum_{u=0}^{M-1} \sum_{v=0}^{M-1} I[(i+u) \bmod N, (j+v) \bmod N] * T[u,v], \quad 0 \le i,j < N $$
C2D is called the two dimensional convolution of I and T. Template matching, i.e., computing C2D, is a fundamental operation in computer vision and image processing. It is often used for edge and object detection; filtering; and image registration (Rosenfeld and Kak 1982, and Ballard and Brown 1985).
Sanjay Ranka, Sartaj Sahni
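The 2D formula is the direct two-index analogue of the one dimensional case, again with wrap-around indexing. A sequential sketch (illustrative names; the book develops hypercube versions of this computation):

```python
# C2D[i,j] = sum over u,v of I[(i+u) mod N, (j+v) mod N] * T[u,v].
# Both image indices wrap modulo N (circular convolution).
def conv2d(I, T):
    N, M = len(I), len(T)
    return [[sum(I[(i + u) % N][(j + v) % N] * T[u][v]
                 for u in range(M) for v in range(M))
             for j in range(N)] for i in range(N)]

# A 1x1 template with value 1 reproduces the image unchanged:
print(conv2d([[1, 2], [3, 4]], [[1]]))  # [[1, 2], [3, 4]]
```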
Chapter 6. Hough Transform
Abstract
Hough transforms are used to detect straight lines or edges in an image. Let L be a straight line in the x-y plane. The normal to this line is another straight line that is perpendicular to it and passes through the origin (0, 0) (Figure 6.1). Let θ be the angle the normal makes with the x-axis and let r be the length of the normal. All points (x_i, y_i) on L satisfy the equality
$$ x_i \cos\theta + y_i \sin\theta = r $$
The Hough transform utilizes this equality to detect straight lines or edges in an image. We try out a set {θ_j | 0 ≤ j < p} of p possible values for the angle θ. This is equivalent to trying out a set of p possible slopes for the lines being detected. A θ_j and a point (x_i, y_i) together uniquely define a line L through this point. The length of the normal to this line is given by the above equality. Furthermore, for any θ_j, all image points (x_i, y_i) that have the same normal length, x_i cos θ_j + y_i sin θ_j, lie on the same line L. (This line is uniquely defined by the angle, θ_j, and the length of the normal.) For each θ_j, we determine the number of image points that have the same normal length. By knowing how many image points have the same normal (i.e., same normal length and same normal angle) we can determine the probability that the image has a line with that normal.
Sanjay Ranka, Sartaj Sahni
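The voting step described in the abstract can be sketched as follows; the accumulator keyed on (angle index, rounded normal length) is one common way to quantize r, and all names here are illustrative:

```python
import math
from collections import Counter

# For each candidate angle theta_j, count how many image points share the
# same (rounded) normal length r = x*cos(theta_j) + y*sin(theta_j).
# A large count at (j, r) suggests a line with that normal.
def hough_votes(points, thetas):
    acc = Counter()
    for x, y in points:
        for j, theta in enumerate(thetas):
            r = round(x * math.cos(theta) + y * math.sin(theta), 3)
            acc[(j, r)] += 1
    return acc

# Three collinear points on the horizontal line y = 1: all three vote for
# angle theta = pi/2 with normal length r = 1.
votes = hough_votes([(0, 1), (1, 1), (2, 1)], [0.0, math.pi / 2])
print(votes[(1, 1.0)])  # 3
```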
Chapter 7. Clustering
Abstract
A feature vector v is a set of measurements (v_1, v_2, ..., v_M) which map the important properties of an image into a Euclidean space of dimension M (Ballard and Brown 1985). Clustering partitions a set of feature vectors into groups. It is a valuable tool in exploratory pattern analysis and helps in forming hypotheses about the structure of the data. It is important in syntactic pattern recognition, image segmentation, and registration. There are many methods for clustering feature vectors (Ballard and Brown 1985, Duda and Hart 1973, Fukunaga 1972, Fu 1974, Rosenfeld and Kak 1982, and Tou and Gonzalez 1974). One popular technique is squared error clustering.
Sanjay Ranka, Sartaj Sahni
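The squared error criterion mentioned in the abstract measures, for a given partition, how tightly each group sits around its centroid. A minimal sketch (the function name and list-of-clusters representation are assumptions for illustration):

```python
# Squared error of a clustering: for each cluster, sum the squared
# Euclidean distances of its feature vectors to the cluster centroid,
# then total over all clusters. Lower is a tighter partition.
def squared_error(clusters):
    total = 0.0
    for vecs in clusters:
        m, dim = len(vecs), len(vecs[0])
        centroid = [sum(v[d] for v in vecs) / m for d in range(dim)]
        total += sum(sum((v[d] - centroid[d]) ** 2 for d in range(dim))
                     for v in vecs)
    return total

# Cluster {(0,0), (2,0)} has centroid (1,0) and error 1 + 1 = 2;
# the singleton {(5,5)} contributes 0.
print(squared_error([[(0, 0), (2, 0)], [(5, 5)]]))  # 2.0
```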
Chapter 8. Image Transformations
Abstract
In this chapter we develop hypercube algorithms for shrinking, expanding, translation, rotation, and scaling of an N×N image. We assume that N is a power of 2 and that N^2 processors are available. These are viewed as forming an N×N array. Row major indexing is used for SIMD hypercubes and the Gray code scheme for MIMD hypercubes. The image pixel I[i, j] is mapped to the hypercube processor in position (i, j), 0 ≤ i, j < N, of the array view of the hypercube.
Sanjay Ranka, Sartaj Sahni
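A Gray code mapping of the kind mentioned in the abstract is commonly built from the binary-reflected Gray code applied to each coordinate, so that pixels adjacent in a row or column land in neighboring hypercube processors. The sketch below illustrates that standard construction (it may differ in detail from the book's exact scheme):

```python
# Binary-reflected Gray code: consecutive integers map to codewords
# differing in exactly one bit.
def gray(x):
    return x ^ (x >> 1)

# Map pixel (i, j) of an N x N image (N a power of 2) to a processor index
# by Gray-coding each coordinate. Row- and column-adjacent pixels then map
# to processors whose indices differ in exactly one bit, i.e., to hypercube
# neighbors.
def pixel_to_processor(i, j, N):
    return gray(i) * N + gray(j)

# Vertically adjacent pixels (1,2) and (2,2) of a 4 x 4 image:
a = pixel_to_processor(1, 2, 4)
b = pixel_to_processor(2, 2, 4)
print(bin(a ^ b).count("1"))  # 1  (one-bit difference: hypercube neighbors)
```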
Chapter 9. SIMD String Editing
Abstract
The input to the string editing problem consists of two strings A = a_1 a_2 a_3 ... a_{n-1} and B = b_1 b_2 b_3 ... b_{m-1}, and three cost functions C, D, and I where:
$$ C(a_i, b_j) = \text{cost of changing } a_i \text{ to } b_j $$
$$ D(a_i) = \text{cost of deleting } a_i \text{ from } A $$
$$ I(b_j) = \text{cost of inserting } b_j \text{ into } A $$
Sanjay Ranka, Sartaj Sahni
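With these three cost functions, the minimum edit cost has a classical dynamic programming formulation; a sequential sketch under that standard recurrence (illustrative names, not the book's SIMD algorithm):

```python
# E[i][j] = minimum cost of editing the first i symbols of A into the
# first j symbols of B, using cost functions C (change), D (delete),
# and I (insert).
def edit_cost(A, B, C, D, I):
    n, m = len(A), len(B)
    E = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):                  # delete all of A's prefix
        E[i][0] = E[i - 1][0] + D(A[i - 1])
    for j in range(1, m + 1):                  # insert all of B's prefix
        E[0][j] = E[0][j - 1] + I(B[j - 1])
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            E[i][j] = min(E[i - 1][j - 1] + C(A[i - 1], B[j - 1]),  # change
                          E[i - 1][j] + D(A[i - 1]),                # delete
                          E[i][j - 1] + I(B[j - 1]))                # insert
    return E[n][m]

# With unit costs this is the Levenshtein distance:
unit = lambda a, b: 0 if a == b else 1
print(edit_cost("cat", "cut", unit, lambda a: 1, lambda b: 1))  # 1
```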
Backmatter
Metadata
Title
Hypercube Algorithms
Authors
Prof. Sanjay Ranka
Prof. Sartaj Sahni
Copyright year
1990
Publisher
Springer New York
Electronic ISBN
978-1-4613-9692-5
Print ISBN
978-1-4613-9694-9
DOI
https://doi.org/10.1007/978-1-4613-9692-5