
1982 | Book

Algorithms for Graphics and Image Processing

Written by: Theo Pavlidis

Publisher: Springer Berlin Heidelberg


About this book

The technological developments of the last ten years have made computer graphics and image processing by computer popular. Pictorial pattern recognition has also shown significant progress. Clearly, there exist overlapping interests among the three areas of research. Graphic displays are of concern to anyone involved in image processing or pictorial pattern recognition, and many problems in graphics require methodologies from image processing for their solutions. The data structures used in all three areas are similar. It seems that there is a common body of knowledge underlying all three areas: pictorial information processing by computer. The novelty of these fields makes it difficult to design a course or to write a book covering their basic concepts. Some of the treatises on graphics focus on the hardware and methods of current interest, while treatises on image processing often emphasize applications and classical signal processing. The fast evolution of technology causes such material to lose its relevance. For example, the development of optical fibers has reduced the importance of bandwidth compression.

Table of Contents

Frontmatter
Chapter 1. Introduction
Abstract
Processing of pictorial data by computer takes different forms in a variety of applications. It has been customary to classify such work into three areas: graphics, image processing, and pictorial pattern recognition.
Theo Pavlidis
Chapter 2. Digitization of Gray Scale Images
Abstract
When a picture is to be processed by computer, it is often described as a matrix, or some other discrete data structure. But a picture is primarily a signal that conveys information to an observer, and there are many applications where this consideration is particularly important. We shall devote this and the next chapter to the discussion of such problems, especially for gray scale (class 1) images. The first problem is the conversion of a continuous picture into a discrete form, and this involves two processes: sampling, which is the selection of a discrete grid to represent an image, and quantization, which is the mapping of the brightness and color values into integers. In graphics one is concerned with similar problems: specifically the choice of the display resolution and number of gray levels or colors. These processes are also relevant in one-dimensional data and have been studied thoroughly in that case, but two-dimensional data present new problems. We shall devote the first part of this chapter to a review of transform techniques and then we shall discuss sampling for the one-dimensional case, followed by sampling for pictures. Quantization will be treated in the last section.
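As a rough illustration of the quantization step described above, a uniform quantizer maps continuous brightness values into a small set of integers. This is only a minimal sketch; the array layout, value range, and number of levels are assumptions for the example, not taken from the book.

```python
import numpy as np

def quantize(image, levels=8):
    """Map continuous brightness values in [0.0, 1.0] to integer levels 0..levels-1.

    A uniform quantizer; the choice of 8 levels is only illustrative.
    """
    image = np.clip(image, 0.0, 1.0)
    # Scale, truncate to an integer level, and keep the top value in range.
    return np.minimum((image * levels).astype(int), levels - 1)

# Example: a synthetic 4x4 "picture" sampled on a discrete grid.
picture = np.linspace(0.0, 1.0, 16).reshape(4, 4)
print(quantize(picture, levels=4))
```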
Theo Pavlidis
Chapter 3. Processing of Gray Scale Images
Abstract
There are two major types of gray scale image processing: within class transformations, such as filtering and image enhancement, and class 1 image to class 2 image transforms, such as segmentation. Most methods for performing such processing use, directly or indirectly, statistics computed on images. We shall discuss two of them, the histogram of distribution of gray levels (in Section 3.2), and the cooccurrence matrix of pairs of gray levels at pixel pairs (in Section 3.3). Their applications in filtering are discussed in Sections 3.4 and 3.5, while their use for segmentation is presented in the next chapter.
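The two statistics mentioned above can be sketched in a few lines of code. The offset choice for pixel pairs and the array layout below are assumptions for the example, not necessarily the conventions used in Sections 3.2 and 3.3.

```python
import numpy as np

def gray_histogram(image, levels=256):
    """Histogram of the distribution of gray levels: count of each value."""
    hist = np.zeros(levels, dtype=int)
    for value in image.ravel():
        hist[value] += 1
    return hist

def cooccurrence(image, levels=256, dr=0, dc=1):
    """Co-occurrence counts of gray level pairs at pixel pairs separated by
    the offset (dr, dc); here one pixel to the right, an arbitrary choice."""
    rows, cols = image.shape
    matrix = np.zeros((levels, levels), dtype=int)
    for r in range(rows - dr):
        for c in range(cols - dc):
            matrix[image[r, c], image[r + dr, c + dc]] += 1
    return matrix
```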
Theo Pavlidis
Chapter 4. Segmentation
Abstract
Segmentation identifies areas of an image that appear uniform to an observer, and subdivides the image into regions of uniform appearance. It is relatively easy to define uniformity in terms of gray level or color. It is far more difficult to specify what we mean by “uniform texture.”
Theo Pavlidis
Chapter 5. Projections
Abstract
In image processing, the term projection usually refers to mapping an image into a waveform whose values are the sums of the values of the image points along particular directions. (In graphics, as well as in some scene analysis problems, the term is commonly used to denote the mapping of a three-dimensional object to the plane with all depth information lost.) The reconstruction of gray scale images from their projections is one of the best known uses of computers for medical applications because it provides a strong diagnostic tool replacing some painful and dangerous techniques (see Bibliographical Notes). The technique can be extended to reconstruct a three-dimensional object from its two-dimensional projections. This is achieved by linking together a series of cross-sections. Typically, the object is a human organ (heart, brain, etc.) and the projections are radiographic or ultrasonic images, but the theory is applicable to any type of object and penetrating signal. Figure 5.1 (Plate 19) illustrates the results of such a reconstruction. We shall discuss here some of the simpler techniques, without going into any discussion of medical applications. Sections 5.2 and 5.3 describe a fundamental method for such reconstructions.
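For the two simplest directions, the projections defined above are just row and column sums of the image matrix. The sketch below is illustrative only and is not the reconstruction method of Sections 5.2 and 5.3.

```python
import numpy as np

def projections(image):
    """Horizontal and vertical projections: sums of pixel values along
    each row and along each column."""
    return image.sum(axis=1), image.sum(axis=0)

image = np.array([[0, 1, 1],
                  [2, 0, 1],
                  [0, 3, 0]])
row_proj, col_proj = projections(image)
print(row_proj)  # [2 3 3]
print(col_proj)  # [2 4 2]
```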
Theo Pavlidis
Chapter 6. Data Structures
Abstract
The large memory requirements associated with storing pictorial data are well known to anyone who has worked with them. For example, storing an ordinary frame of television requires at least 512×512 bytes, if we use three bits for two of the primary colors and two for the third. A black and white passport photograph requires at least a 64×64 matrix with six bits per element, well above the size of a record containing whatever other information is in a passport. (A page of single-spaced typewritten text requires about 3000 bytes.) Problems of storage, search, retrieval, transmission, etc. are particularly difficult whenever pictorial data are encountered. These difficulties are somewhat counterintuitive because humans often find it easier to deal with pictures than with text. It is far easier for us to remember the face of a new acquaintance than a page of typewritten text. The difficulty of matching such human performance on a computer can be appreciated by pointing out that, at least for some people, the recollection of the face is better when it belongs to a member of the opposite sex and that the text is remembered better if it is a piece of prose than if it is a list of names. Therefore, any data compaction techniques that depend only on signal processing are not likely to reduce the data volume to a size compatible with our intuitive expectations.
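The storage figures quoted above can be checked with a one-line calculation; the helper name below is ours, used only for this back-of-the-envelope sketch.

```python
def frame_bytes(rows, cols, bits_per_pixel):
    """Storage in bytes for a rows x cols image at the given pixel depth."""
    return rows * cols * bits_per_pixel // 8

# 512 x 512 frame, 3 + 3 + 2 bits for the three primaries = 8 bits per pixel.
print(frame_bytes(512, 512, 3 + 3 + 2))   # 262144 bytes, i.e. 512 x 512 bytes
# 64 x 64 passport photograph at 6 bits per element.
print(frame_bytes(64, 64, 6))             # 3072 bytes
```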
Theo Pavlidis
Chapter 7. Bilevel Pictures
Abstract
This chapter and the next two chapters deal with class 2 pictures where regions of fixed gray level or color are well defined. A major concern in the study of such images is the concept of shape, a term that is not easy to define quantitatively. One faces this problem when sampling an analog bilevel image. The size of the cells of the sampling grid must be small enough so that the shapes of regions of a given color remain unaltered in reconstructing the image. Sections 7.2, 7.4, and 7.6 deal with this and with other aspects of the digitization problem. Another set of problems in processing such images involves transforming them into a set of curves, or going back from a set of curves to plane regions. The tracing of the contour of a region is discussed in Section 7.5. Reconstructing the region from its contour, or filling, will be treated in Chapter 8. Instead of finding the contour, one may choose to thin a region to its skeleton, a matchstick type of figure whose curves or lines reflect the shape of the region. Thinning algorithms will be treated in Chapter 9.
Theo Pavlidis
Chapter 8. Contour Filling
Abstract
One of the most common problems in graphics and picture analysis is finding the interior of a region when its contour is given, i.e., transforming a class 3 picture into a class 2 picture. For example, any shading algorithm presumes the solution of this problem. In pattern recognition, many algorithms compute integrals over the area of a region and require knowledge of the interior. In phototypesetting, fonts are often described in terms of contours which are then filled to produce the final copy. The problem can be solved in many ways, which can be divided into two broad classes. In the first, one has a precise description of the contour as a polygon and decides which parts of the plane lie in the interior by considering, in effect, the line equations. Such techniques could be called polygon based but we prefer the shorter term edge filling and describe them in Section 8.2. Methods of the second class map the contour onto the discrete plane and then locate the interior by examining the values of pixels. Such pixel based techniques are discussed in Sections 8.3 to 8.5.
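As a minimal sketch of the pixel based class of methods, a simple seed (flood) fill marks the interior starting from a point known to lie inside the region. The 4-connectivity choice and array conventions are ours and this is not necessarily one of the algorithms of Sections 8.3 to 8.5.

```python
def seed_fill(pixels, seed, fill_value):
    """Fill the 4-connected region containing `seed`, stopping at pixels
    that already differ from the seed's value (e.g. the contour pixels)."""
    rows, cols = len(pixels), len(pixels[0])
    r0, c0 = seed
    target = pixels[r0][c0]
    if target == fill_value:
        return pixels
    stack = [seed]
    while stack:
        r, c = stack.pop()
        if 0 <= r < rows and 0 <= c < cols and pixels[r][c] == target:
            pixels[r][c] = fill_value
            stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return pixels
```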
Theo Pavlidis
Chapter 9. Thinning Algorithms
Abstract
Thinning algorithms have been studied widely in picture processing and pattern recognition because they offer a way of simplifying pictorial forms. Figure 9.1 illustrates a motivation for a thinning algorithm. The shaded pixels represent a quantization of a line drawing to be mapped back into a set of lines. In Sections 7.6 and 7.7, we have already discussed how the concept of thinness can be defined over a discrete grid. We shall use that analysis here as the basis for thinning algorithms.
Theo Pavlidis
Chapter 10. Curve Fitting and Curve Displaying
Abstract
In Section 7.6 we discussed how one can define a curve on a discrete grid, and in Section 9.7 we discussed how to derive curves from skeletons. The representation of a curve as a sequence of pixels may be adequate for some applications, but for others we would like to have a mathematical expression for it. Such an expression may be far more compact than the discrete forms, as a comparison between Figure 9.8 and Table 9.1 shows. Finding a curve that passes through a set of given points is the problem of interpolation, while finding a curve that passes near a set of given points is the problem of approximation. We shall use the term curve fitting to refer collectively to both of them. The problem of curve displaying is also of interest: given a mathematical expression, identify the pixels that must be marked so that an image of the curve described by the expression is displayed. The problem is by no means trivial, even in the case of straight line displays (Section 7.6).
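For the straight-line case of curve displaying, the widely used midpoint (Bresenham-style) scan conversion picks the grid pixels closest to the ideal segment. The sketch below is a standard formulation, not necessarily the one developed in Section 7.6.

```python
def line_pixels(x0, y0, x1, y1):
    """Pixels approximating the segment (x0,y0)-(x1,y1) on an integer grid
    (standard Bresenham scheme, valid in all octants)."""
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    pixels = []
    while True:
        pixels.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy
    return pixels

print(line_pixels(0, 0, 5, 2))  # [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2)]
```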
Theo Pavlidis
Chapter 11. Curve Fitting with Splines
Abstract
In many applications where curve fitting is used, one would like to modify parts of the curve without affecting other parts. We shall say that a scheme has a local property if local modifications do not propagate. Clearly, the polynomials discussed in Section 10.2 do not have this feature and Bezier polynomials exhibit it only approximately. A change in the location or multiplicity of one of the guiding points requires the recalculation of the whole curve, even though the changes will have little effect far from the changed guiding point. Piecewise polynomial functions offer a direct way of achieving local control. We shall discuss such functions first in the y = y(x) form and later in parametric representations. The following is a general expression for a piecewise polynomial function:
$$ p(x) = p_i(x) \qquad x_i \leqslant x \leqslant x_{i+1} \qquad i = 0, 1, \ldots, k-1 $$
(11.1a)
$$ p_{i-1}^{(j)}(x_i) = p_i^{(j)}(x_i) \qquad j = 0, 1, \ldots, r-1; \quad i = 1, \ldots, k-1 $$
(11.1b)
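A minimal sketch of evaluating a piecewise polynomial in the form of Eq. (11.1a) follows; the coefficient layout (one tuple of power-basis coefficients per piece, expanded about the left breakpoint) is an assumption made for the example.

```python
def piecewise_poly(x, breakpoints, coeffs):
    """Evaluate p(x) = p_i(x) for x_i <= x <= x_{i+1} (Eq. 11.1a).

    breakpoints: x_0 < x_1 < ... < x_k
    coeffs[i]:   power-basis coefficients (a_0, a_1, ...) of piece p_i,
                 written in terms of t = x - x_i.
    """
    # Locate the piece whose interval contains x.
    i = 0
    while i < len(coeffs) - 1 and x > breakpoints[i + 1]:
        i += 1
    t = x - breakpoints[i]
    return sum(a * t**j for j, a in enumerate(coeffs[i]))

# Two linear pieces meeting continuously (r = 1) at the breakpoint x = 1.
breaks = [0.0, 1.0, 2.0]
pieces = [(0.0, 1.0),    # p_0(x) = x            on [0, 1]
          (1.0, -1.0)]   # p_1(x) = 1 - (x - 1)  on [1, 2]
print(piecewise_poly(0.5, breaks, pieces), piecewise_poly(1.5, breaks, pieces))
```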
Theo Pavlidis
Chapter 12. Approximation of Curves
Abstract
The last two chapters dealt mostly with interpolation where a curve must pass through all the data points. (Guiding points are part of the design parameters, so that a curve defined by them can still be used for interpolation.) There are many applications where interpolation is not necessary, or even desirable, and one wants the curve to pass only near the data points. This is the approximation problem. If the curve fitting is done in an interactive way, the distinction between the two problems is not essential. The user modifies parameters (such as guiding points) until the curve looks right. “Looking right” may mean that the curve passes through all the data points, or through most of them, or near all of them, and so on. If the curve fitting is done automatically, such subjective criteria must be replaced by mathematically precise measures of closeness. The most common such measures are the maximum error and the integral square error (ISE). The error can be measured either along a coordinate or along a normal to the approximating curve. The latter is intuitively more appealing but more difficult to compute. Let e_i denote the pointwise error at the i-th point, i.e., the distance between the curve and the point (measured by either of the above techniques).
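In this notation, with e_i the pointwise error at the i-th of N data points, the two measures take the following common discrete forms (the integral of the ISE is replaced by a sum over the data points):
$$ E_{\max} = \max_{1 \leqslant i \leqslant N} e_i, \qquad \mathrm{ISE} = \sum_{i=1}^{N} e_i^2 $$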
Theo Pavlidis
Chapter 13. Surface Fitting and Surface Displaying
Abstract
The display of the surface of a three-dimensional object is an important problem in applications of graphics systems where the goal is to provide the user with different views of a group of solid objects. Surface fitting is also necessary when the objects are given initially as sets of points. Surface fitting and displaying are also of interest in other areas. In computerized cartography or geography one deals with models of terrain that are expressed as mathematical surfaces. In picture processing and pattern recognition, surfaces enter into image analysis in at least two ways. First, one can think of a picture as a surface by using the brightness values for the third coordinate. Second, one may wish to consider the surfaces of the objects that appear in a picture. Then the segmentation problem (Chapter 4) is reduced to identifying groups of pixels that are images of a single surface.
Theo Pavlidis
Chapter 14. The Mathematics of Two-Dimensional Graphics
Abstract
A major goal in computer graphics is to create pictorial displays out of mathematical descriptions of images. This requires the creation (usually by the host computer) of a display file that is then sent to the graphic device. In vector graphics where the device expects commands such as “draw vector,” the contents of the display file are for the most part primitive instructions. In raster graphics, instructions such as “draw vector” are not primitive and they are translated by the device into pixel configurations in the refresh memory. Strictly speaking, most raster graphics devices do not have a display file. However, it is useful to think in terms of such a file, regardless of how the display device handles it. The creation of a display file is trivial for very simple displays but becomes quite challenging as the complexity of the desired display increases.
Theo Pavlidis
Chapter 15. Polygon Clipping
Abstract
The term clipping is used to describe the process of finding whether a line (or polygon) is intersected by another polygon. A major application of clipping is to decide whether certain constructs fall within the display window. We should point out here that one cannot rely on the display device to achieve clipping for two reasons. First, the display area may be only a subset of the display screen (for example, a part of the screen may be reserved for the display of text). Second, there is no uniform way by which points with invalid coordinates are displayed. Quite often a display device ignores the high order bits, so that the constructs wrap around the screen. Therefore, there is a need for algorithms to solve this problem explicitly.
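One classical way to clip a line segment against a rectangular display window is the Cohen–Sutherland outcode scheme, sketched below. The window bounds and names are illustrative, and this is not necessarily the algorithm developed in the chapter.

```python
LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8

def outcode(x, y, xmin, ymin, xmax, ymax):
    """Four-bit classification of a point against the window edges."""
    code = 0
    if x < xmin:
        code |= LEFT
    elif x > xmax:
        code |= RIGHT
    if y < ymin:
        code |= BOTTOM
    elif y > ymax:
        code |= TOP
    return code

def clip_segment(x0, y0, x1, y1, xmin, ymin, xmax, ymax):
    """Return the part of the segment inside the window, or None."""
    c0 = outcode(x0, y0, xmin, ymin, xmax, ymax)
    c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)
    while True:
        if not (c0 | c1):            # both endpoints inside: trivially accept
            return (x0, y0, x1, y1)
        if c0 & c1:                  # both outside the same edge: trivially reject
            return None
        c = c0 or c1                 # pick an endpoint that lies outside
        if c & TOP:
            x = x0 + (x1 - x0) * (ymax - y0) / (y1 - y0); y = ymax
        elif c & BOTTOM:
            x = x0 + (x1 - x0) * (ymin - y0) / (y1 - y0); y = ymin
        elif c & RIGHT:
            y = y0 + (y1 - y0) * (xmax - x0) / (x1 - x0); x = xmax
        else:                        # LEFT
            y = y0 + (y1 - y0) * (xmin - x0) / (x1 - x0); x = xmin
        if c == c0:
            x0, y0 = x, y
            c0 = outcode(x0, y0, xmin, ymin, xmax, ymax)
        else:
            x1, y1 = x, y
            c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)
```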
Theo Pavlidis
Chapter 16. The Mathematics of Three-Dimensional Graphics
Abstract
Computer graphics offers an interesting example of revival of some almost forgotten methodologies and branches of mathematics. The rendering of three-dimensional scenes on a plane surface preoccupied painters for many centuries and the study of perspective projections occupied a prominent position in art schools. Around the beginning of the nineteenth century the study of descriptive geometry became a central subject in engineering and dealt with problems such as finding the intersection of cylinders by graphical means, etc. The invention of photography reduced the significance of realistic art, and the advent of abstract art placed studies of projections in the background. Similarly, descriptive geometry lost its prominence in engineering and by 1960 few engineering students in the United States were studying the subject.
Theo Pavlidis
Chapter 17. Creating Three-Dimensional Graphic Displays
Abstract
We use the term three-dimensional display for displays that are projections of three-dimensional scenes rather than for displays in three dimensions. However, the use of proper projection operations, elimination of surfaces that are not supposed to be visible, and shading can create a realistic impression of depth, as one can see from Figures 17.1 (Plate 28) and 17.2 (Plate 29). Visibility problems are central in the creation of such displays, as are shading algorithms (Section 13.9) and randomization techniques for introducing the appearance of texture, as shown in Figure 17.2. The efficient solution of visibility problems is probably one of the major steps in creating such a display. These are often referred to as the hidden line or hidden surface problems. Section 17.2 is a general discussion of the subject. Section 17.3 deals with a visibility algorithm using a quad tree and Section 17.4 with an algorithm particularly appropriate for raster graphics. Section 17.5 discusses the subject of coherence: how to take advantage of the interdependence of different parts of a scene to speed up the solution of the visibility problem. Even though there exist a large number of visibility algorithms in the literature, we present here only two of the simplest. The reason is that the choice of a particular method is strongly application dependent.
Theo Pavlidis
Backmatter
Metadata
Title
Algorithms for Graphics and Image Processing
Written by
Theo Pavlidis
Copyright Year
1982
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-93208-3
Print ISBN
978-3-642-93210-6
DOI
https://doi.org/10.1007/978-3-642-93208-3