
1995 | Book | 3rd edition

Digital Image Processing

Concepts, Algorithms, and Scientific Applications

Author: Dr. Bernd Jähne

Publisher: Springer Berlin Heidelberg

About this Book

From the reviews of the first edition:
"I recommend this book to anyone seriously engaged in image processing. It will clearly stretch the horizon of some readers and be a good reference for others. This is not just another image processing book; it is a book worth owning and a book worth reading several times ..." #J. Electronic Imaging#
This practical guidebook uses the concepts and mathematics familiar to students of the natural sciences to provide them with a working knowledge of modern techniques of digital image processing. It takes readers from basic concepts to current research topics and demonstrates how digital image processing can be used for data gathering in research. Detailed examples of applications on PC-based systems and ready-to-use algorithms enhance the text, as do nearly 200 illustrations (16 in color). The book also includes the most exciting recent advances such as reconstruction of 3-D objects from projections and the analysis of stereo images and image sequences.

Table of Contents

Frontmatter
1. Introduction
Abstract
From the beginning of science, visual observation has played a major role. At that time, the only way to document the results of an experiment was by verbal description and manual drawings. The next major step was the invention of photography which enabled results to be documented objectively. Three prominent examples of scientific applications of photography are astronomy, photogrammetry, and particle physics. Astronomers were able to measure positions and magnitudes of stars accurately. Aerial images were used to produce topographic maps. Searching through countless images from hydrogen bubble chambers led to the discovery of many elementary particles in physics. These manual evaluation procedures, however, were time consuming. Some semi- or even fully automated optomechanical devices were designed. However, they were adapted to a single specific purpose. This is why quantitative evaluation of images never found widespread application at that time. Generally, images were only used for documentation, qualitative description and illustration of the phenomena observed.
Bernd Jähne
2. Image Formation and Digitization
Abstract
Image acquisition is the first step of digital image processing and is often not properly taken into account. However, quantitative analysis of any image requires a good understanding of the image formation process. Only with a profound knowledge of all the steps involved in image acquisition is it possible to interpret the contents of an image correctly. The steps necessary for an object in the three-dimensional world to become a digital image in the memory of a computer are as follows:
  • Becoming visible. An object becomes visible by the interaction with light or, more generally, electromagnetic radiation. The four basic types of interaction are reflection, refraction, absorption, and scattering. These effects depend on the optical properties of the material from which the object is made and on its surface structure. The light collected by a camera system is determined by these optical properties as well as by the illumination, i.e., the position and nature of the light or, more generally, radiation sources.
  • Projection. An optical system collects the light rays reflected from the objects and projects the three-dimensional world onto a two-dimensional image plane.
  • Digitization. The continuous image on the image plane must be converted into image points on a discrete grid. Furthermore, the intensity at each point must be represented by a suitable finite number of gray values (quantization). A minimal sketch of this step follows the list.
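The sketch below is illustrative only; the function names and the choice of 256 gray values are assumptions, not the book's code. It samples a continuous intensity function on a discrete grid and quantizes it to a finite number of gray values:

```python
import numpy as np

def digitize(intensity, rows=256, cols=256, levels=256):
    """Sample a continuous intensity function f(x, y) on a discrete
    grid and quantize it to a finite number of gray values."""
    # Spatial discretization: sample the image plane on a regular grid.
    y, x = np.mgrid[0:1:rows * 1j, 0:1:cols * 1j]
    continuous = intensity(x, y)
    # Quantization: map intensities to [0, 1], then to discrete levels.
    lo, hi = continuous.min(), continuous.max()
    normalized = (continuous - lo) / (hi - lo)
    return np.round(normalized * (levels - 1)).astype(np.uint8)

# Example: a smooth sinusoidal pattern becomes an 8-bit digital image.
img = digitize(lambda x, y: np.sin(8 * np.pi * x) * np.cos(8 * np.pi * y))
print(img.shape, img.dtype, img.min(), img.max())  # (256, 256) uint8 0 255
```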
Bernd Jähne
3. Space and Wave Number Domain
Abstract
The Fourier transform, i.e., the decomposition of an image into periodic structures, proved to be an extremely helpful tool for understanding image formation and digitization. Throughout the whole discussion in the last chapter we used the continuous Fourier transform. Proceeding now to discrete images, the question arises whether there is a discrete analogue to the continuous Fourier transform. Such a transformation would allow us to decompose a discrete image directly into its periodic components.
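As a minimal sketch of such a decomposition (using numpy's FFT routines as a stand-in for the transform algorithms developed in the chapter; the test pattern is invented for illustration):

```python
import numpy as np

# A discrete image containing a single horizontal periodic structure:
# four full periods of a cosine along each row.
rows, cols = 64, 64
n = np.arange(cols)
image = np.tile(np.cos(2 * np.pi * 4 * n / cols), (rows, 1))

# The 2-D discrete Fourier transform decomposes the image into
# periodic components indexed by discrete wave numbers (u, v).
spectrum = np.fft.fft2(image)
power = np.abs(spectrum) ** 2

# The energy concentrates at the wave numbers of the pattern:
# u = 0 (no variation along columns) and v = 4 (or its alias 60).
u, v = np.unravel_index(np.argmax(power), power.shape)
print("dominant wave number:", u, v)  # -> 0 4
```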
Bernd Jähne
4. Pixels
Abstract
Discrete images are composed of individual image points, which we denoted in section 2.3.1 as pixels. Pixels are the elementary units in digital image processing. The simplest processing is to handle these pixels as individual objects or measuring points. This approach enables us to regard image formation as a measuring process which is corrupted by noise and systematic errors. Thus we learn to handle image data as statistical quantities. As long as we are confined to individual pixels, we can apply the classical concepts of statistics which are used to handle point measurements, e.g., the measurement of meteorological parameters at a weather station such as air temperature, wind speed and direction, relative humidity, and air pressure.
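A minimal sketch of this statistical view (the noise model and all numbers are assumptions chosen for illustration): each pixel is treated as a point measurement, so the mean and standard deviation over repeated acquisitions estimate the true gray value and the sensor noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate repeated acquisitions of the same scene: a constant true
# gray value per pixel, corrupted by additive sensor noise.
true_image = np.full((32, 32), 100.0)
acquisitions = true_image + rng.normal(0.0, 5.0, size=(50, 32, 32))

# Classical point statistics, applied independently at each pixel:
mean_image = acquisitions.mean(axis=0)        # estimate of the true value
std_image = acquisitions.std(axis=0, ddof=1)  # estimate of the noise level

print("mean of means:", mean_image.mean())       # close to 100
print("mean noise estimate:", std_image.mean())  # close to 5
```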
Bernd Jähne
5. Neighborhoods
Abstract
The contents of an image can only be revealed when we analyze the spatial relations of the gray values. If the gray value does not change in a small neighborhood, we are within an area of constant gray values. This could mean that the neighborhood is included in an object. If the gray value changes, we might be at the edge of an object. In this way, we recognize areas of constant gray values and edges.
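As an illustrative sketch (not an algorithm from the book), the gray-value range within a small neighborhood already separates the two cases: it is zero in areas of constant gray values and large across edges.

```python
import numpy as np

def local_range(img, half=1):
    """Gray-value range (max - min) in a (2*half+1)^2 neighborhood:
    zero inside constant regions, large across edges."""
    rows, cols = img.shape
    out = np.zeros_like(img, dtype=float)
    for r in range(half, rows - half):
        for c in range(half, cols - half):
            window = img[r - half:r + half + 1, c - half:c + half + 1]
            out[r, c] = window.max() - window.min()
    return out

# A vertical step edge: left half dark, right half bright.
img = np.zeros((8, 8))
img[:, 4:] = 100
print(local_range(img)[4])  # nonzero only at the columns along the edge
```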
Bernd Jähne
6. Mean and Edges
Abstract
In this chapter we will apply neighborhood operations to analyze two elementary structures: the mean gray value and changes in the gray values. The determination of a correct mean value also includes the suppression of distortions in the gray values caused by sensor noise or transmission errors. In the simplest case, changes in the gray value mark the edges of objects. Thus edge detection and smoothing are complementary operations. While smoothing gives adequate averages for the gray values within the objects, edge detection aims at estimating the boundaries of objects.
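The sketch below illustrates the two complementary operations with the simplest possible choices, a 3x3 box filter for the mean and a symmetric-difference gradient magnitude for the edges; both are illustrative stand-ins for the filters analyzed in the chapter.

```python
import numpy as np

def box_smooth(img):
    """3x3 box filter: replaces each inner pixel by the mean gray
    value of its neighborhood, suppressing pixel noise."""
    img = img.astype(float)
    out = img.copy()
    out[1:-1, 1:-1] = sum(
        img[1 + dr:img.shape[0] - 1 + dr, 1 + dc:img.shape[1] - 1 + dc]
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
    ) / 9.0
    return out

def gradient_magnitude(img):
    """Magnitude of the discrete gray-value gradient (symmetric
    differences); large values indicate edges."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

img = np.zeros((8, 8))
img[:, 4:] = 100.0
print(gradient_magnitude(box_smooth(img))[4])  # peaks around the edge
```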
Bernd Jähne
7. Local Orientation
Abstract
In the last chapter we became acquainted with neighborhood operations. In fact, we only studied very simple structures in a local neighborhood, namely the edges. We concentrated on the detection of edges, but we did not consider how to determine their orientation. Orientation is a significant property not only of edges but also of any pattern that shows a preferred direction. The local orientation of a pattern is the property which leads the way to a description of more complex image features. Local orientation is also a key feature in motion analysis (chapter 17). Furthermore, there is a close relationship between orientation and projection (section 13.4.2).
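A compact way to estimate local orientation, sketched below, averages products of the gray-value gradients over a neighborhood and extracts the dominant direction in closed form. This is an illustration of the idea only, not a reproduction of the chapter's derivation.

```python
import numpy as np

def local_orientation(img):
    """Orientation estimate from averaged gradient products (the
    components of the structure tensor). Returns the angle, in
    radians, of the dominant direction of gray-value variation."""
    gy, gx = np.gradient(img.astype(float))
    jxx, jyy, jxy = (gx * gx).mean(), (gy * gy).mean(), (gx * gy).mean()
    # The dominant orientation follows from the tensor's eigenvectors;
    # in closed form: angle = 0.5 * atan2(2*Jxy, Jxx - Jyy).
    return 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)

# A pattern varying only along x: the estimated angle is ~0 degrees.
x = np.linspace(0, 4 * np.pi, 32)
patch = np.tile(np.sin(x), (32, 1))
print(np.degrees(local_orientation(patch)))  # close to 0
```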
Bernd Jähne
8. Scales
Abstract
The effect of all the operators discussed so far, except for recursive filters, is restricted to local neighborhoods which are significantly smaller than the size of the image. This inevitably means that they can only extract local features. We have already seen that the analysis of a more complex feature such as local orientation (chapter 7) requires larger neighborhoods than the computation of a simple property such as the Laplacian (section 6.2). A larger neighborhood can contain a larger set of features, which in turn requires more complex operations to reveal them. If we extrapolate this approach by analyzing larger scales in the image with larger filter kernels, we inevitably run into a dead end: the computation of the more complex operators becomes so tedious that they are no longer useful.
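One common way out, sketched below, is a multiscale representation: repeated smoothing and downsampling (a Gaussian pyramid), so that small kernels applied to coarse levels reach large scales cheaply. The binomial kernel is an illustrative choice.

```python
import numpy as np

def smooth_and_downsample(img):
    """One pyramid step: smooth with a separable 1-2-1 binomial
    kernel, then take every second pixel in each direction."""
    p = np.pad(img.astype(float), 1, mode='edge')
    sm = (p[:-2] + 2 * p[1:-1] + p[2:]) / 4.0               # rows
    sm = (sm[:, :-2] + 2 * sm[:, 1:-1] + sm[:, 2:]) / 4.0   # columns
    return sm[::2, ::2]

# Build a small pyramid from a 64x64 test image.
rng = np.random.default_rng(0)
pyramid = [rng.random((64, 64))]
while min(pyramid[-1].shape) > 8:
    pyramid.append(smooth_and_downsample(pyramid[-1]))
print([p.shape for p in pyramid])  # (64,64), (32,32), (16,16), (8,8)
```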
Bernd Jähne
9. Texture
Abstract
Local orientation (chapter 7) was the first example of a more complex feature describing the structure of the gray values in a local neighborhood. It enabled us to distinguish objects not only by their gray values but also by the orientation of their patterns (compare figure 7.1). Real-world objects often carry patterns which differ not only in their orientation but also in many other parameters. Our visual system is capable of recognizing and distinguishing such patterns with ease, but it is difficult to describe the differences precisely (figure 9.1). Patterns which characterize objects are called textures in image processing. Textures, in fact, mark the difference between an artificial world of objects whose surfaces are characterized only by their color and reflectivity and real-world imagery. We can see a similar trend in computer graphics: if we place a texture on the surface of objects, a process called texture mapping, we obtain much more realistic images (see also plate 3).
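One of the simplest texture parameters is the local variance of the gray values. The sketch below (an illustration, not the chapter's full set of texture operators) shows how it separates a smooth region from a strongly textured one.

```python
import numpy as np

def local_variance(img, half=2):
    """Variance of the gray values in a (2*half+1)^2 neighborhood:
    low on smooth surfaces, high on strongly textured ones."""
    rows, cols = img.shape
    out = np.zeros_like(img, dtype=float)
    for r in range(half, rows - half):
        for c in range(half, cols - half):
            out[r, c] = img[r - half:r + half + 1,
                            c - half:c + half + 1].var()
    return out

rng = np.random.default_rng(0)
img = np.zeros((16, 16))
img[:, 8:] = rng.normal(0.0, 10.0, (16, 8))  # textured right half
v = local_variance(img)
print(v[8, 2], v[8, 13])  # ~0 in the flat half, large in the textured half
```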
Bernd Jähne
10. Segmentation
Abstract
All image processing operations discussed so far have helped us to “recognize” objects of interest, i.e., to find suitable local features which allow us to distinguish them from other objects and from the background. The next step is to check for each individual pixel whether it belongs to an object of interest or not. This operation is called segmentation and produces a binary image. A pixel has the value one if it belongs to the object; otherwise it is zero. Segmentation is the operation at the threshold between low-level image processing and the operations which analyze the shape of objects, such as those discussed in chapter 11. In this chapter, we discuss several types of segmentation methods. Basically, we can think of three concepts for segmentation: pixel-based methods only use the gray values of the individual pixels; edge-based methods detect edges and then try to follow them; finally, region-based methods analyze the gray values in larger areas.
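A minimal sketch of the pixel-based concept, global thresholding: every pixel whose gray value exceeds a threshold is set to one (object), all others to zero (background). The scene and the threshold below are invented for illustration.

```python
import numpy as np

def threshold_segment(img, thresh):
    """Pixel-based segmentation: one where the gray value exceeds
    the threshold (object), zero elsewhere (background)."""
    return (img > thresh).astype(np.uint8)

# A bright 10x10 object (~200) on a dark background (~50), plus noise.
rng = np.random.default_rng(0)
img = rng.normal(50.0, 10.0, (32, 32))
img[10:20, 10:20] += 150.0
binary = threshold_segment(img, 125.0)
print(binary.sum(), "object pixels")  # 100, the 10x10 object
```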
Bernd Jähne
11. Shape
Abstract
After the segmentation process, which we discussed in the previous chapter, we know which pixels belong to the object of interest. Now we can perform the next step and analyze the shape of the objects. This is the topic of this chapter. First we will discuss a class of neighborhood operations, the morphological operators on binary images, which work on the form of objects. Second, we will consider the question of how to represent a segmented object. Third, we will discuss parameters to describe the form of objects.
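The two elementary morphological operators, erosion and dilation, are easy to sketch on binary images. The fragment below is a minimal illustration with a 3x3 structuring element, not the chapter's complete treatment.

```python
import numpy as np

def _shifts(binary):
    """All nine 3x3-neighborhood translates of a zero-padded image."""
    p = np.pad(binary, 1, mode='constant')
    return [p[1 + dr:p.shape[0] - 1 + dr, 1 + dc:p.shape[1] - 1 + dc]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)]

def erode(binary):
    """A pixel stays one only if its whole 3x3 neighborhood is one:
    objects shrink by one pixel along their boundary."""
    return np.min(_shifts(binary), axis=0)

def dilate(binary):
    """A pixel becomes one if any 3x3 neighbor is one: objects grow
    by one pixel along their boundary (dual to erosion)."""
    return np.max(_shifts(binary), axis=0)

obj = np.zeros((9, 9), dtype=np.uint8)
obj[2:7, 2:7] = 1  # a 5x5 square object
print(erode(obj).sum(), dilate(obj).sum())  # 9 (3x3 core), 49 (7x7)
```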
Bernd Jähne
12. Classification
Abstract
When objects are detected with suitable operators and their shape is described (see chapter 11), image processing has reached its goal for some applications. For other applications, further tasks remain to be solved. In this introduction we explore several examples which illustrate how the image processing tasks depend on the questions we pose.
Bernd Jähne
13. Reconstruction from Projections
Abstract
In chapter 2 we discussed in detail how a discrete two-dimensional image is formed from a three-dimensional scene by an optical system. In this chapter we discuss the inverse process, the reconstruction of a three-dimensional scene from two-dimensional projections. Reconstruction from only one projection is an underdetermined inverse problem which generally has an infinite number of solutions. As an illustration, figure 13.1 shows the perspective projection of a bar onto an image plane. We obtain identical projections in the image plane whenever the endpoints of the bar lie on the same projection beams. Even if the bar is curved within the projection plane, we will still see a straight line in the image plane.
Bernd Jähne
14. Motion
Abstract
In this chapter we extend our considerations from single images to image sequences. We may compare this step with the transition from still photography to motion pictures. Only in image sequences can we recognize and analyze dynamic processes. Thus the analysis of image sequences opens up far-reaching possibilities in science and engineering. A few examples serve as illustration:
  • Flow visualization is an old tool in fluid dynamics, but for a long time it was used mainly for qualitative description, because manual quantitative evaluation was prohibitively laborious. Digital image sequence analysis allows area-extended velocity data to be extracted automatically. In section 2.2.8 we discussed an example of flow visualization by particle tracking. Some results are shown in plate 4.
  • Satellite image sequences of the sea surface temperature (see section 1.2.1 and plate 1) can be used to determine near-surface ocean currents [Wahl and Simpson, 1990].
  • In the industrial environment, motion sensors based on image sequence analysis are beginning to play an important role. Their usage covers a wide spectrum starting with remote velocity measurements in industrial processes [Massen et al., 1987] to the control of autonomous vehicles and robots [Dickmanns, 1987].
Bernd Jähne
15. Displacement Vectors
Abstract
In the last chapter we worked out the basic knowledge necessary for successful motion analysis. Depending on the motion model used, we either need to determine the displacement vectors (DV) at single points, or the displacement vector field (DVF) in order to compute the first-order spatial derivatives (rotation and deformation terms).
Bernd Jähne
16. Displacement Vector Fields
Abstract
So far we have discussed the problem of how displacement vectors (DV) can be determined at single points in the image. Now we turn to the question of how a continuous displacement vector field (DVF) can be estimated. The idea is to collect the sparse velocity information obtained with the local operations discussed in the last chapter and to compose it into a consistent picture of the motion in the observed scene.
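The sketch below illustrates the collection step in its crudest form, Gaussian-weighted averaging of sparse displacement vectors into a dense field. The chapter develops more principled smoothness constraints, so this is an illustration of the idea only.

```python
import numpy as np

def dense_dvf(points, vectors, shape, sigma=5.0):
    """Compose sparse displacement vectors into a dense field by
    Gaussian-weighted averaging, a crude stand-in for the smoothness
    constraints used in DVF estimation."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    num = np.zeros(shape + (2,))
    den = np.zeros(shape)
    for (py, px), v in zip(points, vectors):
        w = np.exp(-((ys - py) ** 2 + (xs - px) ** 2) / (2 * sigma ** 2))
        num += w[..., None] * np.asarray(v, dtype=float)
        den += w
    return num / den[..., None]

# Two sparse measurements; the field blends smoothly between them.
field = dense_dvf([(5, 5), (25, 25)], [(1.0, 0.0), (0.0, 1.0)], (32, 32))
print(field[5, 5], field[25, 25])  # close to (1, 0) and (0, 1)
```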
Bernd Jähne
17. Space-Time Images
Abstract
So far, we have analyzed motion from only two consecutive images of a sequence, without considering the whole sequence. This stemmed from a limited capacity to handle image sequence data. Nowadays, video and computer hardware can record, store, and evaluate long image sequences (see section 1.2.2 and appendix B). It is much more important, however, to recognize that there is no reason in principle to limit image sequence processing to an image pair. On the contrary, it seems an unjustified restriction. That is certainly true for the concepts developed so far. In the differential approach (section 15.2), temporal derivatives play an essential role (see (15.5), (15.12), and (15.27)). With only two consecutive images of a sequence, we can approximate the temporal derivative only by the difference between the two images. This may be the simplest approximation, but not necessarily the best (see section 6.3.5).
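The point about derivative approximations can be made concrete with a small sketch (a synthetic one-dimensional sequence; all numbers are illustrative): two frames permit only the two-point difference, while three frames allow a symmetric difference that is accurate to second order.

```python
import numpy as np

def frame(t, n=32, speed=1.0):
    """A 1-D step edge translating one pixel per frame."""
    x = np.arange(n)
    return (x > 10 + speed * t).astype(float)

f0, f1, f2 = frame(0), frame(1), frame(2)

# Simplest temporal derivative: difference of two consecutive frames.
dt_two_point = f1 - f0
# With three frames, a symmetric (central) difference is available,
# which is accurate to second order instead of first.
dt_central = (f2 - f0) / 2.0

print(dt_two_point.nonzero()[0])  # the edge moved across pixel 11
print(dt_central.nonzero()[0])    # support spans pixels 11 and 12
```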
Bernd Jähne
Backmatter
Metadata
Title
Digital Image Processing
Author
Dr. Bernd Jähne
Copyright year
1995
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-662-03174-2
Print ISBN
978-3-540-59298-3
DOI
https://doi.org/10.1007/978-3-662-03174-2