
About this Book

Possibly the greatest change confronting the practitioner and student of remote sensing in the period since the first edition of this text appeared in 1986 has been the enormous improvement in accessibility to image processing technology. Falling hardware and software costs, combined with an increase in functionality through the development of extremely versatile user interfaces, have meant that even the user unskilled in computing now has immediate and ready access to powerful and flexible means for digital image analysis and enhancement. An understanding, at the algorithmic level, of the various methods for image processing has therefore become even more important in the past few years to ensure that the full capability of digital image processing is utilised. This period has also been a busy one in relation to digital data supply. Several nations have become satellite data gatherers and providers, using both optical and microwave technology. Practitioners and researchers are now faced, therefore, with the need to be able to process imagery from several sensors, together with other forms of spatial data. This has been driven, to an extent, by developments in Geographic Information Systems (GIS) which, in turn, have led to the appearance of newer image processing procedures as adjuncts to more traditional approaches.

Table of Contents

Frontmatter

Chapter 1. Sources and Characteristics of Remote Sensing Image Data

Abstract
Remote sensing image data of the earth’s surface acquired from either aircraft or spacecraft platforms is readily available in digital format; spatially the data is composed of discrete picture elements, or pixels, and radiometrically it is quantised into discrete brightness levels. Even data that is not recorded in digital form initially can be converted into discrete data by use of digitising equipment such as scanning microdensitometers.
John A. Richards
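The radiometric quantisation described in this abstract can be illustrated in a few lines: continuous radiance measurements are mapped onto a fixed number of discrete brightness levels. A minimal sketch, in which the radiance values and the assumed [0, 1] dynamic range are invented for the example, not taken from the text:

```python
import numpy as np

# Hypothetical continuous radiance values (arbitrary units, assumed in [0, 1]).
radiance = np.array([[0.02, 0.45],
                     [0.78, 0.99]])

# Radiometric quantisation: map the continuous range onto 256 discrete
# brightness levels (8 bits), as described for digital image data.
levels = 256
brightness = np.clip(np.round(radiance * (levels - 1)), 0, levels - 1).astype(np.uint8)
```

Spatial quantisation is implicit in the array itself: each element is one pixel.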

Chapter 2. Error Correction and Registration of Image Data

Abstract
When image data is recorded by sensors on satellites and aircraft it can contain errors in geometry and in the measured brightness values of the pixels. The latter are referred to as radiometric errors and can result from the instrumentation used to record the data and from the effect of the atmosphere. Image geometry errors can arise in many ways. The relative motions of a satellite, its scanners and the earth, for example, can lead to errors of a skewing nature in an image product. Non-idealities in the sensors themselves, the curvature of the earth and uncontrolled variations in the position and attitude of the remote sensing platform can all lead to geometric errors of varying degrees of severity.
John A. Richards
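One standard remedy for geometric errors of this kind is to fit a low-order polynomial mapping between distorted image coordinates and map coordinates using ground control points. The sketch below fits a first-order (affine) mapping by least squares; the control point coordinates are invented purely for illustration.

```python
import numpy as np

# Illustrative ground control points (made up for this sketch):
# distorted image coordinates (col, row) and corresponding map coordinates (x, y).
img = np.array([[10, 10], [200, 15], [20, 180], [210, 190]], dtype=float)
map_xy = np.array([[100.0, 500.0], [480.0, 505.0], [118.0, 160.0], [500.0, 175.0]])

# Least-squares fit of a first-order mapping x = a0 + a1*c + a2*r (and
# similarly for y), a common polynomial model for geometric correction.
A = np.column_stack([np.ones(len(img)), img[:, 0], img[:, 1]])
coeffs, *_ = np.linalg.lstsq(A, map_xy, rcond=None)

# Residuals at the control points indicate how well the model fits.
residuals = map_xy - A @ coeffs
rms_error = float(np.sqrt((residuals ** 2).mean()))
```

In practice the fitted mapping is then used to resample the image onto the map grid.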

Chapter 3. The Interpretation of Digital Image Data

Abstract
When image data is available in digital form, spatially quantised into pixels and radiometrically quantised into discrete brightness levels, there are two approaches that may be adopted in endeavouring to extract information. One involves the use of a computer to examine each pixel in the image individually with a view to making judgements about pixels specifically based upon their attributes. This is referred to as quantitative analysis since pixels with like attributes are often counted to give area estimates. Means for doing this are described in Sect. 3.4. The other approach involves a human analyst/interpreter extracting information by visual inspection of an image composed from the image data. In this he or she notes generally large scale features and is often unaware of the spatial and radiometric digitisations of the data. This is referred to as photointerpretation or sometimes image interpretation; its success depends upon the analyst exploiting effectively the spatial, spectral and temporal elements present in the composed image product. Information spatially, for example, is present in the qualities of shape, size, orientation and texture. Roads, coastlines and river systems, fracture patterns, and lineaments generally, are usually readily identified by their spatial disposition. Temporal data, such as the change in a particular object or cover type in an image from one date to another can often be used by the photointerpreter as, for example, in discriminating deciduous or ephemeral vegetation from perennial types. Spectral clues are utilised in photointerpretation based upon the analyst’s foreknowledge of, and experience with, the spectral reflectance characteristics of typical ground cover types, and how those characteristics are sampled by the sensor on the satellite or aircraft used to acquire the image data.
John A. Richards
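The pixel-counting side of quantitative analysis can be sketched directly: given a labelled (classified) image, class areas follow from pixel counts. The class labels and the nominal 30 m pixel size below are assumptions made for the example.

```python
import numpy as np

# A hypothetical thematic map: each pixel carries a class label
# (0 = water, 1 = crop, 2 = forest in this made-up example).
labels = np.array([[0, 0, 1],
                   [1, 1, 2],
                   [2, 2, 2]])

# Quantitative analysis: count pixels with like labels and convert the
# counts to area estimates, assuming a nominal 30 m x 30 m pixel.
pixel_area_m2 = 30 * 30
classes, counts = np.unique(labels, return_counts=True)
area_ha = {int(c): n * pixel_area_m2 / 10_000 for c, n in zip(classes, counts)}
```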

Chapter 4. Radiometric Enhancement Techniques

Abstract
Image analysis by photointerpretation is often facilitated when the radiometric nature of the image is enhanced to improve its visual impact. Specific differences in vegetation and soil types, for example, may be brought out by increasing the contrast of an image. In a similar manner subtle differences in brightness value can be highlighted either by contrast modification or by assigning quite different colours to those levels. The latter method is known as colour density slicing.
John A. Richards
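Both operations mentioned here are simple point transformations of brightness value. The sketch below applies a linear contrast stretch and then a three-level density slice; the sample image and the slice boundaries are made up for illustration.

```python
import numpy as np

img = np.array([[60, 70],
                [80, 90]], dtype=np.uint8)  # a low-contrast sample image

# Linear contrast stretch: map the image's own [min, max] brightness range
# onto the full 0-255 display range.
lo, hi = int(img.min()), int(img.max())
stretched = np.round((img.astype(float) - lo) / (hi - lo) * 255).astype(np.uint8)

# Colour density slicing: assign a colour index to each range of levels
# (here three slices, with arbitrary boundaries at 85 and 170).
slices = np.digitize(stretched, bins=[85, 170])
```

Each index in `slices` would then be displayed as a distinct colour.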

Chapter 5. Geometric Enhancement Using Image Domain Techniques

Abstract
This chapter presents methods by which the geometric detail in an image may be modified and enhanced. The specific techniques covered are applied to the image data directly and could be called image domain techniques. These are alternatives to procedures used in the spatial frequency domain which require Fourier transformation of the image beforehand. Those are treated in Chap. 7.
John A. Richards
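The basic image domain operation is to move a small template (kernel) of weights over the image and replace each pixel by the weighted sum of its neighbourhood. A minimal sketch, using a 3x3 mean (smoothing) template on an invented image:

```python
import numpy as np

def template_filter(image, template):
    """Slide a small square template over the image and replace each pixel
    by the weighted sum of its neighbourhood (edge pixels are replicated)."""
    n = template.shape[0] // 2
    padded = np.pad(image, n, mode="edge")
    out = np.empty_like(image, dtype=float)
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            out[r, c] = (padded[r:r + 2 * n + 1, c:c + 2 * n + 1] * template).sum()
    return out

# A 3x3 mean (smoothing) template.
mean_template = np.full((3, 3), 1 / 9)
img = np.array([[0, 0, 0],
                [0, 9, 0],
                [0, 0, 0]], dtype=float)
smoothed = template_filter(img, mean_template)
```

Sharpening and edge-detection templates differ only in their weights.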

Chapter 6. Multispectral Transformations of Image Data

Abstract
The multispectral or vector character of most remote sensing image data renders it amenable to spectral transformations that generate new sets of image components or bands. These components then represent an alternative description of the data, in which the new components of a pixel vector are related to its old brightness values in the original set of spectral bands via a linear operation. The transformed image may make evident features not discernible in the original data or alternatively it might be possible to preserve the essential information content of the image (for a given application) with a reduced number of the transformed dimensions. The last point has significance for displaying data in the three dimensions available on a colour monitor or in colour hardcopy, and for transmission and storage of data.
John A. Richards
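A well-known example of such a linear transformation is the principal components transformation, built from the eigenvectors of the band-to-band covariance matrix. The sketch below applies it to synthetic two-band pixel vectors; the data is randomly generated for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical two-band image flattened to pixel vectors (rows = pixels),
# with correlated bands produced by a fixed mixing matrix.
pixels = rng.normal(size=(500, 2)) @ np.array([[2.0, 1.0], [0.5, 0.3]])

# Principal components transformation: a linear operation y = G (x - m)
# where G is built from the eigenvectors of the covariance matrix.
mean = pixels.mean(axis=0)
cov = np.cov(pixels - mean, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]        # largest variance first
G = eigvecs[:, order].T
components = (pixels - mean) @ G.T

# The new components are uncorrelated, with most variance in the first,
# which is what makes dimensionality reduction possible.
new_cov = np.cov(components, rowvar=False)
```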

Chapter 7. Fourier Transformation of Image Data

Abstract
Many of the geometric enhancement techniques used with remote sensing image data can be carried out using the simple template-based techniques of Chap. 5. More flexibility is offered, however, if procedures are implemented in the so-called spatial frequency domain by means of the Fourier transformation. As a simple illustration, filters can be designed to extract periodic noise that practical templates cannot remove. As demonstrated in Sect. 5.4 the computational cost of using Fourier transformation for geometric operations is high by comparison with the template methods usually employed. However, with the computational capacity of modern workstations, and the flexibility available in Fourier transform processing, this approach is one that should not be ignored.
John A. Richards
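The periodic-noise example can be sketched directly: transform the image, null the frequency bins occupied by the noise (a notch filter), and invert the transform. The synthetic striped image below is an assumption made for the illustration.

```python
import numpy as np

# A synthetic image: uniform background plus periodic (sinusoidal) stripes.
rows, cols = 64, 64
x = np.arange(cols)
stripes = 20 * np.sin(2 * np.pi * 8 * x / cols)     # 8 cycles across the image
noisy = np.full((rows, cols), 100.0) + stripes      # broadcast along rows

# Spatial frequency domain filtering: Fourier transform, null the bins
# holding the stripe frequency (and its conjugate), then invert.
F = np.fft.fft2(noisy)
F[0, 8] = 0.0
F[0, cols - 8] = 0.0
restored = np.real(np.fft.ifft2(F))
```

A template of practical size cannot achieve so sharp a notch in frequency.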

Chapter 8. Supervised Classification Techniques

Abstract
The principal purpose of this chapter is to present the algorithms used regularly for the supervised classification of single-sensor remote sensing image data. These are collected in Part I. When data from a variety of sensors or sources (such as is found in the integrated spatial database of a Geographic Information System) requires analysis, or when the spatial resolution of a sensor is sufficiently high to warrant attention being paid to neighbouring pixels when performing a classification, more sophisticated analysis tools may be required. A range of these is presented in Part II, along with a treatment of the neural network method for image analysis. These techniques are conceptually more difficult than the standard procedures and have been grouped separately for that reason. It is suggested that only Part I be covered on a first reading of the material of this book; Part II can be left safely until the need arises without affecting an understanding of the remaining chapters.
John A. Richards
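The workhorse among the Part I algorithms is Gaussian maximum likelihood classification, in which each class is modelled by a multivariate normal distribution estimated from training pixels. A minimal two-class, two-band sketch with synthetic training data (the class names and statistics are invented):

```python
import numpy as np

def train(samples):
    """Estimate the mean vector and covariance matrix of one spectral class
    from training pixels (rows = pixels, columns = bands)."""
    return samples.mean(axis=0), np.cov(samples, rowvar=False)

def discriminant(x, m, C):
    """Gaussian maximum likelihood discriminant (equal priors assumed):
       g(x) = -ln|C| - (x - m)^T C^{-1} (x - m)."""
    d = x - m
    return -np.log(np.linalg.det(C)) - d @ np.linalg.inv(C) @ d

rng = np.random.default_rng(1)
water = rng.normal([20.0, 10.0], 2.0, size=(100, 2))  # made-up training classes
soil = rng.normal([60.0, 70.0], 2.0, size=(100, 2))
stats = [train(water), train(soil)]

# A pixel is assigned to the class with the largest discriminant value.
pixel = np.array([58.0, 69.0])
label = int(np.argmax([discriminant(pixel, m, C) for m, C in stats]))
```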

Chapter 9. Clustering and Unsupervised Classification

Abstract
The successful application of maximum likelihood classification is dependent upon having delineated correctly the spectral classes in the image data of interest. This is necessary since each class is to be modelled by a normal probability distribution, as discussed in Chap. 8. If a class happens to be multimodal, and this is not resolved, then clearly the modelling cannot be very effective.
John A. Richards
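Spectral classes of this kind are usually delineated by an iterative migrating-means clustering procedure. The sketch below implements a minimal version (essentially k-means) and applies it to a synthetic bimodal data set; the modes and seed values are assumptions made for the example.

```python
import numpy as np

def cluster(pixels, k, iterations=20, seed=0):
    """Minimal migrating-means (k-means) clustering of pixel vectors."""
    rng = np.random.default_rng(seed)
    centres = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iterations):
        # Assign each pixel to its nearest cluster centre.
        d = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Migrate each centre to the mean of its assigned pixels.
        centres = np.array([pixels[labels == j].mean(axis=0)
                            if np.any(labels == j) else centres[j]
                            for j in range(k)])
    return centres, labels

rng = np.random.default_rng(2)
# A bimodal "class": two spectral modes that a single Gaussian would model poorly.
mode_a = rng.normal([10.0, 10.0], 1.0, size=(50, 2))
mode_b = rng.normal([40.0, 40.0], 1.0, size=(50, 2))
centres, labels = cluster(np.vstack([mode_a, mode_b]), k=2)
```

Resolving the two modes into separate spectral classes lets each be modelled by its own normal distribution.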

Chapter 10. Feature Reduction

Abstract
Classification cost increases with the number of features used to describe pixel vectors in multispectral space, i.e. with the number of spectral bands associated with a pixel. For classifiers such as the parallelepiped and minimum distance procedures this is a linear increase with features; however for maximum likelihood classification, the procedure most often preferred, the cost increase with features is quadratic. It is therefore sensible economically to ensure that no more features than necessary are utilised when performing a classification. Features which do not aid discrimination, by contributing little to the separability of spectral classes, should be discarded since they represent a cost burden. Removal of the least effective features is referred to as feature selection, this being one form of feature reduction. The other is to transform the pixel vector into a new set of co-ordinates in which the features that can be removed are made more evident. Both procedures are considered in some detail in this chapter.
John A. Richards
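Feature selection can be illustrated with a deliberately simplified per-band separability measure, the normalised distance between class means; this stands in for the pairwise measures (such as divergence) treated in the chapter. The four-band classes below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
# Two hypothetical spectral classes measured in four bands; band 2 separates
# them strongly, band 0 barely at all (values invented for the example).
class_a = rng.normal([50, 50, 30, 50], [5, 5, 3, 5], size=(200, 4))
class_b = rng.normal([52, 58, 80, 55], [5, 5, 3, 5], size=(200, 4))

# A simple per-band separability measure: distance between the class means
# normalised by the spread of the classes in that band.
m_a, m_b = class_a.mean(axis=0), class_b.mean(axis=0)
s_a, s_b = class_a.std(axis=0), class_b.std(axis=0)
separability = np.abs(m_a - m_b) / (s_a + s_b)

# Feature selection: keep the two most discriminating bands, discard the rest.
selected = np.argsort(separability)[::-1][:2]
```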

Chapter 11. Image Classification Methodologies

Abstract
In principle, classification of multispectral image data should be straightforward. However to achieve results of acceptable accuracy care is required first in choosing the analytical tools to be used and then in applying them. In the following the classical analytical procedures of supervised and unsupervised classification are examined from an operational point of view, with their strengths and weaknesses highlighted. These approaches are often acceptable; however more often a judicious combination of the two will be necessary to attain optimal results. A hybrid supervised/unsupervised strategy is therefore also presented.
John A. Richards

Chapter 12. Knowledge-Based Image Analysis

Abstract
The image analysis procedures presented in the previous chapters were devised largely for handling image data from a single sensor. Thus, provided spectral classes have been appropriately identified, maximum likelihood classification, for example, can be expected to perform well on Landsat imagery or on multispectral data from other sources or systems such as SPOT and MOS-1. Similarly, provided speckle noise has been reduced to an acceptable level, simple minimum distance and parallelepiped labelling techniques could be used with radar data. The hyperspectral data sets generated by imaging spectrometers present particular challenges to image analysis because of the large data volumes; in principle however the numerical techniques developed in the preceding chapters can be applied to that data, although usually after a sensible degree of data reduction has been performed.
John A. Richards

Backmatter
