
2001 | Book

Fault Detection and Diagnosis in Industrial Systems

Authors: Leo H. Chiang, MS, Evan L. Russell, PhD, Richard D. Braatz, PhD

Publisher: Springer London

Book series: Advanced Textbooks in Control and Signal Processing


About this book

Early and accurate fault detection and diagnosis for modern chemical plants can minimise downtime, increase the safety of plant operations, and reduce manufacturing costs. The process monitoring techniques that have been most effective in practice are based on models constructed almost entirely from process data.
The goal of the book is to present the theoretical background and practical techniques for data-driven process monitoring. The process monitoring techniques presented include: data-driven methods (principal component analysis, Fisher discriminant analysis, partial least squares, and canonical variate analysis); analytical methods (parameter estimation, observer-based methods, and parity relations); and knowledge-based methods (causal analysis, expert systems, and pattern recognition).
The text demonstrates the application of all of the data-driven process monitoring techniques to the Tennessee Eastman plant simulator, illustrating the strengths and weaknesses of each approach in detail and helping the reader select the right method for a given application. The book includes plant-simulator homework problems in which students apply the process monitoring techniques to a non-trivial simulated process and compare their performance with that obtained in the case studies in the text. A number of additional homework problems encourage the reader to implement the techniques and obtain a deeper understanding of them. The reader will gain a background in data-driven techniques for fault detection and diagnosis, including the ability to implement the techniques and to select the right technique for a particular application.

Table of Contents

Frontmatter

Introduction

Frontmatter
1. Introduction
Abstract
In the process and manufacturing industries, there has been a large push to produce higher quality products, to reduce product rejection rates, and to satisfy increasingly stringent safety and environmental regulations. Process operations that were at one time considered acceptable are no longer adequate. To meet the higher standards, modern industrial processes contain a large number of variables operating under closed-loop control. The standard process controllers (PID controllers, model predictive controllers, etc.) are designed to maintain satisfactory operations by compensating for the effects of disturbances and changes occurring in the process. While these controllers can compensate for many types of disturbances, there are changes in the process which the controllers cannot handle adequately. These changes are called faults. More precisely, a fault is defined as an unpermitted deviation of at least one characteristic property or variable of the system [140].
Leo H. Chiang, Evan L. Russell, Richard D. Braatz

Background

Frontmatter
2. Multivariate Statistics
Abstract
The effectiveness of the data-driven measures depends on the characterization of the process data variations. There are two types of variations for process data: common cause and special cause [245]. The common cause variations are those due entirely to random noise (e.g., associated with sensor readings), whereas special cause variations account for all the data variations not attributed to common cause. Standard process control strategies may be able to remove most of the special cause variations, but these strategies are unable to remove the common cause variations, which are inherent to process data. Since variations in the process data are inevitable, statistical theory plays a large role in most process monitoring schemes.
Leo H. Chiang, Evan L. Russell, Richard D. Braatz
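To make the common-cause versus special-cause distinction concrete, the following is a minimal Shewhart-style sketch (not code from the book; the simulated sensor readings, the 3-sigma limits, and the bias fault are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# In-control data: common-cause variation only (random sensor noise).
normal = rng.normal(loc=50.0, scale=2.0, size=500)

# Shewhart-style control limits estimated from the in-control data.
mean, std = normal.mean(), normal.std(ddof=1)
ucl, lcl = mean + 3.0 * std, mean - 3.0 * std

def out_of_control(x):
    """Flag observations outside the 3-sigma limits (special cause)."""
    return (x > ucl) | (x < lcl)

# A hypothetical sensor bias fault shifts the mean by 5 standard deviations.
faulty = rng.normal(loc=50.0 + 5 * 2.0, scale=2.0, size=100)

print(out_of_control(normal).mean())  # fraction flagged under normal operation
print(out_of_control(faulty).mean())  # fraction flagged under the fault
```

Common-cause variation keeps nearly all in-control points inside the limits, while the special-cause shift pushes most faulty points outside them.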
3. Pattern Classification
Abstract
Today’s processes are heavily instrumented, with a large amount of data collected on-line and stored in computer databases. Much of the data are usually collected during out-of-control operations. When the data collected during the out-of-control operations have been previously diagnosed, the data can be categorized into separate classes where each class pertains to a particular fault. When the data have not been previously diagnosed, cluster analysis may aid the diagnoses of the operations during which the data were collected [299], and the data can be categorized into separate classes accordingly. If hyperplanes can separate the data in the classes as shown in Figure 3.1, these separating planes can define the boundaries for each of the fault regions. Once a fault is detected using on-line data observations, the fault can be diagnosed by determining the fault region in which the observations are located. Assuming the detected fault is represented in the database, the fault can be properly diagnosed in this manner.
Leo H. Chiang, Evan L. Russell, Richard D. Braatz
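As a toy illustration of diagnosing a detected fault by locating the fault region that contains the new observations (not from the book; the two-variable fault clusters are hypothetical), a nearest-class-centroid rule, which yields linear (hyperplane) boundaries for equal-covariance Gaussian classes, can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(1)

# Historical data previously diagnosed into normal operation and two faults.
# Each class is a cluster of observations of two process variables.
classes = {
    "normal":  rng.normal([0, 0], 0.5, size=(200, 2)),
    "fault_1": rng.normal([4, 0], 0.5, size=(200, 2)),
    "fault_2": rng.normal([0, 4], 0.5, size=(200, 2)),
}
centroids = {name: x.mean(axis=0) for name, x in classes.items()}

def diagnose(obs):
    """Assign a new observation to the nearest class centroid."""
    return min(centroids, key=lambda n: np.linalg.norm(obs - centroids[n]))

print(diagnose(np.array([3.8, 0.2])))
```

Provided the detected fault is represented in the historical database, the observation falls inside the corresponding fault region and is diagnosed accordingly.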

Data-driven Methods

Frontmatter
4. Principal Component Analysis
Abstract
By projecting the data into a lower-dimensional space that accurately characterizes the state of the process, dimensionality reduction techniques can greatly simplify and improve process monitoring procedures. Principal component analysis (PCA) is such a dimensionality reduction technique. It produces a lower-dimensional representation in a way that preserves the correlation structure between the process variables, and is optimal in terms of capturing the variability in the data.
Leo H. Chiang, Evan L. Russell, Richard D. Braatz
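As a minimal sketch of PCA-based monitoring (not code from the book; the simulated 10-variable process and the choice of two retained components are illustrative assumptions), the lower-dimensional representation can be computed via the SVD and monitored with Hotelling's T² statistic:

```python
import numpy as np

rng = np.random.default_rng(2)

# Training data: 10 process variables driven by 2 latent sources (normal operation).
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 10))
X = latent @ mixing + 0.1 * rng.normal(size=(500, 10))

# Standardize, then extract the leading principal components via the SVD.
mu, sigma = X.mean(0), X.std(0)
Xs = (X - mu) / sigma
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
a = 2                               # number of retained components
P = Vt[:a].T                        # loading matrix
lam = (S[:a] ** 2) / (len(X) - 1)   # variances of the scores

def t2(x):
    """Hotelling's T^2 statistic in the reduced space."""
    t = P.T @ ((x - mu) / sigma)
    return float(t @ (t / lam))

print(t2(X[0]))                     # small for a typical training observation
print(t2(mu + 10 * sigma * P[:, 0]))  # large for a shifted (faulty) observation
```

A control limit on T² (e.g., from an F-distribution) would then flag observations whose variation within the reduced space is abnormally large.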
5. Fisher Discriminant Analysis
Abstract
In the pattern classification approach to fault diagnosis outlined in Chapter 3, it was described how the dimensionality reduction of the feature extraction step can be a key factor in reducing the misclassification rate when a pattern classification system is applied to new data (data independent of the training set). The dimensionality reduction is especially important when the dimensionality of the observation space is large while the numbers of observations in the classes are relatively small. A PCA approach to dimensionality reduction was discussed in the previous chapter. Although PCA has certain optimality properties in terms of fault detection, it is not as well-suited for fault diagnosis because it does not take into account the information between the classes when determining the lower-dimensional representation. Fisher discriminant analysis (FDA), a dimensionality reduction technique that has been extensively studied in the pattern classification literature, takes into account the information between the classes and has advantages over PCA for fault diagnosis [46, 277].
Leo H. Chiang, Evan L. Russell, Richard D. Braatz
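A minimal numpy sketch of FDA for fault diagnosis (not from the book; the three simulated fault classes are hypothetical) computes the within-class and between-class scatter matrices and takes the leading generalized eigenvectors as discriminant directions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Training data from three previously diagnosed fault classes (3 variables).
means = np.array([[0, 0, 0], [5, 0, 0], [0, 5, 0]], dtype=float)
X = np.vstack([rng.normal(m, 1.0, size=(100, 3)) for m in means])
y = np.repeat([0, 1, 2], 100)

# Within-class (Sw) and between-class (Sb) scatter matrices.
overall = X.mean(0)
Sw = sum(np.cov(X[y == c].T) * (sum(y == c) - 1) for c in range(3))
Sb = sum(sum(y == c) * np.outer(X[y == c].mean(0) - overall,
                                X[y == c].mean(0) - overall) for c in range(3))

# FDA directions: eigenvectors of Sw^{-1} Sb with the largest eigenvalues;
# with 3 classes there are at most 2 informative directions.
evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
W = evecs.real[:, np.argsort(evals.real)[::-1][:2]]

def diagnose(x):
    """Classify by the nearest class mean in the discriminant space."""
    z = (x - overall) @ W
    centers = [(X[y == c].mean(0) - overall) @ W for c in range(3)]
    return int(np.argmin([np.linalg.norm(z - m) for m in centers]))

print(diagnose(np.array([4.8, 0.3, -0.1])))
```

Unlike PCA, the projection here is chosen to separate the classes, not merely to capture overall variance.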
6. Partial Least Squares
Abstract
Partial least squares (PLS), also known as projection to latent structures, is a dimensionality reduction technique for maximizing the covariance between the predictor (independent) matrix X and the predicted (dependent) matrix Y for each component of the reduced space [98, 350]. A popular application of PLS is to select the matrix Y to contain only product quality data which can even include off-line measurement data, and the matrix X to contain all other process variables [207]. Such inferential models (also known as soft sensors) can be used for the on-line prediction of the product quality data [215, 222, 223], for incorporation into process control algorithms [158, 259, 260], as well as for process monitoring [207, 259, 260]. Discriminant PLS selects the matrix X to contain all process variables and selects the Y matrix to focus PLS on the task of fault diagnosis [46].
Leo H. Chiang, Evan L. Russell, Richard D. Braatz
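As an illustration of the soft-sensor idea (not from the book; the simulated process variables, the single quality variable, and the two-component choice are assumptions), a minimal PLS1 implementation via the NIPALS algorithm looks like:

```python
import numpy as np

rng = np.random.default_rng(4)

# X: process variables; y: a product quality variable measured off-line.
X = rng.normal(size=(300, 6))
true_w = np.array([1.0, -1.0, 0.5, 0.0, 0.0, 0.0])  # hypothetical relation
y = X @ true_w + 0.1 * rng.normal(size=300)

def pls1(X, y, n_components):
    """Minimal PLS1 (NIPALS) for a single quality variable y."""
    Xk, yk = X - X.mean(0), y - y.mean()
    comps = []
    for _ in range(n_components):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)      # weight: direction of maximum covariance
        t = Xk @ w                  # scores
        p = Xk.T @ t / (t @ t)      # X loadings
        q = yk @ t / (t @ t)        # inner regression coefficient
        Xk = Xk - np.outer(t, p)    # deflate X and y
        yk = yk - q * t
        comps.append((w, p, q))
    return X.mean(0), y.mean(), comps

def predict(model, Xnew):
    """Apply the fitted components sequentially to predict y for new data."""
    mx, my, comps = model
    Xk, yhat = Xnew - mx, np.full(len(Xnew), my)
    for w, p, q in comps:
        t = Xk @ w
        yhat = yhat + q * t
        Xk = Xk - np.outer(t, p)
    return yhat

model = pls1(X, y, n_components=2)
print(float(np.sqrt(np.mean((y - predict(model, X)) ** 2))))  # training RMSE
```

In an on-line setting, `predict` would supply quality estimates between the infrequent off-line laboratory measurements.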
7. Canonical Variate Analysis
Abstract
In Section 4.7, it was shown how DPCA can be applied to develop an autoregressive with exogenous input (ARX) model and to monitor the process using the ARX model. The weakness of this approach is the inflexibility of the ARX model for representing linear dynamical systems. For instance, a low-order autoregressive moving average (ARMA) model (or autoregressive moving average with exogenous input, ARMAX) with relatively few estimated parameters can accurately represent a high-order ARX model containing a large number of parameters [199].
Leo H. Chiang, Evan L. Russell, Richard D. Braatz
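To make the ARX baseline concrete (not code from the book; the first-order system, its coefficients, and the noise level are illustrative assumptions), an ARX model can be identified by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate a first-order SISO system: y[k] = 0.8 y[k-1] + 0.5 u[k-1] + noise.
n = 500
u = rng.normal(size=n)
y = np.zeros(n)
for k in range(1, n):
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1] + 0.05 * rng.normal()

# Fit an ARX(1,1) model by least squares: regress y[k] on [y[k-1], u[k-1]].
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta
print(a_hat, b_hat)
```

For this simple system the ARX fit is accurate; the point of the chapter is that state-space models identified via CVA can represent richer dynamics with far fewer parameters than a comparable high-order ARX model.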

Application

Frontmatter
8. Tennessee Eastman Process
Abstract
In Part IV the various data-driven process monitoring statistics are compared through application to a simulation of an industrial plant. The methods would ideally be illustrated on data collected during specific known faults from an actual industrial process, but this type of data is not publicly available for any large-scale industrial plant. Instead, many academics in process monitoring perform studies based on data collected from computer simulations of an industrial process. The process monitoring methods in this book are tested on the data collected from the process simulation for the Tennessee Eastman process (TEP). The TEP has been widely used by the process monitoring community as a source of data for comparing various approaches [16, 39, 40, 46, 99, 100, 113, 117, 183, 191, 270, 271, 272, 278, 279].
Leo H. Chiang, Evan L. Russell, Richard D. Braatz
9. Application Description
Abstract
Chapter 8 describes the process, the control system, and the types of faults for the Tennessee Eastman plant simulator. In Chapter 10, this simulator will be used to demonstrate and compare the various process monitoring methods presented in Part III. The process monitoring methods are tested on data generated by the TEP simulation code, operating under closed loop with the plant-wide control structure discussed in Section 8.6. The original simulation code allows 20 preprogrammed faults to be selectively introduced to the process [72]. We have added an additional fault simulation, which results in a total of 21 faults as shown in Table 8.4. In addition to the aforementioned aspects of the process, the process monitoring performance is dependent on the way in which the data are collected, such as the sampling interval and the size of the data sets.
Leo H. Chiang, Evan L. Russell, Richard D. Braatz
10. Results and Discussion
Abstract
In this chapter, the process monitoring methods in Part III are compared and contrasted through application to the Tennessee Eastman process (TEP). The proficiencies of the process monitoring statistics listed in Tables 9.2-9.4 are investigated for fault detection, identification, and diagnosis. The evaluation and comparison of the statistics are based on criteria that quantify the process monitoring performance. To illustrate the strengths and weaknesses of each statistic, Faults 1, 4, 5, and 11 are selected as specific case studies in Sections 10.2, 10.3, 10.4, and 10.5, respectively. Sections 10.6, 10.7, and 10.8 present and apply the quantitative criteria for evaluating the fault detection, identification, and diagnosis statistics, respectively. The overall results of the statistics are evaluated and compared. Results corresponding to the case studies are highlighted in boldface in Tables 10.6 to 10.20.
Leo H. Chiang, Evan L. Russell, Richard D. Braatz

Analytical and Knowledge-based Methods

Frontmatter
11. Analytical Methods
Abstract
As discussed in Section 1.2, process monitoring measures can be characterized as being data-driven, analytical, or knowledge-based. Part III focused mostly on the data-driven methods, which include control charts (Shewhart, CUSUM, and EWMA charts) and dimensionality reduction techniques (PCA, PLS, FDA, and CVA). A well-trained engineer should also have some familiarity with the analytical and knowledge-based approaches since they have advantages for some process monitoring problems. Also, many measures can be associated with more than one approach. For example, the CVA method, while being entirely data-driven, can also be characterized as an analytical method since a state-space model can be constructed from the Kalman states (see Chapter 7). Other measures at the intersection of more than one approach are discussed in Chapter 12.
Leo H. Chiang, Evan L. Russell, Richard D. Braatz
12. Knowledge-based Methods
Abstract
As discussed in Chapter 11, the analytical approach requires a detailed quantitative mathematical model in order to be effective. For large-scale systems, such information may not be available or may be too costly and time-consuming to obtain. An alternative method for process monitoring is to use knowledge-based methods such as causal analysis, expert systems, and pattern recognition. These techniques are based on qualitative models, which can be obtained through causal modeling of the system, expert knowledge, a detailed description of the system, or fault-symptom examples. Causal analysis techniques are based on the causal modeling of fault-symptom relationships. Qualitative and semi-quantitative relationships in these causal models can be obtained without using first principles. Causal analysis techniques, including signed directed graphs and symptom trees, are primarily used for diagnosing faults. These techniques are described in Section 12.2.
Leo H. Chiang, Evan L. Russell, Richard D. Braatz
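To illustrate the signed directed graph idea (a sketch only, not from the book; the four process variables and their cause-effect signs are a hypothetical example), qualitative fault propagation and diagnosis can be written as:

```python
# Signed directed graph (SDG): nodes are process variables, and signed edges
# encode qualitative cause-effect relations. A fault is diagnosed by finding
# root deviations whose propagated signs explain the observed symptoms.
edges = {
    ("valve_position", "flow"): +1,   # valve opens -> flow increases
    ("flow", "level"): +1,            # more inflow -> level rises
    ("flow", "pressure_drop"): +1,
    ("cooling_duty", "temperature"): -1,
}

def propagate(root, sign):
    """Propagate a qualitative deviation (+1 or -1) through the graph."""
    pattern = {root: sign}
    frontier = [root]
    while frontier:
        src = frontier.pop()
        for (a, b), s in edges.items():
            if a == src and b not in pattern:
                pattern[b] = pattern[a] * s
                frontier.append(b)
    return pattern

def diagnose(symptoms):
    """Return candidate root causes whose predicted pattern matches all symptoms."""
    roots = {a for a, _ in edges}
    return [(r, s) for r in roots for s in (+1, -1)
            if all(propagate(r, s).get(v) == d for v, d in symptoms.items())]

print(diagnose({"flow": +1, "level": +1, "pressure_drop": +1}))
```

Note that a high flow reading and an opened valve both explain these symptoms; the qualitative model narrows the candidates but cannot always single out one root cause.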
Backmatter
Metadata
Title
Fault Detection and Diagnosis in Industrial Systems
Authors
Leo H. Chiang, MS
Evan L. Russell, PhD
Richard D. Braatz, PhD
Copyright year
2001
Publisher
Springer London
Electronic ISBN
978-1-4471-0347-9
Print ISBN
978-1-85233-327-0
DOI
https://doi.org/10.1007/978-1-4471-0347-9