1998 | Book

Quality Improvement Through Statistical Methods

Editor: Bovas Abraham

Publisher: Birkhäuser Boston

Book Series: Statistics for Industry and Technology

About this book

This book is based on the papers presented at the International Conference 'Quality Improvement through Statistical Methods' in Cochin, India during December 28-31, 1996. The Conference was hosted by the Cochin University of Science and Technology, Cochin, India, and sponsored by the Institute for Improvement in Quality and Productivity (IIQP) at the University of Waterloo, Canada, the Statistics in Industry Committee of the International Statistical Institute (ISI) and by the Indian Statistical Institute. There has been an increased interest in Quality Improvement (QI) activities in many organizations during the last several years, since the airing of the NBC television program, "If Japan can ... why can't we?" Implementation of QI methods requires statistical thinking and the utilization of statistical tools, and thus there has been a renewed interest in statistical methods applicable to industry and technology. This revitalized enthusiasm has created worldwide discussions on Industrial Statistics Research and QI ideas at several international conferences in recent years. The purpose of this conference was to provide a forum for presenting and exchanging ideas in Statistical Methods and for enhancing the transfer of such technologies to quality improvement efforts in various sectors. It also provided an opportunity for interaction between industrial practitioners and academia. It was intended that the exchange of experiences and ideas would foster new international collaborations in research and other technology transfers.

Table of Contents

Frontmatter

Statistics and Quality

Frontmatter
1. Scientific Learning

It is argued that the domination of Statistics by Mathematics rather than by Science has greatly reduced the value and the status of the subject. The mathematical “theorem-proof paradigm” has supplanted the “iterative learning paradigm” of scientific method. This misunderstanding has affected university teaching, research, the granting of tenure to faculty and the distribution of grants by funding agencies. Possible ways in which some of these problems might be overcome and the role that computers can play in this reformation are discussed.

George E. P. Box
2. Industrial Statistics and Innovation

Industrial Statistics has been intimately linked with Quality and the Quality Revolution. There are very good historical reasons for this, but caution is indicated; there has recently been a strong backlash against the Quality movement, often because Quality concepts have been misunderstood and misapplied. For example, the measurement component of major Quality awards is usually the worst handled. Deep mastery of variation has failed to take root. The criteria for the Australian Quality Awards are analysed to provide a template for the creative application of Statistics to learning and to the creation, codification, sharing and deployment of knowledge. This process is intimately linked to innovation, which is increasingly seen as a crucial factor in industrial wealth creation. The current Industrial Statistics paradigm is critically examined and a program for the re-invention of Industrial Statistics is suggested. The program is based on linkages with product and process innovation, and with technology strategies.

R. L. Sandland
3. Understanding QS-9000 With Its Preventive and Statistical Focus

The automotive Quality System Requirements standard QS-9000 is often presented as “ISO 9000 plus specific automotive interpretations”. However, QS-9000 and ISO 9000 are fundamentally different in their approach and philosophy. While ISO 9000 defines quality system requirements to control product quality through inspection and control, QS-9000’s approach is to prevent poor quality first by focusing on the design and then by controlling the manufacturing processes that produce the product. Statistics play a very important role in the effective implementation of QS-9000. Data collection and analysis are vital in maximizing the benefits from the quality system. This includes the use of company level data which are to be used to manage and prioritize the business activities. It also includes the collection and analysis of data on customer satisfaction and benchmarks on competition. Manufacturing processes are to be managed and controlled through the extensive use of Statistical Process Control (SPC) data. Suppliers are required to demonstrate minimum process capability and stability requirements in addition to showing continuous improvement in those same characteristics. Measurement systems and process machinery are to be monitored and maintained by continuous monitoring and analysis of measurement data.

Bovas Abraham, G. Dennis Beecroft
4. A Conceptual Framework for Variation Reduction

The application of statistical tools has a long history of successful use at the operational level of both research and manufacturing organisations. The development of the Total Quality Management (TQM) concept brought with it a renewed emphasis on statistical thinking and its role in the overall management of organisations. Statistical thinking encompasses three important components: knowledge of systems; understanding variation; and use of data to guide decisions. Managers must strive to understand, acquire, and internalise an organisational system based on statistical thinking in order to transform their organisations. This paper provides a conceptual framework to understand and reduce variation in product quality through the use of system thinking and statistical methods.

Shams-ur Rahman
5. The Role of Academic Statisticians in Quality Improvement Strategies of Small and Medium Sized Companies

This paper will describe the relationships and interactions that have developed between academic statisticians in the Department of Statistics at the University of Manitoba and small to medium sized companies. The size of the local community and the limited resources available to many of these small companies mean that few knowledgeable commercial consultants are readily available. As a result, these companies have turned to the statisticians at the University of Manitoba for assistance. The paper will provide some details on the type and range of activities that the Department has undertaken. The benefits of these activities to the local industrial community as well as the benefits to the academic statisticians and their students are described.

John F. Brewster, Smiley W. Cheng, Brian D. Macpherson, Fred A. Spiring

Statistical Process Control

Frontmatter
6. Developing Optimal Regulation and Monitoring Strategies to Control a Continuous Petrochemical Process

The purpose of this paper is to discuss the development of optimal strategies for regulating and monitoring a continuous petrochemical process using the Algorithmic Statistical Process Control (ASPC) methodology. The Single-Input/Single-Output (SISO) case is discussed, using polymer viscosity as the controlled variable (output) and reactor temperature as the manipulated variable (input). Alternative regulation strategies are considered. Robustness properties and stability conditions of the controllers are discussed. The performance and the adequacy of the regulation schemes in a simulation of realistic assignable causes affecting the process are studied. In this context the benefits of implementing an integrated EPC/SPC system (ASPC system) are compared with those of using an EPC system alone.

A. J. Ferrer, C. Capilla, R. Romero, J. Martin
7. Capability Indices When Tolerances Are Asymmetric

Different process capability indices that have been suggested for asymmetric tolerances are discussed, and two new classes of appropriate indices are proposed. We study the properties of these new classes of indices. We provide, under the assumption of normality, explicit forms of the distributions of the estimated indices in these new classes. We suggest a decision rule to be used to determine whether the process can be considered capable. Using the results regarding the distributions, it is possible to calculate the probability that the process will be considered capable. Based on these results we suggest criteria for choosing an index from the class under investigation.

Kerstin Vännman
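
To illustrate the kind of calculation such indices involve, here is a minimal Python sketch comparing the familiar Cpk with a Cpm-style index when the target is not centred in the tolerance interval. The specification limits, target and data are hypothetical, and this is not the specific index classes proposed in the chapter.

```python
# Minimal sketch (not the index classes proposed in the chapter): compares the
# familiar Cpk with a Cpm-style index when the tolerance interval is asymmetric
# about the target.  LSL, USL, T and the sample are illustrative values only.
import numpy as np

rng = np.random.default_rng(1)
LSL, USL, T = 9.0, 12.0, 10.0                    # asymmetric: T is not the midpoint
x = rng.normal(loc=10.3, scale=0.35, size=200)   # hypothetical measurements

mu, sigma = x.mean(), x.std(ddof=1)

# Cpk ignores the target and only looks at the nearest specification limit.
cpk = min(USL - mu, mu - LSL) / (3 * sigma)

# A Cpm-style index also penalises deviation of the mean from the target T.
cpm = (USL - LSL) / (6 * np.sqrt(sigma**2 + (mu - T)**2))

print(f"mean={mu:.3f}  sd={sigma:.3f}  Cpk={cpk:.2f}  Cpm={cpm:.2f}")
```
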
8. Robustness of On-line Control Procedures

In this paper, the robustness of on-line control procedures for process adjustment using feedback is considered. The most commonly used models are random walk models and the integrated moving average (IMA) process of order one, in which the disturbances are assumed to be normally distributed. When the cost of inspection is zero, there is no sampling interval. For this case, the optimum value of the action limit, at which an adjustment needs to be made, is obtained for any disturbance that is distributed symmetrically around zero. Numerical values of the average adjustment interval and mean squared deviations are given for the symmetric uniform distribution and symmetric mixture normal distributions. However, when the cost of inspection is not zero and a sampling interval is present, optimum values of the control parameters, the sampling interval and the action limit, are given only for the symmetric random walk model. It is shown that when there is no sampling interval, the optimum action limit is robust to moderate departure from normality. However, when a sampling interval is present, the optimum values of the control parameters are not as robust. Since the random walk model is a special case of the IMA process, the same conclusion applies to this case as well.

M. S. Srivastava
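
The following Python sketch simulates the basic feedback scheme discussed here: a random-walk disturbance is observed at every period (the zero-inspection-cost case) and the process is adjusted back to target whenever the deviation exceeds an action limit. The action limits, noise scale and run length are illustrative assumptions, not values from the chapter.

```python
# Hedged sketch of a feedback adjustment rule on a random-walk disturbance:
# adjust the process back to target whenever the deviation exceeds an action
# limit L, then record the average adjustment interval (AAI) and the mean
# squared deviation (MSD).  All numerical settings are illustrative only.
import numpy as np

def simulate(action_limit, n=100_000, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    deviation, adjustments, sq_dev = 0.0, 0, 0.0
    for _ in range(n):
        deviation += rng.normal(0.0, sigma)   # random-walk disturbance
        sq_dev += deviation**2
        if abs(deviation) > action_limit:     # adjustment returns us to target
            deviation = 0.0
            adjustments += 1
    aai = n / max(adjustments, 1)             # average adjustment interval
    msd = sq_dev / n                          # mean squared deviation
    return aai, msd

for L in (1.0, 2.0, 3.0):
    aai, msd = simulate(L)
    print(f"action limit {L:.1f}: AAI ≈ {aai:6.1f}, MSD ≈ {msd:5.2f}")
```
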
9. An Application of Filtering to Statistical Process Control

There has been growing interest in the Kalman filter as an estimation technique in statistical process control. In cases where prior information about the process is available, procedures based on the ‘optimal’ [Godambe (1985)] smoother can be superior to classical procedures like Shewhart and CUSUM control charts. We also discuss the relationship among the EWMA, Kalman filtering and the ‘optimal’ smoother. This smoother and its applications are also illustrated through an example.

A. Thavaneswaran, B. D. Macpherson, B. Abraham
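
The EWMA/Kalman connection mentioned in this abstract can be illustrated with a small Python sketch: for a local level model, the steady-state Kalman filter updates exactly like an EWMA whose smoothing constant equals the Kalman gain. The variances and data below are illustrative assumptions; this is not the ‘optimal’ smoother of the chapter.

```python
# Sketch of the EWMA / Kalman relationship: for a local level model the
# steady-state Kalman filter recursion coincides with an EWMA.  The state and
# measurement noise variances (q, r) and the simulated data are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n, q, r = 200, 0.05, 1.0
level = np.cumsum(rng.normal(0, np.sqrt(q), n)) + 10.0
y = level + rng.normal(0, np.sqrt(r), n)        # observed process readings

# Kalman filter for the local level model.
m, P = y[0], 1.0
kalman = []
for obs in y:
    P = P + q                                   # predict
    K = P / (P + r)                             # gain
    m = m + K * (obs - m)                       # update
    P = (1 - K) * P
    kalman.append(m)

# EWMA with smoothing constant equal to the steady-state Kalman gain.
lam = K
ewma = [y[0]]
for obs in y[1:]:
    ewma.append(lam * obs + (1 - lam) * ewma[-1])

diff = np.max(np.abs(np.array(kalman)[-100:] - np.array(ewma)[-100:]))
print(f"steady-state gain ≈ {K:.3f}; max |Kalman - EWMA| over last 100 points = {diff:.4f}")
```
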
10. Statistical Process Control and Its Applications in Korean Industries

The principle of Statistical Process Control (SPC) and the construction of an SPC system are introduced in this paper. The goal of SPC activities is explained, and the steps for continuous improvement effort are described. Also the flow of an SPC system and a flow-chart of problem-solving in this system are sketched with figures and explained. Korean experience in industry for SPC is illustrated with a real case study.

Sung H. Park, Jae J. Kim
11. Multivariate Statistical Process Control of a Mineral Processing Industry

Multivariate Statistical Process Control procedures are becoming increasingly popular in process industries due to the need to monitor a large number of process variables simultaneously. Although extensions of classical univariate control charts such as Shewhart, CUSUM and EWMA to multivariate situations are possible, the more recently introduced statistical projection methods such as Principal Component Analysis (PCA) and Partial Least Squares (PLS) seem to be more suitable for dynamic processes with input-output relationships. These methods utilize not only the product quality data (Y), but also the process variable data (X). This paper gives an overview of the PCA and PLS methods and their use in monitoring the operating performance of a crusher used in a mineral processing plant.

Nihal Yatawara, Jeff Harrison
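
A minimal Python sketch of PCA-based monitoring in the spirit of this chapter: fit a principal component model to reference (in-control) process data, then track Hotelling's T² in the score space and the squared prediction error (SPE) for new observations. The data, the number of retained components and the disturbance are hypothetical; no control limits are derived here.

```python
# Minimal PCA monitoring sketch (illustrative data, not the crusher study):
# Hotelling's T^2 in the model plane plus the squared prediction error (SPE).
import numpy as np

rng = np.random.default_rng(3)
X_ref = rng.normal(size=(300, 6))
X_ref[:, 3] = 0.8 * X_ref[:, 0] + 0.2 * rng.normal(size=300)   # correlated vars

mu, sd = X_ref.mean(0), X_ref.std(0, ddof=1)
Z = (X_ref - mu) / sd
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
k = 2                                   # retained principal components
P = Vt[:k].T                            # loadings (6 x 2)
lam = (s[:k] ** 2) / (len(Z) - 1)       # score variances

def monitor(x_new):
    z = (x_new - mu) / sd
    t = z @ P                           # scores
    t2 = np.sum(t**2 / lam)             # Hotelling's T^2 in the model plane
    resid = z - t @ P.T
    spe = resid @ resid                 # squared prediction error
    return t2, spe

print("in-control point :", monitor(X_ref[0]))
print("disturbed point  :", monitor(X_ref[0] + np.array([0, 0, 0, 4.0, 0, 0])))
```
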
12. Developing Objective Strategies for Monitoring Multi Input/Single Output Chemical Process

Processes with multiple inputs and a single output are common in chemical industries. Such a process is monitored by taking samples at fixed intervals of time, and corrective action is taken whenever the output is found to deviate from the specifications. An objective criterion is required for deciding the sampling intervals and the quantum of adjustment needed to meet the target output. Also, if there is a large time lag associated with a change in the level of the output, it is desirable to take the corrective action within a small interval of time. The methodology is explained through a case study, in which it is developed using the mass balance of the process and statistical tools such as ANOVA, β-correction and regression.

B. K. Pal, V. Roshan Joseph
13. Properties of the Cpm Index for Seemingly Unstable Production Processes

In this paper the use of the Cpm index is extended to cover situations where the process mean, for example due to shift-to-shift effects, varies on or around the target value. Approximate lower confidence intervals for the Cpm index are derived for cases involving fixed and random factors, and a simple relation between Cpm and the minimum probability of conforming items is given. Finally, a sensitivity analysis is performed to examine the effect of a random process mean on the index and its lower confidence limit.

Olle Carlsson
14. Process Capability Indices for Contaminated Bivariate Normal Populations

In this paper we examine the robustness of bivariate generalizations (CP1, CP2) of the process capability index when sampling is from a bivariate contaminated normal distribution. The effect of the contaminated model is studied in terms of the expected values, variances and correlation coefficient of $(\widehat{CP}_1, \widehat{CP}_2)$. The behavior of the test of hypothesis suggested in an earlier paper [Kocherlakota and Kocherlakota (1991)] is also considered.

Subrahmaniam Kocherlakota, Kathleen Kocherlakota
15. Quality Improvements by Likelihood Ratio Methods for Surveillance

One way to improve quality is to control the process. Control makes it possible to detect deteriorations and deviations from targets as soon as possible after they have occurred. Control methods based on likelihood ratios are known to have several optimality properties. When the methods are used in practice, knowledge about several characteristics of a method is important for judging which action is appropriate at an alarm. Evaluations are made of the “Likelihood Ratio Method”, which utilizes an assumption on the intensity and has the Shiryaev optimality. The Shiryaev-Roberts and CUSUM methods, which combine the likelihood ratios in other ways, are also evaluated. A comparison is also made with the commonly used Shewhart method.

Marianne Frisén, Peter Wessman
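
For readers unfamiliar with the CUSUM scheme compared in this chapter, here is a minimal Python sketch of a one-sided CUSUM for detecting an upward shift in the mean of a normal process. The reference value k, decision interval h and simulated data are illustrative assumptions, not the chapter's evaluation settings.

```python
# Sketch of a one-sided CUSUM for an upward mean shift (illustrative values).
import numpy as np

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0.0, 1.0, 50),    # in control
                    rng.normal(1.0, 1.0, 50)])   # mean shifted by one sigma

k, h = 0.5, 4.0          # k = half the shift to detect, h = decision interval
c = 0.0
for i, obs in enumerate(x, start=1):
    c = max(0.0, c + obs - k)        # cumulative sum of (x - k), reset at zero
    if c > h:
        print(f"alarm at observation {i} (true change point: 51)")
        break
```
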
16. Properties of the Taguchi Capability Index for Markov Dependent Quality Characteristics

Most process capability indices have been constructed under the assumption that quality characteristics are measurements of independent, identically distributed normal variables. Confidence intervals for the Taguchi process capability index, commonly denoted Cpm, are derived in this paper when consecutive measurements are observations of dependent variables following a Markov process in discrete time.

Erik Wallgren

Design and Analysis of Experiments

Frontmatter
17. Fast Model Search for Designed Experiments with Complex Aliasing

In screening experiments, run size considerations can necessitate the use of designs with complex aliasing patterns. Such designs provide an opportunity to examine interactions and other higher order terms as possible predictors, as Hamada and Wu (1992) propose. The large number of model terms means that many models may describe the data well. Bayesian stochastic search methods that incorporate preferences for certain model structures (Chipman, Hamada, and Wu, 1997) are one effective method for identifying a variety of models. This paper introduces priors that allow the posterior to be simplified, speeding up the search and eliminating Monte Carlo error in the evaluation of model probabilities. Default choices of prior parameters are proposed. Several new plots and summaries are illustrated using simulated data in a Plackett-Burman 12-run layout.

Hugh A. Chipman
18. Optimal 12 Run Designs

In this paper we consider twelve-run designs with four factors, each at two levels. We present a specific design and show that it is optimal when one wishes to estimate all the main effects and a subset of two-factor interactions. The optimality criteria used are D-optimality and a form of E-optimality.

K. Vijayan, K. R. Shah
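
The D-criterion used in this chapter can be illustrated with a short Python sketch: for candidate 12-run, four-factor two-level designs, build the model matrix for the intercept, the four main effects and a chosen subset of two-factor interactions, and score each design by det(X'X). The interaction subset and the crude random search are illustrative assumptions; this is not the optimal design derived in the chapter.

```python
# Hedged illustration of the D-criterion for 12-run, 4-factor, 2-level designs.
import itertools
import numpy as np

full = np.array(list(itertools.product([-1, 1], repeat=4)))   # 16 candidate runs

def model_matrix(runs, interactions=((0, 1), (2, 3))):
    # intercept + 4 main effects + a chosen subset of two-factor interactions
    cols = [np.ones(len(runs))] + [runs[:, j] for j in range(4)]
    cols += [runs[:, a] * runs[:, b] for a, b in interactions]
    return np.column_stack(cols)

def d_criterion(runs):
    X = model_matrix(runs)
    return np.linalg.det(X.T @ X)

rng = np.random.default_rng(5)
best_d = -np.inf
for _ in range(5000):                  # crude random search over 12-run subsets
    idx = rng.choice(16, size=12, replace=True)
    d = d_criterion(full[idx])
    best_d = max(best_d, d)

print(f"best det(X'X) found by random search: {best_d:.3e}")
```
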
19. Robustness of D-optimal Experimental Designs for Mixture Studies

The principal objective of mixture experiments is to assess the influences of the proportions and amounts of mixture ingredients, along with the effects of relevant processing variables, on performance characteristics of a mixture. Experimental designs, called “mixture designs”, have been proposed to guide such investigations. This paper explores the effectiveness of some commonly used designs, including D-optimal designs, in providing predictions with relatively uniform variance from alternative model forms.

David W. Bacon, Rodney Lott
20. A Bivariate Plot Useful in Selecting a Robust Design

A sound engineering practice for improving quality and productivity is to design quality into products and processes. The ideas of G. Taguchi about parameter design were introduced in the US some ten years ago. Despite strong controversy about some of their aspects, they play a vital role in the concept of robustness in the design of industrial products and processes. In this paper, we present a new methodology for designing products and processes that are robust to variations in environmental or internal variables. First, a tentative model for the response as a function of the design and noise factors is assumed. This model is then estimated using a single design matrix, and the expected value and variance of the response are calculated over the space of the design factors. Finally, the best setting of the parameter values can be located in a newly developed bivariate plot in which the distance to the target is plotted against the variance of the response.

Albert Prat, Pere Grima
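
The idea behind the bivariate plot can be sketched in a few lines of Python: assume a fitted response model in a design factor and a noise factor, propagate the noise distribution through the model, and compare design settings by the pair (|E[y] − target|, Var[y]). The model, coefficients and target below are hypothetical, not taken from the paper.

```python
# Hedged sketch of computing the two coordinates of the bivariate plot
# (distance of the expected response to target, variance of the response)
# at several design settings.  The fitted model is a hypothetical example
# containing a design-by-noise interaction, which is what makes a robust
# setting possible.
import numpy as np

def response(x, z):
    return 50 + 4 * x + 2 * z - 3 * x * z    # hypothetical fitted model

target = 55.0
z = np.random.default_rng(6).normal(0.0, 1.0, 10_000)   # noise factor draws

for x in np.linspace(-1, 1, 5):
    y = response(x, z)
    print(f"x = {x:+.1f}: |E[y]-target| = {abs(y.mean() - target):5.2f},  Var[y] = {y.var():5.2f}")
```
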
21. Process Optimization Through Designed Experiments: Two Case Studies

This paper contains two case studies in process optimization using designed experiments. The first case study shows an application of Design of Experiments to reduce the defect level in a Frame Soldering Process and the second case study describes the reduction in plating thickness variation in an Electro-Plating Process.

A. R. Chowdhury, J. Rajesh, G. K. Prasad
22. Technological Aspects of TQM

A Beyond Analysis of Variance technique is introduced as an off-line technology for Total Quality Management. It takes into account the characteristics of factors involved in an experiment for a more satisfactory analysis of data. It also efficiently utilizes the intrinsic natural ordering appearing frequently in relation to time, temperature, dose and so on.

C. Hirotsu
23. On Robust Design for Multiple Quality Characteristics

In product and process design we very frequently have multiple quality characteristics. It is not easy to find an optimal design parameter setting when there are multiple quality characteristics, since there will be conflict among the selected levels of the design parameters for each individual quality characteristic. In this paper we propose a modified desirability function approach and devise a scheme which gives a systematic way of solving the multiple quality characteristic problem.

Jai Hyun Byun, Kwang-Jae Kim
24. Simultaneous Optimization of Multiple Responses Using a Weighted Desirability Function

The objective of multiresponse optimization is to determine conditions on the independent variables that lead to optimal or nearly optimal values of the response variables. Derringer and Suich (1980) extended Harrington’s (1965) procedure by introducing more general transformations of the responses into desirability functions. The core of the desirability approach is to condense a multivariate optimization into a univariate one. But because of the subjective nature of this approach, inexperience on the part of the user in assessing a product’s desirability value may lead to inaccurate results. To compensate for this defect, a weighted desirability function is introduced which takes into consideration the variances of the responses.

Sung H. Park, Jun O. Park
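
A minimal Python sketch of a weighted desirability calculation in the Derringer-Suich spirit: each response is mapped to a desirability in [0, 1] and the overall score is a weighted geometric mean. The targets, limits and weights are illustrative assumptions; the chapter's specific variance-based weighting is only hinted at in a comment.

```python
# Hedged sketch of a weighted desirability function (illustrative values only).
import numpy as np

def d_target(y, low, target, high, s=1.0, t=1.0):
    """Two-sided desirability: 1 at the target, 0 outside [low, high]."""
    if y < low or y > high:
        return 0.0
    if y <= target:
        return ((y - low) / (target - low)) ** s
    return ((high - y) / (high - target)) ** t

def weighted_desirability(ds, ws):
    ds, ws = np.asarray(ds, float), np.asarray(ws, float)
    ws = ws / ws.sum()                         # normalise the weights
    return float(np.prod(ds ** ws)) if np.all(ds > 0) else 0.0

# Two hypothetical responses; the weights could, for example, be chosen
# inversely proportional to the response variances, as the chapter suggests.
d1 = d_target(78.0, low=70, target=80, high=90)
d2 = d_target(3.1, low=2, target=3, high=5)
print(f"d1={d1:.2f}, d2={d2:.2f}, overall={weighted_desirability([d1, d2], [2.0, 1.0]):.2f}")
```
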
25. Evaluating Statistical Methods Practiced in Two Important Areas of Quality Improvement

This paper presents evaluations and comparisons of statistical methods practiced in factor screening and in analysis with missing data. In quality performance evaluation studies using factorial experiments, the normal probability plot is routinely used for screening out factors with an insignificant effect on the quality characteristic. We demonstrate that the normal probability plot can be misleading for factor screening: a screened-out factor may turn out to be significant in the presence of certain interactions. Three methods of analysis in the presence of missing data for explanatory or response variables are considered under the multiple linear regression model: ignoring the rows with missing data, substituting the corresponding column means from the rows with no missing data, and substituting the Yates/EM algorithm estimates for the missing data. Five illustrative examples are given. In all the examples, the third method turns out to be the winner.

Subir Ghosh, Luis A. Lopez

Statistical Methods for Reliability

Frontmatter
26. Applications of Generalized Linear Models in Reliability Studies for Composite Materials

In this paper, applications of generalized linear models to studying the stochastic behavior of the fatigue life of a composite material are discussed. Composite materials are used in many engineering structures such as aircraft. The physical parameters of different composite materials are estimated using experimental data. Generalized linear models provide a wide class of probability models that are useful for modeling the collected data. In particular, the concept of a link function is useful for modeling the physical parameters of composite materials. To illustrate the methodology developed in this paper we have included an analysis of the fatigue data reported in Kim and Park (1980).

Makarand V. Ratnaparkhi, Won J. Park
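
As a generic illustration of fitting a GLM with a chosen link to fatigue-life data, the Python sketch below fits a Gamma model with a log link to simulated lives at several stress levels. The data, the S-N relationship used to simulate them, and the model form are all hypothetical assumptions, not the analysis of the Kim and Park (1980) data.

```python
# Hedged sketch: Gamma GLM with a log link on simulated fatigue-life data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
stress = np.repeat([200.0, 250.0, 300.0, 350.0], 15)      # stress amplitudes
mean_life = 5e6 * stress ** -2.5                           # hypothetical S-N law
life = rng.gamma(shape=4.0, scale=mean_life / 4.0)         # simulated lives

X = sm.add_constant(np.log(stress))                        # intercept + log(stress)
model = sm.GLM(life, X, family=sm.families.Gamma(link=sm.families.links.Log()))
fit = model.fit()
print(fit.params)    # intercept and slope on the log scale (slope ≈ -2.5)
```
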
27. Bivariate Failure Rates in Discrete Time

In this paper an expository analysis of three definitions of bivariate failure rates in the discrete time domain is attempted. Conditions under which each rate determines the distribution of failure times uniquely are investigated. Further, some characterizations of bivariate distributions based on the functional forms of these rates are established.

G. Asha, N. Unnikrishnan Nair
28. A General Approach of Studying Random Environmental Models

The consequences of introducing random effects have been a major issue in models for survival analysis. Survival models that incorporate random effects provide a useful framework for determining the effectiveness of interventions. In this paper we shall present a general approach of incorporating the random environmental effect in both univariate and multivariate models. The reliability measures, namely, the failure rate, the survival function and the mean residual life function are compared with or without the environmental effect. Examples are presented to illustrate the results.

Pushpa L. Gupta, Ramesh C. Gupta
29. Testing for Change Points Expressed in Terms of the Mean Residual Life Function

Change point problems have received considerable attention in the last two decades. They are often encountered in the fields of quality control, reliability and survival analysis. A typical situation occurs in quality control when one observes the output of a production line and is interested in detecting any change in the quality of a product. In this article we consider the problem of testing against change points when the change is expressed in terms of the mean residual life function, as described next. Let $X_1, X_2, \ldots, X_n$ be independent observations. We consider the problem of testing the null hypothesis of no change, $H_0$: the $X_i$ have an unknown common distribution function $F$, against the at-most-one-change-point alternative, $H_1$: $X_i$, $1 \le i \le [n\lambda]$, have common distribution function $F_1$ and $X_i$, $[n\lambda] < i \le n$, have common distribution function $F_2$, where $\lambda \in (0,1)$, $F_1 \ne F_2$ are unknown and $F_1 \succ_{MR} F_2$ (i.e., $M_1(t) \ge M_2(t)$ for all $t$, where $M_i(t) = \int_t^\infty \overline{F}_i(x)\,dx / \overline{F}_i(t)$). We propose some nonparametric tests for this problem and obtain their limiting distributions.

Emad-Eldin A. A. Aly
30. Empirical Bayes Procedures For Testing The Quality and Reliability With Respect To Mean Life

This paper considers a problem where lots arrive independently and sequentially for testing their quality; at each stage a lot is accepted as reliable if and only if the mean life θ of its items is at least θ₀, a specified standard required by the practitioner, and the penalty for making an incorrect decision is taken as proportional to the distance of the true θ from θ₀. When the prior distribution of θ remains unknown at each stage (hence the minimum-risk Bayes optimal test procedure is not available for use at any stage), this paper provides asymptotically optimal empirical Bayes procedures for situations where the lifetime distribution has a negative exponential or gamma density, and investigates the rates of convergence to optimality. We also consider the case where the lifetime distribution has a density belonging to a general exponential family, and the case where one has to deal with k varieties of lots simultaneously at each stage.

Radhey S. Singh
31. On a Test of Independence in a Multivariate Exponential Distribution

In this article the problem of testing independence in a multivariate exponential distribution with identical marginals is considered. Following Bhattacharyya and Johnson (1973) the null hypothesis of independence is transformed into a hypothesis concerning the equality of scale parameters of several exponential distributions. The conditional test for the transformed hypothesis proposed here is the likelihood ratio test. The powers of this test are estimated for selected values of the parameters, using Monte Carlo simulation and non-central chi-square approximation. The powers of the overall test are estimated using a simple formula involving the power function of the conditional test. Application to reliability problems is also discussed in some detail.

M. Samanta, A. Thavaneswaran

Statistical Methods for Quality Improvement

Frontmatter
32. Random Walk Approximation of Confidence Intervals

Confidence intervals and confidence regions are commonly used in process improvement studies to indicate measurement uncertainty. When the model of the system is nonlinear, confidence regions based on sums of squares are often the most accurate, but their calculation is a computationally intensive task. With up to 2 or 3 parameters, an effective method is to evaluate the sums of squares over a dense grid. In higher dimensions profile-based methods [Bates and Watts (1988)] are effective when it is practical to parametrize the model in terms of the quantities of interest, but it is difficult to construct confidence intervals for general functions of the parameters. In this paper we develop an algorithm for approximation of confidence intervals which is computationally efficient in models with up to about 10 parameters. The algorithm is based on a variation on Gibbs sampling [Gelfand and Smith (1990)] of a uniform distribution on a confidence region in the full parameter space and uses extrapolated QQ plots to adjust the borders of the resulting regions.

D. J. Murdoch
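
The grid-evaluation baseline mentioned in this abstract can be shown in a short Python sketch: for a two-parameter nonlinear model, evaluate the residual sum of squares over a dense grid and keep the points whose SSE lies below the usual F-based cut-off, which approximates a joint confidence region. The model and data are hypothetical, and this is the low-dimensional baseline rather than the sampling algorithm developed in the chapter.

```python
# Hedged sketch: sum-of-squares confidence region for a 2-parameter nonlinear
# model via grid evaluation (illustrative model and data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
x = np.linspace(0.5, 8, 25)
y = 10 * (1 - np.exp(-0.6 * x)) + rng.normal(0, 0.4, x.size)   # simulated data

def sse(theta1, theta2):
    return np.sum((y - theta1 * (1 - np.exp(-theta2 * x))) ** 2)

t1 = np.linspace(8, 12, 200)
t2 = np.linspace(0.3, 1.0, 200)
S = np.array([[sse(a, b) for b in t2] for a in t1])

# Points with SSE below the F-based cut-off form an approximate 95% region.
s_min, p, n = S.min(), 2, x.size
cutoff = s_min * (1 + p / (n - p) * stats.f.ppf(0.95, p, n - p))
inside = S <= cutoff

print(f"grid points inside the 95% region: {inside.sum()} of {S.size}")
print(f"theta1 range: {t1[inside.any(axis=1)].min():.2f} - {t1[inside.any(axis=1)].max():.2f}")
```
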
33. A Study of Quality Costs in a Multi Component and Low Volume Products Automated Manufacturing System

In this paper, we illustrate how to design screening inspections to minimize quality costs in a multi-stage automated manufacturing system. The total quality cost model consists of inspection costs, internal failure costs, external failure costs, and Taguchi’s loss function. Although the use of automatic test equipment such as machine vision and CMMs (coordinate measuring machines) has greatly increased inspection speed and accuracy, screening (100% inspection) should be considered only as a short-term method to remove nonconforming items from the population, and not as a long-term quality improvement strategy. However, screening may be used before costly operations, or after unsatisfactory operations.

Young-Hyun Park
34. Estimating Dose Response Curves

There are several companies in the United States that manufacture devices for detecting the presence of a variety of antibiotics in milk. All such devices that are manufactured and marketed in the USA require approval from the Food and Drug Administration (FDA), USA. For each drug whose presence in milk is to be tested, FDA has determined a critical level, and requires that all test devices show a sensitivity of at least 90% with a confidence level of at least 95% at the critical levels. This paper discusses several statistical techniques such as one-sided confidence intervals for the binomial p, and logit/probit regression analyses that are commonly used to check regulatory compliance in such cases.

Sat N. Gupta, Jacqueline Iannuzzi
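
The binomial check described here can be illustrated with a few lines of Python: given x positive test results out of n milk samples spiked at the critical drug level, compute an exact (Clopper-Pearson style) one-sided 95% lower confidence bound for the device sensitivity and compare it with the 90% requirement. The counts are hypothetical, and this is only one of the techniques the chapter discusses.

```python
# Hedged sketch: exact one-sided lower confidence bound for a binomial
# proportion, applied to a hypothetical sensitivity check.
from scipy import stats

n, x = 120, 116                  # hypothetical trials and detections
alpha = 0.05

# Exact (Clopper-Pearson) one-sided lower bound via the beta quantile.
lower = stats.beta.ppf(alpha, x, n - x + 1) if x > 0 else 0.0

print(f"observed sensitivity: {x / n:.3f}")
print(f"95% lower confidence bound: {lower:.3f}  "
      f"-> {'meets' if lower >= 0.90 else 'does not meet'} the 90% requirement")
```
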
35. On the Quality of Preterm Infants Formula and the Longitudinal Change in Mineral Contents in Human Milk

Human milk is often fortified with appropriate nutrients, including minerals, to allow premature infants and their families to enjoy the benefits conveyed by the feeding of breast milk while delivering an optimal nutrient supply to the baby. There is, however, considerable controversy about when, how and with what human milk should be fortified. Currently, pre-term infants’ formulas are prepared according to the nutrient requirements at the first or transition stage (birth to 10 days of age), a stable growing stage (10 days to 6-8 weeks following birth), and finally a post-discharge stage (6-8 weeks to 12 months following birth). But the quality of the pre-term infants’ formula will be affected if there is any weekly longitudinal change in mineral content in mothers’ milk, mainly during the first 8 to 12 weeks of the lactation period following birth. Very little is known about such longitudinal changes, which (if there are any) might lead to the need for appropriate changes in the preparation of pre-term infants’ formula in order to meet the nutrient requirements mainly during the stable-growing period. In this paper, we consider this important issue and analyze the longitudinal change in mineral content in milk from mothers of pre-term and full-term infants. Forty-three mothers from St. John’s, Newfoundland participated in the study, and data were collected from them for a period of 12 weeks.

Brajendra C. Sutradhar, Barbara Dawson, James Friel
Backmatter
Metadata
Title
Quality Improvement Through Statistical Methods
Editor
Bovas Abraham
Copyright Year
1998
Publisher
Birkhäuser Boston
Electronic ISBN
978-1-4612-1776-3
Print ISBN
978-1-4612-7277-9
DOI
https://doi.org/10.1007/978-1-4612-1776-3