
2008 | Book

Recent Advances in Reliability and Quality in Design

About this book

"Recent Advances in Reliability and Quality in Design" presents the latest theories and methods of reliability and quality, with emphasis on reliability and quality in design and modelling. Each chapter is written by active researchers and professionals with international reputations, providing material which bridges the gap between theory and practice to trigger new practices and research challenges.

Postgraduates, researchers, and practitioners in reliability engineering, maintenance engineering, quality engineering, operations research, industrial and systems engineering, mechanical engineering, computer engineering, management, and statistics will find this book a state-of-the-art survey of reliability and quality in design and practices.

Table of contents

Frontmatter

System Reliability Computing

Frontmatter
1. Central Limit Theorem for a Family of Reliability Measures
Abstract
The main objective of this chapter is to present and prove a Central Limit Theorem for a measure of reliability, called the gauge measure, which was introduced in an earlier paper. This measure is derived from a marked point process where the base process is a random point process and the marks are fuzzy random variables. The underlying point process represents the locations where faults occur, and the fuzzy marks quantify the subjective assessment made when remedial actions are implemented to restore the system to its former working condition. Several examples, including an application, are provided to illustrate the results presented in this chapter. Finally, we conclude with suggestions for future work.
P. Zeephongsekul
2. Modeling and Reliability Evaluation of Multi-state k-out-of-n Systems
Abstract
The k-out-of-n structure is a very popular type of redundancy in fault-tolerant systems, with wide applications across many types of systems. In many real applications, components and systems have multiple states. This chapter reports recent advances in the modelling and reliability evaluation of multi-state k-out-of-n systems.
Zhigang Tian, Wei Li, Ming J. Zuo
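
As a point of reference, a minimal sketch of the basic binary k-out-of-n:G reliability computation with i.i.d. components is given below; the chapter's multi-state generalization is not reproduced here.

```python
from math import comb

def k_out_of_n_reliability(k: int, n: int, p: float) -> float:
    """Reliability of a binary k-out-of-n:G system with i.i.d. components.

    The system works if at least k of the n components work, each with
    probability p, so R = sum_{i=k}^{n} C(n, i) p^i (1 - p)^(n - i).
    """
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Example: a 2-out-of-3 system with component reliability 0.9.
print(k_out_of_n_reliability(2, 3, 0.9))  # 0.972
```
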
3. On Weighted Least Squares Estimation for the Parameters of Weibull Distribution
Abstract
The two-parameter Weibull distribution is one of the most widely used life distributions in reliability studies. It has been shown to be satisfactory in modeling the phenomena of fatigue and the life of many devices such as ball bearings, electric bulbs, capacitors, transistors, motors, and automotive radiators. In recent years, a number of modifications of the traditional Weibull distribution have been proposed and applied to model complex failure data sets.
L.F. Zhang, M. Xie, L.C. Tang
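
The least-squares idea behind the chapter can be sketched as follows, assuming median-rank plotting positions and an illustrative weight vector; the specific weighting scheme proposed in the chapter is not reproduced here.

```python
import numpy as np

def weibull_ls_fit(failure_times, weights=None):
    """Least-squares fit of the two-parameter Weibull distribution.

    Uses the linearization ln(-ln(1 - F)) = beta*ln(t) - beta*ln(eta) with
    median-rank plotting positions F_i ~ (i - 0.3)/(n + 0.4). The `weights`
    argument stands in for whatever weighting scheme is adopted; uniform
    weights reduce to ordinary least squares.
    """
    t = np.sort(np.asarray(failure_times, dtype=float))
    n = len(t)
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
    x = np.log(t)
    y = np.log(-np.log(1.0 - F))
    w = np.ones(n) if weights is None else np.asarray(weights, dtype=float)
    slope, intercept = np.polyfit(x, y, 1, w=np.sqrt(w))
    beta = slope
    eta = np.exp(-intercept / beta)   # since intercept = -beta * ln(eta)
    return beta, eta

print(weibull_ls_fit([120, 190, 250, 340, 410, 560]))
```
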
4. Periodic and Sequential Imperfect Preventive Maintenance Policies for Cumulative Damage Models
Abstract
This chapter applies periodic and sequential preventive maintenance (PM) policies to cumulative damage models in which the total damage is additive. First, PM is performed at periodic times kT (k = 1, 2, ...), and the amount of damage incurred in each periodic interval has an identical distribution. A unit fails when the total damage exceeds a failure level K. The PM reduces the total damage according to its improvement factor. Two replacement policies, in which a unit is replaced either at time nT or when the total damage exceeds a managerial level Z, are considered. An example is shown in which the amount of damage is exponentially distributed. Next, PM is performed at sequential times Tk (k = 1, 2, ...): shocks occur according to a Poisson process, and a unit fails with probability p(x) when the total damage is x. If a unit fails, it undergoes a minimal repair. The expected cost rate until replacement is derived when p(x) is exponential. Optimal PM times that minimize the expected cost rates are computed numerically for infinite and finite time intervals by solving simultaneous equations.
Toshio Nakagawa, Satoshi Mizutani
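
A rough Monte Carlo sketch of the periodic policy, with purely illustrative parameter values and a simple multiplicative improvement factor (the chapter derives the expected cost rates analytically), might look like this:

```python
import random

def simulate_periodic_pm(n, K, Z, mean_damage, improvement, trials=100_000):
    """Monte Carlo sketch of the periodic PM policy described in the chapter.

    In each periodic interval the unit accumulates an exponentially
    distributed amount of damage; imperfect PM reduces the accumulated damage
    by the factor `improvement`. The unit fails if damage exceeds K, and is
    replaced preventively at time nT or when damage exceeds the managerial
    level Z. All parameter values are illustrative only.
    """
    failures = 0
    for _ in range(trials):
        damage = 0.0
        for _ in range(n):
            damage += random.expovariate(1.0 / mean_damage)
            if damage > K:          # failure before replacement
                failures += 1
                break
            if damage > Z:          # preventive replacement triggered
                break
            damage *= (1.0 - improvement)  # imperfect PM reduces damage
        # otherwise the unit is replaced at time nT
    return failures / trials

print(simulate_periodic_pm(n=10, K=100.0, Z=70.0,
                           mean_damage=12.0, improvement=0.3))
```
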
5. Some Alternative Approaches to System Reliability Modeling
Abstract
This work contains a review and development of ideas concerning some alternative methods for the stochastic modeling of reliability that we have worked on over the last decade. Part of the presented material is published here for the first time. Another part, which has already appeared or will appear in the coming months, is now presented in a newer, “simpler” way, as our teaching experience with it has grown through many professional discussions.
Jerzy K. Filus, Lidia Z. Filus
6. The Optimal Burn-in: State of the Art and New Advances for Cost Function Formulation
Abstract
Burn-in is a quality screening technique used to induce early failures that would be costly if experienced by the customer. As a method of screening out early failures of products, burn-in testing has been widely used in electronic manufacturing, as well as in many other areas such as the military and aerospace industries, since the 1950s. Burn-in has proven to be a very effective quality control procedure that can improve product quality, enhance reliability over the operational life, and bring both profit and goodwill to the manufacturer.
Xin Liu, Thomas A. Mazzuchi

Reliability Engineering in Design

Frontmatter
7. Optimum Threshold Level of Degrading Systems Based on Sensor Observation
Abstract
Degradation models are normally used to predict the system’s failure under condition-based predictive maintenance policies. Repairs or replacements of the system are performed once the degradation level reaches a predetermined threshold level. This results in significant time and cost savings compared to the situation when the system is repaired upon failure. In the former case, the maintenance is planned and the necessary spare parts and manpower requirements are readily available, while in the latter case it is difficult to predict the failure time and plan the necessary resources to perform the maintenance action immediately upon failure. Recent developments in sensors, chemical and physical nondestructive testing, and sophisticated measurement techniques have facilitated the continuous monitoring of the system condition. A condition parameter could be any critical characteristic such as crack growth, vibration, corrosion, wear, and lubricant condition. With the measured data, the predictive maintenance policy determines the optimum threshold level at which maintenance action is performed to bring the system to a “better” condition, if not as good as new, in order to maximize system availability or minimize the average maintenance cost.
Elsayed A. Elsayed, Hao Zhang
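
The trade-off behind the optimum threshold can be sketched with a toy renewal-reward simulation; the random-walk degradation path, cost values, and threshold grid below are illustrative assumptions, not the chapter's model.

```python
import random

def avg_cost_rate(threshold, drift=1.0, sd=0.5, fail_level=100.0,
                  c_pm=1.0, c_cm=5.0, trials=20_000):
    """Monte Carlo sketch: average maintenance cost rate for a PM threshold.

    Degradation grows by a random increment each period (a simple random-walk
    stand-in for the sensor-observed degradation path). Maintenance at the
    threshold costs c_pm; letting the unit reach the failure level costs c_cm.
    The cost rate is total cost divided by total operating time over many
    renewal cycles.
    """
    total_cost = total_time = 0.0
    for _ in range(trials):
        level, t = 0.0, 0
        while True:
            level += max(0.0, random.gauss(drift, sd))
            t += 1
            if level >= fail_level:   # failure: corrective maintenance
                total_cost += c_cm
                break
            if level >= threshold:    # preventive maintenance
                total_cost += c_pm
                break
        total_time += t
    return total_cost / total_time

for th in (60, 75, 90, 99):
    print(th, round(avg_cost_rate(th), 4))
```
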
8. Weibull Data Analysis with Few or no Failures
Abstract
Laboratory testing is a critical step in the development of vehicle components or systems. It allows the design engineer to evaluate the design early in the reliability development phase. A good lab test will shorten product development cycles and minimize cost and part failures at the PG or in field testing before volume production of the vehicle. Appropriate testing is available to correlate test time in the lab (or lab test bogey) to real-world survival time (or field design life). The testing must be performed in an accelerated fashion, typically called accelerated testing. The failure mechanism(s) that the accelerated test will bring out are of great importance. No single test can surface all potential failure mechanisms of a part. Certain failure mechanisms dominate throughout the useful lifetime of the part, and some may never occur. To verify that a new product design meets a reliability target requirement, one can perform data analysis on the life testing data using the Weibull life distribution. However, in fitting a Weibull distribution to reliability data, one may have only a few or no failures. This paper presents a method to estimate the reliability and confidence limits that apply to few or no failures with an assumed Weibull slope value of β.
Ming-Wei Lu, Cheng Julius Wang
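
For orientation, the textbook zero-failure ("Weibayes") lower bound with an assumed slope β can be computed as follows; the chapter's treatment of the few-failure case may differ in detail.

```python
import math

def weibayes_zero_failure(test_times, beta, confidence=0.90):
    """Standard 'Weibayes' lower bound when a test ends with zero failures.

    With an assumed Weibull slope beta, the lower confidence bound on the
    characteristic life is eta_L = (sum(t_i^beta) / ln(1/(1-C)))^(1/beta).
    This is the textbook zero-failure formula, not necessarily the exact
    method developed in the chapter.
    """
    s = sum(t**beta for t in test_times)
    return (s / math.log(1.0 / (1.0 - confidence))) ** (1.0 / beta)

# Twelve units each tested for 1000 hours with no failures, assumed beta = 2.
eta_l = weibayes_zero_failure([1000.0] * 12, beta=2.0, confidence=0.90)
print(eta_l)                                   # lower bound on eta
print(math.exp(-(500.0 / eta_l) ** 2.0))       # reliability lower bound at t = 500
```
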
9. A Load-weighted Statistical Average Model of Fatigue Reliability
Abstract
The stress-strength interference (SSI) analysis method has been applied to reliability estimation of a broad range of structural components under a variety of loading conditions. This method is successful for static strength failure. For fatigue failure, some of the current methods use the SSI technique directly, assuming that the distributions of applied stress and of fatigue strength corresponding to a specific number of cycles to failure are known. However, the exact distribution of fatigue strength at a specific number of cycles to failure cannot be obtained from a test. Alternatively, methods to determine the fatigue strength distribution from the fatigue life distribution have been proposed.
Liyang Xie, Zheng Wang
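
The static SSI calculation that the chapter builds on can be illustrated for normally distributed stress and strength; the parameter values below are illustrative only.

```python
from math import erf, sqrt

def ssi_reliability_normal(mu_strength, sd_strength, mu_stress, sd_stress):
    """Stress-strength interference reliability for normal stress and strength.

    R = P(strength > stress) = Phi((mu_S - mu_s) / sqrt(sd_S^2 + sd_s^2)).
    This is the basic static SSI calculation; the chapter's load-weighted
    fatigue model generalizes this idea.
    """
    z = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF at z

print(ssi_reliability_normal(600.0, 40.0, 450.0, 30.0))  # ~0.9987
```
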
10. Markovian Performance Evaluation for Software System Availability with Processing Time Limit
Abstract
In this paper, we discuss a software performance evaluation method that considers the real-time property. The time-dependent behavior of the software system itself, alternating between up and down states, is described by a Markovian software availability model. Assuming that the software system can process plural tasks simultaneously, we analyze the distribution of the number of tasks whose processing can be completed within a prespecified processing time limit, using an infinite server queueing model. We derive several stochastic quantities for software performance measurement considering the real-time property; these are given as functions of time and of the number of debugging activities. Finally, we present several numerical examples of these quantities to investigate the relationship between the software reliability/restoration characteristics and the system performance.
Masamitsu Fukuda, Koichi Tokuno, Shigeru Yamada
11. Failure Probability Estimation of Long Pipeline
Abstract
It is well known that, for the majority of pressurized pipelines, both the load and the resistance parameters show evident uncertainty, and a probabilistic approach should be applied to assess their behavior. Concerning reliability estimation of passive components such as pressure vessels and pipelines, there are two kinds of approaches: direct estimation using statistics of historical failure event data, and indirect estimation using probabilistic analysis of the failure phenomena under consideration. The direct estimation method can be validated relatively easily; however, it suffers from statistical uncertainty due to scarce data. The indirect estimation method relies on the statistics of material properties and environmental loads, which are more readily available. For systems composed of passive components, statistical dependence among component failures is a complex issue that cannot be ignored in reliability estimation.
Liyang Xie, Zheng Wang, Guangbo Hao, Mingchuan Zhang

Software Reliability and Testing

Frontmatter
12. Software Fault Imputation in Noisy and Incomplete Measurement Data
Abstract
This study examines the impact of noise on the evaluation of software quality imputation techniques. The imputation procedures evaluated in this work include Bayesian multiple imputation, mean imputation, nearest neighbor imputation, regression imputation, and REPTree (decision tree) imputation. These techniques were used to impute missing software measurement data for a large military command, control, and communications system dataset (CCCS). A randomized three-way complete block design analysis of variance model using the average absolute error as the response variable was built to analyze the imputation results. Multiple pairwise comparisons using Fisher and Tukey-Kramer tests were conducted to demonstrate the performance differences amongst the significant experimental factors. The underlying quality of data was a significant factor affecting the accuracy of the imputation techniques. Bayesian multiple imputation and regression imputation were top performers, while mean imputation was ineffective.
Andres Folleco, Taghi M. Khoshgoftaar, Jason Van Hulse
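
A small, self-contained comparison in the spirit of the study, using synthetic data in place of the CCCS dataset (which is not publicly available) and scikit-learn's mean and k-nearest-neighbor imputers:

```python
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer

# Synthetic stand-in for software measurement data: correlated metrics with
# values knocked out at random, so the average absolute error (AAE) of each
# imputation technique can be compared on the known true values.
rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(50, 10, n)
X_true = np.column_stack([x1,
                          0.8 * x1 + rng.normal(0, 3, n),
                          0.5 * x1 + rng.normal(0, 5, n)])
X_miss = X_true.copy()
mask = rng.random(X_miss.shape) < 0.15      # 15% missing completely at random
X_miss[mask] = np.nan

for name, imputer in [("mean", SimpleImputer(strategy="mean")),
                      ("kNN", KNNImputer(n_neighbors=5))]:
    X_hat = imputer.fit_transform(X_miss)
    aae = np.abs(X_hat[mask] - X_true[mask]).mean()
    print(f"{name}: AAE = {aae:.2f}")
```
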
13. A Linearized Growth Curve Model for Software Reliability Data Analysis
Abstract
This chapter discusses a method for applying a linear regression analysis to software reliability data. By expanding five traditional growth curve models, we propose a linearized growth curve model. The unknown parameters included in the model can be estimated by log-linear regression with the method of two-parameter numerical differentiation which is introduced in this study. This model and its estimation results can provide a control chart representing the degree of software reliability growth and testing progress in a software testing phase. Also the estimated growth curve can be used as a generalized growth curve model, which describes the future behavior of the cumulative number of detected software faults.
Mitsuhiro Kimura
14. Software Reliability Model Considering Time-delay Fault Removal
Abstract
Software reliability has proven to be one of the most useful indices for evaluating software applications quantitatively. Among the many methodologies for constructing software reliability models, the software reliability growth models (SRGMs) based on the non-homogeneous Poisson process (NHPP) have been widely used in practical software reliability engineering and have attracted many engineers and researchers who assess software systems.
Seheon Hwang, Hoang Pham
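
As background, the simplest NHPP SRGM (the Goel-Okumoto model, not the chapter's time-delay extension) can be fitted to cumulative fault counts as follows, using made-up data:

```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Mean value function of the Goel-Okumoto NHPP model, m(t) = a(1 - e^{-bt})."""
    return a * (1.0 - np.exp(-b * t))

# Illustrative cumulative fault counts per week of testing (made-up data).
t = np.arange(1, 11, dtype=float)
m = np.array([12, 21, 29, 35, 40, 43, 46, 48, 49, 50], dtype=float)

(a_hat, b_hat), _ = curve_fit(goel_okumoto, t, m, p0=[60.0, 0.2])
print(a_hat, b_hat)                      # estimated total faults and detection rate
print(goel_okumoto(15.0, a_hat, b_hat))  # predicted cumulative faults at week 15
```
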
15. Heuristic Component Placement for Maximizing Software Reliability
Abstract
In this chapter, we present a methodology for architecture-based software reliability analysis considering interface failures. The methodology generates an analytical reliability function that expresses application reliability in terms of the reliabilities and visit statistics of the components and interfaces comprising the application. Based on the analytical reliability function, we then present an optimization approach that produces a desirable deployment configuration of the application components given the application architecture and the component and interface reliabilities, subject to two types of constraints. The first type of constraint is the node size constraint and is concerned with the physical limit of the nodes, where a single node cannot accommodate more than a certain maximum number of components. The second type of constraint is the component location constraint, and is concerned with component deployment, where there are restrictions on which components can be deployed on which nodes due to reasons such as architectural mismatch. The optimization framework uses simulated annealing as the underlying optimization technique. We illustrate the value of the analysis and optimization methodologies using several examples.
Michael W. Lipton, Swapna S. Gokhale
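
A toy version of the simulated-annealing search, with a simple surrogate objective in place of the chapter's analytical reliability function and with illustrative node capacities and interface reliabilities:

```python
import math
import random

# Toy instance: 6 components, 3 nodes, each node holds at most 2 components.
# Interfaces between components on different nodes use a less reliable
# remote-link factor. The objective below is only a surrogate for the
# analytical reliability function derived in the chapter.
comp_rel = [0.99, 0.98, 0.97, 0.99, 0.96, 0.98]
links = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]   # interacting component pairs
local_if, remote_if = 0.999, 0.98                  # interface reliabilities
capacity, n_nodes = 2, 3

def reliability(assign):
    r = math.prod(comp_rel)
    for i, j in links:
        r *= local_if if assign[i] == assign[j] else remote_if
    return r

def feasible(assign):
    return all(assign.count(v) <= capacity for v in range(n_nodes))

def anneal(steps=20_000, t0=0.05, alpha=0.9995):
    assign = [0, 0, 1, 1, 2, 2]                    # feasible starting placement
    best, best_r = assign[:], reliability(assign)
    cur_r, temp = best_r, t0
    for _ in range(steps):
        cand = assign[:]
        cand[random.randrange(len(cand))] = random.randrange(n_nodes)
        if not feasible(cand):
            continue
        r = reliability(cand)
        # Accept improvements always, worse moves with Boltzmann probability.
        if r > cur_r or random.random() < math.exp((r - cur_r) / temp):
            assign, cur_r = cand, r
            if r > best_r:
                best, best_r = cand[:], r
        temp *= alpha
    return best, best_r

print(anneal())
```
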
16. Software Reliability Growth Models Based on Component Characteristics
Abstract
Building a highly reliable software system through efficient and economical testing is one of the important issues in modern software development. As one solution, many companies have adopted testing-progress management and quality/reliability assessment based on software reliability growth models. Consequently, many software reliability growth models reflecting various development factors and the operational environment of the software system have been proposed. Most of these models have treated the software system as a single domain. That is, the models assume that the set of testing paths exercised by the execution of test cases grows as testing progresses and eventually covers the whole software system. However, a software system ordinarily has a structure in which a main component calls several sub-components; unless the main component is tested, the sub-components called from it are not tested. In this chapter, we propose software reliability growth models based on such component characteristics. In particular, these models, which reflect the different testing environments of the composed components, are formulated as nonhomogeneous Poisson processes. Furthermore, using fault-detection data observed in actual development projects, we show numerical examples of software reliability assessment and results of goodness-of-fit comparisons with existing software reliability growth models.
Takaji Fujiwara, Shinji Inoue, Shigeru Yamada

Quality Engineering in Design

Frontmatter
17. Statistical Analysis of Appearance Quality for Automotive Rubber Products
Abstract
Sponge corner materials for automotive products are generally called weatherstrip. A bloom phenomenon characteristic of rubber products causes a quality problem in the production process for rubber companies. This quality problem in appearance prevents them from improving the product's quality and the productivity of the process. In this paper, we conduct several kinds of designed experiments based on a quality engineering approach to identify the causes of the bloom phenomenon and to improve the appearance quality and performance of the product.
Shigeru Yamada, Kenji Takahashi
18. Present Worth Design of Engineering Systems with Degrading Components
Abstract
The ability of a manufacturer to design and produce a reliable and robust product that meets the customer’s short- and long-term expectations with low cost and a short product development time is the key to success in today’s market. Customers’ expectations include quality at the start of a product’s life and both functionality and performance over a planned lifetime (e.g., the warranty time). Quality may be defined as conformance of performance measures to specifications. Functionality is related to hard failures of components, meaning that the system ceases to function completely. Performance over time considers so-called soft failures, wherein the system operates but performance measures do not meet their limit specifications. Often the design addresses only quality, and it is hoped that performance and functionality will be acceptable. However, performance and functionality over time are important to customers and must be ensured.
Young Kap Son, Gordon J. Savage
19. Economic-statistical Design of a Logarithmic Transformed S2 EWMA Chart
Abstract
Exponentially weighted moving average (EWMA) control charts are an efficient means of detecting small process shifts, both in the position and in the dispersion of the collected data. Implementing an EWMA chart to control a manufacturing process requires the computation and plotting of a random variable that is a function of the current sample statistic and of the past samples collected from the process. This allows the EWMA to prevail over the traditional Shewhart chart in terms of statistical sensitivity when small shifts in the process position and/or dispersion are expected. The aim of this chapter is to present the economic-statistical design of an S2 EWMA control chart for the on-line control of the process dispersion. The investigated chart operates through a control statistic based on a logarithmic transformation of the sample variance, so that the monitored random variable is approximately standard normally distributed. Since the implementation of control charts to monitor process stability has become normal practice within industrial manufacturing environments, designing the control chart economically is an important managerial aspect of SPC that should be carefully taken into account by practitioners.
P. Castagliola, G. Celano, S. Fichera
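
The EWMA recursion on a log-transformed sample variance can be sketched as below; the chart's three-parameter transformation and its economic-statistical design are replaced here by a crude empirical standardization, so the constants and limits are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, n, sigma0 = 0.1, 5, 1.0   # smoothing constant, subgroup size, in-control sigma

# Empirically standardize Y = ln(S^2) under the in-control distribution
# (a stand-in for the chart's designed a + b*ln(S^2 + c) transformation).
calib = np.log(rng.normal(0, sigma0, (100_000, n)).var(axis=1, ddof=1))
mu_y, sd_y = calib.mean(), calib.std()

def s2_ewma(samples, L=2.7):
    """Return the index of the first out-of-control signal, or None."""
    z = 0.0
    ucl = L * np.sqrt(lam / (2 - lam))   # asymptotic EWMA control limits
    for i, x in enumerate(samples, 1):
        y = (np.log(np.var(x, ddof=1)) - mu_y) / sd_y
        z = lam * y + (1 - lam) * z      # EWMA recursion
        if abs(z) > ucl:
            return i
    return None

# 50 in-control subgroups followed by a dispersion shift (sigma doubles).
stream = [rng.normal(0, sigma0, n) for _ in range(50)] + \
         [rng.normal(0, 2 * sigma0, n) for _ in range(50)]
print(s2_ewma(stream))
```
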
20. Risk Management Techniques for Quality Software Development
Abstract
Due to the rapid growth of the IT community, users are demanding both very specific requirements and quick delivery times. Under this pressure, it is often difficult for project managers to meet these high expectations. Consequently, many risks remain latent in almost all software development projects. The use of countermeasures only after failures indicates a certain degree of “management failure”. Therefore, we have to manage such risks at an early stage for projects to be successful.
Toshihiko Fukushima, Shigeru Yamada

Application in Engineering Design

Frontmatter
21. Recent Advances in Data Mining for Categorizing Text Records
Abstract
In a world with highly competitive markets, there is a great need in almost all business organizations for highly effective coordination and decision-support tools that allow a predictive enterprise to direct, optimize, and automate specific decision-making processes in daily operations. Improved decision-making support can help people examine data on past circumstances and present events, as well as project future actions, thereby continually improving the quality of products or services. Such improvement has been driven by recent advances in digital data collection and storage technology. The new data collection technology has resulted in the growth of massive databases, also known as data avalanches. These rapidly growing databases occur in various applications including the service industry, global supply chain organizations, air traffic control, nuclear reactors, aircraft fly-by-wire, real-time sensor networks, industrial process control, hospital healthcare, and security systems. The massive data, especially text records, may on one hand contain a great wealth of knowledge and information, but on the other hand contain information that may not be reliable due to many sources of uncertainty in our changing environments. Moreover, manually classifying thousands of text records according to their contents can be demanding and overwhelming. Data mining has gained a lot of attention from researchers and practitioners over the past decade as an emerging research area for finding meaningful patterns and making sense of massive data sets.
W. Chaovalitwongse, Hoang Pham, Seheon Hwang, Z. Liang, C.H. Pham
22. Quality in Design: User-oriented Design of Public Toilets for Visually Impaired People
Abstract
According to United Nations statistics, about 1/30th of the world's population is visually impaired, with different types, levels, and degrees of visual impairment. It is not difficult to see that visually impaired people (VIP) face various difficulties and limitations in their daily lives, particularly when they need to interact with public environments and facilities with which they may not be familiar. In recent years, policymakers and researchers in different disciplines, such as sociology, architecture, design, and engineering, have conducted more discussions and made increasing efforts to improve this situation. Among the various projects with different perspectives, directions, objectives, and targeted beneficiaries, the key approach is generally to apply technologies to provide convenience, or to overcome existing “barriers” to use, for VIP. However, as many studies in Europe and America have shown, the situation remains unsatisfactory, and complaints by VIP are frequently heard. The mass media also frequently report on accidents and on unsatisfactory or unfair environments for VIP. This is not due to the technologies or inventions themselves, however "almighty" they may be, but to the fact that these technologies do not fit the wants and needs of VIP (i.e., the actual users) or function as they were originally planned and intended.
Kin Wai Michael Siu
23. Assurance Cases for Reliability: Reducing Risks to Strengthen ROI for SCADA Systems
Abstract
Supervisory Control and Data Acquisition (SCADA) systems are crucial to many critical infrastructures, typically those that operate in real-time environments. In the past, these systems were based on proprietary protocols and were isolated from other networks. However, as the trend towards standard IT practices has grown, there have been serious concerns regarding the security, reliability, and availability of these systems, since they are increasingly connected to enterprise networks. There are many positive business benefits to be gained from this trend; however, this connectivity has increased the risk that security vulnerabilities in these non-proprietary systems will be exploited. These vulnerabilities, if exploited, can result in serious consequences such as degraded performance, loss and/or compromise of critical information, or, in the worst-case scenario, making these critical systems completely unavailable.
Ann Miller, Rashi Gupta
24. Detecting Driver’s Emotion: A Step Toward Emotion-based Reliability Engineering
Abstract
In traditional engineering design, systems are designed so that machines work today in the same way as they did yesterday, no matter how the situation may change. In short, the traditional goal was to build a context-independent system. Until recently, situations did not change appreciably, but today they change very rapidly and very frequently, so the context-independent approach is no longer effective. To cope with these rapid and frequent changes, a more context-dependent approach is called for. Many context-dependent approaches could be developed, but what should be pointed out is the importance of human cognition and decision making. We could install many sensors and actuators so that a machine could respond to rapid changes; however, it should be stressed that human factors are becoming more and more important in securing system reliability. This is not the human factor of traditional reliability engineering, where the human is treated as a system element within a largely context-independent framework.
Shuichi Fukuda
25. Mortality Modeling Perspectives
Abstract
As the human lifespan increases, more and more people are becoming interested in mortality rates at higher ages. Since 1909, the birth rate in the United States has been decreasing, except for a major increase after World War II, between the years 1946 and 1964, also known as the baby boom period. People born during the baby boom are now between the ages of 44 and 62. According to the National Center for Health Statistics, US Department of Health and Human Services, in 1900–1902 one could expect to live for 49 years on average. Today, an infant can expect to live about 77 years, and recent data and projections suggest that life expectancy at birth may be even higher. With the human lifespan increasing and a large part of the United States population aging, many researchers in various fields have recently become interested in studying quantitative models of mortality rates. Scientists in biological fields are interested not only in organisms and how they are made, but also in what happens to organisms over time. A study of yeast, of interest to biologists, showed the effects of senescence and provided a model that accurately represents the experimental data. It has been shown that the addition of a Sir2 gene can prolong life in yeast. Once we can model human aging, we can look for ways to extend our lifespan and counteract the negative aspects of aging.
Hoang Pham
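
One classical quantitative model of adult mortality, the Gompertz law μ(x) = A·e^(Bx), gives a closed-form survival function that can serve as a simple reference point; the parameter values below are purely illustrative, not estimates from the chapter.

```python
import math

def gompertz_survival(x, A=5e-5, B=0.085):
    """Survival to age x under the Gompertz hazard mu(x) = A*exp(B*x).

    Integrating the hazard gives S(x) = exp(-(A/B) * (exp(B*x) - 1)).
    The parameter values are illustrative only.
    """
    return math.exp(-(A / B) * (math.exp(B * x) - 1.0))

for age in (50, 65, 77, 90):
    print(age, round(gompertz_survival(age), 3))
```
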
Backmatter
Metadata
Title
Recent Advances in Reliability and Quality in Design
Edited by
Hoang Pham
Copyright year
2008
Publisher
Springer London
Electronic ISBN
978-1-84800-113-8
Print ISBN
978-1-84800-112-1
DOI
https://doi.org/10.1007/978-1-84800-113-8
