
About this Book

International cooperation on reliability and accident data collection and processing, together with the exchange of experience on actual uses of data and reliability engineering techniques, is a major step towards realising safer and more efficient industrial systems. This book provides an updated presentation of the activities in this field on a worldwide basis.

Table of Contents

Frontmatter

General Session

Error Analysis in Man-Machine Systems

The study reported here was aimed at evaluating the reliability of the MMS in an automated process control system. The methodology was based on error analysis and took into consideration: a) the errors that occurred, b) the detection of errors, and c) recovery from errors. It was adopted to evaluate both a prototype and the implementation of the MMS. The two evaluations had different aims. The first, related to the “design” phase, was aimed at establishing whether the designed tasks were compatible with the cognitive abilities and competence of the operators. The objective of the second evaluation, related to the “re-design” phase, was to identify where and how to improve the MMS.

S. Bagnara, A. Rizzo, F. Stablum

Environmental Impact Assessment and Risk Analysis. Theoretical Background of the Computerized Methodology “LIVIA”

Industrial facilities exchange materials and energy with the surrounding area, both in normal operating conditions and in exceptional accidental situations. The total effect is thus the sum of a continuous deterministic component (impact) and of random stochastic components (risks). A method has been developed to identify and evaluate the impacts (using a logical approach based on the coaxial matrix theory) and to merge their effects with those caused by accidental events, in order to map the likely total interaction between the facility and the environment. The computerized version of the method, named LIVIA, has been tested in various applications in the refining and petrochemical fields.

V. Colombari, G. C. Bello, E. Galatola

Addressing the Problem of the Relevance of Reliability Data to Varied Applications

Reliability data is collected for many reasons on a wide range of components and applications. Sometimes data is collected for a specific purpose whilst in other situations data may be collected simply to provide an available pool of historical data. Data can also be extracted from information that was gathered without recognition that it could be adapted for use as reliability data at a later stage. It is not surprising that there should be significant differences in the strengths and weaknesses of data obtained in such different circumstances. This paper describes work undertaken to investigate how to make best use of available data to provide specific and reliable predictions of valve reliability for nuclear power station applications.

P. J. McIntyre, I. K. Gibson, H. H. Witt

OREDA Session

Planning and Management of the Offshore Reliability Data Project (OREDA)

The OREDA project was established by seven oil companies to collect data on failures of selected offshore equipment. Data were collected from the maintenance systems of six companies operating offshore oil and gas fields and pooled into an event and inventory data bank. The paper summarizes how the project was organized and managed and presents the resulting data base.

Magne Toerhaug

OREDA Phase II Data Collection Experience

This paper describes data collection activities in Phase II of the OREDA (Offshore Reliability Data) project. Data collection involved the collection and coding of engineering, functional and failure data for over 1600 equipment inventories and 8000 failure events. Problems encountered in acquiring data from maintenance records and maintaining satisfactory quality control for such a large and geographically dispersed project are also discussed.

T. R. Moss, H. Sandtorv, C. Pratella

Failure Rate Estimation Based on Data from Different Environments and with Varying Quality

Data to be included in reliability databases are typically collected from different sources. The “true” reliability of a component may vary from source to source, due to factors such as different manufacturers, design, dimensions, materials, and operational and environmental conditions. The quality of the data may vary in completeness and level of detail, due to one or more reasons such as data registration methods, company boundary specifications, subjectiveness and skill of the data collector, and time since the failure events occurred. The paper discusses reliability estimation based on data with the characteristics mentioned above. Special problems and uncertainties are highlighted. The discussion is exemplified with problems encountered during data collection projects such as OREDA.

S. Lydersen, M. Rausand
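
The estimation problem discussed above can be illustrated with a minimal sketch. It assumes failures in each source follow a homogeneous Poisson process, so the per-source maximum-likelihood failure rate is n/T; source names and counts are invented for illustration, and pooling is only valid if the sources are genuinely homogeneous.

```python
# Sketch: per-source and pooled failure rate estimation (invented data).
# Assumes a homogeneous Poisson process within each source, so the
# maximum-likelihood failure rate is n / T (failures per unit time).

sources = {
    # source name: (observed failures, accumulated operating hours)
    "platform_A": (4, 120_000),
    "platform_B": (9, 150_000),
    "platform_C": (2, 80_000),
}

per_source = {name: n / t for name, (n, t) in sources.items()}

total_n = sum(n for n, _ in sources.values())
total_t = sum(t for _, t in sources.values())
pooled = total_n / total_t  # defensible only if sources are homogeneous

for name, rate in per_source.items():
    print(f"{name}: {rate:.2e} failures/hour")
print(f"pooled:     {pooled:.2e} failures/hour")
```

The spread between the per-source estimates is exactly the source-to-source variation the paper warns about; a single pooled number hides it.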

Reliability Data Banks

Data Selection for Probabilistic Safety Assessment

The IAEA has, in the framework of its PSA activities, compiled a component reliability data base from the available literature. This data base contains more than 1,000 records drawn from 21 different sources. In order to assess the actual ranges of the data, a number of graphs containing failure rates or probabilities for similar components and failure modes were plotted. It was recognized that order-of-magnitude differences exist among the sources. The influence of the data set used on the final PSA result was assessed by use of a plant PSA model. The original data for the 8 components with the highest importance measure were replaced by consistently high and low data drawn from generic sources. The results show an increase by a factor of 15 in the normalized core melt frequency. The exercise was repeated for the 23 most important components, and a factor of 46 in core melt frequency was observed.

B. Tomic, L. Lederman
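
The amplification reported above is easy to reproduce with a toy model (not the IAEA plant model): a point estimate built from minimal-cut-set products multiplies the unavailabilities of the components in each cut set, so order-of-magnitude differences in input data compound. All component names, numbers and cut sets below are invented.

```python
# Toy illustration: sensitivity of a cut-set point estimate to the data set.
# "Core melt frequency" here is just the sum of two invented minimal cut
# sets; every number is illustrative, not from the IAEA study.

base = {"pump": 3e-3, "valve": 1e-3, "diesel": 2e-2}

def cmf(q):
    # two hypothetical minimal cut sets: {pump, diesel} and {valve, diesel}
    return q["pump"] * q["diesel"] + q["valve"] * q["diesel"]

high = {k: 10 * v for k, v in base.items()}  # pessimistic generic data

ratio = cmf(high) / cmf(base)
print(f"normalized result increased by a factor of {ratio:.0f}")
```

With second-order cut sets, a uniform factor-of-10 shift in component data becomes a factor of 100 in the result, which is why the choice of data source dominates such point estimates.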

A Reliability Data Bank for the Natural Gas Distribution Industry

The present study is an update and extension of those already reported. It illustrates the results obtained from ordinary maintenance operations on six large sets of urban mains, with a view to optimising the establishment of a reliability data bank. The study underscores the extreme individuality of gas distribution plants and the need for a sector data bank. A prototype form for the collection of Pressure Regulating Installation (PRI) faults and breakdowns is also presented.

M. Scarrone, N. Piccinini, C. Massobrio

Guidelines for Process Equipment Reliability Data by CCPS

The Center for Chemical Process Safety (CCPS) of the American Institute of Chemical Engineers (AIChE) will release a book on reliability data in 1989. The book is titled Guidelines for Process Equipment Reliability Data and is part of a series of safety guidelines published by CCPS. The book was written and published by a task force of AIChE members from industry with the help of Science Applications International Corporation. In the last few years there has been increasing emphasis on risk reduction in the chemical process industry. This has resulted in an increased use of tools such as quantitative risk analysis, which require component reliability data. This new CCPS publication will partially fulfill that requirement. Also, the book is intended to act as a resource for any future data collection exercises in the chemical process industry. The book contains generic reliability data from a wide variety of sources. There are descriptions of those sources (and others) for further reference. Also, there is a section explaining how to collect data and how to convert it to the same format as the guidelines data.

Gary R. Van Sciver

Incidents Data Banks

IFP Databanks on Offshore Accidents

IFP has created two databanks concerning offshore accidents. The first, called TANKER, concerns accidents to carrier vessels having caused an offshore oil spill of at least 500 metric tons. The second, called PLATFORM, concerns accidents to drilling vessels or offshore oil platforms (whether used for exploration, development, production or as living quarters) having caused a shutdown of activity for at least 24 hours. This paper describes how the banks were created. The approach chosen presents the data in two forms: a series of synthesis data sheets, based on a press brief containing all the articles, reports and documents available on the accident, which clearly describe how each accident occurred and its consequences; and a computerized databank giving, for each accident, about 30 parameters selected according to criteria making them suitable for use in counting or statistics. The problems encountered are mentioned, and a number of practical applications are given.

A. Bertrand, L. Escoffier

OCAAR: A Tool for Incident Analysis, Reporting and Processing

This paper describes a dedicated software tool to analyse incidents, to statistically process them and to edit reports. This tool, named OCAAR, is made up of two programs: one for each local unit (e.g. a plant), and one for the headquarters (to combine the information coming from each local unit). The incident analysis is based on two methods: the cause tree method and the incident chain method. They allow the incident causes to be analysed, and the succession of the disruptive factors leading to the incident to be identified.

A. Leroy, M. Gaboriaud, J. C. Michon

Common Cause Data

Common-Cause Failures — Evidence from Sellafield

In this study, records from the Sellafield nuclear chemical plant were analysed for evidence of common-cause failure. The component-specific Beta factors estimated from these data were significantly lower than the globally recommended value of 0.2 and compare favourably with estimates based on large data sets collected in the USA. Component-specific Beta factors should be employed in risk assessments to predict common-cause effects. Where such data are not available, a conservative global value of 0.1 appears to be appropriate for most chemical plant, based on the information analysed in this study.

T. R. Moss, G. T. Sheppard
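
In its simplest formulation, a Beta factor of the kind estimated in this study is the fraction of a component's failures attributable to common causes. A minimal sketch with invented counts:

```python
# Sketch: component-specific Beta factor from failure records.
# Beta is the fraction of failures that are common-cause (several like
# units failing from a shared cause); counts are invented for illustration.

independent_failures = 57
common_cause_failures = 3

beta = common_cause_failures / (independent_failures + common_cause_failures)
print(f"estimated Beta = {beta:.3f}")  # well below the global value of 0.2
```

A risk assessment would then take the probability of a common-cause failure of a redundant group as roughly beta times the total failure probability of one component, rather than the (much smaller) product of independent failure probabilities.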

Stochastic Reliability Analysis — Its Application to Complex Redundancy Systems

Background and formalism of a new approach to reliability analysis of redundancy structures are briefly presented. An extension of the formalism from the component level to the system level is carried out and the general equation for single and multiple failure frequencies in k-redundant symmetric systems with n serial components per subsystem (redundancy) is derived.

Peter Doerre
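
For intuition on redundancy structures of this kind, a simple special case: if the n channels were identical and independent (an assumption the common-cause papers in this session warn against), the survival probability of a system needing k working channels out of n follows the binomial formula. Values below are illustrative.

```python
from math import comb

# Sketch: probability that at least k of n identical, independent
# redundant channels survive (binomial model; numbers illustrative).

def k_out_of_n(k, n, r):
    """Reliability of a system needing >= k working channels out of n,
    each channel surviving independently with probability r."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

r = 0.95
print(f"2-out-of-3: {k_out_of_n(2, 3, r):.5f}")
print(f"1-out-of-2: {k_out_of_n(1, 2, r):.5f}")
```

The paper's formalism goes beyond this by treating single and multiple failure frequencies and serial components within each redundancy; the binomial case is only the independent baseline.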

The Distributed Failure Probability Approach to Dependent Failure Analysis, and Its Application

The Distributed Failure Probability (DFP) approach to the problem of dependent failures in systems is presented. The basis of the approach is that the failure probability of a component is a variable. The source of this variability is the change in the “environment” of the component, where the term “environment” is used to mean not only obvious environmental factors such as temperature etc., but also such factors as the quality of maintenance and manufacture. The failure probability is distributed among these various “environments” giving rise to the Distributed Failure Probability method. Within the framework which this method represents, modeling assumptions can be made, based both on engineering judgment and on the data directly. As such, this DFP approach provides a soundly based and scrutable technique by which dependent failures can be quantitatively assessed.

R. P. Hughes
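
The core of the DFP idea can be sketched numerically: with the failure probability distributed over “environments”, the probability that two like components fail together is the expectation of p squared over that distribution, which exceeds the square of the mean (the independent-failure prediction). Weights and probabilities below are invented.

```python
# Sketch of the Distributed Failure Probability idea: a component's
# failure probability p varies with its "environment" (maintenance
# quality, manufacture, conditions); the chance of m like components
# failing together is the expectation of p**m over the environments.

environments = [
    # (weight of this environment, failure probability in it) - invented
    (0.70, 1e-4),  # good maintenance / benign conditions
    (0.25, 1e-3),  # average
    (0.05, 1e-2),  # poor maintenance / harsh conditions
]

def prob_all_fail(m):
    return sum(w * p**m for w, p in environments)

single = prob_all_fail(1)
double = prob_all_fail(2)
print(f"P(one fails)   = {single:.3e}")
print(f"P(both fail)   = {double:.3e}")
print(f"independent**2 = {single**2:.3e}")  # underestimates the dependence
```

The gap between the last two lines is precisely the dependent-failure effect: both components sitting in the same poor environment fail together far more often than independence predicts.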

DEFEND—A Dependent Failures Database

The traditional sources of operational reliability data do not record system and component unavailabilities in a way which is amenable to the support of dependent failures modelling. Progress has been made in developing techniques for analyzing the data so as to detect and express dependencies, but different data sources require different techniques and express differing views of component and system availability. Given a large amount of inhomogeneous data from event reports and component failure data, NCSR undertook the development of a new dependent failures database which would contain all of the available data in an upgraded and rationalised form. This database, DEFEND, was designed so that it could potentially support a range of dependent failures models.

A. M. Games

Uncertainties - Source and Propagation

An Analytic Method to Estimate Uncertainty in a Risk Function

This paper presents a method for estimating the uncertainty in a probabilistic risk analysis. Based on the estimation of probability intervals of some basic parameters of a risk function, the method estimates a probability interval for the risk function. An example of risk analysis of releases of flammable materials is shown.

Henrik Kortner
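
The interval-propagation idea can be sketched for a simple multiplicative risk function (an illustration, not the method of the paper): since such a function is monotone increasing in each factor, the bounds of the output interval come from combining the interval endpoints. All intervals below are invented.

```python
# Sketch: propagating probability intervals through a simple risk function
# risk = frequency * P(ignition) * consequence. Each factor is known only
# as an interval; for a function monotone increasing in every argument,
# the output bounds come from the corresponding endpoint combinations.

freq = (1e-4, 5e-4)        # release frequency per year (interval, invented)
p_ign = (0.05, 0.2)        # ignition probability (interval, invented)
consequence = (1.0, 10.0)  # consequence measure (interval, invented)

risk_lo = freq[0] * p_ign[0] * consequence[0]
risk_hi = freq[1] * p_ign[1] * consequence[1]
print(f"risk interval: [{risk_lo:.1e}, {risk_hi:.1e}] per year")
```

For non-monotone risk functions the endpoint rule no longer suffices and the extremes must be searched for over the whole box, which is where analytic methods like the paper's earn their keep.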

Modeling of Fuzzy Measurements in Reliability Estimation

There are two main sources of uncertainty in reliability data. The first kind of uncertainty is stochastic variation, analyzed by stochastic models. The second kind of uncertainty is the fuzziness of observed lifetimes. This kind of uncertainty is neglected in standard statistical inference; therefore, statistical inference procedures have to be adapted to take care of fuzzy lifetime data. Models to analyze fuzzy lifetime data are given in both the classical frequentist and the Bayesian context.

Reinhard Viertl

Evaluation of the Probability Distribution of Evacuation Time from a Light Railway Transport Station

In order to support the design of the Naples Light Railway Transport system, Ansaldo performed a study aimed at evaluating the evacuation time t_e from the stations. This time depends on the parameters that determine station layout, accident propagation and people motion, which are affected by considerable uncertainties. In this paper the methodology followed to evaluate the t_e probability distribution is presented. The analysis was performed through the following steps: identification and estimation of uncertainty sources; sampling of the input variables of the deterministic evacuation model; numerical simulation of the evacuation process with the EPDES code; response surface synthesis using stepwise regression; and uncertainty propagation by means of Latin Hypercube Sampling. Finally, the numerical results are presented and the major findings of the study are discussed.

B. Dore, L. Lambardi, L. Lazzari, V. Siciliano
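
The Latin Hypercube Sampling step mentioned above can be sketched in a few lines: each input variable's range is split into n equal-probability strata, one sample is drawn per stratum, and the strata are paired across variables in random order. This is a generic illustration on the unit hypercube, not the EPDES/response-surface machinery of the paper.

```python
import random

# Sketch of Latin Hypercube Sampling: for each of `dims` variables, draw
# one point in each of n equal strata of [0, 1), then shuffle so the
# strata of different variables are paired at random.

def latin_hypercube(n, dims, rng=random.Random(0)):
    columns = []
    for _ in range(dims):
        col = [(i + rng.random()) / n for i in range(n)]  # one per stratum
        rng.shuffle(col)                                  # random pairing
        columns.append(col)
    return list(zip(*columns))  # n points, each with `dims` coordinates

points = latin_hypercube(10, 2)
print(points[:3])
```

Compared with plain Monte Carlo, the stratification guarantees every part of each input's range is represented, which is why far fewer model runs suffice for uncertainty propagation.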

Computer Aided Risk Analysis

Computer Aided Reliability and Risk Assessment

Activities in the fields of reliability and risk analyses have led to the development of particular software tools which now are combined in the PC-based integrated CARARA system. The options available in this system cover a wide range of reliability-oriented tasks, like organizing raw failure data in the component/event data bank FDB, performing statistical analysis of those data with the program FDA, managing the resulting parameters in the reliability data bank RDB, and performing fault tree analysis with the fault tree code FTL or evaluating the risk of toxic or radioactive material release with the STAR code.

R. Leicht, H. J. Wingender

Hazard Identification: A Proposal for a New Development

Risk assessment is a complex task performed to quantify and minimise the hazards associated with potentially dangerous installations. Hazard identification represents the first step and can be performed by means of different methodologies. Some of these methodologies allow the collection of information on possible plant behaviours in a structured way. However, this information cannot easily be used for further investigations (i.e. accident quantification). The paper suggests the application of a simple technique for hazard identification and fault tree construction that can easily be implemented in a computer program.

S. Contini, N. Labath

A Prototype Expert System to Perform Safety Evaluation of Alternative Architectures for Offshore Platforms

Basic safety evaluations should be carried out before all the key design decisions are taken. This document describes the architecture of an expert system prototype developed to perform safety analysis of alternative architectures for offshore gas platforms.

A. Lancia, L. Scataglini, L. Bortolazzo, A. Romano

Reliability and Incidents Data Acquisition and Processing

Report on the On-Going EUREDATA Benchmark Exercise on Data Analysis

In April 1987 the JRC was charged by the Assembly of the EuReDatA members with the organization and the coordination of a Benchmark Exercise (BE) on data analysis. The main aim of the BE is a comparison of the methods used by the various organizations to estimate reliability parameters and functions from field data. The reference data set was to be constituted by raw data taken from the Component Event Data Bank (CEDB), managed by the JRC. The CEDB is a centralized bank, which collects data describing the operational behaviour of components of nuclear power plants operating in various European Countries.

A. Besi, A. G. Colombo

Expert Opinions as Data Source: Methods and Experiences

Formalized procedures for using expert judgment are expected to contribute substantially to improving the quality of reliability data such as the failure rates of mechanical components. Key requirements for these procedures are that they provide solutions to such questions as: how do we encode, evaluate and combine expert opinions? Three different categories of models for eliciting and combining expert opinions have been selected for detailed investigation: weighted averaging, Bayesian models and psychological scaling models. Within each category, one model has been selected or developed that is expected to be an improvement over the models suggested so far within that category. These models have been made operational and have been tested and evaluated in case studies on real-world problems in industry. The paper discusses the models developed and presents some results of the case studies.

Jacques F. J. Van Steen, Roger M. Cooke

The Analysis of Electronic Component Reliability Data

The paper describes the development of analysis procedures for data within the Electronic Component Reliability Database at Loughborough University of Technology. The database includes data acquired from Plessey, GEC, STC and two Danish companies. Earlier papers have described the content and administration of the database and the path to establishing it. The current paper concentrates instead on the analysis of the data within the base. The structure of the base has facilitated an exploratory approach to analysis and enabled the identification and calibration of relevant environmental, mounting, screening and other influences upon failure behaviour to be made from the data itself rather than from prior assumptions.

J. M. Marshall, J. A. Hayes, D. S. Campbell, A. Bendell

An Intelligent End-User Interface for the Collection and Processing of Accident Data

Incident data collection is hindered by the need to use bulky standard forms with many questions designed to cover a wide range of varying types of incident. The majority of such questions are irrelevant for any one incident.

A. R. Hale, J. Karczewski, F. Koornneef, E. Otto, L. Burdorf

Physics Models for Reliability Data Processing

Normally, in examining and processing reliability failure data, people apply the flat part of the “bathtub” curve; experience has verified that many simple electric and electronic components follow such a distribution: they are characterized by a long period with few, random failures. But experience has also shown that many types of components do not have a constant hazard rate; for example, mechanical components normally have a wear-out characteristic and a continuously increasing hazard rate throughout their life. Besides, if only two or three items are tested for a short time, little in the way of accurate quantitative conclusions can be drawn by applying classical statistical laws and assuming a bathtub distribution. Bayesian methods may be taken into consideration, but they can be applied only when significant data on a set of similar items (components) are available. On the contrary, we must remark that the preferred method to compute the reliability of an item is to consider the basic physics of the failure: that is, to predict the component failure rate in terms of a statistical variation of the physical parameters characterizing its operating conditions. On this subject, we must say that if one agrees to characterize a component by a hazard rate function and to evaluate the hazard parameters by testing, the result is a macroscopic model. Such a model is not concerned with what happens inside the black box containing the part but only with the statistics of the part's performance. To delve inside the black box, one must postulate various hypotheses about the microscopic behaviour of the device and prove their validity; in this type of reliability model, the causes of the component failure are considered.
We have called this type of method for processing reliability data “physics models”; they mainly consist of the following three basic steps: 1. Analysis of the physical laws characterizing the normal operation of the component (part). 2. Singling out the parameters of the above laws affected by stochastic variations, in order to choose and define the failure distributions to be applied. 3. Evaluation, on the basis of the material data and of the component characteristics, of the limit variation range of the parameters mentioned in point 2, in order to evaluate exactly the foreseen failure rate distribution of the component. ENEA has carried out a data collection on typical components of the French fast reactors Rapsodie and Phenix; these data have been registered on a computer system. In consideration of the experimental nature of these components, of the small number of items followed, and of the failures that occurred, the classical and Bayesian methods do not allow the evaluation of sufficiently significant reliability data. To this end, ENEA-VEL has set up some physics models on the basis of the physical laws governing the components' operation and of the failure causes. A brief description of these methods, the results obtained by their application to some components, and a comparison of these results with those obtained by applying the classical and Bayesian methods are the subject of the present paper.

R. Righini, G. Zappellini
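
The contrast drawn above between the flat bathtub section and mechanical wear-out can be made concrete with the Weibull hazard rate h(t) = (beta/eta) * (t/eta)^(beta-1): shape parameter beta = 1 gives the constant hazard of the flat section, while beta > 1 gives the continuously increasing hazard typical of wear-out. Parameter values are illustrative, not taken from the Rapsodie/Phenix data.

```python
# Sketch: constant vs. wear-out hazard rates via the Weibull distribution.
# h(t) = (beta/eta) * (t/eta)**(beta - 1); beta = 1 reproduces the flat
# "bathtub" section, beta > 1 an increasing (wear-out) hazard.
# eta and beta values below are assumptions for illustration.

def weibull_hazard(t, beta, eta):
    return (beta / eta) * (t / eta) ** (beta - 1)

eta = 10_000.0  # characteristic life, hours (assumed)
for t in (1_000.0, 5_000.0, 9_000.0):
    h_const = weibull_hazard(t, 1.0, eta)  # electronic-like, constant
    h_wear = weibull_hazard(t, 2.5, eta)   # mechanical, wearing out
    print(f"t={t:>6.0f} h  constant: {h_const:.2e}  wear-out: {h_wear:.2e}")
```

Fitting only the constant-hazard model to a component whose true beta is well above 1 badly understates late-life failure rates, which is one motivation for the physics models the paper proposes.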

Human Reliability

Human Reliability Assessment and Probabilistic Risk Assessment

Human reliability assessment (HRA) is used within Probabilistic Risk Assessment (PRA) to identify the human errors (both of omission and commission) which have a significant effect on the overall safety of the system and to quantify the probability of their occurrence. There exists a variety of HRA techniques, and the selection of an appropriate one is often difficult. This paper reviews a number of available HRA techniques and discusses their strengths and weaknesses. The techniques reviewed include decompositional methods, time-reliability curves and systematic expert judgement techniques.

D. E. Embrey, D. A. Lucas

Human Reliability Data Collection for Qualitative Modelling and Quantitative Assessment

Effective human reliability assessment requires both qualitative modelling of possible errors and their causes, and quantitative assessment of their likelihood. This paper considers the available sources for both qualitative and quantitative data collection. A classification for different types of data is proposed. Currently used methods of gathering data using operational experience and simulators are discussed in relation to these data types.

D. A. Lucas, D. E. Embrey

Quantification of Human Reliability

Human reliability has been brought increasingly into focus during the past years, and several quantitative models for human reliability have been proposed. In a man-machine system analysis of operator tasks in the control room of one of our PVC plants, four alternative models were used for quantification: HEART, TESEO, PROF, and data from Swain & Guttmann’s “Handbook of Human Reliability Analysis”. Based on the experience from this study, it was decided to develop PROF further. This model uses a human reliability data base as a basis for estimating error rates. Most of the data in this data base were collected from open reports. As the amount of such data is limited, it was natural to concentrate on further data collection. At Sandsli, Bergen, Norsk Hydro has developed a full-scale true-copy simulator of the control room of the Oseberg oil production platform in the North Sea. Operator training was first performed using the simulator in 1988. This opportunity was used to collect more data on human reliability.

Unni Nord Samdal, Jon Arne Grammeltvedt
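
Of the models named above, TESEO is the most compact: it estimates a human error probability as the product of five factors (type of activity, temporary stress/time available, operator qualities, state of anxiety, environmental ergonomics). The factor values below are illustrative choices, not the ones used in the PVC plant study.

```python
# Sketch of the TESEO-style multiplicative human error model. The five
# K factors are chosen here purely for illustration; real applications
# select them from published TESEO tables for the task at hand.

K1 = 0.01  # type of activity (routine task requiring attention) - assumed
K2 = 0.5   # temporary stress / ample time available - assumed
K3 = 1.0   # operator qualities (average training) - assumed
K4 = 2.0   # state of anxiety (potential emergency) - assumed
K5 = 1.0   # environmental ergonomics (good interface) - assumed

hep = K1 * K2 * K3 * K4 * K5
print(f"estimated human error probability: {hep:.3f}")
```

The multiplicative structure is what makes such models easy to apply and also easy to disagree over: a one-step change in two factor tables shifts the result by an order of magnitude, which motivates the data collection described in the abstract.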

Treatment of Uncertainties in Human Reliability Analysis

The modelling of operator errors has been discussed in several contexts and also in various benchmark studies. The estimates of operator error probabilities have differed from each other by several orders of magnitude depending on the modelling approach. In this paper, reasons for this phenomenon are discussed. Furthermore, the application of an approach based on influence diagrams to the estimation of error probabilities is presented. The use of influence diagrams proved to be an effective tool in the modelling and the identification of the major uncertainties. However, some problems concerning data uncertainty discussed in the text remain unresolved.

Pekka Pyy, Urho Pulkkinen

Information and Communication Technology in the British Coal Industry—Systems Management and the Organisation and Reliability of Production and Maintenance Work

In the last ten years the British Coal Industry has introduced information and communication technology into the process of production in order to increase the reliability of its data collection, and to reorganise work and make it more flexible. In part this is to increase the machine operating time within machine available time by a speedier and more accurate identification of machine performance and likely breakdowns. This, for example, allows the reorganisation of both production and maintenance workers, and in particular the assessment and calculability of their availability in number, place, and time. The focus of my paper is the interface between this new technology, the system of management and the organisation of work in coalmining. The technology by which information concerning the production process and the reliability of machinery is gathered is itself a major factor in both how the data is collected and how the work of coalminers is organised. It is argued that it is not just the technology of production which determines human and machine reliability but how it is organised as a whole process of production and in what way information and communication technology is applied to this process. My paper therefore analyses those interfaces between production technology, information and communication technology, the particular structure and system of management which has been developed, and the organisation of work. It focuses upon those changes in the British Coal Industry since the 1970s and points forward to possible effects upon management, human reliability and the economic structure of the industry in the 1990s.

Stephen Heycock

Probabilistic Safety and Availability Assessment

ESFAS: An Information System on Worldwide Nuclear Power Stations

The idea at the basis of the development of the data base “ESFAS—Engineered Safety Features and Auxiliary Systems” was to create a centralized information system in which nuclear power stations are described from the standpoint of their safety features.

M. Melis, B. Lisanti

Availability Assessment of a Technological Railway Information System

In the paper, an assessment of the availability and reliability of the computer-supported railway technological information system in Slovenia is described. Two versions of a Markov model, comprising hardware failures and software and human errors, have been used to calculate the system's steady-state availability and mean time to failure. The forms for the collection of field data on failures and repairs of elements of the central and communication parts of the system are presented.

S. Hanžel, A. Hudoklin
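
The simplest building block of such a Markov model is a single repairable element with two states (up/down), constant failure rate lam and repair rate mu, whose steady-state availability is mu/(lam + mu). The rates below are invented for illustration; the paper's model combines many such elements plus software and human error contributions.

```python
# Sketch: two-state (up/down) Markov model of a repairable element.
# Steady-state availability = mu / (lam + mu); rates are illustrative.

lam = 1 / 2_000.0  # failure rate per hour  (MTTF = 2000 h, assumed)
mu = 1 / 8.0       # repair rate per hour   (MTTR = 8 h, assumed)

availability = mu / (lam + mu)
mttf = 1 / lam
print(f"steady-state availability: {availability:.5f}")
print(f"mean time to failure: {mttf:.0f} h")
```

Equivalently, availability = MTTF / (MTTF + MTTR); the Markov formulation earns its keep once states for software faults, human errors and communication links are added and the chain no longer factorizes.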

Assessment of Failure Probabilities of Launch and Orbital Manoeuvres of Satellites

The question of failures leading to the abortion of space missions is important in terms of financial and scientific expenditure. However, if nuclear devices such as power sources of satellites are involved and re-entries are to be envisaged, these failures become a matter of safety, too. The results found are that abortive mission failures lie in the range of 10% probability. A breakdown of accessible event data provides a sound basis for these results. The more complex question of safety can only be touched upon here.

R. Leicht, H. J. Wingender

Investigation of the Safety of a Ship Propulsion System by Monte Carlo Technique

A simulation model of safety/risk for investigating a ship's systems, taking into consideration their usage and maintenance processes, is presented in the paper. The study of a ship's propulsion system is presented and discussed as an example of safety/risk analysis in the area of seagoing ships. Numerical simulation methods have been used. The emphasis of the paper is on the presentation of the safety/risk and reliability analysis of the system, which may be useful for ship design.

A. Brandowski

From the Plant Risk Analysis to the Area Risk Management: Problems and Perspectives

The application of risk analysis techniques to major hazard plants has increased in recent years, and these studies are now common practice in the design and authorization procedures for major hazard plants. A natural development of this approach is the consideration of groups of plants with the associated infrastructures, to assess the risk at an area level. This approach is not yet consolidated, and Authorities are now starting to experiment with its benefits. The paper presents the Authors' point of view on the problem, based on their experience in developing and testing risk assessments for industrial areas.

G. Uguccioni, A. Crotti, S. De Sanctis

Risk Analysis of Dangerous Goods Transport in Spain

Between December 1986 and October 1988, three projects on risk analysis of dangerous goods transport by road and rail were developed in Spain. The following products have been studied: chlorine, butadiene, styrene, propylene oxide, anhydrous ammonia, anhydrous hydrofluoric acid, acrylonitrile, propane, butane, gasoline, gas-oil, toluene, methyl methacrylate, hydrochloric acid solution and caustic soda solution.

C. Gonzalez, J. A. Domenech, A. Lazaro, J. M. Renau, A. Tasias

Criteria for the Brazilian Directive on High Risk Plants Recent Developments

A directive for the classification of hazardous installations and for the specification of the content of the required risk assessment will be issued by the State of Sao Paulo and subsequently by the Federal Government of Brazil. A critical review of the Seveso Directive suggested the development of new criteria to improve the consistency of the specified masses with the intrinsic hazard of the substances, to take into account special physical conditions of the process that can modify the hazard, to deal with plants using many different hazardous substances, to consider the characteristics of the scenario, and to specify the contents of the risk analysis and the criteria for emergency planning.

Tania Maria Amorim, Gian Carlo Bello, Rosana Santalucia

Design Optimization of New Plants by Reliability Engineering Methodologies: Application to a Subsea Pumping Station

Reliability Engineering methodologies may be successfully applied to the analysis of new plants for which no direct experience exists. For these plants, risk analysis represents the only possibility to optimize the design, as an alternative to “a posteriori” experience. In this context the conventional methods for risk analysis have to be calibrated to the particular needs of the case to be analyzed. The paper shows the main results of an application carried out in SNAMPROGETTI in the framework of the SBS (Subsea Booster System) project, involving the design of a remotely controlled pumping station to be installed in deep sea (down to –1000 m) for oil/gas reservoir development. The installation and maintenance costs of the system require that system reliability be ensured; the design solutions will therefore be defined on the basis of a reliability analysis. The procedure that has been applied starts from the calibration of reliability data to enable, by FMEA, HAZOP and Fault Tree methods, the evaluation of system availability and the identification of critical components. On the basis of the reliability model, indications are provided about the expected reliability level for each sub-system and the optimum design configuration in order to achieve the economic feasibility targets. A sensitivity analysis is also applied to evaluate the influence of parameter estimation on the final results, and the benefits due to increased component quality level, different functional solutions or component redundancies.

Giovanni Uguccioni, Fausto Zani, Simberto Senni

Risk Assessment of Operational Aspects in Offshore Drilling

A new and different type of risk assessment of the operational aspects of offshore drilling is discussed. Main emphasis is given to a shallow gas risk analysis which was performed to evaluate different risk-reducing measures. Consideration is also given to a relatively new hazard identification technique and its use in offshore drilling, as well as to the importance of risk management.

R. Østebø, T. Gjerstad

Availability Assessment of a Complex Distributed Control System of a Petrochemical Plant

A particular application of reliability techniques is being developed to evaluate the dependability (availability, integrity and security) of a large distributed control system (DCS) for an ethylene plant. Each critical plant section is associated with its own control subsystem, which is analyzed taking into account the major field unavailabilities contributing to global and section outage figures. Control system availability (hardware and software) is evaluated: the fault-tolerant approach of each section is assessed by applying Stochastic Petri Net theory to the computerized control subsystems, following a prior extensive standard fault tree analysis conducted over the whole plant. A cost-benefit balance is then highlighted as a decision-making tool, revealing significant performance under derated final-control conditions.

G. Picciolo

Reliability Assessment of a Charging Machine for Radioactive Waste Disposal

The charging machine designed for use in an experimental programme for the disposal of vitrified high-level radioactive waste canisters in the ASSE salt mine was the subject of a design-stage reliability assessment. The aim was to allow a judgement of the general reliability and availability of parts of the machine as well as of the system as a whole. The assessment was performed according to the fault tree method, and the results are presented in the form of a reliability profile. It was found that those subsystems depending on the hydraulic unit contribute most to the overall failure frequency. No additional weak points were identified. The total failure frequency per year of operating time was estimated to be 29. However, the corresponding availability of the system amounts to more than 95%, thus meeting the requirements for such a machine.

R. Leicht, B. Puttke, H. J. Wingender
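
The two figures quoted in the abstract can be cross-checked with the standard relation availability = uptime / (uptime + downtime): a failure frequency of 29 per operating year and a 95% availability target together bound the tolerable mean downtime per failure. The 8760 h/year figure and the bound itself are our own arithmetic, not taken from the paper:

```python
# Cross-check: with f failures per operating year and availability target A,
# the mean downtime per failure must satisfy d <= (1 - A) * 8760 / f hours.
def max_mean_repair_time(failures_per_year: float, availability: float) -> float:
    """Largest mean downtime per failure (hours) compatible with the target."""
    downtime_per_year = (1.0 - availability) * 8760.0   # allowed outage hours
    return downtime_per_year / failures_per_year

d = max_mean_repair_time(29, 0.95)
print(round(d, 1))  # about 15.1 h per failure
```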

The Availability of a Set of Inhomogeneous Machines

This paper presents a solution of the machine availability problem in which N different machines are looked after by a team of r operatives. The run time of each machine is assumed to have a general distribution, different for each machine, and the repair times are assumed to have a negative exponential distribution with different means for the different machines. An explicit expression is given for the probability that a particular set of machines is running in the steady state; from this, other useful measures for the system can be obtained. It is shown that these quantities depend on the run time distributions only through their means.

B. D. Bunday, E. Khorram
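
The insensitivity result quoted in the abstract has a simple special case: when repair capacity is ample (effectively r >= N) the machines behave independently, and each machine's steady-state availability depends on its run-time distribution only through its mean. A hedged sketch of that special case follows; the paper's general r < N formula is more involved and is not reproduced here:

```python
from math import prod

def prob_set_running(mean_run, mean_repair, running_set):
    """P(exactly the machines in running_set are up), assuming independent
    machines (ample repairmen): A_i = m_i / (m_i + rho_i)."""
    avail = [m / (m + r) for m, r in zip(mean_run, mean_repair)]
    return prod(a if i in running_set else 1 - a for i, a in enumerate(avail))

# Two machines: mean run times 9 and 4, mean repair times 1 each.
p = prob_set_running([9.0, 4.0], [1.0, 1.0], {0, 1})
print(round(p, 3))  # 0.9 * 0.8 = 0.72
```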

Availability of the Pressure Relief Panel System at Pickering “A” Nuclear Generating Station

Based on the overall containment system reliability requirements, targets have been apportioned for the new pressure relief panel system at Pickering NGS A. The availability targets for the rupture panels and the valves were derived from the overall containment system requirements. A method has been developed to estimate, with a given confidence, the availability of the rupture panels, and an acceptance sampling plan for the rupture panels has been developed to ensure that the availability targets are met. The reliability of the bypass valves is determined using fault tree analysis techniques.

R. Rajagopalan, F. Camacho

Comparative Availability and Reliability Assessment of Design Options for the Secondary Sodium Loops of the EFR

The EFR (European Fast Reactor) project has entered a conceptual study period in which different design alternatives are compared with respect to feasibility, safety and economic aspects. This paper describes a comparative probabilistic availability and reliability assessment of alternative design options for the secondary sodium loops. These loops will provide heat transfer from the reactor pool to the water-steam (power-generating) side, so a high operational availability of the secondary loops during the plant lifetime is essential for economic power generation. Additionally, high reliability is required to fulfil the operational decay heat removal function in case of a reactor trip. Availabilities and reliabilities of the different options were assessed using failure mode and effect analysis and the fault tree method.

Hartmut Pamme

Risk Analysis of Areas with High Concentration of Industrial Activities: Methodology Used for Priolo and Naples Areas (Italy)

The paper describes the methodology adopted for the execution of risk analyses of two areas with a high concentration of industrial activities.

R. Fox, M. Melis

Risk Assessment for Transportation of Dangerous Materials — A Comparative Study

A general methodology for the risk assessment of the transportation of dangerous materials is presented in this contribution and is illustrated in a comparative study dealing with the rail and road transport of chlorine on an important North Italian route.

D. Diamantidis, A. Guiducci, C. Buttasi, G. Peretti, F. Battista

Effect of Maintenance Factors on Risk and Adequacy of Power Generation

The overall availability assessment of a generating unit involves an appreciation of the forced outage parameters for the unit and its maintenance requirements. This paper introduces an algorithm for the maintenance scheduling of generation systems that levelizes the system risk throughout the year. The moment-cumulant method is used in developing the technique. The paper investigates the effects of the system forecast peak load and of the forced outage rate of a generating unit on the maintenance schedule of the generation system. Applications to the standard IEEE Reliability Test System (RTS) are used to illustrate these studies.

Farag Ali El-Sheikhi, Roy Billinton
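
The levelization idea in the abstract can be caricatured with a greedy toy: schedule each unit's maintenance into the period where the remaining reserve (capacity minus forecast peak) is currently largest. This sketch is our own simplification for illustration only; the paper's moment-cumulant method computes actual risk indices rather than raw reserves:

```python
def levelized_schedule(capacity, peaks, unit_mw):
    """Assign each maintained unit (MW taken offline) to the period with the
    largest current reserve; returns one period index per unit, largest first."""
    reserve = [capacity - p for p in peaks]
    schedule = []
    for mw in sorted(unit_mw, reverse=True):      # largest units first
        k = max(range(len(reserve)), key=reserve.__getitem__)
        schedule.append(k)
        reserve[k] -= mw                          # reserve shrinks while on outage
    return schedule

# Four periods with seasonal peaks; two units of 100 MW and 50 MW.
print(levelized_schedule(1000, [900, 700, 650, 880], [100, 50]))
```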

Feedback of Reliability into System Design, Operation and Management

An Analysis of Accidents with Casualties in the Chemical Industry Based on Historical Facts

On behalf of the Directorate-General of Labour, TNO carried out an analysis of accidents with casualties which occurred in the chemical industry. From the databank FACTS, 700 accidents were selected. The selection was focused on accidents during industrial activities with hazardous materials. The aim of the analysis was to find out how casualties occur and to give concrete suggestions and recommendations to improve safety in daily practice. The first results are presented in this paper. By the end of 1988 the full report of the analysis will be submitted to the Directorate-General of Labour, and most probably it will become available as a publication of the Directorate.

L. J. B. Koehorst

Reliability Follow-Up in the Pre-Production Process at Volvo Car Corporation

The Product Engineering Division of Volvo Car Corporation has had, for about 15 years, a special Quality Assurance Department which is responsible for the Reliability Programme of the division. Four major activities are involved in this programme (Fig. 1 roughly describes how they are connected). (1) Specification of reliability levels: general reliability demands from the Product Planning Department (e.g. "best in class") are translated into technical terms (defects per vehicle, L50, probability of failure) using competitor information from marketing surveys, test results from motor magazines, testing at VCC, etc. (2) Reliability analysis during the pre-production process, to assure that the reliability requirements will be met: new designs and major design changes are analysed with techniques such as FMEA and FTA; for this work, field data and test results are necessary to make confident reliability assessments. (3) The long-term field follow-up programme (LU) involves about 400 production cars per model year, from which repair data are collected and stored in a database; this makes it possible to measure the reliability level in the field and to see whether it differs from the specifications. The LU programme has been in place for 10 years. (4) Follow-up of pre-production test cars, the latest activity in the Reliability Programme (started in 1986), which is further described in this paper. Together, these activities give VCC a powerful tool to assure that the final output (i.e. cars delivered to customers) meets Volvo standards.

A. Wendel, O. Lindwall

Issues Associated with the Development of Nuclear Power Plant "Program Effectiveness" Indicators for Regulatory Use

This paper outlines several data issues related to the development of programmatic indicators for monitoring nuclear power plant (NPP) safety. The programmatic indicators are used by management to monitor program effectiveness. The development of program effectiveness indicators for use by regulatory organizations, which are external to the "program", requires an entirely different approach than the development of indicators for internal program use. A regulatory agency is required to manage NPPs according to its objective, which is to protect public health and safety, and hence its indicators should reflect this objective. The NPPs, on the other hand, exist for productivity and profitability, and their indicators should therefore primarily reflect those goals. A safe plant may not be an economic plant, and the reverse is also not necessarily true; profitability and safety do not coincide at all times. This paper outlines methods for developing programmatic indicators for regulatory use, based on work sponsored by the US Nuclear Regulatory Commission (NRC). The methods were developed within the context of one specific US NPP activity, the maintenance program, and include frameworks for the systematic identification and evaluation of candidate indicators, together with statistical techniques for indicator validation. The paper presents preliminary conclusions drawn from the methods applied and the data analysis performed. From the methods applied, it is concluded that regulatory programmatic indicators should monitor data related to program outputs (i.e. observable events) rather than data related to program processes (i.e. inputs and throughputs), although it is recognized that process indicators may be more suitable for internal plant management use.
As a result of the data analyses, some preliminary statistical correlations of plant safety to plant performance outputs have been identified; these relationships show the potential for developing leading indicators of safety, and for combining regulatory and NPP senior-management program effectiveness indicators.

E. Lois, J. Wreathall, J. Fragola

The Role of a Disturbance Analysis and Surveillance System for the Operation and Management of Complex Plants

The correctness of human action, especially in complex situations, has very often proved to be a critical point, as has the correctness and unambiguousness of the information made available. This consideration led to the development of Disturbance Analysis and Surveillance Systems (DASS) aimed at assisting the operator in decision-making regarding the operation or management of plants. Unfortunately, the DASSs developed so far are not capable of foreseeing the potential dynamic behaviour of the transients or accidents under surveillance. The studies conducted by Snamprogetti have led to the conclusion that it may be possible to achieve DASSs able to provide, at the same time: information tools based on ergonomic criteria, making it possible to know in advance the probable development and outcome of transients and accidents, and hence the potential impact on the site and the environment; tools for integrated reliability, availability and safety analyses; and tools for the computer-based collection and management of the data coming from the plant, arranged in a Reliability, Availability, Maintainability Data Bank.

S. Messina, R. Galvagni

Data Fusion

Data Fusion Problems in Intelligent Data Banks Interface

The reported work has been motivated by different kinds of data fusion problems encountered in the design of an advanced interface for exploiting heterogeneous reliability parameter sources. The data fusion issues considered originate either in the aggregation of pieces of information obtained from different sources or in the treatment of fuzzy queries addressed to the interface. The aggregation problem is discussed both in the presence and in the absence of confidence estimates for the pieces of information to be fused. The different combination problems are dealt with in the framework of possibility theory.

S. A. Sandri, A. Besi, D. Dubois, G. Mancini, H. Prade, C. Testemale

Measures of Dissimilarities for Contrasting Information Sources in Data Fusion

The information content of data coming from a given source is modelled and formalized as a probability distribution and, as such, considered as a point in a function space, where a concept of distance can be introduced. In such a space, the Kullback-Leibler information is a contrast function able to measure dissimilarities between probability distributions, and thus a practical index for clustering different information sources according to the quality of their content.

L. Olivi, R. Rotondi, F. Ruggeri
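
The Kullback-Leibler contrast is straightforward to compute for discrete distributions; since KL itself is not symmetric, a symmetrized form (the J-divergence) is what is commonly used as a dissimilarity for clustering. A minimal sketch, independent of the paper's specific clustering procedure:

```python
from math import log

def kl(p, q):
    """Kullback-Leibler information I(p; q) for discrete distributions."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def j_divergence(p, q):
    """Symmetrized KL, usable as a dissimilarity between two sources."""
    return kl(p, q) + kl(q, p)

p, q = [0.5, 0.5], [0.9, 0.1]
print(round(j_divergence(p, q), 4))
assert kl(p, p) == 0 and j_divergence(p, q) == j_divergence(q, p)
```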

Reliability Modeling and Techniques

Computing Cumulative Measures in Reward Stochastic Processes by a Phase-Type Approximation

This paper illustrates new methodologies that are under study for the analysis of complex degradable systems. The basic idea is to include in a single dependability model both the random variation of the system configuration in time and the effective performance level of the system in each configuration. In order to characterize the system behaviour, cumulative measures are defined. The evaluation of the distribution function of these measures is a difficult task because of the inherent computational complexity. This paper surveys the possibility of solving the above models by approximating the resulting non-Markovian stochastic process by means of a suitably generated Markov chain.

Andrea Bobbio, Laura Roberti, Enrica Vaccarino
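
The core trick referenced in the abstract — replacing a non-Markovian process by an expanded Markov chain — can be illustrated with the simplest phase-type family: an Erlang-k chain of exponential stages whose mean matches a deterministic delay, and whose coefficient of variation 1/sqrt(k) shrinks as stages are added. This is our own toy illustration, not the paper's generator construction:

```python
from math import sqrt
import random

def erlang_sample(k, mean, rng):
    """Phase-type approximation: sum of k exponential stages of mean mean/k."""
    return sum(rng.expovariate(k / mean) for _ in range(k))

rng = random.Random(1)
results = {}
for k in (1, 4, 64):
    xs = [erlang_sample(k, 10.0, rng) for _ in range(5000)]
    m = sum(xs) / len(xs)
    cv = sqrt(sum((x - m) ** 2 for x in xs) / len(xs)) / m
    results[k] = (m, cv)
    print(k, round(m, 2), round(cv, 3))   # CV approaches 1/sqrt(k)
```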

The Use of Data to Identify Systems for Reliability Assessment Using Monte-Carlo Simulation Techniques

Direct Monte-Carlo simulation offers enormous potential for the assessment of system reliability, but the method suffers from serious run-time limitations (12). There are, however, far more complex Monte-Carlo techniques, known generically as variance reduction techniques (7), which can overcome these limitations. Such methods are very difficult for non-mathematicians to implement, and choosing the appropriate technique requires expert knowledge of the systems under assessment. At Aston University we are working on the development of an expert system to select the optimal Monte-Carlo variance reduction technique for the assessment of the reliability of a given system. The main thrust of the work at present involves the identification of problems and the assessment of which variance reduction technique is most applicable. This paper sets out to explain which system attributes data should be collected on, and how these attributes are used to characterize a system with respect to variance-reduction Monte-Carlo methods of reliability assessment.

A M Featherstone, K Ghazvini
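
The gain from a variance reduction technique is easy to demonstrate on a rare-event estimate: below, the probability that an exponential time-to-failure exceeds a large threshold is estimated both crudely and by importance sampling with an exponentially tilted density. This is a generic textbook illustration, not the expert-system logic described in the paper:

```python
import random
from math import exp

def crude(n, lam, t, rng):
    """Crude Monte-Carlo estimate of P(X > t), X ~ Exp(lam)."""
    return sum(rng.expovariate(lam) > t for _ in range(n)) / n

def importance(n, lam, t, rng, lam_is=None):
    """Sample from a slower Exp(lam_is) concentrated on the rare region and
    reweight by the likelihood ratio f(x)/g(x) = (lam/lam_is)*exp(-(lam-lam_is)x)."""
    lam_is = lam_is or 1.0 / t
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(lam_is)
        if x > t:
            total += (lam / lam_is) * exp(-(lam - lam_is) * x)
    return total / n

rng = random.Random(7)
exact = exp(-1.0 * 10.0)   # P(X > 10) for lam = 1: about 4.5e-5
print(crude(10000, 1.0, 10.0, rng), importance(10000, 1.0, 10.0, rng))
```

With 10,000 samples the crude estimator typically sees no exceedances at all, while the importance-sampling estimator lands within a few percent of the exact value.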

Data Analysis and Modelling in the Evaluation of Multiple System Organ Failure

At the Istituto di Chirurgia d’Urgenza of the University of Milan a computerized model with an original statistical algorithm has been developed to study the evolution of critically ill surgical patients with Multiple System Organ Failure (MSOF), defined as the insufficiency of two or more vital systems.

P. Guadalupi, R. Pizzi, A. DeGaetano, O. Chiara, C. Verna

Real-Time Operational Reliability

The classical approach to the determination of reliability characteristics is based on empirical data which represent the operating times to failure of several systems or components. Reliability assessment is thus a process which takes place after the failures have occurred. The main aim of this paper is to present a new approach which enables reliability to be assessed in real time, and reliability characteristics to be determined before failure takes place. The method used is based on the Relevant Condition Parameter reliability approach.

J. Knezevic

The Two-Sample Problem and its Practical Applications to Reliability Data Sets

A number of non-parametric statistical tests suitable for the solution of the two-sample problem with censored data are discussed. The tests presented are Gehan's modification of the Wilcoxon test, the logrank test, and the modified Kolmogorov-Smirnov test. Their use in the reliability field is illustrated using British Rail data from both bogie ends and gearboxes. In each case two samples were drawn. For the bogie ends, one sample was taken from one end of the car and the other sample from the other end. For the gearboxes, a modified version was compared to the original. There was a statistically significant difference between bogie ends, whereas the gearboxes were found to be more or less equally reliable. Throughout this paper the results were obtained using FORTRAN routines developed in the Department of Engineering Production.

G. Bohoris, E. M. Aspinwall, D. M. Walley
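
Gehan's generalization of the Wilcoxon test scores every cross-sample pair +1, -1 or 0 according to whether one observation is definitely larger, definitely smaller, or the comparison is inconclusive because of censoring. A minimal sketch of the scoring; the paper's FORTRAN routines also supply the variance and significance level, which are omitted here:

```python
def gehan_score(x, dx, y, dy):
    """+1 if (x, dx) is definitely greater than (y, dy), -1 if definitely
    smaller, 0 if inconclusive. d = 1 marks an observed failure, d = 0 a
    right-censored time (the true life exceeds the recorded time)."""
    if (x > y and dy == 1) or (x == y and dx == 0 and dy == 1):
        return 1
    if (y > x and dx == 1) or (x == y and dy == 0 and dx == 1):
        return -1
    return 0

def gehan_statistic(sample1, sample2):
    """Sum of pairwise scores; values far from 0 suggest the samples differ."""
    return sum(gehan_score(x, dx, y, dy)
               for x, dx in sample1 for y, dy in sample2)

a = [(10, 1), (25, 0), (40, 1)]   # (time, 1 = failure / 0 = censored)
b = [(5, 1), (8, 1), (12, 1)]
print(gehan_statistic(a, b))
```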

Implementing Stochastic Filtering for Reliability Applications — The Arc Code

A model for using field data for component reliability assessment is described. Field data (i.e. component histories consisting of maintenance operation descriptions, times to failure, and survivals) are treated by stochastic filtering techniques to "filter out" the maintenance effect. The main aspects of the model, based on a Bayesian approach, and of its practical implementation in the ARC (Age Réel du Composant) code are presented. Finally, the results of an example of code validation are given.

A. Besi, C. A. Clarotti, C. P. Nichele, L. Piepszownik

A Development Flow Graph Method

The paper demonstrates the use of graph theory to obtain steady-state behaviour of multi-state systems described by Markov models. A number of applications are given and compared with results obtained by other methods. The potential development of flow graph methods for further applications is also discussed.

Isa S. Qamber, A. Z. Keller
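
For a small multi-state Markov model, the steady-state behaviour that flow graph methods extract can be cross-checked by solving the balance equations pi Q = 0 with the normalization sum(pi) = 1 directly. A sketch for a 3-state repairable system (states up/degraded/down; the rates are purely illustrative, not from the paper):

```python
from fractions import Fraction as F

def steady_state(Q):
    """Solve pi Q = 0, sum(pi) = 1 by Gauss-Jordan elimination in exact fractions."""
    n = len(Q)
    # Transpose Q, then replace the last (redundant) balance row by normalization.
    A = [[F(Q[j][i]) for j in range(n)] for i in range(n)]
    A[-1] = [F(1)] * n
    b = [F(0)] * (n - 1) + [F(1)]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)   # pivot row
        A[c], A[p], b[c], b[p] = A[p], A[c], b[p], b[c]
        for r in range(n):
            if r != c and A[r][c] != 0:
                f = A[r][c] / A[c][c]
                A[r] = [a - f * ac for a, ac in zip(A[r], A[c])]
                b[r] -= f * b[c]
    return [b[i] / A[i][i] for i in range(n)]

# Generator matrix: up -> degraded (2), degraded -> down (1), repairs back (4, 3).
Q = [[-2, 2, 0],
     [4, -5, 1],
     [0, 3, -3]]
print(steady_state(Q))   # long-run state probabilities
```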

Structural and Mechanical Reliability

KODABA — A Corrosion Data Information System

The corrosion data base system KODABA is a shell to collect, document, and archive corrosion data from different fields. It provides fast and easy access to the stored data and offers comfortable retrieval options. Three dialogue languages are available to the user: English, French, and German. Different output options are offered to record the results of a query. The system is written in CLIPPER and FORTRAN and is designed for use on IBM-compatible personal computers under MS-DOS. KODABA is thus a helpful tool for people working in fields where corrosion is an important aspect, e.g. reliability engineering.

R. Leicht, G. Luthardt, R. Schönfeld, H. J. Wingender

The Evaluation of Mechanical Resistance of Storage Tanks Exposed to Fire by Using Data Bank Information

The paper describes a mathematical procedure aimed at evaluating the heat flow and the transient variation of the wall temperature of a spherical vessel containing a liquefied gas during exposure to thermal radiation. In case of fire the vessel is subject to both overtemperatures and overpressures, which may lead to overheating and rupture of the wall. The main objectives of the study are to analyse the effects of the dimension and duration of the fire, of different materials, of the insulation thickness and of the filling level of the vessel on the time needed to reach critical wall temperatures, in order to help both designers and operators in the choice of suitable materials, the selection of an optimal layout and the establishment of proper safety systems.

Viviana Colombari, Maurizio Gilioli, Alfredo Verna

Database of Mechanical Components in a Refinery. Structure and Operational Criteria for Inspection Planning and Reliability Evaluations

The paper presents a recently established computerized database system for the collection and processing of mechanical inspection reports from a petroleum refinery. Major items under control are tanks, vessels, furnaces, pipes, heat exchangers and safety relief valves. The paper describes the type of data collected and their utilization, e.g. for determining life expectancy, strategic planning of inspection frequency, etc.

E. Galatola, G. Simeone, G. C. Bello

Applicability of First-Order Reliability Methods — A State of the Art

This paper introduces First-Order Reliability Methods (FORM) and presents, in condensed form, examples from recent applications of FORM in structural analyses.

D. Diamantidis, G. Ferro, P. Bazzurro
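
The simplest FORM case — a linear limit state g = R - S with independent normal resistance R and load S — reduces to the reliability index beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2) and failure probability Phi(-beta). A sketch with illustrative values (real applications iterate on a nonlinear limit state):

```python
from math import sqrt, erf

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def form_linear(mu_r, sig_r, mu_s, sig_s):
    """Reliability index and failure probability for g = R - S, R and S normal."""
    beta = (mu_r - mu_s) / sqrt(sig_r ** 2 + sig_s ** 2)
    return beta, phi(-beta)

beta, pf = form_linear(mu_r=500.0, sig_r=30.0, mu_s=350.0, sig_s=40.0)
print(round(beta, 2), pf)  # beta = 3.0 for these values
```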

Estimate of Reliability Characteristics and Maintenance of Building Machines by Applying Statistics of Extreme Values

Statistics of extreme values has practical application above all in estimating extreme values of reliability and maintenance characteristics, for both components and complex systems. The paper gives the definition of extreme values and, using the type I asymptotic distribution (Gumbel distribution), determines MINIMUM values of UP TIME and MAXIMUM values of DOWN TIME for the critical units of a TG-50 bulldozer manufactured in the "14 Octobar" factory, Kruševac (Yugoslavia). The empirical field data are processed using a program package developed in the FORTRAN 77 programming language.

L. Papic, P. Dasic
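
A moments-based fit of the type I (Gumbel) distribution is a common quick route to such extreme-value estimates: scale beta = sqrt(6)*s/pi, location mu = mean - gamma*beta (gamma is the Euler-Mascheroni constant), from which quantiles of the maximum follow. This sketch uses made-up down-time data, not the bulldozer data of the paper:

```python
from math import sqrt, pi, log

EULER_GAMMA = 0.5772156649

def gumbel_fit(xs):
    """Method-of-moments fit of the Gumbel (maximum) distribution."""
    n = len(xs)
    mean = sum(xs) / n
    s = sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))   # sample std dev
    beta = sqrt(6.0) * s / pi
    mu = mean - EULER_GAMMA * beta
    return mu, beta

def gumbel_quantile(mu, beta, p):
    """x such that P(max <= x) = p."""
    return mu - beta * log(-log(p))

downtimes = [4.1, 5.0, 3.2, 6.7, 4.8, 5.5, 3.9, 7.2]   # hypothetical max down times (h)
mu, beta = gumbel_fit(downtimes)
print(round(gumbel_quantile(mu, beta, 0.95), 2))        # 95% down-time bound
```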

Consequence Modeling

VVF SIGEM: A Computerized System for Emergency Management

This paper describes the general architecture of the computerized system conceived by TEMA, in collaboration with the Italian Fire Brigade, for emergency management. The structure of the main data bank and the typologies of the interconnected simulation models are indicated.

E. Marchionne, M. Lanzino, M. Gilioli

Heavy Gas Dispersion

Dispersion modelling for accidental releases has been a rapidly expanding and advancing field. One aspect causing growing concern is the hazard arising from an accidental release of a flammable or toxic gas, so knowledge of how such materials disperse in the atmosphere is important. The main objective is to provide a good basis for the determination of hazardous zones in case of accidental releases. Predictive methods as used in air pollution problems may be considered adequate for various practical applications. Numerical modelling of heavy gas dispersion can be used; the mathematical techniques vary from simple box models to numerical solutions of the turbulence equations over a three-dimensional grid.

Ma Teresa Galvez, José Maria Renau
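
At the simple end of the spectrum of techniques mentioned in the abstract sits the gravity-slumping box model: a cylindrical cloud of fixed volume spreads with a front speed proportional to sqrt(g'h), where g' is the reduced gravity and h the cloud height. A heavily hedged sketch of that slumping stage only — entrainment and the later passive-dispersion stage are ignored, and all constants are illustrative:

```python
from math import pi, sqrt

def slump_radius(volume, g_prime, r0, t_end, k=1.0, dt=0.01):
    """Euler integration of dR/dt = k*sqrt(g'*h) with h = V/(pi*R^2),
    i.e. a constant-volume cylindrical cloud slumping under its own weight."""
    r, t = r0, 0.0
    while t < t_end:
        h = volume / (pi * r * r)          # cloud height shrinks as it spreads
        r += k * sqrt(g_prime * h) * dt
        t += dt
    return r

# 1000 m3 of dense gas, reduced gravity 0.5 m/s2, initial radius 5 m.
r10 = slump_radius(1000.0, 0.5, 5.0, 10.0)
print(round(r10, 1))   # cloud radius after 10 s
```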

Software Reliability

Software Data Collection and the Software Data Library

The Software Data Library Project began in 1985 with a one-year study of the problems involved in setting up a facility for the collection, storage, analysis, and dissemination of software engineering data. The second phase involved the design, development and administration of an appropriate database system. Data collection procedures were created, beginning with detailed definition of those entities about which data was to be collected and the metrics which applied to those entities. A data model was created which forms the basis of the design of both the database system and the data collection manuals produced by the project.

P. Comer

Electronic Reliability

An Integrated Data Base and Reliability Assessment of Electrical Distribution Systems

This paper describes a reliability data base which has been developed as an integral part of an analysis package (RELNET) that evaluates the reliability of electrical distribution systems. It discusses the structure and file organisation of the data base, the management system which maintains the data base under central control, and the interface between the data base and the evaluation routines. The concepts will be of interest in a wide variety of application areas.

R. N. Allan, T. Y. P. So, D. K. Gazidellis

Field Failure Data Collection and Analysis of Repairable Systems

During the last 4 years a research project concerning field failure data collection and analysis has taken place at the Danish Engineering Academy (DIA). Field failure data have been collected on two Danish electronic products. In order to analyse the data it has been necessary to design a database to hold and manipulate them. This paper describes the level of information in the database and the work performed to handle incomplete datasets.

C. Kjærgaard

Reliability Tests

Accelerated Test Procedure for Estimating Proportion of Early Failures of Metal Film Resistors

The possibility of estimating the proportion of early failures using accelerated testing was investigated for metal film resistors. The aim of the investigation was to establish a test procedure equivalent to the standard endurance test, so that a quick evaluation of a production lot would be possible, avoiding long-duration endurance tests on bad lots. The paper presents a practical test procedure together with accelerating test conditions, which were determined by means of the Eyring mathematical model and on the basis of a series of accelerated tests performed alongside the corresponding reference standard endurance tests on approximately 4600 test items. The proposed test procedure can be used as a simple tool for evaluating and/or comparing the reliability of individual lots in the manufacturing process as well as at incoming inspection.

I. Likar, S. Kolenko
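
An Eyring-type temperature model of the kind used in the paper yields an acceleration factor between use and stress temperatures. A minimal sketch with an illustrative activation energy and temperatures — the paper's fitted constants and test conditions are not reproduced here:

```python
from math import exp

K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant, eV/K

def eyring_acceleration(t_use_k, t_stress_k, ea_ev):
    """AF for a rate ~ T * exp(-Ea/kT) Eyring model: ratio of the reaction
    rate at the stress temperature to the rate at the use temperature."""
    return (t_stress_k / t_use_k) * exp(
        (ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

af = eyring_acceleration(t_use_k=343.0, t_stress_k=428.0, ea_ev=0.7)
print(round(af, 1))   # equivalent use hours per stress hour (155 C vs 70 C)
```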

Reliability Demonstration and Field Analysis of a Communications Controller

This paper sets out to demonstrate the suitability of reliability growth models for reliability prediction, both during reliability demonstration and in the field, for a communications controller. The paper describes reliability demonstration tests, a failure review process, reliability prediction, and follow-up field reliability analysis for the communications controller. The reliability was demonstrated by simulating the product's early life through two types of test: one sample was run at ambient conditions and the other at accelerated conditions. To corroborate the demonstrated reliability, field failure data for a product sample were collated and analysed. A good correlation between the demonstrated and field reliability measures was achieved. In both test samples similar failure modes were observed. The acceleration factor of the stressed sample was estimated by modelling times to first failure with the Weibull model; this enabled the two sets of test data to be combined. Through the 'test, analyse and fix' process steered by a failure review board, a number of modifications were introduced to eliminate recurring failures. As a result, the demonstrated reliability of the product improved significantly. The Weibull process model was applied to the combined test data for reliability prediction. The field performance of a sample of products was monitored for approximately one year. These systems also exhibited reliability growth, mainly due to natural screening. The Weibull process and IBM models were applied to this data, and good agreement between the demonstrated and field-observed reliability measures was found. The failure review board was seen as an effective vehicle to instigate corrective actions in the reliability demonstration programme. The Weibull process and IBM models were suitable for modelling repairable-system failure data in the presence of reliability growth.

U. D. Perera
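
The "Weibull process" named in the abstract is the power-law NHPP N(t) = lam*t^beta (the Crow/AMSAA reliability growth model), whose maximum-likelihood estimates from failure times t_1..t_n observed up to time T have a closed form; beta < 1 indicates reliability growth. A minimal sketch with made-up failure times, not the controller's data:

```python
from math import log

def power_law_mle(times, t_end):
    """ML estimates for the power-law process (Crow/AMSAA, time-truncated):
    beta = n / sum(ln(T/t_i)),  lam = n / T**beta."""
    n = len(times)
    beta = n / sum(log(t_end / t) for t in times)
    lam = n / t_end ** beta
    return beta, lam

failures = [40.0, 150.0, 400.0, 700.0, 1100.0]   # hypothetical test times (h)
beta, lam = power_law_mle(failures, t_end=1500.0)
print(round(beta, 2))   # beta < 1: the failure intensity is decreasing
```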

Bayes Optimal Burn-In-Times

The Bayesian definition of an optimal burn-in procedure is given. The Bayesian approach to the burn-in problem is described both in the case of independent and in the case of dependent component failure times.

C. A. Clarotti, F. Spizzichino
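
A numerical toy version of the independent case can show what "optimal" means here: with a weak/strong exponential mixture, the burn-in time t trades burn-in cost against the expected cost of a field failure, and the optimum can be located by grid search. Every number below is illustrative, and the paper treats the problem analytically and extends it to dependent lifetimes:

```python
from math import exp

def expected_cost(t, p_weak, lam_w, lam_s, mission, c_hour, c_fail):
    """Burn-in cost plus expected field-failure cost for a burn-in survivor."""
    s_w = p_weak * exp(-lam_w * t)           # P(weak and survives burn-in)
    s_s = (1 - p_weak) * exp(-lam_s * t)     # P(strong and survives burn-in)
    p_weak_post = s_w / (s_w + s_s)          # posterior P(weak | survived t)
    p_field_fail = (p_weak_post * (1 - exp(-lam_w * mission))
                    + (1 - p_weak_post) * (1 - exp(-lam_s * mission)))
    return c_hour * t + c_fail * p_field_fail

args = dict(p_weak=0.1, lam_w=0.05, lam_s=1e-4, mission=1000.0,
            c_hour=0.02, c_fail=500.0)
best_t = min(range(0, 301, 5), key=lambda t: expected_cost(t, **args))
print(best_t, round(expected_cost(best_t, **args), 1))
```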

Closing Session

Review of the EuReDatA Project Groups’ Activities and Results

One of the most significant objectives of the EuReDatA association is the establishment of compatible standards for reliability data collection and analysis. This is being pursued by specialized project groups. Starting from the proposal of general reference classification schemes for component reliability and the issue of a guide for reliability data collection and analysis, the project groups' activity has moved in more recent times towards data analysis exercises and feasibility studies for data collection projects.

A. Amendola, T. Luisi