
2011 | Book

Assessment of Power System Reliability

Methods and Applications


About this book

The importance of power system reliability is demonstrated when our electricity supply is disrupted, whether it decreases the comfort of our free time at home or causes the shutdown of our companies and results in huge economic deficits.

The objective of Assessment of Power System Reliability is to contribute to the improvement of power system reliability. It consists of six parts divided into twenty chapters. The first part introduces the important background issues that affect power system reliability. The second part presents the reliability methods that are used for analyses of technical systems and processes. The third part discusses power flow analysis methods, because the dynamic aspect of a power system is an important part of related reliability assessments. The fourth part explores various aspects of the reliability assessment of power systems and their parts. The fifth part covers optimization methods. The sixth part looks at the application of reliability and optimization methods.

Assessment of Power System Reliability has been written in straightforward language that continues into the mathematical representation of the methods. Power engineers and developers will appreciate the emphasis on practical usage, while researchers and advanced students will benefit from the simple examples that can facilitate their understanding of the theory behind power system reliability and that outline the procedure for application of the presented methods.

Table of Contents

Frontmatter

Background

Frontmatter
Chapter 1. Introduction to Power Systems
Abstract
The electric power system is one of the largest and most complex systems established by mankind. Because of this complexity, it is relatively difficult to define and assess reliability as a single parameter of a single system. Rather, several methods, tools, and measures have been developed, each of which highlights questions about power system reliability from its own viewpoint. This introduction to power systems outlines the short history of power systems, the facts about the increasing consumption and complexity of power systems, and the problems of defining and applying reliability.
Marko Čepin
Chapter 2. Introduction to Blackouts
Abstract
A power system blackout means that a larger area of consumers of electrical energy is left without electrical energy for a certain duration of time. Power system blackouts have become an increasingly important phenomenon, because their economic and technical consequences continue to grow, and these growing consequences are mostly connected with the increased complexity of the systems. The probability of future blackouts is very low and difficult to predict, but their impact can be devastating. A review of selected recent blackouts is presented in this chapter, with emphasis on their causes and consequences. Estimation of the future risks connected with blackouts is an important task, not only to prevent huge economic damage but also to protect human lives.
Marko Čepin
Chapter 3. Definition of Reliability and Risk
Abstract
Because of the many different operational requirements and varying environments, reliability means different things to different people. The generally accepted definition of reliability defines it as the characteristic of an item expressed by the probability that it will perform a required function under stated conditions for a stated period of time. When dealing with power systems, the term reliability is divided into two terms: adequacy and security. Adequacy is related to the existence of sufficient generation in the electric power system to satisfy the consumer demand. Security is related to the ability of the electric power system to respond to transients and disturbances that occur in the system. Risk is a combination of the probability of an accident occurring and the resulting negative consequences. The term risk is often reserved for random events with negative consequences for human life and the environment.
Marko Čepin
Chapter 4. Probability Theory
Abstract
Probability theory is a part of mathematics that aims to provide insights into phenomena that depend on chance or on uncertainty. A basic treatment of probability is presented from the perspective of engineers who are going to use probability theory as a support for practical reliability analyses. Probability can be defined in terms of frequency of occurrence, as a percentage of successes in a large number of similar situations, or the probability of an event may express a subjective belief about the event. The mathematical representation of probability theory starts with set theory and basic probability concepts. The definition of the factorial and the Pascal triangle represents the background for the theory of combinations. The theory of combinations determines the possible groupings of objects. There are three processes of interest: (i) permutations, (ii) combinations, and (iii) variations, which are actually a union of the other two. One could say a permutation is an ordered combination. When the objects of a group are arranged in a certain order, the arrangement is called a permutation; in a permutation, the order of the objects is very important. The conditional probability of an event is the probability of that event given that another event has occurred. Bayes theorem can be seen as a way of understanding how the probability that a theory is true is affected by a new piece of evidence. A random variable is any variable determined by chance and with no predictable relationship to any other variable. The term random variable is presented in theory and examples. A distribution is the degree to which the outcomes of events are spread over the possible values, and a probability distribution function represents the probabilities with which the outcomes of events are spread over the possible values. The probability distribution functions are presented. The bathtub failure rate concept is widely used to represent the failure behavior of many engineering items. The term bathtub stems from the fact that the shape of the failure rate curve resembles a bathtub. The bathtub curve consists of three periods: (i) an infant mortality period with a decreasing failure rate, (ii) a normal life period or useful life period with a low and relatively constant failure rate, and (iii) a wear-out period that exhibits an increasing failure rate.
Marko Čepin
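
As a small illustration of the combinatorial and Bayesian ideas summarized above, here is a minimal Python sketch; the component fault and alarm probabilities are invented purely for the example.

from math import comb, perm, factorial

# Permutations: ordered arrangements of r objects chosen from n.
# Combinations: unordered selections of r objects from n.
n, r = 5, 3
print(perm(n, r))        # 60 ordered arrangements
print(comb(n, r))        # 10 unordered selections
print(factorial(n))      # 120 arrangements of all five objects

# Bayes theorem: P(A|B) = P(B|A) * P(A) / P(B), with
# P(B) = P(B|A) * P(A) + P(B|not A) * P(not A).
p_fault = 0.01           # prior probability that a component is faulty (assumed)
p_alarm_fault = 0.95     # probability of an alarm given a fault (assumed)
p_alarm_ok = 0.02        # false-alarm probability (assumed)
p_alarm = p_alarm_fault * p_fault + p_alarm_ok * (1 - p_fault)
print(p_alarm_fault * p_fault / p_alarm)   # posterior probability of a fault given an alarm
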

Reliability Methods

Frontmatter
Chapter 5. Fault Tree Analysis
Abstract
Fault tree analysis is a standard method for the improvement of reliability, which is applied in various sectors, such as the nuclear industry, the air and space industry, the electrical industry, the chemical industry, the railway industry, transport, software reliability, and insurance. The fault tree analysis is described as a procedure for application, together with small practical examples. The development of fault trees and their qualitative and quantitative evaluation is presented. Illustrative examples are given for the application of the importance measures, such as Fussell-Vesely importance, risk achievement worth, risk reduction worth, and Birnbaum importance. The applications of fault tree analysis are mentioned, and a comprehensive list of related references is given.
Marko Čepin
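
A minimal Python sketch of the quantitative evaluation described above, assuming an invented fault tree given by two minimal cut sets and using the rare-event approximation; the basic-event probabilities are illustrative only.

import math

# Basic-event probabilities and minimal cut sets of an invented top event.
p = {"A": 1e-3, "B": 2e-3, "C": 5e-4}
cut_sets = [{"A", "B"}, {"C"}]

def top(prob):
    # Rare-event approximation: sum of the cut-set probabilities.
    return sum(math.prod(prob[e] for e in cs) for cs in cut_sets)

Q = top(p)
print("top-event probability:", Q)
for e in p:
    q1 = top({**p, e: 1.0})   # basic event assumed failed
    q0 = top({**p, e: 0.0})   # basic event assumed perfectly reliable
    print(e,
          "FV =", round((Q - q0) / Q, 3),    # Fussell-Vesely importance
          "RAW =", round(q1 / Q, 1),         # risk achievement worth
          "RRW =", round(Q / q0, 1),         # risk reduction worth
          "Birnbaum =", q1 - q0)
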
Chapter 6. Event Tree Analysis
Abstract
Event tree analysis is the technique used to define potential accident sequences associated with a particular initiating event or set of initiating events. The event tree model describes the logical connection between the potential successes and failures of defined safety systems or safety functions as they respond to the initiating event and the sequence of events. The event tree evaluation can be qualitative, quantitative, or both, and is similar to the fault tree evaluation. Two general approaches exist for linking the event trees with the fault tree analysis; the small event tree and large fault tree approach is the one mostly used in the nuclear industry in the probabilistic safety assessment of nuclear power plants.
Marko Čepin
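
A minimal Python sketch of the quantification idea: the frequency of each accident sequence is the initiating-event frequency multiplied by the success and failure probabilities along the branch. The initiating event, the two safety systems, and all numbers are invented for illustration.

# Initiating-event frequency and safety-system failure probabilities on demand (assumed).
ie_freq = 1e-2
p_fail = {"SYS1": 1e-3, "SYS2": 5e-3}

sequences = {
    "S1 (both systems succeed)": (1 - p_fail["SYS1"]) * (1 - p_fail["SYS2"]),
    "S2 (SYS2 fails)":           (1 - p_fail["SYS1"]) * p_fail["SYS2"],
    "S3 (SYS1 fails)":           p_fail["SYS1"],   # SYS2 not credited after SYS1 failure
}
for name, p in sequences.items():
    print(name, "frequency per year:", ie_freq * p)
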
Chapter 7. Binary Decision Diagram
Abstract
A binary decision diagram is a directed acyclic graph that consists of nodes and edges. It deals with Boolean functions. A binary decision diagram consists of a set of decision nodes, starting at the root node at the top of the decision diagram. Each decision node contains two outgoing branches, one is a high branch and the other is a low branch. These branches may be represented as solid and dotted lines, respectively. The binary decision diagram contains high and low branches that are used to connect decision nodes with each other to create decision paths. The high and low branches of the final decision nodes are connected to either a high- or low-terminal node, which represents the output of the function. The development of examples of binary decision diagrams is presented in text and in figures. Shannon decomposition is explained. The conversion of a fault tree to a binary decision diagram is shown.
Marko Čepin
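
A minimal Python sketch of Shannon decomposition, the operation on which binary decision diagrams are built; the Boolean function and the event probabilities are invented, and the code enumerates the cofactors recursively rather than building a reduced diagram.

# Shannon decomposition: f = x * f(x=1) + (not x) * f(x=0). The recursion below uses it
# to evaluate the probability of an invented top event of three independent basic events.
def f(a, b, c):
    return (a and b) or c                 # top event: (A AND B) OR C

p = {"a": 0.1, "b": 0.2, "c": 0.05}       # basic-event probabilities (assumed)

def prob(func, names, assignment=()):
    if not names:
        return 1.0 if func(*assignment) else 0.0
    x, rest = names[0], names[1:]
    # Decompose on variable x: weight the two cofactors by P(x) and 1 - P(x).
    return (p[x] * prob(func, rest, assignment + (True,))
            + (1 - p[x]) * prob(func, rest, assignment + (False,)))

print(prob(f, ("a", "b", "c")))           # exact result: 0.02 + 0.05 - 0.001 = 0.069
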
Chapter 8. Markov Processes
Abstract
A Markov chain is a type of Markov process in which there is a finite number of states in which the process may exist at any given time. The probability of the process moving from one state to another is denoted by the transition probability, and the probability of the process remaining in the same state is denoted by a corresponding probability. Such modeling provides a clear representation of all the states of a system as well as the transitions between these states. One disadvantage is that for large systems with many components it is difficult to draw a diagram, because for a system of n components, each with a failed or an operating state, the number of states is equal to 2^n.
Marko Čepin
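
A minimal Python sketch of a two-state Markov model of a repairable component (operating and failed), assuming NumPy is available; the failure and repair rates are invented for illustration.

import numpy as np

lam = 1e-3     # failure rate per hour (assumed)
mu = 1e-1      # repair rate per hour (assumed)

# Transition rate matrix (generator) of the two-state process; rows sum to zero.
A = np.array([[-lam,  lam],
              [  mu,  -mu]])

# Steady-state probabilities solve pi A = 0 with the components of pi summing to one.
M = np.vstack([A.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(M, b, rcond=None)
print("availability:", pi[0], "unavailability:", pi[1])
# Analytical check for this simple model: availability = mu / (lam + mu)
print("mu / (lam + mu) =", mu / (lam + mu))
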
Chapter 9. Reliability Block Diagram
Abstract
The reliability block diagram is a method used to analyze systems and assess their reliability. It includes a graphical representation of the system and equations that can be used to analyze the reliability of the system. The blocks represent the groups of components or the smallest entities of the system, which are not further divided, i.e., components of the system. If the individual components of a system are connected in series, the failure of any component causes the system to fail. If the individual components of a system are connected in parallel, the failures of all components cause the system to fail.
Marko Čepin
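
A minimal Python sketch of the series and parallel formulas described above; the block reliabilities are invented for illustration.

import math

# Series: the system works only if every block works -> multiply reliabilities.
# Parallel: the system fails only if every block fails -> multiply unreliabilities.
def series(reliabilities):
    return math.prod(reliabilities)

def parallel(reliabilities):
    return 1.0 - math.prod(1.0 - r for r in reliabilities)

r = [0.95, 0.90, 0.99]                 # block reliabilities (assumed)
print("series:  ", series(r))          # 0.95 * 0.90 * 0.99 = 0.84645
print("parallel:", parallel(r))        # 1 - 0.05 * 0.10 * 0.01 = 0.99995
# Mixed example: two redundant blocks in parallel, in series with a third block.
print("mixed:   ", series([parallel([0.95, 0.90]), 0.99]))
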
Chapter 10. Common Cause Failures
Abstract
Common cause failure events are a subset of dependent events in which two or more component fault states exist at the same time and are a direct result of a shared root cause. If a dependency exists between parallel events, the probability of system failure is larger than the product of the failure probabilities of all parallel events. The procedures for common cause failure analysis are presented. The representation of common cause failures within the fault tree analysis is described and illustrated with simple examples. The methods for the evaluation of common cause failures are described: the beta factor method, the basic parameter method, the multiple Greek letter method, and the alpha factor method. The mathematical models are presented. The emphasis is placed on the beta factor method, which is the simplest of the four methods. Simple examples are given.
Marko Čepin
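
A minimal Python sketch of the beta factor model for a two-train redundancy, with invented numbers; it only illustrates how a common cause contribution dominates the product of independent failure probabilities.

# Beta factor model: a fraction beta of a component's total failure probability is
# attributed to a common cause that fails all redundant components together.
q_total = 1e-3          # total failure probability of one component (assumed)
beta = 0.1              # fraction attributed to the common cause (assumed)

q_ind = (1 - beta) * q_total    # independent contribution
q_ccf = beta * q_total          # common cause contribution

# Two redundant components: the pair fails if both fail independently
# or if the common cause failure occurs.
q_system = q_ind ** 2 + q_ccf
print("without CCF:", q_total ** 2)     # 1e-6
print("with CCF:   ", q_system)         # about 1e-4, dominated by the common cause
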

Power Flow Analysis

Chapter 11. Methods for Power Flow Analysis
Abstract
The methods for power flow analysis can be divided into deterministic and probabilistic methods. The deterministic methods, such as the Newton-Raphson method, the Gauss-Seidel method, the fast decoupled load flow method, and the direct current load flow method, use specific values of power generations and load demands of a selected network configuration to calculate system states and power flows. The probabilistic methods require inputs given as probability density functions to obtain system states and power flows in terms of probability density functions, so that the system uncertainties can be included and reflected in the results. The methods are presented and the related equations and systems of equations are explained. The focus is placed on the Newton-Raphson method and the Gauss-Seidel method. The iterative procedures are explained. A graphical representation of the procedure steps is given.
Marko Čepin
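
As an illustration of the simplest deterministic method mentioned above, the direct current load flow, here is a minimal Python sketch assuming NumPy is available; the three-bus network, line reactances, and loads are invented.

import numpy as np

# DC load flow: bus injections satisfy P = B' * theta with the slack-bus angle fixed at
# zero, and line flows follow as (theta_i - theta_j) / x_ij.
lines = {(1, 2): 0.10, (1, 3): 0.20, (2, 3): 0.25}   # per-unit reactances (assumed)
p_inj = {2: -0.5, 3: -1.0}                           # bus injections in pu; bus 1 is slack

buses = [2, 3]                                       # non-slack buses
B = np.zeros((2, 2))
for (i, j), x in lines.items():
    b = 1.0 / x
    for k, bus in enumerate(buses):
        if bus in (i, j):
            B[k, k] += b                             # diagonal: sum of connected susceptances
    if i in buses and j in buses:
        B[buses.index(i), buses.index(j)] -= b       # off-diagonal: negative line susceptance
        B[buses.index(j), buses.index(i)] -= b

theta = {1: 0.0}
theta.update(dict(zip(buses, np.linalg.solve(B, [p_inj[b] for b in buses]))))

for (i, j), x in lines.items():
    print(f"flow {i}-{j}: {(theta[i] - theta[j]) / x:+.3f} pu")
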

Reliability of Power Systems

Frontmatter
Chapter 12. Generating Capacity Methods
Abstract
Several methods, measures, or indicators have been developed for the assessment of the reliability of power systems. A single formula or technique that would give all the answers does not exist, because power systems are far too complex for this to be possible. Selected methods, measures, and indicators are summarized from the point of view of the generating power and energy produced in power plants. The generation reserve margin is a measure that shows by how much the capacity of the power system exceeds the peak consumption. The percent reserve evaluation is calculated by comparing the total installed generating capacity at peak with the peak load. The loss of load probability is defined as the probability of the system load exceeding the available generating capacity under the assumption that the peak load is considered constant through the day. The loss of load probability does not really stand for a probability; it expresses a statistically calculated value representing the percentage of hours or days in a certain time frame when the energy consumption cannot be covered, considering the probability of losses of generating units. The frequency and duration method utilizes the transition rate parameters λ and μ in addition to availability and unavailability. Parameter λ represents the failure rate and parameter μ represents the repair rate.
Marko Čepin
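
A minimal Python sketch of a loss of load probability calculation based on enumerating unit outage states, under the stated assumption of a constant peak load; the unit data and the peak load are invented.

from itertools import product

units = [(50, 0.02), (50, 0.02), (80, 0.04), (120, 0.05)]   # (MW, forced outage rate), assumed
peak_load = 200.0                                            # MW, assumed constant over the day

lolp = 0.0
for state in product([0, 1], repeat=len(units)):             # 1 = unit on outage
    prob, capacity = 1.0, 0.0
    for (mw, forced_outage_rate), out in zip(units, state):
        prob *= forced_outage_rate if out else (1 - forced_outage_rate)
        capacity += 0.0 if out else mw
    if capacity < peak_load:
        lolp += prob                                         # probability of this deficient state

print("LOLP =", lolp)                 # probability that available capacity is below the peak load
print("reserve margin =", (sum(mw for mw, _ in units) - peak_load) / peak_load)
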
Chapter 13. Reliability and Performance Indicators of Power Plants
Abstract
The reliability of power plants is one of the parameters of the reliability of power systems. The power system includes a variety of power plants, so each of the plants is represented by its reliability indicators. The need for several indicators for one plant arises from the fact that the plants under consideration are fairly complex facilities and are a part of a very complex system, where only one indicator may not be sufficient. The presented indicators, which are collected for the different power plants, are not all comparable to each other. The reason for their incomparability lies in the fact that different groups of professionals deal with each of the plants and do not invest the effort in unifying the terminology and methodology. The reliability and performance indicators include plant availability, unit capability factor, unplanned capability loss factor, safety accident rate, safety system performance, time availability factor, capacity factor or load factor, the forced outage rate (which is actually not a rate but an availability measure), equivalent forced outage rate, and successful start-up rate.
Marko Čepin
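
A minimal Python sketch of a few of the listed indicators, computed from invented operating data for a single generating unit over one year; the exact indicator definitions used in practice may differ in detail.

period_hours = 8760.0
planned_outage_hours = 900.0    # assumed
forced_outage_hours = 360.0     # assumed
service_hours = period_hours - planned_outage_hours - forced_outage_hours
energy_generated = 3.0e6        # MWh produced over the year (assumed)
rated_power = 500.0             # MW (assumed)

time_availability = service_hours / period_hours
capacity_factor = energy_generated / (rated_power * period_hours)
forced_outage_rate = forced_outage_hours / (service_hours + forced_outage_hours)

print("time availability factor:", round(time_availability, 3))
print("capacity (load) factor:  ", round(capacity_factor, 3))
print("forced outage rate:      ", round(forced_outage_rate, 3))
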
Chapter 14. Distribution and Transmission System Reliability Measures
Abstract
Power system reliability is one of the features of power system quality. The electric utility industry has developed several performance measures of reliability, or reliability indices. These reliability indices include measures of outage duration, frequency of outages, the number of customers involved, their lost power or energy, and the response time. The distribution and transmission reliability indices include the system average interruption frequency index (SAIFI), transformer SAIFI, equivalent number of interruptions related to the installed capacity (NIEPI), customer interruption, system average interruption duration index (SAIDI), transformer SAIDI, equivalent interruption time related to the installed capacity (TIEPI), customer-minutes lost (CML), customer average interruption duration index (CAIDI), customer total average interruption duration index (CTAIDI), customer average interruption frequency index (CAIFI), average service availability index (ASAI), customers experiencing multiple interruptions (CEMIn), energy not supplied (ENS), average energy not supplied (AENS), average customer curtailment index (ACCI), average system interruption frequency index (ASIFI), average system interruption duration index (ASIDI), average interruption time (AIT), average interruption frequency (AIF), average interruption duration (AID), momentary average interruption frequency index (MAIFI), momentary average interruption event frequency index (MAIFIE), and customers experiencing multiple sustained interruption and momentary interruption events (CEMSMIn).
Marko Čepin
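
A minimal Python sketch computing SAIFI, SAIDI, CAIDI, and ASAI from an invented list of sustained interruptions for a small distribution system.

customers_served = 10_000
# (customers interrupted, outage duration in minutes) for each sustained interruption event
events = [(1200, 90), (300, 45), (2500, 120), (800, 30)]

saifi = sum(n for n, _ in events) / customers_served        # interruptions per customer per year
saidi = sum(n * d for n, d in events) / customers_served    # interruption minutes per customer
caidi = saidi / saifi                                       # average minutes per interruption
asai = 1 - saidi / (8760 * 60)                              # average service availability

print(f"SAIFI = {saifi:.3f}  SAIDI = {saidi:.1f} min  CAIDI = {caidi:.1f} min  ASAI = {asai:.6f}")
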
Chapter 15. Power System Reliability Method
Abstract
The electric power system reliability can be assessed based on the configuration of the system, on the reliability of the components of the system, and from the viewpoint of power delivery to the loads of the power system. The reliability of the power system is defined from its complement, i.e., unreliability. The unreliability of the power delivery to the ith load can be assessed as the top-event probability of the respective fault tree analysis. Each of the loads can be considered as a subsystem, and the evaluation of the subsystems represents the input to the evaluation of the overall power system. The unreliabilities of power delivery to the loads of the system are weighted to obtain the overall measure of the power system reliability. The method is based on fault tree features and is described using small examples. The prerequisite for the method development is the representation of the system topology: the nodes of the network and their connections are represented by the buses in the power system and the power lines between the buses. When the system topology is defined, the functional tree of power flow paths is developed, and from the functional tree of power flow paths the fault tree is constructed and analyzed. The prerequisite for the quantitative analysis is the collection of data about the failure probabilities of the modeled equipment. Sources of databases are mentioned. Examples are given.
Marko Čepin
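
A rough Python sketch of the weighting idea described above, not the exact procedure of the chapter: the unreliability of power delivery to each load point (for example, a fault tree top-event probability) is weighted by the share of that load in the total load. All numbers are invented.

loads = {"L1": 40.0, "L2": 25.0, "L3": 35.0}            # MW per load point (assumed)
unreliability = {"L1": 2e-4, "L2": 5e-4, "L3": 1e-4}    # top-event probabilities (assumed)

total = sum(loads.values())
system_unreliability = sum(loads[i] / total * unreliability[i] for i in loads)
print("weighted system unreliability:", system_unreliability)
print("system reliability:           ", 1 - system_unreliability)
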

Optimization Methods

Chapter 16. Linear Programming
Abstract
Linear programming is an optimization method capable of dealing with an objective function and constraints written as linear inequalities, and of finding the optimal value of the objective function under the specified constraints. An optimization procedure called the simplex procedure has been developed for solving problems with the linear programming method. The linear programming method has a very high speed of solution and high reliability, in the sense that an optimal solution can be obtained for most situations. The main drawback of the method is the inaccuracy introduced when a linearized problem is built from a non-linear one; consequently, the inaccuracy of the result follows the inaccuracy of the model.
Marko Čepin
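
A minimal Python sketch of a linear program, assuming SciPy is available; the coefficients describe an invented two-generator dispatch with linearized costs and a total-demand constraint.

from scipy.optimize import linprog

# Standard form used by linprog: minimize c @ x subject to A_ub @ x <= b_ub and bounds.
c = [20.0, 25.0]                      # cost per MW of generators G1 and G2 (assumed)
A_ub = [[-1.0, -1.0]]                 # -(P1 + P2) <= -150  <=>  P1 + P2 >= 150 MW
b_ub = [-150.0]
bounds = [(0, 100), (0, 120)]         # generator output limits in MW (assumed)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("optimal outputs:", res.x, "total cost:", res.fun)
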
Chapter 17. Dynamic Programming
Abstract
Dynamic programming is an optimization method that transforms a complex problem into a sequence of simpler problems. The sequence of simpler problems can be dealt with using a variety of optimization techniques that can be employed to solve particular aspects of a more general formulation. Dynamic programming can be top-down or bottom-up oriented. The three most important characteristics of dynamic programming problems are the following:
  • Multiple stages, which are solved sequentially one stage at a time.
  • States, which reflect the information required to assess the consequences that the current decision has on future actions.
  • Recursive optimization, which builds to a solution of the overall N-stage problem by first solving a one-stage problem and sequentially including one stage at a time and solving one-stage problems until the overall optimum has been found.
Marko Čepin
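
A minimal Python sketch of the recursive (backward) optimization described above, applied to an invented stage graph: the value of a state at one stage is the best immediate cost plus the value of the successor state at the next stage.

# cost[k][(s, s_next)] = cost of moving from state s at stage k to state s_next at stage k+1
cost = [
    {("A", "B"): 2, ("A", "C"): 4},                                 # stage 0 -> 1
    {("B", "D"): 7, ("B", "E"): 3, ("C", "D"): 1, ("C", "E"): 5},   # stage 1 -> 2
    {("D", "F"): 2, ("E", "F"): 4},                                 # stage 2 -> 3 (terminal F)
]

value, policy = {"F": 0.0}, {}
for stage in reversed(cost):                       # recursive optimization, one stage at a time
    new_value = {}
    for (s, s_next), c in stage.items():
        candidate = c + value[s_next]
        if s not in new_value or candidate < new_value[s]:
            new_value[s], policy[s] = candidate, s_next
    value = new_value

# Recover the optimal path from the initial state "A".
state, path = "A", ["A"]
while state in policy:
    state = policy[state]
    path.append(state)
print("optimal cost:", value["A"], "path:", " -> ".join(path))
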
Chapter 18. Genetic Algorithm
Abstract
Genetic algorithm is a probabilistic search method founded on the principle of natural selection and genetic recombination. Genetic algorithm represents a powerful method that efficiently uses historical information to evaluate new search points with expected better performance. It is applicable to linear and to nonlinear problems with many local extrema. The advantages and the disadvantages of the genetic algorithm are given. The procedures for performing optimizations are explained. The flowcharts are given together with the genetic algorithm structure descriptions. The steps of the procedures are explained. Further reading of selected references is suggested because it is not possible to present in a short chapter all the features of the method with practical examples.
Marko Čepin
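
A bare-bones Python sketch of a genetic algorithm minimizing the two-dimensional Rastrigin function, which has many local minima; the population size, selection, crossover, and mutation choices are generic illustrative settings, not a specific variant from the book.

import math
import random

def cost(ind):
    x, y = ind
    return 20 + x * x + y * y - 10 * (math.cos(2 * math.pi * x) + math.cos(2 * math.pi * y))

random.seed(1)
pop = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(40)]

for generation in range(200):
    pop.sort(key=cost)
    parents = pop[:10]                                        # truncation selection
    children = []
    while len(children) < 30:
        a, b = random.sample(parents, 2)
        child = [random.choice(pair) for pair in zip(a, b)]   # uniform crossover
        if random.random() < 0.2:                             # mutation of one gene
            i = random.randrange(2)
            child[i] += random.gauss(0, 0.5)
        children.append(child)
    pop = parents + children                                  # elitism: keep the best parents

best = min(pop, key=cost)
print("best point:", [round(v, 3) for v in best], "cost:", round(cost(best), 4))
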
Chapter 19. Simulated Annealing
Abstract
Simulated annealing is a method suitable for solving optimization problems of a large scale, especially ones where a desired global extremum is hidden among many local extrema. The idea of the method is an analogy with thermodynamics, specifically with the way that liquids freeze and crystallize or metals cool and anneal. For slowly cooled systems, nature is able to find the minimum energy state. If a liquid metal is cooled quickly, it does not reach this state, but rather ends up in a polycrystalline or amorphous state with higher energy, so slow cooling is essential for ensuring that a low-energy state is achieved. Simulated annealing randomizes the iterative improvement procedure and also allows occasional uphill moves in an attempt to reduce the probability of being stuck at a local optimal solution. These uphill moves are controlled probabilistically by the temperature, and they become less and less likely toward the end of the process, as the value of the temperature decreases.
Marko Čepin
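
A minimal Python sketch of simulated annealing on a one-dimensional multimodal function: uphill moves are accepted with probability exp(-delta / T), and the temperature T is lowered slowly. The cooling schedule and step size are arbitrary illustrative choices.

import math
import random

def cost(x):
    return x * x + 10 * (1 - math.cos(2 * math.pi * x))     # many local minima, global one at x = 0

random.seed(1)
x = random.uniform(-5, 5)
T = 5.0
while T > 1e-3:
    for _ in range(50):                                      # a few moves at each temperature
        candidate = x + random.gauss(0, 0.5)
        delta = cost(candidate) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / T):
            x = candidate                                    # accept downhill and some uphill moves
    T *= 0.95                                                # slow geometric cooling

print("solution near:", round(x, 3), "cost:", round(cost(x), 4))
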

Applications in Practice

Frontmatter
Chapter 20. Application of Reliability and Optimization Methods
Abstract
The field of application of reliability and optimization methods is a wide field covering several theoretical and practical problems and their solutions. The following topics are presented in more detail: standby equipment reliability optimization, reliability analysis of substations, configuration control, optimization of power plant maintenance schedules, and the optimal generation schedule of a power system. Optimization of test and maintenance intervals of standby equipment is related to the positive and negative aspects of surveillance tests of standby equipment. Reliability analysis of substations is related to the comparative analysis of substations based on reliability or on reliability and costs. Configuration control is related to the management of component and system arrangements, which differ in component or system status (available versus unavailable), primarily to control the risk and reliability of the considered facility. Optimization of power plant maintenance schedules is related to scheduling the maintenance of the generating units, which are periodically taken out of operation and subjected to maintenance activities. The optimal generation schedule of the power system is related to optimizing the schedule of the outputs of all available generating units in the power system so as to minimize the fuel cost during operation while satisfying the operating constraints, including those connected to the minimization of the emission of gaseous pollutants.
Marko Čepin
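
A compact Python sketch of the optimal generation schedule idea in its simplest form: quadratic fuel costs, a single demand constraint, and no generator limits binding, so the economic dispatch equalizes the incremental costs of all units. The unit data and the demand are invented, and emission constraints are not modeled.

units = {                       # (b, c) coefficients of the incremental cost b + 2*c*P (assumed)
    "G1": (20.0, 0.050),
    "G2": (22.0, 0.040),
    "G3": (25.0, 0.030),
}
demand = 600.0                  # MW to be covered (assumed)

# Equal incremental cost: b_i + 2*c_i*P_i = lambda for all units, with sum of P_i = demand.
# Solving sum_i (lambda - b_i) / (2*c_i) = demand for lambda gives a closed form.
num = demand + sum(b / (2 * c) for b, c in units.values())
den = sum(1 / (2 * c) for _, c in units.values())
lam = num / den

total = 0.0
for name, (b, c) in units.items():
    p = (lam - b) / (2 * c)
    total += p
    print(f"{name}: {p:.1f} MW")
print("incremental cost lambda:", round(lam, 2), " total dispatched:", round(total, 1), "MW")
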
Backmatter
Metadata
Title
Assessment of Power System Reliability
Author
Marko Čepin
Copyright Year
2011
Publisher
Springer London
Electronic ISBN
978-0-85729-688-7
Print ISBN
978-0-85729-687-0
DOI
https://doi.org/10.1007/978-0-85729-688-7