Randomized flow model and centrality measure for electrical power transmission network analysis

https://doi.org/10.1016/j.ress.2009.11.008

Abstract

Commonly used centrality measures identify the most important elements in networks of components, based on the assumption that flow occurs in the network only along the shortest paths. This is not so in real networks, where different operational rules drive the flow. For this reason, a different model of flow in a network is considered here: rather than along shortest paths only, it is assumed that contributions come essentially from all paths between nodes, as simulated by random walks. Centrality measures can then be coherently defined. An example of application to an electrical power transmission system is presented.

Introduction

Modern society is witnessing a continuous growth in the complexity of the infrastructure networks which it relies upon. Reliable electric power supply, for example, is crucial for many of the services that are taken for granted today; disturbances in the power supply have the potential of severely disrupting these indispensable services. This raises significant concern about reliability and resilience to disturbances and failures of various types of infrastructure systems, and a corresponding demand for methods capable of analyzing the vulnerabilities of these systems [1].

The developments contained in this paper are motivated by the interest in the analysis of electric power networks. In this context, the network analysis paradigm set up to study the dynamics of the relations in social networks has been previously utilized to analyze the vulnerability of electric power infrastructure systems [2], [3]. The focus of these types of studies is typically on analyzing the structural properties of the system from a topological point of view, i.e., considering only the connectivity properties of the network and not the actual physical flow through it [4], [5]. Three drawbacks of the related measures of network performance are that they are based on:

  • binary links among network nodes (or components), thus neglecting the strength of the connections (or links or arcs or edges); this has been pointed out as a limitation both in social networks, where the strength and depth of interpersonal relationships are of relevance [6], [7], [8], and in engineered network infrastructures, where the capacities of the arcs connecting the components limit the flow among them [5];

  • a simplified modeling scheme which assumes that flow (communication, in the social case) between a pair of components (persons, in the social case) in the network takes place only along the shortest path linking them [8], [9]; this has been considered a limitation in many cases, because the flow from one node of a network to another is typically a global phenomenon which does not depend only on the links along the direct, shortest paths: it is quite possible that the flow takes a more circuitous route. This is true both in social networks, where information may travel by random communication or be intentionally channeled through intermediaries, and in network infrastructures, where flow is channeled through selected routes according to the specific operative rules and constraints which apply to the system;

  • a simplified modeling scheme which neglects the possibility of failures in the interconnections between pairs of linked components; this is particularly relevant for engineered infrastructure networks, which are made of fallible hardware and software and are operated by human operators who are, unfortunately, not error-free.

In summary, when looking at the safety, reliability and vulnerability characteristics of an infrastructure such as the electric power transmission network, one should take into account the capacities of the transmission elements and their probability of failure, and examine the different transmission routes available to the flow. This would entail a complex and detailed mechanistic modeling effort of the entire network system, which is in practice often unfeasible, both in its development and in its computation. For this reason, a framework of analysis has been proposed that integrates models at different levels of detail in a problem-driven approach: network analysis is used for an initial screening of the vulnerabilities of a critical infrastructure, and object-oriented modeling is then used to deepen the vulnerability assessment of the screened scenarios; this complementation has been investigated as a feasible way to proceed in this direction [10].

To improve the physical description of the network characteristics within a network analysis for preliminary screening, a model based on random walks is introduced here as an extension of the model in [8], giving proper consideration to the following facts (a minimal simulation sketch is given after the list):

  • each link connecting two nodes is characterized by a transmission capacity which cannot be exceeded;

  • the capacities of the network lines are assumed to stochastically vary, to account for the inherent uncertainties;

  • not only the links on the direct and shortest paths are considered in the analysis of the transmission of flow; this is achieved by randomizing the direction of the flow in output from a node, with the randomization driven by the capacities of the outgoing links, so that the highest-capacity links most probably channel the flow;

  • the network interconnecting links are assumed fallible, with given probabilities;

  • source generation and load demands are assumed to vary stochastically, to account for the fluctuations inherent in the network behavior and operation.
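A minimal Python sketch of this capacity-biased, fallible random-walk propagation is given below; it is purely illustrative, and the names it introduces (e.g. simulate_flow_unit, fail_prob) are not taken from the paper, nor is it the exact algorithm of Section 2.

```python
import random

def simulate_flow_unit(adj, capacity, fail_prob, source, target, rng, max_steps=10000):
    """Propagate one unit of flow from source to target by a random walk in which
    the next link is chosen with probability proportional to its capacity; each
    traversed link may fail with probability fail_prob[(i, j)].
    Returns the visited path, or None if the unit is lost or the walk times out."""
    node, path = source, [source]
    for _ in range(max_steps):
        if node == target:
            return path
        neighbours = adj[node]
        weights = [capacity[(node, j)] for j in neighbours]   # capacity-biased choice
        nxt = rng.choices(neighbours, weights=weights, k=1)[0]
        if rng.random() < fail_prob[(node, nxt)]:             # fallible interconnection
            return None
        node = nxt
        path.append(node)
    return None

# toy 4-node network (0 = source, 3 = target), undirected links stored in both directions
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
capacity = {(i, j): 1.0 for i in adj for j in adj[i]}
capacity[(0, 1)] = capacity[(1, 0)] = 3.0      # a higher-capacity line attracts more flow
fail_prob = {link: 0.05 for link in capacity}  # 5% failure probability per traversal

rng = random.Random(42)
paths = [simulate_flow_unit(adj, capacity, fail_prob, 0, 3, rng) for _ in range(1000)]
print("delivered", sum(p is not None for p in paths), "of 1000 flow units")
```

In a full Monte Carlo analysis, the line capacities, source generation and load demands would be re-sampled from their probability distributions at each trial, so that the statistics of the delivered flow reflect the stochastic variability listed above.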

From the analysis of the network characteristics and behavior, it is also important to gain an understanding of the role that the elements of the infrastructure network play in determining the flow through it, as this can be of great practical aid to network designers and operators in providing indications for network protection. From a topological viewpoint, various measures of the importance of a network node can be introduced. These so-called centrality measures take into account the different ways in which a node interacts/communicates with the rest of the network. Classical topological centrality measures are the degree centrality [11], [12], the closeness centrality [12], [13], [14], the betweenness centrality [12] and the information centrality [15]. The major drawback of these measures is that, to assess node importance, they rely only on topological information affected by the three previously mentioned model simplifications. Then, based on the model proposed in this paper, an extension of the betweenness centrality measure of [16] is computed to more realistically capture the importance of the role played by the different components in determining the flow through the network.
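As a simplified illustration of the idea behind a flow-based betweenness (not the measure of [16] nor the exact definition adopted in this paper), one can count how often each intermediate node is visited by simulated random walks between the source–target pairs of a toy graph:

```python
import random
from collections import Counter

def walk_visit_counts(adj, source, target, n_walks, rng, max_steps=10000):
    """Count how many walks from source to target pass through each intermediate node."""
    visits = Counter()
    for _ in range(n_walks):
        node, seen = source, set()
        for _ in range(max_steps):
            if node == target:
                visits.update(seen - {source, target})
                break
            seen.add(node)
            node = rng.choice(adj[node])          # unbiased next-hop choice
    return visits

# toy 5-node network: node 2 lies on every route between {0, 1} and {3, 4}
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3, 4], 3: [2, 4], 4: [2, 3]}
rng = random.Random(0)

score = Counter()
for s in (0, 1):          # sources
    for t in (3, 4):      # targets
        score.update(walk_visit_counts(adj, s, t, n_walks=500, rng=rng))

for node, count in score.most_common():
    print(node, count)
```

In the measure proposed in the paper, the walks would instead be biased by the stochastic link capacities and would account for link failures, as in the flow model of Section 2.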

An application of the proposed approach is illustrated with reference to a power transmission network system taken from the literature [17].

The paper is organized as follows. In Section 2, a description of the random walk flow propagation model is provided. In Section 3, the topological concept of betweenness centrality measure is recalled and then extended to its randomized flow definition. The results obtained on a case study from the literature are discussed in Section 4. Conclusions are drawn in Section 5.

Section snippets

Randomized flow model of a power transmission network infrastructure

The topological interconnection of a power transmission system can be modeled as a network consisting of N nodes (also called vertices) and K edges (also called arcs or lines): the buses of the electric grid are represented as nodes interconnected by undirected edges representing the transmission lines; N_S nodes are power sources (generators), N_T nodes are targets (loads) and the rest are transmission nodes. The N×N adjacency matrix {a_ij} defines the topological structure of the network, i.e., a_ij = 1 if nodes i and j are directly connected by a line and a_ij = 0 otherwise.
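For concreteness, the short sketch below assembles such an adjacency matrix from an edge list for a small hypothetical 5-bus network (the node numbering and roles are illustrative and not those of the case study):

```python
import numpy as np

N = 5                                              # number of nodes (buses)
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]   # K undirected transmission lines

# N x N adjacency matrix: a_ij = 1 if nodes i and j are connected by a line, 0 otherwise
A = np.zeros((N, N), dtype=int)
for i, j in edges:
    A[i, j] = A[j, i] = 1                          # undirected edges -> symmetric matrix

sources = {0}                                      # N_S generator (source) nodes
targets = {4}                                      # N_T load (target) nodes
transmission_nodes = set(range(N)) - sources - targets

print(A)
print("transmission nodes:", sorted(transmission_nodes))
```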

Randomized betweenness centrality measure

Determining the critical elements of large-scale network infrastructures is an important issue for the reliability and the protection of the network. From a topological point of view, a number of centrality indices have been introduced as measures of the importance of the nodes in a network [18]. These indices take into account the different ways in which a node interacts and communicates with the rest of the network and have proved of value in the analysis and understanding of the role played by the different nodes in the network.
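Such purely topological indices can be computed with standard graph tools; the sketch below uses the networkx library on a small hypothetical graph, only to show that these measures need nothing beyond the connectivity pattern:

```python
import networkx as nx

# small hypothetical graph; in a real study this would be the grid topology
G = nx.Graph([(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)])

degree      = nx.degree_centrality(G)       # fraction of other nodes each node touches
closeness   = nx.closeness_centrality(G)    # inverse of the mean shortest-path distance
betweenness = nx.betweenness_centrality(G)  # fraction of shortest paths through each node

for n in sorted(G.nodes):
    print(n, round(degree[n], 3), round(closeness[n], 3), round(betweenness[n], 3))
```

None of these quantities involves line capacities, failure probabilities or actual power flows, which is precisely the limitation that the randomized betweenness centrality proposed in this work is meant to overcome.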

Application

The artificial transmission network system IEEE 14 BUS [17] is taken as the reference case study. The network represents a portion of the American Electric Power System and consists of 14 bus locations connected by 20 lines and transformers, as shown in Fig. 1. The transmission lines operate at two different voltage levels, 132 and 230 kV. The system working at 230 kV is represented in the upper half of Fig. 1, with 230/132 kV tie stations at Buses 4, 5 and 7. The system is also provided with voltage

Conclusions

In previous works, network analysis has been shown suitable for a preliminary analysis of complex infrastructures aimed at identifying structural criticalities, e.g. the most connected nodes, shortest path lengths of connection, most vulnerable nodes, etc. Limitations of the analysis relate to the neglect of the actual capacities of the links, their probabilities of failure and the fact that flow among network nodes is typically a global phenomenon, not restricted to only direct, shortest paths.

Acknowledgment

The authors wish to thank Prof. Maurizio Delfanti and Dr. Mauro Pozzi of the Department of Energy of the Politecnico di Milano, for their contribution to the work.

References (24)

  • L. Festinger, The analysis of sociograms using matrix algebra, Human Relations (1949).

  • X. Yan, A fuzzy set analysis of sociometric structure, Journal of Mathematical Sociology (1988).

This work has been partially funded by the Fondation pour une Culture de Sécurité Industrielle of Toulouse, France, under the research contract AO2006-01.
