
Open Access 2016 | Original Paper | Book Chapter

5. Phenomenological Simulators of Critical Infrastructures

Authors: Alberto Tofani, Gregorio D’Agostino, José Martí

Published in: Managing the Complexity of Critical Infrastructures

Publisher: Springer International Publishing


Abstract

The objective of this chapter is to introduce and discuss the main phenomenological approaches that have been used within the CI M&S area. Phenomenological models are used to analyse the organizational phenomena of society considering its complexity (finance, mobility, health) and the interactions among its different components. Within CI MA&S, different modelling approaches have been proposed and used, for example physical simulators (e.g. power flow simulators for electrical networks). Physical simulators are used to predict the behaviour of the physical system (the technological network) under different conditions. As an example, electrical engineers use different kinds of simulators during network planning and management activities for different purposes: (1) power flow simulators for the evaluation of electrical network configuration changes (which can be either deliberate changes or the result of accidents and/or attacks) and for contingency analysis, (2) real-time simulators for the design of protection devices and new controllers. For the telecommunication domain one may resort to network traffic simulators, for example the ns2/ns3 codes, which allow the simulation of telecommunication networks (wired/wireless) at the packet-switching level and the evaluation of their performance. Single-domain simulators can be federated to analyse the interactions among different domains. In contrast, phenomenological simulators use more abstract data and models for the interaction among the different components of the system. The chapter describes the main characteristics of some of the main simulation approaches resulting from the ENEA and UBC efforts in the CIP and Complexity Science fields.

1 Introduction

Phenomenological Modelling: “Phenomenological models have been defined in different, though related, ways. A traditional definition takes them to be models that only represent observable properties of their targets and refrain from postulating hidden mechanisms and the like” [1].
The scope of this chapter is to introduce and discuss phenomenological approaches for Modelling Analysis and Simulation (MA&S) of systems involving Critical Infrastructures (CI’s). Phenomenological models provide a means to analyse the organizational phenomena of society considering its global complexity (finance, mobility, health, social, energetics, communications, etc.) and the interactions among its different components. With respect to CI’s, different modelling approaches have been introduced and used, spanning from very accurate simulators such as “physical simulators” (e.g. power flow simulators for electrical networks) to more abstract ones such as I/O models (e.g. Leontief models for finance).
There is no clear-cut definition of “phenomenological models”; however, the term is normally restricted to those modelling activities based on a massive set of “parameters” to be fed by the modeller. The opposite of phenomenological models are the “ab initio” ones, where parameters are limited to a minimum irreducible set. Alternatively, one may qualify phenomenological models as those disregarding internal functional details, thus focusing on the effective response.
Regardless of the semantic boundaries, any MA&S activity relies on a “conceptualization” (i.e. a formal, possibly mathematical, representation) of the inspected system. The first step of any scientific approach to a technological system is its “representation”. It is worth noting that an “elective” representation does not exist: depending on the commitment, available information, knowledge and computational means, the “most effective” representation (if any) will be different.
The selection of the model, and consequently of the simulation paradigm, depends on the commitment and on the availability of data. Physical (or domain) simulators are used to predict the behaviour of the physical system (the technological network) under different conditions and hence to take critical decisions or enforce structural improvements. As an example, electrical engineers use different kinds of simulators during planning and network management activities, depending on their different purposes: power flow simulators are adequate for the evaluation of electrical network configuration changes (which can be either deliberate changes or the effects of accidents and/or attacks) and for contingency analysis, while real-time simulators are required for the design of protection devices and new controllers. Similar considerations apply to other energy or goods delivering infrastructures, such as gas, fuel and water transport and distribution. Concerning the telecommunications domain, or other non-conservative distribution systems, one may resort to network traffic simulators, as, for instance, ns2/ns3, which allow the assessment of telecommunication network performance for both wired and wireless architectures.
Single-domain simulators can be federated to analyse the interactions among different domains, thus leading to specific simulation activities which are covered elsewhere in this book. On the other hand, phenomenological simulators may use more abstract data and models for the interaction among the different components of the system, thus providing the global response of the system (i.e. of the system of systems).
Within the phenomenological MA&S activities, we will shortly cover the approaches underlying the most widespread of them:
  • Topological Analyses. Topological and qualitative approaches are suitable for the identification of general characteristics and possibly emergent behaviour of technological networks. In general they do not require very detailed input data and their computational effort is limited. As a consequence, these approaches are suitable for the analysis of general properties of very large networks (e.g. the internet) and reveal large-scale effects which may be hidden by details.
  • Input-Output Models. In systems engineering and in economics, input-output models are based on the concept of “blocks” that have a given transfer function expressed by a mathematical formula. The blocks are connected in a certain topological arrangement. For a given block, the output depends on the input to the block. These models can be deterministic when the laws that govern the blocks are well known (e.g., Newton’s law) and the blocks will always give the same output for the same input. When the laws that describe the system blocks are not exactly known (or depend on some stochastic factors), the models can be probabilistic (including those that follow stochastic laws), in which case there is only a certain expectation of getting some output for a given input. Among this group it is worth mentioning the Inoperability I/O Model (IIM) [2, 3] and the Dynamic IIM model [4].
  • System Dynamics. Input-output models provide the output given the input. Mathematically, there are two possible states of a system, the steady state and the transient state. The steady state occurs after the system output settles down for an input that has settled down. However, if the input changes, the output will adapt (if stable) to the new input. The trajectory of the system when transitioning from the initial state to the new state depends on the internal dynamics of the system (“inertia” in physics). The system blocks can be connected to provide each other with positive or negative feedback loops (control systems theory). In economics, these models relate production and consumption variables at a macroscopic level.
  • Stochastic Models. In principle all models may be extended to introduce nondeterministic behaviour. In this respect, one may basically identify two different approaches. On one side, one may perform deterministic simulations with a wide range of random boundary conditions [5]; on the other side, the dynamics of the system may be intrinsically stochastic [6].
  • Agent-based simulation. Agent-based functional modelling paradigms are based on representations of the system by different components, each behaving according to given (deterministic or stochastic) rules depending on its status and on a limited set of features of the components it is related to. Agent-based functional modelling approaches, in particular, use a description of the system based on the observed knowledge of how the system behaves under a set of situations. Agents are given attributes according to their observed behaviour. These attributes play a role similar to the transfer function concept in systems engineering, but are described by “if-then” statements rather than mathematical formulas. Agent-based simulation may represent a useful tool to perform exercises, what-if analysis and serious gaming. For instance, agent analysis may allow the optimization of crisis scenarios based on previous expert experience.
  • I2Sim combines several of the above methods. It uses agent-based concepts to relate system blocks that cannot be described by mathematical equations, such as the operation of a hospital, and mathematical formulas or logical relationships to describe, for example, the operation of transformer and breaker arrangements in an electrical substation. In economics, Leontief’s production model relates the input resources of a sector with the output of that sector, linearized around an operating point. I2Sim extends this concept by allowing nonlinear relationships among the input resources and the output resource, and also by including human factors such as tiredness, enthusiasm, and others that are not directly part of the input resources but that alter the effectiveness of the process.
As already mentioned, in general the choice of a suitable approach depends on the quantity and quality of the available data, the scale of the analysis and the modelling objective [7, 8]. Different approaches can be integrated in order to build complete platforms and tools for comprehensive CI M&S and analysis. Figure 1 shows a possible architecture for a comprehensive modelling, analysis and simulation approach. The proposed architecture highlights the need to manage a possibly huge quantity of heterogeneous data and the different analyses that can be performed on these data. In particular, the figure shows the different phenomenological simulators that will be described in the following sections and their main modelling and analysis scope.
The chapter will describe the main characteristics of some of these simulation approaches, in particular those approaches that have been extensively applied in different research projects at ENEA and UBC.

2 Phenomenological Approaches

2.1 Leontief I/O Models

Leontief approaches have been defined mainly for the study of interdependency effects in economic systems. A Leontief model is an Input-Output model where the dependencies among different domains (in the original model, economic sectors) are represented through an input-output matrix to relate the amount of input resources needed for a given amount of finished product. The original Leontief model assumes a linear (or linearized around an operating point) relationship between the input and the output variables.
$$x = Ax + c \Leftrightarrow x_{i} = \mathop \sum \limits_{j} a_{ij} x_{j} + c_{i} \;\forall i$$
The term \(x_{i}\) represents the total output of industry or economic sector i, and the coefficient \(a_{ij}\) represents the dependency between sectors i and j (sector j requires from sector i an amount of resources represented by the coefficient \(a_{ij}\)). The term \(c_{i}\) represents the “surplus” from sector i, that is, the output from sector i that is not needed by the other sectors and, therefore, is available as external output from the production system. In the context of CI MA&S the Leontief approach has been extended by considering the inoperability of a CI network. The inoperability represents the expected level of malfunctioning of a network, expressed as a percentage. I/O models based on inoperabilities are commonly referred to as “Inoperability Input/Output Models (IIM)” and are described in another chapter of this book. The IIM models can be described using the following system of linear equations proposed in [2]:
$$Q_{i} = \sum_{j = 1}^{N} M_{ij} Q_{j} + \gamma_{iA} D_{A}$$
where the Q’s are the components’ inoperabilities, M is the relational matrix, \(D_{A}\) is the disturbance and \(\gamma_{iA}\) measures the impact of the disturbance on sector i (see also Sect. 2.1.1). Using this approach it is possible to calculate the inoperabilities of a system due to any external disturbance \(D_{A}\). Despite its simplicity, this model can be useful to understand non-trivial system behaviour due to the intrinsic complexity of the system of systems formed by (inter-)dependent CI networks.
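As a concrete illustration, the following minimal sketch (in Python with NumPy) solves the static IIM equation for the equilibrium inoperabilities, i.e. \(Q = (I - M)^{-1}\gamma D_{A}\). The 3-sector relational matrix and the disturbance values are purely illustrative and are not taken from any real case study.

```python
import numpy as np

# Illustrative 3-sector relational matrix M: M[i, j] is the fraction of
# sector j's inoperability transmitted to sector i (values are made up).
M = np.array([
    [0.0, 0.3, 0.1],
    [0.2, 0.0, 0.4],
    [0.1, 0.2, 0.0],
])

gamma = np.array([1.0, 0.0, 0.0])  # the disturbance hits sector 0 directly
D_A = 0.2                          # external disturbance magnitude

# Steady state of Q = M Q + gamma * D_A  ->  Q = (I - M)^(-1) gamma * D_A
Q = np.linalg.solve(np.eye(3) - M, gamma * D_A)
print(Q)  # inoperability (between 0 and 1) of each sector at equilibrium
```

With these example values, a 20% disturbance on sector 0 induces non-zero inoperabilities on the other two sectors purely through the couplings encoded in M, which is the kind of systemic effect the IIM is meant to expose.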
In the next section a particular extension of the IIM modelling approach adopted at ENEA is described.

2.1.1 ENEA Extended Leontief Models

As an enhancement of the IIM approach, a stochastic chain evolution law may replace the deterministic Leontief one, thus creating a more appropriate tool to dynamically follow the (stochastic) transition from one equilibrium state to a new one and possibly mimic the cascading effects triggered by unwanted disturbances. Moreover, as a variation of the “System of Systems” approach, each network is not treated as a single monolithic entity, but its inner structure is dealt with. Multiple implementations of the same scenarios at different levels of granularity have been compared, providing evidence for the intrinsic inconsistency of high-level abstraction models that disregard the actual geographic distribution of the networks [CRITIS2009].
Indeed, one can extend the former approach to introduce temporal dynamics into the model:
$$Q_{i}\left( t + \Delta t \right) = \sum_{j = 1}^{N} M_{ij} Q_{j}\left( t \right) + \gamma_{iA}(t) D_{A}(t)$$
Considering \(\Delta t \to 0\), the previous equation becomes a stochastic differential equation
$$dQ_{i} = \sum_{j = 1}^{N} h_{ij} Q_{j}\left( t \right)\,dt + \gamma_{i}\left( t \right)\,dD_{A}(t)$$
\(dD_{A} (t)\) represents the “power” of disturbance (disturbance per unit time) and the matrix h is defined as follows
$$h_{ij} = \lim_{\Delta t \to 0} \frac{(M - I)_{ij}}{\Delta t}$$
Under the constraints that the external disturbance and the response of the components are constant and that the inoperabilities lie within the [0, 1] range, an explicit solution to the previous system of equations has been given in [6]. Figure 2 shows a typical evolution of the inoperabilities in a system of systems of CI networks. The inoperabilities are due to an undesired event directly impacting only one component in the model (a local disturbance). As can be seen, the fault propagates, affecting other components. After a while, contrary to what one might expect, the most impacted component is no longer the one initially perturbed (box in Fig. 2).
Indeed, the systemic behaviour is reflected precisely in the fact that the response of the system does not depend on local quantities but on its global characteristics.
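To illustrate how such dynamics can be explored numerically, the sketch below integrates the discrete-time version of the model with a simple forward-Euler loop. The matrix, the disturbance profile and the time step are invented for illustration; this is not the explicit solution derived in [6], nor the ENEA implementation.

```python
import numpy as np

# Illustrative relational matrix and disturbance coupling (same toy values
# as in the static example above).
M = np.array([[0.0, 0.3, 0.1],
              [0.2, 0.0, 0.4],
              [0.1, 0.2, 0.0]])
gamma = np.array([1.0, 0.0, 0.0])   # the disturbance enters through component 0

dt = 0.01
h = (M - np.eye(3)) / dt            # h_ij = (M - I)_ij / dt, per the definition above

def d_DA(t):
    """Illustrative 'power' of the disturbance: a pulse that decays in time."""
    return 0.5 * np.exp(-t)

Q = np.zeros(3)
for step in range(2000):
    t = step * dt
    dQ = (h @ Q) * dt + gamma * d_DA(t) * dt
    Q = np.clip(Q + dQ, 0.0, 1.0)   # inoperabilities constrained to [0, 1]
print(Q)                            # inoperabilities after the disturbance has decayed
```

Replacing the deterministic disturbance increment with a random one (e.g. adding Gaussian noise to `d_DA`) turns the loop into a crude stochastic-chain simulation of the kind discussed above.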

2.2 System Dynamics

System Dynamics tries to represent the nonlinear behaviour of a complex system using dynamic stock-and-flow diagrams. These diagrams are formed by stocks, representing the entities in the model that accumulate or deplete over time, and by flows, representing the accumulation rates of the related stocks. System Dynamics models include positive and negative feedback loops to relate production and consumption variables at a macroscopic level. One of the most famous applications of System Dynamics is the Forrester World Model, used to study the limits to growth of the planet. The Forrester World Model is a flat model (all processes occur in the same layer) that considers the following systems: food, industrial, population, non-renewable resources, and pollution. In the CIP field there are a number of approaches that use System Dynamics (SD in the following). For instance, in [9] the SD approach is used to assess the impact of cyber-attacks on critical infrastructures. The methodology compares the behaviour of a complex physical process in two situations: the critical assets in their normal behaviour and the critical assets under cyber-attack. In this way, the methodology can be used to assess the significance of the considered cyber asset.
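To make the stock-and-flow idea concrete, the following self-contained sketch integrates a single hypothetical stock (a water reservoir serving consumers) with an inflow, a demand-driven outflow and a negative feedback loop (rationing as the stock empties). The scenario and all numbers are invented; they are not taken from [9] or from the Forrester model.

```python
# Minimal stock-and-flow sketch: a single stock (reservoir level) with an
# inflow and a demand-driven outflow moderated by a negative feedback loop.
dt = 0.1          # time step [h]
level = 500.0     # stock: stored water [m^3]
capacity = 1000.0
inflow = 40.0     # flow: pumping rate [m^3/h]
base_demand = 60.0

history = []
for step in range(1000):
    # Negative feedback: as the stock empties, rationing reduces the outflow.
    rationing = level / capacity            # 1.0 = full service, 0.0 = empty
    outflow = base_demand * rationing
    level += (inflow - outflow) * dt        # the stock integrates the net flow
    level = min(max(level, 0.0), capacity)
    history.append(level)

print(history[-1])  # the level settles where the inflow balances the rationed demand
```

Full SD tools add many coupled stocks, delays and both polarities of feedback, but the numerical core is the same accumulation loop shown here.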
The SD approach has also been used in the framework of the CRISADMIN (CRitical Infrastructure Simulation of ADvanced Models on Interconnected Networks resilience) EU project [10], which aims to develop a tool to evaluate the impact of large catastrophic events and/or terrorist attacks on critical infrastructures. The tool is a DSS useful for the assessment and management of critical events. The DSS objective is to simulate preventive measures and emergency responders’ activities during an emergency. The DSS is available in the form of a prototype and was used in four test cases: the United Kingdom flood (2007), the Central Eastern Europe flood (2002), the Madrid terrorist attack (2004), and the London terrorist attack (2005).

2.3 i2SIM

The I2Sim (Integrated Interdependencies Simulator) was developed at The University of British Columbia to extend the capabilities of large engineering systems simulation by incorporating phenomena that cannot be expressed in terms of mathematical transfer functions [11]. For example, the operation of a hospital in terms of patients accepted per hour cannot be captured by physical equations, but it is known to the hospital manager and can be captured in an input-output table called an HRT (human readable table).
Figure 3 shows an example of an HRT for a hospital emergency unit.
In the table, the full operation of the hospital, 20 patients per hour, is achieved when the electricity is 100 kW, the water is 2000 l/h, there are 4 doctors and 8 nurses, there is no physical damage (for example, due to an earthquake) and the doctors are not tired. In the scenario (circled values), there is no lack of electricity or doctors, but there are limited resources in terms of nurses, physical integrity, some tiredness of the doctors, and mostly lack of water. The output in this example is limited to 10 patients per hour due to the lack of enough water.
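A minimal sketch of how such an HRT could be evaluated in code is given below. Only the top row (20 patients/h at 100 kW, 2000 l/h, 4 doctors and 8 nurses) comes from the example above; the lower rows and the lookup logic are assumptions made for illustration and do not reproduce the actual i2Sim implementation.

```python
# Minimal sketch of evaluating a Human Readable Table (HRT): the cell output
# is limited by the scarcest resource, as in the hospital example above.
HRT = {
    # output (patients/h): {resource: minimum amount required} -- only the
    # 20 patients/h row comes from the text; the lower rows are invented.
    20: {"electricity_kW": 100, "water_lph": 2000, "doctors": 4, "nurses": 8},
    15: {"electricity_kW": 75,  "water_lph": 1500, "doctors": 3, "nurses": 6},
    10: {"electricity_kW": 50,  "water_lph": 1000, "doctors": 2, "nurses": 4},
    5:  {"electricity_kW": 25,  "water_lph": 500,  "doctors": 1, "nurses": 2},
    0:  {"electricity_kW": 0,   "water_lph": 0,    "doctors": 0, "nurses": 0},
}

def cell_output(available):
    """Largest output row whose requirements are all met by the available inputs."""
    for output in sorted(HRT, reverse=True):
        if all(available.get(r, 0) >= need for r, need in HRT[output].items()):
            return output
    return 0

# Scenario similar to the one described above: enough power and doctors,
# fewer nurses, and water limited to 1000 l/h -> output limited to 10 patients/h.
print(cell_output({"electricity_kW": 100, "water_lph": 1000,
                   "doctors": 4, "nurses": 6}))
```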
Figure 4 shows a simple sample system for i2Sim. The production units in i2Sim are called “cells” (Fig. 5a); they receive inputs (physical or modifiers) and produce one output. Other basic ontological elements are the “channels” (Fig. 5b), the connections among cells that deliver the tokens from one cell to another. Channels may introduce losses and delays in the delivery of the tokens. The channels constitute an equivalent of the token transportation system. For example, there are many pieces of water pipe connecting the water pump station and the hospital, but a single equivalent channel can capture the water losses due to cracks in the pipes. At the output of the cell there is a “distributor” that splits the output of the cell into the portions (ratios) delivered to the other cells. How the split ratios are determined is a “decision” made in a separate layer outside the system in the figure. The split of the outputs at the distributors is fundamental to optimize the total system objective, e.g. saving lives during a disaster.
The fundamental problem during a natural disaster, cyber-attack, or system failure, is that the resources that the system uses during normal operation will be limited because of the damage caused by the event. The decisions at the distributors are made by optimizers, either of mathematical or human type. Figure 6 shows the HRT for an electrical system substation that normally delivers 60 MW of electricity. If one of the two transformers is damaged then the output will be limited to 30 MW and a decision will have to be made as to which customers will receive the available power. This decision should be made in terms of the importance of the cells that will receive this power within the global objective function of the system. For example, during a disaster the global objective will be to save human lives. It then makes sense to send all the available power to the hospitals. However, if the water pump stations do not receive power, the hospital will not be able to operate, not because of the lack of electricity but because of the lack of water. The allocation of the available electricity, water, and other resources is a mathematical optimization problem that changes dynamically in time as system repairs are made and further damage occurs. The i2Sim framework allows the incorporation of physical, cyber-physical, organizational, and human variables within the context of optimizing the global system’s objective.
I2Sim follows a layered approach (Fig. 7) to integrating physical and non-physical phenomena. The layers illustrated in Fig. 7 include: the Physical Production Layer (similar to Leontief’s production layer, expanded to include nonlinear relationships and human factors), the Geographical Damage Layer (which includes the calculations of the damage caused by an earthquake, for example), the Management and Organizational Layer (which includes the policies and procedures that regulate who makes what decisions), the Cyber-system Layer (which includes the signals that control the actions to actuate the physical equipment and the communications among managers and responders), and the People’s Well-being Layer (which includes, for example, the results of the actions of the system in terms of consequences on quality of life).
I2Sim’s solution engine has the capability of handling very large systems so that the degree of detail in the sub-systems and their interactions is limited mostly by the degree of resolution of the data available and the uncertainty of the values of these data.
Structurally, i2Sim follows the Multi-Area Thévenin Equivalent (MATE) concept developed for the simulation of large power systems [12]. The main premise of MATE is that a large system is made up of smaller subsystems with links among them. Algorithmically, the MATE solution proceeds in several parallel/sequential stages: first the subsystems (of lower dimensionality than the full system) are solved separately (possibly simultaneously on parallel processors). Then the dimensionality of each subsystem is reduced down to the number of links that connect that particular subsystem to the other subsystems (Thévenin equivalents). Next, the Thévenin equivalents are brought together to form the links subsystem, of dimensionality equal to the total number of links, and the links subsystem is solved. The solution gives the flows in and out of the links connecting the subsystems. Finally, the individual subsystems are “updated” with the links solution. This concept has been generalized in i2Sim for the general framework of Fig. 7.
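The following numerical sketch illustrates the MATE staging for the simplest possible case: two linear subsystems \(A_k x_k = b_k\) joined by a single link of impedance z. All values are invented, and a production MATE solver (as in [12]) is of course far more elaborate.

```python
import numpy as np

# Two independent linear subsystems A_k x_k = b_k (values are illustrative).
A1 = np.array([[4.0, -1.0], [-1.0, 3.0]]); b1 = np.array([10.0, 0.0])
A2 = np.array([[5.0, -2.0], [-2.0, 4.0]]); b2 = np.array([0.0, 0.0])

# One link of impedance z joins boundary node 1 of subsystem 1
# to boundary node 0 of subsystem 2.
p1 = np.array([0.0, 1.0])   # picks the boundary node of subsystem 1
p2 = np.array([1.0, 0.0])   # picks the boundary node of subsystem 2
z_link = 0.5

# Stage 1: solve each subsystem alone (these solves could run in parallel).
x1_0 = np.linalg.solve(A1, b1)
x2_0 = np.linalg.solve(A2, b2)

# Stage 2: Thevenin reduction of each subsystem to its boundary node.
e1, e2 = p1 @ x1_0, p2 @ x2_0            # Thevenin voltages
Z1 = p1 @ np.linalg.solve(A1, p1)        # Thevenin impedances
Z2 = p2 @ np.linalg.solve(A2, p2)

# Stage 3: solve the (here 1x1) links subsystem for the link flow.
i_link = (e1 - e2) / (Z1 + Z2 + z_link)

# Stage 4: update each subsystem with the link flow injected at its boundary.
x1 = x1_0 - np.linalg.solve(A1, p1) * i_link
x2 = x2_0 + np.linalg.solve(A2, p2) * i_link
print(x1, x2, i_link)
```

In i2Sim the same staging is reused with cells and channels in place of electrical nodes and branches, so that each infrastructure can be reduced to its interface with the others before the coupled problem is solved.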
In the sample system of Fig. 4, the source resources are provided by utilities that may constitute a complete infrastructure subsystem, for example the electrical grid, the water system, the transportation system, the telecommunications system, and others. Similarly, the outputs of some of the i2Sim cells can be given out to other infrastructures in an action that is opposite to that of a source, that is, into a load/sink. Each one of these subsystems can be modelled with a separate simulator (Fig. 8) best suited to the scenario under analysis. This “federation of external simulators” is coupled to the i2Sim “links subsystem”. The links subsystem is then optimized according to a global objective function, in a process that involves updating the external subsystems, as described for MATE.
The federated simulators in Fig. 8 are coupled together through software adapters into a common service bus. The simulation proceeds along the time line using a master clock controller (Fig. 9). The different subsystems that constitute the integrated i2Sim framework will have different response times (different “time constants”). For example, the supply of electricity can be controlled within seconds or milliseconds, while the water system may take a few minutes, and the organizational system of a hospital or emergency response management unit may take longer. To coordinate these different response rates, i2Sim uses multirate concepts developed in signal processing and simulation theory. The MATE solution framework allows for the integration of multirate concepts using interpolation and decimation techniques to maintain the synchronicity of the solution.
In addition to the optimization of resource allocation during disaster management, i2Sim can also be applied to evaluate the resiliency of a city or a region. In the case, for example, of a “smart city”, the recovery of the system of infrastructures after a natural disaster, cyber-attack, or equipment failure should be managed in such a way that the most critical services are restored first. The overall objective in this application is to maximize the well-being of the citizens, and this well-being can be mapped into a resilience index [13].
Figure 10 illustrates an example of a city where some basic infrastructures, electricity, water, and ICT have suffered damage and their delivered resources are limited. In this case the system objective function is to maintain the well-being of the city residents. We define a Well-Being Index (WBI) (“wee-bi”) using an HRT that shows the relative importance of the availability of certain services, in this example, electricity, water, general city services (banking, food, etc.), and ICT (internet, etc.). This is a subjective index that will depend on the area of the city and the country and will require the collaboration of social scientists and psychologists to define. The global objective of the optimization problem is to maximize the resiliency index based on this HRT table. Notice that the WBI can be highly nonlinear. This example further illustrates the capability of i2Sim to incorporate human factors into the system solution.
The HRT tables in i2Sim provide the flexibility to incorporate physical and non-physical factors into the same solution framework. In addition, since these tables may have a limited number of rows, the detail in the description can be adapted to the amount/uncertainty of knowledge for a given cell entity. The simplest HRT would have two rows indicating that the cell is either operating at full capacity (100%), or is totally non-operative (0%). In a more detailed analysis, with higher granularity of information, the number of rows would be larger. The tables in Fig. 10 have different granularities. The combinatorial solution of i2Sim uses the discrete HRT tables to find the optimum combination of rows across all cells in the system that maximizes the output objective function over a certain time scenario. Two optimization methods that have been successfully applied include reinforcement learning [14] and ordinal optimization [7].
In very large systems, however, with a large number of cells, distributors, and other components, a combinatorial solution can have very high dimensionality. An alternative solution to this problem is to convert the discrete relationships in the HRTs into continuous analytical functions. Figure 11 illustrates the analytical-i2Sim version. In this version, the columns of the HRTs are synthesized using continuous hyperbolic function approximations.
With the HRT columns represented by continuous functions, a system of equations can be formed where each cell contributes an equation of the form
$$y_{i} = \min \left\{ q_{1}(x_{1}),\, q_{2}(x_{2}),\, q_{3}(x_{3}) \right\}$$
where \(q_{j}\) is the function that approximates column j of the HRT. The \(q_{j}\) functions are assumed to be linearly independent. The cell equations can now be combined with the distributor equations, and with the equations for the other components in the i2Sim ontology, to form a system of nonlinear equations that can be solved using a Newton-Raphson algorithm. The trajectory of the system towards maxima and minima can be tracked using the associated Hessian matrix for gradient-type optimization methods. This work is currently under development. A variation of this analytical method, involving a first-order approximation of the \(q_{j}\) functions combined with a linear programming algorithm, has also been developed. This version achieves solutions that are orders of magnitude faster and can be used as a good first-order approximation to many problems, or as a starting base-point for systems with stronger nonlinearities. The optimization along the time line of the event can be obtained using machine learning techniques such as reinforcement learning [14].
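As an illustration of the analytical approach, the sketch below smooths two hypothetical HRT columns with sums of tanh steps and combines them with the min operator of the equation above. The functional form, the sharpness parameter and the thresholds are assumptions for illustration, not the published analytical-i2Sim formulation.

```python
import numpy as np

def smooth_column(thresholds, levels, sharpness=10.0):
    """Return q(x): a smooth (tanh-based) approximation of a staircase HRT
    column that rises to the next output level each time x crosses a threshold."""
    thresholds = np.asarray(thresholds, dtype=float)
    steps = np.diff(np.concatenate(([0.0], np.asarray(levels, dtype=float))))
    def q(x):
        return float(np.sum(steps * 0.5 * (1 + np.tanh(sharpness * (x - thresholds)))))
    return q

# Illustrative columns: water (l/h) and staffing (doctors) vs. patients/hour.
q_water   = smooth_column(thresholds=[500, 1000, 1500, 2000], levels=[5, 10, 15, 20])
q_doctors = smooth_column(thresholds=[1, 2, 3, 4],            levels=[5, 10, 15, 20])

def cell_output(water, doctors):
    # y = min{q_1(x_1), q_2(x_2)}: the scarcest resource limits the cell.
    return min(q_water(water), q_doctors(doctors))

print(cell_output(water=1200, doctors=5))   # water-limited, close to 10 patients/h
```

Because the smoothed columns are differentiable, gradient-based methods (Newton-Raphson, Hessian tracking) can be applied where the discrete tables would require a combinatorial search.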

3 Topological Analysis

Electrical power transmission and distribution networks, telecommunication (data, voice) networks, roads, oil and gas pipelines, etc. are objects that can easily be represented as graphs, where the nodes represent the different CI components and the links represent their connections (e.g. logical, physical). In this respect there has been a great deal of effort in applying ideas and methods of Complex Systems (CS) science to them, particularly to study their vulnerability and their response to faults. The main aim is to increase their resilience and to reduce the effects that a fault, regardless of its accidental or intentional origin, might produce. In the following, some basic definitions from graph theory are given.
A graph \(G = (V,E)\) is composed of a set of nodes V and a set of edges E. An edge \(e = (v_{i} ,v_{j} ) \in E\) connects the vertices \(v_{i} ,v_{j} \in V\). A graph may be undirected, meaning that there is no distinction between the two vertices associated with each edge, or its edges may be directed from one vertex to another. A graph may be unweighted or weighted; in the latter case each \(e \in E\) has an associated real number (weight) \(w_{e}\). The degree of a node is the number of links entering and/or leaving it. A graph can be fully represented by an adjacency matrix A. As an example, Fig. 12 shows a graph and its adjacency matrix.
The simplest indicator of how intensely a node is connected to the rest of the net is its degree defined as the number of nodes it is connected to or, equivalently, the total number of incoming and outgoing links entering or exiting from it:
$$deg_{i} = \mathop \sum \limits_{j = 1}^{N} a_{ij}$$
The degree distribution P(k) is defined as the (relative or absolute) frequency of nodes of degree k. According to this property a graph can be classified as regular, random or scale-free. Figure 13 shows the difference between the node degree distributions of random and scale-free graphs. In Fig. 14 two examples of graphs are depicted (Fig. 15).
The functional form of P(k) contains relevant information on the nature of the network under study. It has been widely shown that “real” spontaneously grown networks (i.e. grown with no external design or supervision) tend to show a power-law decaying P(k). In this type of network (named “scale-free” networks), loosely connected nodes (leaves) and highly connected ones (hubs) co-exist. Scale-free networks are known to exhibit a high level of robustness against random faults of their elements, while showing a large vulnerability to the removal of specific components: hub removals induce dramatic impacts on the graph connectivity. “Random” graphs, in turn, are those whose P(k) has a Poissonian profile. The “random graph” approximation, although often used to map “real” networks, has been found to represent very few real systems [15].
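The degree-related quantities introduced here and just below (node degrees, average degree, variance and the empirical P(k)) can be computed directly from the adjacency matrix, as in the following Python/NumPy sketch; the small graph is invented and is not the example of Fig. 12.

```python
import numpy as np
from collections import Counter

# Adjacency matrix of a small undirected, unweighted graph (illustrative).
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 1],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 0, 0],
])

deg = A.sum(axis=1)                 # deg_i = sum_j a_ij
N = len(deg)
P = {k: n / N for k, n in Counter(deg.tolist()).items()}   # relative frequency P(k)

print("degrees:", deg)
print("average degree:", deg.mean(), "variance:", deg.var(ddof=1))
print("P(k):", P)
```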
Different statistical indices may be introduced to describe the degree distribution. For instance, it is possible to compute the range of the node degrees using the minimum and maximum degree in the network. The average degree and the variance are defined as follows:
$$\left\langle {deg} \right\rangle = \frac{1}{N}\mathop \sum \limits_{s = 1}^{N} deg_{s}$$
$$\sigma_{deg}^{2} = \frac{1}{N - 1}\sum_{s = 1}^{N} \left( deg_{s} - \left\langle deg \right\rangle \right)^{2}$$
To better describe the topological structure of a network it is possible to introduce the conditional degree distribution, that is, the probability that a node of degree \(k_{0}\) has a neighbour of degree \(k\):
$$P\left( {k |k_{0} } \right) = \frac{{\mathop \sum \nolimits_{(i,j) \in E} a_{ij} \delta_{{deg_{i} ,k}} \delta_{{deg_{j} ,k_{0} }} }}{{\mathop \sum \nolimits_{(i,j) \in E} a_{ij} \delta_{{deg_{j} ,k_{0} }} }}$$
The last coefficient reported in this work is related to the degree correlation. In particular, when nodes of high degree tend to be linked to nodes of high degree, the network is said to be assortative; vice versa, when high-degree nodes tend to be linked to low-degree ones, the network is said to be disassortative. This coefficient can be defined as follows:
$$r = \frac{{\frac{1}{L}\mathop \sum \nolimits_{ij} a_{ij} deg_{i} deg_{j} - \left( {\frac{1}{L}\mathop \sum \nolimits_{ij} a_{ij} \frac{1}{2}(deg_{i} + deg_{j} )} \right)^{2} }}{{\frac{1}{L}\mathop \sum \nolimits_{ij} a_{ij} \frac{1}{2}\left( {deg_{i}^{2} + deg_{j}^{2} } \right) - \left( {\frac{1}{L}\mathop \sum \nolimits_{ij} a_{ij} \frac{1}{2}\left( {deg_{i} + deg_{j} } \right)} \right)^{2} }}$$
In [16] the authors, analysing the diffusive dynamics of epidemics and of distress in complex networks, show that disassortative networks exhibit a higher epidemiological threshold and are therefore easier to immunize, while in assortative networks there is a longer time for intervention before the epidemic/failure spreads. The robustness of complex networks is thus related to their assortativity coefficient.
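A direct implementation of the assortativity formula above, summing over each undirected edge once, is sketched below; the star graph used as a test case is illustrative (a hub linked only to leaves is perfectly disassortative, r = −1).

```python
import numpy as np

def assortativity(edges, deg):
    """Degree-degree correlation coefficient r, following the edge-based sums
    in the formula above; 'edges' lists each undirected edge exactly once."""
    di = np.array([deg[i] for i, _ in edges], dtype=float)
    dj = np.array([deg[j] for _, j in edges], dtype=float)
    m1 = (0.5 * (di + dj)).mean()               # (1/L) * sum of (deg_i + deg_j)/2
    num = (di * dj).mean() - m1 ** 2
    den = (0.5 * (di ** 2 + dj ** 2)).mean() - m1 ** 2
    return num / den

# Illustrative star graph: hub 0 connected to four leaves -> r = -1 (disassortative).
edges = [(0, 1), (0, 2), (0, 3), (0, 4)]
deg = {0: 4, 1: 1, 2: 1, 3: 1, 4: 1}
print(assortativity(edges, deg))
```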
Using definitions from graph theory and different topological indices, several analyses can be performed on a CI network. The MOTIA project [15] used the topological approach to study the main characteristics of ICT networks consisting of a set of devices or components (servers, bridges, etc.) connected by cables or wireless channels (links). Table 1 summarizes the properties that can be analysed using the topological analysis approach [15].
Given a graph representation of an ICT network, it is possible to calculate the topological indices reported in Table 1 to analyse the network characteristics. One of the most important properties to consider is the network robustness. Robustness indicates to what extent the topological properties of the network are stable against damage. For example, there are two basic concepts of connectivity for a graph which can be used to model network robustness: node robustness and link robustness. The “node robustness” of a network is the smallest number of nodes whose removal results in a disconnected or single-node graph. Conversely, the “link robustness” is the smallest number of links whose removal results in a disconnected graph [17]. In [15] the authors use the described topological indices to analyse the Internet.
Table 1 Topological indices

Connectivity: A graph is connected if all nodes are connected to (reachable from) each other.

Distance: The distance d(i, j) between two vertices i and j belonging to a connected part of a graph is the length of one of the shortest paths between them. The distance is symmetric (d(i, j) = d(j, i)) only when the network is undirected.

Eccentricity: The eccentricity ε(i) of a node i in a connected graph G is the maximum of the distances from i to any other node.

Diameter: The diameter diam(G) of a connected part G of a graph is the maximum eccentricity over all its nodes.

Radius: The radius rad(G) is the minimum of such eccentricities.

Wiener index of a node: The Wiener index of a node i, denoted by \(C_{W}(i)\), is the sum of the distances between it and all the other nodes: \(C_{W} \left( i \right) = \sum\limits_{j \in N} {d(i,j)}\).

Wiener index of a graph: The Wiener index of a graph G, denoted by \(C_{W}\), is the sum of distances over all pairs of vertices: \(C_{W} = \sum\limits_{i = 1}^{N} {C_{W} } \left( i \right) = \sum\limits_{i,j = 1}^{N} {d(i,j)}\).

Centrality: Relevance of a node in providing some type of property to the others.

Betweenness: For a node i, this index is the sum, over all pairs of nodes, of the fraction of paths connecting the pair that pass through i. If the number of paths connecting two different nodes j and k is \(n^{jk}\) and the number of such paths passing through node i is \(n_{i}^{jk}\), then \(b_{i} = \sum\limits_{j,k = 1}^{N} {\frac{{n_{i}^{jk} }}{{n^{jk} }}}\).

Clustering: The clustering coefficient provides a measure of the connectivity inside the neighbourhood of a given node: \(C_{i} = \frac{{2N_{i}^{links} }}{{deg_{i} \left( {deg_{i} - 1} \right)}}\), where \(N_{i}^{links}\) is the number of links among the neighbours of the i-th node. In general, nodes with low clustering values might represent regions of weakness in the network.
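Most of the indices in Table 1 are available in standard graph libraries; the following sketch, assuming the NetworkX Python library, computes them on a small invented graph standing in for an ICT network.

```python
import networkx as nx

# Toy undirected graph standing in for an ICT network (illustrative).
G = nx.Graph([(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)])

print("connected:", nx.is_connected(G))
print("distance d(0,4):", nx.shortest_path_length(G, 0, 4))
print("eccentricity:", nx.eccentricity(G))
print("diameter:", nx.diameter(G), "radius:", nx.radius(G))
print("Wiener index of node 2:", sum(nx.shortest_path_length(G, 2).values()))
print("Wiener index of G:", nx.wiener_index(G))
print("betweenness:", nx.betweenness_centrality(G, normalized=False))
print("clustering:", nx.clustering(G))
```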

4 A CI MA&S Platform for Complex and Large Scenarios

This chapter describes the approaches used in the framework of the EU FP7 CIPRNet project (http://www.ciprnet.eu). One of the main technological outcomes of the CIPRNet project is a DSS, named CIPCast, able to provide a 24/7 service to CI operators and emergency (crisis) decision-makers by delivering a continuous risk assessment of CI elements exposed to natural threats. CIPCast has been designed and implemented to allow the prediction and rapid assessment of the consequences of a crisis scenario in an “operational” mode (24/7). CIPCast, however, can also be used in an “off-line” mode for producing risk analyses starting from synthetically produced events (rather than truly occurring ones) or from synthetically produced damages (rather than damages produced by true or synthetic events). In the former case we will talk of an “event simulator”, in the latter of a “damage simulator”. One of the main components of CIPCast (when acting in the “operational” mode) is a continuous process (running on a 24/7 basis) realizing the Risk Assessment Loop, RAL in the following (as shown in Fig. 15). Starting from the prediction of the occurrence of natural hazards and of their strengths, the RAL first estimates the expected damages, then transforms the damages into the effects that they will produce on all Services (carried out by CI), which will be reduced (or switched off), and subsequently estimates the consequences that the loss (or reduction) of Services would have on relevant areas of societal life. The tool can also be used to “weigh” the efficacy of proposed mitigation and healing actions, thus making it a valuable tool for supporting emergency managers, e.g. CI operators, Civil Protection and fire brigades.
This section describes a specific RAL workflow instance that has been implemented for the natural hazard risk assessment of electrical distribution networks. In particular, the described workflow concerns the heavy-rain risk assessment of the electrical distribution network of Rome. The workflow has been implemented in cooperation with the Italian RoMA project partner ACEA, the main electrical utility in Rome. Specifically, the section shows how the different phenomenological simulators for CIs can be used as the building blocks of the different phases of the workflow and, in general, for the realization of additional services for the DSS end users.
The first challenge to face during the development of such platforms is the acquisition of CI network data. In order to perform a comprehensive risk analysis, these data need to cover the different aspects involved in the management of the CI networks. Indeed, the basic requirement for building comprehensive models and, subsequently, comprehensive simulations and analyses is the availability of data on the physical components of the CI networks and on the network management procedures (considering the differences between the procedures adopted in the normal state and during a crisis). The next step for any MA&S activity is then the “conceptualization” of the inspected systems and the construction of formal representations. In [18] the authors propose UML extensions (meta-models) to define the different aspects of an infrastructure’s organization and behaviour, such as ownership and management, structure and organization, resources, risk and relationships. The CEML language proposed in [19] is a graphical modelling language allowing domain experts to build formally grounded models of crisis and emergency scenarios. In general, the infrastructure scale of analysis determines the level of granularity at which the infrastructure interdependencies are analysed and which kinds of approaches can be used in the analysis. At a high abstraction level, the interdependent networks can be modelled and analysed from the system-of-systems point of view. At this level of granularity it is possible to build graph models or IIM models. In the former case, the topological approach can be applied to compute the coefficients and indices described in Table 1 in order to assess, for example, the robustness of the networks or possible component vulnerabilities. In the latter case, IIM models can be used to perform failure propagation analysis. Going to a lower abstraction level, and thus requiring more detailed data, it is possible to use agent-based approaches or I2SIM to perform network and crisis scenario analyses considering functional properties of network components, network management procedures and phenomenological factors that cannot be represented by more abstract models (see Fig. 17).

In particular, CIPCast includes the RECSim simulator [20] (as shown in Fig. 16), which allows the simulation of the electrical distribution network management procedures and their interdependencies with the telecommunication domain. Indeed, electrical distribution operators use SCADA systems to perform remote operations (tele-control) on the electrical grid to ensure a constant and efficient energy supply to the consumers. Tele-control operations bi-directionally couple the telecommunication and electrical networks: faults in one network produce effects which in turn reverberate on the other. RECSim assesses the correct tele-control operations needed for the restoration of the electrical grid based on topological properties of the electrical substations and the telecom nodes. A crucial approximation introduced in CIPCast is the decoupling of the electrical and telecom systems from all the other infrastructures. These two networks are highly dependent and tightly linked; for this reason, their behaviour and their mutual perturbation dynamics occur on time scales which are much shorter than those characterizing the perturbation dynamics of the other infrastructures.
As such, the electro-telecom dynamics are resolved first, on a time scale typical of their interaction (from a few seconds to a few hours), keeping the other infrastructures substantially unperturbed. Once the electro-telecom perturbation dynamics have been solved, the resulting electro-telecom situation (inoperability) is introduced into the complete infrastructure setting in order to estimate the further perturbations produced on the other infrastructures (using I2SIM).
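The two-stage coupling just described can be summarised by the following schematic sketch. Every function name, field and number in it is a hypothetical placeholder introduced for illustration, not the CIPCast, RECSim or i2Sim API, and the restoration dynamics are crudely modelled as an exponential decay.

```python
# Schematic sketch of the two-stage coupling described above (placeholders only).

def assess_damage(hazard_forecast, assets):
    """Stage 0: map the predicted hazard intensity onto damaged CI components."""
    return [a["id"] for a in assets if hazard_forecast.get(a["id"], 0.0) > a["fragility"]]

def solve_electro_telecom(damaged_ids, horizon_s=3600, dt_s=60, restore_rate=1e-3):
    """Stage 1: fast electro-telecom dynamics (seconds to hours), with all other
    infrastructures kept frozen. Restoration by tele-control is crudely modelled
    as an exponential decay of inoperability (an assumption)."""
    q = {cid: 1.0 for cid in damaged_ids}
    for _ in range(0, horizon_s, dt_s):
        for cid in q:
            q[cid] *= (1.0 - restore_rate * dt_s)
    return q

def propagate_to_other_infrastructures(electro_telecom_inoperability):
    """Stage 2: feed the residual electro-telecom inoperability into the slower
    system-of-systems layer (an i2Sim-like model would be used here)."""
    worst = max(electro_telecom_inoperability.values(), default=0.0)
    return {"electro_telecom": electro_telecom_inoperability,
            "water": 0.5 * worst, "transport": 0.2 * worst}  # made-up couplings

# One schematic iteration of the 24/7 Risk Assessment Loop:
assets = [{"id": "substation_A", "fragility": 0.5}, {"id": "telecom_B", "fragility": 0.7}]
damaged = assess_damage({"substation_A": 0.8, "telecom_B": 0.6}, assets)
print(propagate_to_other_infrastructures(solve_electro_telecom(damaged)))
```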

5 Conclusion

This chapter describes the results of several years of research at ENEA and UBC in the field of Critical Infrastructure Protection and Complexity Science. In particular, it describes some phenomenological simulators for complex systems of CI’s and highlights how these tools can be considered fundamental pillars of a CI MA&S platform. This framework allows various kinds of analyses for different end users and, in general, for different objectives. Regardless of the analysis objective, the first step is to build a valid and effective representation of the inspected system. It is worth noting once again that an “elective” representation does not exist: depending on the commitment, available information, knowledge and computational means, the “most effective” representation (if any) will be different. Therefore, different phenomenological approaches are currently applied. The chapter proposes a general framework and platform architecture to integrate the main components of any CI MA&S approach. It further shows the details of the CIPCast and i2Sim platforms, which are compliant with the proposed general paradigm. The CIPCast platform, developed within the CIPRNet project, is a Decision Support System offering a 24/7 service to CI operators and emergency (crisis) decision-makers by providing a continuous risk assessment of CI elements exposed to natural threats. One of the main components of CIPCast (when acting in the “operational” mode) is a continuous process (running on a 24/7 basis) realizing the Risk Assessment Loop (RAL). Within the RAL, an agent-based simulator (RECSim) developed by ENEA and the I2SIM simulator have been used: RECSim simulates the electrical distribution network management procedures (considering the interdependencies between the electrical and the telecommunication domains) and, once the resulting electro-telecom situation (inoperability) has been assessed, the further perturbation produced on the other infrastructures is assessed using I2SIM. In the future, as more technological infrastructure data become available for a specific area, CIPCast will be enriched with a complete system-of-systems representation of the (inter-)dependent networks. Thereby, all the other approaches described in this chapter, such as the topological ones and the IIM models, will be available in real time to perform different analyses such as failure propagation and vulnerability analysis. CIPCast can be used to discover intrinsic vulnerabilities of the technological networks (i.e. vulnerabilities that depend on how components are connected to others of the same or different infrastructures). Ultimately, CIPCast will become a comprehensive decision support system also allowing for investment planning to improve resilience and mitigate risk.

Acknowledgement and Disclaimer

This chapter was derived from the FP7 project CIPRNet, which has received funding from the European Union’s Seventh Framework Programme for research, technological development and demonstration under grant agreement no 312450.
The contents of this chapter do not necessarily reflect the official opinion of the European Union. Responsibility for the information and views expressed herein lies entirely with the author(s).
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://​creativecommons.​org/​licenses/​by/​4.​0/​), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
References

2. Haimes YY, Jiang P (2001) Leontief-based model of risk in complex interconnected infrastructures. J Infrastruct Syst 7(1):1–12
3. Santos JR, Haimes YY (2004) Modeling the demand reduction input-output (I-O) inoperability due to terrorism of interconnected infrastructures. Risk Anal 24(6):1437–1451
4. Lian C, Haimes YY (2006) Managing the risk of terrorism to interdependent infrastructure systems through the dynamic inoperability input-output model. Syst Eng 9(3):241–258
5. Cavalieri S, Chiacchio F, Manno G, Popov P (2015) Quantitative assessment of distributed networks through hybrid stochastic modeling. In: Bruneo D, Distefano S (eds) Quantitative assessments of distributed systems: methodologies and techniques. Wiley, Hoboken. doi:10.1002/9781119131151.ch9
7. D’Agostino G, Scala A (2014) Networks of networks: the last frontier of complexity. In: Understanding complex systems. Springer, Berlin
8. Satumtira G, Dueñas-Osorio L (2010) Synthesis of modeling and simulation methods on critical infrastructure interdependencies research. In: Sustainable and resilient critical infrastructure systems. Springer, Berlin, pp 1–51
9. Genge B, Kiss I, Haller P (2015) A system dynamics approach for assessing the impact of cyber attacks on critical infrastructures. Int J Crit Infrastruct Prot 10:3–17
11. Martí JR (2014) Multisystem simulation: analysis of critical infrastructures for disaster response. In: D’Agostino G, Scala A (eds) Networks of networks: the last frontier of complexity. Springer International Publishing, pp 255–277
12. Armstrong M, Martí JR, Linares L, Kundur P (2006) Multilevel MATE for efficient simultaneous solution of control systems and nonlinearities in the OVNI simulator. IEEE Trans Power Syst 21(3):1250–1259
13. Alsubaie A, Alutaibi K, Martí JR (2015) Resilience assessment of interdependent critical infrastructure. In: The 10th international conference on critical information infrastructures security (CRITIS), Berlin, 5–7 Oct 2015, pp 1–12
14. Khouj M, Sarkaria S, Martí JR (2014) Decision assistance agent in real time simulation. Int J Crit Infrastruct Syst 10(2):151–173
15. MOTIA (Modelling Tools for Interdependence Assessment in ICT Systems) Project Report, Activity 5: Metrics Definition, 2012
16. D’Agostino G, Scala A, Zlatić V, Caldarelli G (2012) Robustness and assortativity for diffusion-like processes in scale-free networks. Eur Phys Lett 97(6):68006
17. Gill NS, Balkishan (2008) Dependency and interaction oriented complexity metrics of component-based systems. ACM SIGSOFT Software Engineering Notes 33(2), Jan 2008
18. Bagheri E, Ghorbani AA (2010) UML-CI: a reference model for profiling critical infrastructure systems. Inf Syst Front 12:115–139
19. De Nicola A, Tofani A, Vicoli G, Villani ML (2011) Modeling collaboration for crisis and emergency management. In: COLLA 2011: the first international conference on advanced collaborative networks, systems and applications
20. Tofani A, Di Pietro A, Lavalle L, Pollino M, Rosato V (2015) CIPRNet decision support system: modelling electrical distribution grid internal dependencies. In: Proceedings on critical infrastructures preparedness: status of data for resilience modelling, simulation and analysis (MS&A), ESReDA workshop, Wroclaw, 28–29 May 2015
21. Albert R, Jeong H, Barabási AL (2000) Error and attack tolerance of complex networks. Nature 406:378–382. doi:10.1038/35019019