Who uses what? And where?
With respect to “who” uses network analysis amongst the targeted areas in a broad sense, the applications of network analysis reviewed here were spread fairly evenly across the disaster management literature (covering all kinds of hazards) and the urban systems CAS literature (a breadth of contexts). When disaggregating this further by specific network analysis technique, it is clear that a wide range of methods is available. This highlights that, in the areas of disaster management and urban systems, network analysis has diversified significantly since the original conception of SNA in the 1970s (see Zhang
2010).
Advances in computational power and data availability have not only facilitated more advanced applications of SNA, but have allowed integration with other computational methods such as modelling and simulation. For example, Rodrigueza and Estuar (
2018) use SNA as a basis for understanding disaster behaviour in an ABM. Furthermore, network analysis has evolved beyond the study of sociology, as transport and critical infrastructure have become modern concerns. For example, georeferenced data has enabled path analysis of transport and mobility, and the “actors” (nodes) in a network no longer need to be people or organisations, but can instead be businesses, homes and emergency facilities in “hybrid social-physical networks” (Bozza et al.
2017). Furthermore, graph theory has facilitated systems-oriented methods such as ENA, which pertains not only to ecosystems but also to the interactions between physical systems such as cities across the world (Bodini et al.
2012).
With respect to “who uses what”, despite the availability of a wide range of specific network analysis methods, there is siloing within the reviewed research domains. A majority (65%) of disaster management studies used either SNA or Routing Problems. However, the application of network methods is broader in the urban systems domain, where 6 out of 8 method categories (Table
2) were used in 51% of studies overall. The focus on SNA and routing-problem-based networks within disaster management suggests established priorities, yet also highlights gaps that network analysis may be able to fill. For instance, the primary application of SNA centres on response networks and organisational collaboration during previous disasters, where two extreme events (e.g. floods) are compared to examine how networks have evolved between two past points in time. This is intended to examine the preparedness of a nation or region. However, the prevalence of such applications suggests that there may be an overemphasis on understanding past events, rather than more directly preparing for future events. Applications that are more present- or future-oriented (e.g. evacuation and emergency service response modelling across spatial transport networks) do involve some element of future preparation; however, as they are based on shortest/optimal path problems, they mostly represent preparedness in the context of spatial movement rather than abstract social collaboration. These findings are understandable: looking to future preparedness requires dealing with a great deal of uncertainty about how context may change from the present, and it is crucial to understand the more fundamental (but highly complex) dynamics behind preparedness in the present before introducing those future uncertainties. Nonetheless, the high prevalence of SNA and Routing Problems suggests more could be done within disaster management to expand the conceptualisation of research problems and to diversify the application of network analysis techniques (Bedinger et al.
2019). This review suggests that disaster management should therefore turn towards the wider urban systems literature for inspiration regarding alternative network analysis methods that consider the interdependency of multiple systems as opposed to mobility only. For instance, ENA applications in urban systems studies often model the interactions between different sectors (Liu et al.
2011).
With respect to “where”, we have further disaggregated this review by the focus of specific network analysis techniques—in other words, whether the metrics used have a local or global perspective. A wide range of metrics were identified in the chosen research domains with an almost equal representation between global and local.
It should be acknowledged that this discussion emphasises centrality metrics, owing to their overwhelming popularity, and that a comprehensive discussion of the reviewed applications of each of the 79 metrics would be cumbersome and outwith the scope of this study.
Unsurprisingly, the most frequently used of all metrics (both global and local) were all local metrics of centrality (Freeman
1979): Betweenness, Degree, and Closeness. These are sociological in origin and have coevolved with the development of SNA, whereby they have typically been used to describe social entities. However, the diversification of network analysis has led to a diversification of metrics in two ways. First, SNA is no longer only “social”; it is the go-to network analysis method regardless of the phenomenon being studied. Although centrality metrics are grounded in SNA, and thus would be assumed to pertain to sociological entities, they have been used to describe a host of other entities, or to describe economic proxies whilst the main method is still SNA. This is important, as it raises the question of why, in disaster management, there remains such a focus on applying SNA to mainly social networks, rather than extending it to other interconnected sectors that are at risk (e.g. health care, economy, transport). Second, centrality metrics are frequently applied in methods other than SNA (e.g. ABM), and the concept of centrality has developed beyond the metrics proposed by Freeman (
1979) to other centrality metrics.
The versatility of centrality metrics and the availability of many other metrics (global and local) highlight the value of network analysis. However, this versatility also presents issues.
How many metrics should I use? When should I use this metric and not the other?
It is challenging to definitively answer how many metrics one should use in network analysis, as this depends on context, adopted method, time, resources, and knowledge of the end user. However, based on the results in “
Number of metrics used” section, three things are clear: the average number of metrics observed in this review is three; the majority of studies adopt fewer than three; and there is wide variation across the studies (some adopt several (eight) and some adopt none at all). Centrality metrics are typically used to capture specific characteristics of a network, such as evaluating how a single node is connected to the rest (degree centrality), which provides a static overview of network structure. From a more dynamic perspective, betweenness centrality evaluates how ‘information’ propagates through the network. Other centrality metrics, such as eigenvector centrality, aim to fill the gaps of basic nodal metrics such as degree centrality: eigenvector centrality captures ‘information’ (such as a node’s influence) whilst also describing the connectivity that degree centrality evaluates. Given these three perspectives, this may explain why studies returned in this review typically adopt an average of three metrics. This would suggest that there is a minimum number of characteristics required to evaluate a network.
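The contrast between a purely local count (degree) and a distance-based view (closeness) can be sketched in a few lines of pure Python; the four-actor network below is invented for illustration and does not come from any reviewed study:

```python
from collections import deque

# Hypothetical undirected network of four actors.
graph = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B"},
    "D": {"A"},
}

def degree_centrality(g):
    """Degree centrality: number of neighbours, normalised by (n - 1)."""
    n = len(g)
    return {v: len(nbrs) / (n - 1) for v, nbrs in g.items()}

def closeness_centrality(g):
    """Closeness centrality: (n - 1) over the sum of shortest-path distances."""
    n = len(g)
    result = {}
    for source in g:
        dist = {source: 0}
        queue = deque([source])
        while queue:  # breadth-first search from `source`
            u = queue.popleft()
            for w in g[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        result[source] = (n - 1) / sum(dist[v] for v in g if v != source)
    return result

print(degree_centrality(graph)["A"])     # 1.0 — A is connected to every other actor
print(closeness_centrality(graph)["D"])  # 0.6 — D reaches the others only through A
```

Each metric answers a different question about the same network, which is consistent with studies adopting a small set of complementary metrics rather than one.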
If the purpose of having an array of metrics is to capture different characteristics of a network, then it could be argued that more metrics are better, as each metric would contribute to holistically describing the system. However, this is where context becomes important. For example, Cui and Li (
2020) aimed to measure two concepts: social capital and how it is used in community resilience. Both of these concepts are multi-faceted and represent complex sociological interactions, such as sense of belonging, collective efficacy, trust, and reciprocity. To achieve this, Cui and Li (
2020) adopt one global metric (Density) and seven local metrics (Betweenness, Degree, Closeness, Path Length, Efficiency, Constraint, Structural Holes) as appropriate to these concepts. In a different context, Balsiger and Ingold (
2016) aimed to investigate how actors within flood governance collaborate and share information based on perceptions of sustainability, using just one local metric (Degree Centrality). Degree Centrality uses the concept of “Structural embeddedness” (see Granovetter
1992), which describes how embedded an actor is in the network based on how central they are (i.e. how many actors they are connected to). Both studies clearly define their objectives and achieve them through appropriate metrics; the former study’s scope is broader and more complex, so a wider range of metrics is perhaps necessary.
However, one could also use the latter study to highlight inconsistent metric applications between studies measuring similar concepts. Balsiger and Ingold (
2016) and another study (Comfort et al.
2016) both aim to examine collaboration. Comfort et al. (
2016) use another sociological concept: “bridging” actors. These are defined as actors that link between two indirectly connected actors, and this can be measured using Betweenness Centrality. If the objectives of the two studies are similar, why has one adopted Betweenness Centrality and the other has not? Further contradicting these observations is Faas et al. (
2017), whose objective is also to analyse bridging actors; in this instance, however, Degree Centrality is the only metric presented in the paper. These examples show that it is difficult to justify which and how many metrics should be used based only on past applications in disaster management and urban systems research, because the existing body of work is inconsistent.
So how should we justify which and how many metrics should be used? The fundamental aims of network analysis are arguably to represent complex concepts (e.g. multi-faceted social interactions) with a systems perspective (i.e. what is happening both locally and globally). Selecting only one metric is insufficient to achieve either: one metric can only cover one concept, and either global or local characteristics, not both. Therefore, at least one global and one local metric is desirable. In addition, more is not necessarily better, as this runs the risk of redundancy if the results of the chosen metrics are correlated (Miele et al.
2019).
Therefore, we would argue that an important step in selecting network metrics is a correlation analysis in order to minimise this risk. For example, the R package,
Central Informative Nodes in Network Analysis (CINNA) (see Ashtiani et al.
2019) enables comparisons across numerous measures of centrality to identify the most important metrics using Principal Component Analysis (PCA) and pairwise associations.
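CINNA is an R package, but the redundancy check underlying such tools is language-agnostic. The sketch below (in Python, with invented per-node scores, not data from any reviewed study) shows how a pairwise correlation flags a largely redundant metric pair:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two metric vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative (made-up) per-node scores for three metrics.
degree      = [3, 1, 2, 5, 4]
betweenness = [2.9, 1.2, 2.1, 4.8, 4.0]  # tracks degree closely
closeness   = [0.9, 0.4, 0.8, 0.3, 0.5]  # behaves differently

# A pair correlating near 1 is largely redundant; keeping both adds little.
print(pearson(degree, betweenness))  # close to 1 -> redundant pair
print(pearson(degree, closeness))    # much weaker -> complementary information
```

In practice one would also inspect the correlation structure across all candidate metrics at once (as CINNA does via PCA) rather than pair by pair.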
Are some metrics more versatile than others? Can common metrics consistently describe the same characteristic across contexts?
The results in the “
Network metrics” and “
Network characteristics” sections highlight that there is diversity in the characteristics of a system, or of its entities, that metrics can describe. Moreover, the above discussion has alluded to versatility amongst metrics, in that two metrics can describe the same thing and terminology is interchangeable. This raises the question: does interchangeable terminology represent versatility, or inconsistency in reporting? Furthermore, beyond the most common metrics, what about the less popular ones?
In favour of the argument of inconsistency is the fact that there were studies (albeit a small percentage) which provided no rationale or explanation of metric choice. Katerndahl (
2012) uses SNA to understand how research collaboration within academic faculties impacts productivity at the individual and departmental level, but does not provide any definition or rationale behind the use of Degree, Betweenness and Eigenvector Centrality. Similarly, Comfort et al. (
2013) and Oh (
2017) provide no rationale for their selection of metrics. Kim and Hastak (
2018) provide no rationale for Density, yet describe Degree, Betweenness and Eigenvector centralities as metrics to explain “prominence or importance”, without distinguishing how these three centrality metrics differ and why all three are required to measure the same concept. In contrast, Liu and Lim (
2016) provide no definitions for the centrality metrics Betweenness and Degree, yet provide definitions and interpretations of Centralisation and Density. Moreover, Comfort and Zhang (
2020) explain the rationale behind Betweenness Centrality and the External/Internal Index, but omit any explanation of Density or Diameter. Tozer and Klenk (
2019) use only Degree in a Bibliometric analysis but provide no rationale as to what it represents. Ma et al. (
2020) simply state that degree measures structure. Finally, Pheungpha et al. (
2019) and Zelenkauskaite et al. (
2012) do not specify which metrics or which measure of centrality is being used, respectively. In these instances, it appears the analysis was qualitative and that the relationships (i.e. who was connected to whom) were of primary interest. Rather than specific failures to adequately outline methodological choices, we believe these instances speak to a larger issue of “letting the researcher decide” how to communicate about network analysis in non-mathematical fields. This is a barrier to a more transparent, higher standard of interdisciplinary network science.
In terms of versatility, it appears that it is not necessarily always a case of
which characteristic is being measured but a case of
how. The most frequently occurring characteristic is
connectivity. In the context of Routing-Problem-based methods, this typically refers to how connections between nodes are disrupted as a result of a hazard, such that the optimal path length is impacted by a loss of connectivity (Espada et al.
2015).
Accessibility is also a frequently appearing characteristic which is interchangeable with connectivity in this context. Connectivity is used as a generic term when adopting centrality metrics, in which Degree, Betweenness and Closeness describe different aspects of connectivity. For example, Čerba et al. (
2017) describe the connectivity of semantic resources in terms of quantity (Degree), distance and relation (Closeness) and whether nodes act as
bridges (linking otherwise indirectly connected nodes; described by Betweenness) or not. Optimal connectivity is therefore described as a node which is connected to many others, acts as a bridge, and is close to all other nodes. However, whilst connectivity here appears to be a characteristic described by three measures of centrality, there are several examples that do not relate to these well-known and oft-used centrality metrics. Derudder and Taylor (
2005) use the GNC metric to measure a city’s connectivity in relation to other cities. This metric does not consider centrality. Furthermore, “Connectivity” is itself also a metric, representing the minimum number of nodes or edges that would need to be removed to fragment the network into two or more isolated subgroups (Diestel
2005) and Samarasinghe and Strickert (
2013) claim that Density is a global indicator of connectivity. It is no surprise that connectivity is the most frequently occurring characteristic, given that networks are fundamentally about connection.
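These two readings of “connectivity” (Density as a global indicator, and the fragmentation-based definition) can be illustrated on a small invented network; the edge list below is hypothetical:

```python
from collections import deque

# Hypothetical undirected network; removing node A fragments it.
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C")]

def is_connected(edge_list, removed=frozenset()):
    """True if the nodes left after deleting `removed` form one component (BFS)."""
    adj = {}
    for u, v in edge_list:
        if u in removed or v in removed:
            continue
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    nodes = {v for e in edge_list for v in e} - set(removed)
    if not nodes:
        return True
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for w in adj.get(u, ()):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return seen == nodes

# Density: the fraction of possible edges actually present (undirected graph).
n = len({v for e in edges for v in e})
density = len(edges) / (n * (n - 1) / 2)

print(density)                            # 4 of 6 possible edges -> 0.666...
print(is_connected(edges))                # True
print(is_connected(edges, removed={"A"})) # False — D becomes isolated
```

The first number is the global summary claimed by Samarasinghe and Strickert (2013); the last line corresponds to the Diestel (2005) sense, in which this network has connectivity 1 because removing a single node disconnects it.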
Similarly, the same applies for
importance and
influence. Kim and Hastak (
2018) state that Degree, Betweenness and Eigenvector centrality are used to explain the importance of actors in an SNA analysis of social media data post-disaster. Taking a selection of examples from the application of SNA to measure response networks in disaster management, Calliari et al. (
2019) use Degree to assess the
influence of the most central actors in the network, Celik and Corbacioglu (
2018) use Degree to highlight the most
important and well-connected actors, and Celik and Corbacioglu (
2016) and Cui and Li (
2020) both measure the
power of actors using Degree. Moreover, Celik and Corbacioglu (
2016) use Betweenness as a means of measuring an actors’
position in the network, yet Htein et al. (
2018) measure such actor-level
positioning using Degree (with respect to Centralisation). Mathematically, Degree Centrality is simply the number of other nodes which a given node is connected to (Freeman
1979). Therefore, it is recognisable in the context of social networks that a well-connected actor plays a prominent role in the network, and possesses influence and importance. However, the nature of this influence and importance is not only a function of the number of connections. For instance, Meilani and Hardjosoekarto (
2020) and Chen et al. (
2020) make a distinction between Degree and Eigenvector Centrality by stating that power is measured in the latter not by how
many connections a node has, but
who the connections are. In both examples, nodes represent actors within disaster risk reduction efforts after an event and the
power is identified by examining nodes that are mutually high in Eigenvector Centrality, thus identifying who is most powerful in a way that Degree Centrality alone does not afford.
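This distinction can be made concrete with a power-iteration sketch, a standard way of computing eigenvector centrality; the five-node network below is hypothetical. Nodes D and E have equal degree, yet D scores higher because its single connection is to the hub:

```python
# Hypothetical network: A is a hub; D attaches to the hub, E to a lesser node.
graph = {
    "A": ["B", "C", "D"],
    "B": ["A", "E"],
    "C": ["A"],
    "D": ["A"],
    "E": ["B"],
}

def eigenvector_centrality(g, iterations=100):
    """Power iteration on (A + I); the identity shift avoids the oscillation
    that plain adjacency iteration exhibits on bipartite graphs."""
    x = {v: 1.0 for v in g}
    for _ in range(iterations):
        new = {v: x[v] + sum(x[w] for w in g[v]) for v in g}  # (A + I) x
        norm = max(new.values())                              # rescale each step
        x = {v: s / norm for v, s in new.items()}
    return x

ec = eigenvector_centrality(graph)
print(len(graph["D"]) == len(graph["E"]))  # True — equal degree...
print(ec["D"] > ec["E"])                   # True — ...but unequal influence
```

Degree alone cannot separate D from E; eigenvector centrality can, because it weights each connection by the score of the node at its other end.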
Whilst centrality metrics are most popular where SNA is concerned, of particular interest is how else these metrics have been applied. A number of studies adopted centrality measures outwith SNA, describing entities other than people or organisations. For example, Lao et al. (
2016) use degree centrality to weight edges in their network to represent air passengers, thus providing a measure of a city’s centrality. Arora and Ventresca (
2018) use Betweenness and Closeness centrality for preferential linking in the synthesis of resilient Supply Chain Networks (SCN), where centrality measures act as proxies for price, performance and quality. Mu et al. (
2020) examine the spatial distribution of green space and physical factors to explore alternative green space planning strategies using Degree, Closeness and Betweenness. Garrett et al. (
2017) adopt Degree, Closeness, Betweenness, Eigenvector as measures of centrality to explore food security and agricultural networks (alongside Cliques, Diameter and Path Length). In disaster management, centrality measures are typically associated with road networks and critical infrastructure; Fan and Mostafavi (
2019) use degree centrality with social media data in a graph-based event detection model to identify disruption of critical infrastructure. Papilloud et al. (
2020) characterise the flood exposure of road networks using Edge Betweenness Centrality (EBC), and Sasabe et al. (
2020) also apply EBC in road network risk analysis. Alongside Lao et al. (
2016), these instances were the only three in which Betweenness Centrality was measured at edges instead of nodes.
Whilst interchangeable terminology for some metrics is a prevalent theme emerging from this review, there are instances in which the metric being described is more definitive. A metric that is terminologically consistent across studies is Throughflow, used in ENA. Whilst the nature of the flows may vary between studies, the purpose of applying the metric remains the same. Throughflow is classed as both a global and a local metric. Locally, flows can measure the importance of a node, whereas at the system level, the Total System Throughflow (TST) can indicate whether a system is at a steady state, i.e. whether the sum of all inflows equals the sum of all outflows. Measuring TST indicates the level of activity in the system in question, which can be useful for characterising the system’s level of growth (e.g. economic growth in a city) (Bodini et al.
2012). This presents a useful insight into the methods of network analysis as it is clear that SNA has evolved beyond sociology in terms of method and metrics, and its background in sociology has perhaps fostered the level of versatility, interchangeability, and at times, ambiguity as to what metrics actually mean for those interpreting their results. ENA on the other hand is far more clear-cut, as it depends on measuring flows in terms of materials and resources, not the roles of individuals which are far more difficult to quantify.
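As a minimal sketch of the local/global distinction, assuming a hypothetical flow matrix (the sectors and tonnages below are invented) and taking TST simply as the sum of all inter-compartmental flows (exact definitions vary across ENA studies):

```python
# Hypothetical flows (e.g. tonnes of material) between compartments.
flows = {
    ("imports", "industry"): 10.0,
    ("industry", "households"): 6.0,
    ("industry", "exports"): 4.0,
    ("households", "exports"): 6.0,
}

def node_throughflow(node):
    """Local metric: a node's total inflow and outflow (equal at steady state)."""
    inflow = sum(f for (src, dst), f in flows.items() if dst == node)
    outflow = sum(f for (src, dst), f in flows.items() if src == node)
    return inflow, outflow

def total_system_throughflow():
    """Global metric (TST): here, the sum of all flows — the system's activity level."""
    return sum(flows.values())

print(node_throughflow("industry"))  # (10.0, 10.0) -> steady state at this node
print(total_system_throughflow())    # 26.0
```

Because the quantities flowing are physical and measurable, there is little room for the terminological ambiguity seen with centrality metrics.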
Moreover, there is also more consistency and less ambiguity in studies that adopt bespoke/composite/less popular/less generalisable metrics. Nakatani et al. (
2018) demonstrate the adaptability of network analysis by using a well-established economic indicator, the Herfindahl–Hirschman Index (see Matsumoto et al.
2012) to measure the vulnerability of supply-chains. In contrast to vulnerability (an assessment of weak network links) is criticality, which measures importance (Knoop et al.
2012). Mitsakis et al. (
2016) adopt the Unified Network Performance Measure (UNPM) to assess the performance of a transportation network against technological and natural disasters. Developing on the approach by Nagurney and Qiang (
2008), the UNPM is an example of a metric developed to measure the performance of a network in a specific context; the meaning of “importance” is therefore less ambiguous than that conveyed by centrality measures. Additional examples of bespoke metrics are Travel Alternative Diversity and Network Spare Capacity Dimension by Xu et al. (
2018). These are measures of redundancy in a transport network and aim to quantify alternative travel routes and how much spare capacity the network has under normal and disruptive conditions.
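The Herfindahl–Hirschman Index mentioned above is itself straightforward: the sum of squared shares, approaching 1 as concentration (and hence, in a supply-chain reading, vulnerability to the loss of a dominant supplier) increases. A minimal sketch with invented supplier shares:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared shares, in (0, 1]."""
    total = sum(shares)
    return sum((s / total) ** 2 for s in shares)

concentrated = [90, 5, 5]       # one dominant supplier
diversified  = [25, 25, 25, 25]  # evenly spread supplier base

print(hhi(concentrated))  # 0.815 — highly concentrated, more vulnerable
print(hhi(diversified))   # 0.25  — diversified, more robust
```

Its fixed economic definition is precisely what makes such borrowed indicators less ambiguous than the general-purpose centrality vocabulary discussed earlier.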