Introduction

Dynamics on complex networks, such as the diffusion of information in social networks, are commonly modelled as Markov processes. An advantage of this approach is that for every (static) network with positive edge-weights we can define a corresponding Markov process by interpreting the network as the state space of a random walker and assigning the state-transition probabilities according to the link weights. This direct correspondence between the state space of the Markov process and the network enables us to examine the interplay between structure and dynamics from two sides. On the one hand, one can assess how the topological properties of a network influence the dynamical process. On the other hand, this coupling between topology and dynamics allows us to explore the structure of a network by means of a dynamical process. Specifically, for a linear Markov process the impact of the network structure on the dynamics will be mediated by the spectral properties of the matrix governing the time-evolution of the process, e.g. the adjacency matrix or the Laplacian1,2. Vice versa, spectral properties can be used to uncover salient structural properties of a network, such as modular organisation3,4,5.

While simple Markov models have been very successful in modelling the dynamics of complex systems and have found many applications, they have one obvious disadvantage: in this class of models, the future state of the system depends only on its current state and does not account for its history. In a diffusion process, for instance, the next position of a random walker depends only on the currently occupied node and its outgoing links, but not on any of the previously visited nodes. However, as has been emphasised recently, for a broad range of networked systems flows tend to exhibit a temporal path dependence6,7. Think of human mobility: the places a person is likely to visit next depend in most cases strongly on where the person came from. For instance, a person coming to work from home is likely to return home afterwards8. Other examples of processes with temporal memory include web traffic, journal citation flows and email cascades. Such processes therefore cannot be reproduced accurately by simple Markov models. However, the impact of these temporal correlations can often already be well approximated by second-order Markov models6. Importantly, the transition probabilities defining these models can be obtained empirically by measuring pathways of interaction cascades, rendering such an approach suitable for applications like information spreading or human mobility.

In the standard first-order Markovian network model, the elementary states are identified with the nodes of the original network (see Fig. 1). In the second-order model, elementary states correspond to sequences of two nodes and can thus be identified with directed edges in the original network (see Fig. 1). We may therefore think of a second-order model alternatively as a random walk between the directed edges of the original network. The state space of the second-order model defines a new network describing the observed dynamics, which we call here the memory network. The structural properties of the memory network can now be studied with the many tools of network science, allowing us to uncover interesting patterns of flow in a second-order model.

Figure 1

Schematic – dynamics on networks as Markov processes.

A process on a physical network (left) may be abstracted in a number of different ways – often this is done via a Markov model. In a first-order Markov model (right side, top panel), the state space is isomorphic to the physical network: every node corresponds to one state, and every link indicates a transition between those states. In a second-order Markov model the state space is structured like a directed line graph of the original network. The states in this network can be identified with the directed edges of the original network, and two states are connected if it is possible to traverse from one edge to the other in the original directed network. Note that when projecting these dynamics back onto the physical network, the probability of moving from one node to another will thus appear non-Markovian.

The purpose of this paper is to highlight how the formalism of higher-order Markov models provides a simple means to extend network-theoretic tools so that they take into account important dynamical properties encoded in the second-order representation. Indeed, many network-theoretic tools may be understood as the outcome of a matrix iteration or an eigenvector computation9, which can be naturally associated with a dynamical process.

In this paper we focus on the problem of community detection, though higher-order models can be equally applied to other problems, including the assessment of node centralities6,9 or the speedup (or slowdown) of spreading processes7. While Refs 10,11 discussed how the map equation can account for higher-order flows in community detection, here our main example will be the Markov stability3,12,13 formalism, as it incorporates many commonly used community detection measures as special cases, notably the concept of modularity14, diverse Potts models15 and spectral clustering16,17. We thereby highlight that all these classical algorithms can indeed be naturally generalised to account for memory effects by using memory networks, showing that these representations provide a general tool for the analysis of dynamics occurring on a latent network structure. Moreover, as spectral clustering can be shown to be a special case of the Markov stability formalism for long times, we can draw connections to recently proposed techniques of graph partitioning based on non-backtracking random walks18 and discuss how these can be understood from the point of view of second-order Markov processes. We remark that, as clustering a memory network is equivalent to partitioning the edges of the original system, it naturally leads to the detection of overlapping communities, in the same way as the analysis of line graphs19. This can be a highly desirable feature, especially when analysing social networks, which tend to be organised in overlapping social circles20,21.

The remainder of this article is organised as follows. First, we review the Markov stability formalism for community detection, a general framework to detect flow-based communities in complex networks, and particularly emphasise its properties in the case of directed networks2. We show how this quality function can be naturally generalised for the analysis of memory networks, which we illustrate by studying a flight network of the United States from the perspective of second-order Markov models. In this context, we also discuss the possibility of generating realistic pathway data using models of second-order Markov processes, even if only time-aggregated network information is available. Subsequently, we propose a mechanism to extract pathway statistics and build second-order models from event-based, temporal network data, which do not contain pathway statistics a priori. Using computer-generated data and time-resolved interaction records of school children, we demonstrate how we can uncover communities which capture important flow constraints imposed by the temporal activation patterns of the links. Finally, we draw connections between the memory network representation and non-backtracking random walks to uncover communities in sparse networks.

Markov Dynamics and Community Detection

Random walks on directed networks

Let us consider a continuous-time random walk on a network with N nodes, governed by the following Kolmogorov forward equation:

dpj(t)/dt = ∑i (Aij/ki^out) pi(t) − pj(t).    (1)

Here pj(t) denotes the probability of a walker to be present on node j at time t; Aij is the weight of the link from node i to node j, i.e., A is the (weighted) adjacency matrix; finally, kj^out = ∑i Aji is the weighted out-degree of node j.

Clearly this dynamics is driven by the normalised Laplacian matrix L, defined as Lij = Aij/ki^out − δij. The Perron-Frobenius theorem guarantees that a unique stationary solution exists for the above dynamics if the network is strongly connected. For an undirected network, this stationary solution is πj = kj/2M, where kj = ∑i Aij is the node degree and M is the total weight of the undirected links. For a general directed network the stationary solution is given by the dominant (left) eigenvector of L, which depends on the global network properties and cannot be expressed by a simple analytical formula.
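Numerically, the stationary distribution of the process (1) coincides with that of the discrete-time transition matrix P = D−1A (D being the diagonal matrix of out-degrees), and can be obtained as its dominant left eigenvector. A minimal sketch in Python, assuming NumPy is available; the toy adjacency matrix and the function name are purely illustrative:

```python
import numpy as np

def stationary_distribution(A):
    """Dominant left eigenvector of the random-walk transition matrix P = D^-1 A,
    normalised to sum to one (assumes a strongly connected, weighted network)."""
    k_out = A.sum(axis=1)                  # weighted out-degrees
    P = A / k_out[:, None]                 # row-stochastic transition matrix
    evals, evecs = np.linalg.eig(P.T)      # left eigenvectors of P
    pi = np.abs(evecs[:, np.argmax(evals.real)].real)
    return pi / pi.sum()

# toy directed, weighted network (illustrative only)
A = np.array([[0., 2., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
print(stationary_distribution(A))
```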

Making the dynamics ergodic (when it is not)

Note that for the continuous-time random walk defined above, a strongly connected graph implies a unique stationary distribution and an ergodic dynamics. These criteria are a common requirement for dynamics-based methods of network analysis3,10. However, many real-world systems contain not just one but several strongly connected components (SCCs), whose sizes typically differ by orders of magnitude. This issue is commonly addressed either by neglecting some nodes and focusing on the largest SCC only, or by making the system strongly connected through random teleportations22. The first solution has the advantage of not distorting the dynamics, while the second allows for the analysis of the whole system. When analysing real-world systems in the following, we will therefore use the first solution if the vast majority of nodes belongs to the largest SCC, and opt for the second if the system comprises a large number of disconnected components.
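As an illustration of the second strategy, the transition matrix can be smoothed with a small uniform teleportation term in the spirit of PageRank22. The sketch below is ours; the teleportation rate tau = 0.15 is an arbitrary illustrative choice:

```python
import numpy as np

def teleport_transition_matrix(A, tau=0.15):
    """Row-stochastic transition matrix with uniform teleportation, which renders
    the random walk ergodic even if the network is not strongly connected."""
    N = A.shape[0]
    k_out = A.sum(axis=1)
    P = np.zeros((N, N))
    nz = k_out > 0
    P[nz] = A[nz] / k_out[nz][:, None]     # ordinary random-walk step
    P[~nz] = 1.0 / N                       # dangling nodes jump uniformly
    return (1.0 - tau) * P + tau / N       # mix with uniform teleportation
```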

Community structure from a dynamical viewpoint

The structure of a network has a strong effect on the dynamics of a diffusing random walker: in unstructured (random) networks, the diffusion process evolves almost isotropically and the walker quickly reaches its stationary distribution. However, if the network contains structure, a walker can get trapped inside a group of nodes for a time far longer than expected. Such groups of nodes thus constitute flow-retaining, dynamical communities in the network. Hence a random walk dynamics can effectively be used to define a quality function for a network partition based on the persistence of the diffusion inside the groups. By optimising such a quality function, we can therefore search for a modular partition of the network. Important examples of such an approach include the map equation10 and the Markov stability framework3, which we consider in this paper. Note that, as highlighted in refs 4 and 10, this notion of community differs markedly from the commonly considered structural viewpoint, in that we are interested in flow-retaining rather than densely (homogeneously) connected substructures.

The Markov stability of a partition of the network into communities C is defined as:

r(t) = ∑C [P(C, t) − P(C, ∞)],    (2)

where P(C, t) is the probability for a walker to be in community C initially and at time t, for a system at stationarity. Note that, as all information about the initial position is lost after infinite time, P(C, ∞) also describes the probability of two independent walkers to be in C.

For the process (1), the Markov stability quality function can be written explicitly as:

r(t) = ∑C ∑i,j∈C [πi (exp(tL))ij − πi πj],    (3)
where exp(tL) denotes the matrix exponential. Intuitively, with increasing time the walker is able to explore larger and larger parts of the graph: the larger the time, the larger the modules that will be found in general. By ‘zooming’ through different Markov times one can thus reveal the relevant scales of robust community structure, which manifest themselves as plateaux in time over which the same partitions are found consistently4,23. In the limit t → ∞, it can be shown by eigenvalue decomposition that the stability is maximised by a bi-partition in accordance with the normalised Fiedler eigenvector, a classic method in spectral clustering17. Interestingly, the expression for Markov stability can be rewritten as the Newman-Girvan modularity14 of a different network, whose adjacency matrix is given by:

Yij(t) = πi (exp(tL))ij.    (4)
Here πi (exp(tL))ij is the flow of probability from i to j after a time interval t at stationarity. In this new network Y, the weight of a connection between two nodes is thus modulated according to its importance for the diffusion dynamics. Rewriting Markov stability as the modularity of a different network enables us to use any modularity-based algorithm for the optimisation of Markov stability as well. For the results in the present manuscript, we have made use of the Louvain algorithm24, a fast greedy optimisation heuristic that has been shown to perform well in the literature.
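For illustration, the following sketch assembles Y(t) and hands its symmetrised version to a standard modularity optimiser. It assumes NumPy, SciPy, NetworkX and the python-louvain package (imported as `community`) are available; the function name and the choice of optimiser are ours and serve only as an example:

```python
import numpy as np
from scipy.linalg import expm
import networkx as nx
import community as community_louvain   # python-louvain package (one possible optimiser)

def markov_stability_partition(A, t=1.0):
    """Cluster a strongly connected, directed network by optimising the modularity
    of the flow matrix Y(t) = diag(pi) exp(tL), as a proxy for Markov stability."""
    N = A.shape[0]
    P = A / A.sum(axis=1)[:, None]                 # row-stochastic transition matrix
    L = P - np.eye(N)                              # Laplacian as defined in the text
    evals, evecs = np.linalg.eig(P.T)
    pi = np.abs(evecs[:, np.argmax(evals.real)].real)
    pi /= pi.sum()                                 # stationary distribution
    Y = np.diag(pi) @ expm(t * L)                  # flow matrix Y(t)
    G = nx.from_numpy_array((Y + Y.T) / 2)         # symmetrise for modularity optimisation
    return community_louvain.best_partition(G, weight='weight')
```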

In practice, it can be more efficient to consider (3) in the limit of small times t → 0. Performing a Taylor expansion and keeping only terms linear in t results in the linearised Markov stability:

rlin(t) = ∑C ∑i,j∈C [(1 − t) πi δij + t πi Aij/ki^out − πi πj].    (5)
Note that for an undirected network we have πi = ki/2M and ki^out = ki, and one recovers the modularity of the original network for t = 1, as well as the Potts-model heuristic15 for other Markov times (see Refs 3,12 for details). However, for directed networks (5) is not equivalent to the directed modularity of the original network; at t = 1 it instead corresponds to the modularity of an adjusted adjacency matrix in which each link weight is replaced by the stationary flow it carries:

rlin(1) = ∑C ∑i,j∈C [πi Aij/ki^out − πi πj].    (6)
Interestingly, similar adjusted adjacency (or Laplacian) matrices have been proposed as a means for community detection in the literature, based on different reasoning24,25.

Note that, as with all applications of unsupervised algorithms, care is needed when interpreting the results. For instance, the optimisation of Markov stability for a fixed Markov time inherits the limitations of modularity maximisation, such as a resolution limit27 and a high degeneracy of optimal solutions28. Nevertheless, this class of methods has been applied successfully in numerous applications29. Indeed, some of these problems can be circumvented by carefully sweeping through different resolution parameters4,29 and finding partitions that are robust over a range of parameters4,13,23,31. Most importantly, however, the notion of community inherent to the methods presented here is based on flows3,4,10. This is in contrast to methods like stochastic blockmodels (SBMs), see e.g. refs 32, 33, 34, which employ a generative model for the whole network and aim at finding patterns of pairwise connections in the system, though very recent work aims to incorporate dynamical aspects within the SBM framework35.

Dynamical Community Detection using Higher-Order Markov Models

From first to second order Markov models

Let us now show how we can incorporate second-order Markov models into the Markov stability framework. As higher-order Markov models provide more faithful representations of the dynamics observed in real-world systems, this enables us to capture more of the real flow constraints in the uncovered communities. For simplicity, let us initially consider undirected networks composed of N nodes and M links. The dynamics of a second-order Markov process is encoded by the transition matrix T, whose entries T(i→j),(j→k) describe the probability that a walker moves from node j to node k, given that it came from node i in the previous step. By definition, this transition matrix is normalised such that ∑k T(i→j),(j→k) = 1 for every directed edge (i→j).

As sequences of two vertices in the original network correspond to the nodes of the memory network, the entries of T describe precisely the transitions from memory node (i→j) to memory node (j→k). The second-order process on the original network is thus equivalent to a first-order Markov process, albeit on a different network: namely, the memory network composed of 2M nodes. As each undirected link of the original network can be traversed in two distinct directions (from i to j and vice versa), it accounts for two memory nodes, (i→j) and (j→i). This implies that the memory network is directed even if the original network is undirected: for k ≠ i there cannot be a link from memory node (j→k) to memory node (i→j), even though a link from (i→j) to (j→k) exists. Henceforth we use Greek letters to denote memory nodes and Latin letters to denote the nodes of the original network.

Similarly to (1), we can define a continuous-time random walk on the memory network as:

dpα(t)/dt = ∑β Tβα pβ(t) − pα(t),
where pα(t) is the probability of finding a walker on memory node α at time t. Likewise, the stationary distribution πα of this process is given by the left eigenvector of the corresponding Laplacian associated with eigenvalue 0.

Note that we can also observe how a simple first-order Markov dynamics evolves from the point of view of a memory network, i.e., we can view the dynamics in terms of transitions between the (directed) edges of the graph. As this is equivalent to lifting the dynamics into the larger state space, we will denote this representation as an expanded first-order network. In this case it can easily be shown that πα = wα/W, where wα is the weight of edge α and W = ∑α wα is the total edge weight2,6. The transition probability from α to any of its out-neighbours β on this network is simply

Tαβ = wβ / sα^out,

where the out-neighbours of α = (i→j) are all memory nodes β = (j→k) that start where α ends, and sα^out = ∑β wβ is the out-strength of α. Note that, since the original network is undirected, we have sα^out = kj, where j is the endpoint of memory node α (the directed edge i→j).
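A minimal sketch of this construction for an undirected, weighted network: every directed edge becomes a memory node, and the transition probability from memory node (i→j) to (j→k) equals Ajk/kj, i.e. wβ/sα^out. Only NumPy is assumed; function and variable names are ours:

```python
import numpy as np

def expanded_first_order(A):
    """Transition matrix of a first-order walk lifted onto the memory network:
    the states are the 2M directed edges of an undirected, weighted network A."""
    N = A.shape[0]
    edges = [(i, j) for i in range(N) for j in range(N) if A[i, j] > 0]
    index = {e: n for n, e in enumerate(edges)}
    k = A.sum(axis=1)                               # (weighted) degrees
    T = np.zeros((len(edges), len(edges)))
    for (i, j), a in index.items():
        for l in np.nonzero(A[j])[0]:               # all edges (j -> l)
            T[a, index[(j, l)]] = A[j, l] / k[j]    # w_beta / s_alpha^out
    return edges, T

# toy undirected network: a triangle with a pendant node (illustrative only)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0.]])
edges, T = expanded_first_order(A)
```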

Models of second-order Markov processes

While second-order transition matrices T can be directly generated from temporal pathway data, in some cases only information about the aggregated (first-order) network might be available. However, in such cases we can still create memory networks using a simple memory model, calibrated by the pathway statistics of similar datasets for which this temporal information is obtainable. For instance, the temporal information pathways of one email dataset may be used to fit the model parameters of a second-order model for a different email communication network, for which only aggregated information is available.

The key idea underpinning the model is to weight different types of transitions between two directed edges. As illustrated in Fig. 2, we define three different types of transitions6.

  1. A return step, in which a walker coming from (i→j) jumps to (j→i). In other words: a walker coming from node i to node j returns to node i.

  2. A triangular step, in which a walker coming from (i→j) moves to the edge (j→k), where k ≠ i is a neighbour of i.

  3. An exploratory step, in which the walker moves from (i→j) to an edge (j→l), whose endpoint l is neither i nor any of the neighbours of i.

Figure 2

Construction of a memory network using a second-order transition model.

From each directed edge in the original network a walker can in principle perform three types of moves, which get weighted according to the parameters ri: a return step (r2), a triangular step (r3) and an exploratory step (r>3). The picture shows these different options for a generic network edge.

To account for their relative importance, we assign positive weights r2, r3 and r>3 to the different types of transition as follows. Let us denote the adjacency matrix of the directed line graph associated with our network by G. We can then decompose G into three matrices Gret (return links), Gtri (triangular links) and Gexp (exploratory links), each containing only the links of the respective type. To obtain a weighted memory model we now define the weighted 2M × 2M adjacency matrix:

Gmodel = r2 Gret + r3 Gtri + r>3 Gexp,
from which we can compute the associated second order transition matrix Tmodel simply by normalising each row to sum to 1.

As can easily be verified, this definition implies that when we project the resulting walk onto the node space, we obtain a (first-order) Markov process whenever r2 = r3 = r>3. We further remark that if the dynamics is ergodic on a graph for one choice of parameters, it will remain ergodic for any values of r2, r3 and r>3, provided each parameter is strictly positive.
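The construction of Gmodel and of the associated transition matrix Tmodel can be sketched as follows for an undirected network (NumPy only; the function name and default parameter values are ours):

```python
import numpy as np

def memory_model_transition(A, r2=1.0, r3=1.0, rx=1.0):
    """Second-order transition matrix T_model obtained from the directed line graph
    of an undirected network A, with weights r2 (return), r3 (triangular) and
    rx (exploratory, i.e. r_>3)."""
    N = A.shape[0]
    edges = [(i, j) for i in range(N) for j in range(N) if A[i, j] > 0]
    index = {e: n for n, e in enumerate(edges)}
    G = np.zeros((len(edges), len(edges)))
    for (i, j), a in index.items():
        for l in np.nonzero(A[j])[0]:               # candidate next edges (j -> l)
            if l == i:
                G[a, index[(j, l)]] = r2            # return step
            elif A[i, l] > 0:
                G[a, index[(j, l)]] = r3            # triangular step: l is a neighbour of i
            else:
                G[a, index[(j, l)]] = rx            # exploratory step
    return G / G.sum(axis=1, keepdims=True)         # row-normalise to obtain T_model
```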

Higher order Markov dynamics reveal communities in a network of flight pathways

Let us now demonstrate how second-order Markov dynamics can help to uncover the organisation of systems for which pathway data are available, and compare their outcome with the community structure obtained by first-order dynamics and by simplified models of second-order Markov dynamics, as defined in the section above. To do so, we consider a flight network in the United States. The data used to construct the network consist of individual flight trajectories of people navigating between different airports in the US. From these trajectories one can directly obtain a first-order or a second-order (memory) network, as discussed above and in Ref. 6. The main purpose of our analysis here is to illustrate the differences that may arise in community detection when considering the same mobility data from different modelling perspectives.
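For reference, the empirical second-order transition probabilities used in scenario (ii) below can be estimated directly from such trajectories by counting consecutive edge transitions. A minimal sketch with toy pathways (the data and the function name are illustrative only, not the flight data analysed here):

```python
from collections import defaultdict

def empirical_second_order(trajectories):
    """Estimate second-order transition probabilities T[(i,j)][(j,k)] by counting
    consecutive edge pairs (i->j, j->k) in the observed pathways."""
    counts = defaultdict(lambda: defaultdict(float))
    for path in trajectories:
        for i, j, k in zip(path, path[1:], path[2:]):
            counts[(i, j)][(j, k)] += 1.0
    return {edge: {nxt: c / sum(succ.values()) for nxt, c in succ.items()}
            for edge, succ in counts.items()}

# toy pathways (illustrative only)
paths = [["ATL", "ORD", "ATL"], ["ATL", "DEN", "ORD", "ATL"]]
print(empirical_second_order(paths))
```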

We analysed the modular structure of this system for the following three scenarios: i) a first-order Markov network, viewed from the perspective of the edges (the expanded first-order network); ii) a second-order Markov network, where the transition probabilities are directly obtained from empirical data; iii) a modelled second-order Markov network, where transitions are approximated by a simple second-order transition matrix Tmodel (see text), whose parameters have been fitted from the data. In each case, optimising the linearised stability for these different Markov models yields a clustering of the edges, which can be interpreted in terms of an overlapping community structure at the airport level. To compare the different scenarios we concentrate on the results obtained for Markov time t = 1 (see Fig. 3). Let us also note that, for each network, only a few rarely active airports do not belong to the largest SCC. For this reason, we restrict the scope to nodes in this SCC; to be more precise, we restrict it to the intersection of the largest SCCs of the three processes, in order to ensure that the Markov stability is well defined for each of them.

Figure 3

Clustering analysis of a passenger traffic-network between airports in the US.

(a,b) Results based on clustering two different representations of the system (first-order and second-order Markov networks). In (a,b) each airport is represented by a pie chart indicating the participation of the airport in different communities. For visual clarity, nodes with a community participation entropy smaller than one are displayed as smaller nodes. (c) Distributions of the community participation entropy across the network for the three representations (left, middle, right). All maps were created with Geoplotlib (https://github.com/andrea-cuttone/geoplotlib) using map tiles and data from Mapbox (https://www.mapbox.com/map-feedback/) and OpenStreetMap (http://www.openstreetmap.org/copyright), © OpenStreetMap contributors.

In order to compare the communities associated with each Markov process, we first calculate the normalised variation of information36 between the three different partitions obtained by optimising stability at t = 1. By construction, smaller values of the variation of information indicate more similar community structures. For the three pairwise comparisons between the first-order, empirical second-order and modelled second-order partitions we obtain variations of information of 0.42, 0.36 and 0.38. These results confirm that the clusters obtained from the modelled second-order network provide an intermediate solution between the first-order and the empirical second-order results, with the advantage of requiring far fewer parameters than the full empirical second-order model.

After having found the edge communities in each scenario, each node can be characterised by the set of group labels of its incident edges. In order to quantify the apparent difference between the covers, we measure for each node the entropy of its associated group labels:

Si = −∑c pi(c) log pi(c),
where pi(c) is the fraction of node i's edges assigned to community c. A value Si = 0 indicates that the node is associated with a single community, while higher values indicate a more diverse participation. The level of overlap of an edge partition can now be characterised by the distribution of entropy values over the set of nodes. We observe in Fig. 3 that analyzing the empirical second-order network results in more overlapping communities than the expanded first-order model, while the modelled second-order network is characterized by intermediate participation values. Similarly to what was observed for the map equation6, accounting for memory via second-order dynamics therefore uncovers communities with a stronger overlap, in agreement with empirical observations that higher-order dynamics tends to constrain flows within these modules.
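For completeness, the participation entropy of a node, given the community labels of its incident directed edges, can be computed as in the short sketch below (the base-2 logarithm is used here as a convention; the example labels are illustrative):

```python
import numpy as np
from collections import Counter

def participation_entropy(edge_labels):
    """Entropy of the community labels attached to a node's incident directed edges."""
    counts = np.array(list(Counter(edge_labels).values()), dtype=float)
    p = counts / counts.sum()                      # p_i(c): fraction of edges in community c
    return float(-(p * np.log2(p)).sum())          # zero if all edges share one community

print(participation_entropy(["C1", "C1", "C2", "C3"]))   # mixed node -> positive entropy
```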

Analyzing Time-Stamped Temporal Networks without Pathway Data

Second-order Markov models of time-stamped temporal networks

Node or link activity in networked systems often exhibits non-trivial temporal patterns, such as heterogeneous inter-event times and correlations between the activation times of neighbouring edges37. Different network representations of such temporal data can be adopted, each associated with a different notion of temporal community. A first, ‘naive’ approach is to neglect the edge timings and work with a static network representation, where the weight of each edge corresponds to an aggregation of the activity over the whole observed time interval. In this case, community detection aims at grouping nodes whose aggregated edge weights are concentrated inside modules.

Alternatively, one can represent the system by a set of (coupled) adjacency matrices At, where each At represents the system in a short-time frame of the whole observation period. By applying community detection to this representation one tries to uncover meaningful structures in each time window, while enforcing some continuity between different time intervals38. This approach accordingly aims at tracking the evolution of communities in the course of time.

As we now discuss, a third notion of community is naturally associated with the representation of dynamics on temporal networks as a second-order Markov process. In situations where the activations of neighbouring edges are temporally correlated, the dynamics of a random walker is expected to be poorly reproduced by a first-order Markov process7. However, by building a second-order representation of the temporal data and using the methodology introduced above, one can capture the observed temporal constraints on the flow, including causal paths and high levels of synchrony between edge activations.

The mapping from a time series to a second-order Markov model is realised as follows. We consider a system described by an ordered set of adjacency matrices At defining the connections between nodes at times t ∈ [1, T], where T is the number of observations. The static representation of the temporal network is provided by the adjacency matrix Astatic = ∑t At. Each directed edge within Astatic defines a memory node. The connection strength between memory nodes is now obtained by simulating pathways of a random walk process on the temporal network as follows. A walker is initially assigned to a randomly chosen node. At every time step the walker waits until at least one edge is available for transport. If an edge becomes available, the walker leaves the node with probability (1 − ps) and remains on its current node with probability ps. If there are multiple possible transitions, the walker takes each edge with a probability proportional to its weight. This process is repeated multiple times over the observed interval [1, T] in order to generate sufficiently many trajectories. From these trajectories, one can then construct a memory network by evaluating the transition frequencies between different pairs of edges.

We remark here that several other procedures could be defined to generate memory networks. For instance, we could account for the duration of the intervals between time steps, or define the walker’s leaving probability to be proportional to the number of available contacts. Note that in the above construction, when ps = 0, the walker always takes the first available edge and one recovers the dynamics studied in e.g. Ref. 39. The use of non-zero values of ps introduces a source of randomness in the dynamics, which can prevent spurious effects such as the overly strong tendency for backtracking observed in Ref. 40. However, with increasing values of ps, the original ordering of events becomes less important and the impact of the exact timings is expected to be diluted41.
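A minimal sketch of this pathway-generation step is given below; the temporal network is passed as a list of adjacency matrices, and the returned dictionary counts transitions between consecutive directed edges. Parameter names, defaults and the overall structure are ours:

```python
import numpy as np

def temporal_walk_transitions(A_t, n_walkers=1000, p_stay=0.05, seed=0):
    """Simulate walkers on a temporal network (list of adjacency matrices A_t) and
    count transitions between consecutive directed edges, from which the
    second-order (memory) network can be built."""
    rng = np.random.default_rng(seed)
    N = A_t[0].shape[0]
    counts = {}
    for _ in range(n_walkers):
        node, prev_edge = int(rng.integers(N)), None
        for A in A_t:
            row = A[node]
            if row.sum() == 0 or rng.random() < p_stay:
                continue                                    # wait, or stay with probability p_stay
            nxt = int(rng.choice(N, p=row / row.sum()))     # take an available edge, weight-proportional
            edge = (node, nxt)
            if prev_edge is not None:
                counts[(prev_edge, edge)] = counts.get((prev_edge, edge), 0) + 1
            node, prev_edge = nxt, edge
    return counts
```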

Community detection in networks constructed from temporal data: an illustrative example

Let us now show that a second-order representation helps to uncover hidden structures in temporal networks. As a first illustrative example, we consider a computer-generated benchmark of a social network defined as follows. The underlying structure is a complete graph, where the (directed) social interactions are divided into g different, non-overlapping types, e.g., work relations, friendships, etc. Figure 4a shows such a network with N = 10 nodes and g = 3 edge types. We assume that different types of relationships dominate at different times, as often observed in empirical data42. Therefore, at each time, all edges of one single group are simultaneously active. Initially, a randomly chosen group is active. At each step, the currently active group remains active with probability pk; with probability 1 − pk, a new active group is randomly selected among all groups (including the currently active one). For the parameters considered here, the walker thus has three outgoing edges available for transport at each step (see Fig. 4). It is important to note that, for the generation of trajectories on this temporal network, a change of the probability ps for the walker to stay at its node is effectively equivalent to a change in pk. For the sake of simplicity, we therefore set ps = 0 here.
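One possible way to generate such a benchmark sequence of activation matrices is sketched below; each node's outgoing edges are dealt evenly into the g groups, and the active group persists with probability pk. The function name and defaults are ours:

```python
import numpy as np

def benchmark_temporal_network(N=10, g=3, T=5000, p_k=0.8, seed=0):
    """Sequence of activation matrices for a complete directed graph whose edges are
    split evenly into g groups per node; the active group persists with probability p_k."""
    rng = np.random.default_rng(seed)
    group = {}
    for i in range(N):
        targets = [j for j in range(N) if j != i]
        rng.shuffle(targets)
        for n, j in enumerate(targets):
            group[(i, j)] = n % g                  # each node gets (N-1)/g outgoing edges per type
    active = int(rng.integers(g))
    snapshots = []
    for _ in range(T):
        if rng.random() > p_k:                     # with prob. 1 - p_k pick a new group at random
            active = int(rng.integers(g))
        A = np.zeros((N, N))
        for (i, j), c in group.items():
            if c == active:
                A[i, j] = 1.0                      # only edges of the active group are available
        snapshots.append(A)
    return snapshots
```

The resulting snapshots can then be fed to the pathway-generation sketch of the previous section to build the corresponding memory network.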

Figure 4

Analysis of an example temporal network with 10 nodes and 3 groups.

(a) A complete graph with 10 nodes, whose (directed) edges are divided into 3 groups, such that each node has exactly 3 outgoing edges of each type. At each time step only edges belonging to one group are active and available for transport. The currently active group ceases to be active with probability 1 − pk and is replaced by a randomly selected group. A random walker traversing the network is thus more likely to remain within the same group, the higher the probability pk. In panel (b), the variation of information between the planted structure and the partition found by Markov stability in the memory network is displayed as a function of Markov time. We can see that there is a plateau for values of time slightly larger than t = 1. As shown in the inset, the method successfully uncovers the organisation into 3 modules for sufficiently large values of pk, i.e., when the groups remain active long enough that the walker stays trapped inside a community for non-negligible periods.

Clearly, the trajectories of the random walker and the corresponding structure of the memory network are affected by the value of pk. When pk = 0, a new active group is randomly selected at each step of the random walker. In this limit, the average dynamics is equivalent to a first-order random walk on a fully connected network and thus no underlying structure can be found. Increasing pk implies that the random walker remains for longer time periods inside one group of edges before leaving, and it thus becomes possible to uncover the group structure. The extreme case pk = 1 is again trivial: the initial group remains active for all times. This case is therefore not considered in the following. The results of our analysis are summarised in Fig. 4, where we compare the uncovered communities with the planted partition into groups by means of the normalised variation of information (VI). Our observations are twofold. First, for values of t slightly larger than t = 1, we find the planted partition with high accuracy. Second, one observes that the underlying communities can be successfully found when the value of pk is sufficiently large, with a clear transition around pk = 0.4 (see inset in Fig. 4).

A real-world case study: analyzing temporal interaction patterns of school children

To demonstrate the utility of our approach with a real-world example, we have considered an empirical dataset from the SocioPatterns project43. The data, described in detail in Ref. 44, consist of time-dependent face-to-face contacts, captured by wireless wearable sensors, between children and teachers in a primary school. In total, the dataset comprises 77,602 contact events between 242 individuals during two consecutive days. As the children belong to 10 different classes, we expect to find strong structural communities associated with these classes in the dynamic contact network. In the following analysis, the pathways were generated with the probability for the walker to stay at its node at each step set to ps = 0.05; similar results were obtained when varying this parameter.

By scanning through Markov times we can find several persistent partitions. As a validation, we first identified the Markov time tC=10 ≈ 0.631 around which the algorithm robustly detects 10 communities, and verified that these communities overlap with the partition of the students into classes. One should note here that the partition into classes can in fact be uncovered even from the aggregated, weighted network associated with the data44.

The advantage of our approach becomes apparent when looking at further stable partitions found for smaller Markov times. For instance, we find another robust split of the network into 15 communities at Markov time t = 0.56. For this partition most communities are still identified with classes, as can be seen in Fig. 5. However, there are also additional modules in which students from different classes are mixed. These modules display activity patterns that are highly localized in time and are therefore bound to get lost when averaging over the temporal dimension of the data. Interestingly, these latter communities correspond to interactions taking place between students during lunch breaks. Notably, these interactions are not detectable from the aggregate network or when using a memoryless representation of the dynamics (e.g., the expanded first-order network). By using the approach outlined here, we can thus detect not only ‘structural’ communities, in which actors maintain a high number of interactions with each other throughout time, but also ‘temporally localized’ communities, which are associated with groups that interact strongly with each other over certain time periods.

Figure 5

Analysis of the temporal community structure of a school network.

Two types of communities can be found by analyzing the memory network generated with ps = 0.05. Around Markov time t = 0.56 we find a robust partition of the directed edges into 15 communities. 10 of these communities are ‘single-class communities’, in the sense that most connections are concentrated in one single class (cm2a in the example shown in (a)). These communities do not display a particular temporal structure in their activation patterns. This is clear from the temporal profile of activity (bottom panel in (a)), which displays the number of active links as a function of time, averaged over the 2 days. (b) The remaining 5 communities are temporally localized ‘lunchtime communities’. While they extend over a range of class labels, connecting pupils from different groups, they are highly coherent and synchronised in time. These communities represent the lunchtime interactions between students of different classes.

Second-Order Models, Non-Backtracking Walks and Spectral Clustering

Recently, there has been an increased interest in spectral methods based on the so-called non-backtracking matrix of a network. While standard spectral algorithms for graph partitioning tend to fail when applied to very sparse networks, Krzakala et al.18 demonstrated how the non-backtracking matrix can be used to design spectral algorithms which behave optimally right down to the theoretical detectability limit of a stochastic block model. As the name suggests, the non-backtracking matrix is intimately related to the non-backtracking random walk on a network: a diffusion process in which a particle behaves just like a simple random walker, albeit with one additional constraint: after arriving at a node the walker is not allowed to immediately return to the node from which it originated (it cannot ‘backtrack’).

Interestingly, this non-backtracking random walk can be phrased simply in terms of a memory dynamics. Following the notation of the previous sections, the non-backtracking matrix is defined on the set of memory nodes and may be written as

B(i→j),(k→l) = δjk (1 − δil).
Clearly this matrix describes the line graph for a memory walker with parameters set to r2 = 0, r3 = 1, r>3 = 1 (see the section above). The associated transition matrix TB of the non-backtracking random walk is simply obtained through normalisation:

(TB)αβ = Bαβ / kα^out,

where kα^out = ∑β Bαβ is the (weighted) out-degree of memory node α. As TB is just a diagonally scaled version (by the degree) of the non-backtracking operator, we may employ this flow-based transition matrix for spectral partitioning of the nodes45. In fact, TB is just one particular instance of a whole continuum of possible transition matrices, each with a varying amount of memory. We can thus assess the effect the introduced memory has on the clustering by varying the parameters ri and performing a spectral clustering analysis.

As in Ref. 18, we consider the problem of spectral clustering in the simplest possible setup: the detection of a bipartition into equally sized groups in a large, sparse network. The networks we consider consist of N = 10^4 nodes, divided into two groups according to a simple stochastic block model18. We denote the average degree of each node by c, the average degree towards nodes inside its group by cin and the average degree towards nodes outside its own group by cout. Note that these quantities are coupled by the simple relation c = (cin + cout)/2.
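Such a test graph may be sampled as in the following sketch (two equal planted groups; the default size is reduced here for speed, whereas the experiments below use N = 10^4; the function name and defaults are ours):

```python
import numpy as np

def planted_bipartition(N=2000, c=5.0, ratio=0.3, seed=0):
    """Sparse undirected graph with two equal planted groups and average degree c,
    where ratio = c_out/c_in, so that c = (c_in + c_out)/2."""
    rng = np.random.default_rng(seed)
    c_in = 2.0 * c / (1.0 + ratio)
    c_out = ratio * c_in
    labels = np.repeat([0, 1], N // 2)
    p_in, p_out = c_in / N, c_out / N              # within- and between-group edge probabilities
    A = np.zeros((N, N), dtype=np.int8)
    for i in range(N):
        for j in range(i + 1, N):
            p = p_in if labels[i] == labels[j] else p_out
            if rng.random() < p:
                A[i, j] = A[j, i] = 1
    return A, labels
```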

Similarly to Ref. 18, while keeping the ratio cout/cin = 0.3 fixed, we vary the average degree c and observe the results obtained from different operators via spectral clustering. Each node is grouped using the following protocol18,45. First, we compute the second dominant (left) eigenvector of the respective matrix operator. Second, for each node we sum the eigenvector components corresponding to all of its incoming links. The nodes are then grouped according to the sign of this sum: if it is positive the node is assigned to the first group, otherwise to the second group. In case the respective matrix operator is directly defined on the node space, the nodes are simply partitioned according to the signs of the eigenvector components. Note that, in contrast to the problems considered above, we here look at community detection from the perspective of a hard clustering of the nodes rather than the edges, and moreover assume that there is a fixed, known number of non-overlapping node communities. We thus do not expect that using second-order transition matrices will necessarily improve the performance for this non-overlapping node clustering; indeed, accounting for memory can also be detrimental for this purpose, as we will see in the following. However, our main purpose here is not to design an improved node clustering algorithm, but to better understand the connection between spectral clustering and memory matrices.
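The grouping protocol, combined with the non-backtracking transition matrix defined above (r2 = 0, r3 = r>3 = 1), can be summarised as in the sketch below. The dense eigendecomposition is used only for simplicity and would be replaced by a sparse solver in practice; function and variable names are ours:

```python
import numpy as np

def nonbacktracking_spectral_bipartition(A):
    """Bipartition of the nodes from the second dominant (left) eigenvector of the
    non-backtracking transition matrix T_B, summed over each node's incoming memory nodes."""
    N = A.shape[0]
    edges = [(i, j) for i in range(N) for j in range(N) if A[i, j] > 0]
    index = {e: n for n, e in enumerate(edges)}
    B = np.zeros((len(edges), len(edges)))
    for (i, j), a in index.items():
        for l in np.nonzero(A[j])[0]:
            if l != i:                                        # forbid backtracking
                B[a, index[(j, l)]] = 1.0
    T = B / np.maximum(B.sum(axis=1, keepdims=True), 1.0)     # row-normalised T_B
    evals, evecs = np.linalg.eig(T.T)                         # left eigenvectors (dense, for illustration)
    v2 = evecs[:, np.argsort(-evals.real)[1]].real            # second dominant left eigenvector
    score = np.zeros(N)
    for (i, j), a in index.items():
        score[j] += v2[a]                                     # sum over incoming memory nodes of j
    return (score > 0).astype(int)
```

Replacing the non-backtracking weights by other choices of r2, r3 and r>3 in the memory-model construction of the previous sections yields the family of transition matrices whose clustering performance is compared below.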

In Fig. 6a, we first perform a spectral clustering analysis using the non-backtracking matrix B18, the adjacency matrix A and the normalised Laplacian matrix. As observed by Krzakala et al., the clustering based on the non-backtracking matrix B performs clearly better than those based on the adjacency or Laplacian matrices, in particular when the graph becomes very sparse (see Fig. 6a). Figure 6b shows the results for spectral clustering based on the memory walk matrices with various parameter settings ri. There appears to be a clear trend: the more weight is assigned to exploratory steps, the easier it appears to decide on the node groupings. Spectral clustering based on the non-backtracking second-order transition matrix T(r2 = 0, r3 = 1, r>3 = 1) performs almost as well as the clustering based on B for very sparse networks. For networks with a higher average degree (c ≈ 8), the results are even slightly better. As the second eigenvector of T, which is used for the clustering, describes the approach of a memory dynamics towards stationarity, it should not be too surprising that the results deteriorate if more memory is introduced: very sparse graphs have a tendril-like structure, and additional memory can lead to a localisation of the dynamics in parts of the graph that correspond to strong ‘local’ bottlenecks. Stated differently, the slowest time scale of the diffusion may then no longer correspond to moving from one planted (node) community to the other; instead, local obstacles become more important, making the second eigenvector a poor predictor of the bipartition.

Figure 6

Spectral clustering of sparse networks using memory matrices.

The networks consist of 10^4 nodes with a planted bipartition. Shown is the average degree of the network vs. the spectral clustering performance for classifying the nodes, as measured by the normalised mutual information (NMI). Note that the theoretical detectability limit lies at c* ≈ 3.45 (red dashed line). (a) Clustering performance based on the adjacency matrix A, the non-backtracking matrix B and the normalised Laplacian matrix. (b) Clustering performance based on second-order transition matrices with different parameter settings ri.

This is particularly interesting, as for most empirically reported memory dynamics6 the return flow appears to be substantial, which implies that the dynamical constraints on the flow are much more pronounced at a local level than one would expect from the perspective of an aggregated network. This observation appears to be in line with the fact that many real-world systems tend to be composed of overlapping communities21.

Conclusions

Network-based models have been extremely successful for the analysis of complex systems and have led to the discovery of structural features with profound impacts on their dynamics46. However, to accurately capture the complexity of real-world interacting systems, simple network models are often not sufficient. For this reason, different approaches have been proposed recently which increase the model dimension and enrich the network paradigm. One approach that has gained prominence in the literature is to account for the different types of interactions present in the system by means of multiplex or multilayer networks47. The approach developed in the present work proceeds in a similar spirit: by providing a larger state-space representation along the temporal dimension, we aim to obtain new insights into the dynamical processes taking place in the system. Despite their similarities (see Ref. 2), multiplex networks and higher-order Markov models differ, as the former tend to emphasise the different types of connections in the system, while the latter emphasise pathways.

In this paper, we have focused on the applicability of second-order Markov dynamics in the context of flow-based community detection. Second-order Markov dynamics model real-world temporal dynamics more accurately, so the communities found are expected to better capture the actual flow constraints in the system. In particular, we have shown how the Markov stability framework, and thus techniques like Potts models and spectral clustering, can be based on a second-order Markov dynamics, which is equivalent to considering transitions between directed edges in the original network. As clustering of second-order models yields a partition of the edges, one can readily interpret the results as an overlapping clustering of the nodes in the original system, which is a beneficial feature of the approach.

Some practical concerns are associated with moving to a second-order model. Naturally, there is an increased computational cost, as the number of nodes in the second-order model equals the number of directed edges in the original network. However, as most networks are sparse, this cost tends to be outweighed by the benefits gained from a higher-order dynamical representation. Another practical issue is that pathway statistics, needed to construct a second-order transition matrix directly from data, may not always be immediately accessible. To make the toolkit of second-order models available in such scenarios, we have presented two different strategies. First, we have analysed a simple model able to generate realistic second-order dynamics, which can be calibrated from similar datasets. Second, we have demonstrated how to represent temporal network data as a second-order network. We have tested these two strategies by focusing on a flight network in the United States and a (temporal) social interaction network in a school environment, respectively. In both cases, the second-order dynamics, even when approximated, allowed us to extract temporal patterns in the data that would have been missed by an aggregated first-order model.

Finally, we have highlighted the relationship between second-order Markov models and the recently introduced spectral clustering formalism based on the non-backtracking matrix. Interestingly, the non-backtracking matrix corresponds to a scaled version of the transition matrix of a non-backtracking random walk, which is a special case of the second order dynamics discussed in this manuscript. As we have demonstrated, this connection opens up the possibility to investigate further second order Markov processes and their relationships to spectral clustering. Investigating these issues in more detail will be the subject of future work.

Additional Information

How to cite this article: Salnikov, V. et al. Using higher-order Markov models to reveal flow-based communities in networks. Sci. Rep. 6, 23194; doi: 10.1038/srep23194 (2016).