Published in: Foundations of Computational Mathematics 6/2017

Open Access 27-09-2016

Conley–Morse–Forman Theory for Combinatorial Multivector Fields on Lefschetz Complexes

Author: Marian Mrozek


Abstract

We introduce combinatorial multivector fields, associate with them multivalued dynamics and study their topological features. Our combinatorial multivector fields generalize combinatorial vector fields of Forman. We define isolated invariant sets, Conley index, attractors, repellers and Morse decompositions. We provide a topological characterization of attractors and repellers and prove Morse inequalities. The generalization aims at algorithmic analysis of dynamical systems through combinatorialization of flows given by differential equations and through sampling dynamics in physical and numerical experiments. We provide a prototype algorithm for such applications.
Notes
Communicated by Herbert Edelsbrunner.
This research is partially supported by the Polish National Science Center under Maestro Grant No. 2014/14/A/ST1/00453.

1 Introduction

In the late 1990s Forman [10] introduced the concept of a combinatorial vector field and presented a version of Morse theory for acyclic combinatorial vector fields. In a subsequent paper [11] he studied combinatorial vector fields without the acyclicity assumption, extended the notion of the chain recurrent set to this setting and proved a Conley-type generalization of the Morse inequalities.
Conley theory [9] is a generalization of Morse theory to the setting of not necessarily gradient or gradient-like flows on locally compact metric spaces. In this theory the concepts of a non-degenerate critical point and its Morse index are replaced by the more general concepts of an isolated invariant set and its Conley index. Conley theory reduces to Morse theory in the case of a flow on a smooth manifold defined by a smooth gradient vector field with non-degenerate critical points.
Recently, Kaczynski et al. [16] defined the concepts of an isolated invariant set and the Conley index in the case of a combinatorial vector field on the collection of simplices of a simplicial complex and observed that such a combinatorial field has a counterpart on the polytope of the simplicial complex in the form of a multivalued map which is upper semicontinuous, acyclic-valued and homotopic to the identity.
The aim of this paper is to combine the ideas of Forman with some classical concepts of topological dynamics in order to obtain an algorithmic tool for studying sampled dynamics, that is, dynamics known only via a finite set of samples obtained from a physical or numerical experiment. The method to achieve this aim is the combinatorialization of classical dynamics. By this we mean constructing an analogue of classical topological dynamics that is set up in finite combinatorial spaces: simplicial complexes, cubical sets (also called cubical complexes) or, more generally, cellular complexes. Such spaces are equipped with a natural but non-Hausdorff topology via the Alexandroff Theorem [1] and the partial order given by the face relation. Simplicial complexes in the form of triangular meshes are typically used in the visualization of vector fields sampled from data, and the use of topological methods in this field is increasing [8, 24, 31]. In the study of gene regulatory networks a frequent method used to analyse the associated dynamics is Thomas' formalism [28], leading to the study of dynamics on cubical grids [3]. The proposed combinatorialization may also serve as a very concise description of the qualitative features of classical dynamics.
Forman’s combinatorial vector fields seem to be a natural tool for a concise approximation and description of the dynamics of differential equations and more generally flows. For instance, given a cubical grid in \(\mathbb {R}^d\) and a vector field, it is natural to set up arrows in the combinatorial setting of the grid by taking averages of the vectors in the vector field along the codimension one faces of the grid. Unfortunately, in most cases such a procedure does not lead to a well-defined combinatorial vector field in the sense of Forman. This is because in the Forman theory the combinatorial vectors together with the critical cells have to constitute a partition. In particular, each non-critical, top-dimensional cell has to be paired with precisely one cell in its boundary. Such a requirement is not satisfied by a typical space discretization of a vector field (see Fig. 1). In order to overcome these limitations we introduce and study combinatorial multivector fields, a generalization of Forman’s combinatorial vector fields. Similar but different generalizations of Forman’s combinatorial vector fields are proposed by Wisniewski and Larsen [32] in the study of piecewise affine control systems and by Freij [13] in the combinatorial study of equivariant discrete Morse theory.
We extend the concepts of isolated invariant set and Conley index introduced in [16] to combinatorial multivector fields. We also define attractors, repellers, attractor–repeller pairs and Morse decompositions and provide a topological characterization of attractors and repellers. These ideas are novel not only for combinatorial multivector fields but also for combinatorial vector fields. Furthermore, we prove the Morse equation for Morse decompositions and deduce from it the Morse inequalities. They generalize the Morse inequalities proved by Forman [11] for the Morse decomposition consisting of the basic sets of a combinatorial vector field to the case of general Morse decompositions for combinatorial multivector fields.
The construction of the chain complex, an algebraic structure needed in our study, is complicated in the case of a general cellular complex. This is in contrast to the case of a simplicial complex or a cubical set. To keep things simple but general, in this paper we work in the algebraic setting of chain complexes with a distinguished basis, an abstraction of the chain complex of a simplicial, cubical or cellular complex already studied by Lefschetz [19]. It is elementary to see simplicial and cubical complexes as examples of Lefschetz complexes. A version of Forman theory for combinatorial vector fields on chain complexes with a distinguished basis was recently proposed by a few authors [14, 18, 27]. Related work concerns Forman theory on finite topological spaces [21].
The organization of the paper is as follows. In Sect. 2 we provide an informal overview of the main results of the paper. In Sect. 3 we illustrate the new concepts and results with several examples. In Sect. 4 we gather preliminary definitions and results. In Sect. 5 we introduce Lefschetz complexes, define combinatorial multivector fields and prove their basic features. In Sect. 6 we define solutions and invariant sets of combinatorial multivector fields. In Sect. 7 we study isolated invariant sets of combinatorial multivector fields and their Conley index. In Sect. 8 we investigate attractors, repellers and attractor–repeller pairs. In Sect. 9 we define Morse decompositions and prove the Morse equation and the Morse inequalities. In Sect. 10 we discuss an algorithm constructing combinatorial multivector fields from clouds of vectors on the planar integer lattice. In Sect. 11 we show a few possible extensions of the theory presented in this paper. In Sect. 12 we present conclusions and directions of future research.

2 Main Results

In this section we informally present the main ideas and results of the paper. Precise definitions, statements and proofs will be given in the sequel.

2.1 Lefschetz Complexes

In principle, a Lefschetz complex may be viewed as a finitely generated free chain complex with a distinguished basis. However, in this paper we follow the original definition given by Lefschetz [19]. In this definition the elements of the basis are the primary objects and the algebraic structure is given on top of them. Such a reversed approach is natural in the algorithmic context, because a computer may store and process only finite sets. Thus, a Lefschetz complex consists of a finite collection of cells X graded by dimension and the incidence coefficient \(\kappa (x,y)\) encoding the incidence relation between cells \(x,y\in X\) (see Sect. 5.1 for a precise definition). A nonzero value of \(\kappa (x,y)\) indicates that the cell y is in the boundary of the cell x and that the dimension of y is one less than the dimension of x. The cell y is then called a facet of x. The family K of all simplices of a simplicial complex [15, Definition 11.8], the family of all elementary cubes in a cubical set [15, Definition 2.9] or, more generally, the family of cells of a cellular complex (finite CW complex, see [20, Section IX.3]) are examples of Lefschetz complexes. In these cases the incidence coefficient is obtained from the boundary homomorphism of the associated simplicial, cubical or cellular chain complex. A sample Lefschetz complex is presented in Fig. 2. It consists of eight vertices (0-cells or cells of dimension zero), ten edges (1-cells) and three squares (2-cells).
Condition (3) presented in Sect. 5.1 guarantees that the free group spanned by X together with the linear map given by \( \mathbf{\partial }x:=\sum _{y\in X}\kappa (x,y)y \) is a free chain complex with X as a basis. By the Lefschetz homology of X we mean the homology of this chain complex. We denote it by \(H^\kappa (X)\). In the case of a Lefschetz complex given as a cellular complex, condition (3) is satisfied and the resulting chain complex and homology are precisely the cellular chain complex and the cellular homology. Given a Lefschetz complex X we denote by \(\beta _i(X):={\text {rank}}\,H^\kappa _i(X)\) the ith Betti number of X and write \(p_X\) for the respective Poincaré polynomial, that is,
$$\begin{aligned} p_X(t):=\sum _{i=0}^\infty \beta _i(X)t^i. \end{aligned}$$
The closure of \(A\subseteq X\), denoted \({\text {cl}}\,A\), is obtained by recursively adding to A the facets of cells in A, the facets of the facets of cells in A and so on. The set A is closed if \({\text {cl}}\,A=A\), and it is open if \(X \setminus A\) is closed. The terminology is justified, because the open sets indeed form a \(T_0\) topology on X. We say that A is proper if \({\text {mo}}\,A:={\text {cl}}\,A \setminus A\), which we call the mouth of A, is closed. Proper sets are important for us, because every proper subset of a Lefschetz complex with incidence coefficients restricted to this subset is also a Lefschetz complex. Four Lefschetz complexes being proper subsets of a bigger Lefschetz complex are indicated in Fig. 2 by solid ovals. In the case of a proper subset X of a cellular complex K the Lefschetz homology of X is isomorphic to the relative cellular homology \( H( {\text {cl}}\,X, {\text {mo}}\,X). \) However, from the algorithmic point of view the direct use of Lefschetz homology is preferred, because it minimizes the amount of information which needs to be encoded.
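The closure, mouth and properness operations above are purely combinatorial and are easy to prototype. The following Python sketch (our own illustration, not code from the paper; the cell names and helper functions are hypothetical) stores a Lefschetz complex as a map assigning to each cell its set of facets and computes \({\text {cl}}\,A\), \({\text {mo}}\,A\) and the properness test.

```python
# Minimal sketch (not from the paper): a Lefschetz complex given by its facet lists.
# Cells are strings; facets[x] is the set of facets of the cell x.
facets = {
    "A": set(), "B": set(), "C": set(),
    "AB": {"A", "B"},            # an edge and its two vertices
    "BC": {"B", "C"},            # a second edge
}

def cl(cells):
    """Closure: repeatedly add facets of cells already collected."""
    closure, stack = set(cells), list(cells)
    while stack:
        x = stack.pop()
        for y in facets[x]:
            if y not in closure:
                closure.add(y)
                stack.append(y)
    return closure

def mo(cells):
    """Mouth: cl A minus A."""
    return cl(cells) - set(cells)

def is_closed(cells):
    return cl(cells) == set(cells)

def is_proper(cells):
    """A is proper iff its mouth is closed."""
    return is_closed(mo(cells))

print(cl({"AB"}))               # {'AB', 'A', 'B'}
print(mo({"AB", "BC"}))         # {'A', 'B', 'C'}
print(is_proper({"AB", "B"}))   # True: the mouth {'A'} is closed
```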

2.2 Multivector Fields

A multivector is a proper \(V\subseteq X\) such that \(V\subseteq {\text {cl}}\,V^\star \) for a unique \(V^\star \in V\). Out of the four examples of Lefschetz complexes in Fig. 2 only the one enclosing vertex C is not a multivector. A multivector is critical if \(H^\kappa (V)\ne 0\). Otherwise it is regular. The only example of a regular multivector in Fig. 2 is the one enclosing vertex B. Roughly speaking, a regular multivector indicates that \({\text {cl}}\,V\), the closure of V, may be collapsed to \({\text {mo}}\,V\), the mouth of V. In the dynamical sense this means that one can set up a flow which eventually leaves V through \({\text {mo}}\,V\). A critical multivector indicates the contrary: \({\text {cl}}\,V\) may not be collapsed to \({\text {mo}}\,V\) and, in the dynamical sense, something must stay inside V.
A combinatorial multivector field is a partition \(\mathcal {V}\) of X into multivectors. We associate dynamics with \(\mathcal {V}\) via a directed graph \(G_\mathcal {V}\) with vertices in X and three types of arrows: up-arrows, down-arrows and loops. Up-arrows have heads in \(V^\star \) and tails in all the other cells of V. Down-arrows have tails in \(V^\star \) and heads in \({\text {mo}}\,V\). Loops join \(V^\star \) with itself for all critical multivectors V. A sample multivector field is presented in Fig. 3 (top) together with the associated directed graph \(G_\mathcal {V}\) (bottom). The terminology ‘up-arrows’ and ‘down-arrows’ comes from the fact that the dimensions of cells are increasing along up-arrows and decreasing along down-arrows. Notice that the up-arrows sharing the same head uniquely determine a multivector. Therefore, it is convenient to draw a multivector field not as a partition but by marking all up-arrows. For convenience, we also mark the loops, but the down-arrows are implicit and are usually omitted to keep the drawings simple.
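The construction of \(G_\mathcal {V}\) translates directly into code. Below is a minimal sketch (ours, with hypothetical helper names), assuming a closure routine for single cells and a user-supplied criticality test; deciding criticality requires the Lefschetz homology of the multivector and is not implemented here.

```python
# Sketch: build the arrows of the digraph G_V of a multivector field.
# multivectors: list of pairs (V, V_star) with V a set of cells and V_star its dominant cell;
# closure(x): the set cl{x} (including x); is_critical(V): assumed to be given.
def multivector_graph(multivectors, closure, is_critical):
    arrows = set()
    for V, V_star in multivectors:
        for x in V:
            if x != V_star:
                arrows.add((x, V_star))      # up-arrows: tails in V \ {V*}, head V*
        for y in closure(V_star) - V:
            arrows.add((V_star, y))          # down-arrows: from V* into mo V
        if is_critical(V):
            arrows.add((V_star, V_star))     # loop at V* for a critical multivector
    return arrows
```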
A multivector may consist of one, two or more cells. If it consists of at most two cells, we say that the multivector is a vector. Otherwise we call it a strict multivector. Note that the combinatorial multivector field in Fig. 3 has three strict multivectors:
$$\begin{aligned} \{ABFE,AB,AE,A\},\quad \{BCGF,BC,FG\}, \quad \{CDHG,CD,DH,GH,D,H\}. \end{aligned}$$
Observe that a combinatorial multivector field with no strict multivectors corresponds to the combinatorial vector field in the sense of Forman [10, 11].
A cell \(x\in X\) is critical with respect to \(\mathcal {V}\) if \(x=V^\star \) for a critical multivector \(V\in \mathcal {V}\). A critical cell x is non-degenerate if the Lefschetz homology of its multivector is zero in all dimensions except one, in which it is isomorphic to the ring of coefficients. This dimension is then the Morse index of the critical cell. The combinatorial multivector field in Fig. 3 has three critical cells: F, C and BCGF. They are all non-degenerate. The cells F and C have Morse index equal to zero. The cell BCGF has Morse index equal to one.
A solution of \(\mathcal {V}\) (also called a trajectory or a walk) is a bi-infinite, backward infinite, forward infinite or finite sequence of cells such that any two consecutive cells in the sequence form an arrow in the graph \(G_\mathcal {V}\). The solution is full if it is bi-infinite. A finite solution is also called a path. The full solution is periodic if the sequence is periodic. It is stationary if the sequence is constant. By the dynamics of \(\mathcal {V}\) we mean the collection of all solutions. The dynamics is multivalued in the sense that there may be many different solutions going through a given cell.

2.3 Isolated Invariant Sets

Let X be a Lefschetz complex and let \(\mathcal {V}\) be a combinatorial multivector field on X. Assume \(S\subseteq X\) is \(\mathcal {V}\)-compatible, that is, S equals the union of multivectors contained in it. We say that S is invariant if for every multivector \(V\subseteq S\) there is a full solution through \(V^\star \) in S. The invariant part of a subset \(A\subseteq X\) is the maximal, \(\mathcal {V}\)-compatible invariant subset of A. A path in \({\text {cl}}\,S\) is an internal tangency to S if the values at the endpoints of the path are in S but one of the values is not in S. The set S is isolated invariant if it is invariant and admits no internal tangencies.
An isolated invariant set is an attractor, respectively a repeller, if there is no full solution crossing it which goes away from it in forward, respectively backward, time. The attractors and repellers have the following topological characterization in terms of the \(T_0\) topology of X (see Sect. 8, Theorems 8.1 and 8.3).
Theorem 2.1
An isolated invariant set \(S\subseteq X\) is an attractor, respectively a repeller, if and only if it is closed, respectively open, in X.

2.4 Morse Inequalities

Given a family \(\{M_r\}_{r\in {\mathbb {P}}}\) of mutually disjoint, non-empty, isolated invariant sets, we write \(r\le r'\) for \(r,r'\in \mathbb {P}\) if there exists a full solution such that all its sufficiently far terms belong to \(M_r\) and all sufficiently early terms belong to \(M_{r'}\). Such a full solution is called a connection running from \(M_{r'}\) to \(M_r\). The connection is heteroclinic if \(r\ne r'\). Otherwise it is called homoclinic. We say that \(\{M_r\}_{r\in {\mathbb {P}}}\) is a Morse decomposition of X if the relation \(\le \) induces a partial order in \(\mathbb {P}\). The Hasse diagram of this partial order with vertices labelled by the Poincaré polynomials \(p_{M_r}(t)\) is called the Conley–Morse graph of the Morse decomposition (cf. [7, Def. 2.11]).
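As a computational illustration (ours, not the paper's algorithm), the relation \(\le \) can be approximated by reachability between the Morse sets in the digraph \(G_\mathcal {V}\); the transitive reduction of the resulting strict order gives the edges of the Conley–Morse graph. The helper names below are hypothetical and the sketch assumes that \(\{M_r\}_{r\in {\mathbb {P}}}\) is indeed a Morse decomposition.

```python
# Sketch: order Morse sets by reachability in G_V and extract the Hasse diagram.
# arrows: set of (tail, head) pairs of G_V; morse: dict mapping a label r to the set of cells of M_r.
from collections import defaultdict

def reachable_from(arrows, sources):
    succ = defaultdict(set)
    for a, b in arrows:
        succ[a].add(b)
    seen, stack = set(sources), list(sources)
    while stack:
        x = stack.pop()
        for y in succ[x]:
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

def morse_order(arrows, morse):
    """Strict relation {(r, r') : r <= r', r != r'}: M_r is reachable from M_{r'}."""
    below = set()
    for rp, Mp in morse.items():
        reach = reachable_from(arrows, Mp)
        for r, M in morse.items():
            if r != rp and reach & M:
                below.add((r, rp))
    return below

def hasse_edges(strict):
    """Transitive reduction: keep only the covering relations."""
    labels = {x for pair in strict for x in pair}
    return {(a, b) for (a, b) in strict
            if not any((a, c) in strict and (c, b) in strict for c in labels)}
```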
The Poincaré polynomials \(p_{M_r}(t)\) are related to the Poincaré polynomial \(p_X(t)\) via the following theorem (see Sect. 9.3, Theorems 9.11 and 9.12).
Theorem 2.2
(Morse equation and Morse inequalities) If \(\{M_r\}_{r\in {\mathbb {P}}}\) is a Morse decomposition of X, then
$$\begin{aligned} \sum _{r\in {\mathbb {P}}}p_{M_r}(t)=p_X(t)+(1+t)q(t) \end{aligned}$$
for some polynomial q(t) with non-negative coefficients. In particular, for any natural number k we have
$$\begin{aligned} \sum _{r\in {\mathbb {P}}}{\text {rank}}\,H^\kappa _k(M_r)\ge {\text {rank}}\,H^\kappa _k(X). \end{aligned}$$
We say that a combinatorial multivector field \(\mathcal {V}\) on X is gradient-like if there exists a real valued function f on X which is non-increasing along each solution of \(\mathcal {V}\) and constant only if the cells along the solution belong to the same multivector (see Sect. 9.5). The following theorem extends the results of Forman [10, Cor. 3.6] to the case of combinatorial multivector fields (see Sect. 9.5, Theorems 9.14 and 9.15).
Theorem 2.3
Assume \(\mathcal {V}\) is a gradient-like combinatorial multivector field on X such that each critical cell of \(\mathcal {V}\) is non-degenerate. Let \(n_k\) denote the number of critical cells of Morse index k. Then the family of critical multivectors of \(\mathcal {V}\) is a Morse decomposition of X. Moreover, for any non-negative integer k we have
$$\begin{aligned} n_k-n_{k-1}+\cdots \pm n_0 \ge \beta _k(X)-\beta _{k-1}(X)+\cdots \pm \beta _0(X), \end{aligned}$$
and
$$\begin{aligned} n_k\ge \beta _k(X). \end{aligned}$$

3 Examples

In this section we present a few examples of combinatorial multivector fields and some of their Morse decompositions. We begin with the example in Fig. 4. Then, we present an example illustrating the differences between combinatorial multivector fields and combinatorial vector fields. We complete this section with examples of combinatorial multivector fields constructed by algorithm CMVF presented in Sect. 10. Two of these examples are derived from a planar smooth vector field, and one is derived from a cloud of random vectors on an integer lattice.

3.1 Attractors and Repellers

Consider the planar regular cellular complex K in Fig. 4 (left). It consists of 11 quadrilaterals and their faces. A proper subcollection of its 55 faces, marked by a circle in the centre of mass, forms a Lefschetz complex X. It consists of all cells of the cellular complex except the vertices A, B, D and the edges AB, AD. A combinatorial multivector field \(\mathcal {V}\) on X is marked by up-arrows and loops. The invariant part of X with respect to \(\mathcal {V}\) consists of all cells of X except the cells marked in white. The Lefschetz homology \(H^\kappa (X)\cong H(K,A)\), where A is the cellular complex consisting of the vertices A, B, D and the edges AB, AD. Thus, this is the homology of a pointed annulus. Therefore, \(p_X(t)=t\).
Consider the family of six isolated invariant sets
$$\begin{aligned} \mathcal {M}=\{\,M_{\displaystyle {\bullet }},M_{\ominus },M_{\displaystyle {\circ }},M_{\times },M_{\triangle },M_{\Diamond }\,\}, \end{aligned}$$
marked in Fig. 4 with the respective symbols. The family \(\mathcal {M}\) is a Morse decomposition of X. The respective Poincaré polynomials are: \(p_{\displaystyle {\bullet }}(t)=1\), \(p_{\ominus }(t)=t\), \(p_{\displaystyle {\circ }}(t)=t^2\), \(p_{\times }(t)=2t\), \(p_{\triangle }(t)=t^2+t\), \(p_{\Diamond }(t)=t+1\).
There are two attractors: stationary \(M_{\displaystyle {\bullet }}\) and periodic \(M_{\Diamond }\). There are also two repellers: stationary \(M_{\displaystyle {\circ }}\) and periodic \(M_{\triangle }\). The other two isolated invariant sets are neither attractors nor repellers. The Morse equation takes the form
$$\begin{aligned} 2t^2+5t+2=t+(1+t)(2+2t). \end{aligned}$$

3.2 Refinements of Multivector Fields

A multivector field \(\mathcal {W}\) is a refinement of \(\mathcal {V}\) if each multivector in \(\mathcal {V}\) is \(\mathcal {W}\)-compatible. The refinement is proper if the invariant part with respect to \(\mathcal {W}\) of each regular multivector in \(\mathcal {V}\) is empty. A Forman refinement of a multivector field \(\mathcal {V}\) is a vector field \(\mathcal {W}\) which is a proper refinement of \(\mathcal {V}\) such that each multivector of \(\mathcal {V}\) contains at most one critical vector of \(\mathcal {W}\). Then \(\mathcal {W}\) has precisely one critical cell in any critical multivector of \(\mathcal {V}\).
The concept of Forman refinement raises two natural questions. The first question is whether a combinatorial multivector field always admits a Forman refinement. The second question is whether the study of the dynamics of a combinatorial multivector field which admits a Forman refinement may be reduced to the study of the dynamics of the refinement.
We do not know the answer to the first question. If the answer is negative, then there exists a multivector field \(\mathcal {V}\) on a Lefschetz complex X such that at least one multivector of \(\mathcal {V}\) cannot be partitioned into vectors with at most one critical vector in the partition. There are examples of zero spaces (Lefschetz complexes with zero homology) which do not admit a combinatorial vector field with empty invariant part. They may be constructed by adapting examples of contractible but not collapsible cellular complexes such as Bing's house [4] or the dunce hat [33]. One such example is presented in Fig. 5. This example fulfils all requirements of a multivector except the requirement that a multivector has precisely one top-dimensional cell: it has five top-dimensional cells.
Regardless of the answer to the first question, even if a given combinatorial multivector field does have a Forman refinement, in general this refinement is not unique. Figure 6 shows a combinatorial multivector field \(\mathcal {V}\) (left) and its two different Forman refinements: \(\mathcal {V}_1\) (middle) and \(\mathcal {V}_2\) (right). The critical cells of all three combinatorial multivector fields are the same: AB, B, C, DF and F. However, in the case of \(\mathcal {V}\) there are heteroclinic connections running from the critical cell DF to the critical cells AB, B and C. In the case of \(\mathcal {V}_1\) there is a heteroclinic connection running from DF to B but not to AB nor C. In the case of \(\mathcal {V}_2\) there is a heteroclinic connection running from DF to C but not to AB nor B. Our next example shows that the differences may be even deeper. Thus, the answer to the second question is clearly negative.

3.3 Homoclinic Connections and Chaotic Dynamics

Figure 7 presents a combinatorial multivector field \(\mathcal {V}\) on a Lefschetz complex X (top) and one of its two Forman refinements \(\mathcal {V}_1\) (bottom). The combinatorial multivector field \(\mathcal {V}\) has homoclinic connections to the cell BEIF. Moreover, it admits chaotic dynamics in the sense that for each bi-infinite sequence of two symbols marking the two edges BF and EI, there is a full trajectory whose sequence of passing through the edges BF and EI is precisely the given one. The two Forman refinements of \(\mathcal {V}\) have neither homoclinic connections nor chaotic dynamics.

3.4 A Combinatorial Multivector Field Constructed from a Smooth Vector Field

In Sect. 10 we present algorithm CMVF. Its input consists of a collection of classical vectors on an integer, planar lattice. These may be the vectors of a smooth planar vector field evaluated at the lattice points. However, the algorithm accepts any collection of vectors, including vectors chosen randomly. It constructs a combinatorial multivector field based on the directions of the classical vectors, with a varying number of strict multivectors: from many to none, depending on a control parameter.
As our first example consider the vector field of the differential equation
$$\begin{aligned} \dot{x}_1&= -x_2 + x_1 \left( x_1^2 + x_2^2-4\right) \left( x_1^2+x_2^2-1\right) ,\\ \dot{x}_2&= x_1 + x_2 \left( x_1^2 + x_2^2-4\right) \left( x_1^2+x_2^2-1\right) \end{aligned}$$
(1)
restricted to the \(10\times 10\) lattice of points in the square \([-3,3]\times [-3,3]\). The equation has three minimal invariant sets: a repelling stationary point at the origin and two invariant circles, an attracting periodic orbit of radius 1 and a repelling periodic orbit of radius 2. The outcome of algorithm CMVF maximizing the number of strict multivectors is presented in Fig. 8. It captures all three minimal invariant sets of (1). The variant forbidding strict multivectors is presented in Fig. 9. It captures only the attracting periodic orbit, whereas the repelling fixed point and the repelling periodic trajectory degenerate into a collection of critical cells.
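For concreteness, the input to algorithm CMVF in this example is simply the right-hand side of (1) evaluated at the lattice points. A small sketch of that sampling step (the variable names are ours; the algorithm itself is described in Sect. 10):

```python
# Sample the vector field (1) on the 10 x 10 lattice in [-3, 3] x [-3, 3].
import numpy as np

xs = np.linspace(-3.0, 3.0, 10)
samples = {}
for x1 in xs:
    for x2 in xs:
        r = x1 * x1 + x2 * x2
        v1 = -x2 + x1 * (r - 4.0) * (r - 1.0)
        v2 = x1 + x2 * (r - 4.0) * (r - 1.0)
        samples[(x1, x2)] = (v1, v2)   # the cloud of vectors passed to CMVF
```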

3.5 A Combinatorial Multivector Field Constructed from a Random Collection of Vectors

Figure 10 presents a combinatorial multivector field constructed by algorithm CMVF from a randomly selected collection of vectors at the lattice points. To ensure that the boundary of the selected region does not divide multivectors, all the vectors at the boundary are not random but point inwards. The resulting Morse decomposition consists of 102 isolated invariant sets out of which three consist of more than one multivector.

4 Preliminaries

In this section we introduce the notation, recall the definitions and gather results used in the sequel.

4.1 Sets and Maps

We denote the sets of reals, integers, non-negative integers and non-positive integers, respectively, by \(\mathbb {R}, \mathbb {Z}\), \(\mathbb {Z}^+\), \(\mathbb {Z}^-\). We also write \(\mathbb {Z}^{\ge n}\), \(\mathbb {Z}^{\le n}\), respectively, for the integers greater than or equal to n and less than or equal to n. Given a set X, we write \({\text {card}}\,X\) for the number of elements of X and we denote by \(\mathcal {P}(X)\) the family of all subsets of X. We write \(f:X{\nrightarrow }Y\) for a partial map from X to Y, that is, a map defined on a subset \({\text {dom}}\,f\subseteq X\), called the domain of f, and such that the set of values of f, denoted \({\text {im}}\,f\), is contained in Y.

4.2 Relations, Multivalued Maps and Digraphs

Given a set X and a binary relation \(R\subseteq X\times X\), we use the shorthand xRy for \((x,y)\in R\). By the transitive closure of R we mean the relation \(\bar{R}\subseteq X\times X\) given by \(x\bar{R}y\) if there exists a sequence \(x=x_0,x_1,\ldots , x_n=y\) such that \(n\ge 1\) and \(x_{i-1}Rx_i\) for \(i=1,2,\ldots , n\). Note that \(\bar{R}\) is transitive but need not be reflexive. The relation \(\bar{R}\cup {\text {id}}\,_X\), where \({\text {id}}\,_X\) stands for the identity relation on X, is reflexive and transitive. Hence, it is a preorder, called the preorder induced by R. A \(y\in X\) covers an \(x\in X\) in the relation R if xRy but there is no \(z\in X\) such that \(x\ne z\ne y\) and xRz, zRy.
A multivalued map \(F:X{\,\overrightarrow{\rightarrow }\,}Y\) is a map \(F:X\rightarrow \mathcal {P}(Y)\). For \(A\subseteq X\) we define the image of A by \( F(A):=\bigcup \{\,F(x) \mid x\in A\,\} \) and for \(B\subseteq Y\) we define the preimage of B by \( F^{-1}(B):=\{\,x\in X\mid F(x)\cap B\ne \varnothing \,\}. \)
Given a relation R, we associate with it a multivalued map \(F_R:X{\,\overrightarrow{\rightarrow }\,}X\) by \(F_R(x):=R(x)\), where \( R(x):=\{\,y\in X\mid xRy\,\} \) is the image of \(x\in X\) in R. Obviously \(R\mapsto F_R\) is a one-to-one correspondence between binary relations in X and multivalued maps from X to X. Often, it will be convenient to interpret the relation R as a directed graph whose set of vertices is X and in which a directed arrow joins x with y whenever xRy. The three concepts of relation, multivalued map and directed graph are equivalent on the formal level, and the distinction serves only to emphasize different directions of research. In this paper it will be convenient to use all three concepts interchangeably.
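For a finite relation the transitive closure and the induced preorder can be computed directly; the following sketch (ours) uses the Floyd–Warshall scheme on a small relation containing a cycle.

```python
# Sketch: transitive closure of a relation R on a finite set X and the induced preorder.
X = {1, 2, 3, 4}
R = {(1, 2), (2, 3), (3, 1)}            # a relation with a cycle through 1, 2, 3

def transitive_closure(X, R):
    T = set(R)
    for k in X:
        for i in X:
            for j in X:
                if (i, k) in T and (k, j) in T:
                    T.add((i, j))
    return T

T = transitive_closure(X, R)            # bar(R)
preorder = T | {(x, x) for x in X}      # bar(R) together with the identity relation
print((1, 1) in T)                      # True: 1 lies on a cycle of R
print((4, 1) in T)                      # False: 4 is unrelated to 1
```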

4.3 Partial Orders

Assume \((X,\le )\) is a poset. Thus, \(\le \) is a partial order, that is, a reflexive, antisymmetric and transitive relation in X. As usual, we denote the inverse of this relation by \(\ge \). We also write < and > for the associated strict partial orders, that is, the relations \(\le \) and \(\ge \) with equality excluded. By an interval in X we mean a subset of X which has one of the following four forms
$$\begin{aligned} {[}x,y]&:=\{\,z\in X\mid x\le z\le y\,\},\\ (-\infty ,y]&:=\{\,z\in X\mid z\le y\,\},\\ {[}x,\infty )&:=\{\,z\in X\mid x\le z\,\},\\ (-\infty ,\infty )&:=X. \end{aligned}$$
In the first case we speak about a closed interval. The elements x, y are the endpoints of the interval. We recall that \(A\subseteq X\) is convex if for any \(x,y\in A\) the closed interval [x, y] is contained in A. Note that every interval is convex but there may exist convex subsets of X which are not intervals. A set \(A\subseteq X\) is an upper set if for any \(x\in A\) we have \([x,\infty )\subseteq A\). Also, \(A\subseteq X\) is a lower set if for any \(x\in A\) we have \((-\infty ,x]\subseteq A\). Sometimes a lower set is called an attracting interval and an upper set a repelling interval. However, one has to be careful, because in general lower and upper sets need not be intervals at all. For \(A\subseteq X\) we also use the notation \(A^{\le }:=\{\,x\in X\mid \exists _{a\in A}\;x\le a\,\}\) and \(A^{<}:=A^{\le } \setminus A\).
Proposition 4.1
If I is convex, then \(I^{\le }\) and \(I^{<}\) are lower sets (attracting intervals).
Proof
The verification that \(I^{\le }\) is a lower set is straightforward. To see that \(I^{<}\) is a lower set take \(x\in I^{<}\). Hence, we have \(x\not \in I\) but \(x<z\) for some \(z\in I\). Let \(y\le x\). Then \(y\in I^{\le }\). Since I is convex, we cannot have \(y\in I\). It follows that \(y\in I^{<}\). \(\square \)
Proposition 4.2
Let \(I=\{\,1,2,\ldots , n\,\}\) and let \(\le \) denote the linear order of natural numbers. Then for any \(i\in I\) we have \(\{i\}^{\le }=\{\,1,2,\ldots , i\,\}\) and \(\{i\}^{<}=\{\,1,2,\ldots , i-1\,\}\).
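Both operations are immediate to compute for a finite poset; the sketch below (ours) takes the order as a comparison predicate and reproduces Proposition 4.2.

```python
# Sketch: the sets A^{<=} and A^{<} in a finite poset given by a predicate leq(x, y).
def down_leq(X, leq, A):
    return {x for x in X if any(leq(x, a) for a in A)}   # A^{<=}

def down_lt(X, leq, A):
    return down_leq(X, leq, A) - set(A)                  # A^{<}

X = set(range(1, 7))
leq = lambda x, y: x <= y                                # the linear order of Proposition 4.2
print(down_leq(X, leq, {4}))    # {1, 2, 3, 4}
print(down_lt(X, leq, {4}))     # {1, 2, 3}
```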

4.4 Topology of Finite Sets

For a topological space X and \(A\subseteq X\) we write \({\text {cl}}\,A\) for the closure of A. We also define the mouth of A by
$$\begin{aligned} {\text {mo}}\,A:={\text {cl}}\,A \setminus A. \end{aligned}$$
Note that A is closed if and only if its mouth is empty. We say that A is proper if \({\text {mo}}\,A\) is closed. Note that open and closed subsets of X are proper. In the case of finite topological spaces proper sets have a special structure. To explain it we first recall some properties of finite topological spaces based on the following fundamental result which goes back to Alexandroff [1].
Theorem 4.3
For a finite poset \((X,\le )\) the family \(\mathcal {T}_{\le }\) of upper sets of \(\le \) is a \(T_0\) topology on X. For a finite \(T_0\) topological space \((X,\mathcal {T})\) the relation \(x\le _{\mathcal {T}}y\) defined by \(x\in {\text {cl}}\,\{y\}\) is a partial order on X. Moreover, the two associations relating \(T_0\) topologies and partial orders are mutually inverse.
Let \((X,\mathcal {T})\) be a finite topological space. For \(x\in X\) we write \( {\text {cl}}\,x:={\text {cl}}\,\{x\}\), \({\text {opn}}\,x:=\bigcap \{\,U\in \mathcal {T}\mid x\in U\,\}\). The following proposition may be easily verified.
Proposition 4.4
Let \((X,\mathcal {T})\) be a finite topological space and let \(x,y\in X\). The operations \({\text {cl}}\,\) and \({\text {opn}}\,\) have the following properties.
(i)
\({\text {cl}}\,x\) is the smallest closed set containing x,
 
(ii)
\({\text {opn}}\,x\) is the smallest open set containing x,
 
(iii)
\({\text {cl}}\,A=\bigcup _{x\in A}{\text {cl}}\,x\) for any \(A\subseteq X\),
 
(iv)
\(A\subseteq X\) is closed if and only if \({\text {cl}}\,x\subseteq A\) for any \(x\in A\),
 
(v)
\(A\subseteq X\) is open if and only if \({\text {opn}}\,x\subseteq A\) for any \(x\in A\),
 
(vi)
\(y\in {\text {cl}}\,x\) if and only if \(x\in {\text {opn}}\,y\).
 
In the sequel we will particularly often use property (iii) of Proposition 4.4. In particular, it is needed in the following characterization of proper sets in finite topological spaces.
Proposition 4.5
Let X be a finite topological space. Then \(A\subseteq X\) is proper if and only if
$$\begin{aligned} \forall _{x,z\in A}\forall _{y\in X} \quad x\in {\text {cl}}\,y,\; y\in {\text {cl}}\,z\;\Rightarrow \;y\in A. \end{aligned}$$
(2)
Proof
Let \(A\subseteq X\) be proper, \(x,z\in A\), \(y\in X\), \(x\in {\text {cl}}\,y\), \(y\in {\text {cl}}\,z\) and assume \(y\not \in A\). Then \(y\in {\text {mo}}\,A\) and \(x\in {\text {cl}}\,{\text {mo}}\,A={\text {mo}}\,A\). Therefore, \(x\not \in A\), a contradiction proving (2). Assume in turn that (2) holds and \({\text {mo}}\,A\) is not closed. Then there exists an \(x\in {\text {cl}}\,{\text {mo}}\,A \setminus {\text {mo}}\,A\). Thus, \(x\in A\), \(x\in {\text {cl}}\,y\) for some \(y\in {\text {mo}}\,A\) and \(y\in {\text {cl}}\,z\) for some \(z\in A\). It follows from (2) that \(y\in A\), which contradicts \(y\in {\text {mo}}\,A\). \(\square \)
Proposition 4.5 means that in the setting of finite topological spaces proper sets correspond to convex sets in the language of the associated partial order.
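Condition (2) can be tested by brute force over all triples of cells, with the specialization order supplied as a predicate; a sketch (ours) follows. For larger spaces one would instead check directly that \({\text {mo}}\,A\) is closed.

```python
# Sketch: Proposition 4.5 as code.  in_cl(x, y) decides whether x lies in cl{y},
# i.e. x <= y in the partial order associated with the finite T_0 topology.
def is_proper_by_convexity(X, in_cl, A):
    A = set(A)
    return all(y in A
               for x in A for z in A for y in X
               if in_cl(x, y) and in_cl(y, z))
```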

4.5 Graded Modules and Chain Complexes

Let R be a fixed ring with unity. Given a set X we denote by R(X) the free module over R spanned by X. Given a graded, finitely generated module \(E=(E_k)_{k\in {\mathbb {Z^+}}}\) over R, we write
$$\begin{aligned} p_E(t):=\sum _{k=0}^{\infty } {\text {rank}}\,(E_k)t^k, \end{aligned}$$
for the Poincaré formal power series of E. We have the following theorem (see [26]).
Theorem 4.6
Assume E, F, G are graded, finitely generated modules and we have an exact sequence
$$\begin{aligned} \cdots \mathop {\rightarrow }\limits ^{ } E_k \mathop {\rightarrow }\limits ^{ } F_k \mathop {\rightarrow }\limits ^{ } G_k \mathop {\rightarrow }\limits ^{\gamma _k} E_{k-1} \mathop {\rightarrow }\limits ^{ } \cdots . \end{aligned}$$
Then
$$\begin{aligned} p_E(t)+p_G(t)=p_F(t)+(1+t)Q(t), \end{aligned}$$
where
$$\begin{aligned} Q(t):=\sum _{k=0}^{\infty } {\text {rank}}\,({\text {im}}\,\gamma _{k+1})t^k \end{aligned}$$
is a polynomial with non-negative coefficients. Moreover, if \(F=E\oplus G\), then \(Q=0\). \(\square \)

5 Multivector Fields and Multivector Dynamics

In this section we define Lefschetz complexes and introduce the concepts of the combinatorial multivector and the combinatorial vector field on a Lefschetz complex. Given a combinatorial multivector field, we associate with it a graph and a multivalued map allowing us to study its dynamics. We also prove a crucial theorem about acyclic combinatorial multivector fields.

5.1 Lefschetz Complexes

The following definition goes back to S. Lefschetz (see [19, Chpt. III, Sec. 1, Def. 1.1]).
Definition 5.1
We say that \((X,\kappa )\) is a Lefschetz complex if \(X=(X_q)_{q\in {\mathbb {Z}^+}}\) is a finite set with gradation and \(\kappa : X \times X \rightarrow R\) is a map such that \(\kappa (x,y)\ne 0 \) implies \(x\in X_q\), \(y\in X_{q-1}\) for some q, and for any \(x,z\in X\) we have
$$\begin{aligned} \sum _{y\in X}\kappa (x,y)\kappa (y,z)=0. \end{aligned}$$
(3)
We refer to the elements of X as cells and to \(\kappa (x,y)\) as the incidence coefficient of x, y.
The family of cells of a simplicial complex [15, Definition 11.8] and the family of elementary cubes of a cubical set [15, Definition 2.9] provide simple but important examples of Lefschetz complexes. In these two cases the respective formulas for the incidence coefficients are explicit and elementary (see [23]). Also a general regular cellular complex (regular finite CW complex, see [20, Section IX.3]) is an example of a Lefschetz complex. In this case the incidence coefficients may be obtained from a system of equations (see [20, Section IX.5]).
The Lefschetz complex \((X,\kappa )\) is called regular if for any \(x,y\in X\) the incidence coefficient \(\kappa (x,y)\) is either zero or invertible in R. One easily verifies that condition (3) implies that we have a free chain complex \((R(X),\mathbf{\partial }^\kappa )\) with \(\mathbf{\partial }^\kappa :R(X)\rightarrow R(X)\) defined on generators by \(\mathbf{\partial }^\kappa (x) := \sum _{y\in X}\kappa (x,y)y\). The Lefschetz homology of \((X,\kappa )\), denoted \(H^\kappa (X)\), is the homology of this chain complex. By a zero space we mean a Lefschetz complex whose Lefschetz homology is zero. Since X is finite, \((R(X),\mathbf{\partial }^\kappa )\) is finitely generated. In consequence, the Poincaré formal power series \(p_{H^{\kappa }(X)}(t)\) is a polynomial. We denote it briefly by \(p_X(t)\).
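The following sketch (ours, not code from the paper) verifies condition (3) for a given incidence map and computes the Betti numbers of \(H^\kappa (X)\) by exact rank computations over the rationals; the example complex is a single edge together with its two vertices.

```python
# Sketch: check condition (3) and compute Betti numbers of a small Lefschetz complex.
from sympy import Matrix

cells = {"A": 0, "B": 0, "AB": 1}              # cell -> dimension
kappa = {("AB", "A"): -1, ("AB", "B"): 1}      # incidence coefficients

def k(x, y):
    return kappa.get((x, y), 0)

# Condition (3): sum_y kappa(x, y) * kappa(y, z) = 0 for all x, z.
assert all(sum(k(x, y) * k(y, z) for y in cells) == 0
           for x in cells for z in cells)

def boundary_rank(q):
    """Rank of the boundary map sending q-cells to (q-1)-cells."""
    cols = [c for c in cells if cells[c] == q]
    rows = [c for c in cells if cells[c] == q - 1]
    if not cols or not rows:
        return 0
    return Matrix([[k(x, y) for x in cols] for y in rows]).rank()

top = max(cells.values())
betti = [sum(1 for c in cells if cells[c] == q) - boundary_rank(q) - boundary_rank(q + 1)
         for q in range(top + 1)]
print(betti)    # [1, 0]: the closed edge has the homology of a point
```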
Given \(x,y\in X\) we say that y is a facet of x and write \(y\prec _{\kappa } x\) if \(\kappa (x,y)\ne 0\). It is easily seen that the relation \(\prec _{\kappa }\) extends uniquely to a minimal partial order. We denote this partial order by \(\le _\kappa \) and the associated strict order by \(<_\kappa \). We say that y is a face of x if \(y\le _\kappa x\). The \(T_0\) topology defined via Theorem 4.3 by the partial order \(\le _\kappa \) will be called the Lefschetz topology of \((X,\kappa )\). Observe that the closure of a set \(A\subseteq X\) in this topology consists of all faces of all cells in A. The Lefschetz complex via its Lefschetz topology is related to the abstract cell complex in the sense of [17] and [29, Section III]. In principle, the definitions in the sequel using Lefschetz topology could be restated in terms of the partial order \(\le _\kappa \). We prefer to use Lefschetz topology to emphasize that several definitions, in particular the definition of an index pair, are analogous to their counterparts in the classical Conley theory.
Proposition 5.2
If \(X=\{a\}\) is a singleton, then \(H^\kappa (X)\cong R(X)\ne 0\). If \(X=\{a,b\}\) and \(\kappa (b,a)\) is invertible, then \(H^\kappa (X)=0\).
Proof
If \(X=\{a\}\), then \(\mathbf{\partial }^{\kappa }\) is zero. If \(X=\{a,b\}\) and \(\kappa (b,a)\) is invertible, then the only nonzero component of \(\mathbf{\partial }^{\kappa }\) is an isomorphism. \(\square \)
Proposition 5.2 shows that a Lefschetz complex consisting of just two cells may have zero Lefschetz homology. At the same time the singular homology of this two point space with Lefschetz topology is nonzero, because the singular homology of a non-empty space is never zero. Thus, the singular homology H(X) of a Lefschetz complex \((X,\kappa )\) considered as a topological space with its Lefschetz topology need not be the same as the Lefschetz homology \(H^\kappa (X)\). Some situations when the two homologies are isomorphic may be deduced from the results in [2]. However, this is irrelevant from the point of view of the needs of this paper.
A set \(A\subseteq X\) is a \(\kappa \)-subcomplex of X if \((A,\kappa _{|A\times A})\) is a Lefschetz complex. Lefschetz complexes, under the name of S-complexes, are discussed in [23]. In particular, the following proposition follows from the observation that a proper subset of a Lefschetz complex X satisfies the assumptions of [23, Theorem 3.1].
Proposition 5.3
Every proper \(A\subseteq X\) is a \(\kappa \)-subcomplex of X. In particular, open and closed subsets of X are \(\kappa \)-subcomplexes of X.
Note that the fact that A is a \(\kappa \)-subcomplex of X does not guarantee that \((R(A),\mathbf{\partial }^\kappa _{|R(A)})\) is a chain subcomplex of \((R(X),\mathbf{\partial }^\kappa )\). However, we have the following theorem (see [23, Theorem 3.5]).
Theorem 5.4
Assume A is closed in X. Then \((R(A),\mathbf{\partial }^\kappa _{|R(A)})\) is a chain subcomplex of \((R(X),\mathbf{\partial }^\kappa )\). Moreover, the homomorphisms \(\mathbf{\partial }^{\kappa _{|A\times A}}:R(A)\rightarrow R(A)\) and \(\mathbf{\partial }^\kappa _{|R(A)}:R(A)\rightarrow R(A)\) coincide. In particular, the homology of the quotient chain complex \((R(X)/R(A),[\mathbf{\partial }^\kappa ])\), denoted \(H^\kappa (X,A)\), is well defined and isomorphic to \(H^\kappa (X \setminus A)\).
The following proposition is straightforward to verify.
Proposition 5.5
Assume \(X=X_1\cup X_2\), where \(X_1\) and \(X_2\) are disjoint, closed subsets of X. Then \(X_1\) and \(X_2\) are \(\kappa \)-subcomplexes and \(H^\kappa (X)=H^\kappa (X_1)\oplus H^\kappa (X_2)\).
We also need the following theorem, which follows from [23, Theorems 3.3 and 3.4].
Theorem 5.6
Assume \(X'\subseteq X\) is closed in X and \(X'':=X \setminus X'\). Then there is a long exact sequence of homology modules
$$\begin{aligned} \cdots \mathop {\rightarrow }\limits ^{ } H^\kappa _i(X') \mathop {\rightarrow }\limits ^{ } H^\kappa _i(X) \mathop {\rightarrow }\limits ^{ } H^\kappa _i(X'') \mathop {\rightarrow }\limits ^{ } H^\kappa _{i-1}(X') \mathop {\rightarrow }\limits ^{ } \cdots . \end{aligned}$$
(4)

5.2 Multivectors

Let \((X,\kappa )\) be a fixed Lefschetz complex.
Definition 5.7
A combinatorial multivector or briefly a multivector is a proper subset \(V\subseteq X\) admitting a unique maximal element with respect to the partial order \(\le _\kappa \). We call this element the dominant cell of V and denote it \(V^\star \).
Note that we do not require the existence of a unique minimal element in a multivector but if such an element exists, we denote it by \(V_\star \). Multivectors admitting a unique minimal element are studied in [13] in the context of equivariant discrete Morse theory. A concept similar to our multivector appears also in [32].
Proposition 5.8
For a multivector V we have \(V\ne \varnothing \) and \({\text {cl}}\,V={\text {cl}}\,V^\star \).
A multivector V is regular if V is a zero space. Otherwise it is called critical. A combinatorial multivector V is a combinatorial vector or briefly a vector if \({\text {card}}\,V\le 2\). A vector always has a unique minimal element.
Proposition 5.9
Assume X is a regular Lefschetz complex and let \(V\subseteq X\) be a vector. Then \({\text {card}}\,V=1\) if and only if V is critical and \({\text {card}}\,V=2\) if and only if V is regular. Moreover, if \({\text {card}}\,V=2\), then \(V_\star \prec _{\kappa } V^\star \).
Proof
If V is a singleton, then by Proposition 5.2 we have \(H^\kappa (V)\ne 0\); hence, V is critical. If \({\text {card}}\,V=2\), then \(V_\star \ne V^\star \). First, we will show that \(V_\star \prec _{\kappa } V^\star \). Indeed, if not, then \(V_\star<_{\kappa } x <_{\kappa } V^\star \) for some \(x\in X\). But then \(x\in {\text {mo}}\,V\) and \(V_\star \in {\text {cl}}\,{\text {mo}}\,V \setminus {\text {mo}}\,V\), which contradicts the assumption that V is proper. Thus, \(V_\star \) is a facet of \(V^\star \), \(\kappa (V^\star ,V_\star )\ne 0\) and by the assumed regularity of X it is invertible. Therefore, again by Proposition 5.2, we have \(H^\kappa (V)=0\). It follows that V is regular. \(\square \)

5.3 Multivector Fields

The following definition introduces the main new concept of this paper.
Definition 5.10
A combinatorial multivector field on X, or briefly a multivector field, is a partition \(\mathcal {V}\) of X into multivectors. A combinatorial vector field on X, or briefly a vector field, is a combinatorial multivector field each of whose multivectors is a vector.
Proposition 5.9 implies that our concept of a vector field on the Lefschetz complex of a cellular complex is in one-to-one correspondence with Forman’s combinatorial vector field (see [11]). It also corresponds to the concept of partial matching [18, Definition 11.22]. Thus, the combinatorial multivector field is a generalization of the earlier definitions in which vectors were used instead of multivectors.
For each cell \(x\in X\) we denote by \([x]_{\mathcal {V}}\) the unique multivector in \(\mathcal {V}\) to which x belongs. If the multivector field \(\mathcal {V}\) is clear from the context, we write briefly \([x]:=[x]_{\mathcal {V}}\) and \(x^\star :=[x]_{\mathcal {V}}^\star \). We refer to a cell x as dominant with respect to \(\mathcal {V}\), or briefly as dominant, if \(x^\star =x\).
The map which sends x to \(x^\star \) determines the combinatorial multivector field. More precisely, we have the following theorem.
Theorem 5.11
The map \(\theta :X\ni x\mapsto x^\star \in X\) has the following properties
(i)
For each \(x\in X\) we have \(x\in {\text {cl}}\,\theta (x)\),
 
(ii)
\(\theta ^2=\theta \),
 
(iii)
For each \(y\in {\text {im}}\,\theta \) if \(x\in \theta ^{-1}(y)\), then \({\text {opn}}\,x\cap {\text {cl}}\,y\subseteq \theta ^{-1}(y)\).
 
Conversely, if a map \(\theta :X\rightarrow X\) satisfies properties (i)–(iii), then
$$\begin{aligned} \mathcal {V}_\theta :=\{\,\theta ^{-1}(y)\mid y\in {\text {im}}\,\theta \,\} \end{aligned}$$
is a combinatorial multivector field on X.
Proof
Properties (i)–(iii) of \(\theta :X\ni x\mapsto x^\star \in X\) follow immediately from the definition of a multivector. To prove the converse assertion assume \(\theta :X\rightarrow X\) satisfies properties (i)–(iii). Obviously \(\mathcal {V}_\theta \) is a partition. To see that each element of \(\mathcal {V}_\theta \) is a multivector take \(y\in {\text {im}}\,\theta \). Then by (i) \(\theta ^{-1}(y)\subseteq {\text {cl}}\,y\) and by (iii) \(\theta ^{-1}(y)\) is open in \({\text {cl}}\,y\). This proves that \(\theta ^{-1}(y)\) is proper. By (ii) the unique maximal element in \(\theta ^{-1}(y)\) is y. Therefore, \(\theta ^{-1}(y)\) is a multivector. \(\square \)
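Theorem 5.11 translates directly into a test. The sketch below (ours, with hypothetical helper names) checks conditions (i)–(iii) for a map `theta` given as a dictionary, assuming routines `cl` and `opn` returning \({\text {cl}}\,x\) and \({\text {opn}}\,x\) for a single cell, and recovers the partition \(\mathcal {V}_\theta \) as the family of fibres of `theta`.

```python
# Sketch: conditions (i)-(iii) of Theorem 5.11 and the partition V_theta.
def fibres(X, theta):
    result = {}
    for x in X:
        result.setdefault(theta[x], set()).add(x)
    return result

def defines_multivector_field(X, theta, cl, opn):
    fib = fibres(X, theta)
    return (all(x in cl(theta[x]) for x in X)                      # (i)
            and all(theta[theta[x]] == theta[x] for x in X)        # (ii)
            and all(opn(x) & cl(y) <= fib[y]                       # (iii)
                    for y in fib for x in fib[y]))

def multivector_field_of(X, theta):
    """V_theta = { theta^{-1}(y) : y in im theta }."""
    return list(fibres(X, theta).values())
```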

5.4 The Graph and Multivalued Map of a Multivector Field

Given a combinatorial multivector field \(\mathcal {V}\) on X we associate with it the graph \(G_\mathcal {V}\) with vertices in X and an arrow from x to y if one of the following conditions is satisfied
$$\begin{aligned}&x\ne y=x^\star \text { (an } \textit{up-arrow}), \end{aligned}$$
(5)
$$\begin{aligned}&x=x^\star \text { and } y\in {\text {cl}}\,x \setminus [x]_{\mathcal {V}} \text { (a } \textit{down-arrow}), \end{aligned}$$
(6)
$$\begin{aligned}&x=x^\star =y \text { and } [y] \text { is critical (a } \textit{loop}). \end{aligned}$$
(7)
We write \(y \prec _{\mathcal {V}} x\) if there is an arrow from x to y in \(G_\mathcal {V}\). This lets us interpret \(\prec _{\mathcal {V}}\) as a relation in X. We denote by \(\le _{\mathcal {V}}\) the preorder induced by \(\prec _{\mathcal {V}}\). In order to study the dynamics of \(\mathcal {V}\), we interpret \(\prec _{\mathcal {V}}\) as a multivalued map \(\Pi _{\mathcal {V}}: X{\,\overrightarrow{\rightarrow }\,}X\), which sends a cell x to the set of cells covered by x in \(\prec _{\mathcal {V}}\), that is
$$\begin{aligned} \Pi _\mathcal {V}(x):=\{\,y\in X\mid y\prec _{\mathcal {V}} x\,\}. \end{aligned}$$
(8)
We say that a cell \(x\in X\) is critical with respect to \(\mathcal {V}\) if the multivector \([x]_{\mathcal {V}}\) is critical and x is dominant in \([x]_{\mathcal {V}}\). A cell is regular if it is not critical. We denote by \(\langle x \rangle _{\mathcal {V}}\) the set of regular cells in \([x]_\mathcal {V}\). It is straightforward to observe that
$$\begin{aligned} \langle x \rangle _{\mathcal {V}}={\left\{ \begin{array}{ll} {[x]_{\mathcal {V}}} &{}\quad \text {if}\,\, {[x]_{\mathcal {V}}}\,\, \text {is regular,}\\ {[x]_{\mathcal {V}}} \setminus \{x^\star \} &{}\quad \text {otherwise.} \end{array}\right. } \end{aligned}$$
(9)
As an immediate consequence of the definition (8) and formula (9) we get the following proposition.
Proposition 5.12
For each \(x\in X\) we have
$$\begin{aligned} \Pi _\mathcal {V}(x)={\left\{ \begin{array}{ll} \{x^\star \} &{} \quad \mathrm{if}\,\, x\ne x^\star ,\\ {\text {cl}}\,x \setminus \langle x \rangle _{\mathcal {V}} &{} \quad \mathrm{otherwise.} \end{array}\right. } \end{aligned}$$
We extend the relation \(\le _{\mathcal {V}}\) to multivectors \(V,W\in \mathcal {V}\) by assuming that \(V\le _{\mathcal {V}}W\) if and only if \(V^\star \le _{\mathcal {V}}W^\star \).
Proposition 5.13
If \(\le _{\mathcal {V}}\) is a partial order on X, then the extension of \(\le _{\mathcal {V}}\) to multivectors is a partial order on \(\mathcal {V}\).

5.5 Acyclic Multivector Fields

We say that \(\mathcal {V}\) is acyclic if \(\le _{\mathcal {V}}\) is a partial order on X.
Theorem 5.14
Assume X admits an acyclic multivector field all of whose multivectors are regular. Then X is a zero space.
Proof
Let \(\mathcal {V}\) be an acyclic multivector field on X all of whose multivectors are regular. We will proceed by induction on \({\text {card}}\,\mathcal {V}\). If \({\text {card}}\,\mathcal {V}=0\), that is if X is empty, the conclusion is obvious. Assume \({\text {card}}\,\mathcal {V}>0\). By Proposition 5.13 we know that \(\le _{\mathcal {V}}\) is a partial order on \(\mathcal {V}\). Let V be a maximal element of \(\mathcal {V}\) with respect to \(\le _{\mathcal {V}}\). We claim that \(X':=X \setminus V\) is closed in X. To prove the claim, assume the contrary. Then there exists an \(x\in {\text {cl}}\,X'\cap V\). Let \(y\in X'\) be such that \(x\in {\text {cl}}\,y\subseteq {\text {cl}}\,y^\star \). Since \(x\in V\) and \(y\not \in V\), we have \([x]_{\mathcal {V}}\ne [y]_{\mathcal {V}}=[y^\star ]_{\mathcal {V}}\). It follows that \(x\le _{\mathcal {V}}y^\star \) and consequently \(V=[x]_{\mathcal {V}}\le _{\mathcal {V}}[y^\star ]_{\mathcal {V}}\). Hence, V is not maximal, because \(V\ne [y^\star ]_{\mathcal {V}}\), a contradiction proving that \(X'\) is closed. In particular \(X'\) is proper. Obviously, \(\mathcal {V}':=\mathcal {V}\setminus \{V\}\) is an acyclic multivector field on \(X'\) all of whose multivectors are regular. Thus, by the induction assumption, \(X'\) is a zero space. Since V is also a zero space, it follows from Theorem 5.6 applied to the pair \((X,X')\) that X is a zero space. \(\square \)

6 Solutions and Invariant Sets

In this section we first define the solution of a combinatorial multivector field, an analogue of a solution of an ordinary differential equation. Then, we use it to define the fundamental concept of the invariant set of a combinatorial multivector field.

6.1 Solutions

A partial map \(\varphi :\mathbb {Z}{\nrightarrow }X\) is a solution of \( \mathcal {V}\) if \({\text {dom}}\,\varphi \) is an interval in \(\mathbb {Z}\) and \( \varphi (i+1)\in \Pi _\mathcal {V}(\varphi (i)) \text { for } i,i+1\in {\text {dom}}\,\varphi . \) A solution \(\varphi \) is in \(A\subseteq X\) if \({\text {im}}\,\varphi \subseteq A\). We call \(\varphi \) a full (respectively forward or backward) solution if \({\text {dom}}\,\varphi \) is \(\mathbb {Z}\) (respectively \(\mathbb {Z}^{\ge n}\) or \(\mathbb {Z}^{\le n}\) for some \(n\in \mathbb {Z}\)). We say that \(\varphi \) is a solution through \(x\in X\) if \(x\in {\text {im}}\,\varphi \). We denote the set of full (respectively forward, backward) solutions in A through x by \({\text {Sol}}\,(x,A)\) (respectively \({\text {Sol}}\,^+(x,A)\), \({\text {Sol}}\,^-(x,A)\)). We drop A in this notation if A is the whole Lefschetz complex X. As an immediate consequence of Proposition 5.12 we get the following proposition.
Proposition 6.1
If \(\varphi \) is a solution of \(\mathcal {V}\) and \(i,i+1\in {\text {dom}}\,\varphi \) then either \(\varphi (i)^\star =\varphi (i)\) or \(\varphi (i)^\star =\varphi (i+1)\).
Given \(n\in \mathbb {Z}\), let \( \tau _n:\mathbb {Z}\ni i\mapsto i+n\in \mathbb {Z}\) denote the n-translation map. Let \(\varphi \) be a solution in A such that \(n\in \mathbb {Z}\) is the right endpoint of \({\text {dom}}\,\varphi \) and let \(\psi \) be a solution in A such that \(m\in \mathbb {Z}\) is the left endpoint of \({\text {dom}}\,\psi \). We define \( \varphi \cdot \psi :\tau _n^{-1}({\text {dom}}\,\varphi )\cup \tau _{m-1}^{-1}({\text {dom}}\,\psi )\rightarrow A, \) the concatenation of \(\varphi \) and \(\psi \) by
$$\begin{aligned} (\varphi \cdot \psi )(i):= {\left\{ \begin{array}{ll} \varphi (i+n) &{}\quad \text {if}\,\, i\le 0\\ \psi (i+m-1) &{}\quad \text {if}\,\, i>0. \end{array}\right. } \end{aligned}$$
It is straightforward to observe that if \(\psi (m)\in \Pi _\mathcal {V}(\varphi (n))\) then the concatenation \(\varphi \cdot \psi \) is also a solution in A.
Let \(\varphi \in {\text {Sol}}\,^+(x,A)\) and let \({\text {dom}}\,\varphi =\mathbb {Z}^{\ge n}\) for some \(n\in \mathbb {Z}\). We define \(\sigma _+\varphi :\mathbb {Z}^{\ge n}\rightarrow A\), the right shift of \(\varphi \), by \(\sigma _+\varphi (i):=\varphi (i+1)\). Let \(\psi \in {\text {Sol}}\,^-(x,A)\) and let \({\text {dom}}\,\psi =\mathbb {Z}^{\le n}\) for some \(n\in \mathbb {Z}\). We define \(\sigma _-\psi :\mathbb {Z}^{\le n}\rightarrow A\), the left shift of \(\psi \), by \(\sigma _-\psi (i):=\psi (i-1)\). It is easily seen that for \(k\in \mathbb {Z}^+\) we have \(\sigma _+^k\varphi \in {\text {Sol}}\,^+(\varphi (n+k),A)\) and \(\sigma _-^k\psi \in {\text {Sol}}\,^-(\psi (n-k),A)\). For a full solution \(\varphi \) we denote respectively by \(\varphi ^+\), \(\varphi ^-\) the restrictions \(\varphi _{|{\mathbb {Z}^+}}\), \(\varphi _{|{\mathbb {Z}^-}}\).
Obviously, if \(\varphi \) is a solution in A, then \(\varphi \circ \tau _n\) is also a solution in A. We say that solutions \(\varphi \) and \(\varphi '\) are equivalent if \(\varphi '=\varphi \circ \tau _n\) or \(\varphi =\varphi '\circ \tau _n\) for some \(n\in \mathbb {Z}\). It is straightforward to verify that this is indeed an equivalence relation. It preserves forward, backward and full solutions through x. Moreover, it is not difficult to verify that the concatenation extends to an associative operation on equivalence classes of solutions. In the sequel we identify solutions in the same equivalence class. This allows us to treat solutions as finite or infinite words over the alphabet consisting of the cells of X. In the sequel, whenever we pick a representative of a forward (backward) solution, we assume its domain is \(\mathbb {Z}^+\) (respectively \(\mathbb {Z}^-\)).

6.2 Paths

A solution \(\varphi \) such that \({\text {dom}}\,\varphi \) is a finite interval is called a path joining the value of \(\varphi \) at the left end of the domain with the value at the right end. The cardinality of \({\text {dom}}\,\varphi \) is called the length of the path. In the special case when \(\varphi \) has length one we identify \(\varphi \) with its unique value. We also admit the trivial path of length zero (empty set). In particular, it acts as the neutral element of concatenation. For every \(x\in X\) there is a unique path joining x with \(x^\star \), denoted \(\nu (x)\) and given by
$$\begin{aligned} \nu (x):={\left\{ \begin{array}{ll} x &{}\quad \text {if }\,\,x=x^\star ,\\ x\cdot x^\star &{}\quad \text {otherwise.} \end{array}\right. } \end{aligned}$$
Note that the concatenation \(x\cdot x^\star \) is a solution, because \(x^\star \in \Pi _\mathcal {V}(x)\). We also define
$$\begin{aligned} \nu ^-(x):={\left\{ \begin{array}{ll} \varnothing &{}\quad \text {if} \,\,x=x^\star ,\\ x &{}\quad \text {otherwise.} \end{array}\right. } \end{aligned}$$
Note that \(\nu (x)=\nu ^-(x)\cdot x^\star \).

6.3 \(\mathcal {V}\)-Compatibility

We say that \(A\subseteq X\) is \( \mathcal {V}\)-compatible if \(x\in A\) implies \([x]_{\mathcal {V}}\subseteq A\) for \(x\in X\). We denote by \([A]_\mathcal {V}^-\) the maximal \(\mathcal {V}\)-compatible subset of A and by \([A]_\mathcal {V}^+\) the minimal \(\mathcal {V}\)-compatible superset of A. The following proposition is straightforward.
Proposition 6.2
For any \(A,B\subseteq X\) we have
(i)
\([A]_\mathcal {V}^- \subseteq A \subseteq [A]_\mathcal {V}^+\),
 
(ii)
If AB are \(\mathcal {V}\)-compatible, then also \(A\cap B\) and \(A\cup B\) are \(\mathcal {V}\)-compatible,
 
(iii)
\([A]_\mathcal {V}^-\) and \([A]_\mathcal {V}^+\) are \(\mathcal {V}\)-compatible,
 
(iv)
\(A\subseteq B\) implies \([A]_\mathcal {V}^\pm \subseteq [B]_\mathcal {V}^\pm \).
 
Observe that if \(A\subseteq X\) is a \(\mathcal {V}\)-compatible \(\kappa \)-subcomplex of X then \(\mathcal {V}':=\{\,V\in \mathcal {V}\mid V\subseteq A\,\}\) is a multivector field on A. We call it the restriction of \(\mathcal {V}\) to A and denote it \(\mathcal {V}_{|A}\).
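Both operations reduce to one pass over the partition; a sketch (ours), with the multivector field given as a list of sets of cells:

```python
# Sketch: the maximal V-compatible subset [A]^- and the minimal V-compatible superset [A]^+ of A.
def core_and_hull(partition, A):
    A = set(A)
    core = set().union(*[V for V in partition if V <= A])   # [A]^-: union of multivectors inside A
    hull = set().union(*[V for V in partition if V & A])    # [A]^+: union of multivectors meeting A
    return core, hull

partition = [{"A", "AB"}, {"B"}, {"BC", "C"}]
print(core_and_hull(partition, {"A", "AB", "B", "BC"}))
# ({'A', 'AB', 'B'}, {'A', 'AB', 'B', 'BC', 'C'})
```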

6.4 Invariant Parts and Invariant Sets

We define the invariant part of A, the positive invariant part of A and the negative invariant part of A, respectively, by
$$\begin{aligned} {\text {Inv}}\,A:= & {} \{\,x\in A\mid {\text {Sol}}\,(x^\star ,[A]_\mathcal {V}^-)\ne \varnothing \,\},\\ {\text {Inv}}\,^+ A:= & {} \{\,x\in A\mid {\text {Sol}}\,^+(x^\star ,[A]_\mathcal {V}^-)\ne \varnothing \,\},\\ {\text {Inv}}\,^- A:= & {} \{\,x\in A\mid {\text {Sol}}\,^-(x^\star ,[A]_\mathcal {V}^-)\ne \varnothing \,\}. \end{aligned}$$
Note that replacing \(x^\star \) by x, or \([A]_\mathcal {V}^-\) by A, in the definition of the invariant part of A may yield a different set. Indeed, consider the set \(\{AB,AC,ABC,B,BC\}\) marked in black in Fig. 11. Its invariant part is \(\{AB,AC,ABC,B\}\). However, replacing \(x^\star \) by x in the definition of the invariant part gives \(\{ABC,B\}\), and replacing \([A]_\mathcal {V}^-\) by A gives \(\{AB,AC,ABC,B,BC\}\).
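Since X is finite, the invariant part is computable by a simple pruning procedure: a cell \(x\in A\) belongs to \({\text {Inv}}\,A\) precisely when \(x^\star \) lies on a bi-infinite walk of the digraph of \(\Pi _\mathcal {V}\) restricted to \([A]_\mathcal {V}^-\). A minimal Python sketch, reusing the hypothetical dictionaries Pi, mv and star from the earlier sketches (these names are assumptions, not the paper's code):

def inv_part(A, Pi, mv, star):
    A = set(A)
    B = {x for x in A if mv[x] <= A}            # [A]^-_V, the maximal V-compatible subset of A
    W = set(B)
    changed = True
    while changed:
        changed = False
        for x in list(W):
            succ = Pi[x] & W
            pred = {y for y in W if x in Pi[y]}
            if not succ or not pred:            # x cannot be continued forward or backward in W
                W.discard(x)
                changed = True
    # The surviving cells are exactly those lying on some full solution in [A]^-_V.
    return {x for x in A if star[x] in W}

The pruning loop removes cells with no successor or no predecessor inside the surviving set, so what remains is the largest subgraph in which every cell can be extended indefinitely in both time directions.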
Proposition 6.3
For any \(A\subseteq X\) we have
$$\begin{aligned}{}[{\text {Inv}}\,A]_{\mathcal {V}}^-= & {} {\text {Inv}}\,A,\end{aligned}$$
(10)
$$\begin{aligned} {\text {Inv}}\,A\subseteq & {} A,\end{aligned}$$
(11)
$$\begin{aligned} A\subseteq B&\;\Rightarrow \;&{\text {Inv}}\,A\subseteq {\text {Inv}}\,B,\end{aligned}$$
(12)
$$\begin{aligned} {\text {Inv}}\,A= & {} {\text {Inv}}\,^- A \cap {\text {Inv}}\,^+ A. \end{aligned}$$
(13)
$$\begin{aligned} {\text {Inv}}\,A= & {} {\text {Inv}}\,{\text {Inv}}\,A. \end{aligned}$$
(14)
Proof
Equations (10), (11), (12), (13) and the right-to-left inclusion in (14) are straightforward. To prove the left-to-right inclusion in (14) take \(x\in {\text {Inv}}\,A\) and let \(\varphi \in {\text {Sol}}\,(x^\star ,[{\text {Inv}}\,A]_{\mathcal {V}}^-)\). Fix an \(i\in \mathbb {Z}\). Obviously,
$$\begin{aligned} \varphi \in {\text {Sol}}\,(\varphi (i),[A]_{\mathcal {V}}^-)\cap {\text {Sol}}\,(\varphi (i+1),[A]_{\mathcal {V}}^-). \end{aligned}$$
Hence, by Proposition 6.1 we have \(\varphi \in {\text {Sol}}\,(\varphi (i)^\star ,[A]_{\mathcal {V}}^-)\), which means that \(\varphi (i)\in {\text {Inv}}\,A\). It follows from (10) that \(\varphi (i)\in [{\text {Inv}}\,A]_{\mathcal {V}}^-\) and, since \(i\in \mathbb {Z}\) is arbitrary, we conclude that \(\varphi \in {\text {Sol}}\,(x^\star ,[{\text {Inv}}\,A]_{\mathcal {V}}^-)\ne \varnothing \). Therefore, \(x\in {\text {Inv}}\,{\text {Inv}}\,A\). \(\square \)
We say that A is invariant with respect to \(\mathcal {V}\) if \({\text {Inv}}\,A=A\).
Proposition 6.4
Assume \(A\subseteq X\) is invariant. Then A is \(\mathcal {V}\)-compatible. Moreover, for any \(x\in A\) we have \({\text {Sol}}\,^+(x,[A]_{\mathcal {V}}^-)\ne \varnothing \) and for any dominant \(x\in A\) we have \({\text {Sol}}\,^-(x,[A]_{\mathcal {V}}^-)\ne \varnothing \).
Proof
It follows from (10) that the invariant part of any set is \(\mathcal {V}\)-compatible. In particular, an invariant set, as the invariant part of itself, is \(\mathcal {V}\)-compatible. Let \(x\in A\) and \(\varphi \in {\text {Sol}}\,(x^\star ,[A]_{\mathcal {V}}^-)\). Then \(\nu ^-(x)\cdot \varphi ^+\in {\text {Sol}}\,^+(x,[A]_{\mathcal {V}}^-)\). Obviously, \(\varphi ^-\in {\text {Sol}}\,^-(x,[A]_{\mathcal {V}}^-)\) when \(x=x^\star \). \(\square \)

7 Isolated Invariant Sets and the Conley Index

In this section we introduce the concept of an isolated invariant set of a combinatorial multivector field and its homological invariant, the Conley index. Both are analogues of the classical concepts for flows [9]. From now on we fix a combinatorial multivector field \(\mathcal {V}\) on a Lefschetz complex X and we assume that X is invariant with respect to \(\mathcal {V}\).

7.1 Isolated Invariant Sets

Assume \(A\subseteq X\) is invariant. We say that a path \(\varphi \) from \(x\in A\) to \(y\in A\) is an internal tangency of A, if \({\text {im}}\,\varphi \subseteq {\text {cl}}\,A\) and \({\text {im}}\,\varphi \cap {\text {mo}}\,A\ne \varnothing \). We say that \(S\subseteq X\) is an isolated invariant set if it is invariant and admits no internal tangencies.
Theorem 7.1
Let \(S\subseteq X\) be invariant. Then S is an isolated invariant set if and only if S is proper.
Proof
Assume \(S\subseteq X\) is an isolated invariant set. By Proposition 6.4 it is \(\mathcal {V}\)-compatible. Assume to the contrary that S is not proper. Then there exists an \(x\in {\text {cl}}\,{\text {mo}}\,S \setminus {\text {mo}}\,S\). Hence, \(x\in {\text {cl}}\,z\) for a \(z\in {\text {mo}}\,S\) and \(z\in {\text {cl}}\,y\) for a \(y\in S\). In particular, \(x\in S\) and \(z\not \in S\). It follows from the \(\mathcal {V}\)-compatibility of S that \([x]\ne [z]\ne [y]\). Hence, \(x\in \Pi _{\mathcal {V}}(z)\) and \(z\in \Pi _{\mathcal {V}}(y)\). Thus, \(y\cdot z\cdot x\) is an internal tangency, a contradiction proving that S is proper.
To prove the opposite implication, take \(S\subseteq X\) which is invariant and proper and assume to the contrary that \(\varphi \) is an internal tangency of S. Then the values \(x,y\) of \(\varphi \) at the endpoints of \({\text {dom}}\,\varphi \) belong to S and there is a \(k\in {\text {dom}}\,\varphi \) such that \(\varphi (k)\in {\text {mo}}\,S\). Thus, we can choose an \(m\in {\text {dom}}\,\varphi \) satisfying \(\varphi (m)\not \in S\) and \(\varphi (m+1)\in S\). In particular, \(\varphi (m)\in {\text {mo}}\,S\) and \(\varphi (m+1)\not \in {\text {mo}}\,S\). Proposition 6.1 and the \(\mathcal {V}\)-compatibility of S imply that \(\varphi (m)^\star =\varphi (m)\). It follows that \(\varphi (m+1)\in {\text {cl}}\,\varphi (m)\subseteq {\text {cl}}\,{\text {mo}}\,S\). However, this is a contradiction, because S is proper. \(\square \)
Proposition 7.2
Every critical multivector is an isolated invariant set.
Proof
Let V be a critical multivector of \(\mathcal {V}\). Observe that V is invariant, because \([V]_{\mathcal {V}}^-=V\) and \(\mathbb {Z}\ni n\mapsto V^\star \in V\) is a full solution. Since V, as a multivector, is proper, the conclusion follows from Theorem 7.1. \(\square \)

7.2 Index Pairs

A pair \(P=(P_1, P_2)\) of closed subsets of X such that \(P_2\subseteq P_1\) is an index pair for S if the following three conditions are satisfied
$$\begin{aligned}&P_1\cap \Pi _{\mathcal {V}}(P_2)\subseteq P_2, \end{aligned}$$
(15)
$$\begin{aligned}&P_1\cap \Pi _{\mathcal {V}}^{-1}(X \setminus P_1)\subseteq P_2, \end{aligned}$$
(16)
$$\begin{aligned}&S={\text {Inv}}\,( P_1 \setminus P_2). \end{aligned}$$
(17)
We say that the index pair P is saturated if \(P_1 \setminus P_2=S\).
A sample isolated invariant set together with an index pair is presented in Fig. 12.
Proposition 7.3
If \((P_1,P_2)\) is an index pair and \(x\in P_1 \setminus P_2\) then \(x^\star \in P_1 \setminus P_2\).
Proof
Assume to the contrary that \(x^\star \not \in P_1 \setminus P_2\). Then either \(x^\star \not \in P_1\) or \(x^\star \in P_2\). Since \(x^\star \in \Pi _{\mathcal {V}}(x)\), the first case contradicts (15). Since \(x\in {\text {cl}}\,x^\star \), the second case contradicts the fact that \(P_2\) is closed. \(\square \)
Proposition 7.4
For any index pair \((P_1,P_2)\) the set \(P_1 \setminus P_2\) is \(\mathcal {V}\)-compatible.
Proof
Assume to the contrary that \(x\in P_1 \setminus P_2\) but \([x]\not \subseteq P_1 \setminus P_2\). By Proposition 7.3 we may assume without loss of generality that \(x=x^\star \). Then \(y\not \in P_1 \setminus P_2\) for some \(y\in [x^\star ]\). Since \(y\in {\text {cl}}\,x^\star \subseteq P_1\), we see that \(y\in P_2\). But \(x^\star \in \Pi _{\mathcal {V}}(y)\), therefore we get from (15) that \(x^\star \in P_2\), a contradiction. \(\square \)
The following proposition follows immediately from the definition of index pair.
Proposition 7.5
Assume \(P=(P_1,P_2)\) is an index pair, \(x\in P_1\) and \(\varphi \in {\text {Sol}}\,^+(x)\). Then either \(\varphi \in {\text {Sol}}\,^+(x,P_1)\) or \(\varphi (i)\in P_2\) for some \(i\in \mathbb {Z}^+\).
Given an index pair P, consider the set
$$\begin{aligned} \hat{P}:=\{\,x\in P_1\mid \forall _{ \varphi \in {\text {Sol}}\,^+(x)}\, \exists _{ i\in {\text {dom}}\,\varphi }\;\varphi (i)\in P_2\,\} \end{aligned}$$
of cells in \(P_1\) whose every forward solution intersects \(P_2\).
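The set \(\hat{P}\) is computable by pruning: every forward solution from x meets \(P_2\) if and only if x admits no infinite forward walk in the digraph of \(\Pi _\mathcal {V}\) avoiding \(P_2\). A minimal Python sketch under the illustrative conventions of the earlier sketches (the dictionary Pi and the cell collection X are assumptions, not the paper's code):

def p_hat(P1, P2, Pi, X):
    # Cells admitting an infinite forward walk that avoids P2: repeatedly discard,
    # inside X \ P2, every cell without a successor in the surviving set.
    S = set(X) - set(P2)
    changed = True
    while changed:
        changed = False
        for x in list(S):
            if not (Pi[x] & S):
                S.discard(x)
                changed = True
    # x belongs to \hat P iff x lies in P1 and has no P2-avoiding infinite forward walk.
    return {x for x in P1 if x not in S}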
Proposition 7.6
If \(y\in \hat{P} \setminus P_2\), then \(y^\star \in \hat{P}\).
Proof
If \(y=y^\star \), then the conclusion is obvious. Therefore, we may assume that \(y\ne y^\star \). Then \(\Pi _{\mathcal {V}}(y)=\{y^\star \}\) and since \(y\not \in P_2\) we get from (16) that \(y^\star \in P_1\). Let \(\varphi \in {\text {Sol}}\,^+(y^\star )\). Then \(\varphi ':=y\cdot \varphi \in {\text {Sol}}\,^+(y)\). The assumption \(y\in \hat{P}\) implies that \(\varphi '(i)\in P_2\) for some \(i\in \mathbb {Z}^+\). Since \(y\not \in P_2\) we have \(i>0\). It follows that \(\varphi (i-1)=\varphi '(i)\in P_2\) and consequently \(y^\star \in \hat{P}\). \(\square \)
In the following lemma, given an index pair P we first shrink \(P_1\) and then grow \(P_2\) to saturate P.
Lemma 7.7
If P is an index pair for an isolated invariant set S, then \( P^*:=(S\cup \hat{P}, P_2) \) is an index pair for S and \( P^{**}:=(S\cup \hat{P}, \hat{P}) \) is a saturated index pair for S.
Proof
First, we will show that the sets \(\hat{P}\) and \(S\cup \hat{P}\) are closed. Take \(x\in {\text {cl}}\,\hat{P}\) and assume \(x\not \in \hat{P}\). Since \(\hat{P}\subseteq P_1\) and \(P_1\) is closed, we have \(x\in P_1\). Choose \(y\in \hat{P}\) such that \(x\in {\text {cl}}\,y\). If \(y\in P_2\), then \(x\in P_2\subseteq \hat{P}\). Hence, assume that \(y\not \in P_2\). Then, by Proposition 7.6, \(y^\star \in \hat{P}\). Moreover, \(x\in {\text {cl}}\,y\subseteq {\text {cl}}\,y^\star \), that is \(x\le _\kappa y\le _\kappa y^\star \). Since \(x\not \in \hat{P}\), there exists a \(\varphi \in {\text {Sol}}\,^+(x,X \setminus P_2)\). But \(x\in P_1\); hence, it follows from Proposition 7.5 that \(\varphi \in {\text {Sol}}\,^+(x,P_1 \setminus P_2)\). Note that \(x\ne y^\star \), because \(x\not \in \hat{P}\) while \(y^\star \in \hat{P}\). Thus, we cannot have \([x]=[y^\star ]\), because otherwise \(\varphi (1)=y^\star \) and \(\sigma _+\varphi \in {\text {Sol}}\,^+(y^\star ,P_1 \setminus P_2)\), which contradicts \(y^\star \in \hat{P}\). Hence, we have \([x]\ne [y^\star ]\). Then \(x\in \Pi _{\mathcal {V}}(y^\star )\) and \(y^\star \cdot \varphi \in {\text {Sol}}\,^+(y^\star ,P_1 \setminus P_2)\), which again contradicts \(y^\star \in \hat{P}\). It follows that \(x\in \hat{P}\). Therefore, \(\hat{P}\) is closed.
To show that \(S\cup \hat{P}\) is closed it is enough to prove that
$$\begin{aligned} {\text {cl}}\,S \setminus S\subseteq \hat{P}. \end{aligned}$$
(18)
Assume the contrary. Then there exists an \(x\in {\text {cl}}\,S \setminus S\) such that \(x\not \in \hat{P}\). Let \(y\in S\) be such that \(x\in {\text {cl}}\,y\). Without loss of generality we may assume that \(y=y^\star \). By the \(\mathcal {V}\)-compatibility of S we have \(x\not \in [y^\star ]\). Thus, \(x\in \Pi _{\mathcal {V}}(y^\star )\). Since \(y^\star \in S\subseteq {\text {Inv}}\,^-(P_1 \setminus P_2)\), we can take \(\varphi \in {\text {Sol}}\,^-(y^\star ,P_1 \setminus P_2)\). Since \(x\not \in \hat{P}\) and \(x\in {\text {cl}}\,S\subseteq P_1\), we may take \(\psi \in {\text {Sol}}\,^+(x,P_1 \setminus P_2)\). Then \(\varphi \cdot \psi \in {\text {Sol}}\,(x,P_1 \setminus P_2)\). By Proposition 7.4 the set \(P_1 \setminus P_2\) is \(\mathcal {V}\)-compatible. Therefore, \(x\in {\text {Inv}}\,(P_1 \setminus P_2)=S\), a contradiction again. It follows that \(S\cup \hat{P}\) is also closed.
In turn, we will show that \(P^*\) satisfies properties (15)–(17). Property (15) follows immediately from the same property of index pair P, because \(S\cup \hat{P}\subseteq P_1\). To see (16) take \(x\in S\cup \hat{P}\) and assume there exists a \(y\in \Pi _{\mathcal {V}}(x)\setminus (S\cup \hat{P})\). We will show first that \(x\not \in S\). Assume to the contrary that \(x\in S\). It cannot be \(y=x^\star \), because then \(y\in S\) by the \(\mathcal {V}\)-compatibility of S. Hence, \(y\in {\text {cl}}\,x\). It follows that \(y\in {\text {cl}}\,S \setminus S\) and by (18) \(y\in \hat{P}\), a contradiction which proves that \(x\not \in S\) and consequently \(x\in \hat{P}\). Since \(y\not \in \hat{P}\), we can choose a solution \(\varphi \in {\text {Sol}}\,^+(y,X \setminus P_2)\). It follows that \(x\cdot \varphi \in {\text {Sol}}\,^+(x)\). But, \(x\in \hat{P}\), therefore there exists an \(i\in \mathbb {Z}^+\) such that \((x\cdot \varphi )(i)\in P_2\). It must be \(i=0\), because for \(i>0\) we have \((x\cdot \varphi )(i)=\varphi (i-1)\not \in P_2\). Thus, \(x=(x\cdot \varphi )(0)\in P_2\), which proves (16).
Now, since \(S\subseteq S\cup \hat{P}\subseteq P_1\) and \(S\cap P_2=\varnothing \), we have
$$\begin{aligned} S={\text {Inv}}\,S\subseteq {\text {Inv}}\,(S\cup \hat{P}\setminus P_2)\subseteq {\text {Inv}}\,(P_1 \setminus P_2)=S. \end{aligned}$$
This proves (17) for \(P^*\).
We still need to prove that \(P^{**}\) satisfies properties (15)–(17). Let \(x\in \hat{P}\) and \(y\in \Pi _{\mathcal {V}}(x)\cap (S\cup \hat{P})\). Then \(y\in P_1\). Let \(\varphi \in {\text {Sol}}\,^+(y)\). Then \(x\cdot \varphi \in {\text {Sol}}\,^+(x)\). Since \(x\in \hat{P}\), it follows that \((x\cdot \varphi )(i)\in P_2\) for some \(i\in \mathbb {Z}^+\). If \(i>0\), then \(\varphi (i-1)=(x\cdot \varphi )(i)\in P_2\) which implies \(y\in \hat{P}\). If \(i=0\), then \(x\in P_2\) and by (15) applied to index pair P we get \(y\in P_2\subseteq \hat{P}\). This proves (15) for \(P^{**}\). Consider in turn \(x\in S\cup \hat{P}\) and assume there exists a \(y\in \Pi _{\mathcal {V}}(x)\setminus (S\cup \hat{P})\). If \(x\in S\) then the \(\mathcal {V}\)-compatibility of S excludes \(y=x^\star \). Therefore, we have \(y\in {\text {cl}}\,x\subseteq {\text {cl}}\,S\) and by (18) \(y\in \hat{P}\), a contradiction. This shows that \(x\in \hat{P}\) and proves (16) for \(P^{**}\). Now, observe that \(S\cap \hat{P}=\varnothing \) or equivalently \((S\cup \hat{P})\setminus \hat{P}=S\), which proves (17) and the saturation property for \(P^{**}\). \(\square \)

7.3 Semi-equal Index Pairs

We write \(P\subseteq Q\) for index pairs P, Q meaning \(P_i\subseteq Q_i\) for \(i=1,2\). We say that index pairs P, Q of S are semi-equal if \(P\subseteq Q\) and either \(P_1=Q_1\) or \(P_2=Q_2\). For semi-equal pairs P, Q, we write
$$\begin{aligned} A(P,Q):={\left\{ \begin{array}{ll} Q_1\setminus P_1 &{}\quad \text { if }P_2=Q_2,\\ Q_2\setminus P_2 &{}\quad \text { if }P_1=Q_1. \end{array}\right. } \end{aligned}$$
Lemma 7.8
If \(P\subseteq Q\) are semi-equal index pairs of S, then \(A(P,Q)\) is \(\mathcal {V}\)-compatible.
Proof
Assume first that \(P_1=Q_1\). Let \(V\in \mathcal {V}\) and let \(x\in V\). It is enough to show that \(x\in Q_2\setminus P_2\) if and only if \(x^\star \in Q_2\setminus P_2\). Let \(x\in Q_2\setminus P_2\). Since \(x^\star \in \Pi _{\mathcal {V}}(x)\), we get from (16) applied to P that \(x^\star \in P_1=Q_1\). Thus, by applying (15) to Q we get \(x^\star \in Q_2\). Since \(x\in {\text {cl}}\,x^\star \), we get \(x^\star \not \in P_2\), because otherwise \(x\in P_2\). Hence, we proved that \(x\in Q_2\setminus P_2\) implies \(x^\star \in Q_2\setminus P_2\). Let now \(x^\star \in Q_2\setminus P_2\). Then \(x\in {\text {cl}}\,x^\star \subseteq Q_2\). Since \(x^\star \in Q_2\subseteq Q_1=P_1\) and \(x^\star \not \in P_2\), we get from (15) applied to P that \(x\not \in P_2\). Hence, we also proved that \(x^\star \in Q_2\setminus P_2\) implies \(x\in Q_2\setminus P_2\).
Consider in turn the case \(P_2=Q_2\). Again, it is enough to show that \(x\in Q_1\setminus P_1\) if and only if \(x^\star \in Q_1\setminus P_1\). Let \(x\in Q_1\setminus P_1\). Then \(x\not \in P_2=Q_2\) and by (16) we get \(x^\star \in Q_1\). Also \(x^\star \not \in P_1\), because otherwise \(x\in {\text {cl}}\,x^\star \subseteq P_1\). Hence, \(x\in Q_1\setminus P_1\) implies \(x^\star \in Q_1\setminus P_1\). Finally, let \(x^\star \in Q_1\setminus P_1\). We have \(x\in {\text {cl}}\,x^\star \subseteq Q_1\). We cannot have \(x\in P_1\). Indeed, since \(x^\star \not \in P_1\), the assumption \(x\in P_1\) implies by (16) that \(x\in P_2=Q_2\). But then, by (15), \(x^\star \in Q_2=P_2\subseteq P_1\), a contradiction. Thus, \(x^\star \in Q_1\setminus P_1\) implies \(x\in Q_1\setminus P_1\). \(\square \)
Lemma 7.9
Assume \(A\subseteq X\) is proper, \(\mathcal {V}\)-compatible and \({\text {Inv}}\,A=\varnothing \). Then A is a zero space.
Proof
First we will show that \(\mathcal {V}_{|A}\) is acyclic. Assume to the contrary that the preorder \(\le _\mathcal {V}\) is not a partial order. Then we can construct a cycle \( x_n \prec _{\mathcal {V}} x_{n-1} \prec _{\mathcal {V}} \cdots \prec _{\mathcal {V}} x_0=x_n \) in A for some \(n\ge 1\). It follows that \( \varphi :\mathbb {Z}\ni i\mapsto x_{i \bmod {n}}\in A \) is a solution in A. In consequence, \({\text {Inv}}\,A\ne \varnothing \), a contradiction. Thus, \(\mathcal {V}':=\mathcal {V}_{|A}\) is acyclic. Also, if \(V\subseteq A\) is a critical multivector, then \(\psi :\mathbb {Z}\ni i\mapsto V^\star \in A\) is a solution in A, again contradicting \({\text {Inv}}\,A=\varnothing \). Thus, every multivector in \(\mathcal {V}'\) is regular. The assertion now follows from Theorem 5.14. \(\square \)
Lemma 7.10
If \(P\subseteq Q\) are semi-equal index pairs of S, then \(H^\kappa (P_1 \setminus P_2)\) and \(H^\kappa (Q_1 \setminus Q_2)\) are isomorphic.
Proof
Assume \(P\subseteq Q\). If \(P_2=Q_2\), then \(A(P,Q)=Q_1\setminus P_1\). Since \(Q_1\) is closed, it is proper. Hence, since \(A(P,Q)\) is open in \(Q_1\), it is also proper. Similarly we prove that \(A(P,Q)\) is proper if \(P_1=Q_1\).
Thus, \(A(P,Q)\) is a Lefschetz complex and it follows from Lemma 7.8 that the restriction \(\mathcal {V}':=\mathcal {V}_{|A(P,Q)}\) is a multivector field on \(A(P,Q)\).
Observe that
$$\begin{aligned} S\cap (Q_1\setminus P_1)\subseteq & {} P_1\cap (X\setminus P_1)=\varnothing ,\\ S\cap (Q_2\setminus P_2)\subseteq & {} (X\setminus Q_2)\cap Q_2=\varnothing . \end{aligned}$$
Hence, \(S\cap A(P,Q)=\varnothing \) and \({\text {Inv}}\,A(P,Q)\subseteq X\setminus S\). We also have
$$\begin{aligned} P_1=Q_1&\;\Rightarrow \;&{\text {Inv}}\,(Q_2\setminus P_2)\subseteq {\text {Inv}}\,(P_1\setminus P_2)=S,\\ P_2=Q_2&\;\Rightarrow \;&{\text {Inv}}\,(Q_1\setminus P_1)\subseteq {\text {Inv}}\,(Q_1\setminus Q_2)=S. \end{aligned}$$
Therefore, \( {\text {Inv}}\,A(P,Q)\subseteq S\cap (X\setminus S)=\varnothing . \) Thus, \(\mathcal {V}'\) is acyclic and from Lemma 7.9 we conclude that \(A(P,Q)\) is a zero space.
It is an elementary computation to check the following two observations. If \(P_2=Q_2\), then \(P_1\setminus P_2\subseteq Q_1\setminus Q_2\), \( Q_1\setminus Q_2=A(P,Q)\cup (P_1\setminus P_2) \) and \(P_1\setminus P_2=P_1\cap (Q_1\setminus Q_2)\) is closed in \(Q_1\setminus Q_2\). If \(P_1=Q_1\), then \(Q_1\setminus Q_2\subseteq P_1\setminus P_2\), \( P_1\setminus P_2=A(P,Q)\cup (Q_1\setminus Q_2) \) and \(A(P,Q)=Q_2\setminus P_2=Q_2\cap (P_1\setminus P_2)\) is closed in \(P_1\setminus P_2\). Hence, by Theorem 5.6 applied to the pair \((Q_1\setminus Q_2,P_1\setminus P_2)\) in the case \(P_2=Q_2\) and \((P_1\setminus P_2,Q_2\setminus P_2)\) in the case \(P_1=Q_1\) and the fact that \(A(P,Q)\) is a zero space we conclude that \(H^\kappa (Q_1\setminus Q_2)\) and \(H^\kappa (P_1\setminus P_2)\) are isomorphic. \(\square \)

7.4 Conley Index

The following theorem, an analogue of a classical result in the Conley index theory, allows us to define the homology Conley index of an isolated invariant set S as \(H^\kappa (P_1,P_2)=H^\kappa (P_1\setminus P_2)\) for any index pair P of S. We denote it \({\text {Con}}\,(S)\).
Theorem 7.11
Given an isolated invariant set S, the pair \(({\text {cl}}\,S,{\text {mo}}\,S)\) is a saturated index pair for S. If P and Q are index pairs for S, then \(H^\kappa (P_1\setminus P_2)\) and \(H^\kappa (Q_1\setminus Q_2)\) are isomorphic.
Proof
Obviously \({\text {cl}}\,S\) is closed and \({\text {mo}}\,S\) is closed by Theorem 7.1. Thus, to show that \(({\text {cl}}\,S,{\text {mo}}\,S)\) is an index pair we need to prove properties (15)–(17) of the definition of index pair. Let \(x\in {\text {mo}}\,S\) and \(y\in \Pi _{\mathcal {V}}(x)\cap {\text {cl}}\,S\). Assume \(y\not \in {\text {mo}}\,S\). Then \(y\in S\) and the \(\mathcal {V}\)-compatibility of S implies that \([x]\ne [y]\). It follows that \(y\in {\text {cl}}\,x\subseteq {\text {cl}}\,{\text {mo}}\,S={\text {mo}}\,S\), a contradiction, which shows that \(y\in {\text {mo}}\,S\) and proves (15).
Consider in turn \(x\in {\text {cl}}\,S\) such that there exists a \(y\in \Pi _{\mathcal {V}}(x)\setminus {\text {cl}}\,S\). Obviously, \(x\ne y\). We must have \([x]=[y]\), because otherwise \(y\in {\text {cl}}\,x\subseteq {\text {cl}}\,S\). Since \(y\not \in S\), the \(\mathcal {V}\)-compatibility of S implies that \([y]\cap S=\varnothing \). But \(x\in [x]=[y]\); therefore \(x\not \in S\) and consequently \(x\in {\text {mo}}\,S\), which proves (16). Obviously, \(S={\text {cl}}\,S\setminus {\text {mo}}\,S\), therefore (17) and the saturation property are also satisfied. This completes the proof of the first part of the theorem.
To prove the remaining assertion first observe that it is obviously satisfied if both pairs are saturated. Thus, it is sufficient to prove the assertion in the case \(Q=P^{**}\), because by Lemma 7.7 the pair \(P^{**}\) is saturated. We obviously have \(P^*\subseteq P\) and \(P^*\subseteq P^{**}\), and in each case the two index pairs are semi-equal. Therefore, both inclusions induce an isomorphism in homology by Lemma 7.10. \(\square \)
In the case of an isolated invariant set S we call the polynomial \(p_S(t)\) the Conley polynomial of S and \(\beta _i(S)\) the ith Conley coefficient of S.
We call the index pair \(({\text {cl}}\,S,{\text {mo}}\,S)\) canonical, because it minimizes both \(P_1\) and \(P_1\setminus P_2\). Indeed, for any index pair P of S we have \({\text {cl}}\,S\subseteq P_1\), because \(P_1\) is closed, and \(S\subseteq P_1\setminus P_2\) by (17). In the case of the canonical index pair both inclusions are equalities. Note that in the case of the classical Conley index there is no natural choice of a canonical index pair. Since \(S={\text {cl}}\,S\setminus {\text {mo}}\,S\), we see that \({\text {Con}}\,(S)\cong H^\kappa (S)\). Thus, in our combinatorial setting Theorem 7.11 is actually not needed to define the Conley index. However, the importance of Theorem 7.11 will become clear in the proof of the Morse inequalities via the following corollary.
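In computations the canonical index pair is therefore the natural choice. A minimal Python sketch, assuming a hypothetical dictionary cl which maps every cell to the set of its faces (including the cell itself); the homology computation itself is delegated to whatever routine for Lefschetz complexes is available and is not shown here.

def closure(S, cl):
    """cl S: the union of the closures of the cells of S."""
    return set().union(*(cl[x] for x in S)) if S else set()

def mouth(S, cl):
    """mo S = cl S minus S."""
    return closure(S, cl) - set(S)

def canonical_index_pair(S, cl):
    """The canonical index pair (cl S, mo S) of an isolated invariant set S."""
    return closure(S, cl), mouth(S, cl)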
Corollary 7.12
If \((P_1,P_2)\) is an index pair of an isolated invariant set S, then
$$\begin{aligned} p_S(t)+p_{P_2}(t)=p_{P_1}(t)+(1+t)q(t), \end{aligned}$$
(19)
where q(t) is a polynomial with non-negative coefficients. Moreover, if
$$\begin{aligned} H^\kappa (P_1)=H^\kappa (P_2)\oplus H^\kappa (S), \end{aligned}$$
then \(q(t)=0\).
Proof
By applying Theorem 5.6 to the pair \(P_2\subseteq P_1\) of closed subsets of X and Theorem 4.6 to the resulting exact sequence we obtain
$$\begin{aligned} p_{P_1\setminus P_2}(t)+p_{P_2}(t)=p_{P_1}(t)+(1+t)q(t) \end{aligned}$$
for some polynomial q with non-negative coefficients. The conclusion follows now from Theorem 7.11 by observing that \(H^\kappa (P_1\setminus P_2)=H^\kappa (S)\). \(\square \)

7.5 Additivity of the Conley Index

Let S be an isolated invariant set and assume \(S_1,S_2\subseteq S\) are also isolated invariant sets. We say that S decomposes into \(S_1\) and \(S_2\) if \(S_1\cap S_2=\varnothing \) and every full solution \(\varrho \) in S is a solution either in \(S_1\) or in \(S_2\).
Theorem 7.13
Assume an isolated invariant set S decomposes into two of its isolated invariant subsets \(S_1\) and \(S_2\). Then \({\text {Con}}\,(S)={\text {Con}}\,(S_1)\oplus {\text {Con}}\,(S_2)\).
Proof
First observe that the assumptions imply that \(S=S_1\cup S_2\). We will show that \(S_1\) is closed in S. Let \(x\in {\text {cl}}\,_S S_1\) and assume \(x\not \in S_1\). Then \(x\in S_2\). Choose \(y\in S_1\) such that \(x\in {\text {cl}}\,_S y\). The \(\mathcal {V}\)-compatibility of \(S_1\) implies that \([x]\ne [y]\); hence, \(x\in \Pi _{\mathcal {V}}(y)\). Select \(\gamma \in {\text {Sol}}\,(y,S_1)\) and \(\varrho \in {\text {Sol}}\,(x,S_2)\). Then \(\varphi :=\gamma ^-\cdot \varrho ^+\) is a solution in S but neither in \(S_1\) nor in \(S_2\), which contradicts the assumption that S decomposes into \(S_1\) and \(S_2\). Hence, \(S_1\) is closed. Similarly we prove that \(S_2\) is closed. The conclusion follows now from Proposition 5.5. \(\square \)

8 Attractors and Repellers

In this section we define attractors and repellers and study attractor–repeller pairs, which are needed to prove the Morse equation and the Morse inequalities.

8.1 Attractors

We say that a \(\mathcal {V}\)-compatible \(N\subseteq X\) is a trapping region if \(\Pi _{\mathcal {V}}(N)\subseteq N\). This is easily seen to be equivalent to the requirement that N is \(\mathcal {V}\)-compatible and for any \(x\in N\)
$$\begin{aligned} {\text {Sol}}\,^+(x,X) = {\text {Sol}}\,^+(x,N). \end{aligned}$$
(20)
We say that A is an attractor if there exists a trapping region N such that \( A={\text {Inv}}\,N\).
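Checking whether a candidate set is a trapping region amounts to two elementary tests, and together with the invariant-part sketch of Sect. 6.4 this yields the attractor \({\text {Inv}}\,N\). A minimal Python sketch under the same illustrative conventions (the dictionaries Pi and mv are assumptions, not the paper's code):

def is_trapping_region(N, Pi, mv):
    """N is a trapping region iff it is V-compatible and Pi_V(N) is contained in N."""
    N = set(N)
    v_compatible = all(mv[x] <= N for x in N)
    forward_invariant = all(Pi[x] <= N for x in N)
    return v_compatible and forward_invariant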
Theorem 8.1
The following conditions are equivalent:
(i)
A is an attractor,
 
(ii)
A is invariant and closed,
 
(iii)
A is isolated invariant and closed,
 
(iv)
A is isolated invariant, closed and a trapping region.
 
Proof
Assume (i). Let N be a trapping region such that \(A={\text {Inv}}\,N\). By (14) we have \({\text {Inv}}\,A={\text {Inv}}\,{\text {Inv}}\,N={\text {Inv}}\,N=A\); hence, A is invariant. To see that A is closed take \(x\in {\text {cl}}\,A\) and \(y\in A\) such that \(x\in {\text {cl}}\,y\subseteq {\text {cl}}\,y^\star \). Note that \(y^\star \in A\), because by Proposition 6.4 the set A, as invariant, is \(\mathcal {V}\)-compatible. If \([x]=[y]\), then for the same reason \(x\in A\). Hence, assume \([x]\ne [y]=[y^\star ]\). Then \(x\in \Pi _{\mathcal {V}}(y^\star )\subseteq \Pi _{\mathcal {V}}(N)\subseteq N\). From the \(\mathcal {V}\)-compatibility of N we get that also \(x^\star \in N\). Let \(\varphi \in {\text {Sol}}\,^+(x^\star )\). Since N is a trapping region, we see that \(\varphi \in {\text {Sol}}\,^+(x^\star ,N)\). It follows that \(x\in {\text {Inv}}\,^+N\). Since \(y^\star \in A={\text {Inv}}\,N\subseteq {\text {Inv}}\,^-N\), we can take \(\psi \in {\text {Sol}}\,^-(y^\star ,N)\). Then \(\psi \cdot \nu (x)\in {\text {Sol}}\,^-(x^\star ,N)\). This shows that \(x\in {\text {Inv}}\,^-N\). By (13) we have \(x\in {\text {Inv}}\,N=A\). This proves that A is closed and shows that (i) implies (ii). If (ii) is satisfied, then A, as closed, is proper. Hence, we get from Theorem 7.1 that (ii) implies (iii). Assume in turn (iii). Let \(x\in A\) and let \(y\in \Pi _{\mathcal {V}}(x)\). If \([y]=[x]\), then \(y\in A\) by the \(\mathcal {V}\)-compatibility of A. If \([y]\ne [x]\), then \(x=x^\star \), \(y\in {\text {cl}}\,x^\star \subseteq {\text {cl}}\,A=A\). Hence \(\Pi _{\mathcal {V}}(A)\subseteq A\), which proves (iv). Finally, observe that (i) follows immediately from (iv). \(\square \)

8.2 Repellers

We say that a \(\mathcal {V}\)-compatible \(N\subseteq X\) is a backward trapping region if \(\Pi _{\mathcal {V}}^{-1}(N)\subseteq N\). This is easily seen to be equivalent to the requirement that N is \(\mathcal {V}\)-compatible and for any \(x\in N\)
$$\begin{aligned} {\text {Sol}}\,^-(x,X) = {\text {Sol}}\,^-(x,N). \end{aligned}$$
(21)
We say that R is a repeller if there exists a backward trapping region N such that \( R={\text {Inv}}\,N\). The following proposition is straightforward.
Proposition 8.2
A subset \(N\subseteq X\) is a trapping region if and only if \(X\setminus N\) is a backward trapping region.
Theorem 8.3
The following conditions are equivalent:
(i)
R is a repeller,
 
(ii)
R is invariant and open,
 
(iii)
R is isolated invariant and open,
 
(iv)
R is isolated invariant, open and a backward trapping region.
 
Proof
Assume (i). Let N be a backward trapping region such that \(R={\text {Inv}}\,N\). By (14) we have \({\text {Inv}}\,R={\text {Inv}}\,{\text {Inv}}\,N={\text {Inv}}\,N=R\); hence, R is invariant. To see that R is open we will prove that \(X\setminus R\) is closed. For this end take \(x\in {\text {cl}}\,(X\setminus R)\) and \(y\in X\setminus R\) such that \(x\in {\text {cl}}\,y\subseteq {\text {cl}}\,y^\star \). Note that \(y^\star \in X\setminus R\), because the set \(X\setminus R\) is \(\mathcal {V}\)-compatible as a complement of an invariant set which, by Proposition 6.4, is \(\mathcal {V}\)-compatible. If \([x]=[y]\), then for the same reason \(x\in X\setminus R\). Hence, assume \([x]\ne [y]=[y^\star ]\). Then \(x\in \Pi _{\mathcal {V}}(y^\star )\). Let \(\psi \in {\text {Sol}}\,^+(x^\star )\) and \(\varphi \in {\text {Sol}}\,^-(y^\star )\). Then \(\gamma :=\varphi \cdot \nu ^-(x)\cdot \psi \in {\text {Sol}}\,(y^\star )\). Since \(y^\star \not \in R\), there exists an \(i\in \mathbb {Z}\) such that \(\gamma (i)\not \in N\). By Proposition 8.2 we see that \(X\setminus N\) is a trapping region. Hence, \(\gamma (j)\not \in N\) for all \(j\ge i\). In particular, \(\psi (j)\not \in N\) for large j. Since \(\psi \in {\text {Sol}}\,^+(x^\star )\) is arbitrary, it follows that \(x\not \in {\text {Inv}}\,^+N\) and consequently \(x\not \in {\text {Inv}}\,N=R\). Hence, \(X\setminus R\) is closed, that is R is open. This proves (ii). If (ii) is satisfied, then R, as open, is proper. Hence, we get from Theorem 7.1 that (ii) implies (iii). Assume (iii). Let \(x\in R\) and let \(y\in \Pi _{\mathcal {V}}^{-1}(x)\). If \([y]=[x]\), then \(y\in R\) by the \(\mathcal {V}\)-compatibility of R. If \([y]\ne [x]\), then \(y=y^\star \) and \(x\in {\text {cl}}\,y^\star \). It follows from Proposition 4.4 that \(y=y^\star \in {\text {opn}}\,x\subseteq R\), because \(x\in R\) and R is open. Hence, \(\Pi _{\mathcal {V}}^{-1}(R)\subseteq R\), which proves (iv). Finally, observe that (i) follows immediately from (iv). \(\square \)

8.3 Recurrence and Basic Sets

Let \(A\subseteq X\) be \(\mathcal {V}\)-compatible. We write \(x\mathop {\rightarrow }\limits ^{A}_{\mathcal {V}} y\) if there exists a path of \(\mathcal {V}\) in A from \(x^\star \) to \(y^\star \) of length at least two. We write \(x\mathop {\leftrightarrow }\limits ^{A}_{\mathcal {V}} y\) if \(x\mathop {\rightarrow }\limits ^{A}_{\mathcal {V}} y\) and \(y\mathop {\rightarrow }\limits ^{A}_{\mathcal {V}} x\). We drop \(\mathcal {V}\) in this notation if \(\mathcal {V}\) is clear from the context. Also, we drop A if \(A=X\). We say that A is weakly recurrent if for every \(x\in A\) we have \(x\mathop {\leftrightarrow }\limits ^{A} x\). It is strongly recurrent if for any \(x,y\in A\) we have \(x\mathop {\rightarrow }\limits ^{A} y\), or equivalently for any \(x,y\in A\) there is \(x\mathop {\leftrightarrow }\limits ^{A} y\). Obviously, every strongly recurrent set is also weakly recurrent.
The chain recurrent set of X, denoted \({\text {CR}}\,(X)\), is the maximal weakly recurrent subset of X. It is straightforward to verify that
$$\begin{aligned} {\text {CR}}\,(X):=\{\,x\in X\mid x\leftrightarrow _{\mathcal {V}}x\,\}. \end{aligned}$$
Obviously, the relation \(\leftrightarrow _{\mathcal {V}}\) restricted to \({\text {CR}}\,(X)\) is an equivalence relation. By a basic set of \(\mathcal {V}\) we mean an equivalence class of \(\leftrightarrow _{\mathcal {V}}\) restricted to \({\text {CR}}\,(X)\).
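In the finite setting the chain recurrent set and the basic sets are computable by elementary graph reachability: x belongs to \({\text {CR}}\,(X)\) precisely when \(x^\star \) lies on a cycle of the digraph of \(\Pi _\mathcal {V}\), and two chain recurrent cells lie in the same basic set precisely when their dominant cells are mutually reachable by at least one arrow. A minimal Python sketch under the illustrative conventions of the earlier sketches (Pi, star and the cell collection X are assumptions, not the paper's code):

from collections import deque

def reach_from(x, Pi):
    """Cells reachable from x by at least one arrow of Pi_V."""
    seen, queue = set(), deque(Pi[x])
    while queue:
        y = queue.popleft()
        if y not in seen:
            seen.add(y)
            queue.extend(Pi[y])
    return seen

def basic_sets(X, Pi, star):
    reach = {x: reach_from(x, Pi) for x in X}
    cr = {x for x in X if star[x] in reach[star[x]]}            # the chain recurrent set CR(X)
    classes = {}
    for x in cr:
        s = star[x]
        scc = frozenset(t for t in reach[s] if s in reach[t])   # cells mutually reachable with x*
        classes.setdefault(scc, set()).add(x)
    return list(classes.values())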
Theorem 8.4
Every basic set is a strongly recurrent isolated invariant set.
Proof
Let B be a basic set. Obviously, it is \(\mathcal {V}\)-compatible and invariant. By its very definition it is also strongly recurrent. Thus, by Theorem 7.1 we only need to prove that B is proper. For this end we will verify condition (2) of Proposition 4.5. Take \(x,z\in B\), \(y\in X\) and assume \(x\in {\text {cl}}\,y\) and \(y\in {\text {cl}}\,z\). Then also \(x\in {\text {cl}}\,y^\star \) and \(y\in {\text {cl}}\,z^\star \). If \([x]=[y^\star ]\) or \([y]=[z^\star ]\), then \(y\in B\) by the \(\mathcal {V}\)-compatibility of B. Thus, assume \([x]\ne [y^\star ]\) and \([y]\ne [z^\star ]\). It follows that \(x\in \Pi _{\mathcal {V}}(y^\star )\) and \(y\in \Pi _{\mathcal {V}}(z^\star )\) and consequently \(y^\star \mathop {\rightarrow }\limits ^{B} x^\star \) and \(z^\star \mathop {\rightarrow }\limits ^{B} y^\star \). By the definition of the basic set also \(x^\star \mathop {\rightarrow }\limits ^{B} z^\star \). It follows that the basic sets of x, y and z coincide and consequently \(y\in B\). \(\square \)

8.4 Limit Sets

Let \(\varrho :\mathbb {Z}\rightarrow X\) be a full solution. Recall that \([A]_\mathcal {V}^+\) denotes the intersection of \(\mathcal {V}\)-compatible sets containing A. We define the \(\alpha \) and \(\omega \) limit sets of \(\varrho \) by
$$\begin{aligned} \alpha (\varrho ):= & {} \bigcap _{k\ge 0}{\text {Inv}}\,\left[ {\text {im}}\,\sigma _-^k\varrho ^-\right] ^+_{\mathcal {V}},\\ \omega (\varrho ):= & {} \bigcap _{k\ge 0}{\text {Inv}}\,\left[ {\text {im}}\,\sigma _+^k\varrho ^+\right] ^+_{\mathcal {V}}. \end{aligned}$$
Proposition 8.5
The sets \(\alpha (\varrho )\) and \(\omega (\varrho )\) are invariant, \(\mathcal {V}\)-compatible and strongly recurrent. Moreover, \(\alpha (\varrho )\cap {\text {im}}\,\varrho \ne \varnothing \ne \omega (\varrho )\cap {\text {im}}\,\varrho \).
Proof
Let \(\varphi _k:=\sigma _+^k\varrho ^+\) and \(A_k:={\text {Inv}}\,[{\text {im}}\,\varphi _k]^+_{\mathcal {V}}\). Obviously, the sets \(A_k\) are invariant and \(\mathcal {V}\)-compatible. It is straightforward to observe that \(A_{k+1}\subseteq A_{k}\). Hence, since X is finite, there exists a \(p\in \mathbb {Z}^+\) such that \(A_{p}=\bigcap _{k\ge 0} A_k=\omega (\varrho )\). This proves that \(\omega (\varrho )\) is invariant and \(\mathcal {V}\)-compatible. To see that \(\omega (\varrho )\) is strongly recurrent, take \(x,y\in \omega (\varrho )=A_p\). Then \(x^\star ,y^\star \in {\text {im}}\,\varphi _p\cap A_p\). Let \(m,n\ge p\) be such that \(x^\star =\varphi _p(m)\) and \(y^\star =\varphi _p(n)\). If \(m<n\), then \(\varphi _{p|[m,n]}\) is a path in \(A_p\) from \(x^\star \) to \(y^\star \). Thus, assume \(n\le m\) and take \(q>m\). Since \(A_q=A_p\), we see that \(y^\star \in A_q\). It follows that there exists a \(k\ge q\) such that \(y^\star =\varphi _p(k)\). Hence, \(\varphi _{p|[m,k]}\) is a path in \(A_p\) from \(x^\star \) to \(y^\star \). Therefore, \(A_p=\omega (\varrho )\) is strongly recurrent.
To see that \(\omega (\varrho )\cap {\text {im}}\,\varrho \ne \varnothing \) observe that there exist \(m,n\in \mathbb {Z}^+\), \(m<n\), such that \(\varphi _p(m)=\varphi _p(n)\). By Proposition 6.1 we may assume without loss of generality that \(\varphi _p(m)=\varphi _p(m)^\star \). Consider
$$\begin{aligned} \psi :\mathbb {Z}\ni i \mapsto \varphi _p(m+(i\bmod {(n-m)}))\in {\text {im}}\,\varphi _p. \end{aligned}$$
We have \(\psi \in {\text {Sol}}\,(\varphi _p(m)^\star ,[{\text {im}}\,\varphi _p]^+_\mathcal {V})\), therefore \(\varphi _p(m)\in {\text {Inv}}\,[{\text {im}}\,\varphi _p]^+_{\mathcal {V}}=A_p=\omega (\varrho )\). Obviously, \(\varphi _p(m)\in {\text {im}}\,\varrho \), which proves that \(\omega (\varrho )\cap {\text {im}}\,\varrho \ne \varnothing \). The proof concerning \(\alpha (\varrho )\) is analogous.\(\square \)

8.5 Attractor–Repeller Pairs

Since by Theorem 8.1 an attractor is in particular a trapping region, it follows from Proposition 8.2 that if A is an attractor, then \({\text {Inv}}\,(X\setminus A)\) is a repeller. It is called the dual repeller of A and is denoted by \(A^\star \). Similarly, if R is a repeller, then \({\text {Inv}}\,(X\setminus R)\) is an attractor, called the dual attractor of R and denoted by \(R^\star \).
For \(A,A'\subseteq X\) define
$$\begin{aligned} C(A',A):=\{\, x\in X\mid \exists _{\varrho \in {\text {Sol}}\,(x^\star )}\; \alpha (\varrho )\subseteq A',\;\omega (\varrho )\subseteq A\,\}. \end{aligned}$$
Proposition 8.6
Assume A is an attractor. Then there is no heteroclinic connection running from A to \(A^\star \), that is \(C(A,A^\star )=\varnothing \).
The pair \((A,R)\) of subsets of X is said to be an attractor–repeller pair in X if A is an attractor, R is a repeller, \(A=R^\star \) and \(R=A^\star \).
Theorem 8.7
The pair (AR) is an attractor–repeller pair in X if and only if the following four conditions are satisfied:
(i)
A is an attractor and R is a repeller,
 
(ii)
\( A\cap R=\varnothing \),
 
(iii)
For every \(x\not \in A\) and \(\varrho \in {\text {Sol}}\,(x)\) we have \(\alpha (\varrho )\subseteq R\),
 
(iv)
For every \(x\not \in R\) and \(\varrho \in {\text {Sol}}\,(x)\) we have \(\omega (\varrho )\subseteq A\).
 
Proof
First assume that (AR) is an attractor–repeller pair. Then, obviously, (i) is satisfied. Since \(R=A^\star ={\text {Inv}}\,(X\setminus A)\subseteq X\setminus A\), we see that (ii) holds. In order to prove (iii) take \(x\not \in A\) and \(\varrho \in {\text {Sol}}\,(x)\). Since \(X\setminus A\) is a backward trapping region, we see that \({\text {im}}\,\varrho ^-\subseteq X\setminus A\). Moreover, \([{\text {im}}\,\varrho ^-]^+_{\mathcal {V}}\subseteq X\setminus A\), because \(X\setminus A\), as a backwards trapping region, is \(\mathcal {V}\)-compatible. Hence, \(\alpha (\varrho )\subseteq {\text {Inv}}\,[{\text {im}}\,\varrho ^-]^+_{\mathcal {V}}\subseteq {\text {Inv}}\,(X\setminus A)=A^\star =R\). Similarly we prove (iv).
Assume in turn that (i)–(iv) hold. We will show that \(R=A^\star \). Indeed, by (ii) \(R\subseteq X\setminus A\); hence, \(R={\text {Inv}}\,R\subseteq {\text {Inv}}\,(X\setminus A)=A^\star \). To show the opposite inclusion take \(x\in A^\star \) and assume \(x\not \in R\). Let \(\varrho \in {\text {Sol}}\,(x^\star ,X\setminus A)\). By (iv) we have \(\omega (\varrho )\subseteq A\). By Proposition 8.5 we can select a \(y\in \omega (\varrho )\cap {\text {im}}\,\varrho \). It follows that \(y\in A\cap {\text {im}}\,\varrho \). This contradicts \(\varrho \in {\text {Sol}}\,(x^\star ,X\setminus A)\) and proves that \(R=A^\star \). Similarly we prove that \(A=R^\star \). \(\square \)
The following corollary is an immediate consequence of Theorem 8.7.
Corollary 8.8
Assume A is an attractor. Then \((A,A^\star )\) is an attractor–repeller pair. In particular, \(A^{**}=A\).

9 Morse Decompositions, Morse Equation and Morse Inequalities

In this section we define Morse decompositions and we prove the Morse equation and Morse inequalities for Morse decompositions. We recall that the poset-related terminology and notation are introduced in Sect. 4.3.

9.1 Morse Decompositions

Let \(\mathbb {P}\) be a finite set. The collection \(\mathcal {M}=\{\,M_r\mid r\in \mathbb {P}\,\}\) is called a Morse decomposition of X if there exists a partial order \(\le \) on \(\mathbb {P}\) such that the following three conditions are satisfied:
(i)
\(\mathcal {M}\) is a family of mutually disjoint, isolated invariant subsets of X,
 
(ii)
for every full solution \(\varphi \) in X there exist \(r,r'\in \mathbb {P}\), \(r\le r'\), such that \(\alpha (\varphi )\subseteq M_{r'}\), \(\omega (\varphi )\subseteq M_{r}\),
 
(iii)
if for a full solution \(\varphi \) and \(r\in \mathbb {P}\) we have \(\alpha (\varphi )\cup \omega (\varphi )\subseteq M_{r}\), then \({\text {im}}\,\varphi \subseteq M_r\).
 
Note that since the sets in \(\mathcal {M}\) are mutually disjoint, the indices \(r,r'\) in (ii) are determined uniquely. A partial order on \(\mathbb {P}\) which makes \(\mathcal {M}=\{\,M_r\mid r\in \mathbb {P}\,\}\) a Morse decomposition of X is called an admissible order. Observe that the intersection of admissible orders is an admissible order and any extension of an admissible order is an admissible order. In particular, every Morse decomposition has a unique minimal admissible order as well as an admissible linear order.
The following proposition is straightforward.
Proposition 9.1
Let \(\mathcal {M}=\{\,M_r\mid r\in \mathbb {P}\,\}\) be a Morse decomposition of X and let \(r,r'\in \mathbb {P}\). Then
(i)
\(C(M_{r'},M_r)\) is \(\mathcal {V}\)-compatible,
 
(ii)
\(C(M_r,M_r)=M_r\),
 
(iii)
\(r'<r\) implies \(C(M_{r'},M_r)=\varnothing \).
 
Let \(\mathcal {B}\) be the family of all basic sets of \(\mathcal {V}\). For \(B_1,B_2\in \mathcal {B}\) we write \(B_1\le _{\mathcal {V}}B_2\) if there exists a full solution \(\varrho \) such that \(\alpha (\varrho )\subseteq B_2\) and \(\omega (\varrho )\subseteq B_1\). It follows easily from the definition of the basic set that \(\le _{\mathcal {V}}\) is a partial order on \(\mathcal {B}\).
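For the finest Morse decomposition this order is again a reachability computation: a full solution with \(\alpha \)-limit in \(B_2\) and \(\omega \)-limit in \(B_1\) passes through both sets, and conversely a finite path from \(B_2\) to \(B_1\) can be extended to such a solution because basic sets are invariant and strongly recurrent. A minimal Python sketch, reusing the illustrative dictionary Pi and applicable, for instance, to the output of the basic_sets sketch above; it is an illustration only, not the paper's code.

def leq(B1, B2, Pi):
    """B1 <=_V B2: some cell of B1 is reachable from B2 in the digraph of Pi_V."""
    B1, B2 = set(B1), set(B2)
    if B1 & B2:                       # two basic sets are either equal or disjoint
        return True
    frontier, seen = set(B2), set(B2)
    while frontier:
        frontier = set().union(*(Pi[x] for x in frontier)) - seen
        if frontier & B1:
            return True
        seen |= frontier
    return False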
Theorem 9.2
The family \(\mathcal {B}\) of all basic sets of \(\mathcal {V}\) is a Morse decomposition of X.
Proof
Obviously, two different basic sets are always disjoint. By Theorem 8.4 each element of \(\mathcal {B}\) is an isolated invariant set. Thus, condition (i) of the definition of Morse decomposition is satisfied. To prove (ii) consider a full solution \(\varrho \). By Proposition 8.5 both \(\alpha (\varrho )\) and \(\omega (\varrho )\) are strongly recurrent. Thus, each is contained in a basic set, which proves (ii). Assume in turn that both \(\alpha (\varrho )\) and \(\omega (\varrho )\) are contained in the same basic set B. Fix a \(y\in {\text {im}}\,\varrho \) and choose an \(x\in \alpha (\varrho )\) and a \(z\in \omega (\varrho )\). Then \(x\rightarrow _{\mathcal {V}}y\) and \(y\rightarrow _{\mathcal {V}}z\). But also \(z\rightarrow _{\mathcal {V}}x\), because \(x,z\in B\) and B is strongly recurrent. Thus, \(x\leftrightarrow _{\mathcal {V}}y\) and consequently \(y\in B\). This shows that \({\text {im}}\,\varrho \subseteq B\) and proves (iii). Finally, observe that \(\le _{\mathcal {V}}\) is obviously an admissible partial order on \(\mathcal {B}\). Therefore, \(\mathcal {B}\) is a Morse decomposition of X. \(\square \)
Given two Morse decompositions \(\mathcal {M}\), \(\mathcal {M}'\) of X, we write \(\mathcal {M}'\sqsubset \mathcal {M}\) if for each \(M'\in \mathcal {M}'\) there exists an \(M\in \mathcal {M}\) such that \(M'\subseteq M\). Then, we say that \(\mathcal {M}'\) is finer than \(\mathcal {M}\). The relation \(\sqsubset \) is easily seen to be a partial order on the collection of all Morse decompositions of X.
Theorem 9.3
For any Morse decomposition \(\mathcal {M}=\{M_r\mid r\in \mathbb {P}\}\) of X we have \(\mathcal {B}\sqsubset \mathcal {M}\). Thus, the family of basic sets is the unique, finest Morse decomposition of X.
Proof
Let B be a basic set and let \(x\in B\). Since B is invariant, we may choose a \(\varrho \in {\text {Sol}}\,(x^\star ,B)\) and an \(r\in \mathbb {P}\) such that \(\alpha (\varrho )\subseteq M_r\). Since \(\alpha (\varrho )\subseteq B\), we may choose a \(y=y^\star \in {\text {im}}\,\varrho \cap B\cap M_r\). We will show that \(B\subseteq M_r\). To this end take a \(z\in B\). Since B is strongly recurrent, we may construct a path \(\varphi \) from y to y through z. Then \(\gamma :=\sigma _-\varrho ^-\cdot \varphi \cdot \sigma _+\varrho ^+\) is a full solution through z and obviously \(\alpha (\gamma )=\alpha (\varrho )\subseteq M_r\) and \(\omega (\gamma )=\omega (\varrho )\subseteq M_r\). It follows that \(z\in {\text {im}}\,\gamma \subseteq M_r\). This proves our claim. \(\square \)

9.2 Morse Sets

Given \(I\subseteq \mathbb {P}\) define the Morse set of I by
$$\begin{aligned} M(I):=\bigcup _{r,r'\in I} C(M_{r'},M_r). \end{aligned}$$
(22)
Theorem 9.4
The set M(I) is an isolated invariant set.
Proof
To prove that M(I) is invariant take \(x\in M(I)\) and let \(r,r'\in \mathbb {P}\) be such that \(x\in C(M_{r'},M_r)\). Choose \(\varrho \in {\text {Sol}}\,(x^\star )\) such that \(\alpha (\varrho )\subseteq M_{r'}\) and \(\omega (\varrho )\subseteq M_{r}\). Let \(k\in \mathbb {Z}\) and let \(y:=\varrho (k)\). Since either \(y^\star =\varrho (k)\) or \(y^\star =\varrho (k+1)\), we see that \(\varrho \in {\text {Sol}}\,(y^\star )\). It follows that \(y\in C(M_{r'},M_r)\subseteq M(I)\). Therefore, \(\varrho \in {\text {Sol}}\,(x^\star ,M(I))\) and \(x\in {\text {Inv}}\,M(I)\).
By Theorem 7.1 it suffices to prove that M(I) is proper. To this end we will verify condition (2) in Proposition 4.5. Take \(x,z\in M(I)\) and assume \(y\in X\) is such that \(x\in {\text {cl}}\,y\) and \(y\in {\text {cl}}\,z\). If \([x]=[y]\) or \([y]=[z]\), then \(y\in M(I)\), because by Proposition 6.4 the set M(I), being invariant, is \(\mathcal {V}\)-compatible. Hence, consider the case \([x]\ne [y]\) and \([y]\ne [z]\). It follows that \(x\in \Pi _{\mathcal {V}}(y^\star )\) and \(y\in \Pi _{\mathcal {V}}(z^\star )\). Let \(\gamma \in {\text {Sol}}\,(x^\star )\) be such that \(\omega (\gamma )\subseteq M_{r}\) for some \(r\in I\). Similarly, let \(\varrho \in {\text {Sol}}\,(z^\star )\) be such that \(\alpha (\varrho )\subseteq M_{s}\) for some \(s\in I\). Let \(\chi :=\varrho ^-\cdot \nu (y)\cdot \nu ^-(x)\cdot \gamma ^+\). We easily verify that \(\alpha (\chi )=\alpha (\varrho )\subseteq M_s\) and \(\omega (\chi )=\omega (\gamma )\subseteq M_r\). This shows that \(y\in M(I)\). Thus, M(I) is proper. \(\square \)
Theorem 9.5
If I is a lower set in \(\mathbb {P}\), then M(I) is an attractor in X.
Proof
In view of Theorem 8.1 and Theorem 9.4 we only need to prove that M(I) is closed. Hence, take \(x\in {\text {cl}}\,M(I)\). We need to show that \(x\in M(I)\). Let \(y\in M(I)\) be such that \(x\in {\text {cl}}\,y\). If \([x]=[y]\), then \(x\in M(I)\), because M(I), being invariant, is \(\mathcal {V}\)-compatible. Thus, assume \([x]\ne [y]\). Then \(x\in \Pi _{\mathcal {V}}(y^\star )\). Let \(\gamma \in {\text {Sol}}\,(y^\star )\) be such that \(\alpha (\gamma )\subseteq M_{r}\) for some \(r\in I\). Choose also a \(\varrho \in {\text {Sol}}\,(x^\star )\) and let \(s\in \mathbb {P}\) be such that \(\omega (\varrho )\subseteq M_s\). Then \(\chi :=\gamma ^-\cdot \nu ^-(x)\cdot \varrho ^+\in {\text {Sol}}\,(x^\star )\) and \(\alpha (\chi )=\alpha (\gamma )\subseteq M_r\), \(\omega (\chi )=\omega (\varrho )\subseteq M_s\). It follows from the definition of Morse decomposition that \(s\le r\). Since \(r\in I\) and I is a lower set, we get \(s\in I\). This implies that \(x\in M(I)\). \(\square \)
We also have a dual statement. We skip the proof, because it is similar to the proof of Theorem 9.5.
Theorem 9.6
If I is an upper set in \(\mathbb {P}\), then M(I) is a repeller in X.
Proposition 9.7
If N is a trapping region and \(N'\) is a backward trapping region, then \( {\text {Inv}}\,(N\cap N')={\text {Inv}}\,N\cap {\text {Inv}}\,N'. \)
Proof
Obviously, the left-hand side is contained in the right-hand side. To prove the opposite inclusion take \(x\in {\text {Inv}}\,N\cap {\text {Inv}}\,N'\). By the \(\mathcal {V}\)-compatibility of invariant sets also \(x^\star \in {\text {Inv}}\,N\cap {\text {Inv}}\,N'\). Let \(\varrho \in {\text {Sol}}\,(x^\star ,N')\). Since N is a trapping region we have \(\varrho ^+\in {\text {Sol}}\,^+(x^\star ,N)\). Hence, \(x\in {\text {Inv}}\,^+(N\cap N')\). Similarly we prove that \(x\in {\text {Inv}}\,^-(N\cap N')\). It follows that \(x\in {\text {Inv}}\,(N\cap N')\). \(\square \)
For a lower set I in \(\mathbb {P}\) set
$$\begin{aligned} N(I):=X\setminus M(I)^\star . \end{aligned}$$
Observe that by Corollary 8.8 we have
$$\begin{aligned} {\text {Inv}}\,N(I)=M(I). \end{aligned}$$
(23)
Theorem 9.8
If \(I\subseteq \mathbb {P}\) is convex, then \((N(I^{\le }),N(I^{<}))\) is an index pair for M(I).
Proof
The sets \(M(I^{\le })^\star \), \(M(I^{<})^\star \) are repellers. Hence, by Theorem 8.3 they are open. It follows that \(N(I^{\le })\), \(N(I^{<})\) are closed. To verify property (15) take an \(x\in N(I^{<})\), a \(y\in \Pi _{\mathcal {V}}(x)\cap N(I^{\le })\) and assume that \(y\not \in N(I^{<})\). Then \(y\in M(I^{<})^\star \). It follows that \( x\in \Pi _{\mathcal {V}}^{-1}(M(I^{<})^\star )\subseteq M(I^{<})^\star , \) because by Theorem 8.3 the set \(M(I^{<})^\star \), as a repeller, is a backward trapping region. Thus, \(x\not \in N(I^<)\), a contradiction which proves (15). To prove (16) take an \(x\in N(I^{\le })\) such that \(\Pi _{\mathcal {V}}(x)\setminus N(I^{\le })\ne \varnothing \) and assume that \(x\not \in N(I^{<})\). Then \(x\in M(I^{<})^\star \). Let \(y\in \Pi _{\mathcal {V}}(x)\setminus N(I^{\le })\). Then \(y\in \Pi _{\mathcal {V}}(x)\cap M(I^{\le })^\star \). Since \(M(I^{\le })^\star \) is a repeller, it follows that \(x\in M(I^{\le })^\star \). This contradicts \(x\in N(I^{\le })\) and proves (16).
Finally, to prove (17) first observe that
$$\begin{aligned} N(I^{\le })\setminus N(I^{<})=M(I^{<})^\star \setminus M(I^{\le })^\star =M(I^{<})^\star \cap N(I^{\le }). \end{aligned}$$
(24)
Since \(M(I^{<})^\star \) is a backward trapping region and \(N(I^{\le })\) is a forward trapping region, we have by (24) and Proposition 9.7
$$\begin{aligned} {\text {Inv}}\,( N(I^{\le })\setminus N(I^{<}))= & {} {\text {Inv}}\,M(I^{<})^\star \cap {\text {Inv}}\,N(I^{\le })\\= & {} M(I^{<})^\star \cap M(I^{\le })\\= & {} {\text {Inv}}\,M(I^{\le })\cap {\text {Inv}}\,(X\setminus M(I^<))\\= & {} {\text {Inv}}\,( M(I^{\le })\setminus M(I^{<}))=M(I). \end{aligned}$$
\(\square \)
The following corollary is an immediate consequence of Theorem 9.8 and the definition of the lower set.
Corollary 9.9
If I is a lower set (an attracting interval) in \(\mathbb {P}\), then \(I^{\le }=I\), \(I^{<}=\varnothing \), \((N(I),\varnothing )\) is an index pair for M(I), \(H^\kappa (N(I))=H^\kappa (M(I))\) and \(p_{M(I)}(t)=p_{N(I)}(t)\).
Theorem 9.10
Assume \(A\subseteq X\) is an attractor and \(A^\star \) is the dual repeller. Then
$$\begin{aligned} p_A(t)+p_{A^\star }(t)=p_X(t)+(1+t)q(t) \end{aligned}$$
(25)
for a polynomial q(t) with non-negative coefficients. Moreover, if \(q\ne 0\), then \(C(A^\star ,A)\ne \varnothing \).
Proof
Take \(\mathbb {P}:=\{1,2\}\), \(M_2:=A^\star \), \(M_1:=A\). Then \(\mathcal {M}:=\{M_1,M_2\}\) is a Morse decomposition of X. Take \(I:=\{2\}\). By Proposition 4.2 we have \(I^\le =\{1,2\}\), \(I^<=\{1\}\). It follows that \(M(I^\le )=X\), \(M(I^<)=M(\{1\})=A\), \(N(I^\le )=X\setminus X^\star =X\). Thus, we get from Corollary 9.9 that
$$\begin{aligned} p_{N(I^\le )}(t)=p_X(t), \quad p_{N(I^<)}(t)=p_{M(I^<)}(t)=p_A(t). \end{aligned}$$
(26)
By Theorem 9.8 the pair \((N(I^\le ),N(I^<))\) is an index pair for \(M(I)=A^\star \). Thus, by substituting \(P_1:=N(I^\le )\), \(P_2:=N(I^<)\), \(S:=A^\star \) in (19) in Corollary 7.12 we get (25) from (26). By Proposition 8.6 we have \(C(A,A^\star )=\varnothing \). If also \(C(A^\star ,A)=\varnothing \), then X decomposes into A, \(A^\star \) and by Theorem 7.13 we get
$$\begin{aligned} H^\kappa (P_1)={\text {Con}}\,(X)={\text {Con}}\,(A)\oplus {\text {Con}}\,(A^\star )=H^\kappa (P_2)\oplus H^\kappa (A^\star ). \end{aligned}$$
Thus, \(q=0\) by Corollary 7.12. It follows that \(q\ne 0\) implies \(C(A^\star ,A)\ne \varnothing \). \(\square \)

9.3 Morse Equation

We begin with the observation that an isolated invariant set S, being a proper and \(\mathcal {V}\)-compatible subset of X, may itself be considered a Lefschetz complex with the combinatorial multivector field given by the restriction \(\mathcal {V}_{|S}\). In particular, it makes sense to consider attractors, repellers and Morse decompositions of S with respect to \(\mathcal {V}_{|S}\), and the results of the preceding sections apply.
Theorem 9.11
Assume \(\mathbb {P}=\{\,1,2,\ldots , n\,\}\) is ordered by the linear order of natural numbers. Let \(\mathcal {M}:=\{\, M_p\mid p\in \mathbb {P}\,\}\) be a Morse decomposition of an isolated invariant set S and let \(A_i:=M(\{i\}^\le )\) and \(A_0:=\varnothing \). Then \((A_{i-1},M_i)\) is an attractor–repeller pair in \(A_i\). Moreover,
$$\begin{aligned} \sum _{i=1}^np_{M_i}(t)=p_S(t)+(1+t)\sum _{i=1}^nq_i(t) \end{aligned}$$
(27)
for some polynomials \(q_i(t)\) with non-negative coefficients and such that \(q_i(t)\ne 0\) implies \(C(M_i,A_{i-1})\ne \varnothing \) for \(i=2,3,\ldots , n\).
Proof
Fix an \(i\in \mathbb {P}\). Obviously, \(A_{i-1}\subseteq A_i\). The set \(A_{i-1}\), as an attractor in S, is closed in S. Thus, it is closed in \(A_i\). It follows that \(A_{i-1}\) is an attractor in \(A_i\). The verification that \(M_i\) is the dual repeller of \(A_{i-1}\) in \(A_i\) is straightforward. By applying Theorem 9.10 to the attractor–repeller pair \((A_{i-1},M_i)\) in \(A_i\) we get
$$\begin{aligned} p_{M_{i}}(t) + p_{A_{i-1}}(t) = p_{A_{i}}(t) + (1+t)q_{i}(t) \end{aligned}$$
(28)
for a polynomial \(q_{i}\) with non-negative coefficients. Since \(p_{A_{0}}(t)=0\) and \(A_{n}=S\), summing (28) over \(i\in \mathbb {P}\) and substituting \(q:=\sum _{i=1}^n q_{i},\) we get (27). The rest of the assertion follows from Theorem 9.10. \(\square \)

9.4 Morse Inequalities

Recall that for a proper \(S\subseteq X\) we write \(\beta _k(S):={\text {rank}}\,H^\kappa _k(S)\).
Theorem 9.12
For a Morse decomposition \(\mathcal {M}\) of an isolated invariant set S define
$$\begin{aligned} m_k(\mathcal {M}):=\sum _{r\in {\mathbb {P}}} \beta _k(M_r). \end{aligned}$$
Then for any \(k\in \mathbb {Z}^+\) we have the following inequalities
(i)
(the strong Morse inequalities)
$$\begin{aligned} m_k(\mathcal {M})-m_{k-1}(\mathcal {M})+\cdots \pm m_0(\mathcal {M}) \ge \beta _k(S)-\beta _{k-1}(S)+\cdots \pm \beta _0(S), \end{aligned}$$
 
(ii)
(the weak Morse inequalities)
$$\begin{aligned} m_k(\mathcal {M}) \ge \beta _k(S). \end{aligned}$$
 
Proof
Since \(\mathcal {M}\) is a Morse decomposition of S also with respect to a linear extension of an admissible order, by a suitable renaming of the elements of \(\mathbb {P}\) we may assume that \(\mathbb {P}=\{1,2,\ldots , n\}\) with the order given by the order of integers. Thus, we can apply Theorem 9.11. Multiplying equation (27) by the formal power series
$$\begin{aligned} (1+t)^{-1}=1-t+t^2-t^3+\cdots \end{aligned}$$
and comparing the coefficients of both sides we obtain the strong Morse inequalities, because the polynomials \(q_i\) have non-negative coefficients. The weak Morse inequalities follow immediately from the strong Morse inequalities. \(\square \)
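For the reader's convenience we make the coefficient comparison in the proof above explicit. Writing \(Q(t):=\sum _{i=1}^nq_i(t)\), equation (27) gives \(\sum _{i=1}^np_{M_i}(t)-p_S(t)=(1+t)Q(t)\), and multiplying by \((1+t)^{-1}\) and reading off the coefficient of \(t^k\) yields
$$\begin{aligned} \sum _{j=0}^{k}(-1)^{k-j}\bigl (m_j(\mathcal {M})-\beta _j(S)\bigr )\;=\;\text {the coefficient of } t^k \text { in } Q(t)\;\ge \;0, \end{aligned}$$
which is exactly the strong Morse inequality for k.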

9.5 Gradient-Like Combinatorial Multivector Fields

We say that \(\mathcal {V}\) is gradient-like if there exists an \(f:X\rightarrow \mathbb {R}\) such that for any solution \(\rho :\mathbb {Z}{\nrightarrow }X\) of \(\mathcal {V}\) and \(n,n+1\in {\text {dom}}\,\rho \) the following two conditions hold
$$\begin{aligned}&f(\rho (n+1))\le f(\rho (n)), \end{aligned}$$
(29)
$$\begin{aligned}&f(\rho (n+1))=f(\rho (n))\;\Rightarrow \;[\rho (n)]_\mathcal {V}=[\rho (n+1)]_\mathcal {V}. \end{aligned}$$
(30)
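Conditions (29) and (30) constrain only consecutive values of solutions, that is, single arrows of \(\Pi _\mathcal {V}\), so they can be verified by scanning all arrows. A minimal Python sketch under the illustrative conventions of the earlier sketches, with f a hypothetical dictionary from cells to real numbers; this is an illustration, not the paper's code.

def is_gradient_like(f, X, Pi, mv):
    """Check conditions (29)-(30) over all arrows y in Pi_V(x)."""
    for x in X:
        for y in Pi[x]:
            if f[y] > f[x]:
                return False                     # violates (29)
            if f[y] == f[x] and mv[y] != mv[x]:
                return False                     # violates (30)
    return True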
Proposition 9.13
Assume \(\mathcal {V}\) is gradient-like and \(\rho :\mathbb {Z}\rightarrow X\) is a full solution. Then \(\alpha (\rho )\) and \(\omega (\rho )\) are critical multivectors.
Proof
Let \(f:X\rightarrow \mathbb {R}\) be such that (29) and (30) are satisfied. Since X is finite, it follows from (29) that \(f(\rho (n))\) does not depend on n if n is sufficiently large. Hence, by (30) and the definition of the solution we see that there exists an \(n_0\in \mathbb {Z}\) such that \(\rho (n)=\rho (n_0)\) for \(n\ge n_0\). In particular, \(\rho (n_0)\) is critical and for \(k\ge n_0\) we have
$$\begin{aligned} \left[ {\text {im}}\,\sigma _+^k\varrho ^+\right] ^+_{\mathcal {V}}=[\{\rho (n_0)\}]^+_{\mathcal {V}}=[\rho (n_0)]_\mathcal {V}\end{aligned}$$
and consequently \(\omega (\rho )=[\rho (n_0)]_\mathcal {V}\). It follows that \([\rho (n_0)]_\mathcal {V}\) is a critical multivector. The proof for \(\alpha (\rho )\) is analogous. \(\square \)
Theorem 9.14
Assume \(\mathcal {V}\) is a gradient-like combinatorial multivector field on X and let C denote the set of critical cells of \(\mathcal {V}\). Then the family \(\mathcal {C}:=\{\,[c]_\mathcal {V}\mid c\in C\,\}\) coincides with the family \(\mathcal {B}\) of basic sets of \(\mathcal {V}\). In particular, it is a Morse decomposition of X.
Proof
Let \(B\in \mathcal {B}\) and let \(x,y\in B\). Then there is a path \(\rho \) from \(x^\star \) to \(x^\star \) through \(y^\star \) of length at least three. It follows from (29) that f must be constant along \(\rho \) and from (30) that \([x]_\mathcal {V}=[y]_\mathcal {V}\). Consequently, \(x^\star =y^\star \). Therefore, \(B=[x]_\mathcal {V}=[x^\star ]_\mathcal {V}\) and \(x^\star \in C\), because the path \(\rho \) must consist of loops. Hence, \(B\in \mathcal {C}\) and \(\mathcal {B}\subseteq \mathcal {C}\). To see the opposite inclusion take a \(c\in C\). Obviously, \(c\in {\text {CR}}\,(X)\). Let B be the basic set to which c belongs. We already know that \(B=[c']_\mathcal {V}\) for a \(c'\in C\). Then \(c\in B=[c']_\mathcal {V}\); hence, \([c]_\mathcal {V}=[c']_\mathcal {V}=B\in \mathcal {B}\). The remaining assertion follows from Theorem 9.2. \(\square \)
Let c be a critical cell of \(\mathcal {V}\). It follows from Proposition 7.2 that the critical multivector \([c]_\mathcal {V}\) is an isolated invariant set. We say that c is non-degenerate if there is a \(k\in \mathbb {Z}^+\) such that the ith Conley coefficient of \([c]_\mathcal {V}\) satisfies
$$\begin{aligned} \beta _i([c]_\mathcal {V})={\left\{ \begin{array}{ll} 1 &{}\quad \text {if }i=k,\\ 0 &{}\quad \text {otherwise.} \end{array}\right. } \end{aligned}$$
(31)
We say that the number k is the Morse index of c.
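In computational terms, non-degeneracy and the Morse index of a critical cell can be read off from the sequence of Conley coefficients. A minimal Python sketch (the names are ours, not the paper's):
```python
def morse_index(conley_coefficients):
    """Return k if the coefficients are 1 at exactly one position k and 0 elsewhere,
    i.e. the critical cell is non-degenerate; return None otherwise."""
    nonzero = [i for i, b in enumerate(conley_coefficients) if b != 0]
    if len(nonzero) == 1 and conley_coefficients[nonzero[0]] == 1:
        return nonzero[0]
    return None
```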
Theorem 9.15
Assume \(\mathcal {V}\) is a gradient-like combinatorial multivector field on X and all critical cells of \(\mathcal {V}\) are non-degenerate. Let \(n_k\) denote the number of critical cells of Morse index k. Then for any non-negative integer k we have
(i)
(the strong Morse inequalities)
$$\begin{aligned} n_k-n_{k-1}+\cdots \pm n_0 \ge \beta _k(X)-\beta _{k-1}(X)+\cdots \pm \beta _0(X), \end{aligned}$$
 
(ii)
(the weak Morse inequalities)
$$\begin{aligned} n_k \ge \beta _k(X). \end{aligned}$$
 
Proof
Let C be the set of critical cells of \(\mathcal {V}\) and denote by \(\mathcal {C}\) the Morse decomposition given by Theorem 9.14. Since all critical cells of \(\mathcal {V}\) are non-degenerate, we see from (31) that
$$\begin{aligned} m_k(\mathcal {C})=\sum _{c\in C} \beta _k([c]_\mathcal {V})=n_k. \end{aligned}$$
Thus, the result follows immediately from Theorem 9.12. \(\square \)

10 Constructing Combinatorial Multivector Fields from a Cloud of Vectors

In this section we present the algorithm used to generate some of the examples discussed in Sect. 3. It is a prototype algorithm intended to show the potential of the theory presented in this paper in the algorithmic analysis of sampled dynamical systems and in the combinatorialization of flows given by differential equations. The algorithm may be generalized to arbitrary polyhedral cellular complexes in \(\mathbb {R}^n\). However, it is not necessarily optimal. The question of how to construct combinatorial multivector fields from sampled flows so as to obtain the best possible description of the dynamics, as well as how to compute the associated topological invariants efficiently, is left for future investigation.
Consider \(Z:=\{\,0,2,\ldots , 2n\,\}^2\). Let E denote the set of open segments of length two with endpoints in Z and let S be the set of open squares of area four with vertices in Z. Then \(X:=Z\cup E\cup S\) is the collection of cells which makes the square \([0,2n]^2\) into a cellular complex. We identify each cell with the coordinates of its centre of mass. Thus, vertices have two even coordinates, squares have two odd coordinates, and edges have one even and one odd coordinate. For an edge e in X we denote by \(e^-,e^+\) its two vertices and we write \(i^\bot (e)\in \{1,2\}\) for the index of the axis orthogonal to the edge.
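To fix ideas, the cells of X together with their dimensions and the index \(i^\bot \) can be encoded directly through the centre-of-mass coordinates. The following Python sketch is one possible encoding; the function names are ours, not the paper's.
```python
from itertools import product

def cells(n):
    """All cells of the cellular decomposition of [0, 2n]^2, each identified
    with the integer coordinates of its centre of mass."""
    return list(product(range(2 * n + 1), repeat=2))

def dim(cell):
    """Dimension of a cell: number of odd coordinates of its centre of mass
    (0 for a vertex, 1 for an edge, 2 for a square)."""
    return sum(c % 2 for c in cell)

def i_perp(edge):
    """Index in {1, 2} of the axis orthogonal to an edge: the axis whose
    centre-of-mass coordinate is even."""
    return 2 if edge[0] % 2 == 1 else 1

# Example: in [0, 4]^2 (n = 2) there are 9 vertices, 12 edges and 4 squares.
X = cells(2)
assert sum(1 for c in X if dim(c) == 0) == 9
assert sum(1 for c in X if dim(c) == 1) == 12
assert sum(1 for c in X if dim(c) == 2) == 4
```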
Assume that we have a map \(v:=(v_1,v_2):Z\rightarrow \mathbb {R}^2\) which sends every vertex of X to a classical planar vector. Such a map may assign to each point of Z a vector in \(\mathbb {R}^2\) obtained by sampling a smooth planar vector field. It may also be obtained from a physical experiment or even be selected randomly.
Let \({\text {sgn}}\,:\mathbb {R}\rightarrow \{-1,0,1\}\) denote the sign function. Consider the map \({\text {arg}}\,:\mathbb {R}^2\setminus \{0\}\rightarrow [-\pi ,\pi )\) given by
$$\begin{aligned} {\text {arg}}\,(x_1,x_2):={\left\{ \begin{array}{ll} \arccos \frac{x_1}{\sqrt{x_1^2+x_2^2}} &{}\quad \text {if}\,\, x_2\ge 0, \\ -\arccos \frac{x_1}{\sqrt{x_1^2+x_2^2}} &{}\quad \text {if}\,\, x_2< 0. \end{array}\right. } \end{aligned}$$
and the planar map \(D_{\mu ,\epsilon }:\mathbb {R}^2\rightarrow \mathbb {R}^2\) given by
$$\begin{aligned} D_{\mu ,\epsilon }(x_1,x_2):= {\left\{ \begin{array}{ll} 0 &{}\quad \text {if}\,\, x_1^2+x_2^2\le \epsilon ^2,\\ ({\text {sgn}}\,x_1,0) &{}\quad \text {if} \,\,|{\text {arg}}\,x|\le \mu \,\, \hbox {or} \,\,|{\text {arg}}\,x|\ge \pi -\mu ,\\ (0,{\text {sgn}}\,x_2) &{}\quad \text {if} \,\,|{\text {arg}}\,x-\pi /2|\le \mu \\ &{}\quad \text {or} \,\,|{\text {arg}}\,x+\pi /2|\le \mu ,\\ ({\text {sgn}}\,x_1,{\text {sgn}}\,x_2) &{}\quad \text {otherwise.} \end{array}\right. } \end{aligned}$$
The map \(D_{\mu ,\epsilon }\) normalizes the vectors in the plane in the sense that it sends each planar vector v to one of the nine vectors with coordinates in \(\{-1,0,1\}\) depending on the location of v in one of the nine regions marked in Fig. 13.
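For concreteness, the maps \({\text {sgn}}\,\), \({\text {arg}}\,\) and \(D_{\mu ,\epsilon }\) admit a direct transcription, for instance in Python; the sketch below mirrors the formulas above, and the function names are ours.
```python
import math

def sgn(x):
    """Sign function with values in {-1, 0, 1}."""
    return (x > 0) - (x < 0)

def arg(x1, x2):
    """Angle of the non-zero vector (x1, x2), essentially in [-pi, pi)."""
    a = math.acos(x1 / math.hypot(x1, x2))
    return a if x2 >= 0 else -a

def D(mu, eps, x1, x2):
    """Normalize a planar vector to one of the nine vectors with coordinates in {-1, 0, 1}."""
    if x1 * x1 + x2 * x2 <= eps * eps:
        return (0, 0)
    a = arg(x1, x2)
    if abs(a) <= mu or abs(a) >= math.pi - mu:
        return (sgn(x1), 0)
    if abs(a - math.pi / 2) <= mu or abs(a + math.pi / 2) <= mu:
        return (0, sgn(x2))
    return (sgn(x1), sgn(x2))
```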
The algorithm is presented in Table 1. Its rough description is as follows. It accepts on input a triple \((v,\mu ,\epsilon )\), where \(v:Z\rightarrow \mathbb {R}^2\) is a map, \(\mu \in [0,\pi /4]\) and \(\epsilon >0\). The map v is normalized to \(\bar{v}\) by applying the map \(D_{\mu ,\epsilon }\). The algorithm defines a map \(\theta :X\rightarrow X\) such that \(\mathcal {V}_\theta :=\{\,\theta ^{-1}(y)\mid y\in {\text {im}}\,\theta \,\}\) encodes a combinatorial multivector field via Theorem 5.11. The construction proceeds in three steps. In the first step the map is initialized to the identity. This corresponds to a combinatorial multivector field consisting only of singletons. In the second step each edge is tested to determine whether the normalized vectors at its endpoints, projected onto the axis perpendicular to the edge, coincide. If they do, the edge is paired with the square to which both projections point. In the third step each vertex \(z\in Z\) is analysed. If the normalized vector \(\bar{v}(z)\) is parallel to one of the axes, the vertex z is paired with the corresponding edge. Otherwise, it is combined into one multivector with the corresponding square and adjacent edges, unless this would generate a conflict with one of the neighbouring vertices.
Note that the only case in which strict multivectors may be generated is the third step, when the normalized vector is not parallel to any axis. The chances that this happens decrease as the parameter \(\mu \) is increased. The normalized vector is always parallel to one of the axes when \(\mu \ge \pi /4\). Thus, if \(\mu \ge \pi /4\), then the algorithm returns a combinatorial vector field, that is, a combinatorial multivector field with no strict multivectors.
Theorem 10.1
Consider the algorithm in Table 1. When applied to a map v and numbers \(\mu > 0\), \(\epsilon >0\), it always stops and returns a map \(\theta :X\rightarrow X\) which satisfies conditions (i)–(iii) of Theorem 5.11. Thus, the collection \(\{\,\theta ^{-1}(y)\mid y\in {\text {im}}\,\theta \,\}\) is a combinatorial multivector field on X.
Proof
After completing the first \(\mathbf {foreach}\) loop the map \(\theta \), being the identity, trivially satisfies conditions (i)–(iii) of Theorem 5.11. Thus, it suffices to observe that each modification of \(\theta \) in the subsequent loops preserves these properties. One can easily check that this is indeed the case. \(\square \)
Table 1
Algorithm CMVF constructing a combinatorial multivector field from a collection of vectors on an integer lattice in the plane
[Pseudocode of algorithm CMVF, reproduced as a figure in the published version.]
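For readers who prefer code to pseudocode, the following Python sketch restates the three-step construction described above. It is a simplified rendering, not the exact pseudocode of Table 1: it relies on the helpers `cells`, `dim` and `D` from the earlier sketches, uses 0-based axis indices (whereas \(i^\bot \) in the text is 1-based), and its conflict test in the third step is a simplifying assumption of ours.
```python
def cmvf(v, mu, eps, n):
    """Simplified sketch of the three-step construction of the map theta encoding
    a combinatorial multivector field on the cellular complex of [0, 2n]^2.

    v -- dict sending each vertex (pair of even coordinates) to a vector in R^2.
    """
    def inside(c):
        return all(0 <= q <= 2 * n for q in c)

    X = cells(n)
    theta = {c: c for c in X}                      # step 1: identity (only singletons)
    vbar = {z: D(mu, eps, *v[z]) for z in X if dim(z) == 0}

    # step 2: pair an edge with a square when the normalized vectors at both
    # endpoints have the same non-zero projection onto the axis across the edge
    for e in (c for c in X if dim(c) == 1):
        along = 0 if e[0] % 2 == 1 else 1          # axis along the edge
        across = 1 - along                         # axis perpendicular to the edge
        ends = [tuple(e[k] + (d if k == along else 0) for k in (0, 1)) for d in (-1, 1)]
        s = vbar[ends[0]][across]
        if s != 0 and s == vbar[ends[1]][across]:
            sq = tuple(e[k] + (s if k == across else 0) for k in (0, 1))
            if inside(sq):
                theta[e] = sq

    # step 3: pair each vertex with an edge, or merge it with a square and two edges
    for z in (c for c in X if dim(c) == 0):
        w = vbar[z]
        if w == (0, 0):
            continue                               # critical vertex: stays a singleton
        if 0 in w:                                 # parallel to an axis: pair with the pointed edge
            e = (z[0] + w[0], z[1] + w[1])
            if inside(e) and theta[e] == e:        # extra guard, a simplifying assumption
                theta[z] = e
        else:                                      # diagonal: merge with the pointed square and edges
            sq = (z[0] + w[0], z[1] + w[1])
            e1, e2 = (z[0] + w[0], z[1]), (z[0], z[1] + w[1])
            if inside(sq) and theta[e1] == e1 and theta[e2] == e2:   # simplified conflict test
                theta[z] = theta[e1] = theta[e2] = sq
    return theta
```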

11 Extensions

In this section we briefly indicate the possibilities of some extensions of the theory presented in this paper. The details will be presented elsewhere.

11.1 Generalized Multivector Fields

The requirement that a multivector has a unique maximal element with respect to the partial order \(\le _\kappa \) is convenient. It simplifies the analysis of combinatorial multivector fields, because every non-dominant cell x admits precisely one arrow in \(G_\mathcal {V}\) originating in x, namely the up-arrow from x to \(x^\star \). It is also not very restrictive in the combinatorial modelling of a dynamical system. However, in some situations the requirement may be inconvenient. For instance, if sampling is performed near a hyperbolic or repelling fixed point, it may happen that the sampled vectors point into different top-dimensional cells. Similarly, when a parameterized family of dynamical systems is studied (see Sect. 11.2), it may be natural to model a collection of nearby dynamical systems by one combinatorial multivector field. In such a situation the uniqueness requirement will not be satisfied in places where neighbouring systems point into different cells.
Taking these limitations into account it is reasonable to consider the following extension of the concept of a combinatorial multivector field. A generalized multivector in a Lefschetz complex \((X,\kappa )\) is a proper collection of cells in X. A generalized multivector V is critical if \(H^\kappa (V)\ne 0\). A generalized multivector field \(\mathcal {V}\) is a partition of X into generalized multivectors. For a cell \(x\in X\) we denote by [x] the unique generalized multivector in \(\mathcal {V}\) to which x belongs. The associated generalized directed graph \(G_\mathcal {V}\) has vertices in X and an arrow from x to y if one of the following conditions is satisfied
$$\begin{aligned}&\dim x < \dim y \text { and } y\in [x] \text { (an }\, up\text {-}arrow), \end{aligned}$$
(32)
$$\begin{aligned}&\dim x > \dim y \text { and } y\in {\text {mo}}\,[x] \text { (a }down\text {-}arrow), \end{aligned}$$
(33)
$$\begin{aligned}&x=y \text { and } [y] \text { is critical (a } loop). \end{aligned}$$
(34)
We write \(y \prec _{\mathcal {V}} x\) if there is an arrow from x to y in \(G_\mathcal {V}\), denote by \(\le _{\mathcal {V}}\) the preorder induced by \(\prec _{\mathcal {V}}\), interpret \(\prec _{\mathcal {V}}\) as a multivalued map \(\Pi _{\mathcal {V}}: X{\,\overrightarrow{\rightarrow }\,}X\) and study the associated dynamics. An example of a generalized multivector field together with the associated up-arrows is presented in Fig. 14.
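Under the same conventions, the arrow set of \(G_\mathcal {V}\) can be assembled directly from conditions (32)–(34). In the Python sketch below, the helpers `dim`, `mouth` and `is_critical` are hypothetical stand-ins for \(\dim \), \({\text {mo}}\,\) and the test \(H^\kappa (V)\ne 0\); they are not part of the paper.
```python
def generalized_digraph(partition, dim, mouth, is_critical):
    """Arrow set of G_V for a generalized multivector field given as a partition of X.

    partition   -- iterable of generalized multivectors, each a set of cells
    dim         -- hypothetical helper: dimension of a cell
    mouth       -- hypothetical helper: the mouth mo(V) of V as a set of cells
    is_critical -- hypothetical helper: whether H^kappa(V) != 0
    """
    arrows = set()
    for V in partition:
        mo_V = mouth(V)
        crit = is_critical(V)
        for x in V:
            for y in V:
                if dim(x) < dim(y):          # (32): up-arrow
                    arrows.add((x, y))
            for y in mo_V:
                if dim(x) > dim(y):          # (33): down-arrow
                    arrows.add((x, y))
            if crit:                         # (34): loop
                arrows.add((x, x))
    return arrows
```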
The main ideas of the theory presented in this paper extend to this generalized case. The proofs become more involved, because more than one up-arrow may originate from a given cell, so there are more cases to analyse.
It is tempting to simplify the analysis by gluing the vertices of \(G_\mathcal {V}\) belonging to the same multivector. However, proceeding this way loses the original phase space. Moreover, the concept of an index pair makes sense only in the original space, because in general the sets \(P_1\), \(P_2\) in an index pair \(P=(P_1,P_2)\) are not \(\mathcal {V}\)-compatible. Also, since the gluing process would modify the space differently for two different combinatorial multivector fields, it would not be possible to compare their dynamics. In consequence, the concepts of perturbation and continuation briefly explained in the following section would not make sense.

11.2 Perturbations and Continuation

Among the fundamental features of the classical Conley index theory is the continuation property [9, Chapter IV]. An analogous property may be formulated in the combinatorial case. Let \(\mathcal {V}\), \(\bar{\mathcal {V}}\) be two combinatorial multivector fields on a Lefschetz complex X. We say that \(\bar{\mathcal {V}}\) is a perturbation of \(\mathcal {V}\) if \(\bar{\mathcal {V}}\) is a refinement of \(\mathcal {V}\) or \(\mathcal {V}\) is a refinement of \(\bar{\mathcal {V}}\). A parameterized family of combinatorial multivector fields is a sequence \(\mathcal {V}_1,\mathcal {V}_2,\ldots ,\mathcal {V}_n\) of combinatorial multivector fields on X such that \(\mathcal {V}_{i+1}\) is a perturbation of \(\mathcal {V}_i\). Assume S is an isolated invariant set of \(\mathcal {V}\) and \(\bar{S}\) is an isolated invariant set of \(\bar{\mathcal {V}}\). We say that S and \(\bar{S}\) are related by a direct continuation if there is a pair \(P=(P_1,P_2)\) of closed sets in X such that P is an index pair for S with respect to \(\mathcal {V}\) and for \(\bar{S}\) with respect to \(\bar{\mathcal {V}}\). We say that S and \(\bar{S}\) are related by continuation along a parameterized family \(\mathcal {V}=\mathcal {V}_1,\mathcal {V}_2,\ldots ,\mathcal {V}_n=\bar{\mathcal {V}}\) if there exist isolated invariant sets \(S_i\) of \(\mathcal {V}_i\) such that \(S_1=S\), \(S_n=\bar{S}\) and \(S_i\), \(S_{i+1}\) are related by a direct continuation. In a similar way one can define continuation of Morse decompositions of isolated invariant sets.
Figure 15 presents two parameterized families of combinatorial multivector fields: \(\mathcal {V}_1\), \(\mathcal {V}_2\), \(\mathcal {V}_3\), \(\mathcal {V}_4\), \(\mathcal {V}_5\) in the top row and \(\mathcal {W}_1\), \(\mathcal {W}_2\), \(\mathcal {W}_3\), \(\mathcal {W}_4\), \(\mathcal {W}_5\), \(\mathcal {W}_6\), \(\mathcal {W}_7\) in the bottom row. Note that \(\mathcal {V}_1=\mathcal {W}_1\) and \(\mathcal {V}_5=\mathcal {W}_7\). The singleton \(\{AB\}\) is an isolated invariant set of \(\mathcal {V}_1\). The set \(\{A,AB,AC\}\) is an isolated invariant set of \(\mathcal {V}_2\). These two sets are related by a direct continuation, because the pair \((\{A,B,C,AB,AC\},\{B,C\})\) is an index pair for \(\{AB\}\) with respect to \(\mathcal {V}_1\) and for \(\{A,AB,AC\}\) with respect to \(\mathcal {V}_2\). Note that \(\{AB\}\) is also an isolated invariant set of \(\mathcal {V}_5\) but it is not related by continuation along \(\mathcal {V}_1\), \(\mathcal {V}_2\), \(\mathcal {V}_3\), \(\mathcal {V}_4\), \(\mathcal {V}_5\) to \(\{AB\}\) as an isolated invariant set of \(\mathcal {V}_1\). But \(\{AB\}\) as an isolated invariant set for \(\mathcal {W}_1\) is related by continuation along \(\mathcal {W}_1\), \(\mathcal {W}_2\), \(\mathcal {W}_3\), \(\mathcal {W}_4\), \(\mathcal {W}_5\), \(\mathcal {W}_6\), \(\mathcal {W}_7\) to \(\{CD\}\) as an isolated invariant set for \(\mathcal {W}_7\). Actually, the Morse decomposition of \(\mathcal {W}_1\) consisting of Morse sets \(\{ABCD\}\), \(\{AB\}\), \(\{CD\}\), \(\{B\}\), \(\{C\}\) is related by continuation along \(\mathcal {W}_1\), \(\mathcal {W}_2\), \(\mathcal {W}_3\), \(\mathcal {W}_4\), \(\mathcal {W}_5\), \(\mathcal {W}_6\), \(\mathcal {W}_7\) to the Morse decomposition of \(\mathcal {W}_7\) consisting of Morse sets \(\{ABCD\}\), \(\{CD\}\), \(\{AB\}\), \(\{C\}\), \(\{B\}\). In particular, the isolated invariant sets \(\{ABCD\}\), \(\{AB\}\), \(\{CD\}\), \(\{B\}\), \(\{C\}\) of \(\mathcal {W}_1\) are related by continuation respectively to the isolated invariant sets \(\{ABCD\}\), \(\{CD\}\), \(\{AB\}\), \(\{C\}\), \(\{B\}\) of \(\mathcal {W}_7\).

11.3 Homotopy Conley Index for Combinatorial Multivector Fields on CW Complexes

Consider a finite regular CW complex X with the collection of cells K. We say that \(V\subseteq K\) is a multivector if |V|, the union of all cells in V, is a proper subset of X. We say that a multivector V is regular if \({\text {mo}}\,|V|:={\text {cl}}\,|V|\setminus |V|\) is a deformation retract of \({\text {cl}}\,|V|\subseteq X\). Otherwise we call V critical. We define the generalized multivector field \(\mathcal {V}\) on X as a partition of K into generalized multivectors. Using the modified concepts of regular and critical multivectors as in the case of Lefschetz complexes we introduce the directed graph \(G_\mathcal {V}\) and the associated dynamics. This lets us introduce the concept of an isolated invariant set S and the associated Conley index defined as the homotopy type of \(P_1/P_2\) for an index pair \((P_1,P_2)\) isolating S. The proofs rely on gluing the homotopies provided by the deformation retractions of the individual regular multivectors.

11.4 Discrete Morse Theory for Combinatorial Multivector Fields

The main result of discrete Morse theory [10, Cor 3.5] establishes a homotopy equivalence between the original complex X and a complex \(X'\) in which the number of k-dimensional cells equals the number of critical points of the Morse function with Morse index k. A natural question is whether one can use multivectors instead of vectors to obtain a similar result under the additional assumption that all the critical cells are non-degenerate. It seems that the collapsing approach to discrete Morse theory presented in [6] extends to the case of combinatorial multivector fields on CW complexes defined in the preceding section. In the algebraic case of Lefschetz complexes one needs the extra assumption that the closure of every regular multivector is chain homotopy equivalent to its mouth.

12 Conclusions and Future Research

The presented theory shows that the combinatorialization of dynamics, started by Forman's paper [11], may be extended to cover such concepts as isolated invariant set, Conley index, attractor, repeller, attractor–repeller pair and Morse decomposition. Moreover, the broader concept of a combinatorial multivector field introduced in the present paper seems to be more flexible than Forman's original concept of a combinatorial vector field and should better serve the needs of the combinatorialization of classical topological dynamics. In particular, some classical concepts in dynamics, for instance the homoclinic connection, do not have counterparts in the theory of combinatorial vector fields but do in the case of combinatorial multivector fields.
The theory proposed in this paper is far from complete. Besides the extensions discussed in the previous section, there are several directions of research to be undertaken. For instance, connection matrices constitute an essential part of the Conley index theory. It is natural to expect that they have a counterpart in the combinatorial setting. Several authors proposed a generalization of the Conley index theory to the case of time-discrete dynamical systems (iterates of maps) [12, 22, 25, 30]. A challenging question is what a combinatorial analogue of a time-discrete dynamical system is and how to construct a Conley index theory for such systems. Poincaré maps for combinatorial multivector fields may be a natural place to start investigations in this direction. Also, it would be interesting to understand the relation between the dynamics of a combinatorial multivector field and its Forman refinements.
The real significance of combinatorial multivector fields will be seen only after constructing bridges between combinatorial and classical dynamics. Some work towards a bridge from combinatorial to classical dynamics has been undertaken in [16]. However, from the point of view of applications the reverse approach is more important: constructing a combinatorial multivector field modelling a classical flow. The algorithm presented in this paper is only a hint that such constructions are possible. Obviously, the finite resolution of the combinatorial setting prohibits any one-to-one correspondence, but approximation schemes representing the classical flow up to a resolution controlled by the size of the cells in the approximation should be possible. Depending on the goal, there are at least two options. If the goal is a rigorous description of the classical dynamics by means of a combinatorial multivector field, the techniques developed in [5] might be of interest. If rigour is not necessary, the use of a piecewise constant approximation of a vector field [31] to construct a combinatorial multivector field is an interesting option.

Acknowledgments

I am indebted to Abhishek Rathod, who presented some ideas of Conley theory in the setting of Forman's combinatorial vector fields in a talk given at the DyToComp 2012 conference in Będlewo, Poland. This was a direct inspiration to undertake the present research. I am grateful to Ulrich Bauer and Herbert Edelsbrunner for pointing out to me the concept of generalized Morse matchings. This conversation led directly to the idea of the combinatorial multivector field presented in this paper. Also, I would like to thank Tamal Dey and the anonymous reviewers for several valuable comments which helped me improve the presentation of the paper.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Literature
1. P.S. Alexandroff. Diskrete Räume, Mathematiceskii Sbornik 2 (1937), 501–518.
2. J.A. Barmak. Algebraic Topology of Finite Topological Spaces and Applications, Lecture Notes in Mathematics 2032, Springer-Verlag, 2011.
3. G. Bernot, J-P. Comet, A. Richard, J. Guespin. Application of formal methods to biological regulatory networks: extending Thomas' asynchronous logical approach with temporal logic, Journal of Theoretical Biology 229 (2004), 339–347.
4. R.H. Bing. Some aspects of the topology of 3-manifolds related to the Poincaré Conjecture, in Lectures on Modern Mathematics II, T.L. Saaty ed., Wiley, NY (1964), 93–128.
6. P. Brendel, G. Ellis, M. Juda, M. Mrozek. Fundamental Group Algorithm for low dimensional tesselated CW complexes, preprint 2015, arXiv:1507.03396 [math.AT].
7. J. Bush, M. Gameiro, S. Harker, H. Kokubu, K. Mischaikow, I. Obayashi, P. Pilarczyk. Combinatorial-topological framework for the analysis of global dynamics, Chaos 22 (2012), 047508.
8. G. Chen, K. Mischaikow, R.S. Laramee. Efficient Morse decompositions of vector fields, IEEE Trans. Visual. Comput. Graph. 14 (2008), 848–862.
9. C. Conley. Isolated Invariant Sets and the Morse Index, Amer. Math. Soc., CBMS Regional Conf. Series Math. 38 (1978).
14. M. Jöllenbeck, V. Welker. Resolution of the residue class field via algebraic discrete Morse theory, Memoirs of the AMS 197, American Mathematical Society, Providence, 2009.
15. T. Kaczynski, K. Mischaikow, M. Mrozek. Computational Homology, Applied Mathematical Sciences 157, Springer-Verlag, 2004.
16. T. Kaczynski, M. Mrozek, Th. Wanner. Towards a Formal Tie Between Combinatorial and Classical Vector Field Dynamics, IMA Preprint Series #2443, November 2014, accepted to Journal of Computational Dynamics.
17. V. Kovalevsky. Finite Topology as Applied to Image Analysis, Computer Vision, Graphics and Image Processing 45 (1989), 141–161.
19. S. Lefschetz. Algebraic Topology, American Mathematical Society Colloquium Publications, Vol. 27, American Mathematical Society, New York, 1942.
20. W.S. Massey. A Basic Course in Algebraic Topology, Graduate Texts in Mathematics 127, Springer-Verlag, New York, 1991.
21. E.G. Minian. Some remarks on Morse theory for posets, homological Morse theory and finite manifolds, Topology and its Applications 159 (2012), 2860–2869.
22. M. Mrozek. Leray functor and cohomological Conley index for discrete dynamical systems, Trans. Amer. Math. Soc. 318 (1990), 149–178.
24. W. Reich, D. Schneider, Ch. Heine, A. Wiebel, G. Chen, G. Scheuermann. Combinatorial Vector Field Topology in Three Dimensions, in R. Peikert et al. (eds.), Topological Methods in Data Analysis and Visualization II, Mathematics and Visualization, Springer-Verlag, Berlin Heidelberg, 2012.
25. J.W. Robbin, D. Salamon. Dynamical systems, shape theory and the Conley index, Ergodic Theory Dynam. Systems 8* (1988), 375–393.
26. K. Rybakowski, E. Zehnder. A Morse Equation in Conley Index Theory for Semiflows on Metric Spaces, Ergod. Th. and Dynam. Sys. 5 (1985), 123–143.
27. E. Skjöldberg. Combinatorial discrete Morse theory from an algebraic viewpoint, Trans. Amer. Math. Soc. 358 (2006), 115–129.
28. E. Snoussi, R. Thomas. Logical identification of all steady states: the concept of feedback loop characteristic states, Bull. Math. Biol. 55 (1993), 973–991.
29. E. Steinitz. Beiträge zur Analysis Situs, Sitzungsbericht Berliner Mathematischen Gesellschaft 7 (1908), 29–49.
31. A. Szymczak, E. Zhang. Robust Morse Decompositions of Piecewise Constant Vector Fields, IEEE Transactions on Visualization and Computer Graphics 18 (2012), 938–951.
32. R. Wisniewski, J.A. Larsen. Combinatorial Vector Fields for Piecewise Affine Control Systems, Proceedings of IFAC World Congress, Seoul, July 2008.