
Open Access 2025 | Original Paper | Book Chapter

Weakly Acyclic Diagrams: A Data Structure for Infinite-State Symbolic Verification

Authors: Michael Blondin, Michaël Cadilhac, Xin-Yi Cui, Philipp Czerner, Javier Esparza, Jakob Schulz

Published in: Tools and Algorithms for the Construction and Analysis of Systems

Publisher: Springer Nature Switzerland


Abstract

This chapter investigates how ordered binary decision diagrams (OBDDs) can be extended to support infinite-state symbolic verification, introducing weakly acyclic diagrams (WADs) as a new data structure. WADs represent weakly acyclic languages, i.e., languages recognized by deterministic finite automata (DFAs) with a particular cycle structure, which makes it possible to manipulate infinite-state systems. The text develops the theory of WADs and shows how small modifications to the top-down dynamic programming algorithms for OBDDs generalize them to WADs. It also presents a first application of WADs to the symbolic verification of infinite-state models, focusing on the backwards reachability algorithm for well-structured transition systems. The chapter covers various systems, including lossy channel systems, Petri nets, and broadcast protocols, and reports experimental results comparing WADs with established tools. In addition, it discusses related work on weakly acyclic automata and on dedicated data structures for specific systems, providing a comprehensive overview of the field and of the potential of WADs for symbolic verification.
Notes
M. Blondin was supported by a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada (NSERC).

1 Introduction

Ordered binary decision diagrams (OBDDs) are a fundamental data structure for the manipulation of Boolean functions [8, 13], widely used to support symbolic model checking of finite-state systems [14, 16]. In this approach to model checking, sets of configurations of a system are encoded as Boolean functions of fixed arity, say n, and the transition relation is encoded as a Boolean function of arity 2n. After fixing a total order on the Boolean variables, the OBDD representation of a Boolean function is unique up to isomorphism, and one can implement different model checking algorithms on top of an OBDD library.
The fundamental operations on sets and relations used by symbolic model-checking algorithms are—besides union, intersection, and complement of sets and relations—the pre and post operations that return the immediate predecessors or successors of a given set of configurations with respect to the transition relation. All of them are efficiently implemented on OBDDs using top-down dynamic programming: a call to an operation for functions of arity n invokes one or more calls to the same or other operations for functions of arity \(n-1\); intermediate results are stored using memoization [9, 14, 16].
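To illustrate the top-down dynamic programming pattern, here is a hedged sketch (ours, purely illustrative) of a memoized intersection on fixed-length languages over {0, 1}, where a language of length-n words is a nested pair of length-(n-1) languages, and the booleans True and False stand for {ε} and the empty language:

```python
from functools import lru_cache

@lru_cache(maxsize=None)          # memoization of intermediate results
def inter(u, v):
    # Intersection of two fixed-length languages of the same length,
    # computed top-down: a call at arity n recurses at arity n - 1.
    if isinstance(u, bool):       # arity 0: u, v are True or False
        return u and v
    (u0, u1), (v0, v1) = u, v
    return (inter(u0, v0), inter(u1, v1))

def member(word, u):
    # Membership test: descend along the bits of the word.
    for bit in word:
        u = u[int(bit)]
    return u is True
```

For instance, intersecting the length-2 words starting with 1 with those ending with 1 yields the language {11}.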
OBDDs can be presented in automata-theoretic terms. A Boolean function \(f(x_1, \ldots , x_n)\) with order \(x_1 < \cdots < x_n\) is representable by the language \(L_f :=\{b_1 \cdots b_n \in \{0, 1\}^n : f(b_1, \ldots , b_n) = true \}\). The OBDD for f with this variable order is very close to the minimal DFA (MDFA) for \(L_f\). From this point of view, OBDDs are just MDFAs recognizing fixed-length languages—languages whose words all have the same length—and operations on OBDDs are just operations on fixed-length languages canonically represented by their unique MDFAs.
The automata-theoretic view of OBDDs immediately suggests a generalization from fixed-length languages to arbitrary regular languages: canonically represent a regular language by its unique MDFA, and manipulate MDFAs using automata-theoretic operations. (This is for example the approach of the tool MONA [28].) However, in this generalization crucial advantages of OBDD algorithms get lost. Consider the implementation of the post operation that, given a MDFA \(\mathcal {A}\) recognizing a set of configurations and the MDFA \(\mathcal {T}\) recognizing the transition relation, computes the MDFA for the set of immediate successors of \(L(\mathcal {A})\) with respect to \(L(\mathcal {T})\) (see e.g. [20, Chap. 5]). The algorithm proceeds in three steps. First, it constructs an NFA \(\mathcal {A}'\) recognizing the set of immediate successors; then, it applies the powerset construction to determinize \(\mathcal {A}'\) into a DFA \(\mathcal {A}''\); finally, it minimizes \(\mathcal {A}''\) to yield the final MDFA \(\mathcal {A}'''\). The crucial point is that the intermediate DFA \(\mathcal {A}''\) can be exponentially larger than \(\mathcal {A}'''\), and so, in the worst case, post uses exponential space in the size of \(\mathcal {A}'''\), the final output. This is not the case for the OBDD implementation for fixed-length languages, which constructs \(\mathcal {A}'''\) directly from \(\mathcal {A}\) and \(\mathcal {T}\), using only space \(\mathcal {O}(|\mathcal {A}'''|)\).
This observation raises the question of whether OBDDs can be generalized beyond fixed-length languages while maintaining the advantages of OBDD algorithms. We answer it in the affirmative. We introduce weakly acyclic diagrams (WADs), a generalization of OBDDs for the representation of weakly acyclic languages. A language is weakly acyclic if it is recognized by a weakly acyclic DFA, and a DFA is weakly acyclic if every simple cycle of its state-transition diagram is a self-loop. So, for example, the language \(a^* b a^*\) is weakly acyclic. Note that, contrary to fixed-length languages, weakly acyclic languages can be infinite.
We develop the theory of WADs and in particular show that small modifications to the top-down dynamic programming algorithms for OBDDs generalize them to WADs. We also present a first application of WADs to symbolic infinite-state model checking. A fundamental technique in this area is the backwards reachability algorithm for well-structured transition systems [3, 23]. The algorithm computes the set of all predecessors of a given upward-closed set of configurations (with respect to the partial order making the system well-structured). For that, it iteratively computes the immediate predecessors of the current set of configurations, until a fixpoint is reached. Well-structuredness ensures that all intermediate sets remain upward-closed and that the iteration terminates. We show that for many well-structured transition systems, including lossy channel systems [4, 6], Petri nets [31], and broadcast protocols [21], upward-closed sets of configurations can be easily encoded as weakly acyclic languages, and so the backwards reachability algorithm can be implemented with WADs as data structure.
We implemented our algorithms in wadl, a prototype library for weakly acyclic diagrams. We conduct experiments on over 200 inputs for the backwards reachability algorithm, including instances of lossy channel systems, Petri nets, and broadcast protocols. We compare with some established tools using dedicated data structures and show that our generic approach is competitive.
Related work. Weakly acyclic automata have been studied extensively. However, past work has focused on algebraic, logical, and computational complexity questions, not on their algorithmics as a data structure.
In [15], Brzozowski and Fich showed that weakly acyclic automata, called partially ordered automata there, capture the so-called \(\mathcal {R}\)-trivial languages. They credit Eilenberg for earlier work on \(\mathcal {R}\)-trivial monoids [19]. Weakly acyclic automata have also been called extensive automata, e.g. in [32].
Weak acyclicity has also been defined in terms of nondeterministic automata. As we shall see, one such definition yields a model as expressive as the deterministic one, while another one is more expressive. These models have sometimes been called restricted partially ordered NFAs (rpoNFAs) and partially ordered NFAs (poNFAs). Schwentick, Thérien and Vollmer have shown that poNFAs characterize level 3/2 of the Straubing-Thérien hierarchy [34]. Bouajjani et al. characterized the class of languages accepted by poNFAs as languages closed under permutation rewriting, i.e., languages described by so-called alphabetic pattern constraints [12]. As mentioned, e.g., in the introduction of [30], further equivalences are known in first-order logic.
More recently, Krötzsch, Masopust and Thomazo have studied the computational complexity of basic questions for several types of partially ordered automata [29, 30], and Ryzhikov and Wolf have studied the complexity of problems for upper triangular Boolean matrices [33]; these matrices arise naturally from the transformation monoid of partially ordered automata.
In the context of symbolic verification, there is work on dedicated data structures for the representation of configuration sets for specific systems. In particular, Delzanno et al. introduced covering sharing trees to represent upward-closed sets of Petri net markings [18], which were later extended to interval sharing trees by Ganty et al. [24]. Boigelot and Godefroid studied queue-content decision diagrams to analyze channel systems [11].
Structure of the paper. In Section 2, we introduce and study weakly acyclic languages. In Sections 3 and 4, we introduce weakly acyclic diagrams and explain how to implement many operations. In Section 5, we explain how our framework can be used for infinite-state symbolic verification, and we then report on experimental results in Section 6. We conclude in Section 7. Some proofs and experimental results are deferred to the appendix of the full version [10].

2 Weakly acyclic languages

We assume familiarity with basic automata theory. Let us recall some notions. Let \(\mathcal {A}= (Q, \varSigma , \delta , q_0, F)\) be a deterministic finite automaton (DFA). We write \(\textrm{L}(\mathcal {A})\) to denote the language accepted by \(\mathcal {A}\). The language accepted by a state \(q \in Q\), denoted \(\textrm{L}(q)\), is the language accepted by the DFA \((Q, \varSigma , \delta , q, F)\). We write \(\varSigma ^+\) to denote the set of all nonempty words over alphabet \(\varSigma \).
For every word \(w \in \varSigma ^*\), we define \(\alpha (w)\) as the set of letters that occur within w, e.g. \(\alpha (baacac) = \{a, b, c\}\). The residual of a language \(L \subseteq \varSigma ^*\) with respect to a word \(u \in \varSigma ^*\) is \(L^u :=\{v \in \varSigma ^* : uv \in L\}\). Recall that a language is regular iff it has finitely many residuals. Moreover, every regular language has a unique minimal DFA. The states of a minimal DFA \(\mathcal {A}\) accept distinct residuals of \(\textrm{L}(\mathcal {A})\), and each residual of \(\textrm{L}(\mathcal {A})\) is accepted by a state of \(\mathcal {A}\). For example, let \(L \subseteq \{a, b\}^*\) be the language described by \((a b^* + b + \varepsilon )a b^*\). The minimal DFA for L is depicted in Figure 1. Its residuals are \(L^\varepsilon \), \(L^a\), \(L^b\), \(L^{aa}\) and \(L^{aaa}\).
Fig. 1.
Example of a minimal DFA (whose language is weakly acyclic).
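Since each state of the minimal DFA accepts exactly one residual, a residual \(L^u\) can be computed by simply walking the DFA along u. The sketch below is our own reconstruction of the Figure 1 DFA for \((a b^* + b + \varepsilon )a b^*\) as a Python dict; state names are the residuals they accept:

```python
# Transition function of the (reconstructed) Figure 1 DFA; the state
# "L^aaa" accepts the empty language.
DELTA = {
    "L":     {"a": "L^a",   "b": "L^b"},
    "L^a":   {"a": "L^aa",  "b": "L^a"},    # self-loop on b
    "L^b":   {"a": "L^aa",  "b": "L^aaa"},
    "L^aa":  {"a": "L^aaa", "b": "L^aa"},   # language b*, self-loop on b
    "L^aaa": {"a": "L^aaa", "b": "L^aaa"},  # empty language
}

def residual(u, q="L"):
    # The residual L^u is the language accepted from the state reached on u.
    for a in u:
        q = DELTA[q][a]
    return q
```

Distinct words can share a residual; for instance reading b from the state for \(L^a\) self-loops, so \(L^{ab} = L^a\).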
We say that a DFA \(\mathcal {A}= (Q, \varSigma , \delta , q_0, F)\) is weakly acyclic if for every \(q \in Q\), \(w \in \varSigma ^+\) and \(a \in \alpha (w)\), it is the case that \(\delta (q, w) = q\) implies \(\delta (q, a) = q\). For example, Figure 1 depicts a weakly acyclic DFA. Equivalent definitions of weak acyclicity are as follows:
  • the binary relation \(\preceq \) over Q given by “\(q \preceq q'\) if \(\delta (q, w) = q'\) for some word w” is a partial order;
  • each strongly connected component of the underlying directed graph of \(\mathcal {A}\) contains a single state; and
  • the underlying directed graph of \(\mathcal {A}\) does not have any cycle beyond self-loops.
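The last characterization suggests a direct check: drop self-loops and test the remaining graph for acyclicity. A hedged sketch (the dict-of-dicts DFA encoding is ours), using Kahn-style topological sorting:

```python
def is_weakly_acyclic(delta):
    # delta: state -> {letter -> successor state}. Weakly acyclic iff
    # the state graph without self-loops is acyclic.
    graph = {q: {p for p in succ.values() if p != q} for q, succ in delta.items()}
    indeg = {q: 0 for q in graph}
    for q in graph:
        for p in graph[q]:
            indeg[p] += 1
    queue = [q for q in graph if indeg[q] == 0]
    seen = 0
    while queue:                       # repeatedly remove sources
        q = queue.pop()
        seen += 1
        for p in graph[q]:
            indeg[p] -= 1
            if indeg[p] == 0:
                queue.append(p)
    return seen == len(graph)          # all states removed iff acyclic
```

On our reconstruction of the Figure 1 DFA the check succeeds, while a two-state cycle fails it.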
Analogously, a nondeterministic finite automaton (NFA) \(\mathcal {A}= (Q, \varSigma , \delta , q_0, F)\) is weakly acyclic if, for every \(q \in Q\) and \(w \in \varSigma ^+\), \(q \in \delta (q, w)\) implies \(\delta (q, a) = \{q\}\) for all \(a \in \alpha (w)\). In other words, the underlying directed graph does not contain any simple cycle beyond self-loops, and nondeterminism with a letter a can only occur from states without self-loops labeled by a.
We make some observations whose proofs appear in the appendix of the full version of this work: both the powerset construction (determinization) and DFA minimization preserve weak acyclicity.
Proposition 1
Applying the powerset construction to a weakly acyclic NFA yields a weakly acyclic DFA.
Proposition 2
Let \(\mathcal {A}\) be a weakly acyclic DFA. The minimal DFA that accepts \(\textrm{L}(\mathcal {A})\) is also weakly acyclic.
We say that a language is weakly acyclic if it is accepted by a weakly acyclic automaton \(\mathcal {A}\). By the above propositions, this means that \(\mathcal {A}\) can interchangeably be an NFA, a DFA, or a minimal DFA. The lemma below yields a characterization of weakly acyclic languages in terms of residuals rather than automata.
Lemma 1
A regular language \(L \subseteq \varSigma ^*\) is weakly acyclic iff for all \(u \in \varSigma ^*\) and \(v \in \varSigma ^+\) it is the case that \(L^u = L^{uv}\) implies \(L^u = L^{uw}\) for every \(w \in \alpha (v)^*\).
It can also be shown that weakly acyclic languages are precisely the languages described by such weakly acyclic expressions: \(r ::= \emptyset \mid \varGamma ^* \mid \varLambda ^* a r \mid r + r\) where \(\varGamma , \varLambda \subseteq \varSigma \) and \(a \in \varSigma \setminus \varLambda \) (see the appendix of the full version). Note that these expressions are essentially “\(\mathcal {R}\)-expressions”, which are known to be equivalent to weakly acyclic languages via algebraic automata theory. Moreover, [29, Thm. 7] shows that weakly acyclic DFAs and NFAs are equivalent, and the proof is very similar to our conversion from weakly acyclic NFAs to expressions given in the appendix of the full version.
Complementing a weakly acyclic DFA preserves weak acyclicity since no cycle is created. Moreover, weakly acyclic regular expressions allow for union. Thus:
Proposition 3
Weakly acyclic languages are closed under complementation, union and intersection.

3 A data structure for weakly acyclic languages

Note that if a language L is weakly acyclic, then so is \(L^a\) for all \(a \in \varSigma \). From this simple observation, one can imagine an infinite deterministic automaton where each state is a weakly acyclic language L, and each a-transition leads to \(L^a\). Let us define this “master automaton” formally.
Definition 1
The master automaton over alphabet \(\varSigma \) is the tuple \(\mathcal {M}= (Q_{\mathcal {M}}, \varSigma , \delta _{\mathcal {M}}, F_{\mathcal {M}})\), where
  • \(Q_{\mathcal {M}}\) is the set of all weakly acyclic languages over \(\varSigma \);
  • \(\delta _{\mathcal {M}} :Q_{\mathcal {M}} \times \varSigma \rightarrow Q_{\mathcal {M}}\) is given by \(\delta _{\mathcal {M}}(L, a) :=L^a\) for all \(L \in Q_{\mathcal {M}}\) and \(a \in \varSigma \);
  • \(L \in F_{\mathcal {M}}\) iff \(\varepsilon \in L\).
Given weakly acyclic languages K and L, let \(K \preceq L\) denote that \(K = L^w\) for some word w. An immediate consequence of the definition of weakly acyclic DFAs is that \({\preceq }\) is a partial order. The minimal elements of \({\preceq }\) satisfy the equation \(L = L^a\) for every letter a, and this system of equations has exactly two solutions: \(L = \emptyset \) and \(L = \varSigma ^*\). Moreover, it is easy to show that \({\preceq }\) has no infinite descending chains. This allows us to reason about a weakly acyclic language recursively through its residuals until reaching \(\emptyset \) or \(\varSigma ^*\).
Proposition 4
Let L be a weakly acyclic language. The language accepted from the state L of the master automaton is L.
By Proposition 4, we can look at the master automaton as a structure containing DFAs recognizing all the weakly acyclic languages. To make this precise, each weakly acyclic language L determines a DFA \(\mathcal {A}_L = (Q_L, \varSigma , \delta _L, q_{0L}, F_L)\) as follows: \(Q_L\) is the set of states of the master automaton reachable from the state L, \(q_{0L}\) is the state L, \(\delta _L\) is the projection of \(\delta _{\mathcal {M}}\) onto \(Q_L\), and \(F_L :=F_{\mathcal {M}} \cap Q_L\). It is easy to show that \(\mathcal {A}_L\) is the minimal DFA recognizing L.
Proposition 5
For every weakly acyclic language L, automaton \(\mathcal {A}_L\) is the minimal DFA recognizing L.
Proposition 5 allows us to define a data structure for representing finite sets of weakly acyclic languages. Loosely speaking, the structure representing the languages \(\mathcal {L}= \{L_1, \ldots , L_n\}\) is the fragment of the master automaton containing the states recognizing \(L_1, \ldots , L_n\) and their descendants. It is a DFA with multiple initial states which we call the weakly acyclic diagram for \(\mathcal {L}\). Formally:
Definition 2
Let \(\mathcal {L}= \{L_1, \ldots , L_n\}\) be weakly acyclic languages over \(\varSigma \). The weakly acyclic diagram \(\mathcal {A}_\mathcal {L}\) is the tuple \((Q_\mathcal {L}, \varSigma , \delta _\mathcal {L}, Q_{0\mathcal {L}}, F_\mathcal {L})\) where \(Q_\mathcal {L}\) is the subset of states of the master automaton reachable from at least one of \(L_1, \ldots , L_n\); \(Q_{0\mathcal {L}} :=\{L_1, \ldots , L_n\}\); \(\delta _\mathcal {L}\) is the projection of \(\delta _M\) onto \(Q_\mathcal {L}\); and \(F_\mathcal {L}= F_\mathcal {M}\cap Q_\mathcal {L}\).
In order to manipulate weakly acyclic diagrams, we represent them as tables of nodes. Let us fix \(\varSigma = \{a_1, a_2, \ldots , a_m\}\). A node is a triple \(q :(s, b)\) where
  • q is the node identifier;
  • \(s = (q_1, q_2, \ldots , q_m)\) is the successor tuple of the node, where each \(q_i\) is either a node identifier or the special symbol self; and
  • \(b \in \{0, 1\}\) is the acceptance flag that indicates whether q is accepting or not.
We write \(q^{a_i}\) to denote \(q_i\). We write \(\textrm{L}(s, b)\) for the language defined recursively as follows, where \(i \in [1..m]\) and \(w \in \varSigma ^*\):
$$\begin{aligned} \varepsilon \in \textrm{L}(q) &\iff b = 1, \\ a_i w \in \textrm{L}(q) &\iff \left( (q^{a_i} = \textsc {self}\wedge w \in \textrm{L}(q)) \vee (q^{a_i} \ne \textsc {self}\wedge w \in \textrm{L}(q^{a_i}))\right) . \end{aligned}$$
For every node \(q :(s, b)\), we write \(\textrm{L}(q)\) to denote \(\textrm{L}(s, b)\).
We denote the identifiers of the nodes for languages \(\emptyset \) and \(\varSigma ^*\) by \(q_{\emptyset }\) and \(q_{\varSigma ^*}\), respectively, i.e., \(q_{\emptyset }:((\textsc {self}, \ldots , \textsc {self}), 0)\) and \(q_{\varSigma ^*}:((\textsc {self}, \ldots , \textsc {self}), 1)\).
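To make the representation concrete, here is a minimal Python sketch (ours, with an assumed three-letter alphabet) of the node table and of the membership test implied by the recursive definition of \(\textrm{L}(q)\):

```python
SELF = "self"                      # the special successor symbol
ALPHABET = ("a", "b", "c")         # illustrative alphabet

# Node table: identifier -> (successor tuple, acceptance flag).
# Identifiers 0 and 1 play the roles of q_empty and q_sigma_star.
table = {
    0: ((SELF, SELF, SELF), 0),    # language ∅
    1: ((SELF, SELF, SELF), 1),    # language Σ*
    2: ((SELF, 1, 0), 0),          # a*bΣ*: self-loop on a, b -> Σ*, c -> ∅
}

def member(word, q):
    # Follow successors, staying on q whenever the successor is SELF.
    for letter in word:
        s, _ = table[q]
        nxt = s[ALPHABET.index(letter)]
        if nxt != SELF:
            q = nxt
    return table[q][1] == 1
```

For node 2, the word aab is accepted (the two a's self-loop, b moves to \(q_{\varSigma ^*}\)), while aa and ca are rejected.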
Fig. 2.
Example of the representation of weakly acyclic languages.
Example 1
Let K and L be the languages over \(\varSigma = \{a, b, c\}\) described by the regular expressions \(a^* (\varepsilon + b^+ a \varSigma ^*)\) and \(a b^* a \varSigma ^* + (b + c)\varSigma ^*\). These languages are represented by the table depicted on the left of Figure 2, which corresponds to the diagram on the right. It is readily seen that \(\textrm{L}(q_3) = K\) and \(\textrm{L}(q_4) = L\).    \(\square \)
Note that there is a risk that two distinct nodes represent the same language. For example, suppose \(\varSigma = \{a, b\}\) and that we create a node \(p :((\textsc {self}, q_{\varSigma ^*}), 1)\). We obtain the following table, where \(\textrm{L}(p) = \textrm{L}(q_{\varSigma ^*}) = \{a, b\}^*\):
identifier | succ. tuple (a, b) | flag
p | \((\textsc {self}, q_{\varSigma ^*})\) | 1
\(q_{\varSigma ^*}\) | \((\textsc {self}, \textsc {self})\) | 1
We will avoid this by maintaining a table of nodes in which distinct nodes have distinct languages. The procedure make of Algorithm 1 serves this purpose. Given a successor tuple s and an acceptance flag b, it first checks whether there is already an entry for language \(\textrm{L}(s, b)\), and if not it creates one with a new identifier. The identifiers created by make are generated in increasing order. We assume that the table initially contains \(q_{\emptyset }\) and \(q_{\varSigma ^*}\) as the first two identifiers.
Let us explain how \(\texttt {make}\) checks whether a node must be created. Given a tuple \(s = (q_1, q_2, \ldots , q_m)\), we write \(s[r / r']\) to denote the tuple obtained from s by replacing each occurrence \(q_i = r\) by \(r'\). As already explained, when trying to add \((s, b)\) to the table, there might already be an entry \(q :(s', b')\) with \(\textrm{L}(s', b') = \textrm{L}(s, b)\). It can be shown that this happens iff \(b' = b\) and there exists \(q' \in s\) such that \(s' = s[q' / \textsc {self}]\) (see the proof of Proposition 6 below in the full version [10]). In fact, it can be shown that such a \(q'\) must be the maximal identifier of s; hence, there is no need to check all elements of s. For example, in the aforementioned problematic case, we have \(s = (\textsc {self}, q_{\varSigma ^*})\) and \(s[q_{\varSigma ^*}/ \textsc {self}] = (\textsc {self}, \textsc {self})\) which is already in the table, so no node is created.
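This check can be sketched as follows (a hedged reconstruction over a two-letter alphabet; the paper's Algorithm 1 is not reproduced here, and all names are ours):

```python
SELF = "self"

# Table over Σ = {a, b}; identifiers 0 and 1 are q_empty and q_sigma_star.
table = {0: ((SELF, SELF), 0), 1: ((SELF, SELF), 1)}
memo = {node: q for q, node in table.items()}   # (tuple, flag) -> identifier

def make(s, b):
    # If replacing the maximal identifier qm of s by SELF yields exactly
    # the entry of qm, then L(s, b) = L(qm): no node is created.
    ids = [q for q in s if q != SELF]
    if ids:
        qm = max(ids)
        if table[qm] == (tuple(SELF if q == qm else q for q in s), b):
            return qm
    if (s, b) not in memo:                       # otherwise: hash-consing
        memo[(s, b)] = len(table)
        table[len(table)] = (s, b)
    return memo[(s, b)]
```

On the problematic example above, make((SELF, 1), 1) returns the identifier of \(q_{\varSigma ^*}\) instead of creating a node.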
The amortized running time of make is \(\mathcal {O}(|\varSigma |)\). Further, it works as intended:
Proposition 6
A table obtained from successive calls to Algorithm 1 does not contain nodes \(q \ne q'\) such that \(\textrm{L}(q) = \textrm{L}(q')\).

4 Operations on weakly acyclic languages

4.1 Unary and binary operations

Algorithms 2 and 3 describe recursive procedures that respectively complement a weakly acyclic language, and intersect two weakly acyclic languages. They are very similar to the classical algorithms for negation and conjunction in OBDDs (see [20, Chap. 6, Algorithms 27 and 26]); they essentially only differ in the usage of self. For this reason, their correctness and time complexity are established with the same arguments. Let us elaborate for readers less familiar with OBDDs.
Procedure comp terminates because on each recursive call, \(q^a\) is a smaller identifier than q. Procedure inter terminates because on each recursive call, either \(p^a \ne \textsc {self}\) and \(p^a\) is a smaller identifier than p, or likewise for \(q^a\) and q.
The two procedures may take exponential time since the recursion may recompute the same values repeatedly. However, this is easily avoided with memoization. This applies to all our algorithms: each recursive procedure \(\texttt {t}(x)\) can first check in a dictionary whether the output value for x has already been computed, and only proceed to compute it if not. To avoid cluttering the presentation, we will not explicitly describe such memoization in our pseudocode.
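As an illustration, a memoized complement in the spirit of Algorithm 2 can be sketched as follows (our reconstruction over Σ = {a, b}; the make used here performs only exact deduplication and omits the self-contraction check of Section 3):

```python
SELF = "self"

table = {0: ((SELF, SELF), 0), 1: ((SELF, SELF), 1)}   # ∅ and Σ* over {a, b}
memo_make = {node: q for q, node in table.items()}

def make(s, b):                     # simplified hash-consing only
    if (s, b) not in memo_make:
        memo_make[(s, b)] = len(table)
        table[len(table)] = (s, b)
    return memo_make[(s, b)]

memo_comp = {}                      # memoization of comp

def comp(q):
    # Flip acceptance flags; SELF successors stay SELF.
    if q == 0: return 1             # complement of ∅ is Σ*
    if q == 1: return 0             # complement of Σ* is ∅
    if q not in memo_comp:
        s, b = table[q]
        t = tuple(x if x == SELF else comp(x) for x in s)
        memo_comp[q] = make(t, 1 - b)
    return memo_comp[q]
```

Complementing the node for bΣ* yields a node for ε + aΣ*, and complementing twice returns the original identifier.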
The following proposition is established with standard arguments (see the appendix of the full version).
Proposition 7
Algorithm 2 and Algorithm 3 are correct. Moreover, with memoization, they respectively work in time \(\mathcal {O}(|\varSigma | n_q)\) and \(\mathcal {O}(|\varSigma | n_p n_q)\), where \(n_x\) is the number of nodes reachable from node x.
It is easy to adapt Algorithm 3 to other binary operations such as union: It suffices to change the base cases and the way the acceptance flag is set.
Note that testing emptiness, universality and language equality can be done in constant time. Indeed, as the table keeps a unique identifier for each language, we have \(\textrm{L}(q) = \emptyset \) iff \(q = q_{\emptyset }\), \(\textrm{L}(q) = \varSigma ^*\) iff \(q = q_{\varSigma ^*}\), and \(\textrm{L}(p) = \textrm{L}(q)\) iff \(p = q\).

4.2 Fixed-length relations: a general approach

For a relation \(R \subseteq \varSigma ^* \times \varSigma ^*\) and a language \(L \subseteq \varSigma ^*\), we define \(\textrm{Post}_{R}(L) :=\{v \in \varSigma ^* : \exists u \in L, (u, v) \in R\}\) and \(\textrm{Pre}_{R}(L) :=\{u \in \varSigma ^* : \exists v \in L, (u, v) \in R\}\). A relation is said to be fixed-length if \((u, v) \in R\) implies \(|u| = |v|\).
We consider fixed-length regular relations, i.e., those that can be represented by automata over alphabet \(\varSigma \times \varSigma \), which we call transducers. Given a fixed-length regular relation R and a weakly acyclic language L, \(\textrm{Post}_{R}(L)\) and \(\textrm{Pre}_{R}(L)\) may not be weakly acyclic. For example, consider the relation R and the language L depicted by the transducer and automata of Figure 3. We have \(\textrm{Post}_{R}(L) = (a + b)^* b\) which is not weakly acyclic (as its minimal DFA has a nontrivial cycle).
Fig. 3.
Left: A transducer \(\mathcal {T}\) that converts each occurrence of letter c into letter b. Right: A weakly acyclic DFA \(\mathcal {A}\) accepting language \((a + b)^* c\).
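The claim can be checked by brute force on bounded-length words; the sketch below (ours) encodes the Figure 3 relation, which rewrites every occurrence of c into b:

```python
import itertools

def post(lang):
    # Post_R for the Figure 3 relation: each u has exactly one image,
    # namely u with every c replaced by b.
    return {u.replace("c", "b") for u in lang}

def words(sigma, max_len):
    # All words over sigma of length at most max_len.
    return {"".join(t) for n in range(max_len + 1)
            for t in itertools.product(sigma, repeat=n)}
```

Applied to the bounded-length slice of \((a + b)^* c\), this yields the corresponding slice of \((a + b)^* b\).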
Yet, under the guarantee that the resulting language is weakly acyclic, we can compute \(\textrm{Post}_{R}(L)\) and \(\textrm{Pre}_{R}(L)\). The key observation is that if a DFA accepting a weakly acyclic language has a cycle, then this cycle can be “contracted”. Thus, given a transducer \(\mathcal {T}\) and a node q, we apply the powerset construction on the pairing of \(\mathcal {T}\) and q, and we contract cycles on the fly. Informally, cycle detection is achieved by testing whether a successor is already on the call stack. Algorithm 4 implements this idea for \(\textrm{Pre}_{R}(L)\). In the pseudocode, symbols S and \(S'\) denote sets of pairs of states.
Let us analyze Algorithm 4. We write \(\textrm{Pre}_{p}(q)\) to denote \(\textrm{Pre}_{\textrm{L}(p)}(\textrm{L}(q))\). Let \(\textrm{L}(S) :=\bigcup _{(p, q) \in S} \textrm{Pre}_{p}(q)\). Given S and \(a \in \varSigma \), we write \(S \xrightarrow {\,}^{a} S'\) where
$$ S' :=\bigcup _{b \in \varSigma } \left\{ (p', q') : (p, q) \in S, p' \in \delta (p, (a, b)), q' = \texttt {succ} \texttt {(}q, b\texttt {)}\right\} . $$
In words, “\(S \xrightarrow {\,}^{a} S'\)” denotes that \(\texttt {pre\text {-}aux} \texttt {(}S\texttt {)}\) constructs the set \(S'\) on Line 8. By extension, we write \(S \xrightarrow {\,}^{\varepsilon } S'\) if \(S = S'\), and we write \(S \xrightarrow {\,}^{a_1 a_2 \cdots a_n} S'\) if \(S = S_0 \xrightarrow {\,}^{a_1} S_1 \xrightarrow {\,}^{a_2} S_2 \cdots \xrightarrow {\,}^{a_n} S_n = S'\) for some \(S_0, S_1, \ldots , S_n\).
Proposition 8
Let \(w \in \varSigma ^*\). If \(S \xrightarrow {\,}^{w} S'\), then \(\textrm{L}(S)^w = \textrm{L}(S')\).
Proposition 9
If \(S_0 \xrightarrow {\,}^{*} S' \xrightarrow {\,}^{*} S \xrightarrow {\,}^{a} S'\) for some \(a \in \varSigma \), and \(\textrm{L}(S_0)\) is weakly acyclic, then \(\textrm{L}(S) = \textrm{L}(S')\).
Proposition 10
Algorithm 4 is correct and terminates.
Proof
Termination follows easily by the fact that only finitely many subsets can be generated (see the appendix of the full version). Let us prove correctness.
Let \(S_0 :=\{(p_0, q)\}\) be the input and let \(\texttt {pre\text {-}aux} \texttt {(}S\texttt {)}\) be a recursive call made by \(\texttt {pre} \texttt {(}S_{0} \texttt {)}\). Recall that \(\textrm{L}(S) = \bigcup _{(p, q) \in S} \textrm{Pre}_{p}(q)\). So, we must show that \(\textrm{L}(\texttt {pre\text {-}aux} \texttt {(}S\texttt {)}) = \textrm{L}(S)\). To do so, we must prove that, for each \(a \in \varSigma \), Algorithm 4 assigns to \(s_a\) the state accepting \(\textrm{L}(S)^a\). Moreover, we must show that the acceptance flag is set correctly. We show the latter first, and then the former.
We have \(\varepsilon \in \bigcup _{(p, q) \in S} \textrm{Pre}_{p}(q)\) iff there exists \((p, q) \in S\) such that \((\varepsilon , \varepsilon ) \in \textrm{L}(p)\) and \(\varepsilon \in \textrm{L}(q)\) iff there exists \((p, q) \in S\) such that \(p \in F\) and \(\textrm{flag}(q) = true \). Thus, the flag is set correctly on Line 12.
Let \(a \in \varSigma \). Let \(S'\) be the set computed on Line 8 for letter a. By definition, we have \(S \xrightarrow {\,}^{a} S'\). We make a case distinction on whether \(S'\) is marked or not.
Case 1: \(S'\) is marked. Since \(S'\) is marked, we have \(S_0 \xrightarrow {\,}^{*} S' \xrightarrow {\,}^{*} S\). Therefore, \(S_0 \xrightarrow {\,}^{*} S' \xrightarrow {\,}^{*} S \xrightarrow {\,}^{a} S'\). By Proposition 9, we have \(\textrm{L}(S) = \textrm{L}(S')\). Moreover, by Proposition 8, we have \(\textrm{L}(S)^a = \textrm{L}(S')\). Thus, \(\textrm{L}(S)^a = \textrm{L}(S') = \textrm{L}(S)\). As \(S'\) is marked, Algorithm 4 executes Line 9, which correctly assigns “\(s_a \leftarrow \textsc {self}\)”.
Case 2: \(S'\) is unmarked. By Proposition 8, \(\textrm{L}(S)^a = \textrm{L}(S')\). As \(S'\) is unmarked, Algorithm 4 executes Line 10, which correctly assigns “\(s_a \leftarrow \texttt {pre\text {-}aux} \texttt {(}S' \texttt {)}\)”.    \(\square \)
Note that Algorithm 4 has exponential worst-case time complexity since this is already the case for the particular case of OBDDs (see [20, Chap. 6.5, pp. 148–149]). Indeed, given an arbitrary NFA A, one can find a (deterministic) transducer and a DFA whose composition yields A, and it is well known that determinizing an NFA blows up its size exponentially in the worst case.

4.3 Fixed-length relations: a more efficient approach

As just mentioned, pairing a transducer and an automaton can result in a nondeterministic automaton. For this reason, Algorithm 4 determinizes on the fly by constructing subsets of states, and minimizes on the fly by contracting cycles.
We describe a setting in which determinism is guaranteed, and hence where cycle contraction is avoided. This allows for a polynomial-time procedure.
Observe that a weakly acyclic language K over \(\varSigma \times \varSigma \) can be represented with our data structure. As shown in Figure 3, even if K and L are weakly acyclic, it is not necessarily the case that \(\textrm{Post}_{K}(L)\) and \(\textrm{Pre}_{K}(L)\) are weakly acyclic. Yet, we provide a sufficient condition under which weak acyclicity is guaranteed.
Let K and L be weakly acyclic languages respectively over \(\varSigma \times \varSigma \) and \(\varSigma \). We say that K and L are pre-compatible if for every \(a \in \varSigma \) the following holds:
  • there exists at most one \(b \in \varSigma \) such that \(K^{(a, b)} \ne \emptyset \) and \(L^b \ne \emptyset \); and
  • if \(K^{(a, b)} \ne \emptyset \) and \(L^b \ne \emptyset \), then \(K^{(a, b)}\) and \(L^b\) are pre-compatible.
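Under these two conditions the recursion is deterministic: for each input letter a, at most one b leads to a nonempty continuation, so no subset construction is needed. The sketch below is a hedged reconstruction (our encoding and names; the paper's Algorithm 5 may differ), with K the identity relation over Σ = {a, b}:

```python
SELF = "self"
SIGMA = ("a", "b")

# Σ-language nodes live in `table` (with the duplicate-avoiding make of
# Section 3); the relation K lives in `ktable`, whose successor tuples
# are indexed by letter pairs (a_i, a_j) at position len(SIGMA)*i + j.
table = {0: ((SELF, SELF), 0), 1: ((SELF, SELF), 1)}     # ∅ and Σ*
memo = {node: q for q, node in table.items()}

def make(s, b):
    ids = [q for q in s if q != SELF]
    if ids:
        qm = max(ids)
        if table[qm] == (tuple(SELF if q == qm else q for q in s), b):
            return qm                                     # contract to qm
    if (s, b) not in memo:
        memo[(s, b)] = len(table)
        table[len(table)] = (s, b)
    return memo[(s, b)]

ktable = {
    0: ((SELF,) * 4, 0),                                  # empty relation
    1: (tuple(SELF if i == j else 0                       # identity relation
              for i in range(2) for j in range(2)), 1),
}

def succ(tab, q, i):
    s, _ = tab[q]
    return q if s[i] == SELF else s[i]

def pre(p, q):
    # Deterministic recursion under the pre-compatibility guarantee;
    # identifier 0 denotes the empty language/relation in both tables.
    s, m = [], len(SIGMA)
    for i in range(m):
        sa = 0                                            # default: ∅
        for j in range(m):                                # at most one j fits
            pp, qq = succ(ktable, p, m * i + j), succ(table, q, j)
            if pp != 0 and qq != 0:
                sa = SELF if (pp, qq) == (p, q) else pre(pp, qq)
                break
        s.append(sa)
    return make(tuple(s), ktable[p][1] & table[q][1])
```

With the identity relation, pre returns its argument, and with the empty relation it returns \(q_{\emptyset }\), exercising the cycle-contracting make.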
Algorithm 5 describes a procedure that computes a node accepting \(\textrm{Pre}_{K}(L)\). Similarly to Algorithms 2 and 3, a simple analysis yields the following proposition (see the appendix of the full version).
Proposition 11
Algorithm 5 is correct. Moreover, with memoization, it works in time \(\mathcal {O}(|\varSigma |^2 n_p n_q)\), where \(n_x\) is the number of nodes reachable from node x.

5 Weakly acyclic (regular) model checking

In this section, we present interesting examples of systems that can be modeled with WADs. In our setting, a system has a set of configurations equipped with a partial order \({\preceq }\) and a monotone labeled transition relation, i.e., if \(C \xrightarrow {\,}^{t} D\) and \(C' \succeq C\), then \(C' \xrightarrow {\,}^{t} D'\) for some \(D' \succeq D\). The safety verification problem asks, given configurations C and \(C'\), whether \(C \xrightarrow {\,}^{*} C''\) for some \(C'' \succeq C'\).
For every D, let \(\textrm{Pre}(D) :=\{C : C \xrightarrow {\,} D\}\) denote the set of immediate predecessors of D. We extend this notation to sets, i.e., \(\textrm{Pre}(X) :=\bigcup _{C \in X} \textrm{Pre}(C)\). Let \(\textrm{Pre}^{*}(X) :=\bigcup _{i \ge 0} \textrm{Pre}^{i}(X)\). A set of configurations X is upward-closed if \({\uparrow \!{X}} = X\), where \({\uparrow \!{X}} :=\{C' : C' \succeq C \text { and } C \in X\}\).
Safety verification amounts to testing whether \(C \in \textrm{Pre}^{*}({\uparrow \!{C'}})\). If the system is well structured, namely if \({\preceq }\) is a well-quasi-order, then it is well-known that \(\textrm{Pre}^{*}({\uparrow \!{C'}}) = \textrm{Pre}^{i}({\uparrow \!{C'}})\) for some \(i \ge 0\). Thus, verification can be done by the backward algorithm: compute \(\textrm{Pre}^{0}({\uparrow \!{C'}}) \cup \textrm{Pre}^{1}({\uparrow \!{C'}}) \cup \cdots \) until reaching a fixed point. So, for well-structured transition systems, the approach is as follows:
  • Choose a suitable alphabet \(\varSigma \);
  • For each configuration D, define an encoding \(\textrm{enc}({\uparrow \!{D}}) \subseteq \varSigma ^*\) of \({\uparrow \!{D}}\) which is a weakly acyclic language;
  • For each t, define a transducer \(\mathcal {T}_t\) for the language \(\{(\textrm{enc}(D), \textrm{enc}(D')) : D \xrightarrow {\,}^{t} D'\} \subseteq (\varSigma \times \varSigma )^*\);
  • Create a weakly acyclic diagram for \(\textrm{L}(q_0) = \textrm{enc}({\uparrow \!{C'}})\) and iteratively compute \(\textrm{L}(q_{i+1}) = \textrm{L}(q_i) \cup \bigcup _{t} \textrm{Pre}_{\mathcal {T}_t}(\textrm{L}(q_i))\) until \(q_{i+1} = q_i\).
Since \({\preceq }\) is a well-quasi-order, any upward-closed set X of configurations has a finite basis, i.e., there exist configurations \(C_1, \ldots , C_n\) such that \(X = \bigcup _{i \in [1..n]} {\uparrow \!{C_i}}\). Thus, as weakly acyclic languages are closed under union, \(\textrm{enc}(X)\) is weakly acyclic whenever each \(\textrm{enc}({\uparrow \!{C_i}})\) is. Moreover, since \(\textrm{Pre}(X)\) is upward-closed by monotonicity, \(\textrm{Pre}_{\mathcal {T}_t}(\textrm{enc}(X))\) is also weakly acyclic.
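To illustrate the backward algorithm independently of WADs, here is a minimal sketch on a toy monotone counter system, where upward-closed sets are represented by finite bases of minimal elements (as the well-quasi-order guarantees). The Petri-net-style transition rule and all helper names are ours, chosen only for illustration:

```python
def leq(m, n):
    return all(x <= y for x, y in zip(m, n))

def minimize(basis):
    """Keep only the minimal elements of a basis."""
    return {m for m in basis
            if not any(leq(n, m) and n != m for n in basis)}

def pre_basis(m, take, give):
    # smallest m'' that enables the transition (m'' >= take) and whose
    # successor covers m:  m''_i = max(m_i - give_i, 0) + take_i
    return tuple(max(x - g, 0) + t for x, t, g in zip(m, take, give))

def backward(target, transitions):
    """Iterate Pre until a fixed point; returns a basis of Pre*(up(target))."""
    basis = {target}
    while True:
        new = minimize(basis | {pre_basis(m, t, g)
                                for m in basis for (t, g) in transitions})
        if new == basis:
            return basis
        basis = new
```

With a single transition moving one token from the second counter to the first, the basis of configurations that can cover (2, 0) is {(2, 0), (1, 1), (0, 2)}.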

5.1 Lossy channel systems

A channel system is a finite directed graph whose nodes P are called states, and whose arcs, called transitions, are labeled by \({\mathrm {\texttt {read}}}_{i}(a)\), \({\mathrm {\texttt {write}}}_{i}(a)\) or \({\mathrm {\texttt {nop}}}\), with \(a \in \varGamma \) and \(i \in [1..k]\), where k is the number of channels. There are m processes, each starting in some state, which evolve by reading from the left and writing to the right of the channels, and by moving to other states. A lossy channel system (LCS) is a channel system in which any letter may be lost at any moment from any channel.
Fig. 4.
Left: Example of a lossy channel system with \(\varGamma = \{a, b\}\). Right: Transducers encoding respectively the transitions \({\mathrm {\texttt {read}}}_{1}(a)\), \({\mathrm {\texttt {write}}}_{2}(b)\) and \({\mathrm {\texttt {nop}}}\) (under the semi-lossy semantics), where \(s \in \{p, q, r\}\), \(\sigma \in \{a, b\}\) and \(\gamma \) stands for any letter.
For example, consider the channel system depicted on the left of Figure 4. Consider its configuration \(C :=([p, q], ab, \varepsilon , b)\) with two processes in states p and q, and with three channels currently containing ab, \(\varepsilon \) and b. We have, e.g.,
$$ C \xrightarrow {{\mathrm {\texttt {read}}}_{1}(a)} ([q, q], b, \varepsilon , b) \xrightarrow {{\mathrm {\texttt {write}}}_{2}(b)} ([q, p], b, b, b) \xrightarrow {{\mathrm {\texttt {nop}}}} ([q, r], b, b, b). $$
We encode configurations as words over \(\varSigma :=P \cup \varGamma \cup \{\texttt {\#}\}\), e.g. the encoding of C is \(pq \texttt {\#}ab \texttt {\#}\texttt {\#}b \texttt {\#}\). Moreover, we encode operations as transducers with an extra letter \(\texttt {X}\). Let us explain this through an example.
The transducers for the three transitions of our example are depicted on the right of Figure 4. As a preprocessing step for an operation \({\mathrm {\texttt {write}}}_{i}(\sigma )\), we pad the channels of a configuration on the right with \(\texttt {X}^*\). This way, the first occurrence of \(\texttt {X}\) can be replaced by the letter to insert. The padding can then be removed in a postprocessing step. Furthermore, we model the “lossiness” of the channels through \({\mathrm {\texttt {read}}}_{i}(\sigma )\): when reading a \(\sigma \), all letters in front of the first \(\sigma \) may be lost. The transducers for \({\mathrm {\texttt {write}}}_{i}(\sigma )\) and \({\mathrm {\texttt {nop}}}\) are nonlossy. For safety verification, the standard “pure lossy” semantics is equivalent to our “semi-lossy” semantics.
Let us formalize it. Given configurations \(C = (X, w_1, \ldots , w_k)\) and \(C' = (X', w_1', \ldots , w_k')\), we write \(C \preceq C'\) if \(X = X'\) and \(w_i \sqsubseteq w_i'\) for all \(i \in [1..k]\), where \({\sqsubseteq }\) denotes the subword order. For example, \(([p, q], ab, \varepsilon , b) \preceq ([p, q], aab, a, baa)\). By Higman’s lemma, the subword order \({\sqsubseteq }\) is a well-quasi-order. Moreover, equality over a finite set is a well-quasi-order. As the order \({\preceq }\) over configurations is a product of these orders, it is also a well-quasi-order. Moreover:
Proposition 12
We have \(C \xrightarrow {*}_\text {pure-lossy} C''\) for some \(C'' \succeq C'\) iff \(C \xrightarrow {*}_\text {semi-lossy} C''\) for some \(C'' \succeq C'\). Furthermore, \({\xrightarrow {\!}_\text {semi-lossy}}\) is monotone.
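The encoding and the order used in this section can be sketched concretely (the helper names are ours, not the paper's):

```python
def enc(states, channels):
    """Word encoding of a configuration: states, then one #-terminated
    block per channel."""
    return "".join(states) + "#" + "".join(w + "#" for w in channels)

def is_subword(w, v):
    """Subword (scattered subsequence) test underlying the order on words;
    a greedy linear scan: `c in it` advances the iterator until c is found."""
    it = iter(v)
    return all(c in it for c in w)

def config_leq(c, d):
    """The order on configurations: equal state lists, channelwise subwords."""
    (x, ws), (y, vs) = c, d
    return x == y and all(is_subword(w, v) for w, v in zip(ws, vs))
```

For the running example, `enc(["p", "q"], ["ab", "", "b"])` yields `"pq#ab##b#"`, and `config_leq` confirms the comparison given above.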
Note that \(\textrm{enc}({\uparrow \!{C}})\) is weakly acyclic. Indeed, for \(C = ([p_1, \ldots , p_m], w_1, \ldots , w_k)\), the language \(\textrm{enc}({\uparrow \!{C}})\) can be expressed by a weakly acyclic expression:
$$ p_1 \cdots p_m\ \texttt {\#}\prod _{i \in [1..k]} \bigg (\Big (\prod _{a \in w_i} (\varGamma \setminus \{a\})^* a\Big )\ \varGamma ^*\ \texttt {\#}\bigg ). $$
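The expression can be made executable by compiling it into a regular expression, here for \(\varGamma = \{a, b\}\) (a sketch of our own; it assumes \(|\varGamma | \ge 2\) so the character classes are nonempty):

```python
import re

def upward_regex(states, channels, gamma="ab"):
    """Regex for enc(up(C)): per channel, each letter a of w_i is preceded
    by (Gamma \\ {a})*, and the channel may end with arbitrary Gamma*."""
    parts = ["".join(states), "#"]
    for w in channels:
        for a in w:
            others = "".join(c for c in gamma if c != a)
            parts.append("[%s]*%s" % (others, a))
        parts.append("[%s]*#" % gamma)
    return re.compile("^" + "".join(parts) + "$")
```

The resulting regex accepts the encoding of every configuration above \(C\) in the subword order, e.g. it accepts `pq#aab#a#baa#` but rejects `pq#ba##b#`.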

5.2 Petri nets

A Petri net is a tuple (P, T, F) where P is a finite set of places, T is a finite set of transitions disjoint from P, and \(F :(P \times T) \cup (T \times P) \rightarrow \mathbb {N}\). A marking is an assignment \(\boldsymbol{m} :P \rightarrow \mathbb {N}\) of tokens to places. A transition t is enabled in \(\boldsymbol{m}\) if \(\boldsymbol{m}(p) \ge F(p, t)\) for every place p. If t is enabled, then it can be fired, which leads to the marking \(\boldsymbol{m}'\) obtained by removing F(p, t) tokens from and adding F(t, p) tokens to each place p, i.e., \(\boldsymbol{m}'(p) = \boldsymbol{m}(p) - F(p, t) + F(t, p)\).
Fig. 5.
Left: Example of a Petri net where \(P = \{p, q\}\), \(T = \{t\}\), \(F(p, t) = 2\), \(F(t, p) = 1 = F(q, t)\) and \(F(t, q) = 3\). Right: Transducer encoding transition t.
Consider, e.g., the Petri net depicted on the left of Figure 5. From the marking \([p \mapsto 3, q \mapsto 1]\), written as \(\boldsymbol{m} :=(3, 1)\), we have, e.g., \(\boldsymbol{m} \xrightarrow {\,}^{t} (2, 3) \xrightarrow {\,}^{t} (1, 5)\).
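The firing rule can be transcribed directly; the sketch below fixes the place order (p, q) and takes the weight vectors from the caption of Figure 5:

```python
def fire(m, take, give):
    """Fire a transition: take[i] = F(p_i, t), give[i] = F(t, p_i).
    Returns the successor marking, or None if t is not enabled."""
    if any(x < t for x, t in zip(m, take)):
        return None
    return tuple(x - t + g for x, t, g in zip(m, take, give))

# Transition t of Figure 5: F(p,t) = 2, F(q,t) = 1, F(t,p) = 1, F(t,q) = 3.
TAKE, GIVE = (2, 1), (1, 3)
```

Firing from (3, 1) reproduces the run of the example, and t is disabled in (1, 5) since place p holds fewer than two tokens.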
We encode markings over https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-90660-2_2/MediaObjects/652620_1_En_2_Figg_HTML.gif , e.g. the encoding of \(\boldsymbol{m}\) is https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-90660-2_2/MediaObjects/652620_1_En_2_Figh_HTML.gif . Moreover, we encode operations as transducers with an extra letter \(\texttt {X}\). Let us explain this through an example. The transducer for the transition of our example is depicted on the right of Figure 5. As a preprocessing step, we pad places of a configuration on the right with \(\texttt {X}^*\). This symbol can be used to remove and add tokens. The padding is then removed in a postprocessing step.
We write \(\boldsymbol{m} \le \boldsymbol{m}'\) if \(\boldsymbol{m}(p) \le \boldsymbol{m}'(p)\) for all p. By Dickson’s lemma, \({\le }\) is a well-quasi-order. Note that \(\textrm{enc}({\uparrow \!{\boldsymbol{m}}})\) is weakly acyclic. Indeed, for \(\boldsymbol{m} = (n_1, \ldots , n_k)\), it is the language of this weakly acyclic expression: https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-90660-2_2/MediaObjects/652620_1_En_2_Figi_HTML.gif .
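The weakly acyclic expression itself appears in an image above; as a sketch we assume a unary token encoding, one `o` per token and `#` as place separator, mirroring the channel encoding of Section 5.1 (the actual alphabet is that of the unrendered figure):

```python
import re

def upward_marking_regex(m):
    """Regex for enc(up(m)): at least n_i tokens in place i, surplus allowed."""
    return re.compile("^" + "".join("o" * n + "o*#" for n in m) + "$")
```

For \(\boldsymbol{m} = (3, 1)\), the regex accepts any marking with at least three tokens in p and one in q.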

5.3 Broadcast protocols

A broadcast protocol is a finite directed graph whose nodes P are called states, and where each arc (transition) is labeled by \({a}\), \({b}{\texttt {!}}\), \({b}{\texttt {?}}\), \({c}{\texttt {!!}}\) or \({c}{\texttt {??}}\) where \(a \in \varGamma \), \(b \in \varGamma '\) and \(c \in \varGamma ''\) (three disjoint alphabets). There are m processes starting in some states, and evolving by local, rendez-vous or broadcast communication:
  • A local update a only changes the state of a single process;
  • A rendez-vous occurs when two processes respectively take a \({b}{\texttt {!}}\)-transition and a \({b}{\texttt {?}}\)-transition;
  • A broadcast occurs when a process takes a \({c}{\texttt {!!}}\)-transition; when this happens, any other process that can take a \({c}{\texttt {??}}\)-transition takes it.
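The three communication kinds can be sketched as step functions on configurations. The transition tables below are invented for illustration only; they are not those of Figure 6:

```python
# Hypothetical protocol: rendezvous b!: p -> q, b?: p -> r;
# broadcast c!!: q -> p, c??: q -> p.
SEND, RECV = {"p": "q"}, {"p": "r"}
BCAST, LISTEN = {"q": "p"}, {"q": "p"}

def rendezvous(config):
    """Pick the first sender/receiver pair and move both simultaneously."""
    config = list(config)
    i = next(i for i, s in enumerate(config) if s in SEND)
    j = next(j for j, s in enumerate(config) if s in RECV and j != i)
    config[i], config[j] = SEND[config[i]], RECV[config[j]]
    return config

def broadcast(config, sender):
    """The sender takes its c!!-transition; every other process with a
    matching c??-transition must take it, the rest stay put."""
    return [BCAST[s] if i == sender else LISTEN.get(s, s)
            for i, s in enumerate(config)]
```

Under these hypothetical tables, two rendezvous from [p, p, p, p] reach [q, r, q, r], and a c-broadcast then moves both q's to p while the r's, which have no c??-transition, stay.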
For example, consider the broadcast protocol depicted on the left of Figure 6. Let \(C :=[p, p, p, p]\) be the configuration with four processes in state p. We have, e.g., \( C \xrightarrow {\,}^{b} [q, r, p, p] \xrightarrow {\,}^{b} [q, r, q, r] \xrightarrow {\,}^{c} [p, q, p, r] \xrightarrow {\,}^{c} [p, p, p, q]\). We encode configurations as words over \(\varSigma :=P\), e.g. [prpq] becomes prpq. The transducers for our example are depicted on the middle and right of Figure 6.
Fig. 6.
Left: Example of a broadcast protocol with \(P = \{p, q, r\}\), \(\varGamma = \emptyset \), \(\varGamma ' = \{b\}\) and \(\varGamma '' = \{c\}\). Middle and right: Transducers encoding the transitions, where \(s \in \{p, q, r\}\).

6 Experimental results

We developed a prototype C++ library for weakly acyclic diagrams, which we refer to as wadl. It implements all operations described in Section 4, as well as backward reachability for lossy channel systems, Petri nets and broadcast protocols, as in Section 5. To benchmark its performance, we use four benchmark sets:
  • Lossy channel systems. We collected 13 instances used by the tools BML [25] and McScM [26, 27]. They arise from a mutual exclusion algorithm and from protocols (communication, business, etc.). The instance ring2 comes from a parameterized family, so we also generated ring3, ring4, etc.
  • Petri nets. We collected 173 instances used in many papers on Petri net safety verification (e.g. see [22]). They model protocols (mutual exclusion, communication, etc.); concurrent C and Erlang programs; provenance analysis of messages of a medical messaging system and a bug-tracking system.
  • Broadcast protocols. We collected 38 broadcast protocol instances from the benchmark set of dodo, a tool for regular model checking that uses the encoding of Section 5.3 for broadcast protocols [17, 35]. The instances correspond to safety properties of several cache-coherence protocols.
  • Regular model checking problems. Backwards reachability can also be applied to systems which are not well structured, as long as all the intermediate sets of configurations remain weakly acyclic. The algorithm may not terminate, but if it does then it yields the correct answer. We collected 35 further instances from dodo modeling parameterized mutual exclusion algorithms, variants of the dining philosophers, algorithms for termination detection and leader election, and token passing algorithms.
We carried out experiments for the four categories above with the goal of testing the potential and versatility of WADs. We used a machine with an 8-core \(\text {11}^\text {th}\) Gen Intel® Core™ i7-1165G7 CPU @ 2.80GHz running Kubuntu 24.04 with 31.1 GiB of memory. Time was measured as the sum of the user and sys times reported by the Linux tool time. We used a timeout of 10 minutes per instance, except for Petri nets, where we used a timeout of 20 minutes\(^{2}\).
Lossy channel systems. We made a few optimizations compared to Section 5. First, states are represented in binary to avoid using |P| letters. Second, we tweaked the transducers so that they satisfy the pre-compatibility condition of Section 4.3. The transducers of Figure 4 do not necessarily satisfy it, as the initial states use letters (x, y) and \((x', y')\) with \(x = x'\). This can be fixed, e.g. by making the initial state of the top transducer loop on (q, q) and (r, r), but not on (p, p). This slightly changes the semantics: only the first process in p can execute. Yet, as processes in a common state are indistinguishable, only the state count matters, and so the change does not alter safety verification. Moreover, in our benchmarks, processes evolve in disjoint subgraphs and hence can never even be in a common state.
We compared with BML [25], which runs the backward algorithm with pruning (modes: csre, mof, si); pruning can be disabled to obtain the basic algorithm. We also compared to McScM [26, 27], which (in its default mode) uses an adaptive extrapolated backwards computation. The results are depicted on the left of Figure 7. wadl solves the most instances and does so generally faster, although the competition is close. We further tested the same tools on ring3, ring4, etc. Our tool terminates for large n (e.g. ring99 in 2m38s, and ring187 is the largest instance solved), but the other tools did not scale, as shown in Table 1.
Fig. 7.
Cumulative number of instances decided over time (semi-log scale) for lossy channel systems excluding \(\{\texttt {ring3}, \texttt {ring4}, \ldots \}\) (left), and Petri nets (right). wadl+mist refers to the best outcome on instances solved by at least one of the two tools.
Table 1.
Time in seconds to solve ringn (to = timeout, so = stack overflow).
https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-90660-2_2/MediaObjects/652620_1_En_2_Tab1_HTML.png
Petri nets. We compared with mist [18, 24], an established and efficient tool that, in particular, offers an implementation of the backward algorithm without pruning, as in our case, but using interval sharing trees as the underlying data structure. The results are depicted on the right of Figure 7. mist is competitive, but we solved more instances in total: 31 instances were solved by us but not by mist, 2 the other way around, and 58 were solved by both tools.
Broadcast protocols. We solved 32/38 instances, with 25 solved within 100ms. We do not compare with dodo as its goal is not to check the property, but rather to compute small invariants explaining why it holds. So, although we would look good in comparison, it would be meaningless (the runtimes of dodo reported in [35] are at least ten times higher than ours in every instance).
Regular model checking. While there is a priori no reason why the instances should be checkable with WADs, we solved 22/35 instances (all within 10ms). We further timed out on 6 instances, and could not verify 7 instances. For the latter, a cycle was contracted in Line 9 of Algorithm 4, and hence we cannot be certain that the result is weakly acyclic.
The fact that we solved many instances suggests that weakly acyclic languages may be common in regular model checking. In fact, popular examples used to introduce the technique, such as the token-passing protocol, FIFO channel, and stack examples of the surveys [1, 2, 5, 7], or the alternating bit protocol of [4], are modeled with weakly acyclic languages.

7 Conclusion

We introduced weakly acyclic diagrams as a general data structure that extends binary decision diagrams to (possibly) infinite languages, while maintaining their algorithmic advantages. Many instances in regular model checking fall within the class of weakly acyclic languages. Moreover, weakly acyclic diagrams allow for the manipulation and verification of various well-structured transition systems such as lossy channel systems, Petri nets and broadcast protocols.
As future work, it would be interesting to see whether the good algorithmic properties can be retained under a slight extension of nondeterminism, e.g. for languages like \((a + b)^*b\) which are simple but not weakly acyclic per se.
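The dividing line can be checked mechanically: a DFA is weakly acyclic iff every cycle of its transition graph is a self-loop, i.e. the graph minus self-loops is acyclic. A sketch on a dict-based DFA representation of our own:

```python
def weakly_acyclic(delta):
    """delta: {state: {letter: state}}. True iff every cycle is a self-loop."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {q: WHITE for q in delta}
    def dfs(q):
        color[q] = GRAY
        for r in delta[q].values():
            if r == q:            # self-loops are permitted
                continue
            if color[r] == GRAY:  # a non-trivial cycle
                return False
            if color[r] == WHITE and not dfs(r):
                return False
        color[q] = BLACK
        return True
    return all(dfs(q) for q in delta if color[q] == WHITE)
```

The minimal DFA of \((a+b)^*b\) has the two-state cycle \(A \xrightarrow {b} B \xrightarrow {a} A\) and is rejected, whereas a DFA for \(a^*b\) only has self-loop cycles and is accepted.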
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://​creativecommons.​org/​licenses/​by/​4.​0/​), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Footnotes
1. See e.g. [20, Chap. 6], which even introduces a slight generalization of DFAs such that the OBDD is the unique minimal deterministic generalized automaton for \(L_f\).
2. During the reviewing process, a reviewer requested a longer timeout for this case.
References
2. Abdulla, P.A.: Regular model checking: Evolution and perspectives. In: Model Checking, Synthesis, and Learning – Essays Dedicated to Bengt Jonsson on The Occasion of His 60th Birthday. Lecture Notes in Computer Science, vol. 13030, pp. 78–96. Springer (2021). https://doi.org/10.1007/978-3-030-91384-7_5
3. Abdulla, P.A., Cerans, K., Jonsson, B., Tsay, Y.: General decidability theorems for infinite-state systems. In: Proc. 11th Annual IEEE Symposium on Logic in Computer Science (LICS). pp. 313–321. IEEE Computer Society (1996). https://doi.org/10.1109/LICS.1996.561359
6. Abdulla, P.A., Kindahl, M.: Decidability of simulation and bisimulation between lossy channel systems and finite state systems (extended abstract). In: Proc. 6th International Conference on Concurrency Theory (CONCUR). vol. 962, pp. 333–347. Springer (1995). https://doi.org/10.1007/3-540-60218-6_25
9. Andersen, H.R.: An introduction to binary decision diagrams (1998)
19. Eilenberg, S.: Automata, Languages, and Machines, vol. B. Academic Press (1976)
20. Esparza, J., Blondin, M.: Automata theory: An algorithmic approach. MIT Press (2023)
22. Esparza, J., Ledesma-Garza, R., Majumdar, R., Meyer, P.J., Niksic, F.: An SMT-based approach to coverability analysis. In: Proc. 26th International Conference on Computer Aided Verification (CAV). pp. 603–619. Springer (2014). https://doi.org/10.1007/978-3-319-08867-9_40
24. Ganty, P., Meuter, C., Delzanno, G., Kalyon, G., Raskin, J.F., Van Begin, L.: Symbolic data structure for sets of k-uples. Tech. Rep. 570, Université Libre de Bruxelles, Belgium (2007)
27. Heußner, A., Gall, T.L., Sutre, G.: McScM: A general framework for the verification of communicating machines. In: Proc. 18th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS). pp. 478–484. Springer (2012). https://doi.org/10.1007/978-3-642-28756-5_34
28. Klarlund, N., Møller, A.: MONA Version 1.4 User Manual. BRICS, Department of Computer Science, University of Aarhus (January 2001). Notes Series NS-01-1. Available from http://www.brics.dk/mona/
32. Pin, J.É.: Varieties of formal languages. North Oxford, London and Plenum (1986)
33.
34. Schwentick, T., Thérien, D., Vollmer, H.: Partially-ordered two-way automata: A new characterization of DA. In: Proc. 5th International Conference on Developments in Language Theory (DLT). vol. 2295, pp. 239–250. Springer (2001). https://doi.org/10.1007/3-540-46011-X_20
DOI: https://doi.org/10.1007/978-3-031-90660-2_2