Published in: Theory of Computing Systems 8/2021

Open Access 19.06.2021

Reachability Problems on Reliable and Lossy Queue Automata

Author: Chris Köcher




Abstract

We study the reachability problem for queue automata and lossy queue automata. Concretely, we consider the set of queue contents which are forwards resp. backwards reachable from a given set of queue contents. Here, we prove the preservation of regularity if the queue automaton loops through some special sets of transformation sequences. This is a generalization of the results by Boigelot et al. and Abdulla et al. regarding queue automata looping through a single sequence of transformations. We also prove that our construction is possible in polynomial time.
Notes
This is the full version of the conference contribution [1].


1 Introduction

Nearly all problems in verification ask whether in a program or automaton one can reach some given configurations from other given configurations. In some computational models this question is decidable, e.g., in finite state machines, pushdown automata [2–4], or counter automata without zero-tests (resp. Petri-nets or vector addition systems) [5, 6]. In some other, mostly Turing-complete computational models this reachability problem is undecidable. For example, the reachability problem for two-counter automata with zero-tests (so-called Minsky-machines) is undecidable [7].
For queue automata reachability is undecidable [8], while this problem is decidable for so-called lossy queue automata [9] which are allowed to forget any parts of their content at any time. In this case, for a regular set of configurations, the set of reachable configurations is regular [10] but in general it is impossible to compute a finite automaton accepting this set [11, 12]. Surprisingly, the set of backwards reachable configurations is effectively regular [9], even though this construction is not primitive recursive [13, 14]. This positive reachability result for lossy queue automata was generalized by Abdulla et al. [15] and by Finkel and Schnoebelen [16] to so-called well-structured transition systems, which are systems with an infinite state space and some special restrictions on their transitions.
Another variation of queue automata are automata with priority queues: here, each queue entry has some priority and entries with high priority can supersede or overtake entries with low priority. Haase et al. [17] have proven that the reachability problem is decidable for these priority queue automata with superseding semantics, but it is undecidable for priority queue automata with overtaking semantics.
In this paper we will focus on the reachability problem for reliable and lossy queue automata. Due to its undecidability resp. inefficiency, one may consider approximations of this problem. One trivial approach is to simulate the automaton’s computation step by step until a given configuration (or a given set of configurations) is found. In this case, starting from a given set of configurations we simply add or remove a single letter from the queue’s contents. An even better and more efficient approach is to consider so-called “meta-transformations” as described in [18, 19]. Such a meta-transformation is a combination of multiple transitions of the queue. In particular, given a loop in the queue’s control component one can combine iterations of this loop to one big step of the queue automaton. With this trick it is possible to explore infinitely many contents of the queue in a small amount of time.
Considering reliable queue automata, we know from Boigelot et al. [19] that, starting from a regular language of queue contents, the set of reachable queue contents after iterated application of a single transformation sequence is effectively regular. The authors of that paper also generalized this result to automata having multiple queues. However, in this case we need some special restrictions on the considered loops. Bouajjani and Habermehl [20] extended this result to arbitrary loops, but this requires considering a proper super-class of the regular languages representing the queues' contents. Abdulla et al. also considered loops in automata having some lossy queues in [21] and proved the preservation of regularity.
From the under-approximation considering single loops, one may also obtain so-called flat automata. These are automata in which each state belongs to at most one loop. For example, Leroux and Sutre [22] considered flat counter automata and proved that a large class of counter automata are flat or at least flattable. Additionally, flat counter automata have an NP-complete reachability problem [23]. The reachability problem is also NP-complete if we consider flat reliable or lossy queue automata [24, 25].
Here, we consider an extension of the results from Boigelot et al. and Abdulla et al. Concretely, we consider iterations through certain regular languages, so-called read-write independent sets. Such a language is the product of a language consisting only of write action sequences with another language consisting only of read action sequences. For these new meta-transformations we prove the preservation of regularity of sets of configurations. We will see that our construction is possible in polynomial time. Additionally, we show some extensions of our result. For example, we demonstrate that from our main result, one can derive the result of Boigelot et al. To this end, we show how to turn a single transformation sequence into a read-write independent set yielding the same set of reachable queue contents.
Additionally, we consider another type of meta-transformations: sets of transformations which are closed under some special (context-sensitive) commutations of the atomic transformations. For such meta-transformations, the set of reachable configurations is also effectively regular. Moreover, if we start from a context-free set of configurations, the set of reachable configurations is effectively context-free, again. Both constructions can be carried out in polynomial time.
In the last section of this paper we consider so-called partially lossy queue automata which were first introduced in [26, 27]. This is a generalization of reliable and lossy queue automata where we can specify which letters can be forgotten at any time. These partially lossy queues can also be seen as some kind of the aforementioned priority queues with superseding semantics: here, the forgettable letters have low priority and the unforgettable letters have high priority. In this case, letters with low priority can be superseded by any letter and letters with high priority cannot be superseded by any letter.
We will see that the sets of reachable configurations can be computed from those of a reliable queue automaton. Hence, all of our results also hold for arbitrary partially lossy queue automata. In particular, we will see that the results from [19, 21] follow from our result. So, we also obtain a new, unified proof of these two results.

2 Preliminaries

2.1 Words and Languages

At first, we have to introduce some basic definitions. To this end, let Γ be an alphabet. A word \(v\in{\varGamma}^{*}\) is a prefix of \(w\in{\varGamma}^{*}\) iff \(w\in v{\varGamma}^{*}\). Similarly, v is a suffix of w iff \(w\in{\varGamma}^{*}v\) and v is an infix of w iff \(w\in{\varGamma}^{*}v{\varGamma}^{*}\). The complementary prefix (resp. suffix) of w wrt. v is the word \(w/v\in{\varGamma}^{*}\) (resp. \({v}\backslash{w}\in{\varGamma}^{*}\)) with \(w=(w/v)\,v\) (resp. \(w=v\,({v}\backslash{w})\)). The right quotient of a language \(L\subseteq {\varGamma }^{*}\) wrt. \(K\subseteq {\varGamma }^{*}\) is the language \({L}/{K} = \{u\in{\varGamma}^{*} \mid \exists v\in K\colon uv\in L\}\). Similarly, we can define the left quotient \({K}\backslash{L} = \{v\in{\varGamma}^{*} \mid \exists u\in K\colon uv\in L\}\) of L wrt. K.
For a word \(w=a_{1}a_{2}{\dots } a_{n}\in {\varGamma }^{*}\) we define its reversal by \({w}^{\text {R}}:=a_{n}{\dots } a_{2}a_{1}\). The reversal of a language L is \({L}^{\text{R}} = \{{w}^{\text{R}} \mid w\in L\}\). The shuffle of two languages L and K is the following language:
$$L ⧢ K := \{v_{1}w_{1}{\dots } v_{n}w_{n} \mid n\in\mathbb{N},\ v_{1},\dots,v_{n},w_{1},\dots,w_{n}\in {\varGamma }^{*},\ v_{1}{\dots } v_{n}\in L,\ w_{1}{\dots } w_{n}\in K\} .$$
Let \(\sim \) be an equivalence relation on \({\varGamma}^{*}\). The equivalence class of \(v\in{\varGamma}^{*}\) wrt. \(\sim \) is \([v]_{\sim }=\{u\in {\varGamma }^{*} \mid u\sim v\}\). A language \(L\subseteq {\varGamma }^{*}\) is closed under \(\sim \) if for each \(v\in L\) we have \([v]_{\sim }\subseteq L\).
Let \(S\subseteq {\varGamma }\). Then the projection \(\pi _{S}\colon {\varGamma }^{*}\to S^{*}\) to S is the monoid homomorphism induced by \(\pi_{S}(a) = a\) for each \(a\in S\) and \(\pi_{S}(a) = \varepsilon\) for each \(a\in{\varGamma}\setminus S\). Additionally, for \(w\in{\varGamma}^{*}\) we write \(|w|_{S} := |\pi_{S}(w)|\).
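To make these operations concrete, the following small Python sketch implements the quotients, the shuffle, and the projection for finite languages represented as sets of strings; all function names are ours and only serve as an illustration of the definitions above.

```python
# Finite-language sketch of the operations above (sets of Python strings).
from itertools import combinations

def right_quotient(L, K):
    # L/K = {u | there is v in K with uv in L}
    return {w[:len(w) - len(v)] for w in L for v in K if w.endswith(v)}

def left_quotient(K, L):
    # K\L = {v | there is u in K with uv in L}
    return {w[len(u):] for w in L for u in K if w.startswith(u)}

def shuffle(L, K):
    # all interleavings of a word from L with a word from K
    def interleave(v, w):
        n, m = len(v), len(w)
        for pos in combinations(range(n + m), n):   # positions taken by v
            out, iv, iw = [], 0, 0
            for i in range(n + m):
                if i in pos:
                    out.append(v[iv]); iv += 1
                else:
                    out.append(w[iw]); iw += 1
            yield "".join(out)
    return {u for v in L for w in K for u in interleave(v, w)}

def projection(w, S):
    # pi_S(w): erase every letter of w that does not belong to S
    return "".join(a for a in w if a in S)

# Examples: left_quotient({"ab"}, {"abc", "abd"}) == {"c", "d"}
#           shuffle({"ab"}, {"c"}) == {"cab", "acb", "abc"}
```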

2.2 Finite Automata

A (nondeterministic) finite automaton (NFA for short) is a quintuple \(\mathfrak {A}=(Q,{\varGamma },I,{\varDelta },F)\) where Q is a finite set of states, \(I,F\subseteq Q\) are the sets of initial and final states, and \({\varDelta }\subseteq Q\times {\varGamma }\times Q\) is the transition relation. Then, the configuration graph of \(\mathfrak {A}\) is \(\mathcal {G}_{\mathfrak {A}}:=(Q,{\varDelta })\) which is a finite, edge-labeled, and directed graph. For \(p, q\in Q\) and \(w\in{\varGamma}^{*}\) we write \(p\overset {w}{\rightarrow }_{\mathfrak {A}} q\) if there is a w-labeled path in \(\mathcal {G}_{\mathfrak {A}}\) from p to q. For \(Q_{1},Q_{2}\subseteq Q\) and \(w\in{\varGamma}^{*}\) we write \(Q_{1}\overset {w}{\rightarrow }_{\mathfrak {A}} Q_{2}\) if there are \(q_{1}\in Q_{1}\) and \(q_{2}\in Q_{2}\) with \(q_{1}\overset {w}{\rightarrow }_{\mathfrak {A}} q_{2}\). The accepted language of \(\mathfrak {A}\) is \(L(\mathfrak {A}):=\{w\in {\varGamma }^{*} \mid I\overset {w}{\rightarrow }_{\mathfrak {A}} F\}\). A language \(L\subseteq {\varGamma }^{*}\) is regular if there is an NFA \(\mathfrak {A}\) accepting L. The class of regular languages is effectively closed under Boolean operations, left and right quotients, shuffle, reversal, and projections.
Let \(\mathfrak {A}=(Q,{\varGamma },I,{\varDelta },F)\) be an NFA, \(Q_{i},Q_{f}\subseteq Q\). Then we set \(\mathfrak {A}_{Q_{i}\to Q_{f}}:=(Q,{\varGamma },Q_{i},{\varDelta },Q_{f})\), i.e., \(\mathfrak {A}_{Q_{i}\to Q_{f}}\) is the NFA constructed from \(\mathfrak {A}\) with initial states Qi and final states Qf. For example, we have
$$L(\mathfrak{A})=\bigcup_{q\in Q}L(\mathfrak{A}_{I\to q})L(\mathfrak{A}_{q\to F}) .$$

3 Queue and Pushdown Automata

In this section we will recall basic knowledge on (fifo-)queues and (lifo-)stacks. Both data structures can store entries from a given alphabet A. Then, the contents of such a queue or stack are words from \(A^{*}\). For each letter \(a\in A\) we have two actions (or transformations): writing of a into the structure (denoted by a) and reading of a from the structure (denoted by \(\overline {a}\)). We assume that the alphabet \(\overline {A}\) containing each such reading operation \(\overline {a}\) is a disjoint copy of A. By \(\varSigma _{A}:=A\cup \overline {A}\) we denote the set of all actions on the data structure. For \(w=a_{1}a_{2}{\dots } a_{n}\in A^{*}\) we also write \(\overline {w}:=\overline {a_{1}} \overline {a_{2}}\dots \overline {a_{n}}\) and for \(L\subseteq A^{*}\) we write \(\overline {L}:=\{\overline {w} \mid w\in L\}\).
In the following two subsections we will consider queues and stacks separately.

3.1 Queue Automata

In queues (or channels) we always write letters on one end of the queue's content and read them from the other end. Hence, writing the letter \(a\in A\) into the queue with content \(w\in A^{*}\) results in wa and reading a from aw yields the queue content w. It is impossible to read a from the empty queue or whenever the queue's content is bw with \(a\neq b\).
Formally, a queue automaton is a tuple \(\mathfrak {Q}=(Q,{\varGamma },A,I,{\varDelta },F)\) where Q is a finite set of states, Γ and A are two (not necessarily disjoint) alphabets, \(I,F\subseteq Q\) are the sets of initial and final states, respectively, and \({\varDelta }\subseteq Q\times ({\varGamma }\cup \{\varepsilon \})\times (\varSigma _{A}\cup \{\varepsilon \})\times Q\) is the transition relation. A configuration of \(\mathfrak {Q}\) is a tuple from \(\text {Conf}_{\mathfrak {Q}}:=Q\times A^{*}\). We denote the set of initial configurations by \(\text {Init}_{\mathfrak {Q}}:=I\times \{\varepsilon \}\) and the set of accepting configurations by \(\text {Final}_{\mathfrak {Q}}:=F\times A^{*}\). For \(p, q\in Q\), \(v, w\in A^{*}\), and \(\alpha\in{\varGamma }\cup \{\varepsilon \}\) we write \((p,v)\overset {\alpha }{\rightarrow }_{\mathfrak {Q}}(q,w)\) if one of the following holds:
(1)
there is \(a\in A\) with (p, α, a, q) ∈Δ and va = w,
 
(2)
there is \(a\in A\) with \((p,\alpha ,\overline {a},q)\in {\varDelta }\) and v = aw, or
 
(3)
we have (p, α, ε, q) ∈Δ and v = w.
 
Then \(\mathcal {G}_{\mathfrak {Q}}:=(\text {Conf}_{\mathfrak {Q}},\bigcup _{\alpha \in {\varGamma }\cup \{\varepsilon \}}\overset {\alpha }{\rightarrow }_{\mathfrak {Q}})\) is called the configuration graph of \(\mathfrak {Q}\). For \((p,v),(q,w)\in \text {Conf}_{\mathfrak {Q}}\) and \(\gamma\in{\varGamma}^{*}\) we write \((p,v)\overset {\gamma }{\rightarrow }_{\mathfrak {Q}}(q,w)\) if there is a γ-labeled path from (p, v) to (q, w) in \(\mathcal {G}_{\mathfrak {Q}}\). The accepted language of \(\mathfrak {Q}\) is \(L(\mathfrak {Q}):=\{\gamma \in {\varGamma }^{*}\mid \text {Init}_{\mathfrak {Q}}\overset {\gamma }{\rightarrow }_{\mathfrak {Q}}\text {Final}_{\mathfrak {Q}}\}\).
It is well-known that queue automata can simulate Turing-machines (cf. [8]). Hence, queue automata accept exactly the class of recursively enumerable languages.
In the following, we remove the input tape of our queue automata and focus on the behavior of the memory component (i.e. the queue). Formally, we describe a queue's behavior by a function ∘ associating a word \(v\in A^{*}\) and a sequence of transformations \(t\in \varSigma _{A}^{*}\) with another word \(v\circ t\in A^{*}\) which is the queue's content after application of t on the content v. Since it is impossible to read a from a queue with content ε or bw with \(a\neq b\), the function ∘ is not total. However, we may introduce a new content ⊥∉A (the so-called error state) and set \(v\circ t = \bot\) whenever the application of t on v is not possible.
Definition 3.1
Let A be an alphabet and ⊥∉A. Then the map \(\circ \colon (A^{*}\cup \{\bot \})\times \varSigma _{A}^{*}\to (A^{*}\cup \{\bot \})\) is defined for each \(v\in A^{*}\), \(a, b\in A\) with \(a\neq b\), and \(t\in \varSigma _{A}^{*}\) as follows:
(i)
\(v\circ\varepsilon = v\)
 
(ii)
\(v\circ at = va\circ t\)
 
(iii)
\(av\circ \overline {a} t=v\circ t\)
 
(iv)
\(bv\circ \overline {a} t=\varepsilon \circ \overline {a} t=\bot \circ t=\bot \)
 
We will say “\(v\circ t\) is undefined” if \(v\circ t = \bot\).
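Definition 3.1 translates directly into code. The following minimal Python sketch (the names apply and BOTTOM as well as the action encoding are ours) computes \(v\circ t\) for a finite transformation sequence:

```python
# Queue contents are strings over A, the error state ⊥ is None, and a
# transformation sequence is a list of actions: ("write", a) encodes the
# action a, and ("read", a) encodes the action a-bar.
BOTTOM = None   # the error state ⊥

def apply(v, t):
    """Return v ∘ t following Definition 3.1, or BOTTOM if undefined."""
    for kind, a in t:
        if v is BOTTOM:                 # ⊥ ∘ t = ⊥
            return BOTTOM
        if kind == "write":             # v ∘ at = va ∘ t
            v = v + a
        elif v.startswith(a):           # av ∘ āt = v ∘ t
            v = v[1:]
        else:                           # bv ∘ āt = ε ∘ āt = ⊥
            return BOTTOM
    return v                            # v ∘ ε = v

# apply("", [("write", "a"), ("read", "a")]) == ""   (write a, then read a)
# apply("", [("read", "a"), ("write", "a")]) is BOTTOM
```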
Let \(\mathfrak {Q}=(Q,{\varGamma },A,I,{\varDelta },F)\) be some queue automaton. Construct the following NFA (with ε-transitions) \(\mathfrak {T}=(Q,\varSigma _{A},I,{\varDelta }^{\prime },F)\) with
$${\varDelta}^{\prime}=\{(p,\alpha,q) | \exists \gamma\in{\varGamma}\cup\{\varepsilon\}\colon(p,\gamma,\alpha,q)\in{\varDelta}\} .$$
Then \((\varepsilon \circ L(\mathfrak {T}))\setminus \{\bot \}\) is exactly the set of all queue contents after any computation of \(\mathfrak {Q}\). Note that \(\mathfrak {Q}\) will not end up in an error state if it is impossible to read the letter from the queue's head position. Instead, the queue automaton will stop in such a situation. Hence, we exclude ⊥ from \(\varepsilon \circ L(\mathfrak {T})\).
More generally, we will consider sets \(L\circ T\) for some set of initial queue contents \(L\subseteq A^{*}\) and some set of transformation sequences \(T\subseteq \varSigma _{A}^{*}\). At this juncture, it suffices to regard only regular languages \(T\subseteq \varSigma _{A}^{*}\) since the control component \(\mathfrak {T}\) of a queue automaton is always an NFA. All in all, we may define our reachability problems as follows:
Problem 3.2
Let A be an alphabet, \(L\subseteq A^{*}\) be a set of queue contents, and \(T\subseteq \varSigma _{A}^{*}\) be a regular set of transformation sequences. The set of queue contents that are reachable from L via T is
$$\textsc{Reach}(L,T):=(L\circ T)\setminus\{\bot\}$$
and the set of queue contents that can reach L via T is
$$\textsc{BackReach}(L,T):=\{v\in A^{*} | (v\circ T)\cap L\neq\emptyset\} .$$
From the definition of ∘ we already know that \(v\circ a = va\) and \(av\circ \overline {a}=v\) hold. In this sense, we may see some duality between the write and read actions a and \(\overline {a}\). This duality can be extended as follows: by \(d\colon \varSigma _{A}^{*}\to \varSigma _{A}^{*}\) we denote the map defined by
$$d(\varepsilon)=\varepsilon,\quad d(av)=d(v)\overline{a}\text{, and}\quad d(\overline{a} v)=d(v)a$$
for each \(a\in A\) and \(v\in \varSigma _{A}^{*}\). We can see that d is a bijective antimorphism and an involution. Additionally, from [28, Lemma 3.3] we know that \(v\circ t = w\) holds if, and only if, \({w}^{\text{R}}\circ d(t) = {v}^{\text{R}}\) for each \(v, w\in A^{*}\) and \(t\in \varSigma _{A}^{*}\). From this equivalence we also obtain the following duality between Reach and BackReach:
Theorem 3.3
Let A be an alphabet, \(K,L\subseteq A^{*}\), and \(T\subseteq \varSigma _{A}^{*}\). Then we have
$$\textsc{BackReach}(L,T)={\textsc{Reach}({L}^{\text{R}},d(T))}^{\text{R}} .$$
Proof
Let v ∈BackReach(L, T). By definition we have \((v\circ T) \cap L\neq\emptyset\). So, there are \(w\in L\) and \(t\in T\) with \(v\circ t = w\). By [28, Lemma 3.3] we also have \({w}^{\text{R}}\circ d(t) = {v}^{\text{R}}\) implying \({v}^{\text{R}} \in\textsc{Reach}({L}^{\text{R}}, d(T))\), i.e., \(v \in{\textsc{Reach}({L}^{\text{R}}, d(T))}^{\text{R}}\).
Conversely, let \(v \in{\textsc{Reach}({L}^{\text{R}}, d(T))}^{\text{R}}\). We know \({v}^{\text{R}} \in\textsc{Reach}({L}^{\text{R}}, d(T))\). Hence, there are \(w\in L\) and \(t\in T\) such that \({w}^{\text{R}}\circ d(t) = {v}^{\text{R}}\). Due to [28, Lemma 3.3] we also have \(v\circ t = w\) implying \(w \in (v\circ T) \cap L\). In other words, we have v ∈BackReach(L, T). □
So, with the help of Theorem 3.3 it suffices to consider forwards reachability from now on.
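For finite data, the duality map d and the equivalence from [28, Lemma 3.3] behind Theorem 3.3 can be checked directly; here is a small sketch reusing apply from the previous snippet (dual and check_duality are our own names):

```python
def dual(t):
    """The map d: reverse the sequence and swap write and read actions."""
    swap = {"write": "read", "read": "write"}
    return [(swap[kind], a) for kind, a in reversed(t)]

def check_duality(v, t, w):
    """Sanity check of [28, Lemma 3.3]: v ∘ t = w  iff  w^R ∘ d(t) = v^R."""
    left = apply(v, t) == w
    right = apply(w[::-1], dual(t)) == v[::-1]
    return left == right    # should be True for all v, w, t

# e.g. check_duality("ab", [("read", "a"), ("write", "c")], "bc") is True
```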
Now, let \(L\subseteq A^{*}\) be a recursively enumerable language of queue contents and \(T\subseteq \varSigma _{A}^{*}\) a regular language of queue transformations. Then the language Reach(L, T) is (effectively) recursively enumerable. However, since queue automata can simulate Turing-machines, the language Reach(L, T) can be any recursively enumerable language - even if L and T are somewhat “simple” languages:
Remark 3.4
Let \(K\subseteq {\varGamma }^{*}\) be a recursively enumerable language. Then there is a (type-0) grammar \(\mathfrak {G}=(N,{\varGamma },P,S)\) with \(K=L(\mathfrak {G})\). Let \(\#\notin N\cup{\varGamma}\) be some new letter. We set our alphabet \(A := N\cup{\varGamma}\cup\{\#\}\) (recall that this is the set of possible queue entries). We construct the language of transformations \(T\subseteq \varSigma _{A}^{*}\) as follows:
$$T:=\bigl(\{\overline{\ell}r | (\ell,r)\in P\}\cup\{\overline{a} a | a\in N\cup{\varGamma}\cup\{\#\}\}\bigr)^{*}\overline{\#} ,$$
i.e., the queue automaton can apply any rule from \(\mathfrak {G}\) and move any letter from the head to its end. Then we have
$$\textsc{Reach}(\{\#S\},T)\cap{\varGamma}^{*}=L(\mathfrak{G})=K .$$
In other words, Reach(L, T) can be any recursively enumerable language K even if L is a singleton and T is essentially the Kleene-closure of a finite set of transformation sequences.
Due to Remark 3.4 there are regular languages L and T such that Reach(L, T) is undecidable. Therefore, we need some approximation to decide whether a given regular set of configurations can be reached from the regular language L of queue inputs by application of the transformation sequences from T. A trivial approach is to simulate the computation of the queue automaton step-by-step. That means, starting with L we iteratively compute the set of all queue contents which are reachable from L after n steps. Formally, for the set \(T_{n}\subseteq \varSigma _{A}^{*}\) of prefixes of length at most n of T we compute \(L_{n} = \textsc{Reach}(L, T_{n})\) for increasing \(n\in \mathbb {N}\). Unfortunately, this algorithm is not very efficient: consider a finite language of queue contents \(L\subseteq A^{*}\) and a regular language of transformation sequences \(T\subseteq \varSigma _{A}^{*}\). Then \(T\cap \varSigma _{A}^{n}\) is finite for any \(n\in \mathbb {N}\) and, hence, \(L_{n} = \textsc{Reach}(L, T_{n})\) is finite.
Boigelot et al. improved this trivial approximation in [18, 19] by the introduction of so-called meta-transformations. These are certain regular languages \(S\subseteq \varSigma _{A}^{*}\) such that the sets Reach(L, S) (for any regular set \(L\subseteq A^{*}\)) are effectively regular. Then the trivial approximation can be modified as follows: whenever we compute \(L_{n+1}\) from \(L_{n}\) we search for such meta-transformations in the queue automaton's control component starting from \(T_{n}\) and apply these on \(L_{n}\). In [19] the authors considered meta-transformations of the form S = {t}* for some \(t\in \varSigma _{A}^{*}\). In fact, this approach is more efficient than the trivial one, since we can explore an infinite state space in just one step of the algorithm.
From the following proposition we can obtain some further simple meta-transformations. Concretely, we consider the case that T contains only sequences of write actions or only sequences of read actions.
Proposition 3.5
Let A be an alphabet and \(L,T\subseteq A^{*}\). Then the following statements hold:
(1)
Reach(L, T) = LT
 
(2)
\(\textsc {Reach}(L,\overline {T})={T}\backslash {L}\)
 
Note that this proposition is a generalization of Theorems 1 and 2 in [18]. In that paper, Boigelot and Godefroid have proven the effective recognizability of Reach(L, t) and \(\textsc {Reach}(L,\overline {t})\) where \(L\subseteq A^{*}\) is regular and \(t\in A^{*}\) is some single transformation sequence.
Proof
First, consider equation (1): let w ∈Reach(L, T) = (L ∘ T) ∖{⊥}. Then there are \(v\in L\) and \(t\in T\) with \(w = v\circ t\). Since \(t\in T\subseteq A^{*}\) we have, by iterated application of (ii) in Definition 3.1, \(w = v\circ t = vt\in LT\). Now, let \(w\in LT\). Then there are \(v\in L\) and \(t\in T\) with w = vt. Again, by application of (ii) in Definition 3.1 we have \(w = vt = v\circ t\in L\circ T\). Since \(L,T\subseteq A^{*}\) we infer that w ∈ (L ∘ T) ∖{⊥} = Reach(L, T) holds.
Using (iii) in Definition 3.1, we can similarly prove equation (2). □
Combining Theorem 3.3 and Proposition 3.5 we obtain the following two equations:
$$\textsc{BackReach}(L,T)={L}/{T}\quad\text{and}\quad\textsc{BackReach}(L,\overline{T})=TL$$
for each pair of languages \(L,T\subseteq A^{*}\).
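For finite languages, Proposition 3.5 and the two equations above can be spelled out directly; the following sketch reuses the quotient helpers from the Section 2.1 snippet (again, all names are ours):

```python
def reach_write(L, T):        # Reach(L, T) = L·T        for T ⊆ A*
    return {v + t for v in L for t in T}

def reach_read(L, T):         # Reach(L, T-bar) = T \ L  (left quotient)
    return left_quotient(T, L)

def backreach_write(L, T):    # BackReach(L, T) = L / T  (right quotient)
    return right_quotient(L, T)

def backreach_read(L, T):     # BackReach(L, T-bar) = T·L
    return {t + v for t in T for v in L}

# Example: reach_read({"abc"}, {"ab"}) == {"c"}
#          backreach_read({"c"}, {"ab"}) == {"abc"}
```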
Now, let \(L,T\subseteq A^{*}\) be two regular languages accepted by the NFAs \(\mathfrak {L}\) and \(\mathfrak {T}\), respectively. Then, using the classical constructions, we can construct an NFA accepting Reach(L, T) in quadratic time. An NFA accepting \(\textsc {Reach}(L,\overline {T})\) can be constructed in cubic time. The number of states of these NFAs is linear in the number of states in \(\mathfrak {L}\) and \(\mathfrak {T}\).
If we require these languages to be accepted by a DFA, then we additionally need to determinize the given NFAs resulting in exponential size and time. The complexities of BackReach are similar to the ones of Reach due to the duality stated in Theorem 3.3. However, if \(\mathfrak {L}\) is a DFA, we still can compute a DFA accepting BackReach(L, T) in cubic time having a linear number of states (in this case we only modify the accepting states of \(\mathfrak {L}\)).
Later in this paper we consider two further types of meta-transformations T for which the mappings L↦Reach(L, T) and L↦BackReach(L, T) preserve regularity efficiently.

3.2 Pushdown Automata

Recall that stacks (or pushdowns) have the same set of actions on their content as queues. In other words, we are able to write a letter into the stack's content or read a letter from the stack. While queues apply their read and write actions on different ends of their content, stacks execute these actions always on the same end. Formally, writing a letter \(a\in A\) into the stack \(w\in A^{*}\) results in the content aw and reading a from aw yields w. Note that it is impossible to read a from bw where \(a\neq b\). Similarly, the stack blocks when reading a from ε.
In this paper, a pushdown automaton (or PDA for short) is defined similarly to queue automata. Also the definitions of their configuration graphs and accepted languages are similar. The only exception is Transition (1) which has to be rephrased as follows:
(1’)
there is \(a\in A\) with (p, α, a, q) ∈Δ and av = w,
 
Note that our definition of pushdown automata slightly differs from the classical definitions in textbooks (cf. for example [29]). While our automata apply at most one action on their stack on each transition, in the classical definition the PDAs always read one letter and write a sequence of letters afterwards. However, both definitions are equivalent. A classical PDA can be transformed into our model by splitting its transitions into a sequence of transitions applying exactly one action. Conversely, a transition writing a from our definition can be translated into transitions applying \(\overline {b} ba\) for each \(b\in A\).
Due to the equivalence of these models, we are allowed to call a language \(L\subseteq {\varGamma }^{*}\) context-free if there is a PDA \(\mathfrak {P}\) with \(L=L(\mathfrak {P})\).
Let \(C\subseteq \text {Conf}_{\mathfrak {P}}\) be a set of configurations of \(\mathfrak {P}\). Then we denote the set of configurations of \(\mathfrak {P}\) reachable from C by
$$\text{post}^{*}(C):=\{d\in \text{Conf}_{\mathfrak{P}} | \exists\gamma\in{\varGamma}^{*}\colon C\overset{\gamma}{\rightarrow}_{\mathfrak{P}} d\} .$$
The following result is well-known in verification:
Theorem 3.6 ([3, 4, 30])
Let \(\mathfrak {P}=(Q,{\varGamma },A,I,{\varDelta },F)\) be a PDA, \(p\in Q\), and \(\mathfrak {A}\) be an NFA over A. Then we can compute an NFA \(\mathfrak {B}_{q}\) over A for each \(q\in Q\) such that
$$\text{post}^{*}(\{p\}\times L(\mathfrak{A}))=\bigcup_{q\in Q}\{q\}\times L(\mathfrak{B}_{q})$$
holds. The construction of the NFAs \(\mathfrak {B}_{q}\) is possible in polynomial time.
Concretely, using the construction from [3], we obtain NFAs \(\mathfrak {B}_{q}\) having \(\mathcal {O}({n_{\mathfrak {P}}+n_{\mathfrak {A}}})\) states, where \(n_{\mathfrak {P}}\) and \(n_{\mathfrak {A}}\) is the number of states of some given PDA \(\mathfrak {P}\) and NFA \(\mathfrak {A}\), respectively.

4 Behavioral Equivalence of Queue Transformations

The first type of meta-transformations we want to consider are languages that are closed under the so-called behavioral equivalence. To this end, let \(v\in A^{*}\) be an arbitrary queue content and \(a\in A\). Then we have
$$v\circ aa\overline{a}=vaa\circ \overline{a} =(va\circ\overline{a})\cdot a=v\circ a\overline{a} a .$$
In other words, the queue transformation sequences \(aa\overline {a}\) and \(a\overline {a} a\) have the same effect on any queue content. Then we say that these two sequences behave equivalently. On the other hand, we have \(\varepsilon \circ a\overline {a}=\varepsilon \neq \bot =\varepsilon \circ \overline {a} a\) which witnesses that \(a\overline {a}\) and \(\overline {a} a\) do not behave equivalently.
Formally, this equivalence is defined as follows:
Definition 4.1
Let A be an alphabet and \(s,t\in \varSigma _{A}^{*}\). Then s and t behave equivalently (denoted by \(s\equiv t\)) if \(v\circ s = v\circ t\) for each \(v\in A^{*}\). The induced relation ≡ is called the behavioral equivalence.
In other words, we have \(s\equiv t\) if for each queue input the application of s and t leads to the same output of the queue automaton. As we have seen above we know \(aa\overline {a}\equiv a\overline {a} a\) and \(a\overline {a}\not \equiv \overline {a} a\).
This equivalence relation was first introduced by Huschenbett et al. in [28]. It was used in that paper to define the transformation monoid of a queue (the so-called queue monoid). This monoid consists of the equivalence classes of the behavioral equivalence with composition. Moreover, they proved that ≡ is a congruence on \(\varSigma _{A}^{*}\) which is described by a finite set of equations. We recall these context-sensitive commutations in the following theorem:
Theorem 4.2 ([28, Theorem 4.3])
Let A be an alphabet. Then ≡ is the least congruence on \(\varSigma _{A}^{*}\) satisfying the following equations for each \(a, b\in A\):
(1)
\(a\overline {b}\equiv \overline {b} a\) if \(a\neq b\),
 
(2)
\(a\overline {a}\overline {b}\equiv \overline {a} a\overline {b}\), and
 
(3)
\(ba\overline {a}\equiv b\overline {a} a\).
 
In this section we want to prove that, for regular languages which are closed under the behavioral equivalence, the mappings L↦Reach(L, T) and L↦BackReach(L, T) effectively and efficiently preserve regularity. We prove this with the help of the following corollary of Theorem 4.2 stating that each transformation sequence \(t\in \varSigma _{A}^{*}\) has some equivalently behaving transformation sequence \(s\in \varSigma _{A}^{*}\) which is in some sense “simple”:
Proposition 4.3 ([28, Lemma 5.2])
Let A be an alphabet and \(t\in \varSigma _{A}^{*}\). Then there is \(s\in \overline {A}^{*}A^{*}\overline {A}^{*}\) with \(s\equiv t\). From a given word t we can compute such a word s in polynomial time.
Proof (idea)
We construct some confluent and terminating semi-Thue system \(\mathfrak {R}\) by ordering the equations in Theorem 4.2 from left to right. Then from \(t\in \varSigma _{A}^{*}\) we can compute a unique word \(r\in \overline{A}^{*}\!\left (\bigcup _{a\in A}a\overline {a}\right )^{*}\!A^{*}\). This word r identifies the whole equivalence class of t. Assume that \(r=\overline {r_{1}} a_{1}\overline {a_{1}}a_{2}\overline {a_{2}}{\dots } a_{n}\overline {a_{n}} r_{2}\) (where \(r_{1}, r_{2}\in A^{*}\), \(a_{1},\dots ,a_{n}\in A\)) is this unique word. Then we can set \(s:=\overline {r_{1}} a_{1}a_{2}{\dots } a_{n}r_{2} \overline {a_{1}a_{2}{\dots } a_{n}}\) and we have \(t\equiv r\equiv s\). □
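The rewriting idea sketched in this proof can be implemented naively for concrete words; the following Python sketch (our own encoding, reusing the action pairs from the Section 3 snippets, and not the paper's algorithm verbatim) orients the three equations of Theorem 4.2 from left to right and then rearranges the normal form as in Proposition 4.3:

```python
def rewrite(t):
    """Apply the rules of Theorem 4.2, oriented left to right, until no
       rule matches. Each applicable rule swaps a write action with the
       read action directly to its right, so the process terminates."""
    t = list(t)
    changed = True
    while changed:
        changed = False
        for i in range(len(t) - 1):
            (k1, a), (k2, b) = t[i], t[i + 1]
            if k1 != "write" or k2 != "read":
                continue
            # rule (1): a b-bar -> b-bar a          (a != b)
            # rule (2): a a-bar b-bar -> a-bar a b-bar
            # rule (3): b a a-bar -> b a-bar a
            if (a != b
                    or (i + 2 < len(t) and t[i + 2][0] == "read")
                    or (i > 0 and t[i - 1][0] == "write")):
                t[i], t[i + 1] = t[i + 1], t[i]
                changed = True
                break
    return t

def simple_form(t):
    """A word s in A-bar* A* A-bar* with s ≡ t, as in Proposition 4.3."""
    r = rewrite(t)          # r = r1-bar (a1 a1-bar) ... (an an-bar) r2
    i = 0
    while i < len(r) and r[i][0] == "read":
        i += 1
    prefix = r[:i]          # the read actions of r1-bar
    pairs = []
    while i + 1 < len(r) and r[i][0] == "write" and r[i + 1] == ("read", r[i][1]):
        pairs.append(r[i][1])
        i += 2
    suffix = r[i:]          # the write actions of r2
    return (prefix + [("write", a) for a in pairs]
            + suffix + [("read", a) for a in pairs])

# For t = a a-bar b-bar a (cf. Remark 4.4) this yields a-bar b-bar a a.
```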
Remark 4.4
The algorithm from Proposition 4.3 returns a word \(s\in \overline {A}^{*}A^{*}\overline {A}^{*}\) which is unique for each word from the equivalence class \([t]_{\equiv}\). However, there are words \(t\in\varSigma_{A}^{*}\) having multiple words \(s\in \overline {A}^{*}A^{*}\overline {A}^{*}\) which behave equivalently. For example, let \(a, b\in A\) be two distinct letters and \(t=a\overline {a}\overline {b} a\). Then the algorithm outputs \(\overline {a}\overline {b} aa\) but we also have \(\overline {a} aa\overline {b},aa\overline {a}\overline {b}\in [t]_{\equiv }\).
The behavioral equivalence of queue transformations was further considered in [27, 28]. Concretely, in those papers the authors studied those regular languages which are closed under the behavioral equivalence ≡. In [27, Theorem 4.1] we defined some kind of rational expressions constructing these sets as well as some MSO-logic describing them. In particular, let \(T\subseteq \varSigma _{A}^{*}\) be a language that is closed under ≡. Then, we know that T is regular if, and only if, \(T\cap \overline {A}^{*}A^{*}\overline {A}^{*}\) is regular due to [28, Theorem 9.4].
Example 4.5
Let \(R,W\subseteq A^{*}\) be regular languages. Then R can be accepted by some NFA \(\mathfrak {R}=(Q,A,I,{\varDelta },F)\). In this case, we have
[displayed equation not reproduced here]
which is regular (recall that \([L]_{\equiv}\) is the closure of \(L\subseteq \varSigma _{A}^{*}\) under ≡ and ⧢ is the shuffle of two languages). Hence, the language defined by this equation is regular and closed under ≡.
Now, let \(a\in A\). Then \([(a\overline {a})^{*}]_{\equiv }\) is not regular since (by Theorem 4.2) we can prove \([(a\overline {a})^{*}]_{\equiv }\cap \overline {A}^{*}A^{*}\overline {A}^{*}=\{a^{n}\overline {a}^{n} \mid n\in \mathbb {N}\}\) which is not regular.
The following lemma states that closure under behavioral equivalence is decidable for regular languages:
Lemma 4.6
Let A be an alphabet. Given some regular language \(T\subseteq {\Sigma }_{A}^{\ast }\) of queue transformations, it is decidable whether T is closed under behavioral equivalence ≡.
Proof (idea)
We can check this question with the help of some rational transduction (cf. [31], Chapter III): let τ be the transduction with the graph
$$G:=I_{\varSigma_{A}}^{*}\{(\ell,r),(r,\ell)\mid\ell\equiv r\text{ is an equation in Theorem~4.2}\}I_{\varSigma_{A}}^{*}\cup I_{\varSigma_{A}}^{*}$$
where \(I_{\varSigma _{A}}=\{(\alpha ,\alpha )\mid \alpha \in \varSigma _{A}\}\) is the identity on ΣA. It is a simple task to prove that G is rational in \(\varSigma _{A}^{*}\times \varSigma _{A}^{*}\). Then \(T\subseteq \varSigma _{A}^{*}\) is closed under ≡ if, and only if, T = τ(T) holds. We can check this equation since τ(T) is effectively regular.□
However, it is undecidable whether the closure of a given regular language T under ≡ is regular as well [28, Theorem 8.4].
Finally, we are able to prove the main theorem in this section:
Theorem 4.7
Let A be an alphabet, \(L\subseteq A^{*}\) be regular, and \(T\subseteq \varSigma _{A}^{*}\) be regular and closed under ≡. Then Reach(L, T) and BackReach(L, T) are effectively regular. In particular, from NFAs accepting L and T we can construct NFAs accepting Reach(L, T) and BackReach(L, T) in polynomial time.
Proof
We first prove that
$$\textsc{Reach}(L,T)=\textsc{Reach}(L,T\cap\overline{A}^{*}A^{*}\overline{A}^{*})$$
holds. The inclusion “\(\supseteq \)” is trivial. Towards the converse inclusion, let w ∈Reach(L, T). Then there are \(v\in L\) and \(t\in T\) with \(v\circ t = w\). Due to Proposition 4.3 there is \(s\in \overline {A}^{*}A^{*}\overline {A}^{*}\) with \(s\equiv t\). By definition of ≡ we also have \(v\circ s = v\circ t = w\). Since T is closed under ≡ we have \(s\in T\cap \overline {A}^{*}A^{*}\overline {A}^{*}\). This finally implies \(w\in \textsc {Reach}(L,T\cap \overline {A}^{*}A^{*}\overline {A}^{*})\).
Next, we compute \(\textsc {Reach}(L,T\cap \overline {A}^{*}A^{*}\overline {A}^{*})\). To this end, let \(\mathfrak {T}=(Q,\varSigma _{A},I,{\varDelta },F)\) be an NFA with \(L(\mathfrak {T})=T\). We partition \(T\cap \overline {A}^{*}A^{*}\overline {A}^{*}\) as follows: let \(p, q\in Q\) be any pair of states. Then we can compute the following three regular languages in polynomial time:
$$\overline{K_{1}^{p,q}}=L(\mathfrak{T}_{I\to p})\cap \overline{A}^{*} ,\quad K_{2}^{p,q}=L(\mathfrak{T}_{p\to q})\cap A^{*} \text{, and}\quad \overline{K_{3}^{p,q}}=L(\mathfrak{T}_{q\to F})\cap\overline{A}^{*} .$$
Then it is easy to see that \(T\cap \overline {A}^{*}A^{*}\overline {A}^{*}=\bigcup _{p,q\in Q}\overline {K^{p,q}_{1}}K_{2}^{p,q}\overline {K_{3}^{p,q}}\) holds. Hence, we have
$$ \begin{array}{@{}rcl@{}} \textsc{Reach}(L,T)&=&\textsc{Reach}\!\left( L,\bigcup\limits_{p,q\in Q}\overline{K_{1}^{p,q}}K_{2}^{p,q}\overline{K_{3}^{p,q}}\right)\\ &=&\bigcup\limits_{p,q\in Q}\textsc{Reach}(L,\overline{K_{1}^{p,q}}K_{2}^{p,q}\overline{K_{3}^{p,q}}) . \end{array} $$
So, let \(p, q\in Q\). By Proposition 3.5 reading from the queue corresponds to taking the left-quotient and writing into the queue corresponds to concatenation. Therefore, we have:
$$\textsc{Reach}(L,\overline{K_{1}^{p,q}}K_{2}^{p,q}\overline{K_{3}^{p,q}})={K_{3}^{p,q}}\backslash{(({K_{1}^{p,q}}\backslash{L})\cdot K_{2}^{p,q})} .$$
Hence, due to closure properties of the class of regular languages, Reach(L, T) is effectively regular. Since all of the needed closure properties are also efficient and since we are considering only \(\mathcal {O}({|Q|^{2}})\) many languages \(K_{i}^{p,q}\), an NFA accepting Reach(L, T) can be computed in polynomial time. This NFA has a cubic number of states.
Finally, we have to show that BackReach(L, T) is effectively and efficiently regular. Recall that \(\textsc{BackReach}(L, T) = {\textsc{Reach}({L}^{\text{R}}, d(T))}^{\text{R}}\) holds. Due to [28, Proposition 3.4] the language d(T) is still closed under behavioral equivalence. Additionally, d(T) is effectively regular since we only have to replace a by \(\overline {a}\) and vice versa and to invert the edges of the automaton accepting T. Since the class of regular languages is efficiently closed under reversal we can compute an automaton accepting BackReach(L, T) in polynomial time which has a cubic number of states. □
Note that, due to the proof of this theorem, the map L↦Reach(L, T) preserves regularity if \(T\subseteq \overline {A}^{*}A^{*}\overline {A}^{*}\) holds. It is also possible to extend the result of the previous theorem to context-free languages:
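The core step in the proof of Theorem 4.7 is the identity \(\textsc{Reach}(L,\overline{K_{1}}K_{2}\overline{K_{3}})={K_{3}}\backslash{(({K_{1}}\backslash{L})\cdot K_{2})}\). For finite languages this can be written down directly, reusing left_quotient from the Section 2.1 snippet (the function name is ours):

```python
def reach_read_write_read(L, K1, K2, K3):
    """Reach(L, K1-bar · K2 · K3-bar) for finite languages:
       first read K1, then write K2, then read K3."""
    after_reads = left_quotient(K1, L)                       # K1 \ L
    after_writes = {v + w for v in after_reads for w in K2}  # ... · K2
    return left_quotient(K3, after_writes)                   # K3 \ (...)

# Example: reach_read_write_read({"ab"}, {"a"}, {"c"}, {"b"}) == {"c"}
# since ab ∘ ā c b̄ = c.
```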
Theorem 4.8
Let A be an alphabet, \(L\subseteq A^{*}\) be context-free, and \(T\subseteq \varSigma _{A}^{*}\) be regular and closed under ≡. Then Reach(L, T) and BackReach(L, T) are effectively context-free. In particular, from a PDA accepting L and an NFA accepting T, we can construct PDAs accepting Reach(L, T) and BackReach(L, T), respectively, in polynomial time.
Proof
(idea) This is similar to the proof of Theorem 4.7 due to the effective and efficient closure properties of context-free languages (note that the left or right quotient of a context-free language wrt. a regular language is context-free, again). □

5 Read-Write Independence

In this section we want to consider another kind of meta-transformations: iterations through regular languages. In other words, given two regular languages \(L\subseteq A^{*}\) and \(T\subseteq \varSigma _{A}^{*}\), we want to compute an NFA accepting Reach(L, T*). The case where T is a singleton was first considered by Boigelot et al. in [18, 19] (and similarly by Abdulla et al. [21] for lossy queues). In these papers the authors proved that Reach(L, T*) is effectively regular in this case. So, a natural question would be to ask whether this result also holds if T is no singleton. Unfortunately, we have seen in Remark 3.4 that Reach(L, T*) can be undecidable if T is a finite language. The following example proves that Reach(L, T*) is not necessarily regular anymore even if T consists of two words:
Example 5.1
Let A be an alphabet and a, bA be distinct letters. Then we have
$$\textsc{Reach}(\{a\},\{\overline{a} bb,\overline{b} a\}^{*})\cap\{a\}^{*}=\{a^{2^{n}} | n\in\mathbb{N}\}$$
which is not even context-free.
In both cases, Example 5.1 and Remark 3.4, the write actions of any sequence \(t\in T\) depend on the read actions in t. So, we are able to copy data from the head of the queue into its tail. Then, we can see the queue as a Turing-tape and we are able to move the head on this tape in any direction. Hence, we consider languages \(T\subseteq \varSigma _{A}^{*}\) in which for each pair \(s, t\in T\) there is another word \(r\in T\) consisting of the write actions from s and the read actions from t. In this case, independently of the word from \(\pi_{A}(T)\) we write into the queue, we can read any word from \(\pi _{\overline {A}}(T)\). Formally, we are considering the following sets of sequences of transformations:
Definition 5.2
Let A be an alphabet. A set \(T\subseteq \varSigma _{A}^{*}\) is read-write independent if, for each \(s, t\in T\), we have \(\pi _{A}(s)\pi _{\overline {A}}(t)\in T\). In other words, T is read-write independent if, and only if, \(\pi _{A}(T)\pi _{\overline {A}}(T)\subseteq T\) holds.
We may see read-write independent sets T as some kind of a Cartesian product of a set of sequences of write actions \(W\subseteq A^{*}\) with a set of read action sequences \(\overline {R}\subseteq \overline {A}^{*}\) where for each element \((w,\overline {r})\in W\times \overline {R}\) we have the transformation \(w\overline {r}\in T\). Some simple read-write independent sets are listed in the following example:
Example 5.3
Let \(W,R\subseteq A^{*}\). Then \(W\overline {R}\) and \(W ⧢ \overline {R}\) are read-write independent sets.
Since the class of regular languages is closed under projections and concatenation and due to the decidability of the inclusion problem, we can decide whether a given regular language \(T\subseteq \varSigma _{A}^{*}\) is read-write independent.
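For a finite set T this definition can also be checked by brute force; here is a small sketch using the action encoding from the earlier snippets (is_read_write_independent is our name; for regular T one would instead use the closure properties and the inclusion test mentioned above):

```python
def is_read_write_independent(T):
    """T is a finite set of transformation sequences, each given as a
       tuple of ("write", a) / ("read", a) actions."""
    def proj(t, kind):
        return tuple(x for x in t if x[0] == kind)
    return all(proj(s, "write") + proj(t, "read") in T
               for s in T for t in T)

# {(("write", "a"), ("read", "b"))}  -> True   (the set {a b-bar})
# {(("read", "b"), ("write", "a"))}  -> False  (b-bar a, but a b-bar is missing)
```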
For our further considerations of read-write independent sets we need the following lemma. It states that we can “de-shuffle” those languages:
Lemma 5.4 ([26, Lemma 3.11])
Let A be an alphabet, \(L\subseteq A^{*}\), and \(T\subseteq \varSigma _{A}^{*}\) be read-write independent. Then we have
$$\textsc{Reach}(L,T)=\textsc{Reach}(L,\pi_{A}(T)\pi_{\overline{A}}(T)). $$
Note that Lemma 5.4 does not hold for arbitrary languages \(T\subseteq \varSigma _{A}^{*}\). For example, consider L = {ε} and \(T=\{\overline {a} a\}\). Then we know \(\varepsilon \circ \overline {a} a=\bot \) and \(\varepsilon \circ a\overline {a}=\varepsilon \) resulting in \(\textsc {Reach}(\varepsilon ,\overline {a} a)=\emptyset \subsetneq \{\varepsilon \}=\textsc {Reach}(\varepsilon ,a\overline {a})\). However, the following inclusion holds for any language \(T\subseteq \varSigma _{A}^{*}\), even if T is not read-write independent:
$$\textsc{Reach}(L,T)\subseteq\textsc{Reach}(L,\pi_{A}(T)\pi_{\overline{A}}(T)) .$$
Now, consider \(T = \pi_{A}(t) ⧢ \pi_{\overline{A}}(t)\) for some word \(t\in \varSigma _{A}^{*}\). This language is read-write independent. Due to Theorem 4.2, T is also closed under behavioral equivalence, i.e., we can compute Reach(L, T) in polynomial time. However, T* is not necessarily closed under ≡. Hence, we cannot apply Theorem 4.7 to compute Reach(L, T*). By Lemma 5.4 we infer
$$\textsc{Reach}(L,T^{*})=\textsc{Reach}(L,(\pi_{A}(T)\pi_{\overline{A}}(T))^{*})=\textsc{Reach}(L,(\pi_{A}(t)\pi_{\overline{A}}(t))^{*}) .$$
Since \(\pi _{A}(t)\pi _{\overline {A}}(t)\in \varSigma _{A}^{*}\) is a single transformation sequence, the map L↦Reach(L, T*) preserves regularity efficiently by [19].
In the following, we will prove that, provided T is any regular and read-write independent language, the mapping L↦Reach(L, T*) preserves regularity effectively and efficiently (cf. Theorem 5.11). By Lemma 5.4 it suffices to consider languages \(T=W\overline {R}\) where \(W,R\subseteq A^{*}\) are two regular languages. But before we show this general case, we make some additional assumptions on the languages W and R. Afterwards we derive the general case from this particular case. Concretely, we consider regular subsets \(W\overline {R}\subseteq A^{*}\overline{A}^{*}\) where A is some alphabet having a special letter $ which marks the beginning of each word from W and is used for synchronization between writing and reading actions. In this connection, we have to ensure that the $ can be read whenever it occurs at the queue's head. We do this by insertion of arbitrarily many \(\overline {\$}\)'s at any position in R. In other words, we require \(R ⧢ \$^{*} = R\).
Theorem 5.5
Let A be an alphabet and \(\$\in A\) be some letter. Additionally, let \(L\subseteq (A\setminus \{\$\})^{*}\), \(W\subseteq \$(A\setminus \{\$\})^{*}\), and \(R\subseteq A^{*}\) be regular languages such that \(R ⧢ \$^{*} = R\) holds. Then \(\textsc {Reach}(L,(W\overline {R})^{*})\) is effectively regular. In particular, from NFAs accepting L, W, and R we can construct an NFA accepting \(\textsc {Reach}(L,(W\overline {R})^{*})\) in polynomial time.
The proof of this result is given at the end of Section 5.1.

5.1 The Reduction to Pushdown Automata

We prove Theorem 5.5 by reduction to the reachability problem in pushdown automata. A first, trivial idea would be a simple replacement of the queue by a stack, i.e., from the queue’s content v we reach w if, and only if, the PDA reaches w from v. Unfortunately, this construction is not possible since our queue automaton modifies its content at both ends which cannot be simulated with a single stack. Hence, we need a more abstract presentation of the queue’s contents.
To this end, we consider some non-failing computation \(t\in (W\overline {R})^{*}\) of the queue with initial content \(v\in L\), i.e., \(v\circ t\neq \bot\). So, let \(v_{0},\dots ,v_{m}\in A^{*}\) and \(\alpha _{1},\dots ,\alpha _{m}\in \varSigma _{A}\) with \(v_{0} = v\), \(v_{i+1} = v_{i}\circ\alpha _{i+1}\neq \bot\) for each 0 ≤ i < m, and \(t=\alpha _{1}\dots \alpha _{m}\in (W\overline {R})^{*}\). By \(v_{i+1} = v_{i}\circ\alpha _{i+1}\neq \bot\) we have \(\overline {v_{i}}\,\overline {\pi _{A}(\alpha _{i+1})}=\pi _{\overline {A}}(\alpha _{i+1})\,\overline {v_{i+1}}\) for each 0 ≤ i < m. Hence, we have \(\overline {v_{0}}\,\overline {\pi _{A}(t)}=\pi _{\overline {A}}(t)\,\overline {v_{m}}\). Since \(t\in (W\overline {R})^{*}\) holds, we have \(\pi_{A}(t) \in W^{*}\) and, therefore, \(v_{0}\pi_{A}(t) \in LW^{*}\). Let \(\mathfrak {C}\) be an NFA accepting the regular language \(LW^{*}\) (this is the set of all possible queue Contents). Then there is an accepting run \(p_{0},\dots ,p_{\ell }\) in \(\mathfrak {C}\) labeled with \(v_{0}\pi_{A}(t)\).
Due to closure properties, the language \((W\overline {R})^{*}\) is regular. Let \(\mathfrak {T}\) be some NFA accepting this language (i.e., \(\mathfrak {T}\) accepts all possible Transformation sequences). Then there is an accepting run \(s_{0},\dots ,s_{m}\) in \(\mathfrak {T}\) labeled with \(t=\alpha _{1}\dots \alpha _{m}\).
Now, we want to abstract any configuration \((s_{i}, v_{i})\) of our queue automaton with the help of the following information:
1.
the state \(s_{i}\) from \(\mathfrak {T}\) which corresponds to the control state of our queue automaton,
 
2.
two states \(p_{x_{i}}\) and \(p_{y_{i}}\) from \(\mathfrak {C}\) such that \(p_{x_{i}},\dots ,p_{y_{i}}\) is a run in \(\mathfrak {C}\) labeled with \(v_{i}\), and
 
3.
the natural number \(|v_{i}|_{\$}\).
 
Initially, we abstract \((s_{0}, v_{0})\) by \((p_{0}, p_{|v|}, s_{0}, 0)\) since \(p_{0},\dots ,p_{|v|}\) is a run in \(\mathfrak {C}\) labeled with \(v_{0} = v\) and \(|v_{0}|_{\$} = |v|_{\$} = 0\) by \(v\in L{\subseteq }(A{\setminus }\{\$\})^{*}\). Next, we can obtain the abstraction of \((s_{i+1}, v_{i+1})\) from \((s_{i}, v_{i})\) as follows: let \((p_{x_{i}},p_{y_{i}},s_{i},n_{i})\) be the abstraction of \((s_{i}, v_{i})\). By the choice of our run in \(\mathfrak {T}\) we have some edge \(s_{i}\xrightarrow {\alpha _{i+1}}{~}_{\mathfrak {T}} s_{i+1}\). Additionally, we have to distinguish the following two cases (as depicted in Fig. 1):
1.
If \(\alpha_{i+1} = a\in A\), we can extend the run \(p_{x_{i}},\dots ,p_{y_{i}}\) by the edge \(p_{y_{i}}\overset {a}{\rightarrow }_{\mathfrak {C}} p_{y_{i}+1}\). Additionally, if \(\alpha_{i+1} = \$\) then the number of $'s in \(v_{i}\) will be increased. Hence, we abstract \((s_{i+1}, v_{i+1})\) in this case by \((p_{x_{i}},p_{y_{i}+1},s_{i+1},n_{i}+|a|_\$)\).
 
2.
If \(\alpha _{i+1}=\overline {a}\in \overline {A}\), the run \(p_{x_{i}},\dots ,p_{y_{i}}\) starts with the edge \(p_{x_{i}}\overset {a}{\rightarrow }_{\mathfrak {C}} p_{x_{i}+1}\). If a = $ then the number of $'s in \(v_{i}\) decreases. The resulting abstraction of \((s_{i+1}, v_{i+1})\) in this case is \((p_{x_{i}+1},p_{y_{i}},s_{i+1},n_{i}-|a|_\$)\).
 
All in all, \((p_{x_{i}},p_{y_{i}},s_{i},n_{i})\) is an abstraction of the queue automaton's configuration \((s_{i}, v_{i})\). This information can be simulated with the help of some pushdown automaton \(\mathfrak {P}\). In this case, the control states of \(\mathfrak {P}\) are composed of the states \(p_{x_{i}}\), \(p_{y_{i}}\), and \(s_{i}\) and the stack contains \(\$^{n_{i}}\). Note that this PDA is essentially a (partially blind) one-counter automaton, but due to technical reasons we will utilize this more powerful automata model.
To this end, let \(\mathfrak {C}=(Q_{\mathfrak {C}},A,I_{\mathfrak {C}},{\varDelta }_{\mathfrak {C}},F_{\mathfrak {C}})\) be an NFA accepting \(LW^{*}\) and \(\mathfrak {T}=(Q_{\mathfrak {T}},\varSigma _{A},I_{\mathfrak {T}},{\varDelta }_{\mathfrak {T}},F_{\mathfrak {T}})\) be an NFA accepting \((W\overline {R})^{*}\). W.l.o.g., we can assume that both, \(\mathfrak {C}\) and \(\mathfrak {T}\), are trim in the sense that each state is reachable from some initial state and can reach some final state. Additionally, we assume that \(\mathfrak {C}\) and \(\mathfrak {T}\) have exactly one final state called \(f_{\mathfrak {C}}\) resp. \(f_{\mathfrak {T}}\). Note that we can compute these two automata in polynomial time from NFAs accepting L, W, and R.
Recall that the queue’s configuration is abstracted by states from \(\mathfrak {C}\) and \(\mathfrak {T}\) and by some natural number. Then the PDA \(\mathfrak {P}=(Q_{\mathfrak {P}},\varSigma _{A},\{\$\},I_{\mathfrak {P}},{\varDelta }_{\mathfrak {P}},F_{\mathfrak {P}})\) is defined as follows:
  • \(Q_{\mathfrak {P}}:=Q_{\mathfrak {C}}\times Q_{\mathfrak {C}}\times Q_{\mathfrak {T}}\). Here, the first and second component represent the two states characterizing the queue’s content as described above. The third component represents the control state of the queue automaton.
  • \(I_{\mathfrak {P}}:=I_{\mathfrak {C}}\times Q_{L}\times I_{\mathfrak {T}}\) where \(Q_{L}:=\{q\in Q_{\mathfrak {C}} | \exists v\in L\colon I_{\mathfrak {C}}\overset {v}{\rightarrow }_{\mathfrak {C}} q\}\) is the set of states being reachable via L
  • \(F_{\mathfrak {P}}:=Q_{\mathfrak {C}}\times F_{\mathfrak {C}}\times F_{\mathfrak {T}}\)
  • \({\varDelta }_{\mathfrak {P}}\) contains exactly the following transitions for aA, \(p,p',q,q^{\prime }\in Q_{\mathfrak {C}}\), and \(s,s^{\prime }\in Q_{\mathfrak {T}}\):
    (W)
    Simulate writing of the letter a into the queue: \(((p,q,s),a,\pi _\$(a),(p,q^{\prime },s^{\prime }))\in {\varDelta }_{\mathfrak {P}}\) if \((q,a,q^{\prime })\in {\varDelta }_{\mathfrak {C}}\) and \((s,a,s^{\prime })\in {\varDelta }_{\mathfrak {T}}\).
     
    (R)
    Simulate reading of the letter a from the queue: \(((p,q,s),\overline {a},\overline {\pi _\$(a)},(p^{\prime },q,s^{\prime }))\in {\varDelta }_{\mathfrak {P}}\) if \((p,a,p^{\prime })\in {\varDelta }_{\mathfrak {C}}\) and \((s,\overline {a},s^{\prime })\in {\varDelta }_{\mathfrak {T}}\).
     
In other words, we have the following four cases:
1.
\(((p,q,s),\$^{n})\overset {a}{\rightarrow }_{\mathfrak {P}}((p,q^{\prime },s^{\prime }),\$^{n})\) iff \(a\in A\setminus\{\$\}\), \(q\overset {a}{\rightarrow }_{\mathfrak {C}} q^{\prime }\), and \(s\overset {a}{\rightarrow }_{\mathfrak {T}} s^{\prime }\).
 
2.
\(((p,q,s),\$^{n})\overset {\$}{\rightarrow }_{\mathfrak {P}}((p,q^{\prime },s^{\prime }),\$^{n+1})\) iff \(q\overset {\$}{\rightarrow }_{\mathfrak {C}} q^{\prime }\) and \(s\overset {\$}{\rightarrow }_{\mathfrak {T}} s^{\prime }\).
 
3.
\(((p,q,s),\$^{n})\overset {\overline {a}}{\rightarrow }_{\mathfrak {P}}((p^{\prime },q,s^{\prime }),\$^{n})\) iff \(a\in A\setminus\{\$\}\), \(p\overset {a}{\rightarrow }_{\mathfrak {C}} p^{\prime }\), and \(s\overset {\overline {a}}{\rightarrow }_{\mathfrak {T}} s^{\prime }\).
 
4.
\(((p,q,s),\$^{n})\overset {\overline {\$}}{\rightarrow }_{\mathfrak {P}}((p^{\prime },q^{\prime },s^{\prime }),\$^{n-1})\) iff n > 0, \(p\overset {\$}{\rightarrow }_{\mathfrak {C}} p^{\prime }\), and \(s\overset {\overline {\$}}{\rightarrow }_{\mathfrak {T}} s^{\prime }\).
 
Now, we assign to the configuration \(((p, q, s),\$^{n})\) the set of all words being the labeling of some run from p to q in \(\mathfrak {C}\) and containing n appearances of the letter $ (which marks the beginning of a word from W). Formally, our assignment is the mapping \(\llbracket . \rrbracket \colon \text {Conf}_{\mathfrak {P}}\to 2^{A^{*}}\) with
$$\llbracket ((p,q,s),\$^{n}) \rrbracket := \{v\in A^{*} \mid p\overset{v}{\rightarrow}_{\mathfrak{C}} q\text{ and }|v|_{\$}=n\}$$
for each \(p,q\in Q_{\mathfrak {C}}\), \(s\in Q_{\mathfrak {T}}\), and \(n\in \mathbb {N}\). This language represents the set of all configurations (s, v) whose abstraction (as explained above) is ((p, q, s),n).
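The transition rules (W) and (R) and the semantics ⟦·⟧ can be sketched as follows for NFAs given as finite sets of transitions; the encoding and the function names are ours and only illustrate the construction:

```python
def pda_transitions(delta_C, delta_T):
    """Yield the transitions of Δ_P following rules (W) and (R). Each item
       is ((p, q, s), action, stack_effect, (p', q', s')) where the stack
       effect is +1 (push $), -1 (pop $), or 0."""
    states_C = {x for (x, _, y) in delta_C} | {y for (x, _, y) in delta_C}
    for (s, act, s2) in delta_T:
        kind, a = act
        if kind == "write":
            # rule (W): writing a extends the run at its right end (q -> q')
            for (q, b, q2) in delta_C:
                if b == a:
                    for p in states_C:
                        yield ((p, q, s), act, +1 if a == "$" else 0, (p, q2, s2))
        else:
            # rule (R): reading a consumes the run at its left end (p -> p')
            for (p, b, p2) in delta_C:
                if b == a:
                    for q in states_C:
                        yield ((p, q, s), act, -1 if a == "$" else 0, (p2, q, s2))

def in_semantics(v, p, q, n, delta_C):
    """v ∈ ⟦((p, q, s), $^n)⟧ iff p --v--> q in the NFA C and |v|_$ = n."""
    def runs(state, word):
        if word == "":
            return {state}
        return {y for x in runs(state, word[:-1])
                  for (x2, a, y) in delta_C if x2 == x and a == word[-1]}
    return q in runs(p, v) and v.count("$") == n
```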
Next, we prove that the set of reachable queue contents coincides with this semantics of the reachable, accepting configurations of the PDA \(\mathfrak {P}\).
Proposition 5.6
We have \(\textsc {Reach}(L,(W\overline {R})^{*})=\bigcup _{\sigma \in \text {post}^{*}(\text {Init}_{\mathfrak {P}})\cap \text {Final}_{\mathfrak {P}}}\llbracket {\sigma }\rrbracket \).
With the help of Proposition 5.6 we are able to prove Theorem 5.5. So, we will first prove this theorem and afterwards we show the correctness of the PDA \(\mathfrak {P}\) and its semantics.
Proof
(of Theorem 5.5) Due to Theorem 3.6 we can compute NFAs describing the set of configurations \(\text {post}^{*}({\text {Init}_{\mathfrak {P}}})\) in polynomial time. So, for \((p,q,s)\in Q_{\mathfrak {P}}\) let \(\mathfrak {A}_{(p,q,s)}\) be an NFA such that
$$\text{post}^{*}(\text{Init}_{\mathfrak{P}})=\bigcup_{(p,q,s)\in Q_{\mathfrak{P}}}\{(p,q,s)\}\times L(\mathfrak{A}_{(p,q,s)})$$
holds. Since $ is the only stack symbol in \(\mathfrak {P}\), we get \(L(\mathfrak {A}_{(p,q,s)})\subseteq \$^{*}\). Furthermore, \(\$^{n}\in L(\mathfrak {A}_{(p,q,s)})\) holds if, and only if, \(((p,q,s),\$^{n})\in \text {Conf}_{\mathfrak {P}}\) is reachable from some initial configuration of \(\mathfrak {P}\).
The following language is regular as well:
$$K:=\bigcup_{(p,q,s)\in F_{\mathfrak{P}}}\left\{v\in A^{*}\ \middle|\ p\overset{v}{\rightarrow}_{\mathfrak{C}} q\text{ and }\$^{|v|_{\$}}\in L(\mathfrak{A}_{(p,q,s)})\right\}$$
Later we will see \(K=\textsc {Reach}(L,(W\overline {R})^{*})\). But first, we want to give some intuition on the definition of K. This language contains all words \(v\in A^{*}\) such that
  • v is the label of some run in \(\mathfrak {C}\) from p to q, where q is accepting in \(\mathfrak {C}\) (note that p is not necessarily initial),
  • v contains n $’s (for some \(n\in \mathbb {N}\)), and
  • the configuration ((p, q, s),$n) is reachable in \(\mathfrak {P}\) from some initial configuration.
Since each intermediate step of our computation is possible in polynomial time, we can compute an NFA accepting K in polynomial time as well.
Finally, we prove \(K=\textsc {Reach}(L,(W\overline {R})^{*})\). Applying Proposition 5.6 it suffices to prove \(K=\bigcup _{\sigma \in \text {post}^{*}(\text {Init}_{\mathfrak {P}})\cap \text {Final}_{\mathfrak {P}}}\llbracket {\sigma }\rrbracket \).
First, let v ∈[[σ]] for some \(\sigma =((p,q,s),\$^{n})\in \text {post}^{*}(\text {Init}_{\mathfrak {P}})\cap \text {Final}_{\mathfrak {P}}\). Then we have \((p,q,s)\in F_{\mathfrak {P}}\) and \(\$^{n}\in L(\mathfrak {A}_{(p,q,s)})\). Hence, we have
$$v\in\llbracket((p,q,s),\$^{n})\rrbracket=\{w\in A^{*}\mid p\overset{w}{\rightarrow}_{\mathfrak{C}} q\text{ and }|w|_{\$}=n\}\subseteq K .$$
Conversely, let \(v\in K\). Then there is \((p,q,s)\in F_{\mathfrak {P}}\) with
$$p\overset{v}{\rightarrow}_{\mathfrak{C}} q\quad\text{and}\quad\$^{|v|_{\$}}\in L(\mathfrak{A}_{(p,q,s)}) .$$
Set \(n := |v|_{\$}\). Then we have \(\$^{n}\in L(\mathfrak {A}_{(p,q,s)})\) implying that \(((p, q, s),\$^{n})\) is reachable from an initial configuration of \(\mathfrak {P}\). Additionally, this configuration is final since \((p,q,s)\in F_{\mathfrak {P}}\). In other words, we have \(((p,q,s),\$^{n})\in \text {post}^{*}(\text {Init}_{\mathfrak {P}})\cap \text {Final}_{\mathfrak {P}}\). Finally, we have
$$v\in\llbracket((p,q,s),\$^{n})\rrbracket\subseteq\bigcup_{\sigma\in\text{post}^{*}(\text{Init}_{\mathfrak{P}})\cap\text{Final}_{\mathfrak{P}}}\llbracket\sigma\rrbracket . \qquad\square$$

5.2 The Correctness of Proposition 5.6

Next, we want to prove the correctness of Proposition 5.6. We prove this with the help of two lemmas each proving one inclusion. First, we show that each reachable queue content belongs to the semantics of some reachable configuration of \(\mathfrak {P}\). To this end, we consider some \(v\in L\) and \(t\in (W\overline {R})^{*}\) with \(v\circ t\neq \bot\). We construct a t-labeled run of \(\mathfrak {P}\) such that the ith intermediate result \(v_{i}\) is covered by the semantics of the ith step in our constructed run.
Concretely, we have runs \(p_{0},\dots ,p_{\ell }\) and \(s_{0},\dots ,s_{m}\) in \(\mathfrak {C}\) and \(\mathfrak {T}\) labeled with \(v\pi_{A}(t)\) and t, respectively, from an initial state to some accepting state. The ith configuration \(\sigma_{i}\) on our run of \(\mathfrak {P}\) consists of two states \(p_{x_{i}}\) and \(p_{y_{i}}\) (where \(0 \leq x_{i}\leq y_{i}\)), \(s_{i}\), and the number of $'s on the sub-path \(p_{x_{i}},\dots ,p_{y_{i}}\). Then we will see that \(v_{i} \in\llbracket\sigma_{i}\rrbracket\), which finally implies \(v_{m} = v\circ t \in\llbracket\sigma_{m}\rrbracket\).
Example 5.7
Consider L = {ε}, W = {$a, $b}, and \(R=\{a,b\} ⧢ \$^{*}\). Then the languages \(LW^{*}\) and \((W\overline {R})^{*}\) are accepted by the NFAs \(\mathfrak {C}\) and \(\mathfrak {T}\) in Fig. 2. Let \(t:=\$b\overline {\$}\overline {b}\in (W\overline {R})^{*}\). Then we have
$$\varepsilon\circ{\$}b\overline{\$}\overline{b}=\$\circ b\overline{\$}\overline{b}=\$b\circ\overline{\$}\overline{b}=b\circ\overline{b}=\varepsilon\neq\bot .$$
Consider the accepting runs \(p_{1}\overset {\$}{\rightarrow }_{\mathfrak {C}} p_{2}\overset {b}{\rightarrow }_{\mathfrak {C}} p_{1}\) and \(s_{1}\overset {\$}{\rightarrow }_{\mathfrak {T}} s_{2}\overset {b}{\rightarrow }_{\mathfrak {T}} s_{3}\overset {\overline {\$}}{\rightarrow }_{\mathfrak {T}} s_{3}\overset {\overline {b}}{\rightarrow }_{\mathfrak {T}} s_{4}\) in \(\mathfrak {C}\) and \(\mathfrak {T}\) labeled with vπA(t) = $b and t, respectively. Then we construct the following run in \(\mathfrak {P}\):
$$ \begin{array}{@{}rcl@{}} \text{Init}_{\mathfrak{P}}\ni((p_{1},p_{1},s_{1}),0)& \overset{\$}{\rightarrow}_{\mathfrak{P}}&((p_{1},p_{2},s_{2}),1)\overset{b}{\rightarrow}_{\mathfrak{P}}((p_{1},p_{1},s_{3}),1)\\ &\overset{\overline{\$}}{\rightarrow}_{\mathfrak{P}}&((p_{2},p_{1},s_{3}),0)\overset{\overline{b}}{\rightarrow}_{\mathfrak{P}}((p_{1},p_{1},s_{4}),0)\in\text{Final}_{\mathfrak{P}} . \end{array} $$
Then we can see v ∘ t = ε ∈[[((p1, p1, s4), 0)]].
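The computation above can be replayed mechanically. The following Python sketch implements the transformation map ∘ under the FIFO semantics used in this example (Definition 3.1 is not repeated in this excerpt, so this is an assumption based on the computations shown); the function names and the encoding of write and read actions as tagged pairs are our own choices for illustration.

```python
BOT = None   # represents the error value ⊥

def apply_action(v, action):
    """Apply one queue action to the content v (a string): ('wr', a)
    appends the letter a at the end, ('rd', a) removes a from the front."""
    if v is BOT:
        return BOT
    kind, a = action
    if kind == 'wr':
        return v + a
    return v[1:] if v.startswith(a) else BOT

def apply_sequence(v, t):
    """v ∘ t for a sequence t of actions."""
    for action in t:
        v = apply_action(v, action)
    return v

# Example 5.7: t = $ b $̄ b̄ applied to the empty queue content.
t = [('wr', '$'), ('wr', 'b'), ('rd', '$'), ('rd', 'b')]
assert apply_sequence('', t) == ''        # ε ∘ $b$̄b̄ = ε
```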
Lemma 5.8
Let \(t\in (W\overline {R})^{*}\) and v ∈ L with v ∘ t ≠ ⊥. Then there is \(\sigma \in \text {post}^{*}(\text {Init}_{\mathfrak {P}})\cap \text {Final}_{\mathfrak {P}}\) with (v ∘ t) ∈[[σ]].
Proof
Let \(t=w_{1}\overline {r_{1}}{\dots } w_{k}\overline {r_{k}}\) with \(w_{1},\dots ,w_{k}\in W\) and \(r_{1},\dots ,r_{k}\in R\). We have \(vw_{1}{\dots } w_{k}\in LW^{*}=L(\mathfrak {C})\). Hence, there is a run \(p_{0},\dots ,p_{\ell }\) labeled with \(vw_{1}{\dots } w_{k}\) in \(\mathfrak {C}\) with \(p_{0}\in I_{\mathfrak {C}}\) and \(p_{\ell }\in F_{\mathfrak {C}}\). Additionally, we have \(t\in (W\overline {R})^{*}=L(\mathfrak {T})\) and therefore a run \(s_{0},\dots ,s_{m}\) with labeling t in \(\mathfrak {T}\) with \(s_{0}\in I_{\mathfrak {T}}\) and \(s_{m}\in F_{\mathfrak {T}}\).
By \(t\in \varSigma _{A}^{*}\) there are \(\alpha _{0},\dots ,\alpha _{m-1}\in \varSigma _{A}\) with \(t=\alpha _{0}\dots \alpha _{m-1}\). Since v ∘ t ≠ ⊥ there are \(v_{0},\dots ,v_{m}\in A^{*}\) with v0 = v and vi+1 = vi ∘ αi for each 0 ≤ i < m. This implies \(v_{i}=v\circ \alpha _{0}\dots \alpha _{i-1}\) and, hence, \(\overline {v}\overline {\pi _{A}(\alpha _{0}\dots \alpha _{i-1})}=\pi _{\overline {A}}(\alpha _{0}\dots \alpha _{i-1})\overline {v_{i}}\). Since \(v\pi _{A}(\alpha _{0}\dots \alpha _{i-1})\) is a prefix of \(vw_{1}{\dots } w_{k}\) we infer that vi is a factor of the word \(vw_{1}{\dots } w_{k}\). Therefore, vi is the labeling of some fragment of the run \(p_{0},\dots ,p_{\ell }\) in \(\mathfrak {C}\).
Now, we want to construct a run \(\sigma _{0},\dots ,\sigma _{m}\) in \(\mathfrak {P}\) from an initial configuration to a final configuration with labeling t. To this end, we define
  • \(x_{i}:=|\alpha _{0}\dots \alpha _{i-1}|_{\overline {A}}\),
  • \(y_{i}:=|v|+|\alpha _{0}\dots \alpha _{i-1}|_{A}\), and
  • ni := |vi|$.
By definition we have \(0\leq x_{i}\leq |t|_{\overline {A}}\leq m\) and |v| ≤ yi ≤ |vt|A = ℓ for each 0 ≤ i ≤ m. Set \(\sigma _{i}:=((p_{x_{i}},p_{y_{i}},s_{i}),\$^{n_{i}})\in \text {Conf}_{\mathfrak {P}}\) for each 0 ≤ i ≤ m. We will prove that \(\sigma _{0},\dots ,\sigma _{m}\) is a run in \(\mathfrak {P}\) with labeling t from \(\text {Init}_{\mathfrak {P}}\) to \(\text {Final}_{\mathfrak {P}}\) such that vi ∈[[σi]]. But first, we have to show \(n_{i}=|\alpha _{0}\dots \alpha _{i-1}|_\$-|\alpha _{0}\dots \alpha _{i-1}|_{\overline {\$}}\) for each 0 ≤ i ≤ m. We do this by induction on i. The case i = 0 is obvious since n0 = 0 by \(v_{0}=v\in L\subseteq (A\setminus \{\$\})^{*}\) and \(\alpha _{0}\dots \alpha _{i-1}=\varepsilon \). Now, let i ≥ 0, assume the equation holds for i, and prove it for i + 1. We have to consider three cases:
1.
\(\alpha _{i}\notin \{\$,\overline {\$}\}\). Then we have
$$ \begin{array}{@{}rcl@{}} n_{i+1}&=|v_{i+1}|_\$=|v_{i}|_\$=n_{i}=|\alpha_{0}\dots\alpha_{i-1}|_\$-|\alpha_{0}\dots\alpha_{i-1}|_{\overline{\$}}\\ &=|\alpha_{0}\dots\alpha_{i}|_\$-|\alpha_{0}\dots\alpha_{i}|_{\overline{\$}} . \end{array} $$
 
2.
αi = $. Then we have
$$ \begin{array}{@{}rcl@{}} n_{i+1}&=|v_{i+1}|_\$=|v_{i}|_\$+1=n_{i}+1=|\alpha_{0}\dots\alpha_{i-1}|_\$-|\alpha_{0}\dots\alpha_{i-1}|_{\overline{\$}}+1\\ &=|\alpha_{0}\dots\alpha_{i-1}\$|_\$-|\alpha_{0}\dots\alpha_{i-1}\$|_{\overline{\$}}=|\alpha_{0}\dots\alpha_{i}|_\$-|\alpha_{0}\dots\alpha_{i}|_{\overline{\$}} . \end{array} $$
 
3.
\(\alpha _{i}=\overline {\$}\). Then, by \(v_{i+1}=v_{i}\circ \overline {\$}\neq \bot \) we have vi = $vi+ 1. Hence, we have
$$ \begin{array}{@{}rcl@{}} n_{i+1}&=|v_{i+1}|_\$=|v_{i}|_\$-1=n_{i}-1=|\alpha_{0}\dots\alpha_{i-1}|_\$-|\alpha_{0}\dots\alpha_{i-1}|_{\overline{\$}}-1\\ &=|\alpha_{0}\dots\alpha_{i-1}\overline{\$}|_\$-|\alpha_{0}\dots\alpha_{i-1}\overline{\$}|_{\overline{\$}}=|\alpha_{0}\dots\alpha_{i}|_\$-|\alpha_{0}\dots\alpha_{i}|_{\overline{\$}} . \end{array} $$
 
To prove that \(\sigma _{m}\in \text {post}^{*}(\text {Init}_{\mathfrak {P}})\cap \text {Final}_{\mathfrak {P}}\) and v ∘ t ∈[[σm]] we demonstrate that \(\sigma _{0}\in \text {Init}_{\mathfrak {P}}\), \(\sigma _{i}\overset {\alpha _{i}}{\rightarrow }_{\mathfrak {P}}\sigma _{i+1}\), vi+1 ∈[[σi+1]], and \(\sigma _{m}\in \text {Final}_{\mathfrak {P}}\). This is done by induction on i.
First, we show \(\sigma _{0}\in \text {Init}_{\mathfrak {P}}\). By definition, we have x0 = n0 = 0 and y0 = |v|. Due to the choice of the run \(p_{0},\dots ,p_{\ell }\) we have \(p_{0}\in I_{\mathfrak {C}}\) and \(p_{0}\overset {v}{\rightarrow }_{\mathfrak {C}} p_{|v|}\) and therefore \(p_{|v|}\in \{q\in Q_{\mathfrak {C}} | \exists u\in L\colon I_{\mathfrak {C}}\overset {u}{\rightarrow }_{\mathfrak {C}} q\}=Q_{L}\). Additionally, by the choice of \(s_{0},\dots ,s_{m}\) we have \(s_{0}\in I_{\mathfrak {T}}\). Hence, \(\sigma _{0}=((p_{0},p_{|v|},s_{0}),\varepsilon )\in \text {Init}_{\mathfrak {P}}\). By \(v\in L\subseteq (A\setminus \{\$\})^{*}\) we can also infer https://static-content.springer.com/image/art%3A10.1007%2Fs00224-021-10031-2/MediaObjects/224_2021_10031_Figo_HTML.gif .
Next, let i ≥ 0. We have to consider two cases:
(W)
αi ∈ A. Then we have xi+1 = xi, yi+1 = yi + 1, and ni+1 = ni + |αi|$. By the choice of the run \(p_{0},\dots ,p_{\ell }\) we have \((p_{y_{i}},\alpha _{i},p_{y_{i+1}})\in {\varDelta }_{\mathfrak {C}}\) and by the choice of \(s_{0},\dots ,s_{m}\) we have \((s_{i},\alpha _{i},s_{{i+1}})\in {\varDelta }_{\mathfrak {T}}\). Hence, there is a transition
$$((p_{x_{i}},p_{y_{i}},s_{i}),\alpha_{i},\pi_{\$}(\alpha_{i}),(p_{x_{i+1}},p_{y_{i+1}},s_{{i+1}}))\in{\varDelta}_{\mathfrak{P}}$$
and, therefore, \(\sigma _{i}\overset {\alpha _{i}}{\rightarrow }_{\mathfrak {P}}\sigma _{i+1}\). Furthermore, we have
https://static-content.springer.com/image/art%3A10.1007%2Fs00224-021-10031-2/MediaObjects/224_2021_10031_Figp_HTML.png
 
(R)
\(\alpha _{i}=\overline {a}\in \overline {A}\). Here, we have xi+ 1 = xi + 1, yi+ 1 = yi, and ni+ 1 = ni −|a|$ ≥ 0. Due to \(v_{i+1}=v_{i}\circ \overline {a}\neq \bot \) we have vi = avi+ 1. Since \(p_{x_{i}},\dots ,p_{y_{i}}\) is a run labeled with vi and a is a prefix of vi, this run begins with an a-edge. This implies \((p_{x_{i}},a,p_{x_{i+1}})\in {\varDelta }_{\mathfrak {C}}\). Additionally, since \(s_{0},\dots ,s_{m}\) is a run labeled with t, we have \((s_{i},\alpha _{i},s_{{i+1}})\in {\varDelta }_{\mathfrak {T}}\). Hence, we have
$$((p_{x_{i}},p_{y_{i}},s_{i}),\overline{a},\overline{\pi_\$(a)},(p_{x_{i+1}},p_{y_{i+1}},s_{{i+1}}))\in{\varDelta}_{\mathfrak{P}}$$
implying \(\sigma _{i}\overset {\alpha _{i}}{\rightarrow }_{\mathfrak {P}}\sigma _{i+1}\). By the induction hypothesis we have vi ∈[[σi]] = https://static-content.springer.com/image/art%3A10.1007%2Fs00224-021-10031-2/MediaObjects/224_2021_10031_Figq_HTML.gif . Since vi = avi+ 1 and \((p_{x_{i}},a,p_{x_{i+1}})\in {\varDelta }_{\mathfrak {C}}\) we know that
https://static-content.springer.com/image/art%3A10.1007%2Fs00224-021-10031-2/MediaObjects/224_2021_10031_Figr_HTML.png
which implies vi+1 = \({a}\backslash {v_{i}}\) ∈[[σi+1]] (recall that vi = avi+1 holds).
 
Finally, we have ym = ℓ implying \(p_{y_{m}}=p_{\ell }\in F_{\mathfrak {C}}\) and \(s_{m}\in F_{\mathfrak {T}}\). Hence, we have \(\sigma _{m}\in \text {Final}_{\mathfrak {P}}\), which finishes our proof. □
Let v ∈ L and \(t\in (W\overline {R})^{*}\) with v ∘ t ≠ ⊥. Recall that we have proven Lemma 5.8 by combining runs in \(\mathfrak {C}\) and \(\mathfrak {T}\) into a t-labeled run in \(\mathfrak {P}\) which simulates the computation v ∘ t. A first approach to prove the converse inclusion would be the following: let \(\sigma \in \text {post}^{*}(\text {Init}_{\mathfrak {P}})\cap \text {Final}_{\mathfrak {P}}\) and w ∈[[σ]]. Then there is a run in \(\mathfrak {P}\) from \(\text {Init}_{\mathfrak {P}}\) to σ labeled with some \(t\in \varSigma _{A}^{*}\). From this run we obtain some accepting run in \(\mathfrak {T}\) labeled with t implying \(t\in L(\mathfrak {T})=(W\overline {R})^{*}\). Unfortunately, we cannot infer v ∘ t ≠ ⊥, as the following example shows:
Example 5.9
We continue Example 5.7. Consider the following accepting run of \(\mathfrak {P}\):
$$ \begin{array}{@{}rcl@{}} \text{Init}_{\mathfrak{P}}\ni((p_{1},p_{1},s_{1}),0)\!&\overset{\$}{\rightarrow}_{\mathfrak{P}}&\!((p_{1},p_{2},s_{2}),1)\overset{a}{\rightarrow}_{\mathfrak{P}}((p_{1},p_{1},s_{3}),1)\\ \!&\overset{\overline{\$}}{\rightarrow}_{\mathfrak{P}}&\!((p_{2},p_{1},s_{3}),0)\!\overset{\overline{b}}{\rightarrow}_{\mathfrak{P}}\!((p_{1},p_{1},s_{4}),0)\!=:\!\sigma\!\in\!\text{Final}_{\mathfrak{P}} . \end{array} $$
Then we have [[σ]] = {ε} and, indeed, \(\varepsilon \in \textsc {Reach}(L,(W\overline {R})^{*})\). However, \(t={\$}a\overline {\$}\overline {b}\) is not an allowed computation of our queue automaton since
$$\varepsilon\circ{\$}a\overline{\$}\overline{b}={\$}\circ a\overline{\$}\overline{b}={\$}a\circ\overline{\$}\overline{b}=a\circ\overline{b}=\bot .$$
The reason for this problem is the lack of memory in our pushdown automaton \(\mathfrak {P}\): it allows the subsequences of write and read actions to mismatch. However, we can avoid this problem by modifying the write actions in our run t. Since the application of a read action in \(\mathfrak {P}\) always requires a step in the NFA \(\mathfrak {C}\), we can obtain a transformation sequence \(t^{\prime }\in (W\overline {R})^{*}\) in which we only read letters that have been written into the queue before. This will finally result in \(w=v\circ t^{\prime }\in \textsc {Reach}(L,(W\overline {R})^{*})\).
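The failed computation of Example 5.9 and the repair described above can be checked with the same kind of sketch as after Example 5.7 (again our own encoding, not the paper's notation): replacing the written block $a by the block $b that is actually read later turns the invalid sequence into the valid computation of Example 5.7.

```python
BOT = None   # ⊥

def apply_sequence(v, t):
    # the same FIFO semantics as in the sketch after Example 5.7
    for kind, a in t:
        if v is BOT:
            return BOT
        v = v + a if kind == 'wr' else (v[1:] if v.startswith(a) else BOT)
    return v

t_bad  = [('wr', '$'), ('wr', 'a'), ('rd', '$'), ('rd', 'b')]   # $ a $̄ b̄
t_good = [('wr', '$'), ('wr', 'b'), ('rd', '$'), ('rd', 'b')]   # $ b $̄ b̄

assert apply_sequence('', t_bad) is BOT   # Example 5.9: ε ∘ $a$̄b̄ = ⊥
assert apply_sequence('', t_good) == ''   # repaired sequence: ε ∘ $b$̄b̄ = ε
```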
Lemma 5.10
Let \(\sigma \in \text {post}^{*}(\text {Init}_{\mathfrak {P}})\cap \text {Final}_{\mathfrak {P}}\). Then we have
$$\llbracket{\sigma}\rrbracket\subseteq\textsc{Reach}(L,(W\overline{R})^{*}) .$$
Proof
Let \(\sigma =((p,q,s),\$^{n})\in \text {post}^{*}(\text {Init}_{\mathfrak {P}})\cap \text {Final}_{\mathfrak {P}}\) and v ∈[[σ]]. Since vhttps://static-content.springer.com/image/art%3A10.1007%2Fs00224-021-10031-2/MediaObjects/224_2021_10031_Figs_HTML.gif , and \(q\in F_{\mathfrak {C}}\) there are a suffix y of a word from LW and \({\$}z_{1},\dots ,{\$}z_{n}\in W\) such that \(v=y{\$}z_{1}{\$}z_{2}\dots {\$}z_{n}\). By v ∈[[σ]] we have \(y\in L(\mathfrak {C}_{p\to f_{\mathfrak {C}}})\cap (A\setminus \{\$\})^{*}\). Furthermore, there is some word \(t\in \varSigma _{A}^{*}\) labeling a path from \(\text {Init}_{\mathfrak {P}}\) to σ. Since every transition of the PDA \(\mathfrak {P}\) simulates a transition of the NFA \(\mathfrak {T}\), we obtain \(t\in L(\mathfrak {T})=(W\overline {R})^{*}\). Hence, there are k ≥ 0, \({\$}w_{1},\dots ,{\$}w_{k}\in W\), and \(r_{1},\dots ,r_{k}\in R\) with \(t={\$}w_{1}\overline {r_{1}}\dots {\$}w_{k}\overline {r_{k}}\).
The PDA \(\mathfrak {P}\) lacks a memory of the concrete paths in \(\mathfrak {C}\) and \(\mathfrak {T}\) and, hence, of the letters that have been written into the queue before. Therefore, it is possible that the transformation t cannot be applied to any word from L (i.e., L ∘ t = {⊥}). But due to this lack of memory we can replace the infixes \({\$}w_{1},\dots ,{\$}w_{k}\) in t by arbitrary words from W. Hence, we construct a new transformation \(t^{\prime }\in L(\mathfrak {T})=(W\overline {R})^{*}\) which labels some other path in \(\mathfrak {P}\) from \(\text {Init}_{\mathfrak {P}}\) to σ and which corresponds to some valid computation of the queue automaton (i.e., \(L\circ t^{\prime }\neq \{\bot \}\)).
Recall that $-transitions in \(\mathfrak {P}\) increase the number of $’s in the stack and \(\overline {\$}\)-transitions in \(\mathfrak {P}\) decrease the number of $’s in the stack. Therefore, since t is some labeling of a path in \(\mathfrak {P}\), each prefix of t contains at least as many $’s as \(\overline {\$}\)’s. Hence, we have \(|r_{1}{\dots } r_{i}|_{\$}\leq |{\$}w_{1}\dots {\$}w_{i}|_{\$}=i\) for each 1 ≤ i ≤ k.
Due to \(r_{1},\dots ,r_{k}\in R\subseteq A^{*}\) there are \(\ell \in \mathbb {N}\) and words \(x_{0},\dots ,x_{\ell }\in (A\setminus \{\$\})^{*}\) with \(r_{1}{\dots } r_{k}=x_{0}{\$}\dots {\$}x_{\ell }\). Hence, since \(|r_{1}{\dots } r_{i}|_{\$}\leq i\) we know that \(r_{1}{\dots } r_{i}\) is a prefix of \(x_{0}{\$}\dots {\$}x_{i}\) for each 1 ≤ i ≤ ℓ. In particular, we have k = |t|$ and \(\ell =|t|_{\overline {\$}}\) implying k − ℓ = n.
Now, we distinguish two cases: first, suppose ℓ = 0, i.e., k = n and \(x_{0}=r_{1}{\dots } r_{k}\in A^{*}\). Therefore, a path in \(\mathfrak {P}\) with labeling t from \(\text {Init}_{\mathfrak {P}}\) to σ requires some $-free path in \(\mathfrak {C}\) labeled with \(r_{1}{\dots } r_{k}\) from \(I_{\mathfrak {C}}\) to \(p\in Q_{\mathfrak {C}}\) (this path is represented in the first component of \(\mathfrak {P}\)’s states). Due to \(L(\mathfrak {C})=LW^{*}\), \(L\subseteq (A\setminus \{\$\})^{*}\), and \(W\subseteq {\$}(A\setminus \{\$\})^{*}\), the word \(r_{1}{\dots } r_{k}\) is some prefix of a word from L. Hence, we have \(x_{0}\in L(\mathfrak {C}_{I_{\mathfrak {C}}\to p})\cap (A\setminus \{\$\})^{*}\) and \(y\in L(\mathfrak {C}_{p\to f_{\mathfrak {C}}})\cap (A\setminus \{\$\})^{*}\) implying x0y ∈ L. Now, we prove
$$v=x_{0}y\circ{\$}z_{1}\overline{r_{1}}{\$}\dots{\$}z_{k}\overline{r_{k}}\in\textsc{Reach}(L,(W\overline{R})^{*}) .$$
Indeed, the following equalities hold:
$$ \begin{array}{@{}rcl@{}} x_{0}y\circ{\$}z_{1}\overline{r_{1}}{\$}\dots{\$}z_{k}\overline{r_{k}}&=&r_{1}{\dots} r_{k}y\circ {\$}z_{1}\overline{r_{1}}{\$}\dots{\$}z_{k}\overline{r_{k}}\\ &=&r_{2}{\dots} r_{k}y{\$}z_{1}\circ {\$}z_{2}\overline{r_{2}}{\$}\dots{\$}z_{k}\overline{r_{k}}\\ & \ \vdots\\ &=&r_{i+1}{\dots} r_{k}y{\$}z_{1}\dots{\$}z_{i}\circ {\$}z_{i+1}\overline{r_{i+1}}{\$}\dots{\$}z_{k}\overline{r_{k}}\\ & \ \vdots\\ &=&y{\$}z_{1}\dots{\$}z_{k}=v . \end{array} $$
Since \({\$}z_{1},\dots ,\$z_{k}\in W\) and \(r_{1},\dots ,r_{k}\in R\) we have \(\$z_{1}\overline {r_{1}}\$\dots \$z_{k}\overline {r_{k}}\in (W\overline {R})^{*}\). Then, from x0y ∈ L we can infer \(v\in \textsc {Reach}(L,(W\overline {R})^{*})\).
Next, we assume ℓ ≥ 1. Since t is the labeling of some path from \(\text {Init}_{\mathfrak {P}}\) to σ in \(\mathfrak {P}\) we can prove (by observing the first component of \(\mathfrak {P}\)’s states) that there is a path in \(\mathfrak {C}\) from \(I_{\mathfrak {C}}\) to p labeled with \(r_{1}{\dots } r_{k}=x_{0}{\$}\dots {\$}x_{\ell }\). By definition of \(L(\mathfrak {C})=LW^{*}\), \(L\subseteq (A\setminus \{\$\})^{*}\), and \(W\subseteq {\$}(A\setminus \{\$\})^{*}\) we have x0 ∈ L, \({\$}x_{1},\dots ,{\$}x_{\ell -1}\in W\), and \({\$}x_{\ell }\) is a prefix of a word in W.
Since \(x_{0}{\$}\dots {\$}x_{\ell }\in L(\mathfrak {C}_{I_{\mathfrak {C}}\to p})\) and \(v=y{\$}z_{1}{\$}\dots {\$}z_{n}\in L(\mathfrak {C}_{p\to F_{\mathfrak {C}}})\subseteq \llbracket {\sigma }\rrbracket \) we have
$$x_{0}{\$}\dots{\$}x_{\ell} y{\$}z_{1}{\$}\dots{\$}z_{n}\in L(\mathfrak{C})=LW^{*} ,$$
i.e., \({\$}x_{\ell } y\in W\). We now want to prove
$$v=x_{0}\circ {\$}x_{1}\overline{r_{1}}{\$}x_{2}\overline{r_{2}}{\dots} {\$}x_{\ell} y\overline{r_{\ell}}{\$}z_{1}\overline{r_{\ell+1}}\dots{\$}z_{n}\overline{r_{k}}\in\textsc{Reach}(L,(W\overline{R})^{*}) .$$
First, we prove by induction on 1 ≤ i < ℓ that
$$x_{0}\circ {\$}x_{1}\overline{r_{1}}{\$}x_{2}\overline{r_{2}}{\dots} {\$}x_{i}\overline{r_{i}}={r_{1}{\dots} r_{i}}\backslash{(x_{0}{\$}\dots{\$}x_{i})}$$
holds. To this end, let i = 1. Then we have
$$x_{0}\circ {\$}x_{1}\overline{r_{1}}=x_{0}{\$}x_{1}\circ \overline{r_{1}}={r_{1}}\backslash{(x_{0}{\$}x_{1})}$$
which is defined since r1 is a prefix of x0$x1 as mentioned above. Now, let 1 < i < ℓ. Then we have
$$ \begin{array}{@{}rcl@{}} x_{0}\circ {\$}x_{1}\overline{r_{1}}{\$}x_{2}\overline{r_{2}}{\dots} {\$}x_{i}\overline{r_{i}}&=&{r_{1}{\dots} r_{i-1}}\backslash{(x_{0}{\$}\dots{\$}x_{i-1})}\circ{\$}x_{i}\overline{r_{i}} \qquad\text{(by ind.~hyp.)}\\ &=&{r_{1}{\dots} r_{i-1}}\backslash{(x_{0}{\$}\dots{\$}x_{i-1})}\cdot {\$}x_{i}\circ\overline{r_{i}}\\ &=&{r_{1}{\dots} r_{i-1}}\backslash{(x_{0}{\$}\dots{\$}x_{i-1}{\$}x_{i})}\circ\overline{r_{i}}\\ &=&{r_{i}}\backslash{\bigl({r_{1}{\dots} r_{i-1}}\backslash{(x_{0}{\$}\dots{\$}x_{i})}\bigr)}\\ &=&{r_{1}{\dots} r_{i}}\backslash{(x_{0}{\$}\dots{\$}x_{i})} . \end{array} $$
The last two equations hold since \(r_{1}{\dots } r_{i}\) is a prefix of \(x_{0}{\$}\dots {\$}x_{i}\) as we have mentioned above. Next, we can prove the following equalities:
$$ \begin{array}{@{}rcl@{}} && x_{0}\circ {\$}x_{1}\overline{r_{1}}{\$}x_{2}\overline{r_{2}}{\dots} {\$}x_{\ell} y\overline{r_{\ell}}{\$}z_{1}\overline{r_{\ell+1}}\dots{\$}z_{n}\overline{r_{k}}\\ & =&{r_{1}{\dots} r_{\ell-1}}\backslash{(x_{0}{\$}\dots{\$}x_{\ell-1})}\circ {\$}x_{\ell} y\overline{r_{\ell}}{\$}z_{1}\overline{r_{\ell+1}}\dots{\$}z_{n}\overline{r_{k}}\\ & = &{r_{1}{\dots} r_{\ell-1}}\backslash{(x_{0}{\$}\dots{\$}x_{\ell-1}{\$}x_{\ell} y)}\circ\overline{r_{\ell}}{\$}z_{1}\overline{r_{\ell+1}}\dots{\$}z_{n}\overline{r_{k}}\\ & = &{r_{1}{\dots} r_{\ell-1}}\backslash{(r_{1}{\dots} r_{k}y)}\circ\overline{r_{\ell}}{\$}z_{1}\overline{r_{\ell+1}}\dots{\$}z_{n}\overline{r_{k}}\\ & = & r_{\ell}{\dots} r_{k}y\circ\overline{r_{\ell}}{\$}z_{1}\overline{r_{\ell+1}}\dots{\$}z_{n}\overline{r_{k}}\\ & = & r_{\ell+1}{\dots} r_{k}y\circ {\$}z_{1}\overline{r_{\ell+1}}\dots{\$}z_{n}\overline{r_{k}}\\ & = & r_{\ell+2}{\dots} r_{k}y{\$}z_{1}\circ {\$}z_{2}\overline{r_{\ell+2}}\dots{\$}z_{n}\overline{r_{k}}\\ & {\vdots} &\ \\ & = & y{\$}z_{1}{\$}\dots{\$}z_{n}=v \end{array} $$
From \({\$}x_{1},\dots ,{\$}x_{\ell } y,{\$}z_{1},\dots ,{\$}z_{n}\in W\) and \(r_{1},\dots ,r_{k}\in R\) we can infer
$${\$}x_{1}\overline{r_{1}}{\$}x_{2}\overline{r_{2}}{\dots} {\$}x_{\ell} y\overline{r_{\ell}}{\$}z_{1}\overline{r_{\ell+1}}\dots{\$}z_{n}\overline{r_{k}}\in (W\overline{R})^{*} .$$
With x0 ∈ L we can finally infer \(v\in \textsc {Reach}(L,(W\overline {R})^{*})\). □

5.3 The Main Result

Until now we have seen the effective preservation of regularity if the read-write independent set \(T\subseteq \varSigma _{A}^{*}\) satisfies a special condition, namely, \(T=W\overline {R}\) where https://static-content.springer.com/image/art%3A10.1007%2Fs00224-021-10031-2/MediaObjects/224_2021_10031_Figt_HTML.gif . From this special case we infer now the effective preservation of regularity for arbitrary read-write independent sets.
Theorem 5.11 (Main Theorem)
Let A be an alphabet, \(L\subseteq A^{*}\) be regular, and \(T\subseteq \varSigma _{A}^{*}\) be read-write independent and regular. Then Reach(L, T*) and BackReach(L, T*) are effectively regular. In particular, from NFAs accepting L and T we can compute NFAs accepting Reach(L, T*) and BackReach(L, T*) in polynomial time.
We first consider the effective and efficient regularity of Reach(L, T*). Recall that we are able to de-shuffle T by Lemma 5.4, i.e., we have
$$\textsc{Reach}(L,T)=\textsc{Reach}(L,\pi_{A}(T)\pi_{\overline{A}}(T)) .$$
So, when computing Reach(L, T) it suffices to only consider the de-shuffled words in T. The following lemma states that we are allowed to insert the synchronizing letter $ into our de-shuffled words:
Lemma 5.12
Let A be an alphabet, \(\$\notin A\) be another symbol, and \(L,W,R\subseteq A^{*}\). Set \(W^{\prime }:={\$}W\) and https://static-content.springer.com/image/art%3A10.1007%2Fs00224-021-10031-2/MediaObjects/224_2021_10031_Figu_HTML.png . Then we have \(\textsc {Reach}(L,W\overline {R})=\pi _{A}(\textsc {Reach}(L,W^{\prime }\overline {R^{\prime }}))\).
Proof
First, let \(x\in \textsc {Reach}(L,W\overline {R})\). Then there are v ∈ L, w ∈ W, and r ∈ R with \(v\circ w\overline {r}=x\neq \bot \). Due to Definition 3.1 we have rx = vw. We can construct https://static-content.springer.com/image/art%3A10.1007%2Fs00224-021-10031-2/MediaObjects/224_2021_10031_Figv_HTML.gif and https://static-content.springer.com/image/art%3A10.1007%2Fs00224-021-10031-2/MediaObjects/224_2021_10031_Figw_HTML.gif satisfying \(r^{\prime }x^{\prime }=v{\$}w\), i.e., we have \(x^{\prime }={r^{\prime }}\backslash {v{\$}w}\). Hence, the following holds:
$$\bot\neq x^{\prime}={r^{\prime}}\backslash{v{\$}w}=v{\$}w\circ\overline{r^{\prime}}=v\circ{\$}w\overline{r^{\prime}}\in\textsc{Reach}(L,W^{\prime}\overline{R^{\prime}})$$
implying \(x=\pi _{A}(x^{\prime })\in \pi _{A}(\textsc {Reach}(L,W^{\prime }\overline {R^{\prime }}))\).
Now, let \(x\in \pi _{A}(\textsc {Reach}(L,W^{\prime }\overline {R^{\prime }}))\). Then there is \(x^{\prime }\in \textsc {Reach}(L,W^{\prime }\overline {R^{\prime }})\) with \(x=\pi _{A}(x^{\prime })\), i.e., we have v ∈ L, \(w^{\prime }\in W^{\prime }\), and \(r^{\prime }\in R^{\prime }\) with \(v\circ w^{\prime }\overline {r^{\prime }}=x^{\prime }\neq \bot \). Again, by Definition 3.1 we have \(r^{\prime }x^{\prime }=vw^{\prime }\). Since πA is a homomorphism, we can infer \(\pi _{A}(r^{\prime })\pi _{A}(x^{\prime })=\pi _{A}(v)\pi _{A}(w^{\prime })\) and, therefore, \(\pi _{A}(x^{\prime })={\pi _{A}(r^{\prime })}\backslash {\pi _{A}(v)\pi _{A}(w^{\prime })}\). Hence, the following equations hold:
$$\bot\neq x=\pi_{A}(x^{\prime})={\pi_{A}(r^{\prime})}\backslash{\pi_{A}(v)\pi_{A}(w^{\prime})}=\pi_{A}(v)\circ\pi_{A}(w^{\prime})\overline{\pi_{A}(r^{\prime})} .$$
Since πA(v) = v, \(\pi _{A}(w^{\prime })\in \pi _{A}(W^{\prime })=W\), and \(\pi _{A}(r^{\prime })\in \pi _{A}(R^{\prime })=R\) we can finally infer \(x\in \textsc {Reach}(L,W\overline {R})\).□
Now we can prove our main theorem in this section:
Proof
(of Theorem 5.11) Let W := πA(T) and \(R:=\pi _{\overline {A}}(T)\), which are both regular by the closure properties of regular languages. We introduce a new letter \(\$\notin A\). Then we can compute NFAs accepting \(W^{\prime }:={\$}W\) and https://static-content.springer.com/image/art%3A10.1007%2Fs00224-021-10031-2/MediaObjects/224_2021_10031_Figx_HTML.gif . By Theorem 5.5 we know that \(\textsc {Reach}(L,(W^{\prime }\overline {R^{\prime }})^{*})\) is effectively regular as well. By iterated application of Lemmas 5.4 and 5.12 we can infer that
$$\textsc{Reach}(L,T^{*})=\textsc{Reach}(L,(W\overline{R})^{*})=\pi_{A}(\textsc{Reach}(L,(W^{\prime}\overline{R^{\prime}})^{*}))$$
holds. Hence, due to the closure properties of the class of regular languages, Reach(L, T*) is effectively regular. Note that the modifications of W and R as well as the projection to A are possible in linear time and space. Hence, an NFA accepting Reach(L, T*) can still be computed in polynomial time.
Finally, we have to consider BackReach(L, T*). Due to Lemma 5.4 and Theorem 3.3 we have
$$\textsc{BackReach}(L,T^{*})=\textsc{BackReach}(L,(W\overline{R})^{*})={\textsc{Reach}({L}^{\text{R}},({R}^{\text{R}}\overline{{W}^{\text{R}}})^{*})}^{\text{R}} .$$
By closure properties and the statement above we obtain the effective and efficient regularity of BackReach(L, T*). □
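For intuition only, the set Reach(L,(W R̄)*) can be under-approximated by a bounded breadth-first exploration that alternately writes a word from W and reads a word from R. The sketch below is not the polynomial-time NFA construction of this section; it is a brute-force illustration, the bound `rounds` is a hypothetical parameter of ours, and the concrete choice of R is also ours (the set R of Example 5.7 is given in a figure that is not reproduced in this excerpt).

```python
def apply_write(v, w):
    return v + w                       # writing w appends it to the content

def apply_read(v, r):
    # reading r removes it from the front if possible, otherwise ⊥ (None)
    return v[len(r):] if v.startswith(r) else None

def bounded_reach(L, W, R, rounds):
    """Contents reachable from L by at most `rounds` iterations of
    writing some w ∈ W followed by reading some r ∈ R."""
    frontier, seen = set(L), set(L)
    for _ in range(rounds):
        nxt = set()
        for v in frontier:
            for w in W:
                for r in R:
                    u = apply_read(apply_write(v, w), r)
                    if u is not None:
                        nxt.add(u)
        frontier = nxt - seen
        seen |= nxt
    return seen

# L and W as in Example 5.7; R = {$a, $b, $} is a purely illustrative choice.
assert bounded_reach({''}, {'$a', '$b'}, {'$a', '$b', '$'}, rounds=3) == {'', 'a', 'b'}
```

Such a bounded search only covers finitely many iterations of the write/read loop, whereas the construction above captures the full, in general infinite, set at once.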
We can use Theorem 5.11 to prove the effective preservation of regularity for some further classes of transformation languages. The following corollary lists some of them:
Corollary 5.13
Let A be an alphabet, \(L\subseteq A^{*}\) be regular, and \(T\subseteq \varSigma _{A}^{*}\) be regular. Then Reach(L, T*) and BackReach(L, T*) are effectively regular if
(1)
\(T=\overline {R_{1}}W\overline {R_{2}}\) for some regular sets \(W,R_{1},R_{2}\subseteq A^{*}\),
 
(2)
T = {t} for some \(t\in \varSigma _{A}^{*}\) (cf. [19]), or
 
(3)
\(T\subseteq A^{*}\cup \overline {A}^{*}\).
 
In all of these cases the computation of NFAs accepting Reach(L, T*) and BackReach(L, T*), respectively, is possible in polynomial time.
Proof
First, we prove (1). In this case we have
$$(\overline{R_{1}}W\overline{R_{2}})^{*}=\{\varepsilon\}\cup\overline{R_{1}}(W\overline{R_{2}R_{1}})^{*}W\overline{R_{2}} .$$
Hence, due to Proposition 3.5 and Theorem 5.11, \(\textsc {Reach}(L,(\overline {R_{1}}W\overline {R_{2}})^{*})\) is effectively regular.
Next, we consider (2). Due to Proposition 4.3 we can compute a word \(s\in \overline {A}^{*}A^{*}\overline {A}^{*}\) with s ≡ t. Using (1) we know that Reach(L, s*) is effectively regular. Hence Reach(L, t*) is regular as well.
Finally, we consider (3). Let \(W,R\subseteq A^{*}\) with \(T=W\cup \overline {R}\). Then we have \(T^{*}=(W^{*}\overline {R}^{*})^{*}\). Hence, due to Theorem 5.11, Reach(L, T*) is effectively regular. □
As we have seen, Theorem 5.11 implies the effective preservation of regularity for a large class of sets of transformation sequences. However, we think our result can be generalized to an even larger class of languages. Recall that \(T\subseteq \varSigma _{A}^{*}\) is read-write independent if, for each pair s, t of words in T, the de-shuffled combination \(\pi _{A}(s)\pi _{\overline {A}}(t)\) of these words also belongs to T. A possible generalization is to drop the requirement that this combination is de-shuffled:
Conjecture 5.14
Let A be an alphabet, \(L\subseteq A^{*}\) be regular, and \(T\subseteq \varSigma _{A}^{*}\) be regular such that for each s, t ∈ T there is r ∈ T with πA(r) = πA(s) and \(\pi _{\overline {A}}(r)=\pi _{\overline {A}}(t)\). We conjecture that in this case Reach(L, T*) is effectively regular.
The proof of Theorem 5.11 does not work in this case; in particular, Lemma 5.4, where we de-shuffle the words from T, fails in certain cases. For example, we have \(\textsc {Reach}(\{\varepsilon \},(\overline {a} aa)^{*})=\{\varepsilon \}\neq a^{*}=\textsc {Reach}(\{\varepsilon \},(aa\overline {a})^{*})\). However, possibly the construction of our PDA \(\mathfrak {P}\) in the proof of Theorem 5.5 can be adapted to this more general case.

6 Partially Lossy Queues

Until now we have only considered queues which are reliable. We can also prove the results from the previous sections for (partially) lossy queue automata. These partially lossy queue automata are queue automata with an additional, uncontrollable action: they may forget those parts of their content that are specified by a so-called lossiness alphabet.
Definition 6.1
A lossiness alphabet is a tuple \({\mathscr{L}}=(F,U)\) where F and U are two finite sets with F ∩ U = ∅. We call F the set of forgettable letters and U the set of unforgettable letters.
From a given lossiness alphabet \({\mathscr{L}}=(F,U)\) we also obtain the alphabet \(A_{{\mathscr{L}}}=F\cup U\) of all possible queue contents and the alphabet \(\varSigma _{{\mathscr{L}}}=A_{{\mathscr{L}}}\cup \overline {A_{{\mathscr{L}}}}\) of all queue actions.
In fact, a partially lossy queue is allowed to forget any letter from F in its content at any time. Here, we first consider partially lossy queues with restricted lossiness. Concretely, we consider only those computations in which the queue forgets letters when necessary, that is, if the queue tries to read some letter which is preceded by other, forgettable letters.
Formally, the computations of such restricted partially lossy queues are defined as follows:
Definition 6.2
Let \({\mathscr{L}}=(F,U)\) be a lossiness alphabet and \(\bot \notin A_{{\mathscr{L}}}\). Then the map \(\circ _{{\mathscr{L}}}\colon (A_{{\mathscr{L}}}^{*}\cup \{\bot \})\times \varSigma _{{\mathscr{L}}}^{*}\to (A_{{\mathscr{L}}}^{*}\cup \{\bot \})\) is defined for each \(v\in A_{{\mathscr{L}}}^{*}\), \(a,b\in A_{{\mathscr{L}}}\), and \(t\in \varSigma _{{\mathscr{L}}}^{*}\) as follows:
1.
\(v\circ _{{\mathscr{L}}}\varepsilon =v\)
 
2.
\(v\circ _{{\mathscr{L}}} at=va\circ _{{\mathscr{L}}} t\)
 
3.
\(bv\circ _{{\mathscr{L}}}\overline {a} t=\begin {cases} v\circ _{{\mathscr{L}}} t & \text {if }a=b\\ v\circ _{{\mathscr{L}}}\overline {a} t & \text {if }b\in F\setminus \{a\}\\ \bot & \text {otherwise} \end {cases}\)
 
4.
\(\varepsilon \circ _{{\mathscr{L}}}\overline {a} t=\bot \circ _{{\mathscr{L}}} t=\bot \)
 
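The map \(\circ _{{\mathscr{L}}}\) can be transcribed directly from Definition 6.2. The following Python sketch (with our own encoding of actions as tagged pairs) drops forgettable letters different from a at the front of the queue when a is read, and yields ⊥ (here `None`) in all other failing cases.

```python
BOT = None   # represents ⊥

def plq_apply(v, t, F):
    """v ∘_ℒ t for the lossiness alphabet ℒ = (F, U): v is a string over
    F ∪ U, t is a list of ('wr', a) / ('rd', a) actions, F is the set of
    forgettable letters.  Follows Definition 6.2 case by case."""
    for kind, a in t:
        if v is BOT:
            return BOT                       # case 4: ⊥ ∘_ℒ t = ⊥
        if kind == 'wr':
            v = v + a                        # case 2: v ∘_ℒ at = va ∘_ℒ t
        else:
            i = 0
            # case 3, second line: forget forgettable letters b ∈ F \ {a}
            while i < len(v) and v[i] != a and v[i] in F:
                i += 1
            if i < len(v) and v[i] == a:     # case 3, first line: consume a
                v = v[i + 1:]
            else:                            # case 3, third line / case 4: ⊥
                v = BOT
    return v

# With F = {'f'} and U = {'u'}: reading u from 'fu' forgets the leading f,
# reading u from 'uu' consumes the first u, reading f from 'uf' fails.
assert plq_apply('fu', [('rd', 'u')], F={'f'}) == ''
assert plq_apply('uu', [('rd', 'u')], F={'f'}) == 'u'
assert plq_apply('uf', [('rd', 'f')], F={'f'}) is BOT
```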
Let A be some alphabet and \(v,w\in A^{*}\) be two words. Then v is a subword of w (denoted by v ≼ w) if we have https://static-content.springer.com/image/art%3A10.1007%2Fs00224-021-10031-2/MediaObjects/224_2021_10031_Figy_HTML.gif . The induced relation \(\preceq \subseteq (A^{*})^{2}\) is a partial ordering on A*. Let \({\mathscr{L}}=(F,U)\) be a lossiness alphabet and \(v,w\in A_{{\mathscr{L}}}^{*}\). We say that v is an \({\mathscr{L}}\)-subword of w (denoted by \(v\preceq _{{\mathscr{L}}} w\)) if πU(w) ≼ v ≼ w holds, i.e., v is a subword of w which contains at least all unforgettable letters from w. It is easy to see that \(\preceq _{(\emptyset ,U)}\) is the equality relation and \(\preceq _{(F,\emptyset )}\) is the subword relation.
Next, we want to describe the computations of non-restricted partially lossy queues. In [26] we have proven that the set of reachable queue contents after application of \(t\in \varSigma _{{\mathscr{L}}}^{*}\) on some content \(v\in A_{{\mathscr{L}}}^{*}\) is the set of all \({\mathscr{L}}\)-subwords of \(v\circ _{{\mathscr{L}}}t\). To this end, we define up- and downclosures of some language \(L\subseteq A_{{\mathscr{L}}}^{*}\) with respect to \(\preceq _{{\mathscr{L}}}\): the downclosure of L is
$$\downarrow_{\mathcal{L}}{L}:=\{v\in A_{\mathcal{L}}^{*}\mid\exists w\in L\colon v\preceq_{\mathcal{L}} w\} .$$
Similarly, the upclosure of L is
$$\uparrow_{\mathcal{L}}{L}:=\{w\in A_{\mathcal{L}}^{*}\mid\exists v\in L\colon v\preceq_{\mathcal{L}} w\} .$$
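For finite sets of words, the relation \(\preceq _{{\mathscr{L}}}\) and the two closures can be evaluated by brute force. The sketch below (helper names are ours) checks πU(w) ≼ v ≼ w with a standard subsequence test and restricts the closures to a finite candidate set, since the closures are infinite sets in general.

```python
def is_subword(v, w):
    """v ≼ w: v is obtained from w by deleting letters (subsequence check)."""
    it = iter(w)
    return all(c in it for c in v)

def is_L_subword(v, w, U):
    """v ≼_ℒ w for ℒ = (F, U): π_U(w) ≼ v ≼ w."""
    pi_U_w = ''.join(c for c in w if c in U)
    return is_subword(pi_U_w, v) and is_subword(v, w)

def downclosure(L, candidates, U):
    """↓_ℒ L restricted to a finite candidate set of words."""
    return {v for v in candidates if any(is_L_subword(v, w, U) for w in L)}

def upclosure(L, candidates, U):
    """↑_ℒ L restricted to a finite candidate set of words."""
    return {w for w in candidates if any(is_L_subword(v, w, U) for v in L)}

# ℒ = ({'f'}, {'u'}): 'fu' has exactly the ℒ-subwords 'fu' and 'u',
# since the unforgettable letter u may never be dropped.
cand = {'', 'f', 'u', 'fu', 'uf'}
assert downclosure({'fu'}, cand, U={'u'}) == {'fu', 'u'}
assert upclosure({'u'}, cand, U={'u'}) == {'u', 'fu', 'uf'}
```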
Hence, the set of reachable contents of a (non-restricted) partially lossy queue after application of t on the input v is \(\downarrow _{{\mathscr{L}}}{(v\circ _{{\mathscr{L}}}t)}\). Therefore, we define our reachability problems as follows:
Problem 6.3
Let \({\mathscr{L}}=(F,U)\) be a lossiness alphabet, \(L\subseteq A_{{\mathscr{L}}}^{*}\) be a set of queue contents, and \(T\subseteq \varSigma _{{\mathscr{L}}}^{*}\) be a regular set of transformation sequences. The set of queue contents that are reachable from L via T is
$$ \textsc{Reach}_{\mathcal{L}}(L,T):=\downarrow_{\mathcal{L}}{((L\circ_{\mathcal{L}} T)\setminus\{\bot\})} $$
and the set of queue contents that can reach L via T is
$$\textsc{BackReach}_{\mathcal{L}}(L,T):=\uparrow_{\mathcal{L}}{\{v\in A_{\mathcal{L}}^{*} | (v\circ_{\mathcal{L}} T)\cap L\neq\emptyset\}} .$$
Partially lossy queue automata with a lossiness alphabet \({\mathscr{L}}=(\emptyset ,U)\) are reliable. Hence, we have \(\textsc {Reach}_{{\mathscr{L}}}=\textsc {Reach}\) and \(\textsc {BackReach}_{{\mathscr{L}}}=\textsc {BackReach}\) in this case. In this reliable case we have a strong duality between forwards and backwards reachability. However, this duality does not hold for arbitrary lossiness alphabets: if \({\mathscr{L}}=(F,U)\) is a lossiness alphabet with a ∈ F we have \(\textsc {Reach}_{{\mathscr{L}}}(\{\varepsilon \},a)=\{\varepsilon ,a\}\), which cannot be transformed into \(\textsc {BackReach}_{{\mathscr{L}}}(\{\varepsilon \},\overline {a})=F^{*}aF^{*}\) using reversal. Hence, we have to consider forwards and backwards reachability separately in this case. Nevertheless, we will see later in this section that we can reduce forwards and backwards reachability for arbitrary partially lossy queues to reachability in reliable queues.
Now, we consider fully lossy queues: let \({\mathscr{L}}=(F,\emptyset )\) be a lossiness alphabet. Then, for regular languages \(L\subseteq A_{{\mathscr{L}}}^{*}=F^{*}\) and \(T\subseteq \varSigma _{{\mathscr{L}}}^{*}\), the set \(\textsc {Reach}_{{\mathscr{L}}}(L,T)\) has a decidable membership problem [9] and, since it is downwards closed under the subword ordering ≼ [10], it is regular. However, we cannot compute an NFA accepting this set [11, 12], and this holds even if we start from the empty queue only (i.e., L = {ε}). Surprisingly, the set \(\textsc {BackReach}_{{\mathscr{L}}}(L,T)\) is effectively regular [9], but the computation of an NFA accepting this language is not primitive recursive [13, 14].
Hence, again we try to approximate the reachability problem with the help of meta-transformations. To this end, we need another definition:
Definition 6.4
Let \({\mathscr{L}}=(F,U)\) be a lossiness alphabet and \(w=a_{1}a_{2}{\dots } a_{n}\in A_{{\mathscr{L}}}^{*}\) with \(a_{1},a_{2},\dots ,a_{n}\in A_{{\mathscr{L}}}\). The set of the reduced \({\mathscr{L}}\)-superwords of w is
$$\text{redsup}_{\mathcal{L}}(w):=\{w_{1}a_{1}w_{2}a_{2}{\dots} w_{n}a_{n}\mid\forall 1\leq i\leq n\colon w_{i}\in (F\setminus\{a_{i}\})^{*}\} .$$
Let \(w\in A_{{\mathscr{L}}}^{*}\). Then it is easy to see that a reduced \({\mathscr{L}}\)-superword \(v\in \text {redsup}_{{\mathscr{L}}}(w)\) is also an \({\mathscr{L}}\)-superword of w, i.e., \(w\preceq _{{\mathscr{L}}}v\). However, in general \(w\preceq _{{\mathscr{L}}}v\) does not imply \(v\in \text {redsup}_{{\mathscr{L}}}(w)\) since, e.g., \(w\preceq _{{\mathscr{L}}}v\) allows adding some forgettable letters at the end of w. If F = ∅ holds, then there is exactly one reduced \({\mathscr{L}}\)-superword of w, namely \(\text {redsup}_{{\mathscr{L}}}(w)=\{w\}\). Note that \(\text {redsup}_{{\mathscr{L}}}(w)\) is effectively regular. We can also extend this notion to languages:
Lemma 6.5
Let \({\mathscr{L}}=(F,U)\) be a lossiness alphabet and \(L\subseteq A_{{\mathscr{L}}}^{*}\) be regular. Then the language \(\text {redsup}_{{\mathscr{L}}}(L):=\bigcup _{w\in L}\text {redsup}_{{\mathscr{L}}}(w)\) is effectively regular. An NFA accepting \(\text {redsup}_{{\mathscr{L}}}(L)\) can be computed in polynomial time.
Proof
(idea) Let \(\mathfrak {A}=(Q_{\mathfrak {A}},A_{{\mathscr{L}}},I_{\mathfrak {A}},{\varDelta }_{\mathfrak {A}},F_{\mathfrak {A}})\) be an NFA accepting L. We can compute an NFA \(\mathfrak {B}=(Q_{\mathfrak {B}},A_{{\mathscr{L}}},I_{\mathfrak {B}},{\varDelta }_{\mathfrak {B}},F_{\mathfrak {B}})\) as follows:
  • \(Q_{\mathfrak {B}}:=Q_{\mathfrak {A}}\times A_{{\mathscr{L}}}\),
  • \(I_{\mathfrak {B}}:=I_{\mathfrak {A}}\times A_{{\mathscr{L}}}\),
  • \(F_{\mathfrak {B}}:=F_{\mathfrak {A}}\times A_{{\mathscr{L}}}\), and
  • \({\varDelta }_{\mathfrak {B}}:=\{((p,a),a,(q,b))\!\mid \! (p,a,q)\in {\varDelta }_{\mathfrak {A}}\}\cup \{((p,b),a,(p,b))\mid a\in F\setminus \{b\}\}\).
In other words, we simulate \(\mathfrak {A}\) in the first component of the states of \(\mathfrak {B}\). In the second component we guess the letter which \(\mathfrak {A}\) reads on its next step. With the help of this information we are able to read some other forgettable letters. Hence, we obtain \(L(\mathfrak {B})=\text {redsup}_{{\mathscr{L}}}(L)\). □
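Independently of the automaton construction sketched above, membership in \(\text {redsup}_{{\mathscr{L}}}(w)\) can be tested directly from Definition 6.4, since \(\text {redsup}_{{\mathscr{L}}}(w)\) is described by the regular expression (F∖{a1})* a1 ⋯ (F∖{an})* an. The following Python sketch (our own helper names, not part of the paper) builds that expression; it can serve, e.g., as a reference when testing an implementation of the NFA \(\mathfrak {B}\).

```python
import re

def redsup_pattern(w, F):
    """A regular expression describing redsup_ℒ(w) as in Definition 6.4:
    (F \ {a_1})* a_1 (F \ {a_2})* a_2 ... (F \ {a_n})* a_n."""
    parts = []
    for a in w:
        allowed = ''.join(re.escape(c) for c in sorted(F) if c != a)
        if allowed:
            parts.append('[' + allowed + ']*')
        parts.append(re.escape(a))
    return re.compile(''.join(parts))

def in_redsup(v, w, F):
    """Membership test: is v a reduced ℒ-superword of w?"""
    return redsup_pattern(w, F).fullmatch(v) is not None

F = {'f'}                                # forgettable letters (our example)
assert in_redsup('fuf', 'uf', F)         # forgettable letters may precede u and f
assert not in_redsup('uff', 'uf', F)     # but nothing may follow the last letter
assert in_redsup('uf', 'uf', set())      # with F = ∅ we get redsup_ℒ(w) = {w}
```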
Now, we can state the following connection between partially lossy computations \(\circ _{{\mathscr{L}}}\) and reduced \({\mathscr{L}}\)-superwords:
Lemma 6.6
Let \({\mathscr{L}}=(F,U)\) be a lossiness alphabet and \(v,w,t\in A_{{\mathscr{L}}}^{*}\). Then we have \(v\circ _{{\mathscr{L}}}\overline {t}=w\) if, and only if, there is \(s\in \text {redsup}_{{\mathscr{L}}}(t)\) with v = sw.
Proof
We prove this by induction on the length of t. First, assume t = ε. Then we have \(v=v\circ _{{\mathscr{L}}}\overline {t}=w\), \(\varepsilon \in \text {redsup}_{{\mathscr{L}}}(t)\), and v = εw = w.
Next, let \(t=at^{\prime }\) for some \(a\in A_{{\mathscr{L}}}\) and \(t^{\prime }\in A_{{\mathscr{L}}}^{*}\). Assume \(v\circ _{{\mathscr{L}}}\overline {t}=w\). Then we have \(w=v\circ _{{\mathscr{L}}}\overline {t}=(v\circ _{{\mathscr{L}}}\overline {a})\circ _{{\mathscr{L}}}\overline {t^{\prime }}\). By definition of \(\circ _{{\mathscr{L}}}\) there are v1 ∈ (F ∖{a})* and \(v_{2}\in A_{{\mathscr{L}}}^{*}\) with v = v1av2, \(v\circ _{{\mathscr{L}}}\overline {a}=v_{2}\), and \(v_{2}\circ _{{\mathscr{L}}}\overline {t^{\prime }}=w\). By induction hypothesis there is \(s^{\prime }\in \text {redsup}_{{\mathscr{L}}}(t^{\prime })\) with \(v_{2}=s^{\prime }w\). Set \(s:=v_{1}as^{\prime }\). Then we see \(s\in \text {redsup}_{{\mathscr{L}}}(at^{\prime })=\text {redsup}_{{\mathscr{L}}}(t)\) and \(v=v_{1}av_{2}=v_{1}as^{\prime }w=sw\).
Conversely, let \(s\in \text {redsup}_{{\mathscr{L}}}(t)\) with v = sw. Then by definition there are s1 ∈ (F ∖{a})* and \(s_{2}\in A_{{\mathscr{L}}}^{*}\) with s = s1as2 and \(s_{2}\in \text {redsup}_{{\mathscr{L}}}(t^{\prime })\). By v = sw there is \(v_{2}\in A_{{\mathscr{L}}}^{*}\) with v = sw = s1as2w = s1av2, i.e., v2 = s2w. By induction hypothesis we have \(v_{2}\circ _{{\mathscr{L}}}\overline {t^{\prime }}=w\). We also have \(v\circ _{{\mathscr{L}}}\overline {a}=v_{2}\) implying
$$v\circ_{\mathcal{L}}\overline{t}=(v\circ_{\mathcal{L}}\overline{a})\circ_{\mathcal{L}}\overline{t^{\prime}}=v_{2}\circ_{\mathcal{L}}\overline{t^{\prime}}=w .$$
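Lemma 6.6 can be validated exhaustively on small instances: since v = sw forces s to be the prefix of v of length |v| − |w|, it suffices to test prefixes of v for membership in \(\text {redsup}_{{\mathscr{L}}}(t)\). The following brute-force sketch (our own encoding, with the illustrative choice F = {'f'}, U = {'u'}) checks the stated equivalence for all words of length at most three.

```python
import itertools
import re

F, U = {'f'}, {'u'}          # an example lossiness alphabet (our choice)
A = sorted(F | U)

def plq_read_word(v, t):
    """v ∘_ℒ t̄ for t ∈ A_ℒ*: read the letters of t from the front,
    forgetting forgettable letters different from the read letter."""
    for a in t:
        i = 0
        while i < len(v) and v[i] != a and v[i] in F:
            i += 1
        if i < len(v) and v[i] == a:
            v = v[i + 1:]
        else:
            return None       # ⊥
    return v

def in_redsup(s, t):
    """Is s a reduced ℒ-superword of t (Definition 6.4)?"""
    parts = []
    for a in t:
        allowed = ''.join(re.escape(c) for c in sorted(F) if c != a)
        parts.append(('[' + allowed + ']*' if allowed else '') + re.escape(a))
    return re.fullmatch(''.join(parts), s) is not None

def words(max_len):
    for n in range(max_len + 1):
        for tup in itertools.product(A, repeat=n):
            yield ''.join(tup)

for v in words(3):
    for t in words(3):
        lhs = plq_read_word(v, t)                      # v ∘_ℒ t̄ (None is ⊥)
        rhs = {v[k:] for k in range(len(v) + 1) if in_redsup(v[:k], t)}
        assert rhs == ({lhs} if lhs is not None else set())
print("Lemma 6.6 verified for all words of length <= 3 over", A)
```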
With the help of Lemma 6.6 we can finally prove the following reductions from reachability in partially lossy queues to reachability in reliable queues:
Proposition 6.7
Let \({\mathscr{L}}=(F,U)\) be a lossiness alphabet and \(L,T\subseteq A^{*}_{{\mathscr{L}}}\). Then the following statements hold:
1.
\((L\circ _{{\mathscr{L}}}T)\setminus \{\bot \}=(L\circ T)\setminus \{\bot \}\)
 
2.
\((L\circ _{{\mathscr{L}}}\overline {T})\setminus \{\bot \}=(L\circ \overline {\text {redsup}_{{\mathscr{L}}}(T)})\setminus \{\bot \}\)
 
3.
\(\textsc {Reach}_{{\mathscr{L}}}(L,T)=\downarrow _{{\mathscr{L}}}{\textsc {Reach}(L,T)}\)
 
4.
\(\textsc {Reach}_{{\mathscr{L}}}(L,\overline {T})=\downarrow _{{\mathscr{L}}}{\textsc {Reach}(L,\overline {\text {redsup}_{{\mathscr{L}}}(T)})}\)
 
5.
\(\textsc {BackReach}_{{\mathscr{L}}}(L,T)=\uparrow _{{\mathscr{L}}}{\textsc {BackReach}(\uparrow _{{\mathscr{L}}}{L},T)}\)
 
6.
\(\textsc {BackReach}_{{\mathscr{L}}}(L,\overline {T})=\uparrow _{{\mathscr{L}}}{\textsc {BackReach}(\uparrow _{{\mathscr{L}}}{L},\overline {\text {redsup}_{{\mathscr{L}}}(T)})}\)
 
Proof
First, we prove (1):
$$ \begin{array}{@{}rcl@{}} (L\circ_{\mathcal{L}} T)\setminus\{\bot\}=LT=(L\circ T)\setminus\{\bot\} . \end{array} $$
To prove (2), let \(x\in (L\circ _{{\mathscr{L}}}\overline {T})\setminus \{\bot \}\). Then there are w ∈ L and t ∈ T with \(x=w\circ _{{\mathscr{L}}}\overline {t}\). By Lemma 6.6 there is \(s\in \text {redsup}_{{\mathscr{L}}}(t)\subseteq \text {redsup}_{{\mathscr{L}}}(T)\) with w = sx. Then we have \(x=w\circ \overline {s}\) and, hence, \(x\in (L\circ \overline {\text {redsup}_{{\mathscr{L}}}(T)})\setminus \{\bot \}\).
Now, let \(x\in (L\circ \overline {\text {redsup}_{{\mathscr{L}}}(T)})\setminus \{\bot \}\). Then there are w ∈ L and \(s\in \text {redsup}_{{\mathscr{L}}}(T)\) with \(x=w\circ \overline {s}\) and, therefore, sx = w. There is t ∈ T with \(s\in \text {redsup}_{{\mathscr{L}}}(t)\). By Lemma 6.6 we have \(x=w\circ _{{\mathscr{L}}}\overline {t}\) and, therefore, \(x\in (L\circ _{{\mathscr{L}}}\overline {T})\setminus \{\bot \}\).
The equations (3)-(6) are direct consequences of (1) and (2) as well as Theorem 3.3. □
Finally, we can prove that our results from the previous sections also hold for arbitrary partially lossy queues:
Theorem 6.8
Let \({\mathscr{L}}=(F,U)\) be a lossiness alphabet, \(L\subseteq A_{{\mathscr{L}}}^{*}\) be regular, and \(T\subseteq \varSigma _{{\mathscr{L}}}^{*}\) be regular and closed under \(\equiv _{{\mathscr{L}}}\) (where \(s\equiv _{{\mathscr{L}}}t\) if \(v\circ _{{\mathscr{L}}}s=v\circ _{{\mathscr{L}}}t\) for each \(v\in A_{{\mathscr{L}}}^{*}\)). Then \(\textsc {Reach}_{{\mathscr{L}}}(L,T)\) and \(\textsc {BackReach}_{{\mathscr{L}}}(L,T)\) are effectively regular.
Theorem 6.9
Let \({\mathscr{L}}=(F,U)\) be a lossiness alphabet, \(L\subseteq A_{{\mathscr{L}}}^{*}\) be regular, and \(T\subseteq \varSigma _{{\mathscr{L}}}^{*}\) be regular and read-write independent. Then \(\textsc {Reach}_{{\mathscr{L}}}(L,T^{*})\) and \(\textsc {BackReach}_{{\mathscr{L}}}(L,T^{*})\) are effectively regular.
Theorem 6.10
Let \({\mathscr{L}}=(F,U)\) be a lossiness alphabet, \(L\subseteq A_{{\mathscr{L}}}^{*}\) be regular, and \(T\subseteq \varSigma _{{\mathscr{L}}}^{*}\) be regular. Then \(\textsc {Reach}_{{\mathscr{L}}}(L,T^{*})\) and \(\textsc {BackReach}_{{\mathscr{L}}}(L,T^{*})\) are effectively regular if
(1)
\(T=\overline {R_{1}}W\overline {R_{2}}\) for some regular sets \(W,R_{1},R_{2}\subseteq A_{{\mathscr{L}}}^{*}\),
 
(2)
T = {t} for some \(t\in \varSigma _{{\mathscr{L}}}^{*}\) (cf. [19, 21]), or
 
(3)
\(T\subseteq A_{{\mathscr{L}}}^{*}\cup \overline {A_{{\mathscr{L}}}}^{*}\).
 

7 Conclusion and Open Problems

In this paper we considered the reachability problem for reliable and lossy queue automata having exactly one queue. We combined these two models into so-called partially lossy queue automata (plq automata, for short). These automata are allowed to forget a specified subset of their contents at any time. Depending on this specified set, the reachability problem for these automata is either undecidable or not efficiently solvable. Hence, Boigelot et al. [19] and Abdulla et al. [21] tried to approximate the reachability problem with the help of so-called meta-transformations. These are regular languages of transformation sequences for which we can easily compute the set of reachable queue contents. Here, we considered two special kinds of meta-transformations:
1.
the set of possible sequences of queue transformations is closed under certain (context-sensitive) commutations of the atomic transformations.
 
2.
the plq automaton alternates between writing words from a regular language and reading words from another regular language. This generalizes the results of [19, 21], where the authors considered queue automata looping through a single sequence of transformations.
 
In both cases we could prove that, starting with a regular language of queue contents, the queue reaches a regular set of new contents.
Until now it is open whether we can extend our second kind of meta-transformation to plq automata looping through a sequence of multiple such regular languages of write and read actions. We could also try to generalize the flat queue automata to automata consisting of simple paths and single loops as well as components which are closed under the aforementioned commutations or which alternate between write and read action sequences. Possibly the reachability problem for these "semi-flat" queue automata is still decidable in NP. We may also ask in which cases a plq automaton having multiple queues reaches a recognizable set of contents from a recognizable set of initial contents. For example, in [19] there is also a result considering multiple reliable queues looping through a sequence of transformations. So, we could also try to generalize our result to multiple queues.
We could also consider automata having other data structures as their memory, for example automata with multiple pushdowns. Since these automata are as powerful as queue automata or Turing machines, we again have to approximate the reachability problem.

Acknowledgment

The author would like to thank Dietrich Kuske and the anonymous reviewers of the conference version [1] and the submitted version of this full paper for their helpful suggestions to improve this paper.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References
1. Köcher, C.: Reachability problems on partially lossy queue automata. In: RP 2019, Lecture Notes in Computer Science, vol. 11674, pp. 149–163. Springer (2019)
2. Bouajjani, A., Esparza, J., Maler, O.: Reachability analysis of pushdown automata: Application to model-checking. In: Mazurkiewicz, A., Winkowski, J. (eds.) CONCUR 1997, Lecture Notes in Computer Science, vol. 1243, pp. 135–150. Springer (1997)
4. Esparza, J., Hansel, D., Rossmanith, P., Schwoon, S.: Efficient algorithms for model checking pushdown systems. In: Emerson, E.A., Sistla, A.P. (eds.) CAV 2000, Lecture Notes in Computer Science, vol. 1855, pp. 232–247. Springer (2000)
6. Leroux, J., Schmitz, S.: Demystifying reachability in vector addition systems. In: 2015 30th Annual ACM/IEEE Symposium on Logic in Computer Science, pp. 56–67. IEEE (2015)
7. Minsky, M.L.: Computation: Finite and Infinite Machines. Prentice-Hall, Inc., Upper Saddle River (1967)
14. Chambart, P., Schnoebelen, P.: The ordinal recursive complexity of lossy channel systems. In: LICS 2008, pp. 205–216. IEEE Computer Society Press (2008)
19. Boigelot, B., Godefroid, P., Willems, B., Wolper, P.: The power of QDDs. In: Static Analysis, Lecture Notes in Computer Science, vol. 1302, pp. 172–186. Springer (1997)
23. Bozga, M., Iosif, R., Konečný, F.: Safety problems are NP-complete for flat integer programs with octagonal loops. In: McMillan, K.L., Rival, X. (eds.) Verification, Model Checking, and Abstract Interpretation, pp. 242–261. Springer, Berlin, Heidelberg (2014). https://doi.org/10.1007/978-3-642-54013-4_14
24. Finkel, A., Praveen, M.: Verification of flat FIFO systems. Log. Methods Comput. Sci. 16(4), 4:1–4:29 (2020). https://doi.org/10.23638/LMCS-16(4:4)2020
27. Köcher, C.: Rational, recognizable, and aperiodic sets in the partially lossy queue monoid. In: STACS 2018, LIPIcs, vol. 96, pp. 45:1–45:14. Dagstuhl Publishing (2018)
29. Hopcroft, J.E., Motwani, R., Ullman, J.D.: Introduction to Automata Theory, Languages, and Computation, 3rd edn. Pearson, London (2006)