Open Access 2025 | Original Paper | Book Chapter

Synthesis with Guided Environments

Authors: Orna Kupferman, Ofer Leshkowitz

Published in: Tools and Algorithms for the Construction and Analysis of Systems

Publisher: Springer Nature Switzerland


Abstract

The chapter introduces the concept of synthesis with guided environments (SGE), a novel approach to system synthesis in which the system may direct the actions of the environment. This method is particularly advantageous in scenarios with partial visibility, where the system cannot observe all input signals. The chapter examines the technical challenges and benefits of SGE, including the decomposition of the satisfaction task between the system and the environment. It addresses the memory used by the environment and how it can be optimized, as well as the automata-based solution for SGE. The chapter also treats the domain of the programs used by transducers with guided environments (TGEs) and how these programs can be represented efficiently. Furthermore, it compares the SGE approach with traditional synthesis methods and with distributed systems, highlighting the unique advantages and technical challenges of SGE. The chapter concludes with directions for future research, including dynamic hiding and guidance of signals, and bounded SGE. Its detailed analysis and technical depth make the chapter a valuable resource for specialists who want to understand and apply advanced synthesis techniques.

1 Introduction

Synthesis is the automated construction of a system from its specification [5]. Given a linear temporal logic (LTL) formula \(\varphi \) over sets I and O of input and output signals, the goal is to return a transducer that realizes \(\varphi \). At each moment in time, the transducer reads a truth assignment, generated by the environment, to the signals in I, and generates a truth assignment to the signals in O. This process continues indefinitely, generating an infinite computation in \((2^{I \cup O})^\omega \). The transducer realizes \(\varphi \) if its interactions with all environments generate computations that satisfy \(\varphi \) [23].
The fact that the system has to satisfy its specification in all environments has led to a characterization of the environment as hostile. In particular, in the game that corresponds to synthesis, the objective of the environment is to violate the specification \(\varphi \). In real-life applications, the satisfaction of the specification is often also in the environment’s interest. In particular, in cases where the system serves or guides the environment, we can expect the environment to follow guidance from the system. We introduce and study synthesis with guided environments (SGE, for short), where the system may harness the knowledge and computational power of the environment during the interaction, guiding it how to assign values to some of the output signals.
Specifically, in SGE, the set O of output signals is partitioned into sets \(C\) and \(G\) of controlled and guided signals. Then, a system that is synthesized by SGE assigns values to the signals in \(C\) and guides the environment how to assign values to the signals in \(G\). Clearly, not all output signals may be guided to the environment. For example, physical actions of the system, like closing a gate or raising a robot arm, cannot be performed by the environment. In addition, it may be the case that the system cannot trust the environment to follow instructions for some of the output signals. As argued above, however, in many cases we can expect the environment to follow the system’s guidance, and require the system to satisfy the specification only when the environment follows its guidance. Indeed, when we go to the cinema and end up seeing a bad movie, we cannot blame a recommendation system that guided us not to choose this movie. The recommendation system is bad (violates its specification, in our synthesis story) only if a user that follows its guidance is disappointed.1
One advantage of SGE is that it enables a decomposition of the satisfaction task between the system and the environment. We will get back to this point after we describe the setting in more detail. The main advantage of SGE, however, has to do with partial visibility, namely the setting where the set I of input signals is partitioned into sets V and \(H\) of visible and hidden signals, and the system views only the signals in V. Partial visibility makes synthesis much harder, as the hidden signals still appear in the specification, yet the behavior of the system should be independent of their truth values.
Synthesis with partial visibility has been the subject of extensive research. One line of work studies the technical challenges in solving the problem [7, 20, 25]. Essentially, in a setting with full visibility, the interaction between the system and the environment induces a single computation and hence a single run of a deterministic automaton for the specification. Partial visibility forces the algorithm to maintain subsets of states of the automaton. Indeed, now the interaction induces several computations, obtained by the different possible assignments to the hidden signals. This makes synthesis with partial visibility exponentially harder than synthesis with full visibility for specifications given by automata. When the specification is given by an LTL formula, this exponential price is dominated by the doubly-exponential translation of LTL formulas to deterministic automata, thus LTL synthesis with partial visibility is 2EXPTIME-complete, and is not harder than LTL synthesis with full visibility [26].
A second line of work studies different settings in which partial visibility is present. This includes, for example, distributed systems [21, 24, 28], systems with controlled sensing [2, 9] or with assumptions about visibility [14], and systems that maintain privacy [18, 31]. Finally, researchers have studied alternative forms of partial visibility (e.g., perspective games, where visibility of all signals is restricted in segments of the interaction), as well as partial visibility in richer settings (e.g., multi-agent [3, 4, 15] or stochastic [8, 16] systems) or in problems that are strongly related to synthesis (e.g., control [17], planning [10], and rational synthesis [6, 13]).
To the best of our knowledge, in all settings in which synthesis with partial visibility has been studied so far, there is no attempt to make use of the fact that the assignments to the hidden signals are known to the environment. In SGE, the instructions that the system sends to the environment may refer to the values of the hidden signals. Thus, the outcome of SGE is a transducer with a guided environment (TGE): a transducer whose transitions depend only on the assignments to the signals in V, it assigns values to the signals in \(C\), and instructs the environment how to assign values to the signals in \(G\) using programs that may refer to the values of the signals in \(H\).
Consider, for example, a system in a medical clinic that directs patients “if you come for a vaccine, go to the first floor; if you come to the pharmacy, go to the second floor”. Such a system is as correct as a system that asks customers for the purpose of their visit and then outputs the floor to which they should go. Clearly, if the customers prefer not to reveal the purpose of their visit, we can direct them only with a system of this type. This is exactly what SGE does: it replaces assignments that depend on hidden signals or are complicated to compute with instructions to the environment. As another example, consider a smart-home controller that manages various smart devices within a home by getting inputs from devices like thermostats and security cameras, and generating outputs to devices like lighting systems or smart locks. When it is desirable to hide from the controller information like sleep patterns or number of occupants, we can do that by limiting its input, and guiding the user about the activation of some output devices, for example with instructions like “if you expect guests, unlock the backyard gate”. In the full version, we describe more elaborate examples, in particular of a server that directs users who want to upload data to a cloud. The users may hide from the server the sensitivity level of their data, and we can expect them to follow instructions that the server issues, for example instructions to use storage of high security only when the data they upload is sensitive.
We study several aspects of SGE. We start by examining the memory used by the environment. Clearly, this memory can be used to reduce the state space of the TGE. To see this, note that in an extreme setting in which the TGE can guide all output signals, it can simply instruct the environment to execute a transducer that realizes the specification. Beyond a trade-off between the size of the TGE and the memory of the environment, we discuss how the size of the memory depends on the sets of visible and guided signals. For example, we show that, surprisingly, having more guided signals does not require more memory, yet having fewer guided signals may require more memory.
We then describe an automata-based solution for SGE. The main challenge in SGE is as follows. Consider a system that views only input signals in V. Its interactions with environments that agree on the signals in V generate the same response. In synthesis with partial visibility, this is handled by universal tree automata that run on trees with directions in \(2^V\), thus trees whose branches correspond to the interaction from the point of view of the system [20]. In the setting of SGEs, the interactions with different environments that agree on the signals in V still generate the same response, but this response now involves programs that guide the environment on how to assign values to signals in \(G\) based on the values assigned to the signals in \(H\). As a result, the computations induced by these interactions may differ (not only on \(H\) but also on \(G\)). This difference between SGE and traditional synthesis with partial visibility is our main technical challenge. We show that we can still reduce SGE to the nonemptiness problem of universal co-Büchi tree automata, proving that SGE for LTL specifications is 2EXPTIME-complete. In more detail, given an LTL formula \(\varphi \) and a bound k on the memory that the environment may use, the constructed automaton is of size exponential in \(|\varphi |\) and linear in k, making SGE doubly-exponential in \(|\varphi |\) and exponential in k. Finally, we prove a doubly-exponential upper bound on the size of the memory needed by the environment, leading to an overall triply-exponential upper bound for the SGE problem with unbounded environment memory.
We continue and study the domain of programs that TGEs use. Recall that these programs instruct the environment how to update its memory and assign values to the guided signals, given a current memory state and the current assignment to the hidden signals. Thus, for a memory state space M, each program is of the form \(p:M \times 2^H\rightarrow M \times 2^G\). We study ways to reduce the domain \(2^{H}\), which is the most dominant factor. The reduction depends on the specification \(\varphi \) we wish to synthesize. We argue that the SGE algorithm can restrict attention to tight programs: ones whose domain is a set of predicates over \(H\) obtained by simplifying propositional sub-formulas of \(\varphi \). Further simplification is achieved by exploiting the fact that programs are called after an assignment to the signals in \(V \cup C\) has been fixed, and exploiting dependencies among all signals.
Finally, we compare our solution with one that views a TGE as two distributed processes that are executed together in a pipeline architecture: the TGE itself, and a transducer with state space M that implements the instructions of the TGE to the environment. We argue that the approach we take is preferable and can lead to a quadratic saving in their joint state spaces, similar to the saving obtained by defining a regular language as the intersection of two automata. Generating programs that manage the environment’s memory efficiently is another technical challenge in SGE.
We conclude with directions for future research. Beyond extensions of the many settings in which synthesis has been studied to a setting with guided environments, we discuss two directions that are more related to “the guided paradigm” itself: settings with dynamic hiding and guidance of signals, thus when \(H\) and \(G\) are not fixed throughout the interaction; and bounded SGE, where, as in traditional synthesis [19, 29], beyond a bound on the memory used by the environment, there are bounds on the size of the state space of the TGE and possibly also on the size of the state space of its environment.

2 Preliminaries

We describe ongoing behaviors of reactive systems using the linear temporal logic LTL [22]. We consider systems that interact via sets I and O of input and output signals, respectively. Formulas of LTL are defined over \(I \cup O\) using the usual Boolean operators and the temporal operators \(\textbf{G}\) (“always”), \(\textbf{F}\) (“eventually”), \(\textbf{X}\) (“next time”) and \(\textbf{U}\) (“until”). The semantics of LTL is defined with respect to infinite computations in \((2^{I \cup O})^\omega \). Thus, each LTL formula \(\varphi \) over \(I \cup O\) induces a language \(L_\varphi \subseteq (2^{I \cup O})^\omega \) of all computations that satisfy \(\varphi \).
The length of an LTL formula \(\varphi \), denoted \(|\varphi |\), is the number of nodes in the generating tree of \(\varphi \). Note that \(|\varphi |\) bounds the number of sub-formulas of \(\varphi \).
We model reactive systems that interact with their environments by finite-state transducers. A transducer is a tuple \(\mathcal {T}=\langle I,O,S,s_0,\delta ,\tau \rangle \), where I and O are sets of input and output signals, S is a finite set of states, \(s_0\in S\) is an initial state, \(\delta :S\times 2^I\rightarrow S\) is a transition function, and \(\tau : S \times 2^I \rightarrow 2^O\) is a function that labels each transition by an assignment to the output signals. Given an infinite sequence \(w_{I}=i_1\cdot i_2\cdots \in (2^I)^{\omega }\) of assignments to input signals, \(\mathcal {T}\) generates an infinite sequence \(w_{O}=o_1\cdot o_2\cdots \in (2^O)^{\omega }\) of assignments to output signals. Formally, a run of \(\mathcal {T}\) on \(w_{I}\) is an infinite sequence of states \(s_0\cdot s_1\cdot s_2\cdots \), where for all \(j \ge 1\), we have that \(s_j=\delta (s_{j-1},i_j)\). Then, the sequence \(w_O\) is obtained from the assignments along the transitions that the run traverses. Thus for all \(j \ge 1\), we have that \(o_j=\tau (s_{j-1},i_j)\). We define the computation of \(\mathcal {T}\) on \(w_{I}\) to be the word \(\mathcal {T}(w_{I})=(i_1\cup o_1)\cdot (i_2\cup o_2)\cdots \in (2^{I\cup O})^{\omega }\).
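The run and computation semantics above can be sketched in code. The following is a minimal illustration, not from the chapter: a toy transducer with one input signal i and one output signal o that copies i to o with a one-step delay; assignments in \(2^I\) and \(2^O\) are modeled as frozensets of signal names, and all concrete names are our assumptions.

```python
# A hypothetical toy transducer, following the tuple definition above:
# states S, initial state s0, transition function delta : S x 2^I -> S,
# and output labeling tau : S x 2^I -> 2^O.
S = {"seen", "unseen"}
s0 = "unseen"

def delta(s, i):                      # next state remembers the last value of i
    return "seen" if "i" in i else "unseen"

def tau(s, i):                        # output the value of i from the previous round
    return frozenset({"o"}) if s == "seen" else frozenset()

def run(w_I):
    """Yield the computation T(w_I) as pairs (i_j, o_j), for j = 1, 2, ..."""
    s = s0
    for i in w_I:
        o = tau(s, i)                 # o_j = tau(s_{j-1}, i_j)
        s = delta(s, i)               # s_j = delta(s_{j-1}, i_j)
        yield (i, o)

w = [frozenset({"i"}), frozenset(), frozenset({"i"})]
print(list(run(w)))
```

Note that, as in the definition, the output \(o_j\) is read off the transition taken with \(i_j\) from \(s_{j-1}\), before the state is updated.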
For a specification language \(L_\varphi \subseteq (2^{I\cup O})^{\omega }\), we say that \(\mathcal {T}\) (IO)-realizes \(L_\varphi \) if for every input sequence \(w_{I}\in (2^I)^{\omega }\), we have that \(\mathcal {T}(w_{I}) \in L_\varphi \). In the synthesis problem, we are given a specification language \(L_\varphi \) and a partition of the signals into I and O, and we have to return a transducer that (IO)-realizes \(L_\varphi \) (or determine that \(L_\varphi \) is not realizable). The language \(L_\varphi \) is typically given by an LTL formula \(\varphi \). We then talk about realizability or synthesis of \(\varphi \).
In synthesis with partial visibility, we seek a system that satisfies a given specification in all environments even when it cannot observe the assignments to some of the input signals. Formally, the set of input signals is partitioned into visible and hidden signals, thus \(I=V\cup H\). The specification \(\varphi \) is still over \(V\cup H\cup O\), yet the behavior of the transducer that models the system is independent of \(H\). Formally, \(\mathcal {T}=\langle V,H,O,S,s_0,\delta ,\tau \rangle \), where now \(\delta :S\times 2^V\rightarrow S\) and \(\tau : S \times 2^V\rightarrow 2^O\). Given an infinite sequence \(w_{I}=(v_1 \cup h_1)\cdot (v_2 \cup h_2) \cdots \in (2^{V\cup H})^{\omega }\), the run and computation of \(\mathcal {T}\) on \(w_I\) is defined as in the case of full visibility, except that now, for all \(j \ge 1\), we have that \(s_j=\delta (s_{j-1},v_j)\) and \(o_j=\tau (s_{j-1},v_j)\).
We can now define a transducer with a guided environment (TGE for short). TGEs extend traditional transducers by instructing the environment how to manage its guided signals. A TGE may be executed in a setting with partial visibility, thus \(I = V\cup H\). It uses the fact that the signals in \(H\) are known to the environment, and it guides the assignment to some of the output signals to the environment. In practice, not all output signals can be guided. Formally, the set of output signals is partitioned into controlled and guided signals, thus \(O=C\cup G\). In each transition, the TGE assigns values to the signals in \(C\) and instructs the environment how to assign values to the signals in \(G\). The environment may have a finite memory, in which case the transducer also instructs the environment how to update the memory in each transition. The instructions that the transducer generates are represented by programs, defined below.
Consider a finite set M of memories, and sets \(H\subseteq I\) and \(G\subseteq O\) of input and output signals. Let \(\mathcal {P}_{M,H,G} = (M\times 2^G)^{M \times 2^H}\) denote the set of propositional programs that update the memory state and assign values to signals in \(G\), given a memory in M and an assignment to the signals in \(H\). Note that each member of \(\mathcal {P}_{M,H,G}\) is of the form \(p:M \times 2^H\rightarrow M\times 2^G\). For a program \(p\in \mathcal {P}_{M,H,G}\), let \(p_M:M\times 2^H\rightarrow M\) and \(p_G:M\times 2^H\rightarrow 2^G\) be the projections of p onto M and \(G\) respectively. Thus, \(p(m,h)=\langle p_M(m,h),p_G(m,h)\rangle \) for all \(\langle m,h\rangle \in M\times 2^H\). In Section 5 we discuss ways to restrict the set of programs that a TGE may suggest to its environment without affecting the outcome of the synthesis procedure.
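A program in \(\mathcal {P}_{M,H,G}\) and its projections can be illustrated concretely. Below is a minimal sketch with assumed names: memory \(M=\{0,1\}\), one hidden signal h, and one guided signal g; the memory tracks the parity of occurrences of h, and g copies the current value of h.

```python
# A program p in P_{M,H,G}, with p_M and p_G the projections defined above.
def p(m, h):                          # p : M x 2^H -> M x 2^G
    m_next = (m + (1 if "h" in h else 0)) % 2          # memory: parity of h so far
    g = frozenset({"g"}) if "h" in h else frozenset()  # g holds iff h holds now
    return (m_next, g)

def p_M(m, h):                        # projection onto M
    return p(m, h)[0]

def p_G(m, h):                        # projection onto 2^G
    return p(m, h)[1]
```

By construction, \(p(m,h)=\langle p_M(m,h),p_G(m,h)\rangle \) for all pairs in \(M\times 2^H\).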
Now, a TGE is \(\mathcal {T}=\langle V,H,C,G,S,s_0,\delta ,M,m_0,\tau \rangle \), where \(V\) and \(H\) are sets of visible and hidden input signals, \(C\) and \(G\) are sets of controlled and guided output signals, S is a finite set of states, \(s_0 \in S\) is an initial state, \(\delta :S\times 2^V\rightarrow S\) is a transition function, M is a set of memories that the environment may use, \(m_0 \in M\) is an initial memory, and \(\tau :S\times 2^V\rightarrow 2^C\times \mathcal {P}_{M,H,G}\) labels each transition by an assignment to the controlled output signals and a program. Note that \(\delta \) and \(\tau \) are independent of the signals in \(H\), and that \(\tau \) assigns values to the signals in \(C\) and instructs the environment how to use the signals in \(H\) in order to assign values to the signals in \(G\). The latter reflects the fact that the environment does view the signals in \(H\), and constitutes the main advantage of TGEs over standard transducers.
Given an infinite sequence \(w_{I}=(v_1 \cup h_1)\cdot (v_2 \cup h_2) \cdots \in (2^{V\cup H})^{\omega }\), the run of \(\mathcal {T}\) on \(w_{I}\) is obtained by applying \(\delta \) on the restriction of \(w_{I}\) to \(V\). Thus, \(r=s_0\cdot s_1\cdot s_2\cdots \), where for all \(j \ge 1\), we have that \(s_{j}=\delta (s_{j-1},v_j)\). The interaction of \(\mathcal {T}\) with the environment generates an infinite sequence \(w_C=c_1 \cdot c_2 \cdot c_3 \cdots \in (2^C)^\omega \) of assignments to the controlled signals, an infinite sequence \(w_M=m_0\cdot m_1\cdot m_2 \cdots \in M^\omega \) of memories, and an infinite sequence \(w_P = p_1\cdot p_2\cdot p_3\cdots \in (\mathcal {P}_{M,H,G})^\omega \) of programs, which in turn generates an infinite sequence \(w_G=d_1 \cdot d_2 \cdot d_3 \cdots \in (2^G)^\omega \) of assignments to the guided signals. Formally, for all \(j \ge 1\), we have that \(\langle c_j,p_j\rangle = \tau (s_{j-1},v_j)\) and \(\langle m_j,d_j\rangle =p_j(m_{j-1},h_j)\). The computation of \(\mathcal {T}\) on \(w_{I}\) is then \(\mathcal {T}(w_{I})=(v_1 \cup h_1 \cup c_1\cup d_1)\cdot (v_2 \cup h_2 \cup c_2\cup d_2) \cdots \in (2^{V\cup H\cup C \cup G})^{\omega }\).
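The interaction just defined can be sketched as a small interpreter. This is our illustration, not code from the chapter: delta and tau see only the visible part \(v_j\), while the program \(p_j\) chosen on each transition is applied by the environment to the memory \(m_{j-1}\) and the hidden part \(h_j\); all concrete names below are assumptions.

```python
# A hypothetical interpreter for the computation of a TGE.
def tge_computation(s0, m0, delta, tau, w):
    """w: finite prefix of pairs (v_j, h_j); yields tuples (v_j, h_j, c_j, d_j)."""
    s, m = s0, m0
    for v, h in w:
        c, p = tau(s, v)              # <c_j, p_j> = tau(s_{j-1}, v_j)
        s = delta(s, v)               # s_j = delta(s_{j-1}, v_j)
        m, d = p(m, h)                # <m_j, d_j> = p_j(m_{j-1}, h_j)
        yield (v, h, c, d)

# Example: a one-state TGE with V = C = {} that always issues a memoryless
# program copying the hidden signal h into the guided signal g.
copy = lambda m, h: (m, frozenset({"g"}) if "h" in h else frozenset())
out = list(tge_computation("s", "m",
                           lambda s, v: s,
                           lambda s, v: (frozenset(), copy),
                           [(frozenset(), frozenset({"h"})),
                            (frozenset(), frozenset())]))
print(out)
```

Even though the TGE itself is blind to h, the guided assignment \(d_j\) tracks it, which is exactly the point of guidance.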
Note that while the domain of the programs in \(\mathcal {P}_{M,H,G}\) is \(M \times 2^H\), programs are chosen in \(\mathcal {T}\) along transitions that depend on \(2^V\). Thus, effectively, assignments made by programs depend on signals in both \(H\) and \(V\).
For a specification language \(L_\varphi \subseteq (2^{I\cup O})^{\omega }\), partitions \(I=V\cup H\) and \(O=C\cup G\), and a bound \(k \ge 1\) on the memory that the environment may use, we say that a TGE \(\mathcal {T}\) \((V,H,C,G)\)-realizes \(L_\varphi \) with memory k, if \(\mathcal {T}=\langle V,H,C,G,S,s_0,\delta ,M,m_0,\tau \rangle \), with \(|M|=k\), and \(\mathcal {T}(w_{I}) \in L_\varphi \), for every input sequence \(w_{I}\in (2^I)^{\omega }\). Then, in the synthesis with guided environment problem (SGE, for short), given such a specification \(L_\varphi \subseteq (2^{I \cup O})^\omega \), a bound \(k\ge 1\) and partitions \(I=V\cup H\) and \(O=C\cup G\), we should construct a TGE with memory k that \((V,H,C,G)\)-realizes \(L_\varphi \) (or determine that no such TGE exists).
Let us consider the special case of TGEs that operate in an environment with no memory, thus \(M=\{m\}\), for a single memory state m. Consider first the setting where \(\mathcal {T}\) has full visibility, thus \(V=I\) and \(H=\emptyset \). Then, programs are of the form \(p:{\{m\}}\times 2^{\emptyset }\rightarrow {\{m\}}\times 2^G\), and so each program is a fixed assignment to the signals in \(G\). Hence, TGEs that have full visibility and operate in an environment with no memory coincide with traditional transducers. Consider now a setting with partial visibility, thus \(V\ne I\). Now, programs are of the form \(p:{\{m\}}\times 2^{H}\rightarrow {\{m\}}\times 2^G\), allowing the TGE to guide the environment in a way that depends on signals in \(H\).

3 On the Memory Used by the Environment

In this section we examine different aspects of the memory used by the environment. We discuss a trade-off between the size of the memory and the size of the TGE, and how the two depend on the partitions \(I=V\cup H\) and \(O=C\cup G\).
Example 1
In order to better understand the need for memory and the technical challenges around it, consider the tree in Fig. 1.
Fig. 1.
Two rounds of interaction with a TGE \(\mathcal {T}\).
The tree describes two rounds of an interaction between a TGE \(\mathcal {T}\) and its environment. In both rounds, \(\mathcal {T}\) views the assignment \(v \in 2^V\). Consider two assignments \(h_1,h_2 \in 2^H\). The tree presents the four possible extensions of the interaction by assignments in \(\{h_1,h_2\}\). Since the signals in \(H\) are hidden, \(\mathcal {T}\) cannot distinguish among the different branches of the tree. In particular, it has to suggest to the environment the same program in all the transitions from the root to its successors, and the same program in all the transitions from these successors to the leaves. While these programs may depend on \(H\), a program with no memory would issue the same assignment to the signals in \(G\) along the two dashed transitions \(\langle h_1,h_1.h_2\rangle \) and \(\langle h_2,h_2.h_2\rangle \). Indeed, in both transitions, the environment assigns \(h_2 \in 2^H\) to the hidden signals. Using memory, \(\mathcal {T}\) can instruct the environment to maintain information about the history of the computation, and thus distinguish between computations with the same visible component and the same current assignment to the hidden signals. For example, by moving to memory state \(m_1\) with \(h_1\) and to memory state \(m_2\) with \(h_2\), a program may instruct the environment to assign different values along the dashed lines.    \(\square \)
We first observe that in settings in which all output signals can be guided, the TGE can instruct the environment to simulate a transducer that realizes the specification. Also, in settings with full visibility, the advantage of guided environments is only computational: no new specifications become realizable, yet realizable specifications may be realizable by smaller systems. Formally, we have the following.
Theorem 1
Consider an LTL specification \(\varphi \) over \(I \cup O\).
1.
If \(\varphi \) is realizable by a transducer with n states, then \(\varphi \) is \((\emptyset ,I,\emptyset ,O)\)-realizable by a one-state TGE with memory n.
 
2.
For every partition \(I=V\cup H\) and \(O=C\cup G\), if \(\varphi \) is \((V,H,C,G)\)-realizable by a TGE with n states and memory k, then \(\varphi \) is (IO)-realizable (in a setting with full visibility) by a transducer with \(n \cdot k\) states. In particular, if \(\varphi \) is \((\emptyset ,I,\emptyset ,O)\)-realizable by a one-state TGE with memory k, then \(\varphi \) is (IO)-realizable (in a setting with full visibility) by a transducer with k states.
 
Proof
We start with the first claim. Given a transducer \(\mathcal {T}=\langle I,O,S,s_0,\delta ,\tau \rangle \), we define a program \(p \in \mathcal {P}_{S,I,O}\) that instructs the environment to simulate \(\mathcal {T}\). Formally, for every \(s \in S\) and \(i \in 2^I\), we have that \(p(s,i)=\langle \delta (s,i),\tau (s,i)\rangle \). Consider the TGE \(\mathcal {T}'=\langle \emptyset ,I,\emptyset ,O,\{s\},s,\delta ',S,s_0,\tau '\rangle \), with \(\delta '(s,\emptyset )=s\) and \(\tau '(s,\emptyset )=\langle \emptyset ,p\rangle \). It is easy to see that for every \(w_I\in (2^I)^\omega \), we have that \(\mathcal {T}(w_I)=\mathcal {T}'(w_I)\). In particular, if \(\mathcal {T}\) realizes \(\varphi \), then so does \(\mathcal {T}'\).
We continue to the second claim. Note that the special case where \(V=C=\emptyset \) is the “only if” direction of the first claim. Given a TGE \(\mathcal {T}=\langle V,H,C,G\), \(S,s_0\), \(\delta ,M\), \(m_0,\tau \rangle \), consider the transducer \(\mathcal {T}'=\langle H \cup V, C\cup G,S \times M, \langle s_0,m_0\rangle ,\delta ',\tau '\rangle \), where for all \(\langle s,m\rangle \in S \times M\) and \(i \in 2^{H\cup V}\) with \(\tau (s,i \cap V)=\langle c,p\rangle \), we have that \(\delta '(\langle s,m\rangle ,i)=\langle \delta (s,i \cap V),p_{M}(m,i \cap H)\rangle \) and \(\tau '(\langle s,m\rangle ,i) = c \cup p_{G}(m,i \cap H)\). It is easy to see that for every \(w_I\in (2^I)^\omega \), we have that \(\mathcal {T}(w_I)=\mathcal {T}'(w_I)\). In particular, if \(\mathcal {T}\) realizes \(\varphi \), then so does \(\mathcal {T}'\).
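The product construction in the second claim can be sketched directly. The function names below are ours: given the TGE's transition function delta and labeling tau over visible signals V and hidden signals H, we build a standard transducer over the product state space \(S \times M\); tau(s, v) returns a pair (c, p) with \(p:M\times 2^H\rightarrow M\times 2^G\), and assignments are modeled as frozensets of signal names.

```python
# Sketch of the transducer T' with state space S x M from the proof.
def product_transducer(delta, tau, V, H):
    def delta_prime(sm, i):
        s, m = sm
        _, p = tau(s, i & V)
        return (delta(s, i & V), p(m, i & H)[0])   # advance state and memory
    def tau_prime(sm, i):
        s, m = sm
        c, p = tau(s, i & V)
        return c | p(m, i & H)[1]                  # controlled plus guided outputs
    return delta_prime, tau_prime

# Example: a one-state TGE that guides the environment to copy h into g.
copy = lambda m, h: (m, frozenset({"g"}) if "h" in h else frozenset())
dp, tp = product_transducer(lambda s, v: s,
                            lambda s, v: (frozenset(), copy),
                            V=frozenset(), H=frozenset({"h"}))
print(tp(("s", "m"), frozenset({"h"})))
```

The product transducer has full visibility, so it may consult the whole of i, including the part in \(H\) that the TGE itself could not see.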
Note that Theorem 1 also implies that an optimal balance between controlled and guided signals in a setting with full visibility can achieve at most a quadratic saving in the combined state space of the TGE and its memory. This is similar to the quadratic saving in defining a regular language by the intersection of two automata. Indeed, handling of two independent specifications can be partitioned between the TGE and the environment. Formally, we have the following.
Theorem 2
For all \(\,k\ge 1\), there exists a specification \(\varphi _k\) over \(I={\{i_1,i_2\}}\) and \(O={\{o_1,o_2\}}\), such that \(\varphi _k\) is \(({\{i_1\}},{\{i_2\}},{\{o_1\}},{\{o_2\}})\)-realizable by a TGE with k states and memory k, yet a transducer that (IO)-realizes \(\varphi _k\) needs at least \(k^2\) states.
Proof
For a proposition p, let \(\textbf{F}^1 p = \textbf{F}p\), and for \(j \ge 1\), let \(\textbf{F}^{j+1} p= \textbf{F}(p \wedge \textbf{X}(\textbf{F}^j p))\). Thus, \(\textbf{F}^j p\) holds in a computation \(\pi \) iff p holds in at least j positions in \(\pi \).
We define \(\varphi _k=\varphi ^1_k\wedge \varphi ^2_k\), where for \(l \in \{1,2\}\), we have that \(\varphi ^l_k=(\textbf{F}^k i_l)\leftrightarrow \textbf{F}o_l\). That is, \(\varphi ^l_k\) is over \({\{i_l,o_l\}}\), and it holds in a computation \(\pi \) if \(o_l\) is always \(\mathtt F\) in \(\pi \), unless there are k positions in \(\pi \) in which \(i_l\) is \(\mathtt T\), in which case \(o_l\) has to be \(\mathtt T\) in at least one position.
In the full version, we prove that an (IO)-transducer that realizes \(\varphi _k\) needs \(k^2\) states, yet there is a TGE that \(({\{i_1\}},{\{i_2\}},{\{o_1\}},{\{o_2\}})\)-realizes \(\varphi _k\) with state space and memory set both of size k.
Unsurprisingly, larger memory of the environment enables TGEs to synthesize more specifications. Formally, we have the following.
Theorem 3
For every \(k \ge 1\), the LTL specification \(\varphi _k=\textbf{G}(i \leftrightarrow \textbf{X}^{k}o)\) is \((\emptyset ,\{i\},\emptyset ,\{o\})\)-realizable by a TGE with memory \(2^k\), and is not \((\emptyset ,\{i\},\emptyset ,\{o\})\)-realizable by a TGE with memory \(2^{k-1}\).
Proof
In order to realize \(\varphi _k\), a TGE can instruct the environment to store the last k assignments to i and set o to the value of i from k rounds earlier, which requires \(2^k\) memory states. However, a traditional transducer for \(\varphi _k\), with full visibility and control, requires at least \(2^k\) states. Thus, by Theorem 1, a single-state TGE with fewer than \(2^k\) memory states cannot realize \(\varphi _k\). In the full version, we argue that using more than one state for the TGE cannot be of much help and prove that at least \(2^{k-1}\) memory states are needed.
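The shift-register idea behind the upper bound can be sketched as follows. This is our encoding (the names are assumptions): the environment's memory is a tuple of the last k values of i, so \(|M|=2^k\), and o is set to the bit shifted out, i.e., the value of i from k rounds earlier.

```python
# A sketch of the program realizing G(i <-> X^k o) with a k-bit shift register.
k = 3
def shift_program(m, h):              # m: tuple of the last k bits of i
    bit = 1 if "i" in h else 0
    m_next = m[1:] + (bit,)           # push the current value of i
    o = frozenset({"o"}) if m[0] else frozenset()   # emit i from k rounds ago
    return (m_next, o)

m = (0,) * k                          # initial memory m0: no i seen yet
outputs = []
for h in [frozenset({"i"}), frozenset(), frozenset({"i"}),
          frozenset(), frozenset()]:
    m, o = shift_program(m, h)
    outputs.append(o)
print(outputs)
```

In the trace above, o first appears in round \(k+1=4\), matching the value of i in round 1, as \(\varphi _k\) demands.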
We turn to consider how changes in the partition of the input and output signals may affect the required memory. In Theorem 4, proved in the full version, we show that increasing visibility or decreasing control does not require more states or memory. On the other hand, in Theorem 5, we show that increasing control by the system may require the environment to use more memory. Thus, surprisingly, guiding more signals does not require more memory, yet guiding fewer signals may require more memory.
Theorem 4
Consider an LTL specification \(\varphi \) that is \((V,H,C,G)\)-realizable by a TGE with n states and memory k.
1.
For every \(I' \subseteq H\), we have that \(\varphi \) is \((V\cup I',H\setminus I',C,G)\)-realizable by a TGE with n states and memory k.
 
2.
For every \(O' \subseteq C\), we have that \(\varphi \) is \((V,H,C \setminus O',G\cup O')\)-realizable by a TGE with n states and memory k.
 
Theorem 5
There is an LTL specification \(\varphi \) over \(\{i,o_1,o_2\}\) such that the following hold:
1.
\(\varphi \) is \((\emptyset ,\{i\},\emptyset ,\{o_1,o_2\})\)-realizable by a TGE with memory 1.
 
2.
\(\varphi \) is \((\emptyset ,\{i\},\{o_1\},\{o_2\})\)-realizable by a TGE with memory 2.
 
3.
\(\varphi \) is not \((\emptyset ,\{i\},\{o_1\},\{o_2\})\)-realizable by a TGE with memory 1.
 
Proof
We define \(\varphi =\varphi _1 \vee \varphi _2\), with \(\varphi _1=\textbf{G}(i \leftrightarrow o_1)\) and \(\varphi _2= \textbf{G}(i \leftrightarrow \textbf{X}o_2)\). A TGE with no memory that \((\emptyset ,\{i\},\emptyset ,\{o_1,o_2\})\)-realizes \(\varphi \) instructs the environment to always copy i into \(o_1\), and thus realizes \(\varphi _1\). A TGE that does not guide \(o_1\) cannot satisfy \(\varphi _1\), as i is hidden, and should thus instruct the environment in a way that causes the realization of \(\varphi _2\). This, however, must involve a register that maintains the previous value of i.
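The register in the proof can be sketched concretely. This is our encoding with hypothetical names: once \(o_1\) is controlled by the TGE, which cannot see i, the environment must realize \(\textbf{G}(i \leftrightarrow \textbf{X}o_2)\) instead, using one binary memory register that stores the previous value of the hidden signal i.

```python
# A sketch of the memory-2 program for G(i <-> X o_2).
def delay_program(m, h):              # m in {0, 1}: previous value of i
    m_next = 1 if "i" in h else 0     # p_M: remember the current i
    o2 = frozenset({"o2"}) if m == 1 else frozenset()  # p_G: emit the stored i
    return (m_next, o2)

print(delay_program(0, frozenset({"i"})))
print(delay_program(1, frozenset()))
```

With memory 1 the program cannot distinguish histories that differ in the last value of i, which is why two memory states are necessary here.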

4 Solving the SGE Problem

Recall that in the SGE problem, we are given an LTL specification \(\varphi \) over \(I \cup O\), partitions \(I=V\cup H\) and \(O=C\cup G\), and a bound \(k\ge 1\), and we have to return a TGE that \((V,H,C,G)\)-realizes \(\varphi \) with memory k. The main challenge in solving the SGE problem is that it adds to the difficulties of synthesis with partial visibility the fact that the system’s interactions with environments that agree on the visible signals may generate different computations. Technically, the system should still behave in the same way on input sequences that agree on the visible signals. Thus, the interaction should generate the same assignments to the controlled output signals and the same sequence of programs. However, these programs may generate different computations, as they also depend on the hidden signals.
Our solution uses alternating tree automata, and we start by defining them.

4.1 Words, trees, and automata

An automaton on infinite words is \(\mathcal {A}= \langle \varSigma , Q, q_0, \eta , \alpha \rangle \), where \(\varSigma \) is an alphabet, Q is a finite set of states, \(q_0\in Q\) is an initial state, \(\eta : Q\times \varSigma \rightarrow 2^Q\) is a transition function, and \(\alpha \) is an acceptance condition to be defined below. If \(|\eta (q, \sigma )| = 1\) for every state \(q\in Q\) and letter \(\sigma \in \varSigma \), then \(\mathcal {A}\) is deterministic.
A run of \(\mathcal {A}\) on an infinite word \(w = \sigma _1 \cdot \sigma _2 \cdots \in \varSigma ^\omega \) is an infinite sequence of states \(r = r_0\cdot r_1\cdot r_2\cdots \in Q^\omega \), such that \(r_0 = q_0\), and for all \(i \ge 0\), we have that \(r_{i+1} \in \eta (r_i, \sigma _{i+1})\). The acceptance condition \(\alpha \) defines a subset of \(Q^\omega \), indicating which runs are accepting. We consider here the Büchi, co-Büchi, and parity acceptance conditions. All conditions refer to the set \({ inf}(r)\subseteq Q\) of states that r traverses infinitely often. Formally, \({ inf}(r) = \{ q \in Q: q = r_i \text { for infinitely many i's} \}\). In a Büchi automaton, the acceptance condition is \(\alpha \subseteq Q\) and a run r is accepting if \({ inf}(r)\cap \alpha \ne \emptyset \). Thus, r visits \(\alpha \) infinitely often. Dually, in a co-Büchi automaton, a run r is accepting if \({ inf}(r)\cap \alpha = \emptyset \). Thus, r visits \(\alpha \) only finitely often. Finally, in a parity automaton, \(\alpha :Q\rightarrow \{1,...,k\}\) maps states to ranks, and a run r is accepting if the maximal rank of a state in \({ inf}(r)\) is even. Formally, \(\max _{q \in { inf}(r)} \{\alpha (q)\}\) is even.
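The three conditions can be phrased uniformly over \({ inf}(r)\); the following sketch (ours, with \({ inf}(r)\) given as a finite set) checks each condition directly from the definitions above:

```python
# Illustration only: acceptance over inf_r, the set of states that the
# run visits infinitely often.
def buchi_accepts(inf_r, alpha):
    # alpha is a set of states; accept iff inf(r) meets alpha
    return len(inf_r & alpha) > 0

def cobuchi_accepts(inf_r, alpha):
    # dual condition: accept iff inf(r) avoids alpha
    return len(inf_r & alpha) == 0

def parity_accepts(inf_r, alpha):
    # alpha maps states to ranks in {1, ..., k}; accept iff the maximal
    # rank of a state visited infinitely often is even
    return max(alpha[q] for q in inf_r) % 2 == 0
```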
Note that when \(\mathcal {A}\) is not deterministic, it may have several runs on a word. If \(\mathcal {A}\) is a nondeterministic automaton, then a word w is accepted by \(\mathcal {A}\) if there is an accepting run of \(\mathcal {A}\) on w. If \(\mathcal {A}\) is a universal automaton, then a word w is accepted by \(\mathcal {A}\) if all the runs of \(\mathcal {A}\) on w are accepting. The language of \(\mathcal {A}\), denoted \(L(\mathcal {A})\), is the set of words that \(\mathcal {A}\) accepts.
Given a set \(\varUpsilon \) of directions, the full \(\varUpsilon \)-tree is the set \(T=\varUpsilon ^*\). The elements of T are called nodes, and the empty word \(\varepsilon \) is the root of T. An edge in T is a pair \(\langle x,x \cdot a\rangle \), for \(x \in \varUpsilon ^*\) and \(a \in \varUpsilon \). The node \(x \cdot a\) is called a successor of x. A path \(\pi \) of T is a set \(\pi \subseteq T\) such that \(\varepsilon \in \pi \) and for every \(x \in \pi \), there exists a unique \(a \in \varUpsilon \) such that \(x \cdot a \in \pi \). We associate an infinite path \(\pi \) with the infinite word in \(\varUpsilon ^\omega \) obtained by concatenating the directions taken along \(\pi \).
Given an alphabet \(\varSigma \), a \(\varSigma \)-labeled \(\varUpsilon \)-tree is a pair \(\langle T,\ell \rangle \), where \(T=\varUpsilon ^*\) and \(\ell \) labels each edge of T by a letter in \(\varSigma \). That is, \(\ell (x,x\cdot a)\in \varSigma \) for all \(x\in T\) and \(a\in \varUpsilon \). Note that an infinite word in \(\varSigma ^\omega \) can be viewed as a \(\varSigma \)-labeled \(\{a\}\)-tree.
A universal tree automaton over \(\varSigma \)-labeled \(\varUpsilon \)-trees is \(\mathcal {A}=\langle \varSigma ,\varUpsilon \), Q, \(q_{0}, \eta ,\alpha \rangle \), where \(\varSigma \), Q, \(q_0\), and \(\alpha \) are as in automata over words, \(\varUpsilon \) is the set of directions, and \(\eta :Q \times \varUpsilon \times \varSigma \rightarrow 2^Q\) is a transition function.
Intuitively, \(\mathcal {A}\) runs on an input \(\varSigma \)-labeled \(\varUpsilon \)-tree \(\langle T,\ell \rangle \) as follows. The run starts with a single copy of \(\mathcal {A}\) in state \(q_0\) that has to accept the subtree of \(\langle T,\ell \rangle \) with root \(\varepsilon \). When a copy of \(\mathcal {A}\) in state q has to accept the subtree of \(\langle T,\ell \rangle \) with root x, it goes over all the directions in \(\varUpsilon \). For each direction \(a \in \varUpsilon \), it proceeds according to the transition function \(\eta (q,a,\sigma )\), where \(\sigma \) is the letter written on the edge \(\langle x,x \cdot a\rangle \) of \(\langle T,\ell \rangle \). The copy splits into \(|\eta (q,a,\sigma )|\) copies: for each \(q' \in \eta (q,a,\sigma )\), a copy in state \(q'\) is created, and it has to accept the subtree with root \(x \cdot a\). In the full version, we formally define a run of \(\mathcal {A}\) as a tree \(\langle T_r,r\rangle \).
A run \(\langle T_r,r\rangle \) is accepting if all its infinite paths satisfy the acceptance condition.
We denote by \(L(\mathcal {A})\) the set of all \(\varSigma \)-labeled \(\varUpsilon \)-trees that \(\mathcal {A}\) accepts.
We denote the different classes of automata by three-letter acronyms in \(\{ \text {D,N,U} \} \times \{ \text {B,C,P}\} \times \{\text {W,T}\}\). The first letter stands for the branching mode of the automaton (deterministic, nondeterministic, or universal); the second for the acceptance condition type (Büchi, co-Büchi, or parity); and the third indicates we consider automata on words or trees. For example, UCTs are universal co-Büchi tree automata.

4.2 An automata-based solution

Each TGE \(\mathcal {T}=\langle V,H,C,G,S,s_0,\delta ,M,m_0,\tau \rangle \) induces a \((2^C\times \mathcal {P}_{M,H,G})\)-labeled \(2^V\)-tree \(\langle (2^V)^*,\ell \rangle \), obtained by simulating the interaction of \(\mathcal {T}\) with all input sequences. Let \(\delta ^*:(2^V)^* \rightarrow S\) be the extension of \(\delta \) to finite sequences in \((2^V)^*\), starting from \(s_0\). Thus, \(\delta ^*(w)\) is the state that \(\mathcal {T}\) visits after reading \(w \in (2^V)^*\). Formally, \(\delta ^*(\varepsilon )=s_0\), and for \(w \in (2^V)^*\) and \(v \in 2^V\), we have that \(\delta ^*(w \cdot v)=\delta (\delta ^*(w),v)\). Then, the tree \(\langle (2^V)^*,\ell \rangle \) is such that \(\ell (w,w\cdot v)= \tau (\delta ^*(w),v)\).
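The extension \(\delta ^*\) is a standard fold of \(\delta \) over the input word; as a sketch (ours, with states and inputs kept abstract):

```python
# Illustration only: delta_star folds delta over a finite word,
# starting from the initial state s0.
def delta_star(delta, s0, word):
    s = s0
    for v in word:
        s = delta(s, v)
    return s

# toy transducer with state space {0, 1}: flip the state on input 1
delta = lambda s, v: s ^ v
```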
Given \(w_V=v_1\cdot v_2\cdot v_3\cdots \in (2^V)^\omega \), let \(\ell (w_V)=\langle c_1,p_1\rangle \cdot \langle c_2,p_2\rangle \cdot \langle c_3,p_3\rangle \cdots \in (2^C\times \mathcal {P}_{M,H,G})^\omega \) be the sequence of labels along the path induced by \(w_V\). For every \(j \ge 1\), we have \(\langle c_j,p_j\rangle =\ell (\langle v_1,v_2, \ldots, v_{j-1}\rangle ,\langle v_1,v_2,\ldots , v_{j}\rangle )\). It is convenient to think of \(\ell (w_V)\) as the operation of \(\mathcal {T}\) while interacting with an environment that generates an input sequence \((v_1\cup h_1)\cdot (v_2\cup h_2)\cdot (v_3\cup h_3)\cdots \), for some \(w_H= h_1 \cdot h_2 \cdot h_3 \cdots \in (2^H)^\omega \). Note that \(\mathcal {T}\) sees only the assignments to \(V\): it knows its own assignments to \(C\) and the programs it sends to the environment, but not the assignments to \(H\), and thus neither the memory states nor the assignments to \(G\).
The environment, which does know \(w_H\), can complete \(\ell (w_V)\) to a computation in \((2^{V\cup H\cup C\cup G})^\omega \). Indeed, given a current memory state \(m\in M\) and assignment \(h\in 2^H\) to the hidden signals, the environment can apply the last program sent \(p\in \mathcal {P}_{M,H,G}\), \(p(m,h)=\langle m',g\rangle \), and thus obtain the new memory state and assignment to the guided signals. Formally, for all \(j \ge 1\), we have that \(\langle m_j,g_j\rangle =p_j(m_{j-1},h_j)\). Then, the operation of \(\ell (w_V)\) on \(w_H\), denoted \(\ell (w_V) \circ w_H\), is the sequence \((v_1 \cup h_1 \cup c_1\cup g_1)\cdot (v_2 \cup h_2 \cup c_2\cup g_2) \cdots \in (2^{V\cup H\cup C \cup G})^{\omega }\).
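The operation \(\ell (w_V) \circ w_H\) can be sketched as follows (our illustration; the visible assignments are omitted from the produced triples for brevity):

```python
# Illustration only: given the label sequence <c_j, p_j> along the path
# for w_V, and the hidden sequence w_H, the environment threads the
# memory through the programs to recover the computation.
def compose(labels, w_H, m0):
    m, computation = m0, []
    for (c, p), h in zip(labels, w_H):
        m, g = p(m, h)                 # m_j, g_j = p_j(m_{j-1}, h_j)
        computation.append((h, c, g))  # (hidden, controlled, guided)
    return computation

# toy program: store the current hidden value, output the stored one
p = lambda m, h: (h, m)
labels = [(0, p)] * 3
```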
For an LTL specification \(\varphi \), we say that a \((2^C\times \mathcal {P}_{M,H,G})\)-labeled \(2^V\)-tree \(\langle (2^V)^*,\ell \rangle \) is \(\varphi \)-good if for all \(w_V \in (2^V)^\omega \) and \(w_H\in (2^H)^\omega \), we have that \(\ell (w_V) \circ w_H\) satisfies \(\varphi \). By definition, a TGE \((V,H,C,G)\)-realizes \(\varphi \) iff its induced \((2^C\times \mathcal {P}_{M,H,G})\)-labeled \(2^V\)-tree is \(\varphi \)-good.
Theorem 6
Given an LTL specification \(\varphi \) over \(I \cup O\), partitions \(I=V\cup H\) and \(O=C\cup G\), and a set M of memories, we can construct a UCT \(\mathcal {A}\) with \(2^{O(|\varphi |)} \cdot |M|\) states such that \(\mathcal {A}\) runs on \((2^C\times \mathcal {P}_{M,H,G})\)-labeled \(2^V\)-trees and accepts a labeled tree \(\langle (2^V)^*,\ell \rangle \) iff \(\langle (2^V)^*,\ell \rangle \) is \(\varphi \)-good.
Proof
Given \(\varphi \), let \(\mathcal {A}_\varphi =\langle 2^{I\cup O},Q,q_0,\eta ,\alpha \rangle \) be a UCW over the alphabet \(2^{I \cup O}\) that recognizes \(L_\varphi \). We can construct \(\mathcal {A}_\varphi \) by dualizing an NBW for \(\lnot \varphi \). By [30], the latter is of size exponential in \(|\varphi |\), and thus, so is \(\mathcal {A}_\varphi \).
We define \(\mathcal {A}=\langle 2^C\times \mathcal {P}_{M,H,G},2^V,Q \times M,\langle q_0,m_0\rangle ,\eta ',\alpha \times M\rangle \), where \(m_0\in M\) is arbitrary, and \(\eta '\) is defined for every \(\langle q,m\rangle \in Q\times M\), \(v \in 2^V\), and \(\langle c,p\rangle \in 2^C\times \mathcal {P}_{M,H,G}\), as follows.
$$ \eta '(\langle q,m\rangle ,v,\langle c,p\rangle )=\bigcup _{h \in 2^H} \eta (q,v\cup h\cup c\cup p_G(m,h)) \times \{p_M(m,h)\}.$$
Thus, to direction \(v \in 2^V\), the UCT \(\mathcal {A}\) sends copies that correspond to all possible assignments in \(2^H\), where for every \(h \in 2^H\), a copy is sent for each state in \(\eta (q,v\cup h\cup c\cup p_G(m,h))\), all with the same memory \(p_M(m,h)\). Accordingly, for every \((2^C\times \mathcal {P}_{M,H,G})\)-labeled \(2^V\)-tree \(\langle (2^V)^*,\ell \rangle \), there is a one-to-one correspondence between the infinite branches in the run tree \(\langle T_r,r\rangle \) of \(\mathcal {A}\) over \(\langle (2^V)^*,\ell \rangle \) and the infinite runs of the UCW \(\mathcal {A}_\varphi \) over the words \(\ell (w_V)\circ w_H\), for \(w_V\in (2^V)^\omega \) and \(w_H\in (2^H)^\omega \). It follows that \(\mathcal {A}\) accepts a tree \(\langle (2^V)^*,\ell \rangle \) iff \(\langle (2^V)^*,\ell \rangle \) is \(\varphi \)-good.    \(\square \)
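As a sketch (ours) of the transition function \(\eta '\), with UCW letters kept abstract as tuples \((v,h,c,g)\) and programs split into their memory and guidance components:

```python
# Illustration only: eta' unions, over all hidden assignments h, the UCW
# successors on the completed letter, each paired with the memory p_M(m, h).
def eta_prime(eta, q, m, v, c, p, hidden_assignments):
    succ = set()
    for h in hidden_assignments:
        m2, g = p(m, h)                  # p_M(m, h) and p_G(m, h)
        for q2 in eta(q, (v, h, c, g)):  # UCW successors on the full letter
            succ.add((q2, m2))
    return succ

# toy UCW transition and program over one hidden bit
eta = lambda q, letter: {q + letter[1]}  # successor depends on h
p = lambda m, h: (m ^ h, h)              # memory xors in h, guides g = h
```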
The construction of \(\mathcal {A}\) allows us to conclude the following complexity result for the SGE problem. The lower bound is immediate from the complexity of the traditional LTL synthesis problem, and the upper bound is obtained by a reduction to the non-emptiness of \(\mathcal {A}\). The proof is given in the full version.
Theorem 7
The LTL SGE problem is 2EXPTIME-complete. Given \(\varphi , V,H,C,G\), and k, deciding whether \(\varphi \) is \((V,H,C,G)\)-realizable by a TGE with memory k can be done in time doubly exponential in \(|\varphi |\) and exponential in k.

4.3 Bounding the size of M

As discussed in Example 1, programs that can only refer to the current assignment of the hidden signals may be too weak: in some specifications, the assignment to the guided output signals has to depend on the history of the interaction so far. The full history of the interaction is a finite word in \((2^{I \cup O})^*\), and so, a priori, an unbounded memory is needed in order to remember all possible histories. A deterministic automaton \(D_\varphi \) for \(\varphi \) partitions the infinitely many histories in \((2^{I \cup O})^*\) into finitely many equivalence classes. Two histories \(w,w' \in (2^{I \cup O})^*\) are in the same equivalence class if they reach the same state of \(D_\varphi \), which implies that \(w \cdot t \models \varphi \) iff \(w' \cdot t \models \varphi \) for all \(t \in (2^{I \cup O})^\omega \). In the case of traditional synthesis, we know that a transducer that realizes \(\varphi \) does not need more states than \(D_\varphi \). Intuitively, if two histories of the interaction are in the same equivalence class, the transducer can behave in the same way after processing them.
In the following theorem we prove that the same holds for the memory used by a TGE: if two histories of the interaction reach the same state of \(D_\varphi \), there is no reason for them to reach different memories. Formally, we have the following.
Theorem 8
Consider a specification \(\varphi \), and let Q be the set of states of a DPW for \(\varphi \). If there is a TGE that \((V,H,C,G)\)-realizes \(\varphi \) with memory M, then there is also a TGE that \((V,H,C,G)\)-realizes \(\varphi \) with memory Q.
Proof
Let \(\mathcal {T}=\langle V,H,C,G,S,s_0,\delta ,M,m_0,\tau \rangle \) be a TGE that \((V,H,C,G)\)-realizes \(\varphi \), and let \(\mathcal {T}_{(V/C)}=\langle V,C,S,s_0,\delta ,\tau _C\rangle \) be the \((V,C)\)-transducer induced by \(\mathcal {T}\). The labeling function \(\tau _C:S\times 2^V\rightarrow 2^C\) is obtained by projecting \(\tau \) on its \(2^C\) component. Let \(D_\varphi =\langle 2^{I\cup O},Q,q_0,\eta ,\alpha \rangle \) be a DPW for \(\varphi \). We show that there exists a labeling function \(\tau _{G}:S\times 2^V\rightarrow \mathcal {P}_{Q,H,G}\) such that \(\mathcal {T}_Q=\langle V,H,C,G,S,s_0,\delta ,Q,q_0,\tau '\rangle \), with \(\tau '(s,v)=\langle \tau _C(s,v),\tau _{G}(s,v)\rangle \), is a TGE that \((V,H,C,G)\)-realizes \(\varphi \).
Consider the following two-player game \(\mathcal{G}_{\mathcal {T},D_\varphi }\), played on top of the product of \(\mathcal {T}_{(V/C)}\) with \(D_\varphi \). The game is played between the environment, which proceeds in positions in \(S\times Q\), and the system, which proceeds in positions in \(S\times Q\times 2^I\). The game is played from the initial position \(\langle s_0,q_0\rangle \), which belongs to the environment. From a position \(\langle s,q\rangle \), the environment chooses an assignment \(i\in 2^I\) and moves to \(\langle s,q,i\rangle \). From a position \(\langle s,q,i\rangle \), the system chooses an assignment \(g\in 2^G\) and moves to \(\langle s',q'\rangle \), where \(s'=\delta (s,i\cap V)\) and \(q'=\eta (q,i\cup \tau _C(s,i\cap V)\cup g)\). The system wins the game if the Q component of the generated play satisfies \(\alpha \).
Intuitively, the system wins the game if she has a strategy \(f:(2^I)^*\rightarrow 2^G\) that, when combined with the environment’s input and the assignments to \(C\) generated by \(\mathcal {T}_{(V/C)}\), induces a computation that is accepted by \(D_\varphi \). Clearly, the system wins by following the assignments to \(G\) generated by the TGE \(\mathcal {T}\).
In the full version, we use the positional determinacy of parity games to obtain the required labeling function \(\tau _G:S\times 2^V\rightarrow \mathcal {P}_{Q,H,G}\) with which \(\mathcal {T}_Q\) \((V,H,C,G)\)-realizes \(\varphi \).    \(\square \)
Theorem 9
If there is a TGE that \((V,H,C,G)\)-realizes an LTL specification \(\varphi \), then there is also a TGE that \((V,H,C,G)\)-realizes \(\varphi \) with memory doubly exponential in \(|\varphi |\), and this bound is tight.
Proof
The upper bound follows from Theorem 8 and the doubly-exponential translation of LTL formulas to DPWs [12, 27, 30]. The lower bound follows from the known doubly-exponential lower bound on the size of transducers for LTL formulas [26], applied when \(I=H\) and \(O=G\).    \(\square \)
While Theorem 9 is of theoretical interest, we find the current formulation of the SGE problem, which includes a bound on M, more appealing in practice: recall that SGE is doubly-exponential in \(|\varphi |\) and exponential in |M|. As \(|D_\varphi |\) is already doubly exponential in \(|\varphi |\), solving SGE with no bound on M results in an algorithm that is triply-exponential in \(|\varphi |\). Thus, it makes sense to let the user provide a bound on the memory.

5 Programs

Recall that TGEs generate in each transition a program \(p:M \times 2^H\rightarrow M \times 2^G\) that instructs the environment how to update its memory and assign values to the guided output signals. In this section we discuss ways to represent programs efficiently, and, in the context of synthesis, restrict the set of programs that a TGE may suggest to its environment without affecting the outcome of the synthesis procedure. Note that the number of programs in \(\mathcal {P}_{M,H,G}\) is \(2^{(\log |M|+|G|) \cdot |M| \cdot 2^{|H|}}\). Our main goal is to reduce the domain \(2^{H}\), which is the most dominant factor.
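The count can be sanity-checked by noting that a program is an arbitrary function from \(M \times 2^H\) to \(M \times 2^G\); the following small check (ours) compares the direct count to the closed form, assuming |M| is a power of two so that \(\log |M|\) is an integer:

```python
import math

# Illustration only: a program p: M x 2^H -> M x 2^G is an arbitrary
# function, so there are (|M| * 2^|G|) ** (|M| * 2^|H|) of them.
def num_programs(m_size, h_size, g_size):
    domain = m_size * 2 ** h_size
    codomain = m_size * 2 ** g_size
    return codomain ** domain

# The closed form 2^{(log2|M| + |G|) * |M| * 2^|H|} from the text.
def closed_form(m_size, h_size, g_size):
    exponent = (math.log2(m_size) + g_size) * m_size * 2 ** h_size
    return 2 ** round(exponent)
```

The dominance of the \(2^{|H|}\) factor is visible in the exponent of the closed form, which motivates reducing the domain \(2^H\).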
Naturally, the reduction depends on the specification we wish to synthesize. For an LTL formula \(\psi \), let \({ prop} (\psi )\) be the set of maximal predicates over \(I \cup O\) that are subformulas of \(\psi \). Formally, \({ prop} (\psi )\) is defined by an induction on the structure of \(\psi \) as follows.
  • If \(\psi \) is a propositional assertion, then \({ prop} (\psi )=\{\psi \}\).
  • Otherwise, \(\psi \) is of the form \(* \psi _1\) or \(\psi _1 * \psi _2\) for some (possibly temporal) operator \(*\), in which case \({ prop} (\psi )={ prop} (\psi _1)\) or \({ prop} (\psi )={ prop} (\psi _1) \cup { prop} (\psi _2)\), respectively.
Note that the definition is sensitive to syntax. For example, the formulas \((i_1 \vee o)\wedge \textbf{X}i_2\) and \((i_1 \wedge \textbf{X}i_2) \vee (o \wedge \textbf{X}i_2)\) are equivalent, but have different sets of maximal propositional assertions. Indeed, \({ prop} ((i_1 \vee o)\wedge \textbf{X}i_2)=\{i_1\vee o, i_2\}\), whereas \({ prop} ((i_1 \wedge \textbf{X}i_2) \vee (o \wedge \textbf{X}i_2))=\{i_1,i_2,o\}\).
It is well known that the satisfaction of an LTL formula \(\psi \) in a computation \(\pi \in (2^{I \cup O})^\omega \) depends only on the satisfaction of the formulas in \({ prop} (\psi )\) along \(\pi \): if two computations agree on \({ prop} (\psi )\), then they also agree on \(\psi \). Formally, for two assignments \(\sigma ,\sigma ' \in 2^{I \cup O}\), and a set \(\varTheta \) of predicates over \(I \cup O\), we say that \(\sigma \) and \(\sigma '\) agree on \(\varTheta \), denoted \(\sigma \approx _{\varTheta } \sigma '\), if for all \(\theta \in \varTheta \), we have that \(\sigma \models \theta \) iff \(\sigma ' \models \theta \). Then, two computations \(\pi =\sigma _1\cdot \sigma _2\cdots \) and \(\pi ' =\sigma '_1\cdot \sigma '_2\cdots \) in \((2^{I \cup O})^\omega \) agree on \(\varTheta \), denoted \(\pi \approx _{\varTheta } \pi '\), if for all \(j \ge 1\), we have \(\sigma _j \approx _{\varTheta } \sigma '_j\).
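As a small illustration (ours), with assignments represented as sets of true signals and predicates as Boolean functions:

```python
# Illustration only: sigma and sigma2 agree on Theta iff they satisfy
# exactly the same predicates in Theta.
def agree(sigma, sigma2, preds):
    return all(p(sigma) == p(sigma2) for p in preds)

# Theta = {i1 \/ o, i2}, over assignments given as sets of true signals
theta = [lambda s: 'i1' in s or 'o' in s,  # the predicate i1 \/ o
         lambda s: 'i2' in s]              # the predicate i2
```

For example, the assignments \(\{i_1\}\) and \(\{o\}\) agree on \(\{i_1 \vee o, i_2\}\), although they differ as assignments.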
Proposition 1
Consider two computations \(\pi ,\pi ' \in (2^{I \cup O})^\omega \). For every LTL formula \(\psi \), if \(\pi \approx _{{ prop} (\psi )} \pi '\), then \(\pi \models \psi \) iff \(\pi ' \models \psi \).
As we shall formalize below, Proposition 1 enables us to restrict the set of programs so that only one computation from each equivalence class of the relation \(\approx _{{ prop} (\psi )}\) may be generated by the interaction of the system and the environment.
For an LTL formula \(\psi \), let \({ cl}_{H}(\psi )\) be the set of maximal subformulas of formulas in \({ prop} (\psi )\) that are defined only over signals in \(H\). Formally, \({ cl}_{H}(\psi )=\bigcup _{\theta \in { prop} (\psi )}{ cl}_{H}(\theta )\), where \({ cl}_{H}(\theta )\) is defined for a propositional formula \(\theta \) as follows.
  • If \(\theta \) is only over signals in \(H\), then \({ cl}_{H}(\theta )=\{\theta \}\).
  • If \(\theta \) is only over signals in \(V\cup O\), then \({ cl}_{H}(\theta )=\emptyset \).
  • Otherwise, \(\theta \) is of the form \(\lnot \theta _1\) or \(\theta _1 *\theta _2\) for \(*\in {\{\vee ,\wedge \}}\), in which case \({ cl}_{H}(\theta )={ cl}_{H}(\theta _1)\) or \({ cl}_{H}(\theta )={ cl}_{H}(\theta _1) \cup { cl}_{H}(\theta _2)\), respectively.
For example, if \(H=I=\{i_1,i_2,i_3\}\) and \(O=\{o\}\), then \({ cl}_{H}((i_1 \vee i_2)\wedge (i_3\vee o))=\{i_1 \vee i_2,i_3\}\). Note that formulas in \({ cl}_{H}(\psi )\) are over \(H\), and that the relation \(\approx _{{ cl}_{H}(\psi )}\) is an equivalence relation on \(2^H\). We say that a program \(p: M \times 2^H\rightarrow M \times 2^G\) is tight for \(\psi \) if for every memory \(m \in M\) and two assignments \(h,h' \in 2^{H}\), if \(h \approx _{{ cl}_{H}(\psi )} h'\), then \(p(m,h)=p(m,h')\).
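Tightness can be checked directly from the definition; the following sketch (ours) tests whether a program identifies hidden assignments that agree on \({ cl}_{H}(\psi )\), using the toy closure \(\{i_1 \vee i_2\}\) over \(H=\{i_1,i_2\}\):

```python
from itertools import product

# Illustration only: p is tight iff it cannot distinguish hidden
# assignments that agree on all predicates in cl_H(psi).
def is_tight(p, memories, hidden_assignments, cl_h):
    def agree(h, h2):
        return all(pred(h) == pred(h2) for pred in cl_h)
    return all(p(m, h) == p(m, h2)
               for m in memories
               for h, h2 in product(hidden_assignments, repeat=2)
               if agree(h, h2))

# H = {i1, i2}, cl_H(psi) = {i1 \/ i2}; assignments as (i1, i2) pairs
hidden = list(product([0, 1], repeat=2))
cl_h = [lambda h: h[0] or h[1]]
tight_p = lambda m, h: (m, int(h[0] or h[1]))  # depends only on i1 \/ i2
loose_p = lambda m, h: (m, h[0])               # separates (0,1) from (1,0)
```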
In Theorem 10 below, proved in the full version, we argue that in the context of LTL synthesis, one can always restrict attention to tight programs.
Theorem 10
If \(\varphi \) is \((V,H,C,G)\)-realizable by a TGE, then it is \((V,H,C,G)\)-realizable by a TGE that uses only programs tight for \(\varphi \).
Theorem 10 enables us to replace the domain \(M \times 2^H\) by the more restrictive domain \(M \times 2^{{ cl}_{H}(\varphi )}\). Below we discuss how to reduce the domain further, taking into account the configurations in which the programs are going to be executed.
Recall that whenever a TGE instructs the environment which program to follow, the signals in \(V\) and \(C\) already have an assignment. Indeed, \(\tau : S \times 2^V\rightarrow 2^C\times \mathcal {P}_{M,H,G}\), and when \(\langle c,p\rangle \in \delta (s,v)\), we know that the program p is going to be executed when the signals in V and C are assigned v and c. In the full version, we use the values of \(V\cup C\) in order to make the equivalence classes on \(2^H\) even coarser.
Remark 1
Typically, a specification to a synthesized system is a conjunction of sub-specifications, each referring to a different functionality of the system. Consequently, the assignment to each output signal may depend only on a small subset of the input signals – those that participate in the sub-specifications of the output signal. For example, in the specification \(\varphi =\textbf{G}(o_1\leftrightarrow i_1)\wedge \textbf{G}(o_2\leftrightarrow i_2)\), the two conjuncts are independent of each other, and so the assignment to \(o_1\) can depend only on \(i_1\), and similarly for \(o_2\) and \(i_2\). Accordingly, further reductions to the set of programs can be achieved by decomposing programs into sub-programs in which different subsets of \(H\) are considered when assigning values to different subsets of \(G\). In addition, by analyzing dependencies within each sub-specification, the partition of the output signals and their corresponding “affecting sets of hidden signals” can be refined further, and as shown in [1], finding dependent output signals may lead to improvements in state-of-the-art synthesis algorithms.    \(\square \)

6 Viewing a TGE as a Distributed System

Consider a TGE \(\mathcal {T}\) with memory M that \((V,H,C,G)\)-realizes a specification. As shown in Figure 2 (left), the TGE’s operation can be viewed as two distributed processes executed together: the TGE \(\mathcal {T}\) itself, and a transducer \(\mathcal {T}_G\) with state space M, implementing \(\mathcal {T}\)’s instructions to the environment. In each round, the transducer \(\mathcal {T}_G\) receives from the environment an assignment to the signals in \(H\) and from \(\mathcal {T}\) a program, and uses both in order to generate an assignment to the signals in G and update its state.
Fig. 2.
A TGE that interacts with a guided environment (left) and the corresponding distributed system architecture (right).
We examine whether viewing TGEs this way can help reduce SGE to known algorithms for the synthesis of distributed systems. We argue that the approach here, where we do not view \(\mathcal {T}_G\) as a process in a distributed system, is preferable. In distributed-systems synthesis, we are given a specification \(\varphi \) and an architecture describing the communication channels among processes. The goal is to design strategies for these processes so that their joint behavior satisfies \(\varphi \). Synthesis of distributed systems is generally undecidable [24], primarily due to information forks – processes with incomparable information (e.g., when the environment sends assignments of disjoint sets of signals to two processes) [28]. The SGE setting corresponds to the architecture in Figure 2 (right): Process \(P_1\) is the TGE \(\mathcal {T}\), which gets assignments to \(V\) and generates assignments to \(C\). Instead of designing \(P_1\) to generate instructions for the environment, the synthesis algorithm also returns \(P_2\), which instructs the environment on generating assignments to \(G\). The process \(P_2\) gets (in fact, generates) assignments to both \(V\) and \(H\), eliminating information forks and making SGE solvable by the algorithms for distributed-system synthesis for this architecture. A solution in the TGE setting, which is composed of a TGE \(\mathcal {T}\) and an environment transducer \(\mathcal {T}_G\), induces a solution in the distributed setting: \(P_1\) follows \(\mathcal {T}\), and \(P_2\) simulates the joint operation of \(\mathcal {T}\) and \(\mathcal {T}_G\), assigning values to \(G\) as instructed by \(\mathcal {T}\). Conversely, a TGE can encode through \(\mathcal {P}_{M,H,G}\) the current assignment to V together with a description of the structure of \(P_2\), achieving the architecture in Figure 2 (right).
Using programs in \(\mathcal {P}_{M,H,G}\) goes beyond sending \(V\)’s values, which are already known to the environment. Programs leverage the TGE’s computation, particularly its current state, to save resources and utilize less memory. Not using the communication channel between the TGE and the environment could result in a significant increase in the size of process \(P_2\). For example, when \(|M|=1\) (specifications where \(G\)’s assignment depends only on \(V\)’s history and \(H\)’s current assignment), the process \(P_2\) is redundant. As demonstrated in Section 5, programs in \(\mathcal {P}_{M,H,G}\) can be described symbolically. Formally, we have the following.
Theorem 11
For every \(k\ge 1\), there exists a specification \(\varphi _k\) over \(V\cup H\cup G\), such that \(\varphi _k\) is \((V,H,C,G)\)-realizable by a TGE with a set of states and a memory set both of size O(k), yet the size of \(P_2\) in a distributed system that realizes \(\varphi _k\) is \(\varOmega (k^2)\).

7 Discussion

We introduced synthesis with guided environments, where the system can utilize the knowledge and computational power of the environment. Straightforward directions for future research include extensions of the many settings in which synthesis has been studied to the “guided paradigm”. Here we discuss two directions that are more related to the paradigm itself.
Dynamic hiding and guidance. In the setting studied here, the partition of I and O into visible, hidden, controlled, and guided signals is fixed throughout the computation. In some settings, these partitions may be dynamic. For example, when visibility depends on sensors that the system may activate and deactivate [2] or when signals are sometimes hidden in order to maintain privacy [18]. The decision which signals to hide in each round may depend on the system (e.g., when it instructs the environment which signals to hide in order to maintain its privacy), the environment (e.g., when it prefers not to share sensitive information), or an external authority (e.g., when signals become hidden due to actual invisibility). As for output signals, their guidance may depend on the history of the interaction (e.g., we may be able to assume amenability from the environment only after some password has been entered).
Bounded SGE. SGE involves a memory that can be used by the environment. As in the study of traditional bounded synthesis [19, 29], it is interesting to study SGE with given bounds on both the state spaces of the system and the environment. In addition to better modeling the setting, the bounds are used in order to improve the complexity of the algorithm, and they can also serve in heuristics, as in SAT-based algorithms for bounded synthesis [11]. In the setting of SGE, it is interesting to investigate the tradeoffs among the three involved bounds. It is easy to see that the two bounds that are related to the environment, namely the bound on its state space and the bound on the memory supervised by the system, are dual: an increase in the memory supervised by the system makes more specifications realizable, whereas an increase in the size of the state space of the environment makes fewer specifications realizable.
Another parameter that is interesting to bound is the number of different programs that a TGE may use, or the class of possible programs. In particular, restricting SGE to programs in which guided output signals can be assigned only the values of hidden signals or values stored in registers, will simplify an implementation of the algorithm. Likewise, the update of the memory during the interaction may be global and fixed throughout the computation.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://​creativecommons.​org/​licenses/​by/​4.​0/​), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Footnotes
1
Readers who are still concerned about harnessing the environment towards the satisfaction of the specification should note that the environment being hostile refers to the fact that the system has to satisfy the specification for all input sequences. This is still the case in SGE: while we expect the environment to obey instructions received from the system, these instructions do not limit the input sequences that the environment may generate.
 
Literatur
1.
Zurück zum Zitat Akshay, S., Basa, E., Chakraborty, S., Fried, D.: On dependent variables in reactive synthesis. In: Proc. 30th Int. Conf. on Tools and Algorithms for the Construction and Analysis of Systems. pp. 123–143. Springer Nature Switzerland (2024) Akshay, S., Basa, E., Chakraborty, S., Fried, D.: On dependent variables in reactive synthesis. In: Proc. 30th Int. Conf. on Tools and Algorithms for the Construction and Analysis of Systems. pp. 123–143. Springer Nature Switzerland (2024)
2.
Zurück zum Zitat Almagor, S., Kuperberg, D., Kupferman, O.: Sensing as a complexity measure. Int. J. Found. Comput. Sci. 30(6-7), 831–873 (2019) Almagor, S., Kuperberg, D., Kupferman, O.: Sensing as a complexity measure. Int. J. Found. Comput. Sci. 30(6-7), 831–873 (2019)
3.
Zurück zum Zitat Alur, R., Henzinger, T., Kupferman, O.: Alternating-time temporal logic. Journal of the ACM 49(5), 672–713 (2002) Alur, R., Henzinger, T., Kupferman, O.: Alternating-time temporal logic. Journal of the ACM 49(5), 672–713 (2002)
4.
Berthon, R., Maubert, B., Murano, A., Rubin, S., Vardi, M.: Strategy logic with imperfect information. In: Proc. 32nd ACM/IEEE Symp. on Logic in Computer Science, pp. 1–12 (2017)
5.
Bloem, R., Chatterjee, K., Jobstmann, B.: Graph games and reactive synthesis. In: Handbook of Model Checking, pp. 921–962. Springer (2018)
6.
Bouyer, P., Markey, N., Vester, S.: Nash equilibria in symmetric graph games with partial observation. Information and Computation 254, 238–258 (2017)
7.
Chatterjee, K., Doyen, L., Henzinger, T., Raskin, J.F.: Algorithms for omega-regular games with imperfect information. Logical Methods in Computer Science 3(3) (2007)
8.
Chatterjee, K., Doyen, L., Nain, S., Vardi, M.: The complexity of partial-observation stochastic parity games with finite-memory strategies. In: Proc. 17th Int. Conf. on Foundations of Software Science and Computation Structures. Lecture Notes in Computer Science, vol. 8412, pp. 242–257. Springer (2014)
9.
Chatterjee, K., Majumdar, R., Henzinger, T.A.: Controller synthesis with budget constraints. In: Proc. 11th Int. Workshop on Hybrid Systems: Computation and Control. Lecture Notes in Computer Science, vol. 4981, pp. 72–86. Springer (2008)
10.
De Giacomo, G., Vardi, M.Y.: LTL\(_f\) and LDL\(_f\) synthesis under partial observability. In: Proc. 25th Int’l Joint Conf. on Artificial Intelligence, pp. 1044–1050. IJCAI/AAAI Press (2016)
11.
Ehlers, R.: Symbolic bounded synthesis. In: Proc. 22nd Int. Conf. on Computer Aided Verification. Lecture Notes in Computer Science, vol. 6174, pp. 365–379. Springer (2010)
12.
Esparza, J., Kretínský, J., Sickert, S.: A unified translation of linear temporal logic to \(\omega \)-automata. J. ACM 67(6), 33:1–33:61 (2020)
13.
Filiot, E., Gentilini, R., Raskin, J.F.: Rational synthesis under imperfect information. In: Proc. 33rd ACM/IEEE Symp. on Logic in Computer Science, pp. 422–431. ACM (2018)
14.
Finkbeiner, B., Metzger, N., Moses, Y.: Information flow guided synthesis with unbounded communication. In: Proc. 36th Int. Conf. on Computer Aided Verification, pp. 64–86. Springer Nature Switzerland (2024)
15.
Gutierrez, J., Perelli, G., Wooldridge, M.J.: Imperfect information in reactive modules games. Inf. Comput. 261, 650–675 (2018)
16.
Krausz, A., Rieder, U.: Markov games with incomplete information. Mathematical Methods of Operations Research 46, 263–279 (1997)
17.
Kumar, R., Shayman, M.: Formulae relating controllability, observability, and co-observability. Autom. 34(2), 211–215 (1998)
18.
Kupferman, O., Leshkowitz, O.: Synthesis of privacy-preserving systems. In: Proc. 42nd Conf. on Foundations of Software Technology and Theoretical Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), vol. 250, pp. 42:1–42:23 (2022)
19.
Kupferman, O., Lustig, Y., Vardi, M., Yannakakis, M.: Temporal synthesis for bounded systems and environments. In: Proc. 28th Symp. on Theoretical Aspects of Computer Science, pp. 615–626 (2011)
20.
Kupferman, O., Vardi, M.: Synthesis with incomplete information. In: Advances in Temporal Logic, pp. 109–127. Kluwer Academic Publishers (2000)
21.
Kupferman, O., Vardi, M.: Synthesizing distributed systems. In: Proc. 16th ACM/IEEE Symp. on Logic in Computer Science, pp. 389–398 (2001)
22.
Pnueli, A.: The temporal semantics of concurrent programs. Theoretical Computer Science 13, 45–60 (1981)
23.
Pnueli, A., Rosner, R.: On the synthesis of a reactive module. In: Proc. 16th ACM Symp. on Principles of Programming Languages, pp. 179–190 (1989)
24.
Pnueli, A., Rosner, R.: Distributed reactive systems are hard to synthesize. In: Proc. 31st IEEE Symp. on Foundations of Computer Science, pp. 746–757 (1990)
25.
Reif, J.: The complexity of two-player games of incomplete information. Journal of Computer and Systems Science 29, 274–301 (1984)
26.
Rosner, R.: Modular Synthesis of Reactive Systems. Ph.D. thesis, Weizmann Institute of Science (1992)
27.
Safra, S.: On the complexity of \(\omega \)-automata. In: Proc. 29th IEEE Symp. on Foundations of Computer Science, pp. 319–327 (1988)
28.
Schewe, S.: Synthesis of distributed systems. Ph.D. thesis, Saarland University, Saarbrücken, Germany (2008)
29.
Schewe, S., Finkbeiner, B.: Bounded synthesis. In: 5th Int. Symp. on Automated Technology for Verification and Analysis. Lecture Notes in Computer Science, vol. 4762, pp. 474–488. Springer (2007)
30.
Vardi, M., Wolper, P.: Reasoning about infinite computations. Information and Computation 115(1), 1–37 (1994)
31.
Wu, Y., Raman, V., Rawlings, B., Lafortune, S., Seshia, S.: Synthesis of obfuscation policies to ensure privacy and utility. Journal of Automated Reasoning 60(1), 107–131 (2018)
Metadata
Title
Synthesis with Guided Environments
Authors
Orna Kupferman
Ofer Leshkowitz
Copyright Year
2025
DOI
https://doi.org/10.1007/978-3-031-90653-4_10