1 Introduction

Supervisory control theory, as introduced by Ramadge and Wonham (1987) and Ramadge and Wonham (1989), is a model-based approach to the control of discrete event (dynamic) systems. Given a plant model (which defines all possible system behavior) and a requirement specification (which defines what plant behavior is allowed), a supervisor can be computed algorithmically (synthesized) that restricts the plant's behavior so that it is in accordance with the requirements. Depending on the synthesis algorithm, the supervised system has some useful properties, such as safety, nonblockingness, controllability, and maximal permissiveness. The benefit of supervisory control theory has been shown in the literature for various fields of industry. Some examples where it has been applied to controller design are: a patient support table of a magnetic resonance imaging scanner in Theunissen et al. (2014), chemical process control in Rawlings et al. (2014), lithography machines in van der Sanden et al. (2015), a waterway lock and movable bridge combination in Reijnen et al. (2020), construction robotics in Rosa et al. (2020), and tactical planning for automated vehicles in Krook et al. (2020). Despite the advantages of applying this technique, and the examples thereof shown in case studies, industrial acceptance is still scarce compared to other topics of control theory. Wonham et al. (2018) point to the state space explosion as one of the barriers to industrial acceptance: when the size of the system grows, the time and space (memory) required for synthesis grow exponentially.

We consider the situation sketched in Fig. 1: a supervisor has been synthesized for a particular specification of plant and requirements. Later, a (slight) adaptation is made to the specification, so that a new supervisor is needed. In the current state of practice, a completely new synthesis would be performed on the adapted model. We investigate how to reuse the initial model and its synthesis result in order to synthesize a new supervisor more efficiently, while retaining the supervisor's desired properties.

Fig. 1 Schematic overview synthesis for evolving system

The reuse of artifacts during (software) development is considered in (software) Product Line Engineering (PLE). Pohl et al. (2005) define: ‘Software product line engineering is a paradigm to develop software applications using platforms and mass customisation.’ By reusing domain artifacts and exploiting product line variability, companies can employ PLE to increase product individualization, reduce development costs, reduce time-to-market, and enhance product quality. Pohl et al. (2005) point to model-based software development as an ideal candidate for employing PLE.

Within the context of PLE, Schaefer et al. (2012) characterize Delta Modeling as a modular approach to model the variability of a system using transformations. A model delta explicitly specifies an adaptation that can be applied to some base model, in order to form a variant model. A particular variant model can be obtained by selecting one or more model deltas and applying them to the base model one-by-one.

Regarding adaptations that are made to software over time, Lehman (1996) defined the laws of software evolution; these describe what changes typically occur during a software system's lifetime. The laws themselves have evolved over the years, but the law of continuing change has consistently been a part of them. This law states that software controlling a cyber-physical system must continually be adapted, or else its functioning becomes progressively less satisfactory.

In this paper, we elaborate on a Transformational Supervisor Synthesis (TSS) method. This type of synthesis uses a base model, its synthesis result, and a model delta to obtain a supervisor. This supervisor is the same as the supervisor that would be obtained if a completely new synthesis were performed on the variant model, which is defined by the base model and model delta. Note that in this problem statement the model delta is unknown before synthesis is performed on the base model. This is a realistic constraint, following from Lehman's law of continuing change, as well as from iterative and incremental development (Larman and Basili 2003), where system and requirement definitions are adapted during controller development. We introduce a supervisor synthesis algorithm that outputs relevant data that can be used for TSS. We present a notion of model delta, which defines the adaptations made between two models, and use this notion to identify atomic adaptations, which are the smallest possible model deltas. For different types of atomic adaptations, we provide TSS algorithms that use the result of a previous synthesis to transformationally compute a supervisor. We show how we can iterate over these atomic adaptations to transformationally obtain a supervisor when multiple atomic adaptations specify the difference between any base and variant model. To improve efficiency, we then present an algorithm that groups atomic adaptations together based on their required computations and processes them at the same time. These algorithms are first applied in an academic experiment in order to analyze their effectiveness. Next, an industrial case study is presented for the evolution of a controller that is used in lithography machines. Finally, conclusions are provided based on these results.

1.1 Related work

This paper is strongly based on, and can be seen as an extension of, Thuijsman and Reniers (2020), where the TSS method was first introduced. The extension we present here includes more elaborate examples and explanations, an additional industrial case study, as well as theorems and their accompanying proofs. The algorithms we present here have been updated with respect to Thuijsman and Reniers (2020); some modifications were made to obtain correct results, others to improve computational efficiency. Tijsse Claase (2020) is also closely related: it makes a first attempt at applying TSS to symbolic supervisor synthesis, where binary decision diagrams are used to represent the system for efficient supervisor synthesis (Fei et al. 2014).

Within the research area of discrete event systems, PLE is mostly considered in the context of formal verification or model checking; for example, efficient verification of linear-time temporal logic for variability-intensive systems in Classen et al. (2010), or feature-oriented modular verification of software product lines in ter Beek and de Vink (2014). Khan (2013) investigates evolving Algebraic Petri Nets, how to perform verification on the parts of the system that are affected by the property that is analyzed, and how to identify evolutions that require verification. In ter Beek et al. (2016) and Reniers and Thuijsman (2020), PLE has been applied in supervisory control. In these works a supervisory controller is synthesized for all possible product configurations given by a feature model. The output is one controller with multiple initial locations, where each initial location corresponds to a product configuration. In Reniers and Thuijsman (2020), runtime evolution of the system behavior over the configurations is studied. In contrast to those works, we do not assume a priori knowledge of the possible system configurations, and in our setting the evolution takes place at design time.

2 Preliminaries

We consider a finite state automaton A defined as a 5-tuple: A = (X, Σ, →, X0, Xm), where X is the finite set of states, of which \(X_{0} \subseteq X\) is the set of initial states and \(X_{m}\subseteq X\) is the set of marked states. Σ is the finite set of events, also called the alphabet, which is partitioned into sets of controllable and uncontrollable events, respectively Σc and Σu. Σ∗ denotes the set of all possible finite strings over events in Σ. → is the finite set of transitions; a transition is a 3-tuple (xor,σ,xtar) ∈ X × Σ × X, specifying a transition from origin state xor to target state xtar over event σ. We denote the existence of a transition (xor,σ,xtar) ∈ → by: \(x_{\mathit {or}}\xrightarrow { \sigma } x_{\mathit {tar}}\). Likewise, the existence of a sequence of transitions over intermediate states is denoted by: \(x_{\mathit {or}} \xrightarrow { s } x_{\mathit {tar}}\), for s ∈ Σ∗.
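For concreteness, this 5-tuple can be encoded directly. The following minimal Python sketch is our own illustration (the paper's proof-of-concept implementation is in Matlab); the names Automaton, states, trans, etc. are assumptions, and the later sketches in this paper reuse this representation.

```python
from typing import NamedTuple

class Automaton(NamedTuple):
    """A finite state automaton A = (X, Sigma, ->, X0, Xm).

    States and events are arbitrary hashable values (here: strings);
    a transition is an (origin, event, target) triple. The partition of
    Sigma is kept as the subset of uncontrollable events Sigma_u.
    """
    states: frozenset          # X
    events: frozenset          # Sigma
    uncontrollable: frozenset  # Sigma_u; Sigma_c = events - uncontrollable
    trans: frozenset           # ->, a subset of X x Sigma x X
    initial: frozenset         # X0, a subset of X
    marked: frozenset          # Xm, a subset of X
```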

The synchronous product of automata A1 = (X1,Σ1,→1,X0,1,Xm,1) and A2 = (X2,Σ2,→2,X0,2,Xm,2) is defined as: A1||A2 = (X1 × X2, Σ1 ∪ Σ2, →12, X0,1 × X0,2, Xm,1 × Xm,2), where →12 is constructed by:

$$ \begin{aligned} &((x_{\mathit{or},1},x_{\mathit{or},2}),\sigma,(x_{\mathit{tar},1},x_{\mathit{tar},2})) \in \longrightarrow_{12}, \\ &\quad\text{ if } \sigma \in {\varSigma}_{1} \cap {\varSigma}_{2},\ x_{\mathit{or},1}\xrightarrow{ \sigma }_{1} x_{\mathit{tar},1},\ x_{\mathit{or},2}\xrightarrow{ \sigma }_{2} x_{\mathit{tar},2}\\ &((x_{\mathit{or},1},x_{2}),\sigma,(x_{\mathit{tar},1},x_{2})) \in \longrightarrow_{12}, \\ &\quad\text{ if } \sigma \in {\varSigma}_{1} \setminus {\varSigma}_{2},\ x_{\mathit{or},1}\xrightarrow{ \sigma }_{1} x_{\mathit{tar},1} \\ &((x_{1},x_{\mathit{or},2}),\sigma,(x_{1},x_{\mathit{tar},2})) \in \longrightarrow_{12}, \\ &\quad\text{ if } \sigma \in {\varSigma}_{2} \setminus {\varSigma}_{1},\ x_{\mathit{or},2}\xrightarrow{ \sigma }_{2} x_{\mathit{tar},2} \end{aligned} $$
(1)
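Equation (1) transcribes directly into code. The sketch below reuses the Automaton tuple introduced above (sync_product is our own name); it builds the full product state set X1 × X2 eagerly, which is fine for illustration but is exactly where the state space explosion manifests in practice.

```python
def sync_product(a1: Automaton, a2: Automaton) -> Automaton:
    """Synchronous product A1 || A2 per Eq. (1)."""
    shared = a1.events & a2.events
    trans = set()
    for (x1, e, y1) in a1.trans:
        if e in shared:
            # shared event: both automata must take it simultaneously
            trans |= {((x1, x2), e, (y1, y2))
                      for (x2, e2, y2) in a2.trans if e2 == e}
        else:
            # private to A1: A2's state component is unchanged
            trans |= {((x1, x2), e, (y1, x2)) for x2 in a2.states}
    for (x2, e, y2) in a2.trans:
        if e not in shared:
            # private to A2: A1's state component is unchanged
            trans |= {((x1, x2), e, (x1, y2)) for x1 in a1.states}
    return Automaton(
        states=frozenset((x1, x2) for x1 in a1.states for x2 in a2.states),
        events=a1.events | a2.events,
        uncontrollable=a1.uncontrollable | a2.uncontrollable,
        trans=frozenset(trans),
        initial=frozenset((x1, x2) for x1 in a1.initial for x2 in a2.initial),
        marked=frozenset((x1, x2) for x1 in a1.marked for x2 in a2.marked),
    )
```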

For a given automaton A, we apply supervisor synthesis to generate a supervisor subautomaton S of A that is reachable, coreachable, controllable, and maximally permissive.

An automaton S = (Y,ΣS,→S,Y0,Ym) is a subautomaton of A = (X,Σ,→,X0,Xm) if \(Y\subseteq X\), ΣS = Σ, \(\longrightarrow _{S} \subseteq \longrightarrow \), \(Y_{0}\subseteq X_{0}\), and \(Y_{m}\subseteq X_{m}\). In this work, the subautomata we encounter are restricted to →S = → ∩ (Y × Σ × Y), Y0 = Y ∩ X0, and Ym = Y ∩ Xm.

A state xr ∈ X is reachable if it can be reached from some initial state: \(x_{0}\xrightarrow { s } x_{r}\) for some \(x_{0} \in X_{0},\ s \in {{\varSigma }}^{*}\). A state xcr ∈ X is coreachable if from it a marked state can be reached: \(x_{cr} \xrightarrow { s } x_{m}\) for some \(x_{m} \in X_{m},\ s \in {{\varSigma }}^{*}\). Supervisor automaton S is called reachable (respectively coreachable) for plant automaton A if all its states are reachable (respectively coreachable). An automaton for which all reachable states are coreachable is commonly called nonblocking in the literature. We say that for automaton A = (X,Σ,→,X0,Xm), supervisor automaton S = (Y,ΣS,→S,Y0,Ym) is controllable if \(\longrightarrow \! \cap (Y \times {{\varSigma }}_{u} \times X) {} \subseteq {}\!\! \longrightarrow _{S}\). If S is controllable, the states in Y are also called controllable. Maximal permissiveness means that S is the maximal subautomaton of A for which coreachability, reachability, and controllability are ensured; that is, the supervisor does not disable any transitions that do not strictly need to be disallowed.

In addition to the properties of the supervisor mentioned above, problem formulations for supervisor synthesis often include a safety constraint: along with the plant, some requirement specification on the plant's behavior is given. The supervisor should restrict the behavior of the plant so that the requirement specification is always satisfied. In such a case, a plantified requirement automaton can be constructed by introducing a non-coreachable sink state. Transitions that are not in accordance with the specification are redirected to this sink state. Given a requirement automaton R = (X,Σ,→,X0,Xm), the plantified requirement automaton is obtained as follows (Flordal et al. 2007):

$$ R^{\bot} = (X\cup \{\bot\},{{\varSigma}}, \longrightarrow^{\bot},X_{0},X_{m}), $$
(2)

where ⊥ ∉ X is the new sink state and

$$ \longrightarrow^{\bot} = \longrightarrow\! \cup \{ (x,\sigma_{u},\bot) | x\in X, \sigma_{u}\in {{\varSigma}}_{u}, \nexists (x_{\mathit{tar}} \in X) x \xrightarrow{ \sigma_{u} } x_{\mathit{tar}} \}. $$

A safe supervisor can be obtained by synthesizing a coreachable and controllable supervisor on the synchronous product of the plant automata and plantified requirement automata, as proven in Flordal et al. (2007). Other ways of specifying requirements can be plantified as well. For example, the state exclusion requirements discussed in Markovski et al. (2010) are plantified by removing excluded controllable transitions from the plant, and redirecting the excluded uncontrollable transitions to the sink state. Therefore, in this work we consider synthesizing a coreachable supervisor for a single automaton, without loss of generality regarding safety constraints or networks of automata.
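A sketch of the plantification of Eq. (2), reusing the Automaton tuple from above; the value BOT stands in for the fresh sink state ⊥ and is assumed not to occur in X.

```python
BOT = "bot"  # stands in for the sink state; assumed not to occur in X

def plantify(req: Automaton) -> Automaton:
    """R^bot per Eq. (2): every state that lacks an outgoing transition for
    some uncontrollable event gets one to the non-coreachable sink."""
    has_out = {(x, e) for (x, e, _) in req.trans}
    added = {(x, u, BOT)
             for x in req.states
             for u in req.uncontrollable
             if (x, u) not in has_out}
    return req._replace(states=req.states | {BOT}, trans=req.trans | added)
```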

We allow automata to be non-deterministic. In the case of non-determinism, we allow the supervisor to disable individual controllable transitions as a result of the removal of unsafe states. So if in the plant a state has two outgoing transitions over the same controllable event, the supervisor subautomaton may contain this state with only one of these outgoing transitions. This is unlike some traditional supervisory control definitions, for example by Ramadge and Wonham (1989) or Cassandras and Lafortune (2008), where multiple outgoing transitions over the same event cannot be disabled individually. The distinction between these paradigms is further discussed in Flordal et al. (2007).

2.1 Running example

We will consider the plant automaton P of Fig. 2 and requirement automaton R of Fig. 3 as a running example throughout this paper. A solid or dashed arrow indicates a transition by a controllable or an uncontrollable event, respectively. The initial states are indicated by the incoming arrows, and the marked states are indicated by a double circle. Requirement automaton R has been plantified, resulting in the plantified requirement automaton R⊥ in Fig. 4. Constructing the synchronous product P||R⊥ yields automaton A of Fig. 5. Note that for the remainder of this paper, when we discuss model deltas to this example, they are always to automaton A directly, not to P and R with an implied delta on A.

Fig. 2 Plant automaton P

Fig. 3 Requirement automaton R

Fig. 4 Plantified requirement automaton R⊥

Fig. 5 Automaton A = P||R⊥

[Algorithm 1: supervisor synthesis algorithm SS]

2.2 Supervisor synthesis algorithm

In Algorithm 1 a supervisor synthesis algorithm is presented. It is based on the algorithm introduced in Ouedraogo et al. (2011); however, we consider finite state automata rather than extended finite automata for the sake of simplicity. We also omit the use of forbidden states in the algorithm, as we plantify the requirements. In case the plant behavior is given by multiple plant automata, the input automaton for this algorithm can be obtained by calculating the synchronous product of the plant automata. We present this algorithm using function calls to other algorithms to facilitate reuse of these (sub-)algorithms later in this paper. The algorithm uses a fixpoint computation, provided in Algorithm 2, which iteratively calculates a set of coreachable states G, followed by a set of bad states B, which are non-coreachable or have a sequence of uncontrollable transitions to a non-coreachable state. The calculation to obtain G and B is done by means of a Backward Reachability Search (BRS), given in Algorithm 3, for which Lemma 1 holds. This is a breadth-first search algorithm taken from Kleinberg and Tardos (2005) that has linear runtime complexity. All found states are added to Xω. The state space is searched in layers: for each state in the current layer, all undiscovered states that have a transition to this state are added to the next layer. After all states in the current layer have been evaluated, the algorithm moves to evaluating the states in the next layer. These steps are repeated until no more new states are found. The algorithm is slightly adapted from Kleinberg and Tardos (2005) to allow a set of starting states, instead of a single starting state. Also, at the start of the algorithm the transitions are pruned, so that only transitions between states in the input state set X are considered. Transitions from states in the starting set Xα are also removed, as these states are already discovered as the starting set, so analyzing these transitions is not necessary. The functioning of this algorithm is well known, so Lemma 1 is not proven here.

[Algorithm 2: fixpoint computation computeFixpoint]
[Algorithm 3: Backward Reachability Search (BRS)]
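The BRS just described can be sketched as a plain layered breadth-first search. The function below is our own transcription of what Algorithm 3 does, including the pruning of transitions outside the input state set and of transitions originating in the starting set.

```python
def brs(states, events, trans, start):
    """Backward reachability search (sketch of Algorithm 3): all states in
    `states` from which some state in `start` can be reached via transitions
    labeled by an event in `events`, over states in `states`."""
    # prune: only transitions within `states` over `events`; transitions
    # originating in `start` are dropped, as those states are already found
    pred = {}
    for (src, ev, tgt) in trans:
        if src in states and tgt in states and ev in events and src not in start:
            pred.setdefault(tgt, set()).add(src)
    found = set(start)  # X_omega: all discovered states
    layer = set(start)
    while layer:  # expand layer by layer until no new states are found
        layer = {p for x in layer for p in pred.get(x, ())} - found
        found |= layer
    return found
```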

Lemma 1

For state set X, alphabet Σ, set of transitions →, and starting state set Xα; BRS(X,Σ,→,Xα) contains all states in X from which a state in Xα ⊆ X can be reached, using transitions in → that are labeled by an event in Σ, over states in X.

The bad states B are removed from G. The removal of these states can induce other states to become non-coreachable. Therefore, the algorithm repeats these steps until no further states are removed. At this point, the set of remaining states is defined as the good states G, which is the maximal set of controllable and coreachable states, see Lemma 2. The proof of Lemma 2 is provided in Ouedraogo et al. (2011).

Lemma 2

For automaton A = (X, Σ, →, X0, Xm), and (Y, G) = computeFixpoint(A); G is the maximal controllable and coreachable set of states in X.

[Algorithm 4: Forward Reachability Search (FRS)]

Then, in order to generate a reachable supervisor, a Forward Reachability Search (FRS) (Algorithm 4, Lemma 3) is carried out from the set of initial states, resulting in the state set Y. Essentially, BRS is performed with all transitions reversed, to search forward instead of backward. As for BRS, the accompanying lemma is not proven here. Y is the maximal controllable, coreachable, and reachable subset of X, see Lemma 4.

Lemma 3

For state set X, alphabet Σ, set of transitions →, and starting state set Xα; FRS(X,Σ,→,Xα) contains all states in X that can be reached from a state in Xα ⊆ X, using transitions in → that are labeled by an event in Σ, over states in X.

Lemma 4

For automaton A = (X, Σ, →, X0, Xm), and (Y, G) = computeFixpoint(A); Y is the maximal controllable, coreachable, and reachable set of states in X.

Proof

Y is the maximal reachable set in G, following from Lemma 3. It is shown by Lemma 2 that G is the maximal controllable and coreachable subset of X. Thus, the maximal reachable subset Y in G is the maximal controllable, coreachable, and reachable subset of X. □

Together, Y, alphabet Σ, the transitions of A between the states in Y, the initial states in Y, and the marked states in Y define the supervisor automaton S. Supervisor S is the maximal subautomaton of A that is reachable, coreachable, and controllable; see Theorem 1. The algorithm always computes a supervisor automaton: if there are no reachable, coreachable, and controllable states, then the supervisor automaton contains no states, and hence no transitions.
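Putting the pieces together, Algorithms 1, 2, and 4 can be sketched as follows (reusing brs and the Automaton tuple from above; per Lemmas 2 and 4, compute_fixpoint returns both Y and G).

```python
def frs(states, events, trans, start):
    """Forward reachability search (Algorithm 4): BRS with reversed transitions."""
    return brs(states, events, {(t, e, s) for (s, e, t) in trans}, start)

def compute_fixpoint(aut: Automaton):
    """Sketch of Algorithm 2: returns (Y, G)."""
    good = set(aut.states)
    while True:
        # coreachable states within the current candidate set (BRS from Xm)
        coreach = brs(good, aut.events, aut.trans, aut.marked & good)
        # bad states: non-coreachable states, plus states with an
        # uncontrollable path to them (BRS over Sigma_u)
        bad = brs(good, aut.uncontrollable, aut.trans, good - coreach)
        if not bad:
            break
        good -= bad
    # Y: the reachable part of G (FRS from the initial states)
    y = frs(good, aut.events, aut.trans, aut.initial & good)
    return frozenset(y), frozenset(good)

def ss(aut: Automaton):
    """Sketch of Algorithm 1: the supervisor S is the subautomaton over Y."""
    y, good = compute_fixpoint(aut)
    sup = aut._replace(
        states=y,
        trans=frozenset(t for t in aut.trans if t[0] in y and t[2] in y),
        initial=aut.initial & y,
        marked=aut.marked & y)
    return sup, good
```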

Theorem 1

For automaton A, and (S,G) = SS(A); S is maximally permissive, controllable, coreachable, and reachable with respect to A.

Proof

By construction, S is the maximal subautomaton of A over states in Y. Y is the maximal controllable, coreachable and reachable set of states in A (Lemma 4). It follows that S is maximally permissive, controllable, coreachable, and reachable with respect to A. □

Next to supervisor S, the synthesis algorithm outputs the good state set G, in order to facilitate reuse of this set in other computations. Note that this state set is computed anyway during synthesis; it is not computed specifically to facilitate reuse.

2.2.1 Example

When applying the supervisor synthesis algorithm to automaton A of Fig. 5, first the supervisor states and good states are calculated by computeFixpoint (Algorithm 2). The supervisor states are {p0r0,p2r0}, the good states are {p0r0,p1r0,p2r0,p3r0,p2r1,p3r1}. Next, the supervisor automaton is constructed, which provides the supervisor automaton given in Fig. 6.

Fig. 6 Supervisor S, for (S,G) = SS(A)

For convenience, Fig. 7 provides a visualization of automaton A where the states are color coded by their containment in the state sets resulting from synthesis. Supervisor states (Y) (which are also good states by definition) are displayed white, good states that are not supervisor states (G∖Y) are displayed grey, and non-good states (X∖G) are displayed black.

Fig. 7 Automaton A, color coded by synthesis result

3 Model delta

For the purpose of TSS we wish to model the difference between the base and variant model. We can represent any adaptation from base to variant automaton as a model delta, a 10-tuple: Δ = (X+, X−, Σ+, Σ−, →+, →−, \({X_{0}^{+},}\) \({X_{0}^{-},}\) \({X_{m}^{+},}\) \({X_{m}^{-}} )\), which for each of the sets in the 5-tuple definition of automaton A defines the added (+) and removed (−) elements of that set. Σ+ and Σ− are both partitioned into sets of controllable and uncontrollable events that are added or removed. The following constraints apply to the model delta:

  • All removed elements within the model delta must exist in the base model: \(X^{-} \!{}\subseteq {} X,\ {{\varSigma }}^{-} \!{} \subseteq {} {{\varSigma }}, ...\).

  • All added elements within the model delta must not yet exist in the base model: X+ ∩ X = ∅, Σ+ ∩ Σ = ∅, ....

  • Elements cannot simultaneously be added and removed: X− ∩ X+ = ∅, Σ− ∩ Σ+ = ∅, ....

  • The initial and marked states of the variant model must exist in the variant state set: \(X_{0}^{+}\subseteq (X\cup X^{+})\setminus X^{-},\ X_{m}^{+}\subseteq (X\cup X^{+})\setminus X^{-}\).

  • Transitions must go between defined states, over defined events: \(\longrightarrow ^{\prime } \subseteq X^{\prime }\times {{\varSigma }}^{\prime } \times X^{\prime }\), where \(X^{\prime } = (X\cup X^{+})\setminus X^{-},\ {{\varSigma }}^{\prime }=({{\varSigma }}\cup {{\varSigma }}^{+})\setminus {{\varSigma }}^{-}\), and \(\longrightarrow ^{\prime }=(\longrightarrow \!\cup \longrightarrow ^{+})\setminus \longrightarrow ^{-}\).

When these constraints are met, we call the model delta valid.

Given base automaton A and model delta Δ, variant automaton \(A^{\prime }\) is constructed by: \(A^{\prime }=(X^{\prime },{{\varSigma }}^{\prime },\longrightarrow ^{\prime },X_{0}^{\prime },X_{m}^{\prime })\), where: \(X^{\prime }=(X\cup X^{+})\setminus X^{-},\)\({{\varSigma }}^{\prime }=({{\varSigma }}\cup {{\varSigma }}^{+})\setminus {{\varSigma }}^{-},\)\(\longrightarrow ^{\prime }=(\longrightarrow \!\cup \longrightarrow ^{+})\setminus \longrightarrow ^{-},\)\(X_{0}^{\prime }=(X_{0}\cup X_{0}^{+})\setminus X_{0}^{-},\)\(X_{m}^{\prime }=(X_{m}\cup X_{m}^{+})\setminus X_{m}^{-}\). For a valid model delta, and well-defined base automaton, the constructed variant automaton is well-defined.

Furthermore, for any pair of well-defined automata \((A=(X,{{\varSigma }},\longrightarrow ,X_{0},X_{m}),A^{\prime }=(X^{\prime }, {{\varSigma }}^{\prime },\longrightarrow ^{\prime },X_{0}^{\prime },X_{m}^{\prime }))\), a valid model delta is constructed as follows: \(X^{+}=X^{\prime }\setminus X\), \(X^{-}=X\setminus X^{\prime }\), \({{\varSigma }}^{+}={{\varSigma }}^{\prime }\setminus {{\varSigma }}\), \({{\varSigma }}^{-}={{\varSigma }}\setminus {{\varSigma }}^{\prime }\), \(\longrightarrow ^{+}=\longrightarrow ^{\prime }\setminus \longrightarrow \), \(\longrightarrow ^{-}=\longrightarrow \setminus \longrightarrow ^{\prime }\), \(X_{0}^{+}=X_{0}^{\prime }\setminus X_{0}\), \(X_{0}^{-}=X_{0}\setminus X_{0}^{\prime }\), \(X_{m}^{+}=X_{m}^{\prime }\setminus X_{m}\), \(X_{m}^{-}=X_{m}\setminus X_{m}^{\prime }\).
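Both directions translate into plain set operations. In the sketch below (our own encoding, reusing the Automaton tuple from Section 2), the Delta tuple carries the ten fields; for brevity the controllability partition of the added and removed events is not modeled separately, which a full implementation would carry along.

```python
from typing import NamedTuple

class Delta(NamedTuple):
    """Model delta: added (+) and removed (-) elements per automaton field."""
    st_plus: frozenset
    st_minus: frozenset
    ev_plus: frozenset
    ev_minus: frozenset
    tr_plus: frozenset
    tr_minus: frozenset
    init_plus: frozenset
    init_minus: frozenset
    mark_plus: frozenset
    mark_minus: frozenset

def apply_delta(a: Automaton, d: Delta) -> Automaton:
    """Variant A' = (base union '+') minus '-', per the construction above."""
    return a._replace(
        states=(a.states | d.st_plus) - d.st_minus,
        events=(a.events | d.ev_plus) - d.ev_minus,
        uncontrollable=a.uncontrollable - d.ev_minus,
        trans=(a.trans | d.tr_plus) - d.tr_minus,
        initial=(a.initial | d.init_plus) - d.init_minus,
        marked=(a.marked | d.mark_plus) - d.mark_minus)

def compute_delta(a: Automaton, a2: Automaton) -> Delta:
    """Valid model delta between two well-defined automata (set differences)."""
    return Delta(
        a2.states - a.states, a.states - a2.states,
        a2.events - a.events, a.events - a2.events,
        a2.trans - a.trans, a.trans - a2.trans,
        a2.initial - a.initial, a.initial - a2.initial,
        a2.marked - a.marked, a.marked - a2.marked)
```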

The change of controllability of an event can be modeled by removing all transitions that are labeled by this event, and adding these transitions back with an added event with modified controllability.

We may denote a model delta by only its non-empty elements. So if we mention a model delta with \(X_{0}^{+}=\{x^{\delta }\}\), and no other information, this implies that the other elements in the model delta tuple are empty. In the remainder of this paper, when considering a model delta Δ it is implied that this is a model delta from base automaton A to variant automaton \(A^{\prime }\).

4 Atomic adaptations

In this section we consider atomic adaptations, where the difference between the base and variant model can be described by a single, indivisible change in the automaton specification. Formally, a model delta Δ is an atomic adaptation when exactly one of the tuple elements is a singleton set, and all other elements are empty; \(|{X^{+}}| + |{X^{-}}| + |{{\varSigma }}^{+}| +|{{\varSigma }}^{-}| + | \longrightarrow ^{+}| + |\longrightarrow ^{-}| + | X_{0}^{+} | + |X_{0}^{-} | + |X_{m}^{+}| + |X_{m}^{-}| = 1\).

We consider several types of atomic adaptations, e.g., removing a transition, or adding the marked property to a state, for which we provide an atomic TSS algorithm. The purpose of these algorithms is to calculate the supervisor states and good states of the variant automaton, using the base automaton, its synthesis result, and the model delta. Theorem 2 holds for these algorithms. Essentially, the properties of the supervisor ((co-)reachability, controllability, and maximal permissiveness) are retained during atomic TSS. For the sake of cohesion, proofs of Theorem 2 for each algorithm are given separately in the Appendix.

Theorem 2

Given base automaton A, fixpoint result (Y,G) = computeFixpoint(A), and atomic adaptation Δ for which the atomic TSS algorithm is given, the atomic TSS algorithm provides a supervisor state set \(Y^{\prime }\) and good state set \(G^{\prime }\) such that they are equal to the fixpoint result of the variant automaton; \((Y^{\prime },G^{\prime })=\text {\texttt {computeFixpoint}}(A^{\prime })\).

Figure 8 shows an overview of the atomic TSS method. It is similar to Fig. 1, except that now the names of the artifacts and algorithms that have been introduced are shown. The figure shows that the fixpoint for the variant supervisor can be computed in two ways: either by (1) performing computeFixpoint on the variant automaton directly, or by (2) performing atomic TSS using the base automaton, the base supervisor fixpoint, and the atomic model adaptation. Either way, the fixpoint result is the same.

Fig. 8 Inputs and outputs for atomic TSS for atomic adaptation Δ

In the following subsections we consider, respectively, adding or removing the initial property of some state, adding or removing the marked property of some state, and adding or removing a transition. For the cases of removed or added states and events, no algorithms are provided; these atomic adaptations are discussed in Section 4.7. After presenting the atomic TSS algorithms in this section, we show in Section 5 how we can iterate over them, and how atomic adaptations can be grouped together to process them at once.

The algorithms are strongly based on Thuijsman and Reniers (2020). In some places minor modifications are made for the sake of correctness and efficiency. Some of these modifications are discussed in Tijsse Claase (2020). The authors note that these modifications influence the experimental results presented in Thuijsman and Reniers (2020), however only to such a small extent that they do not affect the conclusions drawn from those results. All algorithms are also modified to compute the supervisor state set Y instead of the supervisor S, leading to shorter notations. S can simply be computed from Y, as in line 2 of Algorithm 1.

4.1 Added initial property

We assume the situation that (Y,G) = computeFixpoint(A) has been calculated for base automaton A. Some state of base automaton A has been made an initial state, which is the only adaptation to create variant automaton \(A^{\prime }\). In Algorithm 5 the atomic TSS algorithm is provided to compute supervisor states \(Y^{\prime }\) and good states \(G^{\prime }\) for the variant model, given A, Y, G, and the state with added initial property xδ. The algorithm uses a switch statement, in which the value of a variable, in this case xδ, is tested against multiple cases. Once a case match is found, the statements associated with that particular case are executed. In case no match is found, the default statements are executed. For all atomic TSS algorithms the switch cases are mutually exclusive, which means that for the given atomic adaptation only one case condition holds, or none, in which case the default statements are executed.

[Algorithm 5: atomic TSS for an added initial property]

In Algorithm 5 it can be seen that two cases are considered, the first being that xδ is in G∖Y. A state in G∖Y is coreachable and controllable in the base model. It was not reachable in the base model, as in that case it would be part of Y. Due to the addition of the initial property, we now know that xδ is reachable, so it should become part of \(Y^{\prime }\). It is possible that more states in G∖Y have become reachable due to xδ being reachable, so an FRS is carried out over the states in G. We already know that the states in Y were reachable, and they remain reachable in the variant model. As we do not want to reinvest computational effort in finding these states in Y again, the FRS is initiated with the states Y in the starting set along with xδ; essentially, we start closer to the fixpoint that we wish to find. States in X∖G are not considered, as they remain non-coreachable or non-controllable in the variant model. In our running example, we can consider the adaptation to make p3r1 initial, which fits under this particular case. As p2r1 is a reachable good state from p3r1, both p3r1 and p2r1 will be added to the supervisor states Y to construct the supervisor states for the variant model \(Y^{\prime }\).

Alternatively, xδ may be in X∖G. As we just noted, the adaptation of initial states does not influence the set of coreachable and controllable states. So in this case, we have already performed the FRS over the same set G to compute Y. As G did not change, the supervisor and good states remain the same for the variant model. In our example we can consider making p4r1 an initial state as such an adaptation. In that variant model, p4r1 will remain a non-good state in \(X\setminus G^{\prime }\).

Finally, xδ may be in Y. If this is the case, just like in the previous example, the default statement will be executed. xδ was already found in the FRS of the base model, so all states in G reachable from xδ are already included in Y. Thus, the supervisor states and good states remain the same for the variant model. In the running example, making p2r0 an initial state would be of this case.
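The three cases above can be transcribed as follows: a sketch reusing frs from Section 2.2 that, like the paper's updated algorithms, returns the state sets Y′ and G′ rather than the supervisor automaton.

```python
def tss_add_initial(aut, y, good, x_delta):
    """Sketch of Algorithm 5: state x_delta gains the initial property."""
    if x_delta in good and x_delta not in y:
        # case x_delta in G\Y: newly reachable; FRS over G, seeded with Y
        # so that already-reachable states are not rediscovered
        return frs(good, aut.events, aut.trans, y | {x_delta}), good
    # default: x_delta in Y (already reachable) or in X\G (never good)
    return y, good
```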

4.2 Removed initial property

We consider a similar situation as the previous section, only this time the initial property has been removed from a state instead of added. The atomic TSS algorithm for this case is shown in Algorithm 6.

[Algorithm 6: atomic TSS for a removed initial property]

In the first case of the algorithm, an initial state in Y is removed. This might lead to some states in Y being unreachable in the variant model. However, the good states G are not influenced by the initial states, so they remain the same. Also, unreachable states will remain unreachable. So an FRS is carried out over the previously reachable states Y, from the new set of initial states for the variant model, to compute \(Y^{\prime }\). In the running example, the removal of the initial property from p0r0 falls under this case. Consequently, there are no initial states left in Y, so for the variant model there are no supervisor states; \(Y^{\prime }=\emptyset \). The good states remain the same.

The other case is that the removed initial state is not in Y, considered under the default statement of Algorithm 6. In fact, we know that in this case the removed initial state is in X∖G: states in G∖Y cannot be initial states, as they would already have been in Y as reachable states in G. As xδ is not in the maximal controllable and coreachable set in this case, it does not matter whether it is reachable or initial; it will not be part of the good states and supervisor states. In the running example this can be demonstrated by the removal of the initial property from state p4r0.
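A corresponding sketch of Algorithm 6, under the same assumptions as the previous sketch:

```python
def tss_remove_initial(aut, y, good, x_delta):
    """Sketch of Algorithm 6: state x_delta loses the initial property."""
    if x_delta in y:
        # re-establish reachability, but only within the old supervisor states
        new_init = (aut.initial - {x_delta}) & y
        return frs(y, aut.events, aut.trans, new_init), good
    # default: x_delta was in X\G, so Y and G are unaffected
    return y, good
```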

[Algorithm 7: atomic TSS for an added marked property]

4.3 Added marked property

In Algorithm 7 the atomic TSS algorithm is provided for the case that a state was given the marked property in the model delta from A to \(A^{\prime }\).

The first case considers the circumstance that a non-good state is now marked. This means that some non-good states may become good and/or supervisor states because they are coreachable in the variant model. All states that were supervisor states and good states will remain so. So a fixpoint computation is instantiated where the supervisor states are already added as marked states and initial states, so that this part of the state space does not have to be found again in the reachability searches. In the running example we could consider the case that p1r1 is made a marked state. During the first iteration of the fixpoint computation it will be found as a good state, as it is marked. It will, however, be removed from the good states since it has an uncontrollable transition to a bad state. So for this specific example the supervisor and good states remain the same from base to variant model.

If the state with the added marked property is a good state, it was already coreachable, as were all good states that can reach this state. So giving it the marked property is not going to change the coreachability. Thus the supervisor states and the good states remain the same. In the running example, this would be the case if, for instance, p1r0 were added to the set of marked states.
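A sketch of Algorithm 7 along the same lines, reusing compute_fixpoint from Section 2.2; seeding the initial and marked states with Y encodes the reuse argument made above.

```python
def tss_add_marked(aut, y, good, x_delta):
    """Sketch of Algorithm 7: state x_delta gains the marked property."""
    if x_delta not in good:
        # case x_delta in X\G: non-good states may become good. Seed the
        # fixpoint with Y as initial and marked states, so that part of
        # the state space is not searched again.
        seeded = aut._replace(initial=aut.initial | y,
                              marked=aut.marked | y | {x_delta})
        return compute_fixpoint(seeded)
    # default: x_delta in G was already coreachable; nothing changes
    return y, good
```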

4.4 Removed marked property

Now the case is considered that a marked state in the base model is not a marked state in the variant model. The atomic TSS algorithm for this case is given in Algorithm 8.

[Algorithm 8: atomic TSS for a removed marked property]

In the first case, the removal of the marked property from a state in G∖Y is considered. The supervisor states Y will remain the same: as xδ is not reachable from any supervisor state, the removal of its marked property does not influence their coreachability. Some good states might not be good states for the variant model, as they may no longer be coreachable. Therefore a fixpoint computation is performed, with all supervisor states already added as initial states and marked states, so that this part of the state space does not have to be searched anymore. In the running example we can consider removing the marked property from p2r1. Consequently, the supervisor for the variant model remains the same, but p2r1 and p3r1 are not good states for the variant model, as they are no longer coreachable.

The second case considers the situation that the removed marked state is in Y. A new fixpoint computation is performed for the variant model, where the non-good states are not taken into account, as the removal of a marked state will not increase the maximal set of coreachable and controllable states. In the example we can consider removing the marked property of state p2r0. As a result, there will be no supervisor states in the variant model, \(Y^{\prime }=\emptyset \), and only p2r1 and p3r1 remain in \(G^{\prime }\).

The final case occurs when the marked property is removed from a non-good state. This does not influence the coreachability and controllability of the good states, as their coreachability is established via marked states in G. Also, the reachable part of the good states remains the same. So the variant model has the same supervisor states and good states as the base model.
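A sketch of Algorithm 8 covering the three cases above; the helper restrict (our own name) builds the subautomaton over a given state set, as defined in Section 2.

```python
def restrict(aut, keep):
    """Subautomaton of `aut` over the state set `keep`."""
    return aut._replace(
        states=frozenset(aut.states & keep),
        trans=frozenset(t for t in aut.trans if t[0] in keep and t[2] in keep),
        initial=frozenset(aut.initial & keep),
        marked=frozenset(aut.marked & keep))

def tss_remove_marked(aut, y, good, x_delta):
    """Sketch of Algorithm 8: state x_delta loses the marked property."""
    aut2 = aut._replace(marked=aut.marked - {x_delta})
    if x_delta in good and x_delta not in y:
        # case x_delta in G\Y: Y stays; recheck coreachability over G,
        # seeding the fixpoint with Y as initial and marked states
        sub = restrict(aut2, good)
        sub = sub._replace(initial=sub.initial | y, marked=sub.marked | y)
        _, g2 = compute_fixpoint(sub)
        return y, g2
    if x_delta in y:
        # case x_delta in Y: redo the fixpoint over the old good states only
        return compute_fixpoint(restrict(aut2, good))
    # default: x_delta in X\G; nothing changes
    return y, good
```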

4.5 Added transition

Now we consider the case that only a single transition has been added to the base automaton A to create variant automaton \(A^{\prime }\). So all elements in Δ are empty except →+, which only contains the transition (xor,σ,xtar). Algorithm 9 computes the supervisor and good states for the variant model according to the type of adaptation that is made.

[Algorithm 9: atomic TSS for an added transition]

The first case considers an added transition from a state in Y to a state in G∖Y. The target and origin states were already coreachable and controllable, so this is not influenced by the addition of the transition. However, the target state is now reachable, which it was not before. To find all good states that are now reachable, an FRS is performed to compute the variant supervisor states. The good states remain the same. In the running example we can consider the addition of a transition from p0r0 to p3r1. In that case, p3r1 and p2r1 would become supervisor states in the variant model, in addition to the states already in Y. The good states remain unchanged.

Next, the addition of an uncontrollable transition from a good state to a non-good state is considered. The target state was non-coreachable or non-controllable, so transitions to this state need to be disabled. Because the added transition is uncontrollable, it cannot be disabled by the supervisor. This means that the origin state is no longer a good state, and we wish to remove it for the variant model. The state is made non-coreachable by removing all outgoing transitions from it, and removing it from the set of marked states in case it was a marked state. The fixpoint computation is then performed on the state space spanned by the good states, with the origin state of the added transition as non-coreachable. In the running example this could be an added uncontrollable transition from p2r1 to p2⊥. For that adaptation, p2r1 and p3r1 would be removed from the good states to construct the variant good states, and the supervisor states remain the same.

In case the origin state is not a good state, adding a transition that is not a self-loop (xor ≠ xtar) might influence the coreachability of this and other non-good states. States that were supervisor states in the base model will remain so. Thus, a fixpoint computation is performed over the entire state set, where the supervisor states are added to the marked and initial states so that this part of the state space does not need to be searched, reducing computational effort. For the running example we can consider the case that a transition is added from p4r1 to p2r0; in that case the supervisor states remain the same, and p4r1 is added to the good states to construct the variant good states.

In all other cases, the supervisor states and good states remain the same. For example, for the addition of a transition from and to a supervisor state, all good states remain coreachable and controllable, and states in G∖Y remain unreachable. In the example this could be a transition from p2r0 to p0r0. Another example is adding a self-loop to a non-good state, which does not influence its non-coreachability or non-controllability. In the running example this may be a transition from p4r1 to p4r1.
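The four cases of Algorithm 9 can be sketched as follows, reusing frs, compute_fixpoint, and restrict from the earlier sketches.

```python
def tss_add_transition(aut, y, good, t):
    """Sketch of Algorithm 9: transition t = (x_or, sigma, x_tar) is added."""
    x_or, sigma, x_tar = t
    aut2 = aut._replace(trans=aut.trans | {t})
    if x_or in y and x_tar in good and x_tar not in y:
        # case 1: target in G\Y becomes reachable; extend Y by an FRS over G
        return frs(good, aut2.events, aut2.trans, y | {x_tar}), good
    if sigma in aut.uncontrollable and x_or in good and x_tar not in good:
        # case 2: x_or can no longer be good; make it non-coreachable by
        # dropping its outgoing transitions and its marking, then redo the
        # fixpoint over the old good states
        sub = restrict(aut2, good)
        sub = sub._replace(
            trans=frozenset(tr for tr in sub.trans if tr[0] != x_or),
            marked=sub.marked - {x_or})
        return compute_fixpoint(sub)
    if x_or not in good and x_or != x_tar:
        # case 3: non-good states may become good; full fixpoint, seeded
        # with the supervisor states as initial and marked states
        seeded = aut2._replace(initial=aut2.initial | y, marked=aut2.marked | y)
        return compute_fixpoint(seeded)
    # default: Y and G are unaffected
    return y, good
```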

4.6 Removed transition

Here we consider the case that only a single transition has been removed from base automaton A to create variant automaton \(A^{\prime }\). Algorithm 10 computes the supervisor states and good states for the variant model according to the state sets the origin and target states of this transition belong to, and the controllability of the transition.

[Algorithm 10: atomic TSS for a removed transition]

Let us consider a removed transition, that is not a self-loop, for which xor and xtar were both in Y. This case is considered first in Algorithm 10. It is possible that due to the removal of this transition, xor and other states in G are no longer coreachable. Also, xtar might no longer be reachable. However, bad states and non-reachable states will remain as such. Therefore, a fixpoint computation is performed for the variant model, only for the states in G of the base model. As this synthesis is on a reduced state set, it will require less effort than a completely new synthesis on the variant model. In the running example this could be the removal of transition (p0r0,b,p2r0), which would lead to no supervisor states for the variant automaton. p0r0 would also be removed from the good state set to construct the good states \(G^{\prime }\).

Next, we consider a removed transition, that is not a self-loop, from a state xor in G∖Y to a state xtar in G. xor, and other good states, might become non-coreachable after removal of this transition. However, as these states are not reachable from the supervisor states, the supervisor states will remain the same for the variant model. To find the set of good states for the variant model, a fixpoint computation is performed over the good states, where the supervisor states are added to the marked and initial states so that this part of the state space is not searched. For the running example, removing transition (p3r1,c,p2r1) falls under this case. As a result, the supervisor states remain the same from the base to the variant model, but p3r1 is removed from the good states to construct the variant good states \(G^{\prime }\).

As a third case in Algorithm 10, an uncontrollable transition is removed whose origin and target states are not good states. It is possible that the origin state and other states were not good states due to the existence of this transition. We need to perform some additional fixpoint computation in order to find these states. However, we know that all good states that have been found already will remain good states for the variant model. The same goes for supervisor states. Therefore, computeFixpoint is instantiated with the good states added as marked states, and the supervisor states added as initial states. In the running example the removal of transition (p0r1,b,p2⊥) falls under this case. In that circumstance, the supervisor states and good states remain the same for the variant model as for the base model.

Finally, for all other cases the supervisor states and good states remain the same between variant and base model. For example, we remove transition (xor,σ,xtar) from base model A, for which xor ∈ X∖G and xtar ∈ Y. We know that in A, the state xor was coreachable, as the removed transition led to a state in Y. As the (coreachable) state xor does not exist in the set of good states G, it must be non-controllable. We can reason that the removal of this transition is not going to make it controllable. Thus, Y and G of the base automaton remain the same for the variant automaton. This is also observed in Algorithm 10.
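Analogously, a sketch of Algorithm 10 under the same assumptions:

```python
def tss_remove_transition(aut, y, good, t):
    """Sketch of Algorithm 10: transition t = (x_or, sigma, x_tar) is removed."""
    x_or, sigma, x_tar = t
    aut2 = aut._replace(trans=aut.trans - {t})
    if x_or in y and x_tar in y and x_or != x_tar:
        # case 1: reachability and coreachability may both shrink;
        # fixpoint over the old good states only
        return compute_fixpoint(restrict(aut2, good))
    if x_or in good and x_or not in y and x_tar in good and x_or != x_tar:
        # case 2: Y is unaffected; recheck coreachability of the good
        # states, seeding with Y as initial and marked states
        sub = restrict(aut2, good)
        sub = sub._replace(initial=sub.initial | y, marked=sub.marked | y)
        _, g2 = compute_fixpoint(sub)
        return y, g2
    if sigma in aut.uncontrollable and x_or not in good and x_tar not in good:
        # case 3: states may become good; fixpoint seeded with G as
        # marked states and Y as initial states
        seeded = aut2._replace(initial=aut2.initial | y,
                               marked=aut2.marked | good)
        return compute_fixpoint(seeded)
    # default: Y and G are unaffected
    return y, good
```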

4.7 Other atomic adaptations

Some atomic adaptations were not discussed in the algorithms above. These atomic adaptations are: adding a state, removing a state, adding an event, and removing an event. When these model deltas occur as an atomic adaptation, they do not influence the supervisor states or good states. For example, an added state only influences the synthesis result if there are added transitions towards or from it. And an event can only be removed if there are no transitions labeled by that event. Otherwise, the model delta is not an atomic adaptation, or it is not a valid model delta. Proofs for Lemma 5 are provided in the Appendix.

Lemma 5

For an atomic adaptation that is an: added state, removed state, added event, or removed event, the supervisor states \(Y^{\prime }\) of the variant model are equal to the supervisor states Y of the base model, and the good states \(G^{\prime }\) of the variant model are equal to the good states G of the base model.

5 Transformational Supervisor Synthesis for any model delta

In this section we will not restrict the model delta to atomic cases anymore; any valid model delta of any size is allowed. A method that iterates over all atomic adaptations is discussed in Section 5.1. A method that groups these atomic adaptations together based on their required computation and processes them at the same time is shown in Section 5.2.

5.1 Iterative Transformational Supervisor Synthesis

In Fig. 9 a modified version of Fig. 8 is shown, which provides a visualization of the idea on which Iterative TSS (ITSS) is based. A non-atomic model delta describes the difference between the base and the variant model. This model delta is split into atomic adaptations, on which the atomic TSS algorithms can be applied, and the variant supervisor fixpoint is computed by iterating over these atomic adaptations.

Fig. 9 ITSS for non-atomic Δ

Algorithm 11 provides an ITSS algorithm that iterates over the atomic adaptations in the model delta one by one. Each time, an atomic adaptation is applied using the results of Section 4, and \(Y^{\prime }\) and \(G^{\prime }\) are computed accordingly. After an adaptation has been applied, the sets \(X_{0}^{\prime },\ X_{m}^{\prime },\ \longrightarrow ^{\prime }\) are updated, so that if we were to construct an intermediate automaton \(A^{\prime }=(X^{\prime },{{\varSigma }}^{\prime },\longrightarrow ^{\prime },X_{0}^{\prime },X_{m}^{\prime })\), this automaton is up-to-date for the adaptations applied until that point. This intermediate automaton is then used in the input for the next atomic TSS algorithm. Before iterating over the atomic TSS algorithms, the added states and events are added to the base supervisor and automaton. After the iterations, the set of removed events is removed from the supervisor, as at this point no transitions with these events are left in the supervisor, following from the restrictions specified in Section 3. Because \(A^{\prime }\) will not have transitions to or from the removed states, and the removed states are not initial or marked, the set of removed states will not be a part of \(Y^{\prime }\) (or \(G^{\prime }\)) at this point of the algorithm. So they do not need to be removed from the supervisor. Algorithm 11 also outputs the variant automaton \(A^{\prime }\) of which it produces the synthesis result.

[Algorithm 11: Iterative TSS (ITSS)]
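A sketch of the iteration, under the assumptions of the earlier sketches: states and events are added up front, the atomic TSS algorithms are applied in the order of Section 4, and removed states and events are dropped at the end.

```python
def itss(aut, y, good, d):
    """Sketch of Algorithm 11 (ITSS): returns (Y', G', A'); the supervisor
    automaton S' then follows from Y' as in ss()."""
    a = aut._replace(states=aut.states | d.st_plus,
                     events=aut.events | d.ev_plus)
    for x in d.init_plus:
        y, good = tss_add_initial(a, y, good, x)
        a = a._replace(initial=a.initial | {x})
    for x in d.init_minus:
        y, good = tss_remove_initial(a, y, good, x)
        a = a._replace(initial=a.initial - {x})
    for x in d.mark_plus:
        y, good = tss_add_marked(a, y, good, x)
        a = a._replace(marked=a.marked | {x})
    for x in d.mark_minus:
        y, good = tss_remove_marked(a, y, good, x)
        a = a._replace(marked=a.marked - {x})
    for t in d.tr_plus:
        y, good = tss_add_transition(a, y, good, t)
        a = a._replace(trans=a.trans | {t})
    for t in d.tr_minus:
        y, good = tss_remove_transition(a, y, good, t)
        a = a._replace(trans=a.trans - {t})
    # removed states have no remaining transitions and are not initial or
    # marked, so they cannot be in Y' or G' at this point
    a = a._replace(states=a.states - d.st_minus, events=a.events - d.ev_minus)
    return y, good, a
```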

Theorem 3 holds for Algorithm 11. It is similar to Theorem 2 for the atomic model adaptations, only modified to apply to a supervisor automaton S, rather than supervisor states Y. It also states that the algorithm provides the correct variant automaton as output. The proof of Theorem 3 for Algorithm 11 can be found in the Appendix.

Theorem 3

Given base automaton A, model delta Δ, synthesis result (Y,G) = computeFixpoint(A), and \((\hat {S},\hat {G},\hat {A})=\text {\texttt {ITSS}}(A,Y,G,{{\varDelta }}_{})\); then \(\hat {S}=S^{\prime }\), \(\hat {G}=G^{\prime }\), and \(\hat {A}=A^{\prime }\), for \((S^{\prime },G'{})=\text {\texttt {SS}}(A^{\prime })\).

5.1.1 Order of applying atomic adaptations

There are some restrictions to the order in which the atomic adaptations can be applied during iterative TSS. Essentially, when an atomic adaptation is applied, this atomic adaptation needs to be a valid model delta. Let us consider the following model delta for the running example:

  • X+ = {p5r0},

  • Σ+ = {e},

  • + = {(p4r0,e,p5r0)},

  • \(X^{-} = \emptyset , {{\varSigma }}^{-} = \emptyset , \longrightarrow ^{-} = \emptyset , X_{0}^{+} = \emptyset , X_{0}^{-} = \emptyset , X_{m}^{+} = \emptyset , X_{m}^{-} = \emptyset \).

This model delta represents an added transition with an added event (e) to an added state (p5r0) from an existing state (p4r0). The model delta can be split into three atomic parts. We observe that the added state and added event need to be added to the automaton first, before the added transition can be added. Otherwise, the transition goes to an undefined state, or uses an undefined event, which means the model delta is not valid.

Therefore, the added states and added events are added first. Now, any state with added or removed initial property and any state with added or removed marked property will exist in this intermediate automaton. Also all added and removed transitions will go between defined states by defined events. Thus these atomic adaptations can be applied in any order. Once these adaptations have been applied, there will be no more transitions towards removed states, or transitions over removed events. Then finally the removed states and removed events can be removed, resulting in the final variant automaton.

The functioning of the atomic TSS algorithms is based on the assumption that the atomic adaptation is a valid model delta by the definitions in Section 3. To support the correct functioning of Algorithm 11, proof that each atomic adaptation that is applied is a valid model delta is given in the Appendix.

The authors note that even though the atomic adaptations of added/removed initial property, added/removed marked property, and added/removed transition can be applied in any order to arrive at the same supervisor, the order in which they are applied may impact the computational efficiency. We do not optimize this order here, as the optimal order is likely highly dependent on the particular model and model delta. Therefore, the adaptations are applied in the order shown in Algorithm 11, where the atomic TSS algorithms appear in the order in which they are introduced in Section 4.

5.2 Grouped Transformational Supervisor Synthesis

We observe that Algorithm 11 might not be very efficient when many atomic adaptations need to be considered. The main issue is that the SS and FRS algorithms are repeatedly called for input sets that are considerably similar to each other. This may notably occur when the plant description is given by a set of automata {P1,P2,...,Pn}. For our synthesis purpose, we take the synchronous product A = P1||P2||...||Pn, as mentioned in Section 2. If one of the automata Pi is adapted in an atomic manner, this might result in the model delta Δ containing many atomic adaptations, due to synchronization. Many of these atomic adaptations in the synchronous system will be of the same type, e.g., an added transition in Pi can induce many added transitions in A. Therefore, we want to consider some adaptations at the same time as a group, rather than applying them one by one.

We partition the model delta into two disjoint subsets: \({{\varDelta }}^{\times } \uplus {{\varDelta }}^{\circ } = {{\varDelta }}_{}\), where ⊎ denotes the disjoint union of the sets that are in the same field of both tuples. Δ× contains all atomic adaptations that, when applying the respective atomic TSS algorithm, require the SS or FRS algorithm. Δ∘ contains all other possible atomic adaptations outside Δ×; these require no reachability searches. As a result of the construction of the atomic TSS algorithms, all atomic adaptations in Δ∘ fit under the default cases of these algorithms, and all adaptations in Δ× match one of the (non-default) case statements in these algorithms. Formally, we can compute Δ× = (X+,×, X−,×, Σ+,×, Σ−,×, →+,×, →−,×, \({X_{0}^{+,\times },}\) \({X_{0}^{-,\times },}\) \({X_{m}^{+,\times },}\) \({X_{m}^{-,\times }})\) for a model delta Δ = (X+, X−, Σ+, Σ−, →+, →−, \({X_{0}^{+},}\) \({X_{0}^{-},}\) \({X_{m}^{+},}\) \({X_{m}^{-}})\) as follows:

  • X+,× = ∅, X−,× = ∅, Σ+,× = ∅, Σ−,× = ∅

  • \(\longrightarrow ^{+,\times } = \{(x_{\mathit {or}},\sigma ,x_{\mathit {tar}}) | (x_{\mathit {or}} \in Y \wedge x_{\mathit {tar}} \in G\setminus Y)\vee (x_{\mathit {or}}\in G \wedge x_{\mathit {tar}}\in X\setminus G{} \wedge \sigma \in {{\varSigma }}_{u})\vee (x_{\mathit {or}}\in X\setminus G{} \wedge x_{\mathit {or}} \neq x_{\mathit {tar}})\}\)

  • \(\longrightarrow ^{-,\times } = \{(x_{\mathit {or}},\sigma ,x_{\mathit {tar}}) | (x_{\mathit {or}} \in Y \wedge x_{\mathit {tar}} \in Y \wedge x_{\mathit {or}} \neq x_{\mathit {tar}})\vee (x_{\mathit {or}}\in G\setminus Y \wedge x_{\mathit {tar}}\in G{} \wedge x_{\mathit {or}} \neq x_{\mathit {tar}})\vee (x_{\mathit {or}}\in X\setminus G{} \wedge x_{\mathit {tar}}\in X\setminus G{} \wedge \sigma \in {{\varSigma }}_{u} \wedge x_{\mathit {or}} \neq x_{\mathit {tar}}) \}\)

  • \(X^{+,\times }_{0} = X_{0}^{+} \cap (G\setminus Y)\)

  • \(X^{-,\times }_{0} = X_{0}^{-} \cap Y\)

  • \(X^{+,\times }_{m} = X_{m}^{+} \cap (X\setminus G)\)

  • \(X^{-,\times }_{m} = X_{m}^{-} \cap G\)

Δ∘ then consists of all atomic adaptations in Δ outside Δ×.
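This partition is a direct set computation. The sketch below (our own transcription, reusing the Delta tuple from Section 3) computes Δ×; Δ∘ then follows as everything in Δ that is not in Δ×.

```python
def delta_cross(d, aut, y, good):
    """The part Delta^x of model delta d whose atomic adaptations match a
    non-default case of Algorithms 5-10 (the rest forms Delta^o)."""
    ng = aut.states - good  # the non-good states X \ G
    gy = good - y           # G \ Y
    unc = aut.uncontrollable
    return d._replace(
        st_plus=frozenset(), st_minus=frozenset(),
        ev_plus=frozenset(), ev_minus=frozenset(),
        tr_plus=frozenset(
            (o, s, t) for (o, s, t) in d.tr_plus
            if (o in y and t in gy)
            or (o in good and t in ng and s in unc)
            or (o in ng and o != t)),
        tr_minus=frozenset(
            (o, s, t) for (o, s, t) in d.tr_minus
            if o != t and ((o in y and t in y)
                           or (o in gy and t in good)
                           or (o in ng and t in ng and s in unc))),
        init_plus=frozenset(d.init_plus & gy),
        init_minus=frozenset(d.init_minus & y),
        mark_plus=frozenset(d.mark_plus & ng),
        mark_minus=frozenset(d.mark_minus & good))
```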

When performing Grouped TSS (GTSS), we first want to apply all atomic adaptations in Δ∘, as we can observe in the atomic TSS algorithms that for these no reachability searches need to be performed, and the supervisor states and good states remain unchanged. After all atomic adaptations in Δ∘ have been applied, we find the first atomic adaptation in Δ× for which, if we were to apply ITSS with model delta Δ×, one of the case conditions in Algorithms 5-10 holds. Instead of performing the corresponding case statements on only this atomic adaptation, within the case operation we first construct a set \({{\varDelta }}^{\sim }\) that contains all atomic adaptations in Δ× for which that same case condition in the same atomic TSS algorithm holds. So, for example, if we have Δ× with nonempty set \(X_{0}^{+,\times } \cap (G\setminus Y)\), then \(X_{0}^{+,\times } \cap (G\setminus Y)\) is a set \({{\varDelta }}^{\sim }\). The rationale applied in Algorithms 5-10 for single atomic adaptations still holds for sets \({{\varDelta }}^{\sim }\). So, we take set \({{\varDelta }}^{\sim }\) and apply the respective case operation on this set, rather than doing this for each atomic adaptation one by one. This might influence the supervisor states and good states. Because the partitioning of Δ into Δ× and Δ∘ depends on those state sets, Δ∘ might be nonempty if we recompute it for the atomic adaptations in Δ that have not been applied yet. Thus, we once more apply the adaptations in Δ∘, and reiterate until all adaptations have been applied.

Figure 10 visualizes the grouped TSS method. First, all atomic adaptations are applied that require no reachability search. Then, a set of atomic adaptations \({{\varDelta }}^{\sim }\) is applied at once. This repetition continues until the entire model delta is applied. At this point, the fixpoint for the variant supervisor has been found.

[Algorithm 12: Grouped TSS (GTSS)]
Fig. 10 GTSS for non-atomic Δ

Algorithm 12 functions as described above. The added events and added states are added in line 1. After these have been applied, they are removed from the model delta. As stated in Section 4.7, these adaptations do not influence the supervisor states and good states, seen in line 3. The model delta is partitioned into Δ× and Δ∘ as specified earlier in this section. All adaptations in Δ∘ are applied in line 6, and subsequently removed from the model delta in line 7. Note that \(Y^{\prime }\) and \(G^{\prime }\) remain unchanged. In line 8 a set of atomic adaptations \({{\varDelta }}^{\sim }\) is applied; \(Y^{\prime }\) and \(G^{\prime }\) are calculated accordingly, and \(\longrightarrow ^{\prime }\), \(X_{0}^{\prime }\), and \(X_{m}^{\prime }\) are consequently updated. The calculation of \(Y^{\prime }\) and \(G^{\prime }\) is done using slightly modified versions of Algorithms 5-10 that accept sets of atomic adaptations. To avoid redundancy, we only provide the modified version of Algorithm 5 in the example below; the other atomic TSS algorithms are converted in the same manner. The set of atomic adaptations that has been applied is removed from Δ in line 9. Now Δ× and Δ∘ are calculated once again, because the partitioning of the model delta is dependent on \(Y^{\prime }\) and \(G^{\prime }\), which have now been modified. The steps are repeated until all atomic adaptations to \(\longrightarrow ^{\prime }\), \(X_{0}^{\prime }\), and \(X_{m}^{\prime }\) have been applied. Finally, the supervisor automaton for the variant model and the variant automaton are constructed in lines 11 and 12, respectively.

Theorem 4 is proven for Algorithm 12 in the Appendix. The Appendix also provides the proof that Algorithm 12 respects the order of applying adaptations discussed in Section 5.1.

Theorem 4

Given base automaton A, model delta Δ, synthesis result (Y,G) = computeFixpoint(A), and \((\hat {S},\hat {G},\hat {A})=\)GTSS(A,Y,G,Δ); then \(\hat {S}=S^{\prime }\), \(\hat {G}=G^{\prime }\), and \(\hat {A}=A^{\prime }\), for \((S^{\prime },G'{})=\)SS\((A^{\prime })\).

5.2.1 Example

Let us consider the case that Δ contains three atomic adaptations, all of which are states with added initial property; \(X_{0}^{+} = \{x^{\delta ,1},x^{\delta ,2},x^{\delta ,3}\}\), with xδ,1 ∈ G∖Y, xδ,2 ∈ G∖Y, and xδ,3 ∈ Y. In our running example these could be states p3r0, p3r1, and p2r0, respectively. xδ,1 and xδ,2 require FRS, seen in line 3 of Algorithm 5, so they are in Δ×. xδ,3 triggers the default case in Algorithm 5, and thus it is in Δ∘. Therefore, xδ,3 is applied first, resulting in:

  • \(X_{0}^{\prime } = X_{0}^{\prime } \cup \{x^{\delta ,3}\}\)

  • \(X_{0}^{+} = X_{0}^{+} \setminus \{x^{\delta ,3}\}\)

Note that \(Y^{\prime }\) and \(G^{\prime }\) are not influenced, as xδ,3 is in Δ∘.

Now all adaptations in Δ∘ have been applied. \(X_{0}^{+} = \{x^{\delta ,1},x^{\delta ,2}\}\) remains in the model delta. As xδ,1 and xδ,2 are both of the same case (line 2 of Algorithm 5), they form a set \({{\varDelta }}^{\sim }=\lbrace x^{\delta ,1}, x^{\delta ,2}\rbrace \). Consequently, these atomic adaptations will be applied simultaneously. \(Y^{\prime }\) and \(G^{\prime }\) are calculated by a modified version of Algorithm 5, given in Algorithm 13. \(X_{0}^{\prime }\) and \(X_{0}^{+}\) are updated as follows:

  • \(X_{0}^{\prime } = X_{0}^{\prime } \cup {{\varDelta }}^{\sim }\)

  • \(X_{0}^{+} = X_{0}^{+} \setminus {{\varDelta }}^{\sim }\)

Algorithm 13
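To give a flavor of such a grouped modification, the following self-contained Python sketch performs one forward reachability search seeded with the whole group of newly-initial states at once, rather than one search per state. The list-of-triples encoding of the transitions, the restriction of the search to good states, and all names are our own illustrative assumptions, not the paper's Matlab implementation.

```python
from collections import deque

def grouped_frs(transitions, good, Y, group):
    """Grouped forward reachability search: one search seeded with a whole
    set of newly-initial states (assumed to lie in G but not in Y), instead
    of one search per atomic adaptation. Returns the updated set Y'."""
    frontier = deque(x for x in group if x in good and x not in Y)
    Y_new = set(Y) | set(frontier)
    succ = {}                                   # successor map
    for (x, _event, y) in transitions:
        succ.setdefault(x, []).append(y)
    while frontier:
        x = frontier.popleft()
        for y in succ.get(x, []):
            if y in good and y not in Y_new:
                Y_new.add(y)
                frontier.append(y)
    return Y_new

# Toy run, loosely modeled on the running example's state names (the
# transitions are invented for illustration):
trans = [("p3r0", "a", "p2r1"), ("p3r1", "b", "p3r0"), ("p2r1", "c", "p0r0")]
good = {"p0r0", "p2r0", "p3r0", "p3r1", "p2r1"}
print(grouped_frs(trans, good, {"p0r0", "p2r0"}, {"p3r0", "p3r1"}))
# -> {'p0r0', 'p2r0', 'p2r1', 'p3r0', 'p3r1'} (up to set ordering)
```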

After these adaptations have been applied, the model delta is empty, and \(Y^{\prime }\) and \(G^{\prime }\) are the fixpoint result for the variant automaton \(A^{\prime }\). For the running example, the good states remain the same, \(G^{\prime }=G\), and \(Y^{\prime }\) becomes {p0r0,p2r0,p3r0,p3r1,p2r1}.

6 Experiments

As stated before, we can deal with any valid base automaton and model delta, so the TSS algorithm will always find a supervisor for the variant model. However, there is no guarantee that applying TSS finds the supervisor more efficiently than simply performing a completely new synthesis. Therefore, we perform experiments to investigate the potential reduction in computational effort obtained by applying TSS.

For the experiments, a proof-of-concept implementation of the above synthesis algorithms and models of the case studies described below have been made in Matlab.

Before discussing case studies, we provide some practical notes in Section 6.1. In Section 6.2 we consider the Transfer Line model as an academic case study, and in Section 6.3 we consider a Lithography Machine Wafer Logistics controller as an industrial case study.

6.1 Practical notes

A proof-of-concept Matlab implementation of the above algorithms has been made. One modification relative to Thuijsman and Reniers (2020) is that a linear-complexity reachability search algorithm is now used instead of a quadratic one. This enables a more realistic effort comparison between SS and TSS, since linear search algorithms are more widely used than quadratic ones. In Thuijsman and Reniers (2020), a counter in the reachability search algorithms was introduced to express and compare the time effort of performing synthesis. After implementing the new reachability search algorithms, it was found that the previously used metric was no longer representative of the time effort of performing synthesis. Experiments were performed with counters at several locations in the code; however, none of these counters gave a proper representation of the time effort. Therefore, wall-clock time is used here to represent the time effort of performing synthesis. In order to enable fair comparisons between running times, obviously inefficient parts of the algorithm were improved. Still, the authors note that using wall-clock time instead of a counter makes the results more dependent on the implementation. The experiments were performed on an HP ZBook Studio G4 laptop, using an Intel i7 processor clocked at 2.8 GHz. Regarding memory, Matlab used around 1 GB, regardless of the model size. File sizes to store the automata and model deltas ranged from a few KB to a few MB.
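For context, the linear-complexity search is the standard worklist algorithm; a minimal Python sketch (our illustration, not the Matlab code) is:

```python
def reachable(initial, succ):
    """Worklist reachability in O(|X| + |->|): each state is pushed at most
    once and each transition is inspected at most once. A quadratic variant
    would instead rescan the full transition relation on every pass until no
    new state is found. `succ` maps each state to its successor states."""
    seen = set(initial)
    work = list(initial)
    while work:
        x = work.pop()
        for y in succ.get(x, ()):
            if y not in seen:
                seen.add(y)
                work.append(y)
    return seen
```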

Each synthesis algorithm requires a monolithic automaton as input, and the transformational synthesis algorithms additionally require a model delta to the variant automaton. In practice, these inputs may not be readily available. For example, the monolithic automaton A needs to be constructed from a component-wise specification, as discussed in Section 2. This is also the case for the conducted experiments; the inputs are computed in preparation of the experiments. We assume that for the transformational method, the input automaton A is maintained for the next iteration. For each transformational synthesis, \(A^{\prime }\) is constructed beforehand in order to compute the model delta. Note that \(A^{\prime }\) also needs to be constructed for the baseline case of performing a completely new synthesis. Computing the model delta is done by simple matrix subtractions and requires negligible computational effort. The preparatory computations are not included in the computational effort measurements, because this matches the experiments to the monolithic level discussed in the theoretical part, and because computing the synchronous composition is required for all methods.
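To indicate why the delta computation is cheap: for two automata over the same state indexing and event set, the delta follows from elementwise comparisons. The numpy sketch below is our own illustration of the idea; the actual Matlab data layout is not shown in this paper.

```python
import numpy as np

def transition_delta(T_base, T_variant):
    """Model delta on the transition relation, stored here (an assumption
    for illustration) as one boolean |X| x |X| adjacency matrix per event;
    both automata must share the same state indexing and event set."""
    added   = {e: T_variant[e] & ~T_base[e] for e in T_variant}
    removed = {e: T_base[e] & ~T_variant[e] for e in T_base}
    return added, removed

# Toy usage with a single event "a" over 3 states:
T1 = {"a": np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=bool)}
T2 = {"a": np.array([[0, 1, 0], [0, 0, 0], [1, 0, 0]], dtype=bool)}
added, removed = transition_delta(T1, T2)
print(np.argwhere(added["a"]))    # [[2 0]]: transition from x2 to x0 added
print(np.argwhere(removed["a"]))  # [[1 2]]: transition from x1 to x2 removed
```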

The construction of A can be done by computing the complete synchronous composition as shown in Eq. 1. In practice, however, often only the reachable part of A is constructed. Note that the computed supervisor is the same, as it only contains reachable states. The good state set G is influenced by using only the reachable part of A. Even when all states in X are reachable, it is still beneficial to store set G next to Y: when states are removed from G during synthesis, not all states in G may be reachable anymore through states in G. Considering only the reachable parts of the automata or not may impact the model delta, as well as the computational effort of performing transformational or non-transformational supervisor synthesis. Note that the transformational synthesis method works for both cases, as it allows for any pair of automata A and \(A^{\prime }\), independent of how they are constructed. We consider both options in the case studies below.

6.2 Transfer Line

We first consider the Transfer Line model from Wonham and Cai (2019) as an academic case study. In this model, products are processed by two machines. Machine M1 takes products from the environment and processes them. After processing, M1 places the product in buffer B1, which can hold up to three products. Machine M2 takes products from B1, processes them, and places them in buffer B2, which can hold only one product. Test unit TU takes products from B2 and tests them. If the product is accepted, it is released from the system; if it is rejected, it goes back to B1. M1, M2, and TU start by a controllable event and terminate by an uncontrollable event. As in Wonham and Cai (2019), M1, M2, and TU are modeled by plant automata, and B1 and B2 by requirement automata. These automata are shown in Fig. 11. The synchronous product over all automata is taken. For plantification, a single sink state is added to this synchronous product, to which uncontrollable transitions are created whenever one of the requirement automata blocks an uncontrollable event of the plant. The resulting base automaton TL has 65 states and 200 transitions. Supervisor synthesis (Algorithm 1) for this base automaton requires 1.4 milliseconds, which is the mean runtime of 100 executions of SS.
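Such a plantification step can be sketched as follows; the enabled-event maps per composed state and all names are illustrative assumptions, not our Matlab implementation.

```python
SINK = "sink"  # the single (blocking) sink state added for plantification

def plantify(states, enabled_comp, enabled_plant, uncontrollable):
    """For every composed state, each uncontrollable event that the plant
    enables but the composition (i.e., some requirement) blocks is
    redirected to the sink state. Illustrative sketch only."""
    extra = []
    for x in states:
        for e in (enabled_plant[x] & uncontrollable) - enabled_comp[x]:
            extra.append((x, e, SINK))
    return extra

# Toy usage: in state s0 the plant enables uncontrollable event "u", but a
# requirement blocks it, so a transition to the sink state is added.
print(plantify(["s0"], {"s0": {"c"}}, {"s0": {"c", "u"}}, {"u"}))
# -> [('s0', 'u', 'sink')]
```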

Fig. 11
Transfer Line automata; (a), (b), and (c) are plant automata, (d) and (e) are requirement automata

The following five variant automata, \(\mathit {TL}^{\prime }_{1}\) to \(\mathit {TL}^{\prime }_{5}\), have been generated by making adaptations to TL; a sketch after this list shows how such an adaptation can be encoded as a model delta.

  • \(\mathit {TL}^{\prime }_{1}\): Reduced capacity of B1 to two products; state x3 of automaton B1 removed, and transitions (x2,2,x3), (x2,8,x3), (x3,3,x2) removed.

  • \(\mathit {TL}^{\prime }_{2}\): Increased capacity of B2 to two products; added a state x2 and transitions (x1,4,x2), (x2,5,x1) to B2.

  • \(\mathit {TL}^{\prime }_{3}\): B1 initially holds one product instead of zero; removed initial property of state x0, and added initial property to state x1 in automaton B1.

  • \(\mathit {TL}^{\prime }_{4}\): TU may send the product to B2 upon completion; added uncontrollable event 9, added transition (x1,9,x2) to TU, and added transition (x0,9,x1) to B2.

  • \(\mathit {TL}^{\prime }_{5}\): Capacity of B1 and B2 is two products each; removed state x3 and transitions (x2,2,x3), (x2,8,x3), (x3,3,x2) from B1, and added state x2 and transitions (x1,4,x2), (x2,5,x1) to B2.
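For instance, the \(\mathit {TL}^{\prime }_{1}\) adaptation could be encoded, at the level of automaton B1, by the following sets of removed states and transitions. This is an illustrative Python encoding of a model delta; the experiments use deltas on the synchronous state space, as discussed next.

```python
# Illustrative component-level encoding of the TL'_1 model delta:
delta_TL1 = {
    "X_minus": {"x3"},                              # removed states
    "T_minus": {("x2", 2, "x3"), ("x2", 8, "x3"),   # removed transitions
                ("x3", 3, "x2")},
}
```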

As the TSS methods operate on a monolithic state space, these adaptations to the individual automata are converted to adaptations on the synchronous state space for the experiments, as discussed in Section 6.1. For the base model and each variant model, all states constructed during the synchronous composition (1) are reachable. Therefore, for these results it does not matter whether A is completely constructed as in Eq. 1, or only its reachable part. The experiment cases presented here are the same as in Thuijsman and Reniers (2020); however, the experiments are executed differently, as pointed out in Section 6.1.

For each of the variant automata, the number of states and transitions, as well as the sizes of the nonempty sets in the model delta, are given in Table 1. One might expect that constructing \(\mathit {TL}^{\prime }_{1}\) only removes states and transitions. However, it can be observed that some transitions have been added. These are new transitions towards the sink state, as the buffer may now overflow from different states. For each variant model, a supervisor has been synthesized three times: first by performing a completely new synthesis, applying SS as given in Algorithm 1; second and third by using the synthesis result of the base automaton and applying ITSS (Algorithm 11) and GTSS (Algorithm 12), respectively. For each model, the three synthesized supervisors have the exact same automaton specification. For each of the syntheses, the runtime is shown in milliseconds in Table 1; this is the mean over 100 runs per synthesis. The Coefficient of Variation (CV), calculated by dividing the sample standard deviation by the mean, is shown as a percentage in brackets underneath each runtime value. Because this example is small, the runtimes are low; therefore, slight absolute variations in runtime result in a high relative variation. The rightmost column shows the percentage change in runtime of GTSS compared to SS; a positive value indicates an increase in computational effort, and a negative value indicates a reduction. Note that the efficiency of the TSS algorithms is influenced by the order in which the adaptations are applied; this has not been optimized for the experiments, as that would entail a completely new study.
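The timing protocol can be summarized by the following Python sketch, in which run_synthesis is a hypothetical placeholder for one SS, ITSS, or GTSS call:

```python
import statistics
import time

def benchmark(run_synthesis, n=100):
    """Mean wall-clock runtime over n runs, plus the Coefficient of
    Variation (sample standard deviation divided by the mean, in %)."""
    times = []
    for _ in range(n):
        t0 = time.perf_counter()
        run_synthesis()
        times.append(time.perf_counter() - t0)
    mean = statistics.fmean(times)
    cv = 100.0 * statistics.stdev(times) / mean
    return mean, cv
```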

Table 1 Experimental results of performing SS, ITSS, and GTSS on five variant models of the Transfer Line model

We observe that in some cases applying ITSS requires considerably more runtime than applying SS. This was already addressed in Section 5, and led to the introduction of the GTSS algorithm. For this use case, GTSS requires considerably less runtime than ITSS for most variant models, but still requires more runtime than SS in most cases. The aim of GTSS is to use smaller synthesis calls and reachability searches by using the model delta and the previous synthesis result. To do so, GTSS prepares these smaller synthesis calls and reachability searches through operations such as iterating over the model delta and evaluating the switch-case statements. Because the model is very small, GTSS spends relatively much time on these preparation steps and little time on the smaller synthesis calls and reachability searches themselves. Therefore, GTSS performs poorly compared to SS, which can simply perform synthesis directly on the small model. It is expected that for a larger system, GTSS spends relatively less time on the preparation steps and more on the smaller synthesis calls and reachability searches; GTSS could thus be more competitive with SS for larger systems, since SS will also require more runtime for these. We study a larger system next, in Section 6.3. From a user perspective, the runtime differences for the Transfer Line case would be insignificant.

6.3 Lithography Machine Wafer Logistics

Next, we present an industrial case study, performed using models from ASML. ASML is the world-leading manufacturer of lithography machines, which are used in the semiconductor industry to produce integrated circuits. These circuits are printed on silicon wafers. The movement of these wafers through the machine is called the Wafer Logistics, which is studied in van der Sanden et al. (2015) and van der Schriek (2018). The controller of the Wafer Logistics is constructed using Analytical Software Design (ASD) (Broadfoot and Hopcroft 2003). van der Schriek (2018) presents a study on how the ASD models of the components of the Wafer Logistics controller evolve over time. In that study, equivalent automata models that are suitable for supervisory controller synthesis are constructed in CIF (van Beek et al. 2014; CIF is part of the Eclipse Supervisory Control Engineering Toolkit®). These automata models are constructed for the variation points that the ASD models evolved to over a number of years. We use these automata models here to investigate the efficiency of TSS in this industrial setting.

Component B of van der Schriek (2018) is selected for the experiments, based on its large but manageable state space size, the number of variation points, and the variety within the model deltas. The first 11 variation points of this model are taken, in order to investigate 10 adaptations. As opposed to the Transfer Line experiment, where each variant model was an adaptation of the same base model, we now consider incremental adaptations: we start with the evolution from B1 to B2, then from B2 to B3, from B3 to B4, and so on. To provide an indication of the model size, B1 contains 5 plant automata, which respectively have:

  1. 2 states and 2 transitions,
  2. 2 states and 1 transition,
  3. 2 states and 3 transitions,
  4. 15 states and 72 transitions,
  5. 16 states and 128 transitions.

Model B1 also contains 4 requirement automata, which respectively have:

  1. 2 states and 2 transitions,
  2. 2 states and 2 transitions,
  3. 3 states and 5 transitions,
  4. 3 states and 5 transitions.

Additionally, 3 state transition exclusion invariant requirements (Markovski et al. 2010) are specified. After computing the synchronous composition (1) and adding a sink state for plantification, B1 has 69121 states and 1038228 transitions. Performing SS (Algorithm 1) requires 46.2 seconds, which is the mean over 20 executions of SS.

Unlike the Transfer Line model, for the models of Component B not all states are reachable when automaton A is constructed by Eq. 1. We therefore perform two studies: in Section 6.3.1 we consider the case that the complete automaton is used as input to the synthesis algorithms, and in Section 6.3.2 we consider the case that only the reachable part of the automaton is used as input.

6.3.1 Complete state space

In this section, we perform synthesis on the complete state space that is constructed by performing synchronous composition on the Component B Wafer Logistics model. The experimental results are presented in Table 2. ITSS has not been performed, as its inefficiency compared to GTSS has been discussed in Section 5.2 and shown in the Transfer Line experiments. Note that the runtime is now displayed in seconds instead of milliseconds. Each runtime value is the mean over 20 syntheses. Because of the higher absolute runtimes compared to the Transfer Line use case, the CV is lower in these experiments. Compared to the Transfer Line experiment, the number of states for Component B has increased by three orders of magnitude, and the number of transitions by four orders of magnitude. Due to this increase in model size, the required runtime has also increased by four orders of magnitude. We observe that for all evolutions in this case study, applying GTSS is more efficient than applying SS to compute the supervisor for the variant model. The efficiency gain ranges from 9 to 29%, and is 20% on average.

Table 2 Experimental results of performing SS and GTSS for evolution of the complete Component B Wafer Logistics model

6.3.2 Reachable state space

In this section, we perform synthesis using only the reachable part of the synchronous composition of the Component B Wafer Logistics model. The complete state space of model B1 consists of 69121 states and 1038228 transitions; the reachable part of this automaton has 59185 states and 888780 transitions. Note that the model delta sizes are also influenced by considering only the reachable part of the base and variant model automata. The results are shown in Table 3. Each runtime value is the mean over 20 syntheses. Once more, for all evolutions in this experiment, applying GTSS is more efficient than applying SS to compute the supervisor for the variant model. Compared to the experiment on the complete state space, discussed in Section 6.3.1, the absolute computational effort to perform synthesis is reduced for both SS and GTSS. Furthermore, the relative efficiency of GTSS compared to SS is also improved: the efficiency gain ranges from 17 to 36%, and is 27% on average. The authors note that even though in this case study the transformational synthesis method works better when only the reachable states are considered, this may not hold for other models.

Table 3 Experimental results of performing SS and GTSS for evolution of the reachable part of the Component B Wafer Logistics model

7 Conclusions

Supervisory controller synthesis is a means to compute correct-by-construction controllers for discrete event systems. As these systems evolve over time, we want to be able to efficiently generate a supervisor each time the system is adapted. We consider the case that a supervisor has been synthesized for a given base model, after which this base model is adapted to some variant model. Model deltas are used to describe the difference between the base and the variant model. A notion of atomic adaptations is introduced: single, indivisible changes in the automaton specification. For these atomic model adaptations, algorithms are provided that compute the supervisor for the variant model in a transformational manner. The atomic model adaptations can be iterated over to transformationally compute a supervisor for any model delta consisting of a number of atomic model adaptations. It is discussed why, and shown in experiments that, purely iterating over these atomic adaptations is not efficient. Therefore, a method is presented where groups of adaptations are considered. By means of both an academic and an industrial case study, we show that in some, but not all, cases GTSS can compute the variant model supervisor more efficiently than SS. The best results for GTSS were found for the larger, industrial use case.

The methods we presented are based on monolithic synthesis of a fully enumerated state space. To tackle industrial-sized systems, modular methods that split the synthesis problem into smaller sub-problems are commonly applied. It would be interesting to see how the transformational synthesis method presented here translates to those methods. The performance of transformational synthesis could be further improved by, for example, evaluating the adaptations in a smart order, or by imposing restrictions on the model delta. It might also be possible to obtain better results by relaxing the problem constraints, for example by not requiring full equality between the results of TSS and SS, but only requiring them to be bisimilar. At the moment, we have no way to tell whether TSS will be more efficient than simply performing a new synthesis. Optionally, one could run both algorithms in parallel and stop as soon as one of them is finished. It might also be valuable to have a method that predicts which algorithm will be more efficient. Moreover, perhaps it is possible to devise a TSS algorithm that is guaranteed to require less effort than a new synthesis.