1 Introduction
1.1 Problems considered in this paper
1.2 Detailed contributions
1.2.1 Competitive analysis
-
In Sect. 2, we formally define our real-time scheduling problem.
-
In Sect. 3, we provide a formalism for on-line and clairvoyant scheduling algorithms as labeled transition systems. We also show how automata on infinite words can be used to express natural constraints on the set of released job sequences (such as sporadicity and workload constraints).
-
In Sect. 4.3, we present a formal reduction of the competitive analysis problem to solving a multi-objective graph problem. Section 4.4 describes both general and implementation-specific optimizations for the above reduction, which considerably reduce the size of the obtained graph and thus make our approach feasible in practice.
-
In Sect. 4.5, we present a comparative study of the competitive ratio of several existing firm-deadline real-time scheduling algorithms. Our results show that the competitive ratio of each algorithm varies considerably across tasksets, which highlights the usefulness of an automated competitive analysis framework: it allows the application designer to replace human ingenuity (required for finding worst-case scenarios) with computing power, analyzing different scheduling algorithms for the specific taskset arising in some application and comparing their competitive ratios.
1.2.2 Competitive synthesis
-
In Sect. 5.1, we consider a game model (a partial-observation game with memoryless strategies for Player 1 with mean-payoff and ratio objectives) that is suitable for the competitive synthesis of real-time scheduling algorithms. The mean-payoff (resp. ratio) objective allows us to compute the cumulated utility (resp. competitive ratio) of the best on-line algorithm under the worst-case task sequence.
-
In Sect. 5.2, we establish that the relevant decision problems for the underlying game are NP-complete in the size of the game graph.
-
In Sect. 5.3, we use the game of Sect. 5.1 to tackle two relevant synthesis problems for a given taskset \({\mathcal {T}}\): first, we show that constructing an on-line scheduling algorithm with optimal worst-case average utility for \({\mathcal {T}}\) is in NP \(\cap \) coNP in general, and polynomial in the size of the underlying game graph for reasonable choices of task utility values. Second, we show that constructing an on-line scheduling algorithm with optimal competitive ratio for \({\mathcal {T}}\) is in NP. Note that these complexities are with respect to the size of the constructed algorithm, represented explicitly as a labeled transition system. As a function of the input taskset \({\mathcal {T}}\) given in binary, all polynomial upper bounds become exponential upper bounds in the worst case.
1.3 Related work
2 Problem definition
2.1 Notation on sequences
2.2 Job sequences
2.3 Admissible job sequences
2.4 Schedule
2.5 Competitive ratio
3 Modeling formalisms in our framework
3.1 Labeled Transition Systems
3.1.1 Labeled transition systems (LTSs)
3.1.2 Deterministic LTS for an on-line algorithm
3.1.3 The non-deterministic LTS
3.2 Admissible job sequences
3.2.1 Synchronous product of LTSs
3.2.2 Overall approach for computing \({\mathcal {CR}}\)
4 Competitive analysis of on-line scheduling algorithms
4.1 Graphs with multiple objectives
4.1.1 Multi-graphs
4.1.2 Objectives
4.1.3 Decision problem
4.2 Algorithms for solving graphs with multiple objectives
4.2.1 Algorithms for safety and liveness objectives
4.2.2 Algorithms for mean-payoff objectives
-
Single dimension In the case of a single-dimensional weight function, a single weight is assigned to every edge, and the decision problem of the mean-payoff objective reduces to determining the mean weight of a minimum mean-weight simple cycle in G, since repeating such a cycle infinitely attains that mean weight. Using the algorithms of Karp (1978) and Madani (2002), this requires \(\mathcal {O}(n\cdot m)\) time. When the objective is satisfied, the process also returns a simple cycle C as a witness to the objective. From C, a path \(\rho \in \mathsf{MP}(w, \vec {\nu })\) is constructed by infinite repetition of C.
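The \(\mathcal {O}(n\cdot m)\) bound above is achieved by Karp's minimum mean cycle algorithm. The following is a minimal sketch of it (graph representation as an edge list and the function name are our own choices, not from the paper); it returns only the optimal cycle mean, omitting the witness-cycle extraction mentioned in the text.

```python
import math

def min_mean_cycle(n, edges):
    """Karp's algorithm: minimum mean weight over all cycles of a
    directed graph. n: number of vertices 0..n-1; edges: list of
    (u, v, w) triples. Returns None if the graph is acyclic.
    Runs in O(n * m) time."""
    INF = math.inf
    # d[k][v] = minimum weight of a walk with exactly k edges ending at v,
    # starting from any vertex (d[0][v] = 0 for all v).
    d = [[INF] * n for _ in range(n + 1)]
    for v in range(n):
        d[0][v] = 0.0
    for k in range(1, n + 1):
        for (u, v, w) in edges:
            if d[k - 1][u] + w < d[k][v]:
                d[k][v] = d[k - 1][u] + w
    best = None
    for v in range(n):
        if d[n][v] == INF:
            continue  # no walk of length n ends at v, so v lies on no cycle
        # Karp's characterization: min over v of max over k of
        # (d_n(v) - d_k(v)) / (n - k) equals the minimum cycle mean.
        worst = max((d[n][v] - d[k][v]) / (n - k)
                    for k in range(n) if d[k][v] < INF)
        if best is None or worst < best:
            best = worst
    return best
```

For instance, a triangle with all weights 1 has minimum cycle mean 1; adding a back edge of weight -3 from vertex 1 to vertex 0 creates a two-edge cycle of mean -1, which the algorithm then reports.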
-
Multiple dimensions When \(d>1\), the mean-payoff objective reduces to determining the feasibility of a linear program (LP). For \(u\in V\), let \(\mathsf{IN}(u)\) be the set of incoming, and \(\mathsf{OUT}(u)\) the set of outgoing edges of u. As shown in Velner et al. (2015), G satisfies \(\mathsf{MP}(w, \vec {\nu })\) iff the following set of constraints on \(\vec {x}=(x_e)_{e\in E_{\text {SCC}}}\) with \(x_e \in \mathbb {Q}\) is satisfied simultaneously on some SCC \(V_{\text {SCC}}\) of G with induced edges \(E_{\text {SCC}}\subseteq E\):
$$\begin{aligned}&x_e\geqslant 0&\quad&e\in E_{\text {SCC}}\nonumber \\&\sum _{e\in \mathsf{IN}(u)}{x_e} = \sum _{e\in \mathsf{OUT}(u)}{x_e}&\quad&u\in V_{\text {SCC}}\\&\sum _{e\in E_{\text {SCC}}} x_e\cdot w(e) \leqslant \vec {\nu }\nonumber \\&\sum _{e\in E_{\text {SCC}}}x_e\geqslant 1\nonumber \end{aligned}$$
(2)
The quantities \(x_e\) are intuitively interpreted as "flows". The first constraint specifies that the flow of each edge is non-negative. The second constraint is a flow-conservation constraint. The third constraint specifies that the objective is satisfied if we consider the relative contribution of the weight of each edge, according to the flow of the edge. The last constraint asks that the preceding constraints are satisfied by a non-trivial (positive) flow. Hence, when \(d>1\), the decision problem reduces to solving an LP, and the time complexity is polynomial (Khachiyan 1979).
4.2.3 Algorithm for ratio objectives
4.2.4 Algorithms for conjunctions of objectives
4.3 Reduction of competitive analysis to graphs with multiple objectives
4.3.1 Reduction for safety and liveness constraints
4.3.2 Reduction for limit-average constraints
4.3.3 Adaptive binary search
4.4 Optimized reduction
4.4.1 Clairvoyant LTS reduction
4.4.2 Clairvoyant LTS generation
4.4.3 On-line LTS reduction
4.5 Experimental results
| Taskset | \(\tau _1\) (\(C_i,D_i,V_i\)) | \(\tau _2\) (\(C_i,D_i,V_i\)) | \(\tau _3\) (\(C_i,D_i,V_i\)) | \(\tau _4\) (\(C_i,D_i,V_i\)) |
|---|---|---|---|---|
| A1 (\(k=6\)) | 1, 2, 3 | 4, 6, 2 | 1, 3, 3 | 3, 4, 3 |
| A2 (\(k=5\)) | 2, 3, 5 | 2, 2, 1 | | |
| A3 (\(k=4\)) | 2, 2, 1 | 1, 5, 2 | 1, 5, 2 | |
| A4 (\(k=3\)) | 1, 2, 3 | 2, 3, 2 | 1, 6, 1 | |
| A5 (\(k=2\)) | 2, 3, 9 | 1, 1, 6 | 1, 3, 3 | |
| A6 (\(k=4\)) | 2, 2, 1 | 6, 6, 10 | 1, 1, 2 | |
| A7 (\(k=5\)) | 1, 5, 5 | 2, 2, 4 | 1, 1, 1 | |
| | 1.5 | 1 | 0.8 | 0.6 | 0.4 | 0.3 | 0.1 | 0.078 | 0.05 |
|---|---|---|---|---|---|---|---|---|---|
| FIFO | \(\checkmark \) | \(\checkmark \) | \(\checkmark \) | \(\checkmark \) | \(\checkmark \) | \(\checkmark \) | | | |
| SP | \(\checkmark \) | \(\checkmark \) | \(\checkmark \) | | | | | | |
| SRT | \(\checkmark \) | \(\checkmark \) | \(\checkmark \) | \(\checkmark \) | \(\checkmark \) | \(\checkmark \) | | | |
| | \(\tau _1\) | \(\tau _2\) | \(\tau _3\) |
|---|---|---|---|
| \(C_i\) | 1 | 1 | 1 |
| \(D_i\) | 1 | 2 | 1 |
| \(V_i\) | 3 | 3 | 1 |
| \(C_i\) | 2 | 5 | 5 |
| \(D_i\) | 7 | 5 | 6 |
| \(V_i\) | 3 | 2 | 1 |
| Taskset | \(\eta \) | Taskset | Comp. ratio |
|---|---|---|---|
| C1 | 2 | \(\{1,1\}\) | 1 |
| C2 | 3 | \(\{1,2,3\}\) | 1/2 |
| C3 | 3.1 | \(\{1,3,7,13,19\}\) | 7/25 |
| C4 | 3.2 | \(\{1,3,7,13,20,23\}\) | 1/4 |
| C5 | 3.3 | \(\{1,3,7,14,24,33\}\) | 1/4 |
| C6 | 3.4 | \(\{1,3,7,14,24,34\}\) | 1/4 |
| Taskset | N | \(D_{\max }\) | Size: Clairv. (nodes) | Size: Product (nodes) | Time: Mean (s) | Time: Max (s) |
|---|---|---|---|---|---|---|
| B01 | 2 | 7 | 19 | 823 | 0.04 | 0.05 |
| B02 | 2 | 8 | 26 | 1997 | 0.39 | 0.58 |
| B03 | 2 | 9 | 34 | 4918 | 10.02 | 15.21 |
| B04 | 3 | 7 | 19 | 1064 | 0.14 | 0.40 |
| B05 | 3 | 8 | 26 | 1653 | 0.66 | 2.05 |
| B06 | 3 | 9 | 34 | 7705 | 51.04 | 136.62 |
| B07 | 4 | 7 | 19 | 1711 | 2.13 | 6.34 |
| B08 | 4 | 8 | 26 | 3707 | 13.88 | 34.12 |
| B09 | 4 | 9 | 44 | 10,040 | 131.83 | 311.94 |
| B10 | 5 | 7 | 19 | 2195 | 5.73 | 16.42 |
| B11 | 5 | 8 | 32 | 9105 | 142.55 | 364.92 |
| B12 | 5 | 9 | 44 | 16,817 | 558.04 | 1342.59 |
5 Competitive synthesis of on-line scheduling algorithms
-
In Sect. 5.1, we introduce a suitable two-player partial-information game with mean-payoff and ratio objectives. Player 1 represents the on-line algorithm, whereas Player 2 represents both the adversary (which chooses the job sequence) and the clairvoyant algorithm (which knows the job sequence in advance). We use a partial-information setting to model that Player 1 is oblivious to the scheduling choices of Player 2, but Player 2 knows the scheduling choices of Player 1 when deciding which future jobs to release. The mean-payoff and ratio objectives directly model the worst-case utility and competitive ratio problems, respectively.
-
In Sect. 5.2, we establish that the relevant decision problems for our game are NP-complete in the size of the game graph.
-
In Sect. 5.3, we study the decision problems relevant for two particular synthesis questions: in synthesis for worst-case average utility, the goal is to automatically construct an on-line scheduling algorithm with the largest possible worst-case average utility for a given taskset. In competitive synthesis, we construct an on-line scheduling algorithm with the largest possible competitive ratio for the given taskset. The complexity results for our graph game reveal that the former problem is in NP \(\cap \) coNP, whereas the latter is in NP. These complexities are with respect to the size of the constructed algorithm, represented explicitly as a labeled transition system. As a function of the input taskset \({\mathcal {T}}\) given in binary, all polynomial upper bounds become exponential upper bounds in the worst case. The algorithm for obtaining the optimal scheduler thus comes in two steps: the first step reduces the problem to the relevant partial-information game and is found in Theorem 5 of Sect. 5.3; the second step solves the partial-information game and is found in Theorem 3 of Sect. 5.2.
5.1 Partial-information mean-payoff and ratio games
5.1.1 Notation on graph games
5.1.2 Plays
5.1.3 Strategies
5.1.4 Objectives
5.1.5 Decision problems
5.1.6 Perfect-information games
5.1.7 Sketch of the algorithm
5.2 Complexity results
5.2.1 Transformation
-
If the action of Player 1 is \(\mathsf{true}\) in \(s_{i,j}\), then (i) if the j-th literal in \(\mathfrak c_i\) is \(x_k\), then we have a transition back to the initial state; and (ii) if the j-th literal in \(\mathfrak c_i\) is \(\overline{x}_k\) (negation of \(x_k\)), then we have a transition to \(s_{i,j+1}\) if \(j\in \{1,2\}\), and if \(j=3\), we have a transition to \(\mathsf {dead}\).
-
If the action of Player 1 is \(\mathsf{false}\) in \(s_{i,j}\), then (i) if the j-th literal in \(\mathfrak c_i\) is \(\overline{x}_k\) (negation of \(x_k\)), then we have a transition back to the initial state; and (ii) if the j-th literal in \(\mathfrak c_i\) is \(x_k\), then we have a transition to \(s_{i,j+1}\) if \(j\in \{1,2\}\), and if \(j=3\), we have a transition to \(\mathsf {dead}\).
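The transition rules in the two bullets above can be sketched as a small gadget builder. This is only an illustration of the transition structure, not the full reduction (in particular, it ignores how partial observation forces Player 1 to fix a consistent assignment for each variable \(x_k\)); the state names `init` and `dead`, the 0-based literal index, and the signed-integer clause encoding are our own assumptions.

```python
def build_gadget(clauses):
    """Transition relation for the clause gadget described above.
    clauses: list of 3-tuples of nonzero ints, where +k encodes the
    literal x_k and -k encodes its negation.  States are 'init',
    'dead', and ('s', i, j) for clause i, literal position j (0-based,
    so j = 2 corresponds to j = 3 in the text).  Returns a dict
    mapping (state, action) -> successor, with action in {True, False}."""
    trans = {}
    for i, clause in enumerate(clauses):
        for j, lit in enumerate(clause):
            state = ('s', i, j)
            nxt = ('s', i, j + 1) if j < 2 else 'dead'
            if lit > 0:
                # j-th literal is x_k: action 'true' satisfies the clause
                # and returns to the initial state; 'false' moves on.
                trans[(state, True)] = 'init'
                trans[(state, False)] = nxt
            else:
                # j-th literal is the negation of x_k: roles are swapped.
                trans[(state, False)] = 'init'
                trans[(state, True)] = nxt
    return trans
```

For the single clause \((x_1 \vee \overline{x}_2 \vee x_3)\), encoded as `(1, -2, 3)`, action `True` in state `('s', 0, 0)` leads back to `init`, while action `False` in state `('s', 0, 2)` leads to `dead`.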