17.09.2020 | Issue 5/2020 | Open Access
Scheduling a proportionate flow shop of batching machines
Journal: Journal of Scheduling > Issue 5/2020
1 Introduction
Modern medicine can treat some serious illnesses using individualized drugs, which are produced to order for a specific patient and adapted to work only for that unique patient and nobody else. Manufacturing such a drug often involves a complex production line, consisting of many different steps.
If each step of the process is performed manually, by a laboratory worker, then each laboratory worker can only handle materials for one patient at a time. However, in the industrial application which motivates our study, some steps are instead performed by actual machines, like pipetting robots. Such high-end machines can typically handle materials for multiple patients in one go. If scheduled efficiently, this special feature can drastically increase the throughput of the production line. Clearly, in such an environment efficient operative planning is crucial in order to optimize the performance of the manufacturing process and treat as many patients as possible.
In this paper, we present – as part of a larger study of an industrial application – new theoretical results for scheduling problems arising from a manufacturing process as described above. Formally, the manufacturing process we consider is structured in a
flow shop manner, where each step is handled by a single, dedicated machine. A
job
\(J_j\),
\(j=1,2, \ldots , n\), representing the production of a drug for a specific patient, has to be processed by
machines
\(M_1,M_2, \ldots , M_m\) in order of their numbering. Each job
\(J_j\) has a
release date
\(r_j \ge 0\), denoting the time at which the job
\(J_j\) is available for processing at the first machine
\(M_1\). Furthermore, a job is only available for processing at machine
\(M_i\),
\(i=2,3, \ldots , m\), when it has finished processing on the previous machine
\(M_{i-1}\).
Processing times are job-independent, meaning that each machine
\(M_i\),
\(i=1,2, \ldots , m\), has a fixed
processing time
\(p_i\), which is the same for every job when processed on that machine. In the literature, a flow shop with machine- or job-independent processing times is sometimes called a
proportionate flow shop, see e.g. (Panwalkar et al.
2013).
Recall that, as a special feature from our application, each machine in the flow shop can handle multiple jobs at the same time. Such machines are called
(parallel) batching machines and a set of jobs processed at the same time on some machine is called a
batch on that machine (Brucker
2007, Chapter 8). All jobs in one batch
\(B^{(i)}_k\) on some machine
\(M_i\) have to start processing on
\(M_i\) at the same time. In particular, all jobs in
\(B^{(i)}_k\) have to be available for processing on
\(M_i\) before
\(B^{(i)}_k\) can be started. The processing time of a batch on
\(M_i\) remains
\(p_i\), no matter how many jobs are included in this batch. This distinguishes parallel batching machines from
serial batching machines, where the processing time of a batch is calculated as the sum of the processing times of the jobs contained in it plus an additional setup time. Each machine
\(M_i\),
\(i=1,2, \ldots , m\), has a
maximum batch size
\(b_i\), which is the maximum number of jobs a batch on machine
\(M_i\) may contain.
Given a feasible schedule
S, we denote by
\(c_{ij}(S)\) the
completion time of job
\(J_j\) on machine
\(M_i\). For the completion time of job
\(J_j\) on the last machine we also write
\(C_j(S)=c_{mj}(S)\). If there is no confusion which schedule is considered, we may omit the reference to the schedule and simply write
\(c_{ij}\) and
\(C_j\).
As optimization criteria, in this paper we are interested in objective functions of the forms
$$\begin{aligned} f_{\max }= & {} \max _{j=1,2,\ldots ,n} f_j(C_j), \end{aligned}$$
(1)
$$\begin{aligned} f_{\sum }= & {} \sum _{j=1}^n f_j(C_j), \end{aligned}$$
(2)
with functions \(f_j\) nondecreasing for \(j=1,2,\ldots ,n\).
In particular, the first part of the paper will be dedicated to the minimization of the
makespan
\(C_{\max } = \max \left\{ C_j \mid j= 1,2, \ldots , n \right\} \) and of the
total completion time
\(\sum C_j = \sum _{j=1}^n C_j\), although the obtained algorithms will work for an arbitrary objective function of type (
1) or (
2), if certain preconditions are met.
In the second part of the paper, we assume that for each job
\(J_j\) a
weight
\(w_j\) or a
due date
\(d_j\) is given. We focus on the following traditional scheduling objectives:
- the weighted total completion time \(\sum w_j C_j = \sum _{j=1}^n w_j C_j\),
- the maximum lateness \(L_{\max } = \max \left\{ C_j - d_j \mid j = 1, 2, \ldots , n \right\} \),
- the total tardiness \(\sum T_j = \sum _{j=1}^n T_j\),
- the number of late jobs \(\sum U_j = \sum _{j=1}^n U_j\),
- the weighted number of late jobs \(\sum w_j U_j = \sum _{j=1}^n w_j U_j\),
where
$$\begin{aligned} T_j = \left\{ \begin{array}{ll} C_j - d_j, &{} \text {if } C_j > d_j, \\ 0, &{} \text {otherwise}, \end{array} \right. \end{aligned}$$
and
$$\begin{aligned} U_j = \left\{ \begin{array}{ll} 1, &{} \text {if } C_j > d_j, \\ 0, &{} \text {otherwise.} \end{array} \right. \end{aligned}$$
Note that all these objective functions are regular, that is, nondecreasing in each job completion time \(C_j\).
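These definitions translate directly into code. The following helper is our own illustration (the function and dictionary keys are hypothetical names, not from the paper); it evaluates the listed objectives from given completion times, due dates, and weights, mirroring the definitions of \(T_j\) and \(U_j\):

```python
# Illustrative helper (names are ours): evaluate the traditional objectives
# from completion times C, due dates d, and weights w.
def objectives(C, d, w):
    T = [max(Cj - dj, 0) for Cj, dj in zip(C, d)]        # tardiness T_j
    U = [int(Cj > dj) for Cj, dj in zip(C, d)]           # late-job indicator U_j
    return {
        "sum_wC": sum(wj * Cj for wj, Cj in zip(w, C)),  # weighted total completion time
        "L_max": max(Cj - dj for Cj, dj in zip(C, d)),   # maximum lateness
        "sum_T": sum(T),                                 # total tardiness
        "sum_U": sum(U),                                 # number of late jobs
        "sum_wU": sum(wj * Uj for wj, Uj in zip(w, U)),  # weighted number of late jobs
    }

print(objectives([5, 7], d=[6, 5], w=[1, 2]))
```

All five functions are regular in the sense above: increasing any \(C_j\) can never decrease the returned values.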
Using the standard three-field notation for scheduling problems (Graham et al. 1979), our problem is denoted as
$$\begin{aligned} Fm \mid r_j, p_{ij} = p_i, p\text {-batch}, b_i \mid f, \end{aligned}$$
where f is a function of the form defined above. We refer to the described scheduling model as a proportionate flow shop of batching machines and abbreviate it by PFB.
Next, we provide an example in order to illustrate the problem setting.
Example 1
Consider a PFB instance with
\(m=2\) machines,
\(n=5\) jobs, maximum batch sizes
\(b_1=3\) and
\(b_2=4\), processing times
\(p_1=2\) and
\(p_2=3\), and release dates
\(r_1=r_2=0\),
\(r_3=r_4=1\), and
\(r_5=2\). Figure 1 illustrates a feasible schedule for the instance as a job-oriented Gantt chart.
Each rectangle labeled by a machine represents a batch of jobs processed together on this machine. The black area indicates that the respective jobs have not yet been released at that time. Note that in this example none of the batches can be started earlier, since either a job of the batch arrives exactly when the batch is started, or the machine is occupied until then. Still, the schedule does not minimize the makespan, since the schedule shown in Fig.
2 is feasible as well and has a makespan of 8 instead of 9.
The improvement in the makespan was achieved by reducing the size of the first batch on
\(M_1\) from three to two, which allows it to be started one time step earlier. Observe that no job can finish on
\(M_2\) before time step 5. Moreover, not all five jobs fit into one batch on
\(M_2\). Hence, at least two batches are necessary on
\(M_2\) in any schedule. Therefore, 8 is the optimal makespan and no further improvement is possible.
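The schedules discussed in this example can be checked computationally. The following sketch is ours, not from the paper; it computes the batch-active start times and the makespan of a permutation schedule from the batch sizes used on each machine, with the batchings for Figs. 1 and 2 inferred from the surrounding text:

```python
# Sketch (not from the paper): makespan of a batch-active permutation
# schedule, given the batch sizes used on each machine.
def makespan(p, r, batch_sizes):
    """p[i]: processing time of machine M_{i+1}; r: release dates in
    earliest release date order; batch_sizes[i]: batch sizes on M_{i+1}."""
    avail = list(r)                  # time each job becomes available for the next machine
    for i, sizes in enumerate(batch_sizes):
        t, j = 0, 0                  # time the machine becomes free, next unscheduled job
        for size in sizes:
            start = max(t, max(avail[j:j + size]))  # wait for machine and for all jobs of the batch
            t = start + p[i]
            avail[j:j + size] = [t] * size          # the whole batch completes together
            j += size
    return max(avail)

p, r = [2, 3], [0, 0, 1, 1, 2]
print(makespan(p, r, [[3, 2], [3, 2]]))  # 9, matching Fig. 1
print(makespan(p, r, [[2, 3], [2, 3]]))  # 8, matching Fig. 2
```

Shrinking the first batch on \(M_1\) from three jobs to two lets it start at time 0 instead of 1, which is exactly the improvement described above.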
The rest of the paper is structured as follows. The remaining parts of this section provide an overview of the related literature and our results, respectively. In Sect.
2 we prove that permutation schedules with jobs in earliest release date order are optimal for PFBs with the makespan and total completion time objectives. We also show that permutation schedules are not optimal for any other traditional scheduling objective functions. In Sect.
3 we construct a dynamic program to find the optimal permutation schedule with a given, fixed job order. We show that, if the number of machines
m is fixed, the dynamic program can be used to minimize the makespan or total completion time in a PFB in polynomial time. In Sect.
4 we consider PFBs where all release dates are equal. We show that, in this special case, permutation schedules are always optimal and that the dynamic program from Sect.
3 can be applied to solve most traditional scheduling objectives. Finally, in Sect.
5 we draw conclusions, reflect on our results in the context of some additional existing literature, and point out future research directions.
1.1 Literature
In this overview as well as the remainder of this paper, we are exclusively concerned with flow shops of parallel batching machines, which we will simply call batching machines. For complexity results about (proportionate) flow shops with serial batching machines, we refer e.g. to Ng and Kovalyov (
2007) and Brucker and Shakhlevich (
2011). It is interesting to note that, although the different type of batching machines changes the problem significantly, the methodology used by Ng and Kovalyov (
2007) is quite similar to ours: for a fixed job permutation a dynamic program is developed that adds jobs one by one, resulting in polynomial runtime for a fixed number of machines.
The proportionate flow shop problem with batching machines as studied in this paper has so far received little attention from researchers, although several applications have appeared in the literature. In addition to the pharmaceutical application described in Sect.
1, Sung et al. (
2000) mention a variety of applications in the manufacturing industry, e.g., in the production of semiconductors. Furthermore, there are several papers which study the scheduling sequences of locks for ships along a canal or river, which can in some ways be seen as a generalization of PFBs (see, e.g., Passchyn and Spieksma (
2019)). The problem is generally strongly NP-complete, but several special cases for which it can be solved in polynomial time have been identified.
Another stream of recent research focuses on finding practical scheduling techniques for production processes which can be viewed as NP-hard generalizations of the PFB problem. Usually, application-specific heuristics are developed and their quality is certified by computational studies. A preliminary study of the specific industrial setup which motivated our research was done in Ackermann et al. (
2020+). For studies from different areas of application we refer, for instance, to Li et al. (
2015), Zhou et al. (
2016), Tan et al. (
2018), Chen et al. (
2019), Li et al. (
2019). The problems typically stem from various industries, e.g. electronics, steel, or equipment manufacturing. Features that make these problems difficult include mixtures of parallel and serial batching machines, families of jobs that are incompatible for batching together, hybrid/flexible flow shop setups, as well as no-wait constraints.
Despite this multitude of possible applications, for PFBs as introduced in Sect.
1 significant hardness results have, to the best of our knowledge, not been achieved at all. Polynomial time algorithms are only known for the special case with no more than two machines and, in most cases, without release dates (Ahmadi et al.
1992; Sung and Yoon
1997; Sung and Kim
2003).
The paper by Sung et al. (
2000) is, as far as we know, the only work in the literature considering a PFB with arbitrarily many machines. They propose a reduction procedure to decrease the size of an instance by eliminating dominated machines. Using this procedure, they develop heuristics to minimize the makespan and the sum of completion times in a PFB and conduct a computational study certifying quality and efficiency of their heuristic approach. However, they do not establish any complexity result.
For the case with
\(m=2\) machines and no release dates, Ahmadi et al. (
1992) show that the makespan can be minimized by completely filling all batches on the first machine (other than the last one, if the number of jobs is not a multiple of
\(b_1\)) and completely filling all batches on the second machine (other than the first one, if the number of jobs is not a multiple of
\(b_2\)). This structural result immediately yields an
\({\mathcal {O}}(n)\) time algorithm. If release dates do not all vanish, i.e.
\(r_j >0\) for some
\(j=1,2, \ldots , n\), then the makespan can be minimized in
\({\mathcal {O}}(n^2)\) time, see Sung and Yoon (
1997). The authors use a dynamic programming algorithm in order to find an optimal batching on the first machine. On the second machine, the strategy to completely fill all batches (other than maybe the first one) is still optimal.
For objectives other than the makespan, results are only known if all jobs are released at the same time. In this case, the total completion time in a twomachine PFB can be minimized in
\({\mathcal {O}}(n^3)\) time, see (Ahmadi et al.
1992). Sung and Kim (
2003) provide dynamic programs to minimize
\(L_{\max }\),
\(\sum T_j\), and
\(\sum U_j\) in a twomachine PFB without release dates in polynomial time.
There are two special cases and one generalization of PFB which have been studied relatively extensively in the literature and which are important for comparison with the wider literature. First, consider the special case of PFB with maximum batch size equal to one on all machines. This is the usual proportionate flow shop problem, which is well-researched and known to be polynomially solvable for many variants. An overview is given in Panwalkar et al. (
2013). In particular, the makespan and the total completion time can be minimized in
\(O(n \log n + nm)\) time, by ordering jobs according to nondecreasing release dates and scheduling each job as early as possible on each machine. Note that the term
\(n\log n\) is due to the sorting of the jobs while the term
nm arises from computing start and completion times for each operation of each job.
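The \(b_i=1\) procedure just described amounts to the recurrence \(c_{ij} = \max (c_{i-1,j}, c_{i,j-1}) + p_i\) over jobs taken in earliest release date order. A minimal sketch (ours; the function name is hypothetical):

```python
# Sketch of the O(n log n + nm) rule for b_i = 1: sort jobs by release
# date, then start every operation as early as possible.
def proportionate_flow_shop(p, r):
    order = sorted(range(len(r)), key=lambda j: r[j])  # O(n log n) sorting step
    C = {}                                             # C[i, j]: completion of job j on machine i
    prev = None                                        # previously scheduled job
    for j in order:
        ready = r[j]                                   # availability for the first machine
        for i, pi in enumerate(p):
            machine_free = C[i, prev] if prev is not None else 0
            C[i, j] = max(ready, machine_free) + pi    # earliest feasible start, then process
            ready = C[i, j]                            # available for the next machine
        prev = j
    return C

C = proportionate_flow_shop([2, 3], [0, 1])
print(max(C[1, j] for j in range(2)))  # makespan → 8
```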
Second, consider the special case of PFB where the number of machines is fixed to
\(m=1\), which leaves us with the problem of scheduling a single batching machine with identical (but not necessarily unit) processing times. This problem, too, has been investigated by various authors (Ikura and Gimple
1986; Lee et al.
1992; Baptiste
2000; Condotta et al.
2010). Even with release dates, the problem can be solved in polynomial time for almost all standard scheduling objective functions, except for the weighted total tardiness, for which it is open. Note, in particular, that the reduction procedure from Sung et al. (
2000) implies that for the cases where all batch sizes or all processing times are equal,
\(b_i = {\bar{b}}\) or
\(p_i = {\bar{p}}\) for all
\(i = 1, 2, \ldots , m\), the problem of scheduling a PFB reduces to scheduling a single batching machine with equal processing times. Thus, for those two special cases, the results by Baptiste (
2000) imply that scheduling a PFB is polynomially solvable for all objective functions other than the weighted total tardiness.
Third, in contrast to the two special cases from before, consider the generalization of PFB where processing times may be job- as well as machine-dependent. We end up with the usual flow shop problem with batching machines. Observe that this problem is also a generalization of the traditional flow shop as well as of scheduling a single batching machine. Therefore, we can immediately deduce that, even without release dates, it is strongly NP-hard to minimize the makespan for any fixed number of machines
\(m \ge 3\) and to minimize the total completion time for any fixed number of machines
\(m \ge 2\) (Garey et al.
1976). Furthermore, minimizing the maximum lateness (or, equivalently, the makespan with release dates) is strongly NP-hard even for the case with
\(m=1\) and
\(b_1 = 2\) (Brucker et al.
1998). Finally, Potts et al. (
2001) showed that scheduling a flow shop with batching machines to minimize the makespan, without release dates, is strongly NP-hard, even for
\(m=2\) machines. Recall that for traditional flow shop with
\(m=2\) machines a schedule with minimum makespan can be computed in
\(O(n \log n)\) time (Johnson
1954).
In conclusion, we see that dropping either the batching or the multi-machine property of PFB leads to easy special cases, while dropping the proportionate property leads to a very hard generalization. Therefore, PFB lies exactly on the borderline between easy and hard problems, which makes it an interesting problem to study from a theoretical perspective, in addition to the practical considerations explained in the introduction.
1.2 Our results
Our main focus in this paper is the complexity of PFBs with an arbitrary, but fixed number
m of machines. This is motivated by the specific industrial application we consider: while the number of steps needed to produce individualized drugs in a process as described in the beginning is significantly larger than 2 or 3 (machine numbers which are often studied separately in the literature), it is not arbitrarily large and it does not change once the process is set up. Thus, it is reasonable to assume that
m is fixed and not part of the input.
Table 1
Polynomially solvable PFB variants
Problem | \(m=1\) | \(m=2\) | fixed \(m\ge 3\)
\(Fm \mid r_j, p_{ij} = p_i, p\text {-batch}, b_i \mid C_{\max }\) | Ikura and Gimple (1986) | Sung and Yoon (1997) | Corollary 13
\(Fm \mid r_j, p_{ij} = p_i, p\text {-batch}, b_i \mid \sum C_j\) | Ahmadi et al. (1992) | Corollary 13 | Corollary 13
\(Fm \mid p_{ij} = p_i, p\text {-batch}, b_i \mid \sum w_j C_j\) | Baptiste (2000) | Theorem 18 | Theorem 18
\(Fm \mid p_{ij} = p_i, p\text {-batch}, b_i \mid L_{\max }\) | Lee et al. (1992) | Sung and Kim (2003) | Theorem 18
\(Fm \mid p_{ij} = p_i, p\text {-batch}, b_i \mid \sum T_j\) | Baptiste (2000) | Sung and Kim (2003) | Theorem 18
\(Fm \mid p_{ij} = p_i, p\text {-batch}, b_i \mid \sum U_j\) | Lee et al. (1992) | Sung and Kim (2003) | Theorem 19
\(Fm \mid p_{ij} = p_i, p\text {-batch}, b_i \mid \sum w_j U_j\) | Baptiste (2000) | Theorem 19 | Theorem 19
This paper is, to the best of our knowledge, the first study which provides any complexity results – positive or negative – for PFBs with more than
\(m=2\) machines (see Table
1 for an overview). As such we close several questions which have remained open for over fifteen years, despite the significant practical importance of PFBs in the area of production lines. We show that for PFBs with release dates and a fixed number
m of machines, the makespan and the total completion time can be minimized in polynomial time. For PFBs without release dates, we further show that the weighted total completion time, the maximum lateness, the total tardiness and the (weighted) number of late jobs can be minimized in polynomial time.
Note that, while this paper closes many open complexity questions for PFBs, the proposed algorithm has practical limitations concerning its running time. We discuss this in more detail in the conclusions of this paper. Still, some of the structural results proved in this paper, in particular concerning the optimality of permutation schedules proved in Sects.
2 and
4, can be of great help for the future design of heuristics or approximation algorithms, which run fast in practice. Indeed, for the specific industrial setup of the study this paper originates from, such practical heuristics have already been considered by Ackermann et al. (
2020+). However, a close investigation of heuristics or approximation algorithms for PFBs in general is beyond the scope of this paper.
2 Optimality of permutation schedules
Our approach to scheduling a PFB relies on the wellknown concept of permutation schedules. In a
permutation schedule the order of the jobs is the same on all machines of the flow shop. This means there exists a permutation
\(\sigma \) of the job indices such that
\(c_{i\sigma (1)}\le c_{i\sigma (2)} \le \ldots \le c_{i\sigma (n)}\), for all
\(i=1,\ldots ,m\). Due to job-independent processing times and since a machine can only process one batch at a time, clearly the same then also holds for starting times instead of completion times. If there exists an optimal schedule
\(S^*\) which is a permutation schedule with a certain ordering
\(\sigma ^*\) of the jobs, we say that
permutation schedules are optimal. A job ordering
\(\sigma ^*\) which gives rise to an optimal permutation schedule is then called an
optimal job ordering. Often, if the job ordering is clear, we will assume that jobs are already numbered with respect to that ordering and drop the notation of
\(\sigma \) or
\(\sigma ^*\). Finally, an ordering
\(\sigma \) of the jobs is called an
earliest release date ordering, if
\(r_{\sigma (1)} \le r_{\sigma (2)} \le \ldots \le r_{\sigma (n)}\).
Suppose that for some problem
$$\begin{aligned} Fm \mid r_j, p_{ij} = p_i, p\text {-batch}, b_i \mid f \end{aligned}$$
permutation schedules are optimal. Then the scheduling problem can be split into two parts:
(i)
find an optimal ordering \(\sigma ^*\) of the jobs;
(ii)
for each machine, partition the job set into batches and order those batches in accordance with the ordering \(\sigma ^*\), such that the resulting schedule is optimal.
In this section we deal with part (i) and show that for \(f\in \{C_{\max },\sum C_j\}\) optimal job orderings exist and can be found in \(O(n \log n)\) time. Then, in Sect. 3, we show that under certain, very general preconditions part (ii) can be done in polynomial time via dynamic programming.
We begin with a technical lemma which will help us to show optimality of permutation schedules in this section and also in Sect.
4.
Lemma 2
Let
S be a feasible schedule for a PFB and let
\(\sigma \) be some earliest release date ordering of the jobs. Then there exists a feasible permutation schedule
\({\hat{S}}\) in which the jobs are ordered by
\(\sigma \) and the multiset of job completion times in
\({\hat{S}}\) is the same as in
S.
Proof
Suppose that jobs are indexed according to ordering
\(\sigma \), i.e.
\(\sigma \) is the identity permutation. Otherwise renumber jobs accordingly. Note that since
\(\sigma \) is an earliest release date ordering, it follows that
\(r_1\le r_2 \le \ldots \le r_n\). Let
\(B^{(i)}_\ell \),
\(\ell =1,\ldots ,\kappa ^{(i)}\), be the
\(\ell \)th batch processed on
\(M_i\) in
S, where
\(\kappa ^{(i)}\) is the total number of batches used on
\(M_i\). Let
\(s^{(i)}_\ell \) and
\(c^{(i)}_\ell \) be the start and completion times of batch \(B^{(i)}_\ell \). Furthermore, let \(k_\ell ^{(i)}=|B^{(i)}_\ell |\) be the size of batch \(B^{(i)}_\ell \) and let
$$\begin{aligned} j^{(i)}_\ell = \sum _{s = 1}^{\ell } k^{(i)}_s \end{aligned}$$
be the number of jobs processed in batches \(B^{(i)}_1\), \(B^{(i)}_2, \ldots , B^{(i)}_\ell \).
Construct the new schedule
\({\hat{S}}\) using batches
\({\hat{B}}^{(i)}_\ell \),
\(i = 1,2, \ldots , m\),
\(\ell = 1,2, \ldots , \kappa ^{(i)}\), with start times \({\hat{s}}^{(i)}_\ell \) and completion times \({\hat{c}}^{(i)}_\ell \) given by
$$\begin{aligned} {\hat{B}}^{(i)}_\ell= & {} \{ J_{j^{(i)}_{\ell -1} + 1}, J_{j^{(i)}_{\ell -1} + 2}, \ldots , J_{j^{(i)}_{\ell }}\},\\ {\hat{s}}^{(i)}_\ell= & {} s^{(i)}_\ell , \\ {\hat{c}}^{(i)}_\ell= & {} c^{(i)}_\ell . \end{aligned}$$
In other words, in schedule \({\hat{S}}\) we use batches with the same start and completion times and of the same size as in S. We only reassign which job is processed in which batch, in such a way that the first batch on machine \(M_i\) processes the first \(k^{(i)}_1\) jobs (in order of their numbering, i.e. in the earliest release date order), the second batch processes jobs \(J_{k^{(i)}_1+1}\) to \(J_{k^{(i)}_1+k^{(i)}_2}\), the third batch processes jobs \(J_{k^{(i)}_1 + k^{(i)}_2+1}\) to \(J_{k^{(i)}_1 + k^{(i)}_2 + k^{(i)}_3}\), and so on. Thus, in schedule \({\hat{S}}\), jobs are processed in order of their indices on all machines (i.e. in order \(\sigma \)).
Clearly, as all batches have exactly the same start and completion times, as well as the same sizes as in
S, for schedule
\({\hat{S}}\) the multiset of job completion times on the last machine is exactly the same as for
S.
It remains to show that
\({\hat{S}}\) is feasible, i.e., no job is started on machine
\(M_1\) before its release date and no job is started on a machine
\(M_i\),
\(i = 2, 3, \ldots , m\) before finishing processing on machine
\(M_{i-1}\).
We first show that no batch
\({\hat{B}}^{(1)}_\ell \) on machine
\(M_1\) violates any release dates. Indeed, due to the feasibility of
S, at the time
\(s^{(1)}_\ell \), when batch
\(B^{(1)}_\ell \) is started, at least
\(j_\ell ^{(1)}\) jobs are released (recall that
\(j_\ell ^{(1)}\) is the number of jobs processed in batches
\(B^{(1)}_1, B^{(1)}_2, \ldots , B^{(1)}_\ell \)). This means that in particular the first
\(j_\ell ^{(1)}\) jobs in earliest release date order are released at time
\(s^{(1)}_\ell \). Thus, since batch
\({\hat{B}}^{(1)}_\ell \) by construction contains only the jobs
\(J_{j_{\ell -1}^{(1)} +1}\) to
\(J_{j_\ell ^{(1)}}\) and since jobs are numbered in earliest release date order, batch
\({\hat{B}}^{(1)}_\ell \) does not violate any release dates.
Finally, using an analogous argument, we show that no batch starts processing on machine
\(M_i\) before all jobs it contains finish processing on machine
\(M_{i-1}\),
\(i = 2, 3, \ldots , m\). By time
\(s^{(i)}_\ell \), when batch
\(B^{(i)}_\ell \) is started on machine
\(M_i\) in schedule
S, machine
\(M_{i-1}\) has processed at least
\(j_\ell ^{(i)}\) jobs. Thus, by construction, the same is true for batch
\({\hat{B}}^{(i)}_\ell \) in schedule
\({\hat{S}}\). In particular, this means that at time
\(s^{(i)}_\ell \) the first
\(j_\ell ^{(i)}\) jobs in order of their numbering are finished on machine
\(M_{i-1}\) in schedule
\({\hat{S}}\). Since batch
\({\hat{B}}^{(i)}_\ell \) contains only the jobs
\(J_{j_{\ell -1}^{(i)} +1}\) to
\(J_{j_\ell ^{(i)}}\), this means that batch
\({\hat{B}}^{(i)}_\ell \) starts only after each job contained in it has finished processing on machine
\(M_{i-1}\). Therefore, schedule
\({\hat{S}}\) is feasible.
\(\square \)
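The reassignment at the heart of this proof is simple enough to state as code. The following sketch is ours (with 1-based job indices as in the paper): batch sizes, start times, and completion times are kept, and the batches are simply refilled with jobs in earliest release date order.

```python
# Sketch of the construction in Lemma 2: keep each batch's size, start and
# completion time, but refill the batches with jobs in ERD order.
def refill(batch_sizes):
    """batch_sizes: sizes k_1, ..., k_kappa of the batches on one machine.
    Returns the 1-based job indices of the reconstructed batches B-hat."""
    batches, filled = [], 0          # 'filled' plays the role of j_{l-1}
    for k in batch_sizes:
        batches.append(list(range(filled + 1, filled + k + 1)))
        filled += k
    return batches

print(refill([3, 2]))  # [[1, 2, 3], [4, 5]]
```

Applying this per machine yields exactly the batches \({\hat{B}}^{(i)}_\ell \) of the proof.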
Note that, in particular, if all release dates are equal, then any ordering
\(\sigma \) is an earliest release date ordering of the jobs. This fact will be used in Sect.
4.
Now we show that for a PFB with the makespan or total completion time objective, permutation schedules are optimal and that any earliest release date ordering is an optimal ordering of the jobs. This result generalizes Sung and Yoon (1997, Proposition 1) to arbitrarily many machines. For makespan and total completion time, it also extends Sung et al. (2000, Lemma 1) to the case with release dates.
Theorem 3
For a PFB with objective function
\(C_{\max }\) or
\(\sum C_j\), permutation schedules are optimal. Moreover, any earliest release date ordering is an optimal ordering of the jobs.
Proof
Let
\(S^*\) be an optimal schedule with respect to makespan or total completion time. Let
\(\sigma \) be any earliest release date ordering. Using Lemma
2, construct a new permutation schedule
\({\hat{S}}\) with jobs ordered by
\(\sigma \) on all machines and with the same multiset of completion times as \(S^*\). Since makespan and total completion time only depend on the multiset of completion times,
\({\hat{S}}\) is optimal, too.
\(\square \)
Hence, when minimizing the makespan or the total completion time in a PFB, it is valid to limit oneself to permutation schedules in an earliest release date order.
To conclude this section, we present an example where no permutation schedule is optimal for several other objective functions. This shows that Theorem
3 does not hold for these objective functions. It also implies that the statements made in Sung and Yoon (
1997, Proposition 1), Sung et al. (
2000, Lemma 1), and Sung and Kim (
2003, Lemma 1) cannot be generalized to settings in which release dates and due dates or release dates and weights are present.
Example 4
Consider a PFB with
\(m=3\) machines,
\(b_1=p_1=b_3=p_3=1\), and
\(b_2=p_2=2\). Suppose there are
\(n=2\) jobs with release dates
\(r_1=0\) and
\(r_2=1\). Let due dates
\(d_1=6\) and
\(d_2=5\) be given. First, we show that no permutation schedule is optimal for
\(L_{\max }\). Figure
3 shows a joboriented Gantt chart of a feasible schedule which is not a permutation schedule. Each rectangle labeled by a machine denotes a batch processed on this machine. The black box indicates that the second job has not been released yet. The gray shaded area denotes points in time after the due dates.
When constructing a permutation schedule for this instance, two decisions have to be made, namely which of the two jobs is processed first, and whether a batch of size two or two batches of size one should be used on
\(M_2\). The resulting four possible schedules are depicted in Fig.
4.
However, the schedule in Fig.
3 achieves an objective value of zero, while all schedules in Fig.
4 have a positive objective value. Hence, there is no optimal permutation schedule for this instance. The same example shows that there is no optimal permutation schedule for
\(\sum T_j\) and
\(\sum U_j\).
3 Dynamic programming to find an optimal schedule for a given job permutation
In this section we show that for a fixed number
m of machines and a fixed ordering of the jobs
\(\sigma \), we can construct a polynomial time dynamic program, which finds a permutation schedule that is optimal among all schedules obeying job ordering
\(\sigma \). The algorithm works for an arbitrary regular sum or bottleneck objective function. Combined with Theorem
3 this shows that the makespan and the total completion time in a PFB can be minimized in polynomial time for fixed
m.
The dynamic program is based on the important observation that, for a given machine
\(M_i\), the set of possible job completion times on
\(M_i\) is not too large. In order to formalize this statement, we introduce the notion of a schedule being
\(\varGamma \)-active. For ease of notation, we use the shorthand
\([k]=\{1,2,\ldots ,k\}\).
Definition 5
Given an instance of PFB with n jobs and m machines, let
$$\begin{aligned} \varGamma _i=\left\{ r_{j'} + \sum _{i'=1}^{i}\lambda _{i'}p_{i'}\mathrel {}\Bigg \vert \mathrel {}j'\in [n],\ \lambda _{i'}\in [n]\text { for } i'\in [i] \right\} \end{aligned}$$
for all \(i \in \left[ m\right] \). We say that a schedule S is \(\varGamma \)-active, if \(c_{ij}(S) \in \varGamma _i\), for any job \(J_j\), \(j \in \left[ n\right] \), and any machine \(M_i\), \(i \in \left[ m\right] \).
Note that on machine \(M_i\), there are only \(|\varGamma _i|\le n^{i+1}\le n^{m+1}\) possible job completion times to consider for any \(\varGamma \)-active schedule S.
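This cardinality bound can be checked by brute-force enumeration on a small instance; the sketch below is ours, using the instance of Example 1.

```python
# Sketch: enumerate the sets Γ_i from Definition 5 and check |Γ_i| <= n^(i+1).
from itertools import product

def gamma_sets(p, r):
    n = len(r)
    sets = []
    for i in range(1, len(p) + 1):          # machines M_1, ..., M_m
        vals = {rj + sum(lam[k] * p[k] for k in range(i))
                for rj in r
                for lam in product(range(1, n + 1), repeat=i)}
        sets.append(vals)
    return sets

for i, g in enumerate(gamma_sets([2, 3], [0, 0, 1, 1, 2]), start=1):
    assert len(g) <= 5 ** (i + 1)           # |Γ_i| <= n^(i+1) with n = 5
```

Note that the optimal makespan 8 of Example 1 indeed lies in \(\varGamma _2\), obtained as \(r_1 + 1\cdot p_1 + 2\cdot p_2\).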
Now we show that for a PFB problem with a regular objective function, any schedule can be transformed into a \(\varGamma \)-active schedule, without increasing the objective value. To do so, we prove that this is true even for a slightly stronger concept than \(\varGamma \)-active.
Definition 6
A schedule is called batch-active, if no batch can be started earlier without violating feasibility of the schedule and without changing the order of batches.
Clearly, given a schedule
S that is not batch-active, by successively removing unnecessary idle times we can obtain a new, batch-active schedule
\(S^\prime \). Furthermore, for regular objective functions,
\(S^\prime \) has objective value no higher than the original schedule
S. These two observations immediately yield the following lemma.
Lemma 7
A schedule for a PFB can always be transformed into a batch-active schedule such that any regular objective function is not increased. This transformation does not change the order in which the jobs are processed on the machines.
\(\square \)
Now we show that, indeed, being batch-active is stronger than being \(\varGamma \)-active; in other words, every batch-active schedule is also \(\varGamma \)-active. This result generalizes an observation made by Baptiste (
2000, Section 2.1) from a single machine to the flow shop setting.
Lemma 8
In a PFB, any batch-active schedule is also \(\varGamma \)-active.
Proof
Fix some
\(i\in [m]\) and
\(j\in [n]\). Let
\(B^{(i)}_\ell \) be the batch which contains job
\(J_j\) on machine
\(M_i\). Since the schedule is batch-active,
\(B^{(i)}_\ell \) is either started at the completion time of the previous batch
\(B^{(i)}_{\ell -1}\) on the same machine or as soon as all jobs of
\(B^{(i)}_\ell \) are available on machine
\(M_i\). In the former case, all jobs
\(J_{j'}\in B^{(i)}_{\ell -1}\) satisfy
\(c_{ij} = c_{ij'}+p_i\). In the latter case, there is a job
\(J_{j''}\in B^{(i)}_\ell \) such that
\(c_{ij}=c_{(i-1)j''}+p_i\), where we write
\(c_{0j''}=r_{j''}\) for convenience. The claim follows inductively by observing that the former case can happen at most
\(n-1\) times in a row, since there are at most
n batches on each machine.
\(\square \)
Lemma 9
Let
S be a schedule for a PFB problem with regular objective function
f. Then there exists a schedule
\(S^\prime \) for the same PFB problem, such that
1.
On each machine, jobs appear in the same order in
\(S^\prime \) as they do in
S,
2.
\(S^\prime \) has objective value no larger than
S, and
3.
\(S'\) is
\(\varGamma \)-active.
\(\square \)
In particular, Lemma
9 shows that, if for a PFB problem an optimal job ordering
\(\sigma ^*\) is given, then there exists an optimal schedule which is a permutation schedule with jobs ordered by
\(\sigma ^*\) and in addition
\(\varGamma \)-active.
From this point on, we assume that the objective function is a regular sum or bottleneck function, that is,
\(f(C) = \bigoplus _{j=1}^n f_j(C_j)\), where
\(\bigoplus \in \{\sum ,\max \}\) and
\(f_j\) is non-decreasing for all
\(j\in [n]\). We also use the symbol
\(\oplus \) as a binary operator. In what follows, we present a dynamic program which, given a job ordering
\(\sigma \), finds a schedule that is optimal among all
\(\varGamma \)-active permutation schedules obeying job ordering
\(\sigma \). For simplicity, we assume that jobs are already indexed by
\(\sigma \). The dynamic program schedules the jobs one after the other, until all jobs are scheduled. Due to Lemma
9, if
\(\sigma \) is an optimal ordering, then the resulting schedule is optimal.
Given an instance
I of a PFB and a job index
\(j\in [n]\), let
\(I_j\) be the modified instance which contains only the jobs
\(J_1,\dots ,J_j\). We write
\(\varGamma =\varGamma _1 \times \varGamma _2 \times \ldots \times \varGamma _m\) and
\({\mathcal {B}}=\left[ b_1\right] \times \left[ b_2\right] \times \ldots \times \left[ b_m\right] \), where
\(\times \) is the standard Cartesian product. For a vector
$$\begin{aligned} t = (t_1, t_2, \ldots , t_i, \ldots , t_{m-1}, t_m) \in \varGamma \end{aligned}$$
of
m possible completion times and a vector
$$\begin{aligned} k = (k_1, k_2, \ldots , k_i, \ldots , k_{m-1}, k_m) \in {\mathcal {B}} \end{aligned}$$
of
m possible batch sizes, we say that a schedule
S for instance
\(I_j\)
corresponds to
t and
k if, for all
\(i\in [m]\),
\(c_{ij}=t_i\) and job
\(J_j\) is contained in a batch with exactly
\(k_i\) jobs (including
\(J_j\)) on
\(M_i\).
Next, we define the variables
g of the dynamic program. Let
\({\mathcal {S}}(j,t,k)\) be the set of feasible permutation schedules
S for instance
\(I_j\) satisfying the following properties:
1.
Jobs in
S are ordered by their indices,
2.
S corresponds to
t and
k, and
3.
S is
\(\varGamma \)-active.
Then, for
\(j\in [n]\),
\(t\in \varGamma \), and
\(k\in {\mathcal {B}}\), we define
$$\begin{aligned} g(j,t,k)=g(j,t_1,t_2,\dots ,t_m,k_1,k_2,\dots ,k_m) \end{aligned}$$
to be the minimum objective value of a schedule in
\({\mathcal {S}}(j,t,k)\) for instance
\(I_j\). If no such schedule exists, the value of
g is defined to be
\(+\infty \).
Using the above definitions as well as Lemma
9, the objective value of the best permutation schedule ordered by the job indices is given by
\(\min _{t,k} g(n,t,k)\),
\(t\in \varGamma \),
\(k\in {\mathcal {B}}\). In Lemmas
10 and
11 (below) we provide formulas to recursively compute the values
g(
j,
t,
k),
\(j=1,2,\ldots ,n\),
\(t\in \varGamma \),
\(k \in {\mathcal {B}}\). In total, we then obtain Algorithm
1 as our dynamic program.
(Algorithm 1: dynamic program pseudocode figure)
The remainder of this section deals with proving the correctness and running time bound of Algorithm
1. As mentioned above, in Lemmas
10 and
11 we define and prove the correctness of a recursive formula to compute all values of function
g. Lemma
10 deals with the starting values
g(1,
t,
k) while Lemma
11 deals with the recurrence relation. Finally, in Theorem
12 we connect the results of this section to prove the correctness of Algorithm
1.
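As a concrete illustration of the dynamic program, the following Python sketch implements the recursion over the states \((j,t,k)\) by brute force for tiny instances. It is our own minimal reference implementation, not the paper's code; names such as `solve_pfb` are invented. Jobs are assumed to be pre-sorted by the given ordering \(\sigma \), and `f(j, C)` plays the role of \(f_{j+1}(C)\) with 0-based job indices.

```python
import itertools
import math

def gamma_sets(n, m, r, p):
    """Candidate completion-time sets Gamma_1, ..., Gamma_m."""
    gammas = []
    for i in range(1, m + 1):
        vals = set()
        for rj in r:
            for lams in itertools.product(range(1, n + 1), repeat=i):
                vals.add(rj + sum(l * q for l, q in zip(lams, p)))
        gammas.append(sorted(vals))
    return gammas

def solve_pfb(n, m, r, p, b, f, oplus):
    """Best objective over Gamma-active permutation schedules for the given
    job order; f(j, C) is the cost of job j (0-based) finishing at time C,
    and oplus is max (bottleneck) or a binary sum operator."""
    gammas = gamma_sets(n, m, r, p)
    T = list(itertools.product(*gammas))                          # vectors t in Gamma
    K = list(itertools.product(*[range(1, bi + 1) for bi in b]))  # vectors k in B
    INF = math.inf

    def star(j, t):  # condition (*): enough room to process job j on every machine
        return t[0] >= r[j] + p[0] and all(
            t[i + 1] >= t[i] + p[i + 1] for i in range(m - 1))

    g = {}
    for t in T:                                                   # starting values
        for k in K:
            ok = all(ki == 1 for ki in k) and star(0, t)
            g[(1, t, k)] = f(0, t[-1]) if ok else INF

    for j in range(2, n + 1):                                     # recurrence
        for t in T:
            for k in K:
                best = INF
                if star(j - 1, t):
                    for tp in T:
                        for kp in K:
                            # condition (**): new singleton batch, or join the
                            # batch of the previous job on each machine
                            if all((k[i] == 1 and tp[i] <= t[i] - p[i]) or
                                   (k[i] > 1 and tp[i] == t[i] and kp[i] == k[i] - 1)
                                   for i in range(m)) and g[(j - 1, tp, kp)] < INF:
                                best = min(best, oplus(f(j - 1, t[-1]),
                                                       g[(j - 1, tp, kp)]))
                g[(j, t, k)] = best

    return min(g[(n, t, k)] for t in T for k in K)

# Two identical jobs, two machines, p = (1, 1), batch capacity 2 on both:
# batching the jobs together on each machine gives makespan 2.
assert solve_pfb(2, 2, [0, 0], [1, 1], [2, 2], lambda j, Cj: Cj, max) == 2
```

The state space is enumerated exhaustively, so this sketch is only practical for very small `n` and `m`; the point is that the recurrence of the dynamic program can be read off directly from the definitions above.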
Lemma 10
For
\(j=1\),
\(t_i\in \varGamma _i\), and
\(k_i\in [b_i]\),
\(i\in [m]\), the starting values of
g are given by
$$\begin{aligned} g(1,t,k) = \left\{ \begin{array}{ll} f_1(t_m), &{} \text {if conditions (i)–(iii) hold},\\ +\infty , &{} \text {otherwise,} \end{array} \right. \end{aligned}$$
where
(i)
\(k_i=1\) for all
\(i\in [m]\),
(ii)
\(t_1\ge r_1 + p_1\), and
(iii)
\(t_{i+1}\ge t_{i}+p_{i+1}\) for all
\(i\in [m-1]\).
Proof
Conditions (i)–(iii) are necessary for the existence of a schedule in
\({\mathcal {S}}(1,t,k)\) because
\(I_1\) consists of only one job and there must be enough time to process this job on each machine. Conversely, if (i)–(iii) are satisfied, then the vector
t of completion times uniquely defines a feasible schedule
\(S \in {\mathcal {S}}(1,t,k)\) by
\(c_{i1}(S)=t_i\). Since
S is uniquely defined by
t and thus the only schedule in
\({\mathcal {S}}(1,t,k)\), its objective value
\(f_1(C_1)=f_1(c_{m1})=f_1(t_m)\) is optimal.
\(\square \)
Now we turn to the recurrence formula to calculate
g for
\(j>1\) from the values of
g for
\(j-1\).
Lemma 11
For
\(j>1\),
\(t_i\in \varGamma _i\), and
\(k_i\in [b_i]\),
\(i\in [m]\), the values of
g are determined by
$$\begin{aligned}&g(j,t,k) \\&\quad = \left\{ \begin{array}{ll} \min \{f_j(t_m)\oplus g(j-1,t',k') \mid (**) \}, &{} \text {if }(*),\\ +\infty , &{} \text {otherwise,} \end{array} \right. \end{aligned}$$
where the minimum over the empty set is defined to be
\(+\infty \). Here,
\((*)\) is given by conditions
(i)
\(t_1\ge r_j + p_1\) and
(ii)
\(t_{i+1}\ge t_{i}+p_{i+1}\) for all
\(i\in [m-1]\),
and
\((**)\) is given by conditions
(iii)
\(t'_i\in \varGamma _i\),
(iv)
\(k'_i\in [b_i]\),
(v)
if
\(k_i=1\), then
\(t'_i\le t_i-p_i\), and
(vi)
if
\(k_i>1\), then
\(t'_i=t_i\) and
\(k'_i=k_i-1\),
for all
\(i\in [m]\).
Proof
Fix values
\(t_i\in \varGamma _i\) and
\(k_i\in [b_i]\),
\(i\in [m]\). The conditions of
\((*)\) are necessary for the existence of a schedule in
\({\mathcal {S}}(j,t,k)\) because there must be enough time to process job
\(J_j\) on each machine. Therefore,
g takes the value
\(+\infty \), if
\((*)\) is violated. For the remainder of the proof, assume that
\((*)\) is satisfied. Hence, we have to show that
$$\begin{aligned} g(j,t,k)=\min \left\{ f_j(t_m)\oplus g(j-1,t',k') \mathrel {}\Bigg \vert \mathrel {}(**)\right\} . \end{aligned}$$
(3)
We first prove “
\(\ge \)”. If the left-hand side of (
3) equals infinity, then this direction follows immediately. Otherwise, by the definition of
g, there must be a schedule
\(S\in {\mathcal {S}}(j,t,k)\) with objective value
g(
j,
t,
k). Schedule
S naturally defines a feasible schedule
\(S'\) for instance
\(I_{j-1}\) by ignoring job
\(J_j\).
Observe that, because
S belongs to
\({\mathcal {S}}(j,t,k)\) and therefore is
\(\varGamma \)-active,
\(S'\) is also
\(\varGamma \)-active and job
\(J_{j-1}\) finishes processing on machine
\(M_i\) at some time
\(t'_i \in \varGamma _i\). Also, since
S is feasible, on each machine
\(M_i\) job
\(J_{j-1}\) is scheduled in some batch of size
\(k'_i \le b_i\). Thus,
\(S'\) corresponds to two unique vectors
\(t' = (t'_1, t'_2, \ldots , t'_m)\) and
\(k' = (k'_1, k'_2, \ldots , k'_m)\), which satisfy (iii) and (iv) from
\((**)\). Note that, in particular,
\(S' \in {\mathcal {S}}(j-1,t',k')\).
Furthermore,
\(t'\) and
\(k'\) also satisfy (v) and (vi) from
\((**)\). Indeed, due to the fixed job permutation, one of the following two things happens on each machine
\(M_i\) in
S: either jobs
\(J_j\) and
\(J_{j-1}\) are batched together, or job
\(J_j\) is batched in a singleton batch. In the former case, it follows that
\(1<k_i=k'_i+1\) and
\(t'_i=t_i\), while the latter case requires
\(k_i=1\) and
\(t_i\ge t'_i + p_i\), since the machine is occupied by the previous batch with job
\(J_{j-1}\) until then.
Thus,
\(t'\) and
\(k'\) satisfy
\((**)\) and we obtain
$$\begin{aligned} g(j,t,k)&= \bigoplus _{j'=1}^j f_{j'}(C_{j'}(S)) \\&= f_j(t_m)\oplus \bigoplus _{j'=1}^{j-1} f_{j'}(C_{j'}(S'))\\&\ge f_j(t_m)\oplus g(j-1,t',k'), \end{aligned}$$
where the last inequality follows due to the definition of
g and
\(S'\in {\mathcal {S}}(j-1,t',k')\). Hence, the “
\(\ge \)” direction in (
3) follows because
\(t'\) and
\(k'\) satisfy
\((**)\).
For the “
\(\le \)” direction, if the right-hand side of (
3) equals infinity, then this direction follows immediately. Otherwise, let
\(t'\) and
\(k'\) be minimizers of the right-hand side. By definition of
g there must be a schedule
\(S'\in {\mathcal {S}}(j-1,t',k')\) for
\(I_{j-1}\) with objective value
\(g(j-1,t',k')\).
We now show that
\(S'\) can be extended to a feasible schedule
\(S\in {\mathcal {S}}(j,t,k)\). Construct
S from
\(S'\) by adding job
\(J_j\) in the following way:
–
if in \(S'\) there is a batch on machine \(M_i\) which ends at time \(t_i\), add \(J_j\) to that batch (we show later that this does not cause the batch to exceed the maximum batch size \(b_i\) of \(M_i\));
–
otherwise, add a new batch on machine \(M_i\) finishing at time \(t_i\) and containing only job \(J_j\) (we show later that this does not create overlap with any other batch on machine \(M_i\)).
First note that
\(c_{ij}(S) = t_i\) for all
\(i \in \left[ m\right] \), and thus
S corresponds to
t by definition. Furthermore, for each machine
\(M_i\) consider the two cases (a)
\(k_i = 1\) and (b)
\(k_i > 1\).
In case (a), due to (v) it follows that
\(t'_i \le t_i - p_i\). Since job
\(J_{j-1}\) finishing at time
\(t'_i\) is the last job completed on machine
\(M_i\) in schedule
\(S'\), by construction in schedule
S job
\(J_j\) is in a singleton batch on machine
\(M_i\) that starts at time
\(t_i - p_i\) and ends at time
\(t_i\). Therefore, in this case,
S corresponds to
k, because
\(k_i = 1\) and job
\(J_j\) is in a singleton batch. Also, since the last batch on machine
\(M_i\) in
\(S'\) ends at time
\(t'_i \le t_i - p_i\) and the batch with job
\(J_j\) starts at time
\(t_i-p_i\), no overlapping happens between the new batch for job
\(J_j\) and any other batch on machine
\(M_i\).
In case (b), due to (vi) it follows that
\(t'_i = t_i\) and by construction job
\(J_j\) is scheduled in the same batch as job
\(J_{j-1}\) on machine
\(M_i\) in
S. Since
\(S'\) corresponds to
\(k'\) and again due to (vi), this means that
\(J_j\) is scheduled in a batch with
\(k_i\) jobs on machine
\(M_i\) and S corresponds to
k in this case. Also, since
\(k_i \in \left[ b_i\right] \) by definition, no batch in
S exceeds its permissible size.
Combining the considerations for cases (a) and (b), together with the feasibility of schedule
\(S'\), it follows that
(1)
S corresponds to
k,
(2)
no overlapping happens between batches in
S, and
(3)
all batches in
S are of permissible size.
In order to show feasibility of
S it is only left to show that no job starts before its release date on machine
\(M_1\), and no job starts on machine
\(M_i\) before it is completed on machine
\(M_{i-1}\). As
\(S'\) is feasible, this is clear for all jobs other than
\(J_j\). On the other hand, since
S corresponds to
t, and
t fulfills
\((*)\), it also follows for job
\(J_j\).
Thus,
S is indeed feasible. In total, we have shown all conditions for
\(S\in {\mathcal {S}}(j,t,k)\). Therefore, we obtain
$$\begin{aligned} g(j,t,k)&\le \bigoplus _{j'=1}^j f_{j'}(C_{j'}(S))\\&= f_j(t_m)\oplus \bigoplus _{j'=1}^{j-1} f_{j'}(C_{j'}(S'))\\&= f_j(t_m)\oplus g(j-1,t',k'), \end{aligned}$$
where the last equality is due to the choice of
\(S'\) as a schedule with objective value
\(g(j-1, t', k')\). Hence, the “
\(\le \)” direction in (
3) follows due to the choice of
\(t'\) and
\(k'\) as minimizers of the right-hand side.
\(\square \)
Using Lemmas
9,
10, and
11 we can now prove correctness of Algorithm
1 and show that its runtime is bounded polynomially (if
m is fixed).
Theorem 12
Consider a PFB instance with a constant number
m of machines and a regular sum or bottleneck objective function. Then, for a given ordering of the jobs, Algorithm
1 finds the best permutation schedule in time
\({\mathcal {O}}(n^{m^2+4m+1})\).
Proof
Without loss of generality, assume that the desired job ordering is given by the job indices. By Lemmas
10 and
11, Algorithm
1 correctly computes the values of function
g. Lemma
9 assures that the resulting schedule is indeed optimal among all permutation schedules ordered by the job indices.
Concerning the time complexity, note that there are
\(\left| \varGamma \right| \) many possible choices for vector
t and
\(\left| {\mathcal {B}} \right| \) many possible choices for vector
k, yielding a total of
\(\left| \varGamma \right| \cdot \left| {\mathcal {B}} \right| \) possible combinations for vectors
t and
k. This number can be bounded from above as follows:
$$\begin{aligned} \begin{array}{ccccc} \left| \varGamma \right| \cdot \left| {\mathcal {B}} \right| &{} =\ &{} \left| \varGamma _1\right| \cdot \left| \varGamma _2\right| \cdot \ldots \cdot \left| \varGamma _m\right| &{}\cdot &{} b_1 \cdot b_2 \cdot \ldots \cdot b_m\\ &{}\le \ &{} n^2 \cdot n^3 \cdot \ldots \cdot n^{m+1} &{} \cdot &{} n^m\\ &{}=\ &{} n^{\frac{(m+1)(m+2)}{2}-1+m}&{} &{}\\ &{}=\ &{} n^{\frac{m^2}{2}+\frac{5m}{2}}.&{} &{} \end{array} \end{aligned}$$
Here, the first
m factors in the second line are due to
\(\left| \varGamma _i\right| \le n^{i+1}\) and the last factor
\(n^m\) is due to the inequalities
\(b_i \le n\) for all
\(i \in \left[ m\right] \).
Therefore, Step
1 of Algorithm
1 involves at most
\(n^{\frac{m^2}{2}+\frac{5m}{2}}\) iterations, each of which can be performed in constant time because
m is fixed.
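The exponent arithmetic behind this bound is easy to verify mechanically; the snippet below (purely illustrative) checks the identity for small values of m:

```python
# Sanity check of the exponent arithmetic: the product of the bounds
# n^2 * n^3 * ... * n^(m+1) * n^m has exponent (2 + 3 + ... + (m+1)) + m,
# which equals (m^2 + 5m) / 2 (always an integer, since m^2 + 5m is even).
for m in range(1, 21):
    assert sum(range(2, m + 2)) + m == (m * m + 5 * m) // 2
```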
For Step
2, first observe that conditions
\((*)\) and
\((**)\) can be checked in constant time for fixed
m. In a single iteration, that is, for fixed
j,
t, and
k, in order to compute the minimum in the recurrence of Lemma
11, we may have to consider all possible choices for
\(t'\) and
\(k'\). The number of these choices is
\(\left| \{(t',k')\in \varGamma \times {\mathcal {B}}\mid (**)\}\right| \). Aggregating over all
t and
k, while keeping
j fixed, yields
$$\begin{aligned}&\sum _{(t,k)\in \varGamma \times {\mathcal {B}}}\left| \{(t',k')\in \varGamma \times {\mathcal {B}}\mid (**)\}\right| \\&\quad = \left| \{(t,t',k,k')\in \varGamma \times \varGamma \times {\mathcal {B}} \times {\mathcal {B}}\mid (**)\}\right| \\&\quad = \prod _{i=1}^m\ \left| \{(t_i,t'_i,k_i,k'_i) \in \varGamma _i \times \varGamma _i \times [b_i] \times [b_i] \mid (v)\text { and }(vi)\}\right| . \end{aligned}$$
The number of quadruples
\((t_i,t'_i,k_i,k'_i)\) satisfying (v) and (vi) can be bounded by
\(b_i\left| \varGamma _i\right| ^2+b_i\left| \varGamma _i\right| \). Here, the first term corresponds to the quadruples with
\(k_i=1\), while the second term corresponds to the quadruples with
\(k_i>1\), in which case
\(t'_i\) and
\(k'_i\) are uniquely determined by
\(t_i\) and
\(k_i\). Further, note that due to
\(\left| \varGamma _i\right| \ge 1\), it holds that
\(b_i\left| \varGamma _i\right| ^2+b_i\left| \varGamma _i\right| \le 2 b_i \left| \varGamma _i\right| ^2\). Hence, the total running time of Step
2 (also aggregating over all choices of
j) is at most
$$\begin{aligned} \begin{aligned}&{\mathcal {O}}\left( n\prod _{i=1}^m 2 b_i \left| \varGamma _i\right| ^2\right) \\&\quad ={\mathcal {O}}\left( n \prod _{i=1}^m b_i \left| \varGamma _i\right| ^2\right) \\&\quad \subseteq {\mathcal {O}}\left( n\prod _{i=1}^m n\cdot n^{2i+2}\right) \\&\quad ={\mathcal {O}}(n^{m^2+4m+1}). \end{aligned} \end{aligned}$$
(4)
The first equality is due to the factor
\(2^m\) being constant for constant
m and the set inclusion is due to
\(b_i \le n\) and
\(\left| \varGamma _i\right| \le n^{i+1}\).
Finally, in Step
3, we take the minimum over at most
\(n^{\frac{m^2}{2}+\frac{5m}{2}}\) many values. The backtracking and reconstruction of the final schedule can be done in time
\({\mathcal {O}}(n)\) for constant
m.
Hence, in total, Step
2 is the bottleneck and the total time complexity is
\({\mathcal {O}}(n^{m^2+4m+1})\).
\(\square \)
Corollary 13
For a PFB with release dates and a fixed number
m of machines, the makespan can be minimized in
\({\mathcal {O}}(n^{m^2+4m+1})\) time. The same holds for the total completion time.
\(\square \)
In particular, for any fixed number
m of machines, the problems
$$\begin{aligned} Fm \mid r_j, p_{ij} = p_i, p\text {-batch}, b_i \mid f \end{aligned}$$
with
\(f\in \{C_{\max }, \sum C_j \}\) can be solved in polynomial time.
3.1 Improved running time for makespan minimization
Sung et al. (
2000, Theorem 6) showed that for minimizing the makespan in a PFB, it is optimal to use the so-called first-only-empty batching on the last machine
\(M_m\), that is, to completely fill all batches on the last machine, except for the first batch, which might contain fewer jobs if
n is not a multiple of
\(b_m\). In a permutation schedule ordered by the job indices with first-only-empty batching on
\(M_m\), a makespan of
c can be achieved if and only if for each job
\(J_j\) we have enough time to process the
\(n-j+1\) jobs
\(J_j, J_{j+1},\dots ,J_n\) on
\(M_m\) after
\(J_j\) has been completed on
\(M_{m-1}\). In other words, a makespan of
c can be achieved, if and only if
\(c\ge c_{(m-1)j}+\left\lceil \frac{n-j+1}{b_m}\right\rceil p_m\) for every
\(j\in [n]\). Hence, the optimal makespan is given by
\(\max _{j=1,2,\ldots ,n} c_{(m-1)j}+\left\lceil \frac{n-j+1}{b_m}\right\rceil p_m\). Therefore, the problem to minimize the makespan on an
m-machine PFB can be solved in the following way: we first find a schedule for the first
\(m-1\) machines that minimizes the objective function
\(\bigoplus _{j=1}^n f_j(c_{(m-1)j})\) with
\(\bigoplus =\max \) and
\(f_j(c_{(m-1)j})=c_{(m-1)j}+\left\lceil \frac{n-j+1}{b_m}\right\rceil p_m\). By Theorem
12, this can be done in time
$$\begin{aligned} {\mathcal {O}}(n^{(m-1)^2+4(m-1)+1})={\mathcal {O}}(n^{m^2+2m-2}) \end{aligned}$$
for fixed
m. Afterwards, this schedule is extended by a first-only-empty batching on
\(M_m\), which does not further increase the asymptotic runtime. Thus, for the makespan objective, we can strengthen Corollary
13:
Corollary 14
For a PFB with release dates and a fixed number
m of machines, the makespan can be minimized in
\({\mathcal {O}}(n^{m^2+2m-2})\) time.
\(\square \)
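The closed-form expression behind Corollary 14 is cheap to evaluate once the first \(m-1\) machines are scheduled. The sketch below is our own helper (the name `optimal_makespan` is invented), assuming each batch on \(M_m\) occupies \(p_m\) time units; it computes the optimal makespan under first-only-empty batching from the completion times on \(M_{m-1}\):

```python
import math

def optimal_makespan(c_prev, b_m, p_m):
    """c_prev[j]: completion time of the (j+1)-st job on machine M_{m-1}
    (jobs in permutation order). Returns the best makespan achievable with
    first-only-empty batching on the last machine M_m."""
    n = len(c_prev)
    # Job j (0-based) is followed by n - j jobs (itself included), which
    # occupy ceil((n - j) / b_m) batches of length p_m on M_m.
    return max(c + math.ceil((n - j) / b_m) * p_m for j, c in enumerate(c_prev))
```

For completion times (1, 2, 3) on \(M_{m-1}\) with \(b_m=2\) and \(p_m=1\), the smaller first batch ends at time 2 and the full batch containing the last two jobs ends at time 4, which the helper reproduces.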
4 Equal release dates
The dynamic program presented in the previous section works for a very general class of objective functions. Still, Corollary
13 only holds for makespan and total completion time because for other standard objective functions permutation schedules are not optimal, as shown in Example
4.
However, in the case where all release dates are equal, it turns out that permutation schedules are always optimal. We first show that in the case of equal release dates, any feasible schedule
S can be transformed into a feasible permutation schedule
\(S^\prime \) with equal objective value.
Lemma 15
Let
S be a feasible schedule for a PFB where all jobs are released at time 0. Then there exists a feasible permutation schedule
\(S^\prime \) such that
\(C_j(S) = C_j(S^\prime )\) for all
\(j = 1, 2, \ldots , n\).
Proof
Let
\(\sigma \) be the ordering of jobs on the last machine. Since
\(r_j = 0\) for all
\(j = 1, 2, \ldots , n\), ordering
\(\sigma \) is an earliest release date ordering. Thus we can use Lemma
2 to construct a permutation schedule
\(S^\prime \), where jobs are ordered by
\(\sigma \) on all machines and the multiset of completion times is the same as for
S. Furthermore, since the ordering on the last machine in
\(S^\prime \) is the same as in
S, in
\(S^\prime \) each job has the same completion time as in
S, i.e.
\(C_j(S) = C_j(S^\prime )\) for all
\(j=1,2,\ldots ,n\).
\(\square \)
From Lemma
15, it follows that permutation schedules are optimal for PFBs with equal release dates, as stated in the following theorem. The theorem generalizes the result presented by Rachamadugu et al. (
1982) for usual proportionate flow shop without batching. Also compare Sung et al. (
2000, Lemma 1). In the theorem, we use the term
scheduling objective function to describe any objective function only dependent on the completion times of the jobs.
Theorem 16
For any scheduling objective function, if an optimal schedule exists for a PFB problem without release dates, then there also exists an optimal permutation schedule.
Proof
Apply Lemma
15 to an optimal schedule
\(S^*\) in order to obtain a permutation schedule
\(S^\prime \). Note that, by Lemma
15, for every job, the completion time on the last machine in schedule
\(S^\prime \) is exactly the same as in schedule
\(S^*\). Thus, for any completion time dependent objective function, schedules
\(S^*\) and
\(S^\prime \) have the same objective value.
\(\square \)
Note that for all regular objective functions (including those studied in this paper), at least one optimal solution always exists. Indeed, Lemma
7 shows that for regular objective functions batch-active schedules are optimal, and there are only finitely many batch-active schedules for a PFB. Thus, the restriction in the statement of the theorem is only a formality. However, Theorem
16 holds for a much wider range of scheduling objective functions than studied in this paper. Importantly, since finish times for all jobs stay exactly the same when moving from schedule
\(S^*\) to schedule
\(S^\prime \) in the proof of Theorem
16, the theorem holds also for nonregular objective functions (as long as they are defined in such a way that at least one optimal solution exists).
Of course, it may still be hard to find optimal permutations. However, the following theorem shows that for several traditional objective functions, an optimal permutation can be found efficiently. For the solution of due date related objectives, earliest due date orderings are of particular importance. They are defined analogously to earliest release date orderings: an ordering
\(\sigma \) is an
earliest due date ordering, if
\(d_{\sigma (1)} \le d_{\sigma (2)} \le \ldots \le d_{\sigma (n)}\).
Theorem 17
Consider a PFB without release dates.
(i)
For minimizing the total weighted completion time, any ordering by non-increasing weights is optimal.
(ii)
For minimizing maximum lateness and total tardiness, any earliest due date ordering is optimal.
Proof
For proving (i), assume jobs are indexed by any given non-increasing order of weights, i.e.,
\(w_{1}\ge w_{2} \ge \dots \ge w_{n}\). By Theorem
16, there exists an optimal permutation schedule
\(S^*\) minimizing the total weighted completion time. Suppose there is at least one pair of jobs
\(J_{j_1}, J_{j_2}\) with
\(j_1 < j_2\), but
\(C_{j_1}(S^*) > C_{j_2}(S^*)\). Swapping
\(J_{j_1}\) and
\(J_{j_2}\) in
\(S^*\) yields a new permutation schedule
\(S^\prime \), with objective value
$$\begin{aligned} \sum w_j C_j (S^\prime )= & {} \sum w_j C_j (S^*) \\&+(w_{j_1}- w_{j_2})(C_{j_2}(S^*) - C_{j_1}(S^*)). \end{aligned}$$
Due to
\(j_1 < j_2\), we have
\(w_{j_1} \ge w_{j_2}\) and with
\(C_{j_1}(S^*) > C_{j_2}(S^*)\) it follows that
$$\begin{aligned} (w_{j_1}- w_{j_2})(C_{j_2}(S^*) - C_{j_1}(S^*))\le 0. \end{aligned}$$
Therefore, we obtain
$$\begin{aligned} \sum w_j C_j (S^\prime ) \le \sum w_j C_j (S^*). \end{aligned}$$
Performing such exchanges of jobs sequentially, we eventually obtain an optimal permutation schedule in which jobs are scheduled in the given order of non-increasing weights. Hence, any given ordering of the jobs by non-increasing weights is optimal.
Part (ii) can be proven by exchange arguments analogous to (i). A detailed proof can be found in the appendix.
\(\square \)
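The exchange argument in part (i) is an instance of the classical rearrangement inequality: with the multiset of completion times fixed, pairing larger weights with smaller completion times minimizes \(\sum w_j C_j\). A small brute-force check of our own (illustrative only, not from the paper) confirms this on a toy example:

```python
import itertools

def pairing_cost(weights, completions):
    """Total weighted completion time for a fixed pairing of weights
    with completion times."""
    return sum(w * c for w, c in zip(weights, completions))

w = [5, 3, 1]   # weights in non-increasing order
C = [2, 4, 7]   # fixed multiset of completion times
best = min(pairing_cost(w, perm) for perm in itertools.permutations(C))
# Pairing the largest weight with the smallest completion time is optimal:
assert best == pairing_cost(w, sorted(C)) == 29
```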
Theorem 18
For a PFB without release dates and a fixed number
m of machines, the total weighted completion time can be minimized in
\(O(n^{m^2+2m-2})\) time. The same holds for the maximum lateness and the total tardiness.
Proof
With a time complexity of
\(O(n^{m^2+4m+1})\), this statement follows directly from Theorems
12 and
17. The improvement in the running time can be obtained by the following two observations. First note that, due to vanishing release dates, the size of
\(\varGamma _i\) is at most
\(n^i\) instead of
\(n^{i+1}\). Comparing with the estimate (
4) in the proof of Theorem
12, this yields an improvement by the factor
\(n^{2m}\) because each of the
m factors
\(\left| \varGamma _i\right| ^2\) can be bounded by
\(n^{2i}\) instead of
\(n^{2i+2}\). Second, Sung et al. (
2000, Theorem 1) showed that it is optimal to use the so-called last-only-empty batching on the first machine
\(M_1\), that is, to completely fill all batches on
\(M_1\), except for the last batch, which might contain fewer jobs if
n is not a multiple of
\(b_1\). If we do so, the schedule of
\(M_1\) is fixed. This also fixes all values of the parameters
\(k_1\) and
\(t_1\) dependent on the current job index
j. Hence, in the runtime estimation (
4), we may leave out the factor
\(b_1\left| \varGamma _1\right| ^2\). Since both
\(\left| \varGamma _1\right| \) and
\(b_1\) are bounded by
n (already considering the improvement above), this results in a further improvement by a factor of
\(n^3\) in the total runtime. Therefore, we obtain a total time complexity of
$$\begin{aligned} O(n^{m^2+4m+1-2m-3})=O(n^{m^2+2m-2}). \end{aligned}$$
\(\square \)
It is also possible to minimize the (weighted) number of late jobs in
\(O(n^{m^2+2m-1})\) time. However, to achieve this result, it is necessary to adjust the algorithm from Sect.
3 slightly. Suppose that jobs are ordered in an earliest due date order and suppose further
\({\mathcal {J}}\) is the set of on-time jobs in an optimal solution for PFB to minimize the (weighted) number of late jobs. Note that in this case, we can find an optimal schedule by first scheduling all jobs in
\({\mathcal {J}}\), in earliest due date order, using the algorithm from Sect.
3, and then schedule all late jobs in an arbitrary, feasible way.
Usually, however, the optimal set of on-time jobs is not known in advance. Thus the dynamic program has to be adapted in such a way that it finds the optimal set of on-time jobs and a corresponding optimal schedule simultaneously. This can be done by, roughly speaking, allowing the algorithm to ignore a job: in the
jth step of the algorithm, when job
\(J_j\) (in earliest due date order) is scheduled, the algorithm can not only decide to schedule the job in sequence, but can alternatively decide to skip it, making it late (and paying a penalty
\(w_j\)). In all other regards, the algorithm remains exactly the same as presented in Sect.
3. The technical details to prove correctness and running time of the algorithm are, as one would expect, very similar to Sect.
3 and are moved to the appendix.
We obtain the following theorem. The additional factor
n in comparison to Theorem
18 stems from the fact that we do not know the optimal permutation in advance, and therefore we cannot make use of lastonlyempty batching on the first machine as easily as in the proof of Theorem
18.
Theorem 19
For a PFB without release dates and a fixed number
m of machines, the weighted number of late jobs can be minimized in
\({\mathcal {O}}(n^{m^2+2m-1})\) time.
In particular, combining Theorems
18 and
19, we obtain that, for any fixed number
m of machines, the problems
$$\begin{aligned} Fm \mid p_{ij} = p_i, p\text {-batch}, b_i \mid f \end{aligned}$$
with
\(f\in \{\sum w_jC_j, L_{\max }, \sum T_j, \sum U_j, \sum w_j U_j \}\) can be solved in polynomial time.
Note that, while minimizing makespan and total completion time in a PFB with a fixed number
m of machines and without release dates can be handled in the same manner, a time complexity of
\(O(n^{m^2+2m-2})\) is strictly speaking no longer polynomial for those two problems. Indeed, since in problems
$$\begin{aligned} Fm\mid p_{ij} = p_i, p\text {-batch}, b_i\mid f \end{aligned}$$
with
\(f \in \{C_{\max }, \sum C_j \}\), jobs are completely identical, an instance can be encoded concisely by simply encoding the number
n of jobs, rather than every job on its own. Such an encoding of the jobs takes only
\(O(\log n)\) space, meaning that even algorithms linear in
n would no longer be polynomial in the length of the instance.
In the literature, scheduling problems for which such a concise encoding in
\(O(\log n)\) space is possible are usually referred to as
high-multiplicity scheduling problems (Hochbaum and Shamir
1990,
1991). They are often difficult to handle and require additional algorithmic consideration, since even specifying a full schedule normally takes at least
O(
mn) time (one start and end time for each job on each machine). In order to resolve this issue, for these highmultiplicity scheduling problems special attention is often paid to finding optimal schedules with specific, regular structures. Such a structure may then allow for a concise
\(O(\log n)\) encoding of the solution. Notice, however, that a priori the dynamic programming algorithm presented in this paper does not give rise to such a regular structure.
In the case of
\(m =2\) machines and the makespan objective, Hertrich (
2018) shows that the
O(
n) algorithm by Ahmadi et al. (
1992) can be adjusted to run in polynomial time, even in the high-multiplicity sense. For all other cases, the theoretical complexity of the two problems
$$\begin{aligned} Fm\mid p_{ij} = p_i, p\text {-batch}, b_i\mid f \end{aligned}$$
with
\(f \in \{C_{\max }, \sum C_j \}\) is left open for future research. However, note that in practical applications jobs usually have additional meaning attached (for instance patient identifiers, in our pharmacy example from the beginning) and thus such instances are usually encoded with a length of at least
n.
5 Conclusions and future work
In this paper, motivated by an application in modern pharmaceutical production, we investigated the complexity of scheduling a proportionate flow shop of batching machines (PFB). The focus was to provide the first complexity results and polynomial time algorithms for PFBs with
\(m> 2\) machines.
Our main result is the construction of a new algorithm, using a dynamic programming approach, which schedules a PFB with release dates to minimize the makespan or total completion time in polynomial time for any fixed number
m of machines. In addition, we showed that, if all release dates are equal, the constructed algorithm can also be used to minimize the weighted total completion time, the maximum lateness, the total tardiness, and the (weighted) number of late jobs. Hence, for a PFB without release dates and with a fixed number
m of machines, these objective functions can be minimized in polynomial time. Previously these complexity results were only known for the special case of
\(m=2\) machines, while for each
\(m\ge 3\) the complexity status was unknown. For minimizing weighted total completion time and weighted number of late jobs, these results were even unknown in the case of
\(m=2\).
An important structural result in this paper is that permutation schedules are optimal for PFBs with release dates and the makespan or total completion time objective, as well as for any PFB problem without release dates. This result was needed in order to show that the constructed algorithm can be correctly applied to the problems named above.
Concerning PFBs with release dates, recall that in the presence of release dates permutation schedules are not necessarily optimal for traditional scheduling objective functions other than makespan and total completion time (see Example
4). However, there are several special cases for which permutation schedules remain optimal, in particular if the order of release dates corresponds well to the order of due dates (in the case of due date objectives) and/or weights (in the case of weighted objectives). For example, Hertrich (
2018) shows that, when minimizing the weighted total tardiness, if there exists a job order
\(\sigma \) that is an earliest release date ordering, an earliest due date ordering, and a non-increasing weight ordering at the same time, then there exists an optimal schedule which is a permutation schedule with jobs ordered by
\(\sigma \). This, of course, implies analogous results for weighted total completion time, total tardiness and maximum lateness (again, see Hertrich (
2018)). In those special cases, clearly the algorithm presented in this paper can be used to solve the problems in polynomial time. Still, the complexity status of PFBs with release dates in the general case, where orderings do not correspond well with each other, remains open for all traditional scheduling objectives other than makespan and total completion time.
Another problem left open for future study is the complexity of minimizing the weighted total tardiness in the case where all release dates are equal. This problem is open, even for the special case of
\(m = 2\). In this paper, it is shown that permutation schedules are optimal for all objective functions in the absence of release dates. Furthermore, if an optimal job order is known and the number of machines is fixed, then an optimal schedule can be found in polynomial time, using the algorithm presented in this paper.
Unfortunately, the complexity of finding an optimal job order to minimize the weighted total tardiness in a PFB remains open. Note that for scheduling a single batching machine with equal processing times, and for the usual proportionate flow shop without batching, the weighted total tardiness can be minimized in polynomial time (Brucker and Knust (
2009)). Here, the proportionate flow shop can be reduced to the single machine problem with all processing times equal to the processing time of the bottleneck machine (see, e.g., Sung et al. (
2000)). However, such a reduction is not possible for PFBs, since there may be several local bottleneck machines which influence the quality of a solution.
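The bottleneck reduction mentioned above can be sketched in a few lines (a hedged illustration, not the paper's code; it assumes all jobs released at time 0, no batching, and uses hypothetical helper names): in a proportionate flow shop, the j-th job in the permutation completes at \(\sum _i p_i + (j-1)\max _i p_i\), which matches the standard flow shop recurrence.

```python
def flow_shop_completions(p, n):
    """Completion times of n identical jobs on machines with times p,
    via the standard recurrence C[i][j] = max(C[i-1][j], C[i][j-1]) + p_i."""
    m = len(p)
    C = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            C[i][j] = max(C[i - 1][j], C[i][j - 1]) + p[i - 1]
    return [C[m][j] for j in range(1, n + 1)]

def bottleneck_completions(p, n):
    """Single-machine reduction: job j completes at sum(p) + (j-1) * max(p)."""
    return [sum(p) + (j - 1) * max(p) for j in range(1, n + 1)]
```

Both functions produce identical completion-time vectors, which is exactly why due-date objectives for the proportionate flow shop reduce to a single bottleneck machine; as the text notes, this breaks down for PFBs with several local bottlenecks.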
Observe also that, in the special case of
\(m = 2\) machines, given an arbitrary schedule for the first machine, finding an optimal schedule for the second machine is similar to solving problem
\(1\mid p_j = p, r_j \mid \sum w_j T_j\) or the analogous problem on a batching machine, which also both have open complexity status (see Baptiste (
2000), Brucker and Knust (
2009)). So attempting to split up the problem in such a manner does not work either. Of course, if instead of an arbitrary schedule on the first machine, some special schedule is selected, it may be possible that the special structure for the release dates on the second machine helps to solve the problem. Notice, though, that this still leaves open the question of which schedule to select on the first machine.
The last open complexity question concerns the case where the number of machines is no longer fixed, but instead part of the input. We are unfortunately not aware of any promising approaches to find reductions from NP-hard problems. However, interestingly, Hertrich (
2018) proves that for all versions of PFBs, minimizing the total completion time is always at least as hard as minimizing the makespan. This reduction does not hold for scheduling problems in general.
One noteworthy special case, where positive complexity may be more readily achievable, arises from fixing the number of jobs
n, leaving only the number of machines
m as part of the input. Note that in this case, high-multiplicity, as described in Sect.
4, is not a concern: since
n is fixed, a schedule can be output in
O(
m) time and since each machine has an individual processing time, the input also has at least size
O(
m). For PFBs with exactly two jobs,
\(n=2\), and an arbitrary number of machines, Hertrich (
2018) shows that the makespan can be minimized in
\(O(m^2)\) time. A machine-wise dynamic program is provided that finds all schedules in which the completion time of one job cannot be improved without delaying the other job. In other words, the program constructs all Pareto-optimal schedules if the completion time of each job is considered as a separate objective. If the number of jobs
n is larger than 2, then the dynamic program can still be applied, but its runtime in that case is pseudopolynomial (see Hertrich (
2018)).
Finally, note that while the algorithm provided in this paper has the benefit of being very general, thus being applicable to many different problems, it incurs some practical limitations due to its relatively large running time. For example, for
\(m=2\), our algorithm to minimize the makespan has a runtime of
\({\mathcal {O}}(n^{6})\) (Corollary
14). In contrast, the algorithm specialized to the case of
\(m=2\) by Sung and Yoon (
1997) runs in
\({\mathcal {O}}(n^2)\). The reason for the larger runtime of our dynamic program is that, due to its generality, it needs to consider all possible job completion times in the set
\(\varGamma \), while the algorithm by Sung and Yoon (
1997) makes use of the fact that this is not necessary in the special case
\(m=2\).
Now that many of the previously open complexity questions are solved, for future research it would be interesting to find more practically efficient algorithms, in particular for small numbers of machines like
\(m = 3\) or
\(m = 4\), and/or for specific objective functions. This might be possible by using some easy structural properties, which were not of use for speeding up the algorithm in the general case but which may well help with dedicated algorithms for small numbers of machines. For example, if the objective function is regular, then on any machine, when a batch is started, it is always optimal to include as many jobs as possible (either all waiting jobs, or as many as the maximum batch size). For the general algorithm provided in this paper, this property was not of use, as our algorithm adds the jobs one by one to the schedule and it only becomes clear how many jobs wait at a machine during the backtracking phase. For a dedicated algorithm, on the other hand, it may well be possible to use this property to reduce the number of possible schedules that have to be considered.
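The full-batch property can be illustrated on a single batching machine (a minimal hypothetical model, not the paper's algorithm: all jobs available at time 0, batch capacity b, per-batch processing time p; function names are illustrative). Filling every started batch maximally never delays any job's completion compared to an under-filled batching.

```python
def completions(batch_sizes, p):
    """Completion time of each job on one batching machine, all jobs
    available at time 0: batches run back to back, each taking time p."""
    out, t = [], 0
    for size in batch_sizes:
        t += p
        out.extend([t] * size)
    return out

def full_batches(n, b):
    """Greedy batching: fill every batch up to capacity b;
    only the last batch may be smaller."""
    sizes = [b] * (n // b)
    if n % b:
        sizes.append(n % b)
    return sizes
```

For instance, with 5 jobs, capacity 2 and batch time 3, the full batching [2, 2, 1] gives completion times that dominate those of the under-filled batching [1, 2, 2] job by job, consistent with the property for regular objectives.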
If fast, dedicated algorithms cannot be obtained, then instead approximations or heuristics need to be considered in order to solve PFBs in practice. In the literature, approximations and heuristics often use restrictions to permutation schedules in order to simplify difficult scheduling problems. To this end, the results on the optimality of permutation schedules presented in this paper can be of great use to future work. In particular, the results from Sect.
2 can be of help when designing fast heuristics for makespan, total completion time and related objective functions. For the same reason, it would be interesting to quantify the quality of permutation schedules (e.g. in terms of an approximation factor) for PFB problems where permutation schedules are not optimal.
Another approach to improving the algorithms’ efficiency, especially if negative complexity results for the case with an arbitrary number of machines are achieved, would be to consider PFBs from a parameterized complexity point of view. Recently, parameterized complexity has started to receive more attention in the scheduling community (cf. Mnich and Wiese (
2015), Weiß et al. (
2016), Mnich and van Bevern (
2018), Diessel and Ackermann (
2019)). A problem with input size
n is said to be
fixed-parameter tractable with respect to a parameter
k if it can be solved in a running time of
\({\mathcal {O}}(f(k)p(n))\) where
f is an arbitrary computable function and
p a polynomial not depending on
k. Although our algorithm is polynomial for any fixed number
m of machines, it is not fixed-parameter tractable in
m because
m appears in the exponent of the running time. Clearly, a fixed-parameter tractable algorithm would be preferable, both in theory and, most likely, in practice, if one could be found.
Open AccessThis article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit
http://creativecommons.org/licenses/by/4.0/.
Appendix A: Proof of Theorem 17 (ii)
In this appendix we give a detailed proof of Theorem
17 (ii). Assume jobs are indexed in any given earliest due date ordering, i.e.
\(d_{1}\le d_{2}\le \dots \le d_{n}\). By Theorem
16, there exists an optimal permutation schedule
\(S^*\) with respect to maximum lateness or total tardiness. Suppose there is at least one pair of jobs
\(J_{j_1}\),
\(J_{j_2}\) with
\(j_1 < j_2\), but
\(C_{j_1}(S^*)>C_{j_2}(S^*)\). Swapping
\(J_{j_1}\) and
\(J_{j_2}\) in
\(S^*\) yields a new permutation schedule
\(S^\prime \).
We first show that
\(L_{\max } (S^\prime ) \le L_{\max }(S^*)\). Because the lateness of all jobs other than
\(J_{j_1}\) and
\(J_{j_2}\) remains unchanged when performing the swap, it suffices to show that
$$\begin{aligned} \max \{L_{j_1}(S^\prime ), L_{j_2}(S^\prime )\} \le \max \{L_{j_1}(S^*), L_{j_2}(S^*)\}, \end{aligned}$$
where
\(L_j(S^\prime ) = C_j(S^\prime ) - d_j\). However, due to
\(j_1 < j_2\), we have
\(d_{j_1} \le d_{j_2}\). Hence, with
\(C_{j_1}(S^*)>C_{j_2}(S^*)\) it follows that
$$\begin{aligned} L_{j_1}(S^\prime ) = C_{j_2}(S^*) - d_{j_1} < C_{j_1}(S^*) - d_{j_1} = L_{j_1}(S^*) \end{aligned}$$
and
$$\begin{aligned} L_{j_2}(S^\prime ) = C_{j_1}(S^*) - d_{j_2} \le C_{j_1}(S^*) - d_{j_1} = L_{j_1}(S^*), \end{aligned}$$
which finishes the proof for maximum lateness.
Finally, we show that
\(\sum T_j (S^\prime ) \le \sum T_j(S^*)\). Because the tardiness of all jobs other than
\(J_{j_1}\) and
\(J_{j_2}\) remains unchanged when performing the swap, it suffices to show that
$$\begin{aligned} T_{j_1}(S^\prime ) + T_{j_2}(S^\prime ) \le T_{j_1}(S^*)+ T_{j_2}(S^*), \end{aligned}$$
where
\(T_j(S^\prime ) = \max \{C_j(S^\prime ) - d_j,0\}\). We distinguish three cases. Firstly, suppose
\(J_{j_1}\) and
\(J_{j_2}\) are both late in
\(S^\prime \), i.e.,
\(C_{j_1}(S^\prime )> d_{j_1}\) and
\(C_{j_2}(S^\prime )> d_{j_2}\). Then it follows that
$$\begin{aligned} T_{j_1}(S^\prime ) + T_{j_2}(S^\prime )&= C_{j_1}(S^\prime ) - d_{j_1} + C_{j_2}(S^\prime ) - d_{j_2}\\&= C_{j_2}(S^*) - d_{j_1} + C_{j_1}(S^*) - d_{j_2}\\&= C_{j_1}(S^*) - d_{j_1} + C_{j_2}(S^*) - d_{j_2}\\&\le T_{j_1}(S^*) + T_{j_2}(S^*). \end{aligned}$$
Secondly, suppose
\(J_{j_1}\) is on time in
\(S^\prime \), i.e.
\(C_{j_1}(S^\prime )\le d_{j_1}\). Using
\(d_{j_1} \le d_{j_2}\), we obtain
$$\begin{aligned} T_{j_1}(S^\prime ) + T_{j_2}(S^\prime )&= 0 + \max \{C_{j_2}(S^\prime ) - d_{j_2},0\}\\&= \max \{C_{j_1}(S^*) - d_{j_2},0\}\\&\le \max \{C_{j_1}(S^*) - d_{j_1},0\}\\&= T_{j_1}(S^*)\\&\le T_{j_1}(S^*) + T_{j_2}(S^*). \end{aligned}$$
Thirdly, suppose
\(J_{j_2}\) is on time in
\(S^\prime \), i.e.
\(C_{j_2}(S^\prime )\le d_{j_2}\). Using
\(C_{j_1}(S^*)> C_{j_2}(S^*)\), we obtain
$$\begin{aligned} T_{j_1}(S^\prime ) + T_{j_2}(S^\prime )&= \max \{C_{j_1}(S^\prime ) - d_{j_1},0\}+0\\&= \max \{C_{j_2}(S^*) - d_{j_1},0\}\\&\le \max \{C_{j_1}(S^*) - d_{j_1},0\}\\&= T_{j_1}(S^*)\\&\le T_{j_1}(S^*) + T_{j_2}(S^*). \end{aligned}$$
Hence, performing the described swap increases neither the maximum lateness nor the total tardiness. Therefore, as in (i), we can perform such exchanges sequentially to obtain an optimal permutation schedule with jobs scheduled in the given earliest due date order. Hence, any given earliest due date ordering of the jobs is optimal.
\(\square \)
Appendix B: Proof of Theorem 19
Suppose that jobs are indexed in an earliest due date order, i.e.
\(d_1\le d_2 \le ... \le d_n\). As in Sect.
3, the modified dynamic program tries to schedule these jobs one by one, but it is also allowed to leave out jobs that will be late anyway; these are then scheduled at the end, after all on-time jobs. This procedure is justified by the following lemma.
Lemma 20
Consider a PFB instance
I without release dates to minimize the weighted number of late jobs. Suppose jobs are indexed by an earliest due date ordering. Then there exists an optimal permutation schedule in which the on-time jobs are scheduled in the order of their indices.
Proof
By Theorem
16, there exists an optimal permutation schedule
\(S^*\). Let
\({\mathcal {J}}\) be the set of ontime jobs in
\(S^*\). Let
\(I_{\mathcal {J}}\) be the modified PFB instance that contains only the jobs in
\({\mathcal {J}}\). By ignoring all jobs outside of
\({\mathcal {J}}\), schedule
\(S^*\) defines a feasible permutation schedule
\(S^*_{\mathcal {J}}\) for
\(I_{\mathcal {J}}\) that has maximum lateness 0. By Theorem
17 (ii), there exists also a permutation schedule
\(S'_{\mathcal {J}}\) for
\(I_{\mathcal {J}}\) in which jobs are scheduled in the order of their indices and that has maximum lateness 0. By scheduling the remaining jobs arbitrarily after the on-time jobs,
\(S'_{\mathcal {J}}\) can be extended to a schedule
\(S'\) for
I which has the same weighted number of late jobs as
\(S^*\) and in which all on-time jobs are scheduled in the order of their indices.
\(\square \)
Recall that for a PFB instance
I and a job index
\(j\in [n]\),
\(I_j\) denotes the modified instance containing only jobs
\(J_1,\dots ,J_j\). Also, recall the notation
\(\varGamma =\varGamma _1 \times \varGamma _2 \times \ldots \times \varGamma _m\) and
\({\mathcal {B}}=\left[ b_1\right] \times \left[ b_2\right] \times \ldots \times \left[ b_m\right] \), where
\(\times \) is the standard Cartesian product. The variables of the new dynamic program are defined almost as in Sect.
3. However, in contrast, we say that a permutation schedule
S for instance
\(I_j\)
corresponds to vectors
$$\begin{aligned} t = (t_1, t_2, \ldots , t_i, \ldots , t_{m-1}, t_m) \in \varGamma \end{aligned}$$
and
$$\begin{aligned} k = (k_1, k_2, \ldots , k_i, \ldots , k_{m-1}, k_m) \in {\mathcal {B}} \end{aligned}$$
if the last on-time job in
S finishes machine
\(M_i\) at time
\(t_i\) and is processed in a batch with exactly
\(k_i\) on-time jobs there, for all
\(i\in [m]\). If there are no on-time jobs, then
S cannot correspond to any vectors
t and
k. Note that we do not require that
\(J_j\) itself is on time, just that there is some on-time job. The difference to Sect.
3 is that we ignore all late jobs in this definition. Therefore, only the on-time jobs are assumed to be processed in the order of job indices.
Similar to Sect.
3, we define
\(\widetilde{{\mathcal {S}}}(j,t,k)\) to be the set of feasible permutation schedules
S for instance
\(I_j\) satisfying the following properties:
1. the on-time jobs in S are ordered by their indices,
2. S corresponds to t and k, and
3. S is \(\varGamma \)-active.
The variables of our new dynamic program are defined as follows. For
\(j\in [n]\),
\(t\in \varGamma \), and
\(k\in {\mathcal {B}}\), let
$$\begin{aligned} w(j,t,k)=w(j,t_1,t_2,\dots ,t_m,k_1,k_2,\dots ,k_m) \end{aligned}$$
be the minimum weighted number of late jobs of a schedule
\(S\in \widetilde{{\mathcal {S}}}(j,t,k)\). If no such schedule exists, the value of
w is defined to be
\(+\infty \).
Using the above definitions as well as Lemmas
9 and
20, if there is at least one on-time job in an optimal schedule, the minimum weighted number of late jobs is given by
\(\min _{t,k} w(n,t,k)\),
\(t\in \varGamma \),
\(k\in {\mathcal {B}}\). In Lemmas
21 and
22 (below) we provide formulas to recursively compute the values
w(
j,
t,
k),
\(j=1,2,\dots ,n\),
\(t\in \varGamma \),
\(k\in {\mathcal {B}}\). In total, we then obtain Algorithm 2 as our modified dynamic program.
We now turn to providing and proving the correctness of the recursive formula to compute the values
w(
j,
t,
k),
\(j=1,2,\dots ,n\),
\(t\in \varGamma \),
\(k\in {\mathcal {B}}\). As the recursion is slightly more complicated than in Sect.
3, we first introduce two kinds of auxiliary variables
x and
y for our dynamic program. These are then used to formulate and prove Lemmas
21 and
22, which deal with the starting values
w(1,
t,
k) and the actual recurrence relation, respectively, in analogy to Lemmas
10 and
11. Finally, by connecting these results, we obtain the correctness and runtime of Algorithm 2 as stated in Theorem
19.
For any
\(j\in [n]\),
\(t_i\in \varGamma _i\), and
\(k_i\in [b_i]\),
\(i\in [m]\), let
$$\begin{aligned} x(j,t,k) = \left\{ \begin{array}{ll} \sum _{j'=1}^{j-1} w_{j'}, &{} \text {if (i)--(iv) hold},\\ +\infty , &{} \text {otherwise,} \end{array} \right. \end{aligned}$$
(5)
where
(i) \(k_i=1\) for all \(i\in [m]\),
(ii) \(t_1\ge p_1\),
(iii) \(t_{i+1}\ge t_{i}+p_{i+1}\) for all \(i\in [m-1]\), and
(iv) \(t_m\le d_j\).
The values of
x can be interpreted as the objective value of a schedule in
\(\widetilde{{\mathcal {S}}}(j,t,k)\) for
\(I_j\) that schedules only
\(J_j\) on time and all other jobs late.
For
\(j>1\),
\(t_i\in \varGamma _i\), and
\(k_i\in [b_i]\),
\(i\in [m]\), let
$$\begin{aligned} \begin{aligned}&y(j,t,k) \\&\quad = \left\{ \begin{array}{ll} \min \{w(j-1,t',k') \mid (**) \}, &{} \text {if (ii)--(iv) hold},\\ +\infty , &{} \text {otherwise,} \end{array} \right. \end{aligned} \end{aligned}$$
(6)
where the minimum over the empty set is defined to be
\(+\infty \). Here,
\((**)\) is given by conditions
(v) \(t'_i\in \varGamma _i\),
(vi) \(k'_i\in [b_i]\),
(vii) if \(k_i=1\), then \(t'_i\le t_i-p_i\), and
(viii) if \(k_i>1\), then \(t'_i=t_i\) and \(k'_i=k_i-1\),
for all
\(i\in [m]\). The values of
y can be interpreted as the best possible objective value of a schedule in
\(\widetilde{{\mathcal {S}}}(j,t,k)\) that schedules
\(J_j\) and at least one other job on time.
Analogous to Lemma
10, we obtain the following lemma.
Lemma 21
For
\(t_i\in \varGamma _i\) and
\(k_i\in [b_i]\),
\(i\in [m]\), we have
\(w(1,t,k)=x(1,t,k)\).
Proof
As in the proof of Lemma
10, conditions (i)–(iii) are necessary for the existence of a schedule in
\(\widetilde{{\mathcal {S}}}(1,t,k)\). Also, since a schedule for
\(I_1\) can only correspond to
t and
k if
\(J_1\) is on time, (iv) is necessary as well. Conversely, if (i)–(iv) are satisfied, then the vector
t of completion times uniquely defines a feasible schedule
\(S \in \widetilde{{\mathcal {S}}}(1,t,k)\) by
\(c_{i1}(S)=t_i\). Note that
S is the only schedule in
\(\widetilde{{\mathcal {S}}}(1,t,k)\) and has objective value 0 and therefore, clearly, is optimal.
\(\square \)
Analogous to Lemma
11, we now turn to the recurrence formula to calculate
w for
\(j>1\) from the values of
w for
\(j-1\).
Lemma 22
For
\(j\ge 2\),
\(t_i\in \varGamma _i\), and
\(k_i\in [b_i]\),
\(i\in [m]\), the values of
w are given by
$$\begin{aligned} w(j,t,k) = \min \{x(j,t,k), y(j,t,k), w(j-1,t,k)+w_j\}. \end{aligned}$$
Roughly speaking, for scheduling
\(J_j\) there are three possibilities: either job
\(J_j\) is scheduled as the only on-time job, as one of several on-time jobs, or as a late job. In the first two cases,
w(j, t, k) equals
x(j, t, k) or
y(j, t, k), respectively, while in the third case it equals
\(w(j-1,t,k)+w_j\), as we will see in the proof.
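The three-way minimum itself is a one-liner (a hedged sketch; the arguments stand for the already-computed values x(j,t,k), y(j,t,k), and w(j-1,t,k), and the function name is illustrative):

```python
import math

def w_recurrence(x_val, y_val, w_prev, w_j):
    """Lemma 22 (sketch): J_j is either the only on-time job (x_val),
    one of several on-time jobs (y_val), or late (w_prev + w_j)."""
    return min(x_val, y_val, w_prev + w_j)
```

Infinite values propagate naturally: if no schedule of a given kind exists, the corresponding term is math.inf and the other cases win the minimum.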
Proof of Lemma 22:
We first prove the “
\(\ge \)” direction. If the left-hand side equals
\(+\infty \), then this direction follows immediately. Otherwise, by definition of
w, there exists a schedule
S for
\(I_j\) in
\(\widetilde{{\mathcal {S}}}(j,t,k)\) with objective value
w(
j,
t,
k). Schedule
S naturally defines a feasible permutation schedule
\(S'\) for instance
\(I_{j1}\) by ignoring job
\(J_j\). We distinguish three cases.
Firstly, if
\(J_j\) is not on time in
S, then
\(S'\) is in
\(\widetilde{{\mathcal {S}}}(j-1,t,k)\) and has an objective value of
\(w(j,t,k)-w_j\) because
\(J_j\) is not contained in instance
\(I_{j-1}\). Therefore, by definition of
\(w(j-1,t,k)\), it follows that
\(w(j-1,t,k)\le w(j,t,k)-w_j\), which implies the “
\(\ge \)” direction in the first case.
Secondly, if
\(J_j\) is the only ontime job in
S, then (i) must hold because
S corresponds to
k and only ontime jobs are counted in this definition. Furthermore, (ii) and (iii) must hold because
S is feasible and corresponds to
t. Finally (iv) holds because
\(J_j\) is on time. Hence, we obtain
\(x(j,t,k) = \sum _{j'=1}^{j-1} w_{j'} = w(j,t,k)\) in this case, which implies the “
\(\ge \)” direction in the second case.
Thirdly, if
\(J_j\) is one of at least two on-time jobs in
S, then
\(S'\) corresponds to unique vectors
\(t'\) and
\(k'\). Hence, we obtain
\(S'\in \widetilde{{\mathcal {S}}}(j-1,t',k')\) and, therefore,
$$\begin{aligned} w(j,t,k)&= \sum _{j'=1}^j w_{j'} U_{j'}(S) \\&= \sum _{j'=1}^{j-1} w_{j'} U_{j'}(S')\\&\ge w(j-1,t',k'). \end{aligned}$$
Analogous arguments to the proof of Lemma
11 show that
\(t'\) and
\(k'\) satisfy
\((**)\). Moreover, as in the second case, (ii)–(iv) hold because
S is feasible, corresponds to
t, and
\(J_j\) is on time. Therefore, we have
$$\begin{aligned} w(j,t,k) \ge w(j-1,t',k') \ge y(j,t,k), \end{aligned}$$
which completes the “
\(\ge \)” direction.
Now we prove the “
\(\le \)” direction. If the right-hand side equals
\(+\infty \), then this direction follows immediately. Otherwise, we again distinguish three cases.
Firstly, suppose the minimum on the right-hand side is attained by the term
\(w(j-1,t,k)+w_j\). Since this must be finite, there is a schedule
\(S'\in \widetilde{{\mathcal {S}}}(j-1,t,k)\) for instance
\(I_{j-1}\) with objective value
\(w(j-1,t,k)\). By scheduling
\(J_j\) as a late job,
\(S'\) can be extended to a schedule
S that has objective value
\(w(j-1,t,k)+w_j\). Moreover, since
\(J_j\) is late,
S still corresponds to
t and
k, i.e.
\(S\in \widetilde{{\mathcal {S}}}(j,t,k)\), which implies
\(w(j,t,k)\le w(j-1,t,k)+w_j\). Hence, the “
\(\le \)” direction follows in this case.
Secondly, suppose the minimum on the righthand side is attained by the term
x(
j,
t,
k). Since this must be finite, we obtain that (i)–(iv) hold. Hence, the schedule
S which schedules
\(J_j\) such that
\(c_{ij}=t_i\) and all other jobs late is feasible for
\(I_j\), an element of
\(\widetilde{{\mathcal {S}}}(j,t,k)\), and has objective value
\(x(j,t,k)=\sum _{j'=1}^{j-1}w_{j'}\). This implies
\(w(j,t,k)\le x(j,t,k)\). Hence, the “
\(\le \)” direction follows in this case.
Thirdly, suppose the minimum on the righthand side is attained by the term
y(
j,
t,
k). Since this must be finite, we obtain that (ii)–(iv) hold. Let
\(t'\) and
\(k'\) be the minimizers in the definition of
y(
j,
t,
k). Again due to finiteness, there must be a schedule
\(S'\in \widetilde{{\mathcal {S}}}(j-1,t',k')\) for
\(I_{j-1}\) with objective value
\(w(j-1,t',k')=y(j,t,k)\). By ignoring late jobs in
\(S'\) (which can be scheduled arbitrarily late), we can use the same arguments as in the proof of Lemma
11 to extend
\(S'\) to a schedule
\(S\in \widetilde{{\mathcal {S}}}(j,t,k)\) for
\(I_j\). Due to (iv),
\(J_j\) is on time in
S. Hence,
S has the same objective value as
\(S'\), namely
y(
j,
t,
k). Thus, it follows that
\(w(j,t,k)\le y(j,t,k)\), which completes the “
\(\le \)” direction.
\(\square \)
Analogous to Theorem
12, we now deduce the desired theorem by proving correctness and running time of Algorithm 2.
Theorem 19
For a PFB without release dates and a fixed number
m of machines, the weighted number of late jobs can be minimized in
\({\mathcal {O}}(n^{m^2+2m-1})\) time.
Proof
Without loss of generality, jobs are indexed in an earliest due date ordering. By Lemmas
21 and
22, Algorithm 2 correctly computes the values of function
w. If there is a feasible schedule where at least one job is on time, we obtain due to Lemmas
9 and
20 that the minimum weighted number of late jobs is exactly
\(\min _{t,k} w(n,t,k)\), where the minimum is taken over each
\(t\in \varGamma \) and
\(k\in {\mathcal {B}}\). In this case the backtracking procedure in Step
3 retrieves an optimal schedule. If, however, in every feasible schedule all jobs are late, then all values of
w will be
\(+\infty \). In this case, any feasible schedule is optimal, with objective value
\(\sum _{j=1}^n w_j\). Hence, in any case, the schedule returned by Algorithm 2 is optimal.
Recall from the proof of Theorem
12 that
\(\left| \varGamma \right| \cdot \left| {\mathcal {B}} \right| \le n^{\frac{m^2}{2}+\frac{5m}{2}}\). Therefore, Step
1 of Algorithm 2 involves at most
\(n^{\frac{m^2}{2}+\frac{5m}{2}}\) iterations, each of which can be performed in constant time because
m is fixed.
Similarly, Step
2 involves at most
\(\left( n-1\right) n^{\frac{m^2}{2}+\frac{5m}{2}} \le n^{\frac{m^2}{2}+\frac{5m}{2}+1}\) iterations. In each of these iterations, computing
x(
j,
t,
k) with (
5) can be done in constant time since
m is fixed. The same holds for computing
w(
j,
t,
k) using Lemma
22, once
y(
j,
t,
k) is known. However, the computation of
y(
j,
t,
k) by applying (
6) is the bottleneck of this step. In full analogy to the proof of Theorem
12, we obtain that the total time complexity of computing all
y-values is bounded by
\({\mathcal {O}}\left( n\prod _{i=1}^m b_i \left| \varGamma _i\right| ^2\right) \). Due to vanishing release dates, we have
\(\left| \varGamma _i\right| \le n^i\). Comparing with the estimation (
4) in the proof of Theorem
12, this results in a runtime bound of
\({\mathcal {O}}(n^{m^2+2m+1})\).
However, the following argument allows us to reduce this runtime by another factor of
\(n^2\), resulting in
\({\mathcal {O}}(n^{m^2+2m-1})\). Even though we do not know the optimal job permutation beforehand, we know by Sung et al. (
2000, Theorem 1) that it is optimal to use the so-called last-only-empty batching on the first machine
\(M_1\), that is, to completely fill all batches on
\(M_1\), except for the last batch, which might contain fewer jobs if
n is not a multiple of
\(b_1\). This has two consequences. First, it suffices to consider the
\(\left\lceil \frac{n}{b_1} \right\rceil \) possible completion times
\(\varGamma '_1=\left\{ p_1, 2p_1, \dots , \left\lceil \frac{n}{b_1} \right\rceil p_1\right\} \) on
\(M_1\), instead of all elements of
\(\varGamma _1\). Second, for given
\(t_1\in \varGamma '_1\) and
\(k_1\in [b_1]\), the possible previous values
\(t'_1\) and
\(k'_1\) are uniquely determined. Thus, in our runtime estimate, we may replace
\(\varGamma _1\) with
\(\varGamma '_1\) and leave out the square, such that instead of the term
\(b_1\left| \varGamma _1\right| ^2\) we now have
\(b_1\left| \varGamma '_1\right| \in {\mathcal {O}}\left( b_1\frac{n}{b_1}\right) ={\mathcal {O}}(n)\). Previously, this term was bounded by
\({\mathcal {O}}(n^3)\), resulting in the claimed improvement.
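The reduced set \(\varGamma '_1\) is easy to generate directly (a hedged sketch; the helper name is hypothetical): under last-only-empty batching, the i-th batch on \(M_1\) finishes at \(i \cdot p_1\).

```python
import math

def gamma1_prime(n, b1, p1):
    """Completion times on M_1 under last-only-empty batching:
    batch i finishes at i * p1, for i = 1, ..., ceil(n / b1)."""
    return [i * p1 for i in range(1, math.ceil(n / b1) + 1)]
```

The list has only \(\lceil n/b_1\rceil \) entries instead of the up to n elements of \(\varGamma _1\), which is one of the two savings behind the improved runtime bound.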
Finally, in Step
3, we take the minimum over at most
\(n^{\frac{m^2}{2}+\frac{5m}{2}}\) many values. The backtracking and reconstruction of the final schedule can be done in time
\({\mathcal {O}}(n)\) for constant
m.
Hence, in total, Step
2 is the bottleneck and the total time complexity is
\({\mathcal {O}}(n^{m^2+2m-1})\).
\(\square \)