Abstract

Two difficulties arise when solving the set covering problem (SCP) with metaheuristic approaches: solution infeasibility and set redundancy. In this paper, we first present a review and analysis of the heuristic approaches that have been used in the literature to address these difficulties. We then present a new formulation that can be used to solve the SCP as an unconstrained optimization problem and that eliminates the need to address the infeasibility and set redundancy issues. We show that all local optimums with respect to the new formulation and a 1-flip neighbourhood structure are feasible and free of redundant sets. In addition, we adapt an existing greedy heuristic for the SCP to the new formulation and compare the adapted heuristic to the original heuristic using 88 known test problems for the SCP. Computational results show that the adapted heuristic finds better results than the original heuristic on most of the test problems in shorter computation times.

1. Introduction

The set covering problem (SCP) is a popular optimization problem that has been applied to a wide range of industrial applications, including scheduling, manufacturing, service planning, and location problems [1–4]. The SCP is NP-hard in the strong sense [5]. The mathematical formulation of the SCP is as follows. Let $U = \{e_1, \ldots, e_m\}$ be a universe of $m$ elements, and let $S = \{s_1, \ldots, s_n\}$ be a collection of $n$ subsets of $U$, where $s_j \subseteq U$ for $j = 1, \ldots, n$. Each set $s_j$ covers at least one element of $U$ and has an associated cost $c_j > 0$. The objective is to find a subcollection of sets $X \subseteq S$ that covers all of the elements in $U$ at a minimal cost. The mathematical programming model of the SCP is usually formulated as follows. (i) Let $A = (a_{ij})$ be an $m \times n$ zero-one matrix where $a_{ij} = 1$ if element $e_i$ is covered by set $s_j$ and $a_{ij} = 0$ otherwise. (ii) Let $x_j \in \{0, 1\}$, where $x_j = 1$ if set $s_j$ (with cost $c_j$) is part of the solution and $x_j = 0$ otherwise.

Minimize
$$f = \sum_{j=1}^{n} c_j x_j \qquad (1)$$
subject to
$$\sum_{j=1}^{n} a_{ij} x_j \geq 1, \quad i = 1, \ldots, m, \qquad (2)$$
$$x_j \in \{0, 1\}, \quad j = 1, \ldots, n. \qquad (3)$$

The objective function (1) drives the search toward solutions at minimal cost. Constraint (2) (full coverage constraint) imposes the requirement that all the elements of the universe must be covered. If constraint (2) is not satisfied, the solution is infeasible. If constraint (2) is satisfied and the objective function is minimized, the solution will cover all of the elements at the minimal cost (optimal solution). If constraint (2) is relaxed, the objective function will drive the search toward an empty solution because the empty solution has the lowest cost (0). These observations show that the objective function and the full coverage constraint of the SCP guide the search in two opposite directions.

When solving the model with metaheuristic algorithms, two issues arise: solution infeasibility and set redundancy. A solution to the SCP is considered to be infeasible if one or more of the elements of the universe are uncovered. A set is considered to be redundant if all the elements covered by the set are also covered by other sets in the solution.
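To make these two notions concrete, the following minimal Python sketch (ours, not taken from any cited implementation) checks feasibility and detects redundant sets on a small hypothetical instance in which each set is represented as a Python set of elements.

def is_feasible(solution, sets, universe):
    # feasible: the selected sets cover every element of the universe
    covered = set().union(*(sets[j] for j in solution))
    return covered >= universe

def redundant_sets(solution, sets):
    # a set is redundant if everything it covers is also covered by the
    # other selected sets (two sets can each be redundant on their own)
    redundant = []
    for j in solution:
        others = set().union(*(sets[k] for k in solution if k != j))
        if sets[j] <= others:
            redundant.append(j)
    return redundant

universe = {1, 2, 3}
sets = {1: {1, 2}, 2: {2, 3}, 3: {3}}
print(is_feasible({1}, sets, universe))     # False: element 3 is uncovered
print(redundant_sets({1, 2, 3}, sets))      # [2, 3]: each is redundant on its own

Note that removing all individually redundant sets at once can break feasibility; redundancy removal heuristics therefore remove sets one at a time.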

In this paper, we first review and analyze the literature to highlight the difficulties in dealing with solution infeasibility and set redundancy when solving the SCP with metaheuristic algorithms (Section 2). We then present a new formulation that can be used to solve the SCP as an unconstrained optimization problem and that eliminates the need to address the infeasibility and redundancy issues (Section 3). The new formulation uses a maximization objective that can replace both the cost minimization objective and the full coverage constraint of the classical formulation. The new formulation can also be seen as a new penalty approach that has many advantages over the existing penalty approaches for the SCP (Section 3.1). Third, we present a simple descent heuristic that is based on the new formulation and that uses a simple 1-flip neighbourhood structure (Section 4). The proposed descent heuristic is an adaptation of an existing greedy heuristic for the SCP. We show that all local optimums with respect to the new formulation and the 1-flip neighbourhood structure are feasible and free of redundant sets. Finally, the proposed descent heuristic is compared to the original greedy heuristic using 88 known set covering problems (Section 5).

2. Literature Review

In general, metaheuristic algorithms can be divided into three categories. (i) Constructive metaheuristics: in each iteration, a new local optimum is found by constructing a new solution from scratch. A level of randomness is added to the construction step in order to avoid constructing the same solution over and over. (ii) Evolutionary algorithms: in each iteration, two or more solutions are combined to create a new solution. (iii) Local search: in each iteration, the current solution is replaced by one of its immediate neighbors (the solution is usually modified slightly).

In the following sections, we review the literature on solving the SCP with metaheuristic approaches and analyze how each category of metaheuristics addresses solution infeasibility and set redundancy.

2.1. Constructive Metaheuristics

When the SCP is solved with constructive metaheuristics, the local optimums found at the end of each constructive iteration are usually feasible because the constructive iteration ends only when all of the elements are covered. For this reason, these metaheuristics do not have to deal with the infeasibility issue. However, the local optimums are not necessarily free of redundant sets, and a redundancy removal heuristic is needed. Constructive metaheuristics for the SCP include ant colony optimization [6–10], Meta-RaPS [11], and GRASP [12]. All of these metaheuristics use a dedicated redundancy removal operator that removes redundant sets at the end of each iteration.

2.2. Evolutionary Algorithms

Evolutionary algorithms for the SCP need to address both the infeasibility and set redundancy issues. Most evolutionary algorithms that are used to solve the SCP are based on the genetic algorithm (GA). Most of the GAs use a binary string solution representation where $x_j = 1$ if the set $s_j$ is part of the solution and $x_j = 0$ otherwise. The infeasibility issue arises when the crossover or mutation operator of the GA produces a child (solution) that does not cover all of the elements; a single bit flip from 1 to 0 during crossover or mutation can produce an infeasible solution. If a cost minimization objective function is used, infeasible solutions will be preferred over feasible ones because infeasible solutions are usually cheaper. Two main approaches have been used in the literature to address the infeasibility issue.

The first approach uses a repair heuristic to transform infeasible solutions into feasible solutions before the evaluation step of the GA. A greedy-like repair heuristic is usually used [13–15]. In each iteration, the repair operator covers an uncovered element by selecting a new set that covers the element and adding it to the solution. In [15], all of the solutions are repaired for evaluation, but only 5% of them are replaced with the corresponding repaired versions. The aim is to allow the search to explore infeasible regions of the search space, which tends to be more effective than limiting the search to feasible regions only. A simpler repair heuristic is used in [16]: during the evaluation of a solution, a set is added to the solution if it covers an uncovered element and is not already part of the solution. By adding new sets, repair heuristics may introduce redundant sets into the solutions. For this reason, genetic algorithms that use a repair operator also use a redundancy removal procedure that is applied after the repair and just before evaluation. A sketch of such a repair operator is given below.
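The following minimal sketch captures our reading of the greedy-like repair of [13–15] (it is not the authors' code and assumes the instance itself is feasible): uncovered elements are covered greedily by cost per newly covered element, and redundant sets are then dropped in decreasing order of cost.

def greedy_repair(solution, sets, costs, universe):
    solution = set(solution)
    covered = set().union(*(sets[j] for j in solution))
    while covered < universe:
        # add the set with the best cost per newly covered element
        j = min((j for j in sets if sets[j] - covered),
                key=lambda j: costs[j] / len(sets[j] - covered))
        solution.add(j)
        covered |= sets[j]
    # the repair may introduce redundancy: drop expensive redundant sets first
    for j in sorted(solution, key=lambda j: -costs[j]):
        if set().union(*(sets[k] for k in solution if k != j)) >= universe:
            solution.remove(j)
    return solution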

The second approach involves penalizing the objective value of infeasible solutions to drive the search toward the feasible region. A penalty term that makes infeasible solutions less attractive than feasible ones is added to the objective function. In [17], the same penalty $P$ is added to the objective value of all infeasible solutions, where $P$ is chosen high enough to guarantee that all feasible solutions have lower objective values than all infeasible solutions. A drawback of using such an objective function is that infeasible solutions cannot be compared to each other because the objective function does not reflect the degree of infeasibility. Objective functions that penalize infeasible solutions while reflecting the degree of infeasibility are proposed in [16, 18]. In [18], the penalty attributed to an infeasible solution is proportional to the number of elements that are not covered in the solution. In [16], the penalty is proportional to the minimum cost it would take to cover all of the uncovered elements. In all of the discussed penalty approaches, the penalties are high enough to ensure that all infeasible solutions have higher objective values than all feasible ones. An immediate disadvantage of using such high penalties is that feasible solutions will always be preferred over infeasible ones. As a result, infeasible solutions have low chances of surviving in the population, and the infeasible region of the search space is not effectively explored.
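The three penalty styles can be sketched as follows (the function names and the helper uncovered are ours, and the exact formulas in [16–18] may differ; in particular, the per-element cheapest-cover sum is only one common way to estimate the cost of covering the uncovered elements).

def uncovered(solution, sets, universe):
    return universe - set().union(*(sets[j] for j in solution))

def total_cost(solution, costs):
    return sum(costs[j] for j in solution)

def flat_penalty(solution, sets, costs, universe, P):
    # style of [17]: one large constant P for any infeasible solution
    return total_cost(solution, costs) + (P if uncovered(solution, sets, universe) else 0)

def count_penalty(solution, sets, costs, universe, w):
    # style of [18]: proportional to the number of uncovered elements
    return total_cost(solution, costs) + w * len(uncovered(solution, sets, universe))

def cover_cost_penalty(solution, sets, costs, universe):
    # style of [16]: estimated cost of covering the uncovered elements
    return total_cost(solution, costs) + sum(
        min(costs[j] for j in sets if e in sets[j])
        for e in uncovered(solution, sets, universe))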

2.3. Local Search

The feasibility constraint makes designing an effective local search metaheuristic for the SCP a difficult task. For this reason, few local-search-only heuristics have been developed for the SCP [8, 19]. Instead, most local search algorithms have combined local search with other techniques such as Lagrangian relaxation, subgradient optimization, group theory, and linear programming [1, 20–24]. In [25], after noting the difficulty of defining a good neighbourhood for solving the unicost set covering problem with local search, the authors proposed transforming the problem into an equivalent satisfiability problem (SAT) that can be solved more adequately with local search.

Most local search algorithms for the SCP use a simple 1-flip neighbourhood structure defined by moves that only add (remove) one set at a time to (from) the solution. When a local optimum is reached, which is usually a feasible solution, it is difficult to decide in which direction to continue the search. Two cases arise. (i) If the search space is restricted to the feasible region, only redundant sets are allowed to be removed. If no redundant sets exist in the solution, at least one redundant set must be added before a remove move is allowed to be performed. As a result, the infeasible region of the search space will not be explored, and the search will tend to fall into local optimums and cycles. A more complex neighbourhood called $k$-flip is used in [8] to make the search in the feasible region more effective. The $k$-flip neighbourhood of a given solution consists of all of the solutions that can be obtained by adding (removing) at most $k$ sets to (from) the solution. Even though the proposed heuristic is more effective than a simple 1-flip heuristic, it is not sufficient to avoid local optimums and cycles, and it is significantly slower than the 1-flip heuristics. (ii) If the search space is not restricted to the feasible region, the cost minimization objective drives the search toward the infeasible region, by removing sets from the current configuration (to minimize the cost), and it is unclear when to restore feasibility. In such situations, penalty approaches are usually used to penalize infeasible solutions.

If the penalty weights are too high, neighbors in the feasible region will be preferred over neighbors in the infeasible region, making the infeasible region unreachable. Lower or dynamic penalty weights are usually used to make the search more effective by allowing it to reach infeasible regions.

If the penalty weights are too low, the final solution found is not guaranteed to be feasible. A tabu search heuristic that uses such low penalties is proposed in [19] for the unicost set covering problem. A simple 1-flip neighbourhood structure is used, and the objective is to minimize $f = N_s + N_u$, where $N_s$ is the number of sets used in the solution and $N_u$ is the number of uncovered elements. If a set covers only one uncovered element, adding (removing) it to (from) the solution will not have any effect on the objective function, because $N_s$ and $N_u$ change by one in opposite directions. As a result, this set might be left out of the solution, making it infeasible. To overcome the fact that this objective function does not guarantee feasibility, the neighbourhood is restricted such that if a set is removed during one iteration, one or more sets must be added in the next iteration to restore feasibility. Even though such a low-penalty approach allows the search to reach the infeasible region, additional neighbourhood restrictions are used to restore feasibility, and the infeasible region is barely explored.

Dynamic penalty approaches, in which the penalty weights are repeatedly adjusted, are used to balance the search between the feasible and infeasible regions without using a repair operator or neighbourhood restrictions [1, 20–22, 26]. The most frequently used dynamic penalty approaches in the literature are based on Lagrangian relaxation [27] and subgradient optimization [28]. Dynamic penalty approaches can be very effective but are difficult to design and implement.

3. Proposed Formulation

In this work, we propose a new formulation of the SCP with a maximization objective. The aim of the proposed formulation is to express the real objective of the SCP, which is to cover all elements at a minimal cost, directly in the objective function. We view covering an element as collecting a gain at a given cost. In this perspective, we attribute a gain to each element. Because all elements must be covered, the gain attributed to each element must be higher than the cost of at least one of the sets that cover the element; otherwise, there is no benefit to covering that element. Let $\beta_i$ be the cost of the cheapest set among the sets that cover the element $e_i$. A gain $g_i = \beta_i + \epsilon$ is attributed to each element $e_i$, where $\epsilon$ is a small positive constant. (i) Let $A = (a_{ij})$ be an $m \times n$ zero-one matrix where $a_{ij} = 1$ if element $e_i$ is covered by set $s_j$ and $a_{ij} = 0$ otherwise. (ii) Let $x_j \in \{0, 1\}$, where $x_j = 1$ if set $s_j$ (with cost $c_j$) is part of the solution and $x_j = 0$ otherwise. (iii) Let $y_i \in \{0, 1\}$, where $y_i = 1$ if element $e_i$ (with gain $g_i$) is covered in the solution and $y_i = 0$ otherwise.

Maximize
$$f = \sum_{i=1}^{m} g_i y_i - \sum_{j=1}^{n} c_j x_j \qquad (4)$$
subject to
$$y_i \leq \sum_{j=1}^{n} a_{ij} x_j, \quad i = 1, \ldots, m, \qquad (5)$$
$$x_j \in \{0, 1\}, \quad y_i \in \{0, 1\}, \quad i = 1, \ldots, m, \; j = 1, \ldots, n. \qquad (6)$$

Constraint (5) is a relaxation of constraint (2) because it does not impose coverage of all the elements; its only purpose is to keep track of which elements of $U$ are part of the cover (because the gains are positive, maximizing (4) drives $y_i$ to 1 whenever element $e_i$ is covered). Constraint (6) is the integrality constraint of the mathematical programming formulation. Constraints (5) and (6) do not need to be addressed as constraints in heuristic approaches but are presented for completeness of the mathematical programming formulation.
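In a heuristic, the formulation reduces to precomputing the gains and evaluating the objective. A minimal Python sketch (ours) follows; the value of $\epsilon$ is a hypothetical choice, since the definition only requires a small positive constant.

EPS = 0.01  # hypothetical choice of the small positive constant epsilon

def gains(sets, costs, universe, eps=EPS):
    # g_i = cost of the cheapest set covering e_i, plus eps
    return {e: min(costs[j] for j in sets if e in sets[j]) + eps
            for e in universe}

def objective(solution, sets, costs, universe, g):
    # maximization objective (4): gains of covered elements minus total cost
    covered = set().union(*(sets[j] for j in solution))
    return sum(g[e] for e in covered) - sum(costs[j] for j in solution)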

Claim 1. The optimal solution of the proposed formulation is a feasible solution (it covers all of the elements).

Proof. Suppose that the optimal solution $X^*$ does not cover all of the elements and has an objective value $f^*$. Let $e_i$ be an uncovered element. By the definition of the gain $g_i$, we know that there is at least one set $s_j$ that covers element $e_i$ and has a cost $c_j \leq g_i - \epsilon < g_i$. If the set $s_j$ is added to the cover, the new objective value is $f \geq f^* + g_i - c_j \geq f^* + \epsilon > f^*$. Thus, $X^*$ is not optimal. By contradiction, we conclude that the optimal solution covers all of the elements.

Claim 2. The optimal solution of the proposed formulation covers all elements at a minimal cost.

Proof. We proved in Claim 1 that the optimal solution covers all of the elements. Hence, the first term of the objective function (4) is a constant in the optimal solution ($\sum_{i=1}^{m} g_i y_i = \sum_{i=1}^{m} g_i$). The objective function becomes
$$\text{Maximize } f = \sum_{i=1}^{m} g_i - \sum_{j=1}^{n} c_j x_j, \qquad (7)$$
which is equivalent to minimizing the total cost $\sum_{j=1}^{n} c_j x_j$. Thus, the optimal solution of the proposed formulation is the cheapest feasible solution, which is the objective of the SCP.

From the perspective of heuristic algorithms, we have replaced a constrained optimization problem with an unconstrained optimization problem that has the same optimal solutions. Unconstrained optimization problems are known to be much easier to solve with heuristic algorithms than constrained optimization problems.

3.1. Comparison to Penalty Approaches

Even though the proposed formulation is a full mathematical programming formulation of the SCP, it is similar to the existing penalty approaches, with some important differences. Dropping the constant term $\sum_{i=1}^{m} g_i$, the objective function presented in (4) can be rewritten as the minimization objective
$$f = \sum_{j=1}^{n} c_j x_j + \sum_{i=1}^{m} g_i \bar{y}_i, \qquad (8)$$
where $\bar{y}_i = 1$ if element $e_i$ is uncovered and $\bar{y}_i = 0$ otherwise. The value of the gain $g_i$ can be seen as the penalty associated with not covering the element $e_i$.

The proposed approach is different from high-penalty approaches because some infeasible solutions might have a better objective value than some feasible ones. For instance, let $U = \{e_1, e_2, e_3\}$, $s_1 = \{e_1, e_2\}$, $s_2 = \{e_2, e_3\}$, and $s_3 = \{e_3\}$. The costs of the sets are $c_1 = 5$, $c_2 = 5$, and $c_3 = 2$. The cheapest set that covers the element $e_1$ is $s_1$ with a cost $c_1 = 5$. Thus, by definition of the gain, $g_1$ is equal to $5 + \epsilon$. Similarly, we find that $g_2 = 5 + \epsilon$ and $g_3 = 2 + \epsilon$. Let $X_1 = \{s_1, s_2\}$ be a feasible solution, and let $X_2 = \{s_1\}$ be an infeasible one. Using the objective function (8), the objective value of $X_1$ is 10 and the objective value of $X_2$ is $7 + \epsilon$ ($c_1 + g_3$). Thus, the infeasible solution $X_2$ has a lower (better) objective value than the feasible solution $X_1$, which does not occur with high-penalty approaches.

The proposed approach is different from low-penalty approaches because the penalties are high enough to drive the search toward the feasible region. We showed that the optimal solutions with respect to the new formulation are guaranteed to be feasible. The proof of feasibility of the optimal solution also shows that any infeasible solution can be transformed into a feasible one with a better objective value. For instance, in the previous example, the infeasible solution $X_2$ can be transformed into a feasible solution (by adding the set $s_3$ to $X_2$) with an objective value of 7, which is lower (better) than the objective value of $X_2$ ($7 + \epsilon$).
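These comparisons can be checked numerically with a few lines of Python (a continuation of the sketches above, using the minimization form (8) and the toy instance of this section):

sets = {1: {1, 2}, 2: {2, 3}, 3: {3}}    # s1, s2, s3
costs = {1: 5, 2: 5, 3: 2}
universe = {1, 2, 3}
g = gains(sets, costs, universe)         # g1 = g2 = 5 + eps, g3 = 2 + eps

def f8(solution):
    # objective (8): total cost plus the gains of the uncovered elements
    covered = set().union(*(sets[j] for j in solution))
    return sum(costs[j] for j in solution) + sum(g[e] for e in universe - covered)

print(f8({1, 2}))    # X1, feasible: 10
print(f8({1}))       # X2, infeasible: 7 + eps, better than X1
print(f8({1, 3}))    # X2 plus s3, feasible: 7, better than X2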

The proposed penalty approach is different from dynamic penalty approaches because the penalty weights are static and no adjustment is needed.

When high-penalty approaches are used, the search process of a heuristic algorithm is disturbed by the high penalties and driven immediately to the feasible region. On the other hand, low penalties do not disturb the search but cannot ensure feasibility. The aim of our approach is to choose the lowest possible penalties that avoid disturbing the search process while ensuring feasibility. Ensuring feasibility means that any infeasible solution can be transformed to a feasible one with a better objective value.

3.2. Benefits of the New Formulation with respect to Metaheuristics

The new formulation eliminates all issues related to solution infeasibility and set redundancy that were discussed in the literature review (Section 2). Because the objective function naturally penalizes redundant sets, a redundancy removal operator is not needed. The objective function also penalizes infeasible solutions. As a result, neither repair nor penalty approaches are needed in evolutionary algorithms, and neighbourhood restrictions are not needed in local search algorithms. Finally, because no constraints are involved and the only driver of the search is the objective function proposed with the new formulation, designing a good neighbourhood and local search algorithm is quite simple. Such a simple neighbourhood is presented in Section 4.

4. Proposed Descent Heuristic (DH)

In this section, we present a simple descent heuristic that is based on the new formulation and that uses a 1-flip neighbourhood structure. We also show that all local optimums with respect to the new formulation and the 1-flip neighbourhood are feasible and free of redundant sets.

The proposed descent heuristic (DH) is an adaptation of the classical greedy heuristic that has been used in the literature for the SCP [29]. In this greedy heuristic, the set $s_j$ with the minimum ratio $c_j / u_j$ is added to the solution in each iteration, where $u_j$ is the number of elements that are covered by $s_j$ and are not covered by the current configuration $X$. Once all of the elements are covered, redundant sets are removed in decreasing order of cost. In DH, the term $u_j$ of the classical greedy heuristic is replaced with $\delta_j$, where $\delta_j$ is the variation in the objective function (4) associated with adding (removing) the set $s_j$.
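Note that, as described, the classical greedy heuristic coincides with the greedy repair sketch of Section 2.2 applied to the empty solution; under that reading (ours, not the code of [29]), it can be written as:

def greedy_gh(sets, costs, universe):
    # GH: greedy construction from scratch, then redundancy removal
    return greedy_repair(set(), sets, costs, universe)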

DH starts from a given configuration and performs a sequence of moves on it until the solution is locally optimal. It uses a simple 1-flip neighbourhood structure with two types of moves: add and remove moves. add($s_j$) adds the set $s_j$ to the configuration (flips $x_j$ from 0 to 1), while remove($s_j$) removes the set $s_j$ from the configuration (flips $x_j$ from 1 to 0). In each iteration, the set with the maximum ratio $\delta_j / c_j$ is added to (removed from) the solution. The algorithm stops when the current configuration is better than all of its neighbors ($\delta_j \leq 0$ for all sets $s_j$). The outline of DH is presented in Algorithm 1.

sol ← empty solution;
loop
  find the set $s_j$ with the maximum ratio $\delta_j / c_j$;
  if ($\delta_j > 0$) then
    flip bit $x_j$;
  else
    stop;
  end if
end loop
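A direct Python transcription of Algorithm 1 follows (our sketch, continuing the code of Section 3; for clarity, $\delta_j$ is recomputed from coverage counts rather than maintained incrementally, which is simpler but slower).

def dh(sets, costs, g, universe, start=frozenset()):
    solution = set(start)
    # how many selected sets cover each element
    cover_count = {e: sum(1 for j in solution if e in sets[j]) for e in universe}

    def delta(j):
        if j in solution:
            # remove move: save c_j, lose the gains of elements only s_j covers
            return costs[j] - sum(g[e] for e in sets[j] if cover_count[e] == 1)
        # add move: collect the gains of newly covered elements, pay c_j
        return sum(g[e] for e in sets[j] if cover_count[e] == 0) - costs[j]

    while True:
        best = max(sets, key=lambda j: delta(j) / costs[j])
        if delta(best) <= 0:
            return solution  # local optimum: feasible and free of redundant sets
        step = -1 if best in solution else 1
        solution.symmetric_difference_update({best})  # flip bit x_best
        for e in sets[best]:
            cover_count[e] += step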

4.1. Redundancy Removal

In contrast to the classical greedy heuristic, DH automatically removes redundant sets from the solution. Let $X$ be a configuration where the set $s_j$ is redundant. Removing $s_j$ saves its cost without uncovering any element, so the variation $\delta_j$ associated with removing $s_j$ from $X$ is equal to $c_j$, and the corresponding ratio is $\delta_j / c_j = 1$. Because $1 > 0$, the move remove($s_j$) will be performed and the redundant sets will be removed. As a result, any solution that is improved with DH is necessarily free of redundant sets. Redundant sets are removed at any time during the progress of DH and not only at the end.

4.2. Feasibility

Consider $X$ to be a configuration where the element $e_i$ is not covered. Let $s_j$ be the cheapest set that covers $e_i$, and let $c_j$ be its associated cost. The gain associated with $e_i$ is equal to $g_i = c_j + \epsilon$. If $e_i$ is the only uncovered element covered by $s_j$ (worst-case scenario), the ratio associated with adding the set $s_j$ to $X$ is equal to $\delta_j / c_j = (g_i - c_j) / c_j = \epsilon / c_j$. Because $\epsilon / c_j > 0$ ($\epsilon > 0$ and $c_j > 0$), the move add($s_j$) will be performed and the solution will become feasible. As a result, any solution that is improved with DH is feasible.
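This behaviour can be observed on the toy instance of Section 3.1: starting the dh sketch above from the infeasible configuration $\{s_1\}$, the only improving move is add($s_3$) (ratio $\epsilon / c_3 > 0$), and DH stops at the feasible, redundancy-free solution $\{s_1, s_3\}$:

print(dh(sets, costs, g, universe, start={1}))    # {1, 3}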

4.3. Discussion

We showed that all of the solutions that are found with DH are feasible and free of redundant sets. With respect to the new formulation and the 1-flip neighbourhood structure, these solutions are local optimums. This is also true for all solutions obtained with any descent heuristic that is based on the new formulation and that uses the same neighbourhood structure. As a result, all local optimums with respect to the new formulation and the 1-flip neighbourhood structure are feasible and free of redundant sets.

5. Experimental Analysis

In this section, we present computational experiments with the proposed descent heuristic that is based on the new formulation. Although we showed in the previous sections that the new formulation provides many advantages over the classical formulation, the final performance of any metaheuristic algorithm depends on the implementation, the tuning of the parameters, and the sophistication of the approach. We do not claim that any metaheuristic approach that is based on the new formulation will outperform all metaheuristic approaches that are based on the classical formulation. In addition, experimenting with all classes of metaheuristics would not prove (or disprove) the superiority of the proposed formulation. Instead, we compare our descent heuristic to the original greedy heuristic that is based on the classical formulation. The aim is to compare the two formulations using similar algorithms. Since greedy heuristics are used for intensification in most metaheuristic approaches for the SCP, evaluating the effectiveness of a new descent heuristic that can replace these greedy heuristics provides a good indication of how suitable the new formulation is for metaheuristic approaches.

We compare DH to the classical greedy heuristic (GH) [29] on three classes of known set covering problems. (i) OR-Library benchmarks: this class includes 65 small- and medium-size randomly generated problems that have frequently been used in the literature. Most metaheuristic approaches for the SCP have been tested on these problems. They are available in OR-Library [30] and are described in Table 1. (ii) Airline and bus scheduling problems: this class includes fourteen real-world airline scheduling problems (AA instances) and two bus driver scheduling problems (bus instances). These problems were obtained from [31] and are described in Table 2. (iii) Railway scheduling problems: this class includes seven large-scale railway crew scheduling problems from Italian railways, which are available in OR-Library [30]. These problems are described in Table 3.

Most metaheuristic approaches for the SCP have been exclusively tested on OR-Library benchmarks. Because these benchmarks are relatively small, we experimented with larger problems that have been less frequently used in the literature.

In all presented tables, the name of each instance is given in the first column, the size of each instance is given in the second column (number of elements × number of sets), and the density of each instance is given in the third column. The density is the percentage of ones in the matrix $A$ described in Section 1. The optimal or best-known solution of each instance is given in the fourth column. The solutions obtained with each heuristic are presented in columns 5 and 6. The last two columns contain the number of iterations performed by each heuristic for each instance. The percentage deviations from the best-known solutions are presented in Figures 1, 2, 3, and 4.

In both DH and GH, each iteration involves finding the best set to be added (removed) to (from) the solution and updating the underlying data structure after a move is performed. Thus, the algorithmic complexity of each iteration is similar in both heuristics. In practice, the computation times are highly dependent on the implementation and on the characteristics of the problem solved (size and density). For instance, finding the best move to be performed in each iteration can be implemented using a loop that iterates over all sets or using a priority-queue-based data structure. Preliminary testing showed that choosing one way or the other greatly affects the speed comparison of the discussed heuristics. To avoid an implementation-dependent comparison, and because these aspects of the implementation are outside the scope of this work, we recorded the number of iterations instead.

Both heuristics are deterministic, so only one run is required. The same small positive value of $\epsilon$ was used in all DH runs; smaller values of $\epsilon$ caused numerical problems for some instances.

Our descent heuristic performed better than GH by finding better solutions for most of the test problems. For OR-Library benchmarks, DH found better solutions than GH for 47 instances, equal solutions for 10 instances, and worse solutions for 9 instances. For the airline, bus, and railway scheduling problems, DH found better solutions than GH for all problems except one (equal solutions for RAIL516). The percentage deviations presented in Figures 1, 2, 3 and 4 and the average percentage deviation presented in Table 4 show that the solutions found by DH are also significantly better in quality than those found by GH (up to 7.41% better for OR-Library, up to 6.83% better for airline and bus problems, and up to 9.12% better for railway problems).

DH also performed fewer iterations than GH for most of the test problems. For the OR-Library benchmarks, DH performed fewer iterations than GH for 56 instances, an equal number of iterations for seven instances, and more iterations for only two instances. For the airline, bus, and railway scheduling problems, DH performed fewer iterations than GH for all problems except one (more iterations for BUS2). The average numbers of iterations performed by DH and GH are presented in Table 4 and show that DH performs significantly fewer iterations than GH. Since the per-iteration complexity of the two heuristics is similar, DH is therefore expected to be faster.

As a result, the proposed descent heuristic that is based on the new formulation performs better than the corresponding greedy heuristic that is based on the classical formulation by finding better results for most of the test problems using fewer iterations, which can lead to shorter computation times.

6. Conclusions and Future Work

In this paper, we identified two issues that arise when solving the SCP with metaheuristic approaches: solution infeasibility and set redundancy. We highlighted the difficulties of addressing these issues when solving the SCP with the different classes of metaheuristics and proposed a new formulation that overcomes these difficulties. We showed that this formulation is, in fact, a new penalty approach that uses static penalty weights that are low enough to avoid disturbing the search but high enough to ensure the feasibility of the final solution. We also showed that all local optimums with respect to the new formulation and the 1-flip neighbourhood structure are feasible and free of redundant sets. As a result, building metaheuristic approaches for the SCP using the new formulation is straightforward.

To provide a first computational experience using the new formulation, we adapted a known greedy heuristic for the SCP to the new formulation and compared the adapted version to the original version using 88 set covering problems. The adapted version that is based on the new formulation found better solutions than the original version that is based on the classical formulation for 69 test problems, equal solutions for ten problems, and worse solutions for nine problems. In addition, the adapted version performed fewer iterations than the original version for 78 test problems, an equal number of iterations for two problems, and more iterations for eight problems. Thus, the adapted version finds better solutions than the original version in potentially shorter computation times. Moreover, the adapted version was easier to implement because we did not need to handle feasibility and set redundancy.

Most current metaheuristic approaches for the SCP incorporate a descent or greedy heuristic that is responsible for the intensification part of the search. Thus, having a more effective descent heuristic can lead to better metaheuristic approaches.