
Open Access 22-11-2021

The Power of the Weighted Sum Scalarization for Approximating Multiobjective Optimization Problems

Authors: Cristina Bazgan, Stefan Ruzika, Clemens Thielen, Daniel Vanderpooten

Published in: Theory of Computing Systems | Issue 1/2022


Abstract

We determine the power of the weighted sum scalarization with respect to the computation of approximations for general multiobjective minimization and maximization problems. Additionally, we introduce a new multi-factor notion of approximation that is specifically tailored to the multiobjective case and its inherent trade-offs between different objectives. For minimization problems, we provide an efficient algorithm that computes an approximation of a multiobjective problem by using an exact or approximate algorithm for its weighted sum scalarization. In case that an exact algorithm for the weighted sum scalarization is used, this algorithm comes arbitrarily close to the best approximation quality that is obtainable by supported solutions – both with respect to the common notion of approximation and with respect to the new multi-factor notion. Moreover, the algorithm yields the currently best approximation results for several well-known multiobjective minimization problems. For maximization problems, however, we show that a polynomial approximation guarantee can, in general, not be obtained in more than one of the objective functions simultaneously by supported solutions.
Notes
This work was supported by the DFG grants RU 1524/6-1 and TH 1852/4-1 as well as by a bilateral cooperation project funded by the German Academic Exchange Service (DAAD, Project-ID 57388848) and by Campus France, PHC PROCOPE 2018 (Project no. 40407WF).

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction

Almost any real-world optimization problem asks for optimizing more than one objective function (e.g., the minimization of cost and time in transportation systems or the maximization of profit and safety in investments). Clearly, these objectives are conflicting, often incommensurable, and, yet, they have to be taken into account simultaneously. The discipline dealing with such problems is called multiobjective optimization. Typically, multiobjective optimization problems are solved according to the Pareto principle of optimality: a solution is called efficient (or Pareto optimal) if no other feasible solution exists that is not worse in any objective function and better in at least one objective. The images of the efficient solutions in the objective space are called nondominated points. In contrast to single objective optimization, where one typically asks for one optimal solution, the main goal of multiobjective optimization is to compute the set of all nondominated points and, for each of them, one corresponding efficient solution. Each of these solutions corresponds to a different compromise among the set of objectives and may potentially be relevant for a decision maker.
Several results in the literature, however, show that multiobjective optimization problems are hard to solve exactly [11, 12] and, in addition, the cardinalities of the set of nondominated points (the nondominated set) and the set of efficient solutions (the efficient set) may be exponentially large for discrete problems (and are typically infinite for continuous problems). This impairs the applicability of exact solution methods to real-life problems and provides a strong motivation for studying approximations of multiobjective optimization problems.
Both exact and approximate solution methods for multiobjective optimization problems often resort to using single objective auxiliary problems, which are called scalarizations of the original multiobjective problem. This refers to the transformation of a multiobjective optimization problem into a single objective auxiliary problem based on a procedure that might use additional parameters, auxiliary points, or variables. The resulting scalarized optimization problems are then solved using methods from single objective optimization and the obtained solutions are interpreted in the context of Pareto optimality.
The simplest and most widely used scalarization technique is the weighted sum scalarization (see, e.g., [12]). Here, the scalarized auxiliary problem is constructed by assigning a weight to each of the objective functions and summing up the resulting weighted objective functions in order to obtain the objective function of the scalarized problem. If the weights are chosen to be positive, then every optimal solution of the resulting weighted sum problem is efficient. Moreover, the weighted sum scalarization does not change the feasible set and, in many cases, boils down to the single objective version of the given multiobjective problem — which represents an important advantage of this scalarization especially for combinatorial problems. However, only some efficient solutions (called supported solutions) can be obtained by means of the weighted sum scalarization, while many other efficient solutions (called unsupported solutions) cannot. Consequently, a natural question is to determine which approximations of the whole efficient set can be obtained by using this very important scalarization technique.

1.1 Previous Work

Besides many specialized approximation algorithms for particular multiobjective optimization problems, there exist several general approximation methods that can be applied to broad classes of multiobjective problems. An extensive survey of these general approximation methods is provided in [18].
Most general approximation methods for multiobjective problems are based on the seminal work of Papadimitriou and Yannakakis [21], who present a method for generating a \((1+\varepsilon ,\dots ,1+\varepsilon )\)-approximation for general multiobjective minimization and maximization problems with a constant number of positive-valued, polynomially computable objective functions. They show that a \((1+\varepsilon ,\dots ,1+\varepsilon )\)-approximation with size polynomial in the encoding length of the input and \(\frac {1}{\varepsilon }\) always exists. Moreover, their results show that the construction of such an approximation is possible in (fully) polynomial time, i.e., the problem admits a multiobjective (fully) polynomial-time approximation scheme or MPTAS (MFPTAS), if and only if a certain auxiliary problem called the gap problem can be solved in (fully) polynomial time.
More recent articles building upon the results of [21] present methods that additionally yield bounds on the size of the computed \((1+\varepsilon ,\dots ,1+\varepsilon )\)-approximation relative to the size of the smallest \((1+\varepsilon ,\dots ,1+\varepsilon )\)-approximation possible [4, 10, 22]. Moreover, it has recently been shown in [5] that an even better \((1,1+\varepsilon ,\dots ,1+\varepsilon )\)-approximation (i.e., an approximation that is exact in one objective function and (1 + ε)-approximate in all other objective functions) always exists, and that such an approximation can be computed in (fully) polynomial time if and only if the so-called dual restrict problem (introduced in [10]) can be solved in (fully) polynomial time.
Other works study how the weighted sum scalarization can be used in order to compute a set of solutions such that the convex hull of their images in the objective space yields an approximation guarantee of \((1+\varepsilon ,\dots ,1+\varepsilon )\) [7–9]. Using techniques similar to ours, Diakonikolas and Yannakakis [9] show that such a so-called ε-convex Pareto set can be computed in (fully) polynomial time if and only if the weighted sum scalarization admits a (fully) polynomial-time approximation scheme. Additionally, they consider questions regarding the cardinality of ε-convex Pareto sets.
Besides the general approximation methods mentioned above that work for both minimization and maximization problems, there exist several general approximation methods that are restricted either to minimization problems or to maximization problems.
For minimization problems, there are two general approximation methods that are both based on using (approximations of) the weighted sum scalarization. The previously best general approximation method for multiobjective minimization problems with an arbitrary constant number of objectives that uses the weighted sum scalarization can be obtained by combining two results of Glaßer et al. [15, 16]. They introduce another auxiliary problem called the approximate domination problem, which is similar to the gap problem. Glaßer et al. show that, if this problem is solvable in polynomial time for some approximation factor α ≥ 1, then an approximating set providing an approximation factor of α ⋅ (1 + ε) in every objective function can be computed in fully polynomial time for every ε > 0. Moreover, they show that the approximate domination problem with α := σp can be solved by using a σ-approximation algorithm for the weighted sum scalarization of the p-objective problem. Together, this implies that a ((1 + ε) ⋅ σp,…,(1 + ε) ⋅ σp)-approximation can be computed in fully polynomial time for p-objective minimization problems provided that the objective functions are positive-valued and polynomially computable and a σ-approximation algorithm for the weighted sum scalarization exists. As this result is not explicitly stated in [15, 16], no bounds on the running time are provided.
For biobjective minimization problems, Halffmann et al. [17] show how to obtain a \((\sigma \cdot (1+2\varepsilon ),\sigma \cdot (1+\frac {2}{\varepsilon }))\)-approximation for any given 0 < ε ≤ 1 if a polynomial-time σ-approximation algorithm for the weighted sum scalarization is given.
Obtaining general approximation methods for multiobjective maximization problems using the weighted sum scalarization seems to be much harder than for minimization problems. Indeed, Glaßer et al. [15] show that certain translations of approximability results from the weighted sum scalarization of an optimization problem to the multiobjective version that work for minimization problems are not possible in general for maximization problems.
An approximation method specifically designed for multiobjective maximization problems is presented by Bazgan et al. [3]. Their method is applicable to biobjective maximization problems that satisfy an additional structural assumption on the set of feasible solutions and the objective functions: for any two feasible solutions, neither of which approximates the other by a factor of α in both objective functions, it must be possible to compute in polynomial time a third solution that approximates both given solutions in both objective functions by a certain factor depending on α and a parameter c. The approximation factor obtained by the algorithm then depends on α and c.

1.2 Our Contribution

Our contribution is twofold: First, in order to better capture the approximation quality in the context of multiobjective optimization problems, we introduce a new notion of approximation for the multiobjective case. This new notion comprises the common notion of approximation, but is specifically tailored to the multiobjective case and its inherent trade-offs between different objectives. Second, we provide a precise analysis of the approximation quality obtainable for multiobjective optimization problems by means of an exact or approximate algorithm for the weighted sum scalarization – with respect to both the common and the new notion of approximation.
In order to motivate the new notion of approximation, consider the biobjective case, in which a (2 + ε,2 + ε)-approximation can be obtained from the results of Glaßer et al. [15, 16] using an exact algorithm for the weighted sum scalarization. As illustrated in Fig. 1, this approximation guarantee is actually too pessimistic: since each point y in the image of the approximating set is nondominated (being the image of an optimal solution of the weighted sum scalarization), no images of feasible solutions can be contained in the shaded region. Thus, every feasible solution is actually either (1,2 + ε)- or (2 + ε,1)-approximated. Consequently, the approximation quality obtained in this case can be more accurately described by using two vectors of approximation factors. In order to capture such situations and allow for a more precise analysis of the approximation quality obtained for multiobjective problems, our new multi-factor notion of approximation uses a set of vectors of approximation factors instead of only a single vector.
The second part of our contribution consists of a detailed analysis of the approximation quality obtainable by using the weighted sum scalarization – both for multiobjective minimization problems and for multiobjective maximization problems. For minimization problems, we provide an efficient algorithm that approximates a multiobjective problem using an exact or approximate algorithm for its weighted sum scalarization. We analyze the approximation quality obtained by the algorithm both with respect to the common notion of approximation that uses only a single vector of approximation factors as well as with respect to the new multi-factor notion. With respect to the common notion, our algorithm matches the best previously known approximation guarantee of (σp + ε,…,σp + ε) obtainable for p-objective minimization problems and any ε > 0 from a σ-approximation algorithm for the weighted sum scalarization. More importantly, we show that this result is best-possible in the sense that it comes arbitrarily close to the best approximation guarantee obtainable by supported solutions for the case that an exact algorithm is used to solve the weighted sum problem (i.e., when σ = 1).
When analyzing the algorithm with respect to the new multi-factor notion of approximation, however, a much stronger approximation result is obtained. Here, we show that every feasible solution is approximated with some (possibly different) vector \((\alpha _{1},\dots ,\alpha _{p})\) of approximation factors such that \({\sum }_{j:\alpha _{j}>1}\alpha _{j} = \sigma \cdot p + \varepsilon \). In particular, the worst-case approximation factor of σp + ε can actually be tight in at most one objective for any feasible point. This shows that the multi-factor notion of approximation yields a much stronger approximation result by allowing a refined analysis of the obtained approximation guarantee. Moreover, for σ = 1, we show that the obtained multi-factor approximation result comes arbitrarily close to the best multi-factor approximation result obtainable by supported solutions. We also demonstrate that our algorithm applies to a large variety of multiobjective minimization problems and yields the currently best approximation results for several problems.
Multiobjective maximization problems, however, turn out to be much harder to approximate by using the weighted sum scalarization. Here, we show that a polynomial approximation guarantee can, in general, not be obtained in more than one of the objective functions simultaneously when using only supported solutions.
In summary, our results yield essentially tight bounds on the power of the weighted sum scalarization with respect to the approximation of multiobjective minimization and maximization problems – both in the common notion of approximation and in the new multi-factor notion.
The remainder of the paper is organized as follows: In Section 2, we formally introduce multiobjective optimization problems and provide the necessary definitions concerning their approximation. Section 3 contains our general approximation algorithm for minimization problems (Section 3.1). Moreover, we show in Section 3.2 that the obtained approximation results are tight. Section 4 concludes the paper by listing applications of our general approximation algorithm and directions for future work. In the appendix, we present both a faster approximation algorithm for minimization problems in the biobjective case (Appendix A) as well as our impossibility result for maximization problems (Appendix B).

2 Preliminaries

In the following, we consider a general multiobjective minimization or maximization problem π of the following form (where either all objective functions are to be minimized or all objective functions are to be maximized):
$$ \begin{array}{@{}rcl@{}} \min/\max && f(x)=(f_{1}(x),\dots,f_{p}(x))\\ \text{s. t. } && x \in X \end{array} $$
Here, as usual, we assume a constant number p ≥ 2 of objectives. The elements x ∈ X are called feasible solutions and the set X, which is assumed to be nonempty, is referred to as the feasible set. An image y = f(x) of a feasible solution x ∈ X is also called a (feasible) point. We let \(Y:= f(X) := \{f(x): x\in X\}\subseteq \mathbb {R}^{p}\) denote the set of feasible points.
We assume that the objective functions take only positive rational values and are polynomially computable. Moreover, for each \(j\in \{1,\dots ,p\}\), we assume that there exist strictly positive rational lower and upper bounds LB(j), UB(j) of polynomial encoding length such that LB(j) ≤ fj(x) ≤ UB(j) for all x ∈ X. We let \(\text {LB}:= \min \limits _{j=1,\dots ,p}\text {LB}(j)\) and \(\text {UB}:= \max \limits _{j=1,\dots ,p}\text {UB}(j)\).
Definition 1
For a minimization problem π, we say that a point y = f(x) ∈ Y is dominated by another point \(y^{\prime }=f(x^{\prime })\in Y\) if \(y^{\prime }\neq y\) and
$$ \begin{array}{@{}rcl@{}} y^{\prime}_{j}=f_{j}(x^{\prime}) \leq f_{j}(x)=y_{j} \text{ for all } j\in\{1,\dots,p\}. \end{array} $$
Similarly, for a maximization problem π, we say that a point y = f(x) ∈ Y is dominated by another point \(y^{\prime }=f(x^{\prime })\in Y\) if \(y^{\prime }\neq y\) and
$$ \begin{array}{@{}rcl@{}} y^{\prime}_{j}=f_{j}(x^{\prime}) \geq f_{j}(x)=y_{j} \text{ for all } j\in\{1,\dots,p\}. \end{array} $$
If the point y = f(x) is not dominated by any other point \(y^{\prime }\), we call y nondominated and the feasible solution xX efficient. The set YN of nondominated points is called the nondominated set and the set XE of efficient solutions is called the efficient set or Pareto set.

2.1 Notions of Approximation

We first recall the standard definitions of approximation for single objective optimization problems.
Definition 2
Consider a single objective optimization problem π and let α ≥ 1. If π is a minimization problem, we say that a feasible solution x ∈ X α-approximates another feasible solution \(x^{\prime }\in X\) if \(f(x) \leq \alpha \cdot f(x^{\prime })\). If π is a maximization problem, we say that a feasible solution x ∈ X α-approximates another feasible solution \(x^{\prime }\in X\) if \(\alpha \cdot f(x) \geq f(x^{\prime })\). A feasible solution that α-approximates every feasible solution of π is called an α-approximation for π.
A (polynomial-time) α-approximation algorithm is an algorithm that, for every instance I with encoding length |I|, computes an α-approximation for π in time bounded by a polynomial in |I|.
The following definition extends the concept of approximation to the multiobjective case.
Definition 3
Let \(\alpha =(\alpha _{1},\dots ,\alpha _{p})\in \mathbb {R}^{p}\) with αj ≥ 1 for all \(j\in \{1,\dots ,p\}\).
For a minimization problem π, we say that a feasible solution x ∈ X α-approximates another feasible solution \(x^{\prime }\in X\) (or, equivalently, that the feasible point y = f(x) α-approximates the feasible point \(y^{\prime }=f(x^{\prime })\)) if
$$ \begin{array}{@{}rcl@{}} f_{j}(x) \leq \alpha_{j}\cdot f_{j}(x^{\prime}) \text{ for all } j\in\{1,\dots,p\}. \end{array} $$
Similarly, for a maximization problem π, we say that a feasible solution x ∈ X α-approximates another feasible solution \(x^{\prime }\in X\) (or, equivalently, that the feasible point y = f(x) α-approximates the feasible point \(y^{\prime }=f(x^{\prime })\)) if
$$ \begin{array}{@{}rcl@{}} \alpha_{j}\cdot f_{j}(x) \geq f_{j}(x^{\prime}) \text{ for all } j\in\{1,\dots,p\}. \end{array} $$
The standard notion of approximation for multiobjective optimization problems used in the literature is the following one.
Definition 4
Let \(\alpha =(\alpha _{1},\dots ,\alpha _{p})\in \mathbb {R}^{p}\) with αj ≥ 1 for all \(j\in \{1,\dots ,p\}\).
A set \(P\subseteq X\) of feasible solutions is called an α-approximation for the multiobjective problem π if, for any feasible solution \(x^{\prime }\in X\), there exists a solution x ∈ P that α-approximates \(x^{\prime }\).
In the following definition, we generalize the standard notion of approximation for multiobjective problems by allowing a set of vectors of approximation factors instead of only a single vector, which allows for tighter approximation results.
Definition 5
Let \(\mathcal {A}\subseteq \mathbb {R}^{p}\) be a set of vectors with αj ≥ 1 for all \(\alpha \in \mathcal {A}\) and all \(j\in \{1,\dots ,p\}\). Then a set \(P\subseteq X\) of feasible solutions is called a (multi-factor) \(\mathcal {A}\)-approximation for the multiobjective problem π if, for any feasible solution \(x^{\prime }\in X\), there exist a solution x ∈ P and a vector \(\alpha \in \mathcal {A}\) such that x α-approximates \(x^{\prime }\).
Note that, in the case where \(\mathcal {A}=\{(\alpha _{1},\dots ,\alpha _{p})\}\) is a singleton, an \(\mathcal {A}\)-approximation for a multiobjective problem according to Definition 5 is equivalent to an \((\alpha _{1},\dots ,\alpha _{p})\)-approximation according to Definition 4.
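For concreteness, Definitions 3–5 can be checked mechanically for explicitly given point sets. The following minimal Python sketch is our own illustration (the helper names are hypothetical, and solutions are identified with their images); it tests Definition 5 for a minimization problem and a finite candidate set of factor vectors:

def approximates(y, y_prime, alpha):
    # y alpha-approximates y' (minimization, Definition 3):
    # y_j <= alpha_j * y'_j for every objective j.
    return all(yj <= aj * ypj for yj, aj, ypj in zip(y, alpha, y_prime))

def is_multi_factor_approximation(P, Y, alphas):
    # Definition 5: every feasible point y' in Y must be
    # alpha-approximated by some y in P for some alpha in alphas.
    return all(
        any(approximates(y, yp, a) for y in P for a in alphas)
        for yp in Y
    )

With alphas a singleton \(\{(\alpha _{1},\dots ,\alpha _{p})\}\), this check is exactly the classical notion of Definition 4.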

2.2 Weighted Sum Scalarization

Given a p-objective optimization problem π and a vector \(w=(w_{1},\ldots ,w_{p})\in \mathbb {R}^{p}\) with wj > 0 for all j ∈{1,…,p}, the weighted sum problem (or weighted sum scalarization) πWS(w) associated with π is defined as the following single objective optimization problem:
$$ \begin{array}{@{}rcl@{}} \min/\max && \sum\limits_{j=1}^{p} w_{j} \cdot f_{j}(x)\\ \text{s. t. } && x \in X \end{array} $$
Definition 6
A feasible solution x ∈ X is called supported if there exists a vector \(w=(w_{1},\ldots ,w_{p}) \in \mathbb {R}^{p}\) of positive weights such that x is an optimal solution of the weighted sum problem πWS(w). In this case, the feasible point y = f(x) ∈ Y is called a supported point. The set of all supported solutions is denoted by XS.
It is well-known that every supported point is nondominated and, correspondingly, every supported solution is efficient (cf. [12]).
In the following, we assume that there exists a polynomial-time σ-approximation algorithm WSσ for the weighted sum problem, where σ ≥ 1 can be either a constant or a function of the input size. When calling WSσ with some specific weight vector w, we denote this by WSσ(w). This algorithm then returns a solution \(\hat x\) such that \({\sum }_{j=1}^{p} w_{j} f_{j}(\hat x) \leq \sigma \cdot {\sum }_{j=1}^{p} w_{j} f_{j}(x)\) for all x ∈ X, if π is a minimization problem, and \(\sigma \cdot {\sum }_{j=1}^{p} w_{j} f_{j}(\hat x) \geq {\sum }_{j=1}^{p} w_{j} f_{j}(x)\) for all x ∈ X, if π is a maximization problem. The running time of algorithm WSσ is denoted by \(T_{WS_{\sigma }}\).
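For illustration, the oracle WSσ can be modeled as a function mapping a weight vector to a feasible solution. The following exact oracle (σ = 1) for a finite, explicitly given set of feasible points is a minimal sketch of our own (the name ws_exact is hypothetical); for real combinatorial problems it would instead run a single objective algorithm, e.g., a shortest path computation, on the aggregated objective:

def ws_exact(points, w):
    # Exact weighted sum oracle (sigma = 1), identifying solutions with
    # their images: returns a point minimizing sum_j w_j * y_j.
    return min(points, key=lambda y: sum(wj * yj for wj, yj in zip(w, y)))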
The following result shows that a σ-approximation for the weighted sum problem is also a σ-approximation of any solution in at least one of the objectives.
Lemma 1
Let \(\hat {x} \in X\) be a σ-approximation for πWS(w) for some positive weight vector \(w\in \mathbb {R}^{p}\). Then, for any feasible solution x ∈ X, there exists at least one i ∈{1,…,p} such that \(\hat {x}\) σ-approximates x in objective fi.
Proof
Consider the case where π is a multiobjective minimization problem (the proof for the case where π is a maximization problem works analogously). Then, we must show that, for any feasible solution x ∈ X, there exists at least one i ∈{1,…,p} such that \(f_{i}(\hat {x}) \leq \sigma \cdot f_{i}(x)\).
Assume by contradiction that there exists some \(x^{\prime } \in X\) such that \(f_{j}(\hat {x}) > \sigma \cdot f_{j}(x^{\prime })\) for all j ∈{1,…,p}. Then, we obtain \({\sum }_{j=1}^{p} w_{j} \cdot f_{j}(\hat {x}) > \sigma \cdot {\sum }_{j=1}^{p} w_{j} \cdot f_{j}(x^{\prime })\), which contradicts the assumption that \(\hat {x}\) is a σ-approximation for πWS(w). □

3 A Multi-Factor Approximation Result for Minimization Problems

In this section, we study the approximation of multiobjective minimization problems by (approximately) solving weighted sum problems. In Section 3.1, we propose a multi-factor approximation algorithm that significantly improves upon the \(((1+\varepsilon )\cdot \sigma \cdot p, \dots ,(1+\varepsilon )\cdot \sigma \cdot p)\)-approximation algorithm that can be derived from Glaßer et al. [15]. Moreover, we show in Section 3.2 that the resulting approximation is tight.

3.1 Approximation Results

Proposition 1
Let \(\bar {x}\in X\) be a feasible solution of π and let \(b=(b_{1},\dots ,b_{p})\) be such that \(b_{j}\leq f_{j}(\bar {x})\leq (1+\varepsilon )\cdot b_{j}\) for \(j=1,\dots ,p\) and some ε > 0. Applying WSσ(w) with \(w_{j}:= \frac {1}{b_{j}}\) for \(j=1,\dots ,p\) yields a solution \(\hat x\) that \((\alpha _{1},\dots ,\alpha _{p})\)-approximates \(\bar x\) for some \(\alpha _{1},\dots ,\alpha _{p}\geq 1\) such that αi ≤ σ for at least one i ∈{1,…,p} and
$$ \begin{array}{@{}rcl@{}} \sum\limits_{j:\alpha_{j}>1} \alpha_{j} = (1+\varepsilon)\cdot\sigma\cdot p. \end{array} $$
Proof
Since \(\hat x\) is the solution returned by WSσ(w), we have
$$ \begin{array}{@{}rcl@{}} \sum\limits_{j=1}^{p}\frac{1}{b_{j}} f_{j}(\hat{x}) \leq \sigma\cdot\left( \sum\limits_{j=1}^{p}\frac{1}{b_{j}}f_{j}(\bar{x})\right) \leq \sigma\cdot(1+\varepsilon)\cdot\left( \sum\limits_{j=1}^{p} 1\right) = (1+\varepsilon)\cdot\sigma\cdot p. \end{array} $$
Since \(\frac {1}{b_{j}} \geq \frac {1}{f_{j}(\bar {x})}\), we get \({\sum }_{j=1}^{p} \frac {f_{j}(\hat {x})}{f_{j}(\bar {x})} \leq {\sum }_{j=1}^{p}\frac {1}{b_{j}} f_{j}(\hat {x})\), which yields
$$ \begin{array}{@{}rcl@{}} \sum\limits_{j=1}^{p} \frac{f_{j}(\hat{x})}{f_{j}(\bar{x})} \leq (1+\varepsilon)\cdot\sigma\cdot p. \end{array} $$
Setting \(\alpha _{j} := \max \limits \left \{1,\frac {f_{j}(\hat {x})}{f_{j}(\bar {x})}\right \}\) for \(j=1,\dots ,p\), we have
$$ \begin{array}{@{}rcl@{}} \sum\limits_{j:\alpha_{j}>1} \alpha_{j} \leq (1+\varepsilon)\cdot\sigma\cdot p. \end{array} $$
The worst-case approximation factors αj are then obtained when equality holds in the previous inequality.
Moreover, by Lemma 1, there exists at least one i ∈{1,…,p} such that \(f_{i}(\hat {x}) \leq \sigma \cdot f_{i}(\bar {x})\). Thus, we have αi ≤ σ for at least one i ∈{1,…,p}, which proves the claim. □
Proposition 1 motivates applying the given σ-approximation algorithm WSσ for πWS iteratively for different weight vectors w in order to obtain an approximation of the multiobjective minimization problem π. This is formalized in Algorithm 1, whose correctness and running time are established in Theorem 1.
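Since the listing of Algorithm 1 is not reproduced in this text, the following minimal Python sketch shows our reconstruction of it from the proof of Theorem 1 below; the function name algorithm1 and the oracle interface ws_oracle (a callable returning the solution computed by WSσ for a given weight vector, as in the ws_exact helper above) are illustrative assumptions, not part of the original paper.

import math
from itertools import product

def algorithm1(ws_oracle, p, LB, UB, eps, sigma=1.0):
    # Grid parameter eps' := eps / (sigma * p), as in the proof of Theorem 1.
    eps_p = eps / (sigma * p)
    # u[j] is the largest integer with LB[j] * (1 + eps')^u[j] <= UB[j].
    u = [int(math.log(UB[j] / LB[j], 1.0 + eps_p)) for j in range(p)]
    P = []
    for k in range(p):  # objective whose weight is fixed to 1 / LB(k)
        grids = [range(u[j] + 1) if j != k else [0] for j in range(p)]
        for exps in product(*grids):
            b = [LB[j] * (1.0 + eps_p) ** exps[j] for j in range(p)]
            P.append(ws_oracle([1.0 / bj for bj in b]))  # one call to WS_sigma(w)
    return P

For an explicitly given point set Y, a call such as algorithm1(lambda w: ws_exact(Y, w), p, LB, UB, eps) then computes the approximating set.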
Theorem 1
For a p-objective minimization problem, Algorithm 1 outputs an \(\mathcal {A}\)-approximation where
$$ \begin{array}{@{}rcl@{}} \mathcal{A} = \bigg\{&(\alpha_{1},\dots,\alpha_{p}) : \alpha_{1},\dots,\alpha_{p} \geq 1, \alpha_{i} \leq \sigma \text{ for at least one \textit{i}, and}\\ & \sum\limits_{j:\alpha_{j}>1} \alpha_{j} = \sigma\cdot p\ + \varepsilon \bigg\} \end{array} $$
in time \(\displaystyle T_{\text {WS}_{\sigma }}\cdot \sum \limits _{i=1}^{p} \prod\limits_{j\neq i}\left \lceil \log _{1+\frac {\varepsilon }{\sigma p}}\frac {\text {UB}(j)}{\text {LB}(j)}\right \rceil \in \mathcal {O}\left (T_{\text {WS}_{\sigma }} \left (\frac {\sigma }{\varepsilon } \log \frac {\text {UB}}{\text {LB}}\right )^{p-1} \right )\).
Proof
In order to approximate all feasible solutions, we can iteratively apply Proposition 1 with \(\varepsilon ^{\prime } := \frac {\varepsilon }{\sigma \cdot p}\) instead of ε, leading to the modified constraint on the sum of the αj where the right-hand side becomes \((1+\varepsilon ^{\prime })\cdot \sigma \cdot p = \sigma \cdot p\ + \varepsilon \). More precisely, we iterate with \(b_{j} = LB(j)\cdot (1+ \varepsilon ^{\prime })^{i_{j}}\) and ij = 0,…,uj, where uj is the largest integer such that \(LB(j)\cdot (1+\varepsilon ^{\prime })^{u_{j}} \leq UB(j)\), for each j ∈{1,…,p}. Actually, this iterative application of Proposition 1 involves redundant weight vectors. More precisely, consider a weight vector w = (w1,…,wp) where \(w_{j}= \frac {1}{b_{j}}\) with \(b_{j} = LB(j)\cdot (1+\varepsilon ^{\prime })^{t_{j}}\) for j = 1,…,p, and let k be an index such that \(t_{k} = \min \limits _{j=1,\ldots ,p} t_{j}\). Then problem πWS(w) is equivalent to problem \({\varPi }^{\text {WS}}(w^{\prime })\) with \(w^{\prime }_{j} =\frac {1}{b^{\prime }_{j}}\), where \(b^{\prime }_{j} = LB(j)\cdot (1+\varepsilon ^{\prime })^{t_{j} - t_{k}}\) for j = 1,…,p. Therefore, it is sufficient to consider all weight vectors w for which at least one component wk is set to \(\frac {1}{LB(k)}\) (see Fig. 2 for an illustration). The running time follows. □
Note that the set \(\mathcal {A}\) in Theorem 1 is infinite. By rounding up each approximation factor αj to the nearest power of 1 + ε, however, it is possible to obtain the same result for the finite set
$$ \begin{array}{@{}rcl@{}} \bar{\mathcal{A}} = \bigg\{&(\bar{\alpha}_{1},\dots,\bar{\alpha}_{p}) : \bar{\alpha}_{j}=(1+\varepsilon)^{l_{j}} \text{ for some } l_{j}\in\mathbb{Z}_{\geq0}~(j=1,\dots,p), \\ & \bar{\alpha}_{i} \leq (1+\varepsilon)\cdot\sigma \text{ for at least one } i, \text{ and } \sum\limits_{j:\bar{\alpha}_{j}>1} \bar{\alpha}_{j} = (1+\varepsilon)\cdot(\sigma\cdot p + \varepsilon) \bigg\}. \end{array} $$
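This rounding step can be sketched as follows (round_up_factor is a hypothetical helper of ours; floating-point inaccuracies near exact powers of 1 + ε are ignored in this illustration):

import math

def round_up_factor(alpha, eps):
    # Smallest power of (1 + eps) that is >= alpha, for alpha >= 1.
    return (1.0 + eps) ** math.ceil(math.log(alpha, 1.0 + eps))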
Also note that, depending on the structure of the weighted sum algorithm WSσ, the practical running time of Algorithm 1 could be improved by not solving every weighted sum problem from scratch, but using the information obtained in previous iterations.
Moreover, as illustrated in Fig. 2, Algorithm 1 also directly yields a subdivision of the objective space into hyperrectangles such that all solutions whose images are in the same hyperrectangle are approximated by the same solution (possibly with different approximation guarantees): For each weight vector \(w=(\frac {1}{b_{1}},\dots ,\frac {1}{b_{p}})\) considered in the algorithm (where bk = LB(k) for at least one k), all solutions \(\bar {x}\) with images in the hyperrectangles \(\times _{j=1}^{p} \left [b_{j}\cdot (1+\varepsilon ^{\prime })^{\ell },b_{j}\cdot (1+\varepsilon ^{\prime })^{\ell +1}\right ]\) for \(\ell =0,1,\dots \) are approximated by the solution returned by WSσ(w).
When the weighted sum problem can be solved exactly in polynomial time, Theorem 1 immediately yields the following result:
Corollary 1
If WSσ = WS1 is an exact algorithm for the weighted sum problem, Algorithm 1 outputs an \(\mathcal {A}\)-approximation where
$$ \begin{array}{@{}rcl@{}} \mathcal{A} &= & \bigg\{(\alpha_{1},\dots,\alpha_{p}) : \alpha_{1},\dots,\alpha_{p} \geq 1, \alpha_{i}=1 \text{ for at least one}~i, \text{ and }\\ && \sum\limits_{j:\alpha_{j}>1} \alpha_{j} = p + \varepsilon\bigg\} \end{array} $$
in time \(\mathcal {O}\left (T_{\text {WS}_{1}} \left (\frac {1}{\varepsilon } \log \frac {\text {UB}}{\text {LB}}\right )^{p-1} \right )\).
Another special case worth mentioning is the situation where the weighted sum problem admits a polynomial-time approximation scheme. Here, similar to the case in which an exact algorithm is available for the weighted sum problem, we can still obtain a set of vectors α of approximation factors with \({\sum }_{j:\alpha _{j}>1}\alpha _{j}=p+\varepsilon \) while only losing the property that at least one αi equals 1.
Corollary 2
If the weighted sum problem admits a polynomial-time (1 + τ)-approximation for any τ > 0, then, for any ε > 0 and any \(0<\tau <\frac {\varepsilon }{p}\), Algorithm 1 can be used to compute an \(\mathcal {A}\)-approximation where
$$ \begin{array}{@{}rcl@{}} &&\mathcal{A} = \bigg\{(\alpha_{1},\dots,\alpha_{p}) : \alpha_{1},\dots,\alpha_{p} \geq 1, \alpha_{i}\leq 1+\tau \text{ for at least one}~i, \text{ and }\\ &&~~\quad{\sum}_{j:\alpha_{j}>1} \alpha_{j} = p + \varepsilon\bigg\} \end{array} $$
in time \(\mathcal {O}\left (T_{\text {WS}_{1+\tau }} \left (\frac {1+\tau }{\varepsilon -\tau \cdot p} \log \frac {\text {UB}}{\text {LB}}\right )^{p-1} \right )\).
Proof
Given ε > 0 and \(0<\tau <\frac {\varepsilon }{p}\), apply Algorithm 1 with ε − τ ⋅ p instead of ε and with σ := 1 + τ. The sum of the approximation factors guaranteed by Theorem 1 then equals σ ⋅ p + (ε − τ ⋅ p) = (1 + τ) ⋅ p + ε − τ ⋅ p = p + ε. □
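In terms of the Python sketch of Algorithm 1 given above (with the same hypothetical names; ws_ptas denotes a (1 + τ)-approximate weighted sum oracle obtained from the approximation scheme), this parameter substitution reads:

def corollary2(ws_ptas, p, LB, UB, eps, tau):
    # Requires 0 < tau < eps / p (cf. Corollary 2).
    assert 0.0 < tau < eps / p
    return algorithm1(ws_ptas, p, LB, UB, eps - tau * p, sigma=1.0 + tau)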
Since any component of a vector in the set \(\mathcal {A}\) from Theorem 1 can get arbitrarily close to σp + ε in the worst case, the best “classical” approximation result using only a single vector of approximation factors that is obtainable from Theorem 1 reads as follows:
Corollary 3
Algorithm 1 computes a \((\sigma \cdot p + \varepsilon ,\dots ,\sigma \cdot p + \varepsilon )\)-approximation in time \(\mathcal {O}\left (T_{\text {WS}_{\sigma }} \left (\frac {1}{\varepsilon } \log \frac {\text {UB}}{\text {LB}}\right )^{p-1} \right )\).

3.2 Tightness Results

When solving the weighted sum problem exactly, Corollary 1 states that Algorithm 1 obtains a set \(\mathcal {A}\) of approximation factors in which \({\sum }_{j:\alpha _{j}>1}\alpha _{j}=p+\varepsilon \) for each \(\alpha =(\alpha _{1},\dots ,\alpha _{p})\in \mathcal {A}\).
The following theorem shows that this multi-factor approximation result is arbitrarily close to the best possible result obtainable by supported solutions:
Theorem 2
For ε > 0, let
$$ \begin{array}{@{}rcl@{}} \mathcal{A}:=\{\alpha\in\mathbb{R}^{p}: \alpha_{1},\dots,\alpha_{p}\geq 1, \alpha_{i}=1 \text{ for at least one}~i, \text{ and } \sum\limits_{j:\alpha_{j}>1}\alpha_{j}=p-\varepsilon\}. \end{array} $$
Then there exists an instance of a p-objective minimization problem for which the set XS of supported solutions is not an \(\mathcal {A}\)-approximation.
Proof
In the following, we only specify the set Y of images. A corresponding instance consisting of a set X of feasible solutions and an objective function f can then easily be obtained, e. g., by setting X := Y and \(f:= \text {id}_{\mathbb {R}^{p}}\).
For M > 0, let \(Y:= \{y^{1},\dots ,y^{p},\tilde {y}\}\) with \(y^{1}=\left (M,\frac {1}{p},\dots ,\frac {1}{p}\right )\), \(y^{2}=\left (\frac {1}{p},M,\frac {1}{p},\dots ,\frac {1}{p}\right )\), …, \(y^{p}=\left (\frac {1}{p},\dots ,\frac {1}{p},M\right )\) and \(\tilde {y}=\left (\frac {M+1}{p},\dots ,\frac {M+1}{p}\right )\). Note that the point \(\tilde {y}\) is unsupported, while \(y^{1},\dots ,y^{p}\) are supported (an illustration for the case p = 2 is provided in Fig. 3).
Moreover, the ratio of the j-th components of the points yj and \(\tilde {y}\) is exactly
$$ \begin{array}{@{}rcl@{}} \frac{M}{(M+1)/p} = p\cdot\frac{M}{M+1}, \end{array} $$
which is larger than p − ε for \(M>\frac {p}{\varepsilon }-1\). Consequently, for such M, the point \(\tilde {y}\) is not α-approximated by any of the supported points \(y^{1},\dots ,y^{p}\) for any \(\alpha \in \mathcal {A}\), which proves the claim. □
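The construction is also easy to verify numerically. The following self-contained Python sketch (our own illustration for p = 2, where \(\mathcal {A}=\{(1,2-\varepsilon ),(2-\varepsilon ,1)\}\)) confirms that no supported point \(\mathcal {A}\)-approximates \(\tilde {y}\):

p, eps = 2, 0.5
M = p / eps  # any M > p / eps - 1 works; here M = 4
supported = [(M, 1.0 / p), (1.0 / p, M)]
y_tilde = ((M + 1) / p, (M + 1) / p)

def approx(y, yp, a):
    # y a-approximates yp (minimization, Definition 3)
    return all(yi <= ai * ypi for yi, ai, ypi in zip(y, a, yp))

# No supported point (1, p - eps)- or (p - eps, 1)-approximates y_tilde.
assert not any(approx(y, y_tilde, a)
               for y in supported
               for a in [(1.0, p - eps), (p - eps, 1.0)])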
We remark that the set of points Y constructed in the proof of Theorem 2 can easily be obtained from instances of many well-known multiobjective minimization problems such as multiobjective shortest path, multiobjective spanning tree, multiobjective minimum (s-t-) cut, or multiobjective TSP (e.g., for multiobjective shortest path, a collection of p + 1 disjoint s-t-paths whose cost vectors correspond to the points \(y^{1},\dots ,y^{p},\tilde {y}\) suffices). Consequently, the result from Theorem 2 holds for each of these specific problems as well.
Moreover, note that also the classical approximation result obtained in Corollary 3 is arbitrarily close to best possible in case that the weighted sum problem is solved exactly: While Corollary 3 shows that a \((p+\varepsilon ,\dots ,p+\varepsilon )\)-approximation is obtained from Algorithm 1 when solving the weighted sum problem exactly, the instance constructed in the proof of Theorem 2 shows that the supported solutions do not yield an approximation guarantee of \((p-\varepsilon ,\dots ,p-\varepsilon )\) for any ε > 0. This yields the following theorem:
Theorem 3
For any ε > 0, there exists an instance of a p-objective minimization problem for which the set XS of supported solutions is not a \((p-\varepsilon ,\dots ,p-\varepsilon )\)-approximation.

4 Conclusion

The weighted sum scalarization is the most frequently used method to transform a multiobjective into a single objective optimization problem. In this article, we contribute to a better understanding of the quality of approximations for general multiobjective optimization problems which rely on this scalarization technique. To this end, we refine and extend the common notion of approximation quality in multiobjective optimization. As we show, the resulting multi-factor notion of approximation more accurately describes the approximation quality in multiobjective contexts.
For multiobjective minimization problems, we also present an efficient approximation algorithm, which turns out to be best possible under some additional assumptions. This algorithm can be applied to a large variety of minimization problems and improves upon the previously best approximation results for several well-known problems. If the weighted sum scalarization admits a polynomial-time approximation scheme, our algorithm yields a multi-factor approximation where each feasible solution is approximated with some approximation guarantee \((\alpha _{1},\dots ,\alpha _{p})\) such that \({\sum }_{j:\alpha _{j}>1}\alpha _{j}=p+\varepsilon \). This yields the best known approximation results for the multiobjective versions of the weighted planar TSP [20] and minimum weight planar vertex cover [1]. If the weighted sum scalarization admits a polynomial-time σ-approximation algorithm, our algorithm yields both a (classical) \((\sigma \cdot p+\varepsilon ,\dots ,\sigma \cdot p+\varepsilon )\)-approximation and a multi-factor approximation where each feasible solution is approximated with some approximation guarantee \((\alpha _{1},\dots ,\alpha _{p})\) such that \({\sum }_{j:\alpha _{j}>1}\alpha _{j}=\sigma \cdot p+\varepsilon \) and αiσ for at least one i. These results yield the best known approximation guarantees for many well-studied problems whose single objective version does not admit a polynomial time approximation scheme unless P = NP (and, consequently, the multiobjective version does not admit an MPTAS under the same assumption). Problems of this kind include, e.g., minimum weight vertex cover [2], minimum k-spanning tree [14], minimum weight edge dominating set [13], and minimum metric k-center [19], whose single objective versions admit 2-approximation algorithms, as well as the minimum weight set cover problem, where a \((1+\ln |S|)\)-approximation exists with |S| denoting the cardinality of the ground set to cover [6].
Interestingly, it is easy to show that similar approximation results cannot be obtained for multiobjective maximization problems (see Appendix B).
Since the new multi-factor notion of approximation is independent of the specific algorithms used here, a natural direction for future research is to analyze new and existing approximation algorithms more precisely with the help of this new notion. This may yield both a better understanding of existing approaches as well as more accurate approximation results.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Appendix

Appendix A: Biobjective Minimization Problems

We now focus on biobjective minimization problems. We first specialize some of the general results of Section 3.1 to the case p = 2. Afterwards, we propose a specific approximation algorithm for biobjective problems, which significantly improves upon the running time of Algorithm 1 in the case where an exact algorithm WS1 for the weighted sum problem is available.
Theorem 1, which is the main general result of Section 3.1, can trivially be specialized to the case p = 2. It is more interesting to consider the situation where the weighted sum can be solved exactly, corresponding to Corollary 1. In this case, we obtain the following result:
Corollary 4
If WSσ = WS1 is an exact algorithm for the weighted sum problem and p = 2, Algorithm 1 yields an \(\mathcal {A}\)-approximation where
$$ \begin{array}{@{}rcl@{}} \mathcal{A} = \{(1, 2 + \varepsilon), (2 + \varepsilon, 1)\} \end{array} $$
in time \(\mathcal {O}\left (T_{\text {WS}_{1}} \frac {1}{\varepsilon } \log \frac {\text {UB}}{\text {LB}} \right )\).
It is worth pointing out that, unlike in Corollary 1, the set \(\mathcal {A}\) of approximation factors is now finite. This type of result can be interpreted as a disjunctive approximation result: Algorithm 1 outputs a set P ensuring that, for any xX, there exists \(x^{\prime } \in P\) such that \(x^{\prime }\) (1, 2 + ε)-approximates x or \(x^{\prime }\) (2 + ε, 1)-approximates x.
In the biobjective case, we may scale the weights in the weighted sum problem to be of the form (γ, 1) for some γ > 0. In the following, we make use of this observation and refer to a weight vector (γ, 1) simply as γ.
Algorithm 2 is a refinement of Algorithm 1 in the biobjective case when an exact algorithm WS1 for the weighted sum problem is available. Algorithm 1 requires testing all the u1 + u2 + 1 weights \(\left (\frac {1}{LB(1)},\frac {1}{LB(2)(1+\varepsilon ^{\prime })^{u_{2}}} \right )\), \(\left (\frac {1}{LB(1)},\frac {1}{LB(2)(1+\varepsilon ^{\prime })^{u_{2}-1}} \right )\), …, \(\left (\frac {1}{LB(1)},\frac {1}{LB(2)} \right )\), \(\left (\frac {1}{LB(1)(1+\varepsilon ^{\prime })},\frac {1}{LB(2)} \right ),\ldots ,\left (\frac {1}{LB(1)(1+\varepsilon ^{\prime })^{u_{1}}},\frac {1}{LB(2)} \right )\), or equivalently the u1 + u2 + 1 weights of the form (γt, 1), where \(\gamma _{t}=\frac {LB(2)}{LB(1)}(1+\varepsilon ^{\prime })^{u_{2}-t+1}\) for t = 1,…,u1 + u2 + 1. Instead of testing all these weights, Algorithm 2 considers only a subset of these weights. More precisely, in each iteration, the algorithm selects a subset of consecutive weights {γℓ,…,γr}, solves WS1(γt) for the weight γt with \(t=\lfloor \frac {\ell +r}{2}\rfloor \), and decides whether 0, 1, or 2 of the subsets {γℓ,…,γt} and {γt,…,γr} need to be investigated further. This process can be viewed as developing a binary tree where the root, which corresponds to the initialization, requires solving two weighted sum problems, while each other node requires solving one weighted sum problem. This representation is useful to bound the running time of our algorithm, and a sketch of the procedure is given below.
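Since the listing of Algorithm 2 is likewise not reproduced in this text, the following Python sketch shows our reconstruction of it from the correctness proof of Theorem 4 below; the names algorithm2 and ws1 (an exact weighted sum oracle returning the image of an optimal solution for the weight vector (γ, 1)) are illustrative assumptions, and the line numbers in the comments refer to the original listing.

import math
from collections import deque

def algorithm2(ws1, LB, UB, eps):
    eps_p = eps / 2.0  # eps' for sigma = 1 and p = 2
    u1 = int(math.log(UB[0] / LB[0], 1.0 + eps_p))
    u2 = int(math.log(UB[1] / LB[1], 1.0 + eps_p))
    n = u1 + u2 + 1
    solve = lambda t: ws1((LB[1] / LB[0] * (1.0 + eps_p) ** (u2 - t + 1), 1.0))
    a = 2.0 + eps
    ap1 = lambda x, y: x[0] <= y[0] and x[1] <= a * y[1]  # x (1, 2+eps)-approximates y
    ap2 = lambda x, y: x[0] <= a * y[0] and x[1] <= y[1]  # x (2+eps, 1)-approximates y
    sol = {1: solve(1), n: solve(n)}  # initialization: two weighted sum calls
    P, Q = [sol[1], sol[n]], deque([(1, n)])
    while Q:
        l, r = Q.popleft()
        if r - l <= 1:
            continue
        t = (l + r) // 2
        xt = solve(t)
        if ap1(sol[l], xt) and ap2(sol[r], xt):
            continue  # line 19: x^t and the weights strictly between are superfluous
        sol[t] = xt
        P.append(xt)
        if not (ap1(sol[l], xt) or ap2(xt, sol[l])):
            Q.append((l, t))  # lines 21-22
        if not (ap1(xt, sol[r]) or ap2(sol[r], xt)):
            Q.append((t, r))  # lines 23-24
    return P

The following technical result on binary trees will be useful for bounding the running time of this procedure: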
Lemma 2
A binary tree with height h and k nodes with two children contains \(\mathcal {O}(k\cdot h)\) nodes.
Proof
In order to show the claimed upper bound on the number of nodes, we first show that any binary tree T with height h and k nodes with two children that has the maximum possible number of nodes among all such binary trees must have the following property: If v is a node with two children at level ℓ, then all nodes u at the levels \(0,\dots ,\ell -1\) must also have two children.
So assume by contradiction that T is a binary tree maximizing the number of nodes among all trees with height h and k nodes with two children, but T does not have this property. Then there exists a node v in T with two children at some level ℓ and a node u with at most one child at a lower level \(\ell ^{\prime }\in \{0,\dots ,\ell -1\}\). Then, the binary tree \(T^{\prime }\), that results from making one node w that is a child of v in T an (additional) child of u, also has height h, contains k nodes with two children, and has the same number of nodes as T. Moreover, the level of w in \(T^{\prime }\) changes to \(\ell ^{\prime }+1<\ell +1\). Hence, also the level of any leaf of the subtree rooted at w must have decreased by at least one. Thus, giving any leaf of this subtree an additional child in \(T^{\prime }\) would yield a binary tree of height h and k nodes with two children, and a strictly larger number of nodes than T, contradicting the maximality of T.
By the above property, in any binary tree maximizing the number of nodes among the trees satisfying the assumptions of the lemma, there are only nodes with two children on all levels \(i<h^{\prime }:= \lfloor \log _{2}(k+1)\rfloor \) and only nodes with at most one child on all levels \(i>h^{\prime }\). Level \(h^{\prime }\) may contain nodes with two children, but there is at least one node with only one child on this level.
Consequently, there are at most k nodes in total on the levels \(0,\dots ,h^{\prime }-1\) and at most k + 1 nodes at level \(h^{\prime }\). Moreover, there are at most 2(k + 1) nodes at level \(h^{\prime }+1\), each of which is the root of a subtree (path) consisting of at most \(h-h^{\prime }\) nodes (each with at most one child). Overall, this proves an upper bound of at most \(k + (k+1) + 2(k+1)\cdot (h-h^{\prime })\in \mathcal {O}(k\cdot h)\) on the number of nodes in the tree. □
Theorem 4
For a biobjective minimization problem, Algorithm 2 returns a {(1, 2 + ε), (2 + ε, 1)}-approximation in time
$$ \begin{array}{@{}rcl@{}} \mathcal{O}\left( T_{\text{WS}_{1}}\cdot\log \left( \frac{1}{\varepsilon}\cdot\log \frac{\text{UB}}{\text{LB}}\right) \cdot\log \frac{\text{UB}}{\text{LB}} \right). \end{array} $$
Proof
The approximation guarantee of Algorithm 2 derives from Theorem 1. We just need to prove that the subset of weights used here is sufficient to preserve the approximation guarantee.
In lines 21-22, the weights γi for i = ℓ + 1,…,t − 1 are not considered if xℓ (1, 2 + ε)-approximates xt or if xt (2 + ε, 1)-approximates xℓ. We show that, indeed, these weights are not needed. To this end, first observe that any solution \(x^{i} := \text {WS}_{1}(\gamma _{i})\) for i ∈{ℓ + 1,…,t − 1} is such that
$$ \begin{array}{@{}rcl@{}} f_{1}(x^{\ell}) \leq f_{1}(x^{i}) \leq f_{1}(x^{t})\phantom{.} \quad & \text{ and } \\ f_{2}(x^{\ell}) \geq f_{2}(x^{i}) \geq f_{2}(x^{t}). \quad & \quad \end{array} $$
since γℓ > γi > γt. Thus, if xℓ (1, 2 + ε)-approximates xt, we obtain
$$ \begin{array}{@{}rcl@{}} f_{2}(x^{\ell}) \leq (2+\varepsilon)\cdot f_{2}(x^{t}) \leq (2+\varepsilon)\cdot f_{2}(x^{i}), \quad \end{array} $$
which shows that xℓ also (1, 2 + ε)-approximates xi. Therefore, xi and the corresponding weight γi are not needed.
Similarly, if xt (2 + ε, 1)-approximates xℓ, we have
$$ \begin{array}{@{}rcl@{}} f_{1}(x^{t}) \leq (2+\varepsilon)\cdot f_{1}(x^{\ell}) \leq (2+\varepsilon)\cdot f_{1}(x^{i}), \quad \end{array} $$
which shows that xt (2 + ε, 1)-approximates xi. Therefore xi and the corresponding weight γi are again not needed.
In lines 23-24, the weights γi for i = t + 1,…,r − 1 are not considered if xt (1, 2 + ε)-approximates xr or if xr (2 + ε, 1)-approximates xt for similar reasons.
Also, in line 19, xt can be discarded and the weights γi for i = ℓ + 1,…,r − 1 can be ignored if xℓ (1, 2 + ε)-approximates xt and xr (2 + ε, 1)-approximates xt. Indeed, using similar arguments as before, we obtain that xℓ (1, 2 + ε)-approximates xi for i = ℓ + 1,…,t and xr (2 + ε, 1)-approximates xi for i = t,…,r − 1 in this case. Consequently, compared to Algorithm 1, only superfluous weights are discarded in Algorithm 2 and the approximation guarantee follows by Theorem 1.
We now prove the claimed bound on the running time. Algorithm 2 explores a set of weights of cardinality u1 + u2 + 1 = \(\left \lfloor \log _{1+\varepsilon ^{\prime }}\frac {\text {UB}(1)}{\text {LB}(1)}\right \rfloor + \left \lfloor \log _{1+\varepsilon ^{\prime }}\frac {\text {UB}(2)}{\text {LB}(2)}\right \rfloor + 1\). The running time is obtained by bounding the number of calls to algorithm WS1, which corresponds to the number of nodes of the binary tree implicitly developed by the algorithm. The height of this tree is \(\log _{2}(u_{1}+u_{2}+1) \in \mathcal {O}\left (\log \left (\frac {1}{\varepsilon }\cdot \log \frac {\text {UB}}{\text {LB}}\right )\right )\).
In order to bound the number of nodes with two children in the tree, we observe that we generate such a node (i.e., add the pairs (ℓ, t) and (t, r) to Q) only if xℓ does not (1, 2 + ε)-approximate xt and xt does not (2 + ε, 1)-approximate xℓ, and also xt does not (1, 2 + ε)-approximate xr and xr does not (2 + ε, 1)-approximate xt. Hence, whenever a node with two children is generated, the corresponding solution xt does neither (1, 2 + ε)- nor (2 + ε, 1)-approximate any previously generated solution and vice versa, so their objective values in both of the two objective functions must differ by more than a factor of (2 + ε). Using that the j-th objective value of any feasible solution is between LB(j) and UB(j), this implies that there can be at most
$$ \begin{array}{@{}rcl@{}} \min\left\{\log_{2+\varepsilon} \left( \frac{\text{UB}(1)}{\text{LB}(1)}\right) ; \log_{2+\varepsilon} \left( \frac{\text{UB}(2)}{\text{LB}(2)}\right) \right\} \in \mathcal{O}\left( \log \frac{\text{UB}}{\text{LB}} \right) \end{array} $$
nodes with two children in the tree.
Using the obtained bounds on the height of the tree and the number of nodes with two children, Lemma 2 shows that the total number of nodes in the tree is
$$ \begin{array}{@{}rcl@{}} \mathcal{O}\left( \log \left( \frac{1}{\varepsilon}\cdot\log \frac{\text{UB}}{\text{LB}}\right) \cdot\log \frac{\text{UB}}{\text{LB}} \right), \end{array} $$
which proves the claimed bound on the running time. □

Appendix B: Maximization Problems

We now demonstrate that the weighted sum scalarization is much less powerful for approximating multiobjective maximization problems. Indeed, the following theorem shows that, for maximization problems, a polynomial approximation factor can, in general, not be obtained in more than one of the objective functions simultaneously even if the approximating set consists of all the supported solutions:
Theorem 5
For any p ≥ 2 and any polynomial pol, there exists an instance I of a p-objective maximization problem such that at least one unsupported solution is not approximated with an approximation guarantee of \(2^{\text {pol}(|I|)}\) in p − 1 of the objective functions by any supported solution.
Proof
Given p ≥ 2 and a polynomial pol, consider the p-objective maximization problem where each instance I is given by a (p + 1)-tuple \((x^{1},\dots ,x^{p},\tilde {x})\) of pairwise different vectors \(x^{1},\dots ,x^{p},\tilde {x}\in \mathbb {Z}^{p}\) and the feasible set is \(X=\{x^{1},\dots ,x^{p},\tilde {x}\}\).
Given the encoding length |I| of such an instance, we set \(M := 2^{\text {pol}(|I|)} + 1\) and \(f(x^{1})=(M,\frac {1}{p},\dots ,\frac {1}{p})\), \(f(x^{2})=(\frac {1}{p},M,\frac {1}{p},\dots ,\frac {1}{p})\), …, \(f(x^{p})=(\frac {1}{p},\dots ,\frac {1}{p},M)\) and \(f(\tilde {x})=\left (\frac {M}{p},\dots ,\frac {M}{p}\right )\). Then, the solution \(\tilde {x}\) is unsupported, while \(x^{1},\dots ,x^{p}\) are supported.
Moreover, for each i, the ratio of the j-th components of the images \(f(\tilde {x})\) and \(f(x^{i})\) is exactly \(M = 2^{\text {pol}(|I|)} + 1 > 2^{\text {pol}(|I|)}\) for every j ≠ i, which shows that \(x^{i}\) does not approximate \(\tilde {x}\) with a guarantee of \(2^{\text {pol}(|I|)}\) in objective function fj for any j ≠ i. □
Literature
1. Baker, B.S.: Approximation algorithms for NP-complete problems on planar graphs. J. ACM 41(1), 153–180 (1994)
2. Bar-Yehuda, R., Even, S.: A linear-time approximation algorithm for the weighted vertex cover problem. J. Algorithms 2(2), 198–203 (1981)
3. Bazgan, C., Gourvès, L., Monnot, J.: Approximation with a fixed number of solutions of some multiobjective maximization problems. J. Discrete Algorithms 22, 19–29 (2013)
4. Bazgan, C., Jamain, F., Vanderpooten, D.: Approximate Pareto sets of minimal size for multi-objective optimization problems. Oper. Res. Lett. 43(1), 1–6 (2015)
5. Bazgan, C., Herzel, A., Ruzika, S., Thielen, C., Vanderpooten, D.: One-exact approximate Pareto sets. J. Glob. Optim. 80, 87–115 (2021)
6. Chvátal, V.: A greedy heuristic for the set-covering problem. Math. Oper. Res. 4(3), 233–235 (1979)
7. Daskalakis, C., Diakonikolas, I., Yannakakis, M.: How good is the chord algorithm? SIAM J. Comput. 45(3), 811–858 (2016)
8. Diakonikolas, I.: Approximation of Multiobjective Optimization Problems. PhD thesis, Columbia University (2011)
9. Diakonikolas, I., Yannakakis, M.: Succinct approximate convex Pareto curves. In: Proceedings of the 19th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 74–83 (2008)
10. Diakonikolas, I., Yannakakis, M.: Small approximate Pareto sets for biobjective shortest paths and other problems. SIAM J. Comput. 39(4), 1340–1371 (2009)
11. Ehrgott, M.: Hard to say it's easy – four reasons why combinatorial multiobjective programmes are hard. In: Proceedings of the 14th International Conference on Multiple Criteria Decision Making (MCDM), Lecture Notes in Economics and Mathematical Systems, vol. 487, pp. 69–80 (1998)
12. Ehrgott, M.: Multicriteria Optimization. Springer, Berlin (2005)
13. Fujito, T., Nagamochi, H.: A 2-approximation algorithm for the minimum weight edge dominating set problem. Discret. Appl. Math. 118(3), 199–207 (2002)
14. Garg, N.: Saving an epsilon: A 2-approximation for the k-MST problem in graphs. In: Proceedings of the 37th ACM Symposium on the Theory of Computing (STOC), pp. 396–402 (2005)
15. Glaßer, C., Reitwießner, C., Schmitz, H., Witek, M.: Approximability and hardness in multi-objective optimization. In: Proceedings of the 6th Conference on Computability in Europe (CiE), LNCS, vol. 6158, pp. 180–189 (2010)
16. Glaßer, C., Reitwießner, C., Schmitz, H., Witek, M.: Hardness and approximability in multi-objective optimization. Technical Report TR10-031, Electronic Colloquium on Computational Complexity (ECCC) (2010)
17. Halffmann, P., Ruzika, S., Thielen, C., Willems, D.: A general approximation method for bicriteria minimization problems. Theor. Comput. Sci. 695, 1–15 (2017)
18. Herzel, A., Ruzika, S., Thielen, C.: Approximation methods for multiobjective optimization problems: a survey. INFORMS J. Comput. 33(4), 1284–1299 (2021)
19. Hochbaum, D.S., Shmoys, D.B.: A unified approach to approximation algorithms for bottleneck problems. J. ACM 33(3), 533–550 (1986)
20. Klein, P.: A linear-time approximation scheme for TSP in undirected planar graphs with edge-weights. SIAM J. Comput. 37(6), 1926–1952 (2008)
21. Papadimitriou, C., Yannakakis, M.: On the approximability of trade-offs and optimal access of web sources. In: Proceedings of the 41st Annual IEEE Symposium on Foundations of Computer Science (FOCS), pp. 86–92 (2000)
22. Vassilvitskii, S., Yannakakis, M.: Efficiently computing succinct trade-off curves. Theor. Comput. Sci. 348(2-3), 334–356 (2005)
Metadata
Title: The Power of the Weighted Sum Scalarization for Approximating Multiobjective Optimization Problems
Authors: Cristina Bazgan, Stefan Ruzika, Clemens Thielen, Daniel Vanderpooten
Publication date: 22-11-2021
Publisher: Springer US
Published in: Theory of Computing Systems, Issue 1/2022
Print ISSN: 1432-4350 | Electronic ISSN: 1433-0490
DOI: https://doi.org/10.1007/s00224-021-10066-5