1 Introduction

Let F={f_1,…,f_n} be a set of n objects in the plane, with weights w_1,w_2,…,w_n>0, respectively. In this paper, we are interested in the problem of finding an independent set of maximum weight. Here, a set of objects is independent if no pair of objects intersects.

A natural approach to this problem is to build an intersection graph G=(V,E), where the objects form the vertices, two objects are connected by an edge if they intersect, and the weights are associated with the vertices. We then want the maximum-weight independent set in G. This is of course an NP-hard problem, and no approximation within a factor of |V|^{1−ε} is possible, for any ε>0, unless NP=ZPP [27]. In fact, even if the maximum degree of the graph is bounded by 3, no PTAS is possible [11].

In geometric settings, better results are possible. If the objects are fat (e.g., disks and squares), PTASs are known. One approach [15, 22] relies on a hierarchical spatial subdivision, such as a quadtree, combined with dynamic programming techniques [7]; it works even in the weighted case. Another approach [15] relies on a recursive application of a nontrivial generalization of the planar separator theorem [30, 38]; this approach is limited to the unweighted case. If the objects are not fat, only weaker results are known. For the problem of finding a maximum independent set of unweighted axis-parallel rectangles, an O(log log n)-approximation algorithm was recently given by Chalermsook and Chuzhoy [14]. For line segments, a roughly \(O(\sqrt{\mathrm{Opt}})\)-approximation is known [2]; recently, Fox and Pach [24] improved the approximation factor to n^ε, not only for line segments but also for curve segments that intersect a constant number of times.

In this paper we are interested in the problem of finding a large independent set in a set of weighted or unweighted pseudo-disks. A set of objects is a collection of pseudo-disks if the boundaries of every pair of them intersect at most twice. This case is especially intriguing because previous techniques seem powerless: it is unclear how one can adapt the quadtree approach [15, 22] or the generalized separator approach [15] to pseudo-disks.

Even a constant-factor approximation in the unweighted case is not easy. Consider the most obvious greedy strategy for disks (or fat objects): select the object f_i∈F of smallest radius, remove all objects that intersect f_i from F, and repeat. This is already sufficient to yield a constant-factor approximation by a simple packing argument [21, 32]. However, even this simple algorithm breaks down for pseudo-disks: as pseudo-disks are defined "topologically", how would one define the "smallest" pseudo-disk in a collection?

Independent Set via Local Search

Nevertheless, we are able to prove that a different strategy, local search, yields a constant-factor approximation for unweighted pseudo-disks. In the general setting, local search was used to get (roughly) a Δ/4-approximation to independent set, where Δ is the maximum degree of the graph; see [26] for a survey. In the geometric setting, Agarwal and Mustafa [2, Lemma 4.2] gave a proof that a local-search algorithm provides a constant-factor approximation for the special case of pseudo-disks that are rectangles; their proof does not immediately work for arbitrary pseudo-disks. Our proof provides a generalization of their lemma.

In fact, we are able to do more: we show that local search can actually yield a PTAS for unweighted pseudo-disks! As a byproduct, this gives a new PTAS for the special case of disks and squares. Though the local-search algorithm is slower than the quadtree-based PTAS in these special cases [15], it has the advantage that it only requires the intersection graph as input, not its geometric realization; previously, an algorithm with this property was known only in further special cases, such as unit disks [36]. Our result uses the planar separator theorem, but in contrast to the separator-based method in [15], a standard version of the theorem suffices, and it is needed only in the analysis, not in the algorithm itself.

Planar graphs are special cases of disk intersection graphs, and so our result applies. Of course, PTASs for planar graphs have been around for quite some time [9, 30], but the fact that a simple local-search algorithm already yields a PTAS for planar graphs is apparently not well known, if it was known at all.

We can further show that the same local-search algorithm gives a PTAS for independent set for fat objects in any fixed dimension, reproving known results in [15, 22].

This strategy, unfortunately, works only in the unweighted case.

Independent Set via LP

It is easy to extract a large independent set from a sparse unweighted graph. For example, we can greedily order the vertices from lowest to highest degree and consider them one by one, adding a vertex to the independent set if none of its neighbors has already been added. Let d_G be the average degree of G. A constant fraction of the vertices have degree O(d_G), and the selection of such a vertex eliminates only O(d_G) candidates; thus, this yields an independent set of size Ω(n/d_G). Alternatively, for better constants, we can order the vertices by a random permutation and do the same. The probability that a vertex v is included in the independent set is then at least 1/(d(v)+1). An easy calculation leads to Turán's theorem, which states that any graph G has an independent set of size ≥ n/(d_G+1) [6].
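To make this concrete, here is a minimal sketch of the random-permutation variant in Python (the graph representation and the function names are ours, for illustration only):

```python
import random

def random_greedy_independent_set(n, edges):
    """Turan-style greedy: scan the vertices in a uniformly random order,
    keeping a vertex whenever none of its neighbors has been kept yet.
    A vertex v is surely kept when it precedes all its neighbors in the
    order, which happens with probability 1/(d(v)+1)."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    order = list(range(n))
    random.shuffle(order)
    independent = set()
    for v in order:
        if adj[v].isdisjoint(independent):
            independent.add(v)
    return independent
```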

Now, our intersection graph G may not be sparse. We would like to "sparsify" it, so that the new intersection graph is sparse and the number of vertices is close to the size of the optimal solution. Interestingly, we show that this can be done by solving the LP relaxation of the independent set problem. The relaxation provides us with a fractional solution, where every object f_i has a value x_i∈[0,1] associated with it. Rounding this fractional solution into a feasible integral solution is not a trivial task, as no such scheme exists in the general case. Our basic approach is somewhat similar to the local-ratio technique [10]; more precisely, it is a variant of the contention resolution scheme of Chekuri et al. [18]. To this end, we prove a technical lemma (see Lemma 4.1) that shows that the total sum of terms of the form x_i x_j, over pairs f_i, f_j that intersect, is bounded by the boundary complexity of the union of μ objects of F, where μ=∑_i x_i is the size of the fractional solution. The proof contains a nice application of the standard Clarkson technique [19].

This lemma implies that, on average, if we pick each object f_i into our random set of objects with probability x_i, then the resulting intersection graph is sparse. This is by itself sufficient to get a constant-factor approximation for the unweighted case. For the weighted case, we follow a greedy approach: we examine the objects in a certain order (based on a quantity we call "resistance"), and choose each object with probability around x_i, conditioned on it not intersecting any previously chosen object. We argue, for our particular order, that each object is indeed chosen with probability Ω(x_i). This leads to a constant-factor approximation for weighted pseudo-disks.

Interestingly, our rounding scheme extends to more general settings, where one tries to find an independent set that maximizes a submodular target function. See Sect. 4.6 for details.

Linear Union Complexity

Our LP analysis works more generally for any class of objects with linear union complexity. We assume that the boundary of the union of any k of these objects has at most ϱk vertices, for some fixed ϱ. For pseudo-disks, the boundary of the union is made out of at most 6n−12 arcs, implying ϱ=6 in this case [28].

A family F of simply connected regions bounded by simple closed curves in general position in the plane is k-admissible (with k even) if for any pair f_i,f_j∈F, we have: (i) f_i∖f_j and f_j∖f_i are connected, and (ii) their boundaries intersect at most k times. Whitesides and Zhao [40] showed that the union of n such objects has at most 3kn−6 arcs; that is, ϱ=3k. So our LP analysis applies to this class of objects as well. For more results on union complexity, see the survey by Agarwal et al. [5].

Our local-search PTAS works more generally for unweighted admissible regions in the plane. For an arbitrary class of unweighted objects with linear union complexity in the plane, local search still yields a constant-factor approximation.

Rectangles

LP relaxation has been used before, notably, in Chalermsook and Chuzhoy's recent breakthrough in the case of axis-parallel rectangles [14], but their analysis is quite complicated. Although rectangles do not have linear union complexity in general, we observe in Sect. 5 that a variant of our approach yields a readily accessible proof of a sublogarithmic O(log n/log log n) approximation factor for rectangles, even in the weighted case, where previously only a logarithmic approximation was known [4, 12, 16] (Chalermsook and Chuzhoy's result is better but currently applies only to unweighted rectangles).

Discussion

Local search and LP relaxation are of course staples in the design of approximation algorithms, but are not seen as often in computational geometry. Our main contribution lies in the fusion of these approaches with combinatorial geometric techniques.

In a sense, one can view our results as complementary to the known results on approximate geometric set cover by Brönnimann and Goodrich [13] and Clarkson and Varadarajan [20]. They consider the problem of finding the minimum number of objects in F that cover a given point set. Their results imply, for instance, a constant-factor approximation for families of objects with linear union complexity. One version of their approach is indeed based on LP relaxation [23, 31]. The "dual" hitting set problem is to find the minimum number of points that pierce a given set of objects. Brönnimann and Goodrich's result combined with a recent result of Pyrga and Ray [37] also implies a constant-factor approximation for pseudo-disks for this piercing problem. The piercing problem is actually the dual of the independent set problem (this time, we are referring to linear programming duality). We remark, however, that the rounding schemes for set cover and piercing are based on a different combinatorial technique, namely, ε-nets, which is not sufficient to deal with independent set (one obvious difference is that independent set is a maximization problem).

In Theorem 4.6, we point out a combinatorial consequence of our LP analysis: for any collection of unweighted pseudo-disks, the ratio of the size of the minimum piercing set to the size of maximum independent set is at most a constant. (It is easy to see that the ratio is always at least 1; for disks or fat objects, it is not difficult to obtain a constant upper bound by packing arguments.) This result is of independent interest; for example, getting tight bounds on the ratio for axis-parallel rectangles is a long-standing open problem.

In an interesting independent development, Mustafa and Ray [35] have recently applied the local search paradigm to obtain a PTAS for the geometric set cover problem for (unweighted) pseudo-disks and admissible regions.

2 Preliminaries

In the following, we have a set F of n objects in the plane, such that the union complexity of any subset X⊆F is bounded by ϱ|X|, where ϱ is a constant. Here, the union complexity of X is the number of arcs on the boundary of the union of the objects of X. Let \(\mathcal{A}(\mathsf{F})\) denote the arrangement of F, and let \(\mathcal{V}(\mathsf{F})\) denote the set of vertices of \(\mathcal{A}(\mathsf{F})\).

In the following, we assume that deciding if two objects intersect takes constant time.

3 Approximation by Local Search: Unweighted Case

3.1 The Algorithm

In the unweighted case, we may assume that no object is fully contained in another.

We say that a subset L of F is b-locally optimal if L is an independent set and one cannot obtain a larger independent set from L by deleting at most b objects and inserting at most b+1 objects of F.

Our algorithm for the unweighted case simply returns a b-locally optimal solution for a suitable constant b, by performing a local search. We start with L←∅. For every subset X⊆F∖L of size at most b+1, we verify that X by itself is independent, and, furthermore, that the set Y⊆L of objects intersecting the objects of X is of size at most |X|−1. If so, we set L←(L∖Y)∪X. Every such exchange increases the size of L by at least one, and as such it can happen at most n times. Naively, there are \(\binom{n}{b+1}\) subsets X to consider, and for each such subset X it takes O(nb) time to compute Y. Therefore, the running time is bounded by O(n^{b+3}). (The running time can probably be improved by being a bit more careful about the implementation.)
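A minimal sketch of this local search in Python, assuming the intersection graph is available through a predicate intersects(f, g) (an interface of our choosing; the algorithm needs no geometry):

```python
from itertools import combinations

def b_local_search(objects, intersects, b):
    """Return a b-locally optimal independent set L: repeatedly look for a
    set X of at most b+1 new objects that is independent and conflicts with
    at most |X|-1 objects Y of L, and perform the exchange L <- (L \\ Y) | X."""
    L = set()
    improved = True
    while improved:
        improved = False
        candidates = [f for f in objects if f not in L]
        for size in range(1, b + 2):
            for X in combinations(candidates, size):
                if any(intersects(f, g) for f, g in combinations(X, 2)):
                    continue  # X itself is not independent
                Y = {g for g in L if any(intersects(f, g) for f in X)}
                if len(Y) < len(X):  # net gain of at least one object
                    L = (L - Y) | set(X)
                    improved = True
                    break
            if improved:
                break
    return L
```

Each successful exchange grows L, so the outer loop runs at most n times, matching the naive n^{O(b)} bound discussed above.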

3.2 Analysis

We present two alternative ways to analyze this algorithm. The first approach uses only the fact that the union complexity is low. The second approach is more direct, and uses the property that the regions are admissible.

3.2.1 Analysis Using Union Complexity

The following lemma by Afshani and Chan [1], which was originally intended for different purposes, will turn out to be useful here (the proof exploits linearity of planar graphs and the Clarkson technique [19]):

Lemma 3.1

Suppose we have n disjoint simply connected regions in the plane and a collection of disjoint curves, where each curve intersects at most k regions. Call two curves equivalent if they intersect precisely the same subset of regions. Then the number of equivalence classes is at most c_0 nk^2 for some constant c_0.

Let O be an optimal solution, and let L be a b-locally optimal solution. We will upper bound |O| in terms of |L|.

Let O′ denote the set of objects in O that intersect at least b+1 objects of L. Let O″=O∖O′ be the set of remaining objects in O.

If f∈O′ intersects f′∈L, then the pair of objects contributes at least two vertices to the boundary of the union of the objects of O′∪L. Indeed, the objects of O′ (resp. L) are disjoint from each other, since each of these sets is independent, and no object is contained inside another (by assumption). We remind the reader that for any subset X⊆F, the union complexity of the regions of X is ≤ϱ|X|. As such, the union complexity of O′∪L is at most ϱ(|O′|+|L|). Since every object of O′ intersects at least b+1 objects of L, we thus have

$$2(b+1)\bigl\vert\mathsf{O}'\bigr\vert \ \le\ \varrho\bigl(\bigl\vert\mathsf{O}'\bigr\vert+\vert\mathsf{L}\vert\bigr), \quad\text{that is,}\quad \bigl\vert\mathsf{O}'\bigr\vert \ \le\ \frac{\varrho\,\vert\mathsf{L}\vert}{2(b+1)-\varrho} \qquad\text{(assuming } 2(b+1)>\varrho\text{)}.$$

On the other hand, by applying Lemma 3.1 with L as the regions and the boundaries of the objects of O″ as the curves, the objects in O″ form at most c_0 b^2|L| equivalence classes. Each equivalence class contains at most b objects: otherwise, we could remove from L the at most b objects intersected by this class and insert b+1 objects of the class, obtaining an independent set larger than L. This would contradict the b-local optimality of L. Thus, \(\vert\mathsf{O}''\vert \le c_0 b^3 \vert\mathsf{L}\vert\).

Combining the two inequalities, we get

$$\vert\mathsf{O}\vert \ =\ \bigl\vert\mathsf{O}'\bigr\vert+\bigl\vert\mathsf{O}''\bigr\vert \ \le\ \biggl(\frac{\varrho}{2(b+1)-\varrho} + c_0 b^3\biggr)\vert\mathsf{L}\vert.$$

For example, we can set b=⌈ϱ/2⌉, and the approximation factor is then O(ϱ^3).

Theorem 3.2

Given a set of n unweighted objects in the plane with linear union complexity, for a sufficiently large constant b, any b-locally optimal independent set has size \(\varOmega(\operatorname{opt})\), where \(\operatorname{opt}\) is the size of the maximum independent set of the objects.

3.2.2 Better Analysis for Admissible Regions

A set of regions F is admissible if, for any two regions f,f′∈F, both f∖f′ and f′∖f are simply connected (i.e., connected and containing no holes). Note that we do not care how many times the boundaries of the two regions intersect; furthermore, by definition, no region is contained inside another.

Lemma 3.3

Let F be a set of admissible regions, consider an independent set of regions I⊆F, and let f∈F∖I be a region. Then the core region \(f_I = f \setminus \bigcup_{g \in I} g\) is non-empty and simply connected.

Proof

It is easy to verify that for the regions of I to split f into two connected components, they must intersect, which contradicts their disjointness. □

Lemma 3.4

Let X,Y⊆F be two independent sets of regions. Then the intersection graph G of X∪Y is planar.

Proof

Lemma 3.3 implies the planarity of this graph.


Indeed, for a region f∈X, the core \(f' = f \setminus \bigcup_{g \in Y} g\) is non-empty and simply connected. Place a vertex v_f inside this region, and for every object g∈Y that intersects f, create a curve from v_f to a point p_{f,g} on the boundary of g that lies inside f. Clearly, we can create these curves in such a way that they do not intersect each other.

Similarly, for every region g∈Y, we place a vertex v_g inside g, and connect it to all the points p_{f,g} placed on its boundary, by curves that are contained in g and are interior-disjoint. Together, these vertices and curves form a planar drawing of G. □

We need the following version of the planar separator theorem. Below, for a set of vertices U in a graph G, let Γ(U) denote the set of neighbors of U, and let \(\overline{\varGamma} ({U} ) =\varGamma ({U} ) \cup U\).

Lemma 3.5

[25]

There are constants c_1, c_2, and c_3, such that for any planar graph G=(V,E) with n vertices, and a parameter r, one can find a set X⊆V of size at most \(c_{1}n/\sqrt{r}\), and a partition of V∖X into n/r sets V_1,…,V_{n/r}, satisfying: (i) |V_i|≤c_2 r, (ii) Γ(V_i)∩V_j=∅ for i≠j, and (iii) \(\vert {\varGamma ({V_{i}} )\cap X} \vert \le c_{3}\sqrt{r}\).

Let O be the optimal solution and L be a b-locally optimal solution. Consider the bipartite intersection graph G of O∪L. By Lemma 3.4, we can apply Lemma 3.5 to G, for r=b/(c_2+c_3). Note that \(\vert {\overline{\varGamma}({V_{i}} )} \vert \leq c_{2}r +c_{3}\sqrt{r} < b\) for each i. Let o_i=|O∩V_i|, ℓ_i=|L∩V_i|, and s_i=|L∩Γ(V_i)∩X|.

Observe that o_i ≤ ℓ_i + s_i, for all i. Indeed, otherwise, we could throw away the vertices of \(\mathsf {L}\cap\overline{\varGamma} ({V_{i}} )\) from L, and replace them by O∩V_i, resulting in a better solution (the objects of O∩V_i have no neighbors in L outside \(\overline{\varGamma}(V_i)\)). This would contradict the local optimality of L. Thus,

$$\vert\mathsf{O}\vert \ \le\ \vert X\vert + \sum_i o_i \ \le\ \vert X\vert + \sum_i (\ell_i + s_i) \ \le\ \vert\mathsf{L}\vert + c_1\frac{n}{\sqrt{r}} + \frac{n}{r}\cdot c_3\sqrt{r} \ =\ \vert\mathsf{L}\vert + O\bigl(n/\sqrt{b}\bigr).$$

Since n ≤ |O|+|L|, it follows that \(\vert\mathsf{L}\vert \ge (1-O(1/\sqrt{b}))\vert\mathsf{O}\vert\). We can set b to the order of 1/ε^2, and we get the following.

Theorem 3.6

Given a set of n unweighted admissible regions in the plane, any b-locally optimal independent set has size \(\geq(1-O(1/\sqrt{b}))\operatorname{opt}\), where \(\operatorname{opt}\) is the size of the maximum independent set of the objects. In particular, one can compute an independent set of size \(\geq(1-\varepsilon)\operatorname{opt}\), in time \(n^{O(1/\varepsilon^{2})}\).

3.2.3 Analysis for Fat Objects in Any Fixed Dimension

We show that the same algorithm gives a PTAS for the case when the objects in F are fat. This result in fact holds in any fixed dimension d. For our purposes, we use the following definition of fatness: the objects in F are fat if for every axis-aligned hypercube B of side length r, we can find a constant number c of points such that every object that intersects B and has diameter at least r contains one of the chosen points.

Smith and Wormald [38] proved a family of geometric separator theorems, one version of which will be useful for us and is stated below (see also [15]):

Lemma 3.7

[38]

Given a collection of n fat objects in a fixed dimension d with constant maximum depth, there exists an axis-aligned hypercube B such that at most 2n/3 objects are inside B, at most 2n/3 objects are outside B, and at most O(n^{1−1/d}) objects intersect the boundary of B.

We need the following extension of Smith and Wormald’s separator theorem to multiple clusters (whose proof is similar to the extension of the standard planar separator theorem in [25]):

Lemma 3.8

There are constants c_1, c_2, c_3, and c_4, such that for any intersection graph G=(V,E) of n fat objects in a fixed dimension d with constant maximum depth, and a parameter r, one can find a set X⊆V of size at most c_1 n/r^{1/d}, and a partition of V∖X into n/r sets V_1,…,V_{n/r}, satisfying: (i) |V_i|≤c_2 r, (ii) Γ(V_i)∩V_j=∅ for i≠j, (iii) ∑_i |Γ(V_i)∩X| ≤ c_3 n/r^{1/d}, and (iv) |Γ(V_i)∩X| ≤ c_4 r.

Proof

Assume that all objects are unmarked initially. We describe a recursive procedure for a given set S of objects. If |S|≤c_2 r, then S is a "leaf" subset and we stop the recursion. Otherwise, we apply Lemma 3.7. Let S′ and S″ be the subsets of all objects inside and outside the separator hypercube B, respectively. Let \(\widehat{S}\) be the subset of all objects intersecting the boundary of B. We mark the objects in \(\widehat{S}\) and recursively run the procedure for the subset \(S'\cup\widehat{S}\) and for the subset \(S''\cup\widehat{S}\).

Note that some objects may be marked more than once. Let X be the set of all objects that have been marked at least once. For each leaf subset S_i, generate a subset V_i of all unmarked objects in S_i. Property (i) is obvious. Properties (ii) and (iv) hold because the unmarked objects in each leaf subset S_i can only intersect objects within S_i, and cannot intersect unmarked objects in other S_j's.

The total number of marks satisfies the recurrence T(n)=0 if n≤c_2 r, and

$$T(n) \ \le\ T\bigl(n_1 + O\bigl(n^{1-1/d}\bigr)\bigr) + T\bigl(n_2 + O\bigl(n^{1-1/d}\bigr)\bigr) + O\bigl(n^{1-1/d}\bigr)$$

otherwise, where n_1, n_2 ≤ 2n/3 denote the numbers of objects inside and outside the separator hypercube, respectively (so n_1+n_2 ≤ n). The solution is T(n)=O(n/r^{1/d}). Thus, we have |X|=O(n/r^{1/d}). Furthermore, for each object f∈X, the number of leaf subsets that f is in is equal to 1 plus the number of marks that f receives. Thus, (iii) follows. □
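In code, the recursive marking procedure might look as follows (a sketch; find_separator is a stand-in for the separator guaranteed by Lemma 3.7, which we do not implement here):

```python
def separator_partition(objects, c2, r, find_separator):
    """Recursive partition from the proof of Lemma 3.8.  `find_separator(S)`
    is assumed to return (inside, outside, boundary), where `boundary` holds
    the objects crossing the separator hypercube of Lemma 3.7."""
    marked, leaves = set(), []

    def recurse(S):
        if len(S) <= c2 * r:          # leaf subset: stop the recursion
            leaves.append(S)
            return
        inside, outside, boundary = find_separator(S)
        marked.update(boundary)       # objects crossing the cube get marked
        recurse(inside + boundary)
        recurse(outside + boundary)

    recurse(list(objects))
    # X = marked objects; each V_i = unmarked objects of one leaf subset
    return marked, [[f for f in S if f not in marked] for S in leaves]
```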

Let O be the optimal solution and L be a b-locally optimal solution. Consider the bipartite intersection graph G of O∪L, which has maximum depth 2. We proceed as in the proof from Sect. 3.2.2, using Lemma 3.8 instead of Lemma 3.5. Note that (iii)–(iv) are weaker properties but are sufficient for the same proof to go through. The only differences are that square roots are replaced by dth roots, and we now set r=b/(c_2+c_4), so that \(\vert {\overline{\varGamma}({V_{i}} )} \vert \leq c_{2}r + c_{4}r < b\). We conclude:

Theorem 3.9

Given a set of n fat objects in a fixed dimension d, any b-locally optimal independent set has size \(\geq(1-O(1/b^{1/d}))\operatorname{opt}\), where \(\operatorname{opt}\) is the size of the maximum independent set of the objects. In particular, one can compute an independent set of size \(\geq(1-\varepsilon)\operatorname{opt}\), in time \(n^{O(1/\varepsilon^{d})}\).

4 Approximation by LP Relaxation: Weighted Case

4.1 The Algorithm

We are interested in computing a maximum-weight independent set of the objects in F={f_1,…,f_n}, with weights w_1,…,w_n, respectively. To this end, let us solve the following LP relaxation:

$$\begin{aligned} \max\ \ & \sum_{i=1}^{n} w_i x_i && \\ \text{s.t.}\ \ & \sum_{f_i \ni \mathsf{p}} x_i \ \le\ 1 && \text{for each } \mathsf{p} \in \mathcal{V}(\mathsf{F}), \\ & 0 \ \le\ x_i \ \le\ 1 && \text{for } i=1,\ldots,n, \end{aligned} \tag{1}$$

where \(\mathcal{V}(\mathsf{F})\) denotes the set of vertices of the arrangement \(\mathcal{A}(\mathsf{F})\).

In the following, x_i will refer to the value assigned to the ith variable by the solution of the LP. Similarly, Opt=∑_i w_i x_i will denote the weight of the relaxed optimal solution, which is at least the weight \(\operatorname{opt}\) of the optimal integral solution.

We will assume, for the time being, that no two objects of F fully contain each other.
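For concreteness, here is a sketch of LP (1) in the special case of disks, where the arrangement vertices are simply the pairwise intersection points of the bounding circles; the use of scipy.optimize.linprog and all helper names are our own choices, not part of the paper:

```python
import numpy as np
from scipy.optimize import linprog

def solve_lp_relaxation(disks, weights):
    """LP (1) for disks given as (cx, cy, r): maximize sum w_i x_i subject
    to the total x-value of the disks containing p being at most 1, for
    every arrangement vertex p.  Assumes no disk contains another."""
    n = len(disks)
    rows = []
    for i in range(n):
        for j in range(i + 1, n):
            for p in circle_intersections(disks[i], disks[j]):
                rows.append([1.0 if contains(disks[k], p) else 0.0
                             for k in range(n)])
    if not rows:                      # no boundary intersections at all
        return np.ones(n)
    res = linprog(-np.asarray(weights, dtype=float), A_ub=np.asarray(rows),
                  b_ub=np.ones(len(rows)), bounds=[(0.0, 1.0)] * n,
                  method="highs")
    return res.x                      # fractional values x_1, ..., x_n

def circle_intersections(d1, d2):
    """Intersection points of two circles (at most two)."""
    (x1, y1, r1), (x2, y2, r2) = d1, d2
    dx, dy = x2 - x1, y2 - y1
    d = np.hypot(dx, dy)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []                     # boundaries disjoint (or concentric)
    a = (r1 * r1 - r2 * r2 + d * d) / (2 * d)
    h = np.sqrt(max(r1 * r1 - a * a, 0.0))
    mx, my = x1 + a * dx / d, y1 + a * dy / d
    return [(mx + h * dy / d, my - h * dx / d),
            (mx - h * dy / d, my + h * dx / d)]

def contains(disk, p, eps=1e-9):
    x, y, r = disk
    return (p[0] - x) ** 2 + (p[1] - y) ** 2 <= r * r + eps
```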

For every object f, let its resistance with respect to a collection G⊆F be the total sum of the values of the objects of G that intersect it. Formally, we have

$$\eta({f,\mathsf{G}}) \ =\ \sum_{g \in \mathsf{G},\ g \ne f,\ g \cap f \ne \emptyset} x_g.$$

We pick the object in F with minimal resistance, and set it as the first element in the permutation Π of the objects. We compute the permutation by repeatedly performing this "extract-min", with the variant that objects already in the permutation are ignored. Formally, if the first i objects computed in the permutation are Π_i=〈π_1,…,π_i〉, then the (i+1)st object π_{i+1} is the one realizing

$$\eta_{i+1} = \min_{f\in\mathsf{F}\setminus\varPi_i} \eta({f,\mathsf{F} \setminus \varPi_i} ). $$
(2)

The algorithm starts with an empty candidate set C and an empty independent set I, and scans the objects according to the permutation in reverse order. At the ith stage, the algorithm first decides whether to put the object π_{n−i} in C, by flipping a coin that comes up heads with probability x(π_{n−i})/τ, where x(π_{n−i}) is the value assigned by the LP to the object π_{n−i} and τ is a parameter to be determined shortly. If π_{n−i} is put into C, then we further check whether π_{n−i} intersects any of the objects already added to the independent set I. If it does not intersect any object in I, then the algorithm adds π_{n−i} to I and continues to the next iteration.

At the end of the execution, the set I is returned as the desired independent set.
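Putting the permutation and the scan together, a sketch of the whole rounding scheme reads as follows (again with an assumed intersects predicate; by Lemma 4.3 below, τ=Θ(u(n)/n), a constant for pseudo-disks):

```python
import random

def round_lp(x, intersects, tau):
    """Rounding scheme of Sect. 4.1: order the objects by repeated
    extract-min on resistance, then scan the permutation in reverse,
    keeping object j with probability x[j]/tau whenever it avoids
    everything kept so far."""
    remaining = set(range(len(x)))
    pi = []
    while remaining:
        # resistance of h: total LP value of the remaining objects meeting h
        f = min(remaining,
                key=lambda h: sum(x[g] for g in remaining
                                  if g != h and intersects(h, g)))
        pi.append(f)
        remaining.remove(f)
    I = []
    for j in reversed(pi):
        if random.random() < x[j] / tau:              # j enters C
            if all(not intersects(j, g) for g in I):  # no conflict with I
                I.append(j)
    return I
```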

4.2 Analysis

Let F be a set of n objects in the plane, and let u(m) be the maximum union complexity of any m≤n objects of F. Furthermore, we assume that the function u(⋅) is a monotone increasing function which is well behaved; namely, u(n)/n is a non-decreasing function, and there exists a constant c, such that u(xr)≤c·u(r), for any r and 1≤x≤2. In the following, a vertex p of \(\mathcal{A}(\mathsf{F})\) is denoted by (p,i,j), to indicate that it is the result of the intersection of the boundaries of the ith and jth objects.

The key to our analysis lies in the following inequality, which we prove by adapting the Clarkson technique [19].

Lemma 4.1

Let H be any subset of F. Then \(\sum_{(\mathsf{p},i,j) \in \mathcal{V}(\mathsf{H})} x_i x_j = O(\mathsf{u}(\mu))\), where \(\mu = \sum_{f_i \in \mathsf{H}} x_i\).

Proof

Consider a random sample R′ of H, where an object f_i is picked with probability x_i/2. Clearly, a vertex \((\mathsf{p},i,j) \in \mathcal{V}(\mathsf{H})\) appears on the boundary of the union of the objects of R′ if and only if f_i and f_j are both picked, and none of the other objects that cover p is chosen into R′. In particular, let \(\mathcal{V}'\) denote the set of vertices on the boundary of the union of the objects of R′. We have

$$\mathop{\mathbf{E}}\bigl[\bigl\vert\mathcal{V}'\bigr\vert\bigr] \ =\ \sum_{(\mathsf{p},i,j)\in\mathcal{V}(\mathsf{H})} \frac{x_i}{2}\cdot\frac{x_j}{2} \prod_{f_k \ni \mathsf{p},\ k\ne i,j} \biggl(1-\frac{x_k}{2}\biggr) \ \ge\ \frac{1}{8}\sum_{(\mathsf{p},i,j)\in\mathcal{V}(\mathsf{H})} x_i x_j,$$

by the inequality ∏_k(1−a_k) ≥ 1−∑_k a_k for a_k∈[0,1], since \(\sum_{f_{k}\ni\mathsf{p}} x_{k} \leq1\) (as the LP solution is valid). On the other hand, the number of vertices on the boundary of the union is at most u(|R′|). Thus,

$$\frac{1}{8}\sum_{(\mathsf{p},i,j)\in\mathcal{V}(\mathsf{H})} x_i x_j \ \le\ \mathop{\mathbf{E}}\bigl[\bigl\vert\mathcal{V}'\bigr\vert\bigr] \ \le\ \mathop{\mathbf{E}}\bigl[\mathsf{u}\bigl(\bigl\vert R'\bigr\vert\bigr)\bigr].$$

To bound the last expression, observe that \(\mathbf{E}[\vert R'\vert] = \mu/2\). Furthermore, by the Chernoff inequality, Pr[|R′|>(t+1)μ] ≤ 2^{−t}. Thus, \(\mathbf{E}[\mathsf{u}(\vert R'\vert)] \le \sum_{t\ge0} 2^{-t}\,\mathsf{u}((t+2)\mu) = O(\mathsf{u}(\mu))\), since u(⋅) is well behaved. □

Lemma 4.2

For any i, the resistance of the ith object π_i (as defined by Eq. (2)) is η_i = O(u(n)/n).

Proof

Fix an i, and let K=F∖{π_1,…,π_{i−1}}. By Lemma 4.1,

$$\eta_i \ \le\ \frac{1}{\mu}\sum_{f_j \in K} x_j\,\eta(f_j, K) \ \le\ \frac{1}{\mu}\sum_{(\mathsf{p},j,k)\in\mathcal{V}(K)} x_j x_k \ =\ O\biggl(\frac{\mathsf{u}(\mu)}{\mu}\biggr) \ =\ O\biggl(\frac{\mathsf{u}(n)}{n}\biggr),$$

by the monotonicity of u(n)/n, where μ=∑_{f_j∈K} x_j ≤ n (the first inequality holds because the minimum resistance is at most the weighted average, and the second because each intersecting pair contributes at least two vertices to \(\mathcal{V}(K)\)). It follows that η_i = O(u(n)/n). □

Lemma 4.3

For a sufficiently large constant c, setting τ = c·u(n)/n, the algorithm in Sect. 4.1 outputs in expectation an independent set of weight Ω((n/u(n))Opt).

Proof

Indeed, the jth object in the permutation is added to C with probability x_j/τ. Let K be the set of all objects of F that were already considered and intersect f_j. Clearly, \(\sum_{f_k \in K} x_k\) is exactly the resistance η_j of f_j. Furthermore, by picking c large enough, we have η_j ≤ τ/2, by Lemma 4.2. This implies that

$$\mathop{\mathbf{Pr}}\bigl[{f_j \in I \mid f_j \in C}\bigr] \ \ge\ \prod_{f_k \in K} \biggl(1-\frac{x_k}{\tau}\biggr) \ \ge\ 1 - \frac{\eta_j}{\tau} \ \ge\ \frac{1}{2},$$

by the inequality ∏_k(1−a_k) ≥ 1−∑_k a_k for a_k∈[0,1]. Now, we have

$$y_j = \mathop{\mathbf{Pr}} [ {f_j \in I} ] = \mathop{\mathbf{Pr}} \bigl[ {f_j \in I \mid{f_j \in C}} \bigr] \cdot\mathop{\mathbf{Pr}} [{f_j \in C} ] \geq\frac {x_j}{2 \tau}.$$

As such, the expected value of the independent set output is

$$\sum_j y_j w_j \ \ge\ \sum_j \frac{x_j}{2 \tau}\, w_j \ =\ \varOmega \biggl({\frac{\mathrm{Opt}}{\tau}} \biggr) \ =\ \varOmega\biggl({\frac{n}{\mathsf{u} ({n} )} \mathrm{Opt}} \biggr),$$

as τ = c·u(n)/n. □

4.3 Remarks

Variant

In the conference version of this paper [17], we proposed a different variant of the algorithm, where instead of ordering the objects by increasing resistance, we order the objects by decreasing weights. An advantage of the resistance-based algorithm is that it is oblivious to (i.e., does not look at) the input weights. This feature is shared, for example, by Varadarajan’s recent algorithm for weighted geometric set cover via “quasi-random sampling” [39]. Another advantage of the resistance-based algorithm is its extendibility to other settings; see Sect. 4.6.

Derandomization

The variance of the weight of the returned independent set I could be high, but fortunately the algorithm can be derandomized by the standard method of conditional probabilities/expectations [34]. To this end, observe that the above analysis provides us with a constructive way to estimate the expected weight of the generated solution. It is now straightforward to decide, for each object, whether or not to include it in the generated solution, using conditional probabilities: for each object, we compute the estimated expected weight of the solution if the object is included and if it is excluded, and pick the option with the higher value.
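Schematically, this is the standard conditional-expectations loop; in the sketch below, cond_expected_weight is a placeholder for the constructive estimator supplied by the analysis, which we do not spell out here:

```python
def derandomize(n, cond_expected_weight):
    """Method of conditional expectations: fix the coin flips one object at
    a time, each time keeping the outcome whose conditional expected weight
    (as estimated constructively by the analysis) is larger."""
    decisions = {}
    for i in range(n):
        take = cond_expected_weight({**decisions, i: True})
        skip = cond_expected_weight({**decisions, i: False})
        decisions[i] = take >= skip
    return decisions
```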

Coping with Object Containment

We have assumed that no object is fully contained in another, but this assumption can be removed by adding the constraint \(\sum_{f_{i}\subset f_{j}} x_{j} \le 1\) for each i to the LP. Then, for any subset H of F, we have

$$\sum_{f_i, f_j \in \mathsf{H},\ f_i \subset f_j} x_i x_j \ \le\ \sum_{f_i \in \mathsf{H}} x_i \sum_{f_j \supset f_i} x_j \ \le\ \sum_{f_i \in \mathsf{H}} x_i \ =\ \mu,$$

and so Lemma 4.1 still holds, as the contribution of the containing pairs is only O(μ)=O(u(μ)). The rest of the analysis then holds verbatim.

Time to Solve the LP

This LP is a packing LP with O(n^2) inequalities and n variables. As such, it can be (1+ε)-approximated in O(n^3 + ε^{−2} n^2 log n) = O(n^3) time by a randomized algorithm that succeeds with high probability [29]. For our purposes, it is sufficient to set ε to be a sufficiently small constant, say ε=10^{−4}.

We have thus proved:

Theorem 4.4

Given a set of n weighted objects in the plane with union complexity O(u(n)), one can compute an independent set of total weight \(\varOmega( (n / \mathsf{u}({n} )) \operatorname{opt})\), where \(\operatorname{opt}\) is the maximum weight over all independent sets of the objects. The running time of the randomized algorithm is O(n^3), and polynomial for the deterministic version.

The running time of the deterministic algorithm of Theorem 4.4 is dominated by the time it takes to deterministically solve (approximately) the LP. One can use the ellipsoid algorithm to this end, but faster algorithms are known; see [29] and the references therein.

Corollary 4.5

Given a set of n weighted pseudo-disks in the plane, one can compute, in O(n^3) time, a constant-factor approximation to the maximum-weight independent set of pseudo-disks.

Theorem 4.4 can be applied to cases where the union complexity is low. Even in the case of fat objects, where PTASs are known [15, 22], the above approach is still interesting as it can be extended to more general settings, as noted in Sect. 4.6.

4.4 A Combinatorial Result: Piercing Number

In the unweighted case, we obtain the following result as a byproduct:

Theorem 4.6

Given a set of n pseudo-disks in the plane, let \(\operatorname{opt}\) be the size of the maximum independent set and let \(\operatorname{opt}'\) be the size of the minimum set of points that pierce all the pseudo-disks. Then \(\operatorname{opt}=\varOmega(\operatorname{opt}')\).

Proof

By the preceding analysis, we have \(\operatorname{opt}=\varOmega(\mathrm{Opt})\), i.e., the integrality gap of our LP is a constant. (Here, all the weights are equal to 1.)

For piercing, the LP relaxation is

$$\min\ \sum_{\mathsf{p}\in\mathcal{V}(\mathsf{F})} y_{\mathsf{p}} \quad \text{s.t.} \quad \sum_{\mathsf{p} \in f_i} y_{\mathsf{p}} \ \ge\ 1 \ \ \text{for } i=1,\ldots,n, \qquad y_{\mathsf{p}} \ \ge\ 0 \ \ \text{for each } \mathsf{p}\in\mathcal{V}(\mathsf{F}).$$

Let Opt′ be the value of this LP. Known analyses [23, 31] imply that the integrality gap of this LP is constant if there exist ε-nets of linear size for a corresponding class of hypergraphs formed by the objects of F and the points of \(\mathcal{V}(\mathsf{F})\). Pyrga and Ray [37, Theorem 12] obtained such an existence proof for this ("primal") hypergraph for pseudo-disks. Thus, \(\operatorname{opt}'=O(\mathrm{Opt}')\).

To conclude, observe that the two LPs are precisely the dual of each other, and so Opt=Opt′. □

4.5 A Discrete Version of the Independent Set Problem

We now show that our algorithm can be extended to solve a variant of the independent set problem where we are given not only a set F of n weighted objects but also a set P of m points. The goal is to select a maximum-weight subset F′⊆F such that each point p∈P is contained in at most one object of F′. (The original problem corresponds to the case where P is the entire plane.) Unlike in the original independent set problem, it is not clear if local search yields a good approximation here, even in the unweighted case.

We can use the same LP as in Sect. 4.1 to solve this problem, except that we now have a constraint for each p∈P instead of for each vertex of the arrangement \(\mathcal{A}(\mathsf{F})\). In the rest of the algorithm and analysis, we just reinterpret "f_i∩f_j≠∅" to mean "f_i∩f_j∩P≠∅".
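Concretely, the only change needed in an implementation is the conflict test, e.g. (with an assumed membership predicate contains):

```python
def conflicts(f_i, f_j, P, contains):
    """Discrete version: f_i and f_j conflict only if some point of P
    lies in both of them, i.e., the intersection of f_i, f_j, and P
    is non-empty."""
    return any(contains(f_i, p) and contains(f_j, p) for p in P)
```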

Lemma 4.1 is now replaced by the following.

Lemma 4.7

Let H be any subset of F. Then \(\sum_{f_i, f_j \in \mathsf{H},\ f_i \cap f_j \cap \mathsf{P} \ne \emptyset} x_i x_j = O(\mathsf{u}(\mu))\), where \(\mu = \sum_{f_i \in \mathsf{H}} x_i\).

Proof

Consider a random sample R of H, where an object f_i is picked with probability x_i/2. Let \(\mathcal{D}\) be the set of cells in the vertical decomposition of the complement of the union of the objects of R. For a cell \(\varDelta\in\mathcal{D}\), let \(x_{\varDelta}= \sum_{f_{i} \in \mathsf{H}, f_{i} \cap\mathrm{int}(\varDelta) \ne\emptyset} x_{i}\) be the total energy of the objects of H that intersect the interior of Δ. A minor modification of the analysis of Clarkson [19] implies that, for any constant c, \(\mathbf{E}[\sum_{\varDelta\in\mathcal{D}} x_{\varDelta}^{\,c}] = O(\mathsf{u}(\mu))\).

A point p∈P is active for R if it is outside the union of the objects of R. Let P′ be the set of active points in P. For each pair f_i, f_j∈H with f_i∩f_j∩P≠∅, fix a witness point p_{ij}∈f_i∩f_j∩P. If p_{ij} is active, then it lies in some cell \(\varDelta\in\mathcal{D}\), and both f_i and f_j intersect the interior of that cell. We have

$$\sum_{f_i, f_j \in \mathsf{H},\ \mathsf{p}_{ij} \in \mathsf{P}'} x_i x_j \ \le\ \sum_{\varDelta\in\mathcal{D}} x_{\varDelta}^{2}.$$

Furthermore, by arguing as in Lemma 4.1, every point p∈P has probability at least \(\prod_{f_{i} \in\mathsf{H},\mathsf{p}\in f_{i}} (1-x_{i}/2) \geq1/2\) to be active. Thus,

$$\sum_{f_i, f_j \in \mathsf{H},\ f_i \cap f_j \cap \mathsf{P} \ne \emptyset} x_i x_j \ \le\ 2\,\mathbf{E}\Biggl[\,\sum_{\varDelta\in\mathcal{D}} x_{\varDelta}^{2}\Biggr] \ =\ O\bigl(\mathsf{u}(\mu)\bigr),$$

again, by arguing as in Lemma 4.1. □

The above proof is inspired by a proof from [3]. There is an alternative argument based on shallow cuttings, but the known proof for the existence of such cuttings requires a more complicated sampling analysis [33].

The rest of the analysis then goes through unchanged. We therefore obtain an O(1)-approximation algorithm for the discrete independent set problem for unweighted or weighted pseudo-disks in the plane.

4.6 Contention Resolution and Submodular Functions

The algorithm of Theorem 4.4 can be interpreted as a contention resolution scheme; see Chekuri et al. [18] for details. The basic idea is that, given a feasible fractional solution x∈[0,1]^n, a contention resolution scheme scales down every coordinate of x (by some constant b) such that, given a random sample C of the objects according to x (i.e., the ith object f_i is picked with probability b·x_i), the scheme computes (in our case) an independent set I such that Pr[f_i∈I ∣ f_i∈C] ≥ c, for some positive constant c. The proof of Lemma 4.3 implies exactly this property in our case.

As such, we can apply the results of Chekuri et al. [18] to our setting. In particular, they show that one can obtain a constant-factor approximation to the optimal solution when considering independence constraints and a submodular target function. Intuitively, submodularity captures the diminishing-returns nature of many optimization problems. Formally, a function g:2^F→ℝ is submodular if g(X∪Y)+g(X∩Y) ≤ g(X)+g(Y), for any X,Y⊆F.

As a concrete example, consider a situation where each object in F represents the coverage area of a single antenna. If a point is contained inside such an object, it is fully serviced. However, even if it is not contained in an object, it might get some reduced coverage from the closest object in the chosen set. In particular, let ν(r) be a coverage function specifying the amount of coverage a point gets if it is at distance r from the closest object in the current set I; we assume ν(⋅) is a monotone decreasing function. Because of interference between antennas, we require that the regions these antennas represent do not intersect (i.e., the set of antennas chosen needs to be an independent set).
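For instance, with disks as the objects, this target function can be evaluated as follows (a sketch; the disk representation and the names are ours):

```python
import math

def coverage(points, chosen, nu):
    """Total coverage of the points by the chosen disks (cx, cy, r): each
    point contributes nu(d), where d is its distance to the nearest disk
    (zero if the point lies inside some disk).  nu must be monotone
    decreasing, and `chosen` must be non-empty."""
    total = 0.0
    for px, py in points:
        d = min(max(math.hypot(px - cx, py - cy) - r, 0.0)
                for cx, cy, r in chosen)
        total += nu(d)
    return total
```

For example, coverage(P, H, lambda r: 1.0 / (1.0 + r)) scores a set H under a hyperbolically decaying coverage function; Lemma 4.8 below shows that any such objective is submodular.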

Lemma 4.8

Let P be a set of points, and let F be a set of objects in the plane. Let ν(⋅) be a monotone decreasing function. For a subset H⊆F, consider the target function

$$\alpha_{\mathsf{P}} ({\mathsf{H}} ) = \sum_{\mathsf {p}\in\mathsf{P}}\nu\bigl({\mathsf{d}_{\mathsf{p}} ({\mathsf{H}} )} \bigr),$$

where d_p(H) is the distance of p to its nearest neighbor in H. Then the function α_P(H) is submodular.

Proof

The proof is not hard and is included for the sake of completeness. For a point p∈P, it is sufficient to prove that the function ν(d_p(H)) is submodular, as α_P(H) is just the sum of these functions over p∈P, and a sum of submodular functions is submodular.

To prove the latter, it is sufficient to prove that for any sets X⊆Y⊆F, and an object f∈F∖Y,

$$\nu\bigl({\mathsf{d}_{\mathsf{p}} \bigl({X \cup\{{f} \}} \bigr)} \bigr) - \nu\bigl({\mathsf{d}_{\mathsf{p}} ({X } )} \bigr)\geq \nu\bigl({\mathsf{d}_{\mathsf{p}} \bigl({Y \cup\{ {f} \}}\bigr)} \bigr) - \nu\bigl({\mathsf{d}_{\mathsf{p}}({Y } )} \bigr).$$

To this end, let x and y be the closest objects to p in X and Y, respectively. Similarly, let ℓ_x, ℓ_y, ℓ_f be the distances of p to x, y, and f, respectively. The above then becomes

$$\nu\bigl({ \min({\ell_x,\ell_f} ) } \bigr) - \nu\bigl({\ell_x } \bigr)\geq \nu\bigl({\min({\ell_y, \ell_f} ) }\bigr) - \nu\bigl({\ell_y}\bigr). $$
(3)

Observe that as X⊆Y, we have ℓ_y ≤ ℓ_x, so ν(ℓ_y) ≥ ν(ℓ_x) as ν is monotone decreasing. Now, one of the following holds:

  • If ℓ_f ≤ ℓ_y ≤ ℓ_x, then Eq. (3) becomes ν(ℓ_f)−ν(ℓ_x) ≥ ν(ℓ_f)−ν(ℓ_y), which holds as ν(ℓ_x) ≤ ν(ℓ_y).

  • If ℓ_y ≤ ℓ_f ≤ ℓ_x, then Eq. (3) becomes ν(ℓ_f)−ν(ℓ_x) ≥ ν(ℓ_y)−ν(ℓ_y), which is equivalent to ν(ℓ_f) ≥ ν(ℓ_x). This in turn holds by the decreasing monotonicity of ν, as ℓ_f ≤ ℓ_x.

  • If ℓ_y ≤ ℓ_x ≤ ℓ_f, then Eq. (3) becomes 0 = ν(ℓ_x)−ν(ℓ_x) ≥ ν(ℓ_y)−ν(ℓ_y) = 0.

We conclude that α P (⋅) is submodular. □

To solve our problem, using the framework of Chekuri et al. [18], we need the following:

  1. (A) The target function is indeed submodular and can be computed efficiently. This is Lemma 4.8.

  2. (B) State an LP that solves the fractional problem (and whose polytope contains the optimal integral solution). This is just the original LP; see Eq. (1).

  3. (C) Observe that our rounding (i.e., contention resolution) scheme is still applicable in this case. This follows by Lemma 4.3.

One can now plug this into the algorithm of Chekuri et al. [18] and get an Ω(α)-approximation algorithm, where α is the rounding scheme gap. The algorithm of Chekuri et al. [18] uses a continuous optimization to find the maximum of a multi-linear extension of the target function inside the feasible polytope, and then uses this fractional value with the rounding scheme to get the desired approximation.

We thus get the following.

Problem 4.9

Let P be a set of n points in the plane, and let F be a set of m objects in the plane. Let ν(r) be a monotone decreasing function, which returns the amount of coverage a point gets if it is at distance r from one of the regions of F. Consider the scoring function that, for an independent set H⊆F, returns the total coverage it provides; that is,

$$\alpha_{\mathsf{P}} ({\mathsf{H}} ) = \sum_{\mathsf {p}\in\mathsf{P}}\nu\bigl({\mathsf{d}_{\mathsf{p}} ({\mathsf{H}} )} \bigr).$$

We refer to the problem of computing the independent set maximizing this function as the partial coverage problem.

Theorem 4.10

Given a set of n points in the plane and a set of m unweighted objects in the plane with union complexity O(u(n)), one can compute, in polynomial time, an independent set. Furthermore, this independent set provides an Ω(n/u(n))-approximation to the optimal solution of the partial coverage problem.

Observe that the above algorithm applies to any scoring function that is submodular. In particular, one can easily encode into this function weights for the ranges, or other similar considerations.

5 Weighted Rectangles

5.1 The Algorithm

For the (original) independent set problem in the case of weighted axis-aligned rectangles, we can solve the same LP, where the set \(\mathcal{V}\) contains both the intersection points of the rectangle boundaries and the corners of the given rectangles.

Define two subgraphs G_1 and G_2 of the intersection graph: if the boundaries of f_i and f_j intersect zero or two times, put the edge f_i f_j in G_1; if the boundaries intersect four times instead (i.e., the rectangles cross), put f_i f_j in G_2.

We first extract an independent set I of G_1 using the algorithm of Theorem 4.4.

It is well known (e.g., see [8]) that G_2 forms a perfect graph (specifically, a comparability graph), so we find a Δ-coloring of the rectangles of I in G_2, where Δ denotes the maximum clique size (a clique of pairwise-crossing rectangles has a common point, so Δ is at most the maximum depth of a point). Let I′ be the color subclass of I of the largest total weight. Clearly, the objects in I′ are independent, and we output this set.
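Both ingredients are easy to implement for rectangles. In the sketch below (our own helper names; rectangles are given as (x1, y1, x2, y2)), we test the crossing relation and Δ-color the crossing graph: "crosses, with a strictly narrower x-extent" is a partial order whose comparability graph is exactly G_2, so coloring each rectangle by the length of the longest chain ending at it (Mirsky's theorem) is proper and uses Δ colors; taking the heaviest color class then yields I′.

```python
def crosses(a, b):
    """Boundaries meet four times: one rectangle's x-extent lies strictly
    inside the other's, while the y-extents nest the opposite way."""
    def inside(lo1, hi1, lo2, hi2):   # [lo1, hi1] strictly inside [lo2, hi2]
        return lo2 < lo1 and hi1 < hi2
    return ((inside(a[0], a[2], b[0], b[2]) and inside(b[1], b[3], a[1], a[3]))
            or (inside(b[0], b[2], a[0], a[2]) and inside(a[1], a[3], b[1], b[3])))

def chain_coloring(rects):
    """Color each rectangle by the length of the longest chain of crossing,
    successively wider rectangles ending at it (Mirsky's theorem)."""
    order = sorted(range(len(rects)), key=lambda i: rects[i][2] - rects[i][0])
    color = {}
    for i in order:                   # narrower predecessors come first
        preds = [color[j] for j in color if crosses(rects[i], rects[j])]
        color[i] = 1 + max(preds, default=0)
    return color
```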

5.2 Analysis

As in Lemma 4.1, let H be any subset of F. Observe that if \(f_i f_j \in \mathsf{G}_1\), then f_i contains a corner of f_j or vice versa. Letting V_j denote the set of corners of f_j, we have

$$\sum_{f_i f_j \in \mathsf{G}_1,\ f_i, f_j \in \mathsf{H}} x_i x_j \ \le\ \sum_{f_j \in \mathsf{H}}\ \sum_{\mathsf{p} \in V_j}\ \sum_{f_i \ni \mathsf{p}} x_i x_j \ \le\ 4 \sum_{f_j \in \mathsf{H}} x_j \ =\ 4\mu.$$

Applying the same analysis as before, we conclude that the expected total weight of I (i.e., the independent set in G_1) computed by the algorithm is Ω(Opt).

To analyze I′, we need a new lemma which bounds the maximum depth Δ of the random sample R of rectangles drawn by the algorithm (each f_i is picked into R with probability at most x_i); since I⊆R, this also bounds the maximum clique size in I:

Lemma 5.1

Δ=O(logn/loglogn) with probability at least 1−1/n.

Proof

Fix a parameter t>1, and fix a point \(\mathsf{p}\in\mathcal{V}\). The depth of p in R, denoted by depth(p,R), is a sum of independent 0-1 random variables with overall mean \(\mu= \sum_{\mathsf{p}\in f_{i}}x_{i} \le 1\). By the Chernoff bound [34, p. 68],

$$\mathop{\mathbf{Pr}} \bigl[ {\mathrm{depth}(\mathsf{p},\mathsf {R}) > (1+\delta)\mu} \bigr] < \biggl[\frac{e^\delta}{(1+\delta)^{1+\delta}}\biggr]^\mu $$

for any δ>0 (possibly large). By setting δ so that t=(1+δ)μ, this probability becomes less than (e/t)^t. Since \(\vert\mathcal{V}\vert = O(n^2)\), the probability that Δ>t is at most O((e/t)^t n^2), which is at most 1/n by setting the value of t to be Θ(log n/log log n). □

By construction of I′, we know that

$$\sum_{f_i \in I'} w_i\ \ge\ \frac{1}{\varDelta}\sum_{f_i\in I} w_i\ \ge\ \frac{1}{t}\sum_{f_i \in I} w_i - \mathrm{Opt}\cdot1_{\varDelta>t}$$

where 1_A denotes the indicator variable for the event A. With t=Θ(log n/log log n), we conclude that the expected total weight of I′ is Ω(log log n/log n)·Opt.

5.3 Remarks

Derandomization

This algorithm can also be derandomized by the method of conditional expectations. The trick is to consider the following random variable:

$$Z \ =\ \frac{1}{t} \sum_{f_i \in I} w_i \ -\ \mathrm{Opt} \cdot \sum_{\mathsf{p}\in\mathcal{V}} (1+\delta_{\mathsf{p}})^{\mathrm{depth}(\mathsf{p},\mathsf{R}) - t},$$

where δ_p is the δ from the proof of Lemma 5.1 and t is the same as before. This variable Z lower-bounds \(\sum_{f_{i} \in I'} w_{i}\) (the bound is trivially true if Δ>t, since Z would then be negative). Our analysis still implies that E[Z] ≥ Ω(log log n/log n)⋅Opt (since the standard proof of the Chernoff bound [34, pp. 68–69] actually shows that

$$\mathop{\mathbf{Pr}} \bigl[ {\mathrm{depth}(\mathsf{p},\mathsf{R}) > (1+\delta)\mu} \bigr] < \frac{\mathop {\mathbf{E}} [ {(1+\delta)^{\mathrm {depth}(\mathsf{p},\mathsf{R})}} ]}{(1+\delta)^{(1+\delta)\mu}} \le\biggl[\frac{e^{\delta}}{(1+\delta)^{1+\delta}} \biggr]^\mu,$$

which for δ=δ_p and μ≤1 implies E[(1+δ_p)^{depth(p,R)−t}] < (e/t)^t). The advantage of working with Z is that we can calculate E[Z] exactly in polynomial time, even when conditioning on the events that some objects are known to be in or not in R (since depth(p,R) is a sum of independent 0-1 random variables, making (1+δ_p)^{depth(p,R)} a product of independent random variables).

We have thus proved:

Theorem 5.2

Given a set of n weighted axis-aligned boxes in the plane, one can compute in polynomial time an independent set of total weight \(\varOmega(\log\log n/\log n)\cdot\nobreak \operatorname{opt}\), where \(\operatorname{opt}\) is the maximum weight over all independent sets of the objects.

Higher Dimensions

By a standard divide-and-conquer method [4], we get an approximation factor of O(log^{d−1} n/log log n) for weighted axis-aligned boxes in any constant dimension d.