
About this book

This book constitutes the refereed proceedings of the 10th International Conference on Combinatorial Optimization and Applications, COCOA 2016, held in Hong Kong, China, in December 2016.

The 60 full papers included in the book were carefully reviewed and selected from 122 submissions. The papers are organized in topical sections such as graph theory, geometric optimization, complexity and data structure, combinatorial optimization, and miscellaneous.

Table of Contents


Graph Theory


On the Capture Time of Cops and Robbers Game on a Planar Graph

This paper examines the capture time of a planar graph in a variant of pursuit-evasion games called the cops and robbers game. Since three cops always suffice to capture the robber on a planar graph, we study the capture time of a planar graph G of n vertices using three cops, denoted by $$capt_3(G)$$. We present a new capture strategy and show that $$capt_3(G) \le 2n$$. This is the first result on $$capt_3(G)$$.

Photchchara Pisantechakool, Xuehou Tan

The Mixed Evacuation Problem

A dynamic network, introduced by Ford and Fulkerson, is a directed graph with capacities and transit times on its arcs. The quickest transshipment problem is one of the most fundamental problems in dynamic networks. Given sources and sinks, the goal is to find the minimum time limit within which we can send exactly the required amount of flow from the sources to the sinks. In this paper, we introduce a variant of this problem called the mixed evacuation problem. It models an emergency situation in which people can evacuate on foot or by car, and the goal is to organize such a mixed evacuation so that an efficient evacuation can be achieved. We study this problem from both theoretical and practical viewpoints. In the first part, we prove the polynomial-time solvability of the problem in the case where the number of sources and sinks is not large, and also prove the polynomial-time solvability and computational hardness of its variants with integer constraints. In the second part, we apply our model to a case study of Minabe town in Wakayama prefecture, Japan.

Yosuke Hanawa, Yuya Higashikawa, Naoyuki Kamiyama, Naoki Katoh, Atsushi Takizawa

A Comprehensive Reachability Evaluation for Airline Networks with Multi-constraints

An airline network, with airports as nodes and flight routes as directed edges, has many special features such as departure and arrival times, air ticket budgets, flight capacities, and transportation costs. Analyzing network behavior and service performance for such a network is therefore much harder than for many other networks. In this paper, taking the China domestic airline network as a representative, we discuss the reachability of each airport, which reflects its regional connectivity level and the service quality of civil aviation. More specifically, we evaluate reachability through many features including node degree, betweenness, and closeness. To compute some of these features, we design a fast Dijkstra-based all-pairs shortest path algorithm with both time and budget requirements, and then use a Fenwick tree to further improve its time efficiency. Finally, we apply the Analytic Hierarchy Process (AHP) to convert the reachability features into numerical values for all airports, measuring their service qualities precisely. Our results for the China domestic airline network, with 210 airports and 69,160 flight routes, can serve as a guide to airline companies and the civil aviation administration for further development and management.

Xiaotian You, Xiaofeng Gao, Yaru Dang, Guihai Chen, Xinglong Wang
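The time-and-budget-constrained shortest-path step can be illustrated with a small label-setting sketch: Dijkstra over states (airport, money spent so far), minimizing travel time. This is an illustrative baseline under assumed inputs (a flight list with travel times and fares), not the paper's Fenwick-tree-accelerated algorithm; the function name and graph format are hypothetical.

```python
import heapq

def quickest_route(graph, src, dst, budget):
    """Earliest arrival from src to dst with total fare <= budget.

    graph: {u: [(v, time, fare), ...]} -- a simplified stand-in for
    flights with travel times and ticket prices.
    Returns the minimal total time, or None if unreachable in budget.
    """
    best = {}                 # (node, spent) -> best known time
    pq = [(0, src, 0)]        # (time, node, spent)
    while pq:
        t, u, spent = heapq.heappop(pq)
        if u == dst:          # first pop of dst is time-optimal
            return t
        if best.get((u, spent), float("inf")) < t:
            continue
        for v, dt, fare in graph.get(u, []):
            ns = spent + fare
            if ns > budget:   # prune labels that exceed the budget
                continue
            nt = t + dt
            if nt < best.get((v, ns), float("inf")):
                best[(v, ns)] = nt
                heapq.heappush(pq, (nt, v, ns))
    return None
```

On a toy network where the fast route is expensive and the slow route is cheap, tightening the budget forces the slower answer, which is exactly the trade-off the paper's all-pairs computation has to track for every airport pair.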

Approximation and Hardness Results for the Max k-Uncut Problem

In this paper, we propose the $$\mathsf{Max}~k\text {-}\mathsf{Uncut}$$ problem. Given an n-vertex undirected graph $$G = (V, E)$$ with nonnegative weights $$\{w_e \mid e \in E\}$$ on its edges and a positive integer k, the $$\mathsf{Max}~k\text {-}\mathsf{Uncut}$$ problem asks for a partition $$\{V_1, V_2, \cdots , V_k\}$$ of V such that the total weight of edges that are not cut is maximized. This problem is precisely the complement of the classic $$\mathsf{Min}~k\text {-}\mathsf{Cut}$$ problem, and it arises from the study of complex networks. For $$\mathsf{Max}~k\text {-}\mathsf{Uncut}$$, we present a randomized $$(1-\frac{k}{n})^2$$-approximation algorithm, a greedy $$(1-\frac{2(k-1)}{n})$$-approximation algorithm, and an $$\varOmega (\frac{1}{2} \alpha )$$-approximation algorithm obtained by reducing the problem to $$\mathsf{Densest}~k\text {-}\mathsf{Subgraph}$$, where $$\alpha $$ is the approximation ratio for the $$\mathsf{Densest}~k\text {-}\mathsf{Subgraph}$$ problem. More importantly, we show that $$\mathsf{Max}~k\text {-}\mathsf{Uncut}$$ and $$\mathsf{Densest}~k\text {-}\mathsf{Subgraph}$$ are in fact equivalent in approximability up to a factor of 2. We also prove a weak approximation hardness result for $$\mathsf{Max}~k\text {-}\mathsf{Uncut}$$ under the assumption $$\text {P} \ne \text {NP}$$.

Peng Zhang, Chenchen Wu, Dachuan Xu, Xinghe Zhang
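A natural greedy baseline for Max k-Uncut makes k-1 singleton parts out of low-degree vertices and keeps everything else together, so only edges touching the singletons are cut. This is a plausible sketch of a greedy of this flavor, not necessarily the exact algorithm analysed in the paper; the function name and edge-list format are assumptions.

```python
def greedy_k_uncut(n, edges, k):
    """Greedy baseline for Max k-Uncut.

    edges: list of (u, v, w) with vertices 0..n-1 and weight w >= 0.
    Returns (parts, uncut_weight): k parts where k-1 are singletons
    chosen to have the smallest weighted degrees, so the weight of
    cut edges (those incident to a singleton) is kept small.
    """
    deg = [0.0] * n
    for u, v, w in edges:
        deg[u] += w
        deg[v] += w
    # The k-1 vertices of smallest weighted degree become singletons.
    singles = set(sorted(range(n), key=lambda v: deg[v])[: k - 1])
    big = [v for v in range(n) if v not in singles]
    parts = [big] + [[v] for v in singles]
    uncut = sum(w for u, v, w in edges
                if u not in singles and v not in singles)
    return parts, uncut
```

On a triangle with a pendant vertex and k=2, the pendant becomes the singleton and all three triangle edges stay uncut.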

On Strong Tree-Breadth

In this paper, we introduce and investigate a new notion of strong tree-breadth. We say that a graph G has strong tree-breadth $$\rho $$ if there is a tree-decomposition T for G such that each bag B of T is equal to the complete $$\rho $$-neighbourhood of some vertex v in G, i.e., $$B = N_G^\rho [v]$$. We show that (i) it is NP-complete to determine if a given graph has strong tree-breadth $$\rho $$, even for $$\rho = 1$$; (ii) if a graph G has strong tree-breadth $$\rho $$, then we can find a tree-decomposition for G with tree-breadth $$\rho $$ in $$\mathcal {O}(n^2m)$$ time; (iii) with some additional restrictions, a tree-decomposition with strong breadth $$\rho $$ can be found in polynomial time; and (iv) some graph classes, including distance-hereditary graphs, have strong tree-breadth 1.

Arne Leitert, Feodor F. Dragan

Computing a Tree Having a Small Vertex Cover

In this paper, we consider a new Steiner tree problem. This problem defines the weight of a Steiner tree as the minimum weight of a vertex cover in the tree, and seeks a minimum-weight Steiner tree in a given vertex-weighted undirected graph. Since it is a special case of the Steiner tree activation problem, the problem admits an $$O(\log n)$$-approximation algorithm in general graphs with n vertices. This approximation factor is tight, because it is known to be NP-hard to achieve an $$o(\log n)$$-approximation on general graphs. In this paper, we present constant-factor approximation algorithms for the problem on unit disk graphs and on graphs excluding a fixed minor.

Takuro Fukunaga, Takanori Maehara

On the Approximability of Partial VC Dimension

We introduce the problem Partial VC Dimension that asks, given a hypergraph $$H=(X,E)$$ and integers k and $$\ell $$, whether one can select a set $$C\subseteq X$$ of k vertices of H such that the set $$\{e\cap C, e\in E\}$$ of distinct hyperedge-intersections with C has size at least $$\ell $$. The sets $$e\cap C$$ define equivalence classes over E. Partial VC Dimension is a generalization of VC Dimension, which corresponds to the case $$\ell =2^k$$, and of Distinguishing Transversal, which corresponds to the case $$\ell =|E|$$ (the latter is also known as Test Cover in the dual hypergraph). We also introduce the associated fixed-cardinality maximization problem Max Partial VC Dimension that aims at maximizing the number of equivalence classes induced by a solution set of k vertices. We study the approximation complexity of Max Partial VC Dimension on general hypergraphs and on more restricted instances, in particular, neighborhood hypergraphs of graphs.

Cristina Bazgan, Florent Foucaud, Florian Sikora
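The objective of Max Partial VC Dimension, the number of equivalence classes induced by a vertex set C, is easy to state in code: count the distinct intersections e ∩ C. Below is a minimal sketch with a brute-force solver for tiny instances, purely to make the definition concrete; the paper studies approximation algorithms, not this exponential enumeration, and the function names are hypothetical.

```python
from itertools import combinations

def num_classes(E, C):
    """Number of distinct intersections e & C over hyperedges e in E.
    These intersections are the equivalence classes over E induced by C."""
    C = frozenset(C)
    return len({frozenset(e) & C for e in E})

def max_partial_vc(X, E, k):
    """Brute-force Max Partial VC Dimension: best k-subset C of X.
    Exponential in |X|; for intuition on tiny instances only."""
    return max(num_classes(E, C) for C in combinations(X, k))
```

With k=2 and l=2^k=4 the question becomes classic VC dimension: a set C of size 2 achieving 4 classes is shattered by the hypergraph.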

Improved Precise Fault Diagnosis Algorithm for Hypercube-Like Graphs

System reliability is an important issue for multiprocessor systems, and fault diagnosis has become crucial for achieving high reliability. In the comparison-based model, a processor performs diagnosis by sending the identical task to a pair of neighboring processors and comparing their responses. Recently, Ye and Hsieh devised a precise fault diagnosis algorithm that detects all faulty processors in hypercube-like networks under the MM* model with $$O(N(\log _{2}N)^2)$$ time complexity, where N is the number of processors in the system. On the basis of Hamiltonian cycle properties, we improve this result by presenting an O(N)-time precise fault diagnosis algorithm that detects all faulty processors in hypercube-like networks under the MM* model.

Tai-Ling Ye, Dun-Wei Cheng, Sun-Yuan Hsieh

Finding Disjoint Paths on Edge-Colored Graphs: A Multivariate Complexity Analysis

The problem of finding the maximum number of vertex-disjoint uni-color paths in an edge-colored graph (MaxCDP) has recently been introduced in the literature, motivated by applications in social network analysis. In this paper we investigate how the complexity of the problem depends on graph parameters (distance from disjoint paths and size of vertex cover), and show that the problem is not FPT-approximable. Moreover, we introduce a new variant of the problem, called MaxCDDP, whose goal is to find the maximum number of vertex-disjoint and color-disjoint uni-color paths. We extend some of the results on MaxCDP to this new variant, and we prove that, unlike MaxCDP, MaxCDDP is already hard on graphs at distance two from disjoint paths.

Riccardo Dondi, Florian Sikora

Total Dual Integrality of Triangle Covering

This paper concerns weighted triangle covering in an undirected graph $$G=(V,E)$$, where a nonnegative integral vector $$\mathbf w=(w(e):e\in E)^T$$ gives the weights of the edges. A subset S of E is a triangle cover in G if S intersects every triangle of G. The weight of a triangle cover is the sum of w(e) over all edges e in it. The characteristic vector $$\mathbf x$$ of each triangle cover in G is an integral solution of the linear system $$\begin{aligned} \pi :A\mathbf x\ge \mathbf 1,\mathbf x\ge \mathbf 0, \end{aligned}$$where A is the triangle-edge incidence matrix of G. System $$\pi $$ is totally dual integral if $$\max \{\mathbf 1^T\mathbf y:A^{T}\mathbf y\le \mathbf w,\mathbf y\ge \mathbf 0\}$$ has an integral optimum solution $$\mathbf y$$ for each integral vector $$\mathbf w\in \mathbb Z_+^E$$ for which the maximum is finite. The total dual integrality of $$\pi $$ implies the nice combinatorial min-max relation that the minimum weight of a triangle cover equals the maximum size of a triangle packing, i.e., a collection of triangles in G (repetitions allowed) such that each edge e is contained in at most w(e) of them. In this paper, we obtain graphical properties that are necessary for the total dual integrality of system $$\pi $$, as well as those for the (stronger) total unimodularity of matrix A and the (weaker) integrality of the polyhedron $$\{\mathbf x:A\mathbf x\ge \mathbf 1,\mathbf x\ge \mathbf 0\}$$. These necessary conditions are shown to be sufficient when restricted to planar graphs. We prove that the three notions of integrality coincide on planar graphs, and are commonly characterized by excluding odd pseudo-wheels.

Xujin Chen, Zhuo Diao, Xiaodong Hu, Zhongzheng Tang

Time-Optimal Broadcasting of Multiple Messages in 1-in Port Model

In the 1-in port model, every vertex of a synchronous network can receive at most one message per time unit. We consider simultaneous broadcasting of multiple messages from the same source in such networks, with the additional restriction that every received message can be sent out to neighbors only in the next time unit and never to an already informed vertex. We use the general concept of level-disjoint partitions developed for this scenario, and introduce a subgraph extension technique for efficiently spreading information within this framework. Surprisingly, this approach, based on so-called biwheels, leads to simultaneous broadcasting of an optimal number of messages on a wide class of graphs in optimal time. In particular, we provide tight results for bipartite tori, meshes, and hypercubes. Several problems and conjectures are proposed.

Petr Gregor, Riste Škrekovski, Vida Vukašinović

Fast Searching on Complete k-partite Graphs

Research on graph searching has recently gained interest in computer science, mathematics, and physics. This paper studies fast searching of a fugitive in a graph, a model that was introduced by Dyer, Yang and Yaşar in 2008. We provide lower bounds and upper bounds on the fast search number (i.e., the minimum number of searchers required for capturing the fugitive) of complete k-partite graphs. We also investigate some special classes of complete k-partite graphs, such as complete bipartite graphs and complete split graphs. We solve the open problem of determining the fast search number of complete bipartite graphs, and present upper and lower bounds on the fast search number of complete split graphs.

Yuan Xue, Boting Yang, Farong Zhong, Sandra Zilles

Cliques in Regular Graphs and the Core-Periphery Problem in Social Networks

The existence of a densely knit core surrounded by a loosely connected periphery is a common macro-structural feature of social networks. Formally, the Core-Periphery problem is to partition the nodes of an undirected graph $$G=(V,E)$$ such that a subset $$X\subset V$$, the core, induces a dense subgraph, and its complement $$V\!\setminus \!X$$, the periphery, induces a sparse subgraph. Split graphs represent the ideal case, in which the core induces a clique and the periphery forms an independent set. The number of missing and superfluous edges in the core and the periphery, respectively, can be minimized in linear time via the edit distance to the closest split graph. We show that the Core-Periphery problem becomes intractable for standard notions of density other than the absolute number of misclassified pairs. Our main tool is a regularization procedure that transforms a given graph with maximum degree d into a d-regular graph with the same clique number by adding at most $$d\cdot n$$ new nodes. This is of independent interest because it implies that finding a maximum clique in a regular graph is NP-hard to approximate to within a factor of $$n^{1/2-\varepsilon }$$ for all $$\varepsilon >0$$.

Ulrik Brandes, Eugenia Holm, Andreas Karrenbauer

Constant Factor Approximation for the Weighted Partial Degree Bounded Edge Packing Problem

In the partial degree bounded edge packing problem (PDBEP), the input is an undirected graph $$G=(V,E)$$ with capacity $$c_v\in {\mathbb {N}}$$ on each vertex. The objective is to find a feasible subgraph $$G'=(V,E')$$ maximizing $$|E'|$$, where $$G'$$ is said to be feasible if for each $$e=\{u,v\}\in E'$$, $$\deg _{G'}(u)\le c_u$$ or $$\deg _{G'}(v)\le c_v$$. In the weighted version of the problem, additionally each edge $$e\in E$$ has a weight w(e) and we want to find a feasible subgraph $$G'=(V,E')$$ maximizing $$\sum _{e\in E'} w(e)$$. The problem is already NP-hard if $$c_v = 1$$ for all $$v\in V$$ [Zhang, FAW-AAIM 2012]. In this paper, we introduce a generalization of the PDBEP problem. We let the edges have weights as well as demands, and we present the first constant-factor approximation algorithms for this problem. Our results imply the first constant-factor approximation algorithm for the weighted PDBEP problem, improving the result of Aurora et al. [FAW-AAIM 2013] who presented an $$O(\log n)$$-approximation for the weighted case. We also present a PTAS for H-minor free graphs, if the demands on the edges are bounded above by a constant, and we show that the problem is APX-hard even for cubic graphs and bounded degree bipartite graphs with $$c_v = 1, \; \forall v\in V$$.

Pawan Aurora, Monalisa Jena, Rajiv Raman

An Introduction to Coding Sequences of Graphs

In this paper, we introduce a new representation of simple undirected graphs in terms of sets of vectors in finite-dimensional vector spaces over $$\mathbb {Z}_2$$ that satisfy the consecutive 1's property, called coding sequences of a graph G. Among all coding sequences we identify one that is unique for a class of isomorphic graphs, called the code of a graph. We characterize several classes of graphs in terms of coding sequences. It is shown that a graph G with n vertices is a tree if and only if any coding sequence of G is a basis of the vector space $$\mathbb {Z}_2^{n-1}$$ over $$\mathbb {Z}_2$$. Moreover, considering coding sequences as binary matroids, we obtain a characterization of simple graphic matroids. Introducing the concepts of segment binary matroids and strong isomorphisms, we show that two simple undirected graphs are isomorphic if and only if their canonical sequences are strongly isomorphic simple segment binary matroids. AMS Subject Classifications: 05C62, 05C50, 05B35.

Shamik Ghosh, Raibatak Sen Gupta, M. K. Sen

Minimum Eccentricity Shortest Path Problem: An Approximation Algorithm and Relation with the k-Laminarity Problem

The Minimum Eccentricity Shortest Path (MESP) problem consists in determining a shortest path (a path whose length is the distance between its extremities) of minimum eccentricity in a graph. It was introduced by Dragan and Leitert [9], who described a linear-time algorithm achieving an 8-approximation of the problem. In this paper, we study the double-BFS procedure used in that algorithm in more depth and extend it to obtain a linear-time 3-approximation algorithm. We moreover study the link between the MESP problem and the notion of laminarity, introduced by Völkel et al. [12], which corresponds to its restriction to a diameter (i.e., a shortest path of maximum length), and show tight bounds between MESP and laminarity parameters.

Étienne Birmelé, Fabien de Montgolfier, Léo Planche
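The double-BFS procedure the abstract refers to is the classic 2-sweep: BFS from an arbitrary start to find a far vertex u, BFS again from u to a far vertex v, and take a shortest u-v path. The sketch below shows only this shared starting point, not the paper's 3-approximation analysis; the function names and adjacency-dict format are assumptions.

```python
from collections import deque

def bfs(adj, s):
    """BFS from s; returns (distance dict, parent dict)."""
    dist, par = {s: 0}, {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                par[v] = u
                q.append(v)
    return dist, par

def double_bfs_path(adj, s):
    """2-sweep: find a far vertex u from s, then a far vertex v from u,
    and return a shortest u-v path (a candidate low-eccentricity path)."""
    d1, _ = bfs(adj, s)
    u = max(d1, key=d1.get)
    d2, par = bfs(adj, u)
    v = max(d2, key=d2.get)
    path = [v]
    while par[path[-1]] is not None:
        path.append(par[path[-1]])
    return path[::-1]          # from u to v
```

On a path graph the 2-sweep recovers the whole path, which has eccentricity 0, the optimum.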

On the Complexity of Extracting Subtree with Keeping Distinguishability

We consider the problem of extracting a subtree with a guarantee of preserving distinguishability. Given a query q and a tree T, evaluating q on T outputs q(T), a set of nodes of T. Two nodes a and b of T can be distinguished by some query q in T iff exactly one of them belongs to q(T). Then, given a tree T, a query class $$\mathcal {L}$$, and two disjoint node sets A and B of T, a subtree $$T'$$ of T is said to preserve the distinguishability of T iff (1) $$T'$$ contains all nodes in $$A\cup B$$, (2) for any node pair $$(a,b)\in {A\times B}$$, if a and b can be distinguished by some query in $$\mathcal {L}$$ in T, they can also be distinguished by some query (not necessarily the same one) in $$\mathcal {L}$$ in $$T'$$, and (3) for any node pair $$(a,b)\in {A\times B}$$ and any query $$q\in \mathcal {L}$$, if a and b can be distinguished by q in $$T'$$, they can also be distinguished by q in T. The subtree extraction problem considered in this paper is to determine whether there is a small enough subtree $$T'$$ of T such that, for query class $$\mathcal {L}$$ and node sets A and B, $$T'$$ preserves the distinguishability of T. In this paper, as an initial attempt at investigating this problem, fixing $$\mathcal {L}$$ to be a specific fragment of tree pattern queries (introduced later), the subtree extraction problem is shown to be NP-complete.

Xianmin Liu, Zhipeng Cai, Dongjing Miao, Jianzhong Li

Safe Sets in Graphs: Graph Classes and Structural Parameters

A safe set of a graph $$G=(V,E)$$ is a non-empty subset S of V such that for every component A of G[S] and every component B of $$G[V \setminus S]$$, we have $$|A| \ge |B|$$ whenever there exists an edge of G between A and B. In this paper, we show that a minimum safe set can be found in polynomial time for trees. We then further extend the result and present polynomial-time algorithms for graphs of bounded treewidth, and also for interval graphs. We also study the parameterized complexity of the problem. We show that the problem is fixed-parameter tractable when parameterized by the solution size. Furthermore, we show that this parameter lies between tree-depth and vertex cover number.

Raquel Águeda, Nathann Cohen, Shinya Fujita, Sylvain Legay, Yannis Manoussakis, Yasuko Matsui, Leandro Montero, Reza Naserasr, Yota Otachi, Tadashi Sakuma, Zsolt Tuza, Renyu Xu

On Local Structures of Cubicity 2 Graphs

A 2-stab unit interval graph (2SUIG) is an axes-parallel unit square intersection graph where the unit squares intersect either of two fixed lines parallel to the X-axis, placed a distance $$1 + \epsilon $$ ($$0< \epsilon < 1$$) apart. This family of graphs allows us to study local structures of unit square intersection graphs, that is, graphs with cubicity 2. The complexity of determining whether a tree has cubicity 2 is unknown, while the graph recognition problem for unit square intersection graphs is known to be NP-hard. We present a linear-time algorithm for recognizing trees that admit a 2SUIG representation.

Sujoy Bhore, Dibyayan Chakraborty, Sandip Das, Sagnik Sen

Approximability of the Distance Independent Set Problem on Regular Graphs and Planar Graphs

This paper studies generalized variants of the maximum independent set problem, called the Maximum Distance-d Independent Set problem ($$\mathsf{MaxD}d\mathsf{IS}$$ for short). For an integer $$d \ge 2$$, a distance-d independent set of an unweighted graph $$G = (V, E)$$ is a subset $$S \subseteq V$$ of vertices such that for any pair of vertices $$u, v \in S$$, the number of edges in any path between u and v is at least d in G. Given an unweighted graph G, the goal of $$\mathsf{MaxD}d\mathsf{IS}$$ is to find a maximum-cardinality distance-d independent set of G. In this paper, we analyze the (in)approximability of the problem on r-regular graphs ($$r\ge 3$$) and planar graphs, as follows: (1) For all fixed integers $$d\ge 3$$ and $$r\ge 3$$, $$\mathsf{MaxD}d\mathsf{IS}$$ on r-regular graphs is APX-hard. (2) We design polynomial-time $$O(r^{d-1})$$-approximation and $$O(r^{d-2}/d)$$-approximation algorithms for $$\mathsf{MaxD}d\mathsf{IS}$$ on r-regular graphs. (3) We sharpen the above $$O(r^{d-2}/d)$$-approximation algorithm when restricted to $$d=r=3$$, and give a polynomial-time 2-approximation algorithm for MaxD3IS on cubic graphs. (4) Finally, we show that $$\mathsf{MaxD}d\mathsf{IS}$$ admits a polynomial-time approximation scheme (PTAS) on planar graphs.

Hiroshi Eto, Takehiro Ito, Zhilong Liu, Eiji Miyano
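The definition can be made concrete with a tiny greedy: pick a vertex, discard everything at distance less than d from it, repeat. Any set built this way is distance-d independent by construction. This is a baseline for intuition only, not one of the paper's approximation algorithms; the function names and adjacency format are assumptions.

```python
from collections import deque

def ball(adj, s, r):
    """All vertices within distance r of s (BFS truncated at depth r)."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        if dist[u] == r:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return set(dist)

def greedy_distance_d_is(adj, d):
    """Greedy distance-d independent set: keep a vertex, then remove
    every vertex at distance < d from it so chosen vertices are
    pairwise at distance >= d."""
    alive = set(adj)
    S = []
    for v in sorted(adj):
        if v in alive:
            S.append(v)
            alive -= ball(adj, v, d - 1)
    return S
```

On the 6-cycle with d=3 the greedy picks two antipodal vertices, which is optimal for that instance.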

Algorithmic Aspects of Disjunctive Total Domination in Graphs

For a graph $$G=(V,E)$$, $$D\subseteq V$$ is a dominating set if every vertex in $$V\!\setminus \!D$$ has a neighbor in D. If every vertex in V has to be adjacent to a vertex of D, then D is called a total dominating set of G. The (total) domination problem on G is to find a (total) dominating set D of minimum cardinality, and both problems are well studied. Recently, the following variant was proposed: a vertex subset D is a disjunctive total dominating set if every vertex of V is adjacent to a vertex of D or has at least two vertices of D at distance 2 from it. The disjunctive total domination problem on G is to find a disjunctive total dominating set D of minimum cardinality. Regarding its complexity, the only known result is that the disjunctive total domination problem is NP-hard on general graphs. In this paper, by using a minimum-cost flow algorithm as a subroutine, we show that the disjunctive total domination problem on trees can be solved in polynomial time. This is the first polynomial-time algorithm for the problem on a special class of graphs. Besides, we show that the problem remains NP-hard on bipartite graphs and planar graphs.

Chin-Fu Lin, Sheng-Lung Peng

Instance Guaranteed Ratio on Greedy Heuristic for Genome Scaffolding

The Scaffolding problem in bioinformatics aims to complete the contig assembly process by determining the relative position and orientation of the contigs. Modeled as a combinatorial optimization problem on a graph called the scaffold graph, this problem is $$\mathcal {NP}$$-hard and its exact resolution is generally impossible on large instances. Hence, heuristics such as polynomial-time approximation algorithms remain the only practical way to propose a solution. In general, even when a constant guaranteed approximation ratio is known, it is impossible to tell whether the solution proposed by the algorithm is close to the optimum, or only close to the bound defined by this ratio. In this paper we present a measure, associated with a greedy algorithm, that determines an upper bound on the score of the optimal solution. This measure, which depends on the instance, guarantees a non-constant, instance-specific ratio for the greedy algorithm. We prove that this measure is a fine upper bound on the optimal score, perform experiments on real instances, and show that the greedy algorithm yields near-optimal solutions.

Clément Dallard, Mathias Weller, Annie Chateau, Rodolphe Giroudeau

Geometric Optimization


Performing Multicut on Walkable Environments: Obtaining a Minimally Connected Multi-layered Environment from a Walkable Environment

A multi-layered environment is a representation of the walkable space in a 3D virtual environment that comprises a set of two-dimensional layers together with the locations where the different layers touch, which are called connections. This representation can be used for crowd simulations, e.g. to determine evacuation times in complex buildings. Since the execution times of many algorithms depend on the number of connections, we study multi-layered environments with a minimal number of connections. We show how finding a minimally connected multi-layered environment can be formulated as an instance of the multicut problem, and prove that finding a minimally connected multi-layered environment is NP-hard. Lastly, we present techniques that shrink the size of the underlying graph by removing redundant information, decreasing the input size for algorithms that use this representation to find multi-layered environments.

Arne Hillebrand, Marjan van den Akker, Roland Geraerts, Han Hoogeveen

Minimum Weight Polygon Triangulation Problem in Sub-Cubic Time Bound

We break the long-standing cubic time bound of $$O(n^3)$$ for the Minimum Weight Polygon Triangulation problem by showing that the well-known dynamic programming algorithm, reported independently by Gilbert and Klincsek, can be sped up with a faster algorithm for the $$(min,+)$$-product using look-up tables. In doing so, we also show that the well-known Floyd-Warshall algorithm can be optimized in a similar manner to achieve a sub-cubic time bound for the All Pairs Shortest Paths problem, without having to resort to recursion in the semi-ring theory.

Sung Eun Bae, Tong-Wook Shinn, Tadao Takaoka
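The (min,+)-product at the heart of this speed-up is easy to state: all-pairs shortest paths is just repeated (min,+)-squaring of the weight matrix. The sketch below uses the naive cubic kernel so the structure is visible; the paper's contribution is replacing that kernel with a table-based sub-cubic routine, which this sketch does not attempt.

```python
def min_plus(A, B):
    """Naive (min,+)-product: C[i][j] = min_k (A[i][k] + B[k][j]).
    This O(n^3) kernel is the part the paper accelerates."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

def apsp(W):
    """All-pairs shortest paths by repeated (min,+)-squaring.
    W[i][j] is the edge weight (float('inf') if absent, 0 on the
    diagonal); after ceil(log2(n-1)) squarings D equals the
    (min,+)-power W^(n-1), i.e. the distance matrix."""
    n, D = len(W), W
    steps = 1
    while steps < n - 1:
        D = min_plus(D, D)
        steps *= 2
    return D
```

The same kernel drives the minimum weight triangulation DP, which is why one faster (min,+) routine improves both problems.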

The Mixed Center Location Problem

This paper studies a new version of the location problem called the mixed center location problem. Let P be a set of n points in the plane. We first consider the mixed 2-center problem, where one of the centers must be in P, and solve it in $$O(n^2\log n)$$ time. Next we consider the mixed k-center problem, where m of the centers are in P. Motivated by two practical constraints, we propose two variations of the problem. We present an exact algorithm, a 2-approximation algorithm, and a heuristic algorithm for the mixed k-center problem. The time complexity of the exact algorithm is $$O(n^{m+O(\sqrt{k-m})})$$.

Yi Xu, Jigen Peng, Yinfeng Xu

Constrained Light Deployment for Reducing Energy Consumption in Buildings

Lighting systems account for a major part of the energy consumed by large commercial buildings. This paper aims at reducing this energy consumption by defining the Constrained Light Deployment Problem (CLDP). This new problem is related to the classical Art Gallery Problem (AGP) in computational geometry. In contrast to AGP, which asks for the minimum number of guards to monitor a polygonal area, CLDP poses a new and challenging requirement: not only must each point p have an unobstructed line-of-sight to a light source, but the combined illuminance at p from all light sources must also exceed some given threshold value. We provide evidence that our new problem is NP-hard, based on known results for AGP. Then we propose fast heuristics for floor plans shaped like orthogonal polygons, with and without holes. Our problem formulation allows lights to be placed internally, not only at vertices. Our algorithm, which combines ideas from computational geometry, clustering, and binary search, computes a set of light placements that satisfies the illumination requirement. The algorithm seeks a light set of minimum size by an iterative binary search procedure that progressively tightens upper and lower bounds.

Huamei Tian, Kui Wu, Sue Whitesides, Cuiying Feng

On the 2-Center Problem Under Convex Polyhedral Distance Function

Metrics and distance functions generalizing the Euclidean metric (and even $$L_p$$-metrics) have long been used in computational geometry; for example, convex distance functions appeared in the first Symposium on Computational Geometry [2]. Very recently, Das et al. [5] studied the 1-center problem under a convex polyhedral distance function, where the unit ball of the distance function is a convex polytope. In this paper we develop algorithms for the 2-center problem under a convex polyhedral distance function in $$\mathbb {R}^d, d=2,3$$. We show that the 2-center can be computed in $$O(n\log m)$$ time in the plane and in $$O(nm\log n)$$ time for $$d=3$$. We also show that the problem of computing the 2-center in $$\mathbb {R}^3$$ has an $$\varOmega (n\log n)$$ lower bound.

Sergey Bereg

Algorithms for Colourful Simplicial Depth and Medians in the Plane

The colourful simplicial depth (CSD) of a point $$x$$ relative to a configuration $$P=(P^1, P^2, \ldots , P^k)$$ of n points in k colour classes is exactly the number of closed simplices (triangles) with vertices from 3 different colour classes that contain $$x$$ in their convex hull. We consider the problems of efficiently computing the colourful simplicial depth of a point x, and of finding a point in the plane, called a median, that maximizes colourful simplicial depth. For computing the colourful simplicial depth of $$x$$, our algorithm runs in time $$O\left( n \log {n} + k n \right) $$ in general, and in O(kn) time if the points are sorted around $$x$$. For finding a colourful median, we get a running time of $$O(n^4)$$. For comparison, the running times of the best known algorithms for the monochrome versions of these problems are $$O\left( n \log {n} \right) $$ in general, improving to O(n) if the points are sorted around $$x$$, for monochrome depth, and $$O(n^4)$$ for finding a monochrome median.

Olga Zasenko, Tamon Stephen
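The quantity being computed can be checked by brute force: enumerate all triangles with vertices in three distinct colour classes and test containment with orientation predicates. This cubic enumeration is only a correctness check for tiny inputs, not the paper's $$O(n \log n + kn)$$ algorithm; the function names are assumptions.

```python
from itertools import combinations

def orient(a, b, c):
    """Twice the signed area of triangle abc (sign gives orientation)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def in_triangle(x, a, b, c):
    """Closed-triangle containment via three orientation tests:
    x is inside or on abc iff the three signs all agree (or are zero)."""
    s1, s2, s3 = orient(a, b, x), orient(b, c, x), orient(c, a, x)
    return (s1 >= 0 and s2 >= 0 and s3 >= 0) or \
           (s1 <= 0 and s2 <= 0 and s3 <= 0)

def colourful_depth_bruteforce(x, classes):
    """Count closed triangles with one vertex in each of three distinct
    colour classes that contain x. O(n^3); for validation only."""
    total = 0
    for i, j, l in combinations(range(len(classes)), 3):
        for a in classes[i]:
            for b in classes[j]:
                for c in classes[l]:
                    if in_triangle(x, a, b, c):
                        total += 1
    return total
```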

Realizability of Graphs as Triangle Cover Contact Graphs

Let $$S=\{p_1,p_2,\ldots ,p_n\}$$ be a set of pairwise disjoint geometric objects of some type and let $$C=\{c_1,c_2,\ldots ,c_n\}$$ be a set of closed objects of some type with the property that each element in C covers exactly one element in S and any two elements in C can intersect only on their boundaries. We call an element of S a seed and an element of C a cover. A cover contact graph (CCG) consists of a set of vertices and a set of edges, where each vertex corresponds to a cover and each edge corresponds to a connection between two covers that touch at their boundaries. A triangle cover contact graph (TCCG) is a cover contact graph whose cover elements are triangles. In this paper, we show that every Halin graph has a realization as a TCCG on a given set of collinear seeds. We introduce a new class of graphs, which we call super-Halin graphs, and show that the classes of super-Halin graphs, cubic planar Hamiltonian graphs, and $$a\times b$$ grid graphs also have realizations as TCCGs on collinear seeds. We further show that every complete graph has a realization as a TCCG on any given set of seeds. Note that only trees and cycles were previously known to be realizable as CCGs, and outerplanar graphs as TCCGs.

Shaheena Sultana, Md. Saidur Rahman

A Quadratic Time Exact Algorithm for Continuous Connected 2-Facility Location Problem in Trees (Extended Abstract)

This paper studies the continuous connected 2-facility location problem (CC2FLP) in trees. Let $$T = (V, E, c, d, \ell , \mu )$$ be an undirected rooted tree, where each node $$v \in V$$ has a weight $$d(v) \ge 0$$ denoting the demand amount of v as well as a weight $$\ell (v) \ge 0$$ denoting the cost of opening a facility at v, and each edge $$e \in E$$ has a weight $$c(e) \ge 0$$ denoting the cost on e as well as is associated with a function $$\mu (e,t) \ge 0$$ denoting the cost of opening a facility at a point x(e, t) on e where t is a continuous variable on e. Given a subset $$\mathcal {D} \subseteq V$$ of clients, and a subset $$\mathcal {F} \subseteq \mathcal {P}(T)$$ of continuum points admitting facilities where $$\mathcal {P}(T)$$ is the set of all the points on edges of T, when two facilities are installed at a pair of continuum points $$x_1$$ and $$x_2$$ in $$\mathcal {F}$$, the total cost involved by CC2FLP includes three parts: the cost of opening two facilities at $$x_1$$ and $$x_2$$, K times the cost of connecting $$x_1$$ and $$x_2$$, and the cost of all the clients in $$\mathcal {D}$$ connecting to some facility. The objective is to open two facilities at a pair of continuum points in $$\mathcal {F}$$ to minimize the total cost, for a given input parameter $$K \ge 1$$. This paper considers the case of $$\mathcal {D} = V$$ and $$\mathcal {F} = \mathcal {P}(T)$$. We first study the discrete version of CC2FLP, named the discrete connected 2-facility location problem (DC2FLP), where two facilities are restricted to the nodes of T, and devise a quadratic time edge-splitting algorithm for DC2FLP. Furthermore, we prove CC2FLP is almost equivalent to DC2FLP in trees, and develop a quadratic time exact algorithm based on the edge-splitting algorithm.

Wei Ding, Ke Qiu

Complexity and Data Structure


On the (Parameterized) Complexity of Recognizing Well-Covered $$(r,\ell )$$-graphs

An $$(r, \ell )$$-partition of a graph G is a partition of its vertex set into r independent sets and $$\ell $$ cliques. A graph is $$(r, \ell )$$ if it admits an $$(r, \ell )$$-partition. A graph is well-covered if every maximal independent set is also maximum. A graph is $$(r,\ell )$$-well-covered if it is both $$(r,\ell )$$ and well-covered. In this paper we consider two different decision problems. In the $$(r,\ell )$$-Well-Covered Graph problem ($$(r,\ell )$$wcg for short), we are given a graph G, and the question is whether G is an $$(r,\ell )$$-well-covered graph. In the Well-Covered$$(r,\ell )$$-Graph problem (wc$$(r,\ell )$$g for short), we are given an $$(r,\ell )$$-graph G together with an $$(r,\ell )$$-partition of V(G) into r independent sets and $$\ell $$ cliques, and the question is whether G is well-covered. We classify most of these problems into P, coNP-complete, NP-complete, NP-hard, or coNP-hard. Only the cases wc(r, 0)g for $$r\ge 3$$ remain open. In addition, we consider the parameterized complexity of these problems for several choices of parameters, such as the size $$\alpha $$ of a maximum independent set of the input graph, its neighborhood diversity, or the number $$\ell $$ of cliques in an $$(r, \ell )$$-partition. In particular, we show that the parameterized problem of deciding whether a general graph is well-covered parameterized by $$\alpha $$ can be reduced to the wc$$(0,\ell )$$g problem parameterized by $$\ell $$, and we prove that this latter problem is in XP but does not admit polynomial kernels unless $$\mathsf{coNP} \subseteq \mathsf{NP} / \mathsf{poly}$$.

Sancrey Rodrigues Alves, Konrad K. Dabrowski, Luerbio Faria, Sulamita Klein, Ignasi Sau, Uéverton dos Santos Souza
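For intuition, well-coveredness itself can be verified by exhaustive enumeration on small graphs. The sketch below is a naive exponential check of the definition (our own illustration, unrelated to the paper's complexity classification).

```python
from itertools import combinations

def maximal_independent_sets(n, edges):
    """Enumerate all maximal independent sets of a graph on vertices 0..n-1
    by brute force (exponential; for illustration on small graphs only)."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    result = []
    for r in range(n + 1):
        for combo in combinations(range(n), r):
            S = set(combo)
            if any(v in adj[u] for u in S for v in S):
                continue  # not independent
            if any(adj[w].isdisjoint(S) for w in set(range(n)) - S):
                continue  # some outside vertex could still be added: not maximal
            result.append(S)
    return result

def is_well_covered(n, edges):
    """Well-covered iff every maximal independent set has the same size."""
    return len({len(S) for S in maximal_independent_sets(n, edges)}) == 1
```

For example, the 4-cycle is well-covered (both maximal independent sets have size 2), while the 3-vertex path is not (sizes 1 and 2).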

Algorithmic Analysis for Ridesharing of Personal Vehicles

The ridesharing problem is to share personal vehicles among individuals (participants) with similar itineraries. A trip in the ridesharing problem is a participant and his/her itinerary. To realize a trip is to deliver the participant to his/her destination by a vehicle satisfying the itinerary requirement. We study two optimization problems in ridesharing: minimize the number of vehicles and minimize the total travel distance of vehicles to realize all trips. The minimization problems are complex and NP-hard because of the many parameters involved. We simplify the problems by considering only the source, destination, vehicle capacity, detour distance and preferred path parameters. We prove that the simplified minimization problems are still NP-hard, while a further simplified variant is polynomial-time solvable. These results suggest a boundary between the NP-hard and polynomial-time solvable cases.

Qian-Ping Gu, Jiajian Leo Liang, Guochuan Zhang

On the Complexity of Bounded Deletion Propagation

The deletion propagation problem is a class of view update problems in relational databases [1]. Given a source database D, a monotone relational algebra query Q, the view V generated by Q(D), and an update $$\varDelta V$$ on the view, deletion propagation asks for a side-effect-free update $$\varDelta D$$ on database D such that $$Q(D{\setminus }\varDelta D)=V{\setminus }\varDelta V$$. In general, the updated database may be very distant from the original database. In this paper, we propose a new variant, the bounded deletion propagation problem ($$\textit{b}\text {-}\mathsf{dp}$$ for short), in which the number of deleted tuples $$|\varDelta D|$$ is bounded by a constant b; the goal is to find a view-side-effect-free and bounded $$\varDelta D$$. We then analyze its computational complexity. Our results show that in many cases both the data and the combined complexity drop, even for the functional-dependency-restricted version of deletion propagation.

Dongjing Miao, Yingshu Li, Xianmin Liu, Jianzhong Li

On Residual Approximation in Solution Extension Problems

The solution extension variant of a problem consists in, being given an instance and a partial solution, finding the best solution comprising the given partial solution. Many problems have been studied with a similar approach. For instance the Precoloring Extension problem, the clustered variant of the Travelling Salesman problem, or the General Routing Problem are in a way typical examples of solution extension variant problems. Motivated by practical applications of such variants, this work aims to explore different aspects around extension on classical optimization problems. We define residue-approximations as algorithms whose performance ratio on the non-prescribed part can be bounded, and corresponding complexity classes. Using residue-approximation, we classify several problems according to their residue-approximability.

Mathias Weller, Annie Chateau, Rodolphe Giroudeau, Jean-Claude König, Valentin Pollet

On the Parameterized Parallel Complexity and the Vertex Cover Problem

Efficiently parallelizable parameterized problems have been classified as being either in the class FPP (fixed-parameter parallelizable) or the class PNC (parameterized analog of NC), which contains FPP as a subclass. In this paper, we propose a more restrictive class of parallelizable parameterized problems called fixed-parameter parallel-tractable (FPPT). For a problem to be in FPPT, it should possess an efficient parallel algorithm not only from a theoretical standpoint but in practice as well. The primary distinction between FPPT and FPP is the parallel processor utilization, which is bounded by a polynomial function in the case of FPPT. We initiate the study of FPPT with the well-known k-vertex cover problem. In particular, we present a parallel algorithm that outperforms the best known parallel algorithm for this problem: using $$\mathcal {O}(m)$$ instead of $$\mathcal {O}(n^2)$$ parallel processors, the running time improves from $$4\log n + \mathcal {O}(k^k)$$ to $$\mathcal {O}(k\cdot \log ^3 n)$$, where m is the number of edges, n is the number of vertices of the input graph, and k is an upper bound on the size of the sought vertex cover. We also note that a few P-complete problems fall into FPPT, including the monotone circuit value problem (MCV) when the underlying graph has bounded Euler genus.

Faisal N. Abu-Khzam, Shouwei Li, Christine Markarian, Friedhelm Meyer auf der Heide, Pavel Podlipyan
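For context, the standard sequential FPT algorithm for k-vertex cover that such parallel algorithms build on is a bounded search tree. A minimal sketch of that classical sequential idea (our illustration, not the paper's parallel algorithm):

```python
def has_vertex_cover(edges, k):
    """Classical O(2^k * m) bounded search tree for k-vertex cover: any
    cover must contain an endpoint of every edge, so branch on the two
    endpoints of some uncovered edge and recurse with budget k - 1."""
    if not edges:
        return True          # nothing left to cover
    if k == 0:
        return False         # edges remain but the budget is spent
    u, v = edges[0]          # any cover must contain u or v
    without_u = [(a, b) for a, b in edges if u not in (a, b)]
    without_v = [(a, b) for a, b in edges if v not in (a, b)]
    return has_vertex_cover(without_u, k - 1) or has_vertex_cover(without_v, k - 1)
```

On a triangle, no single vertex covers all three edges, but any two vertices do; on a star, the center alone suffices.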

A Linear Potential Function for Pairing Heaps

We present the first potential function for pairing heaps with linear range. This implies that the runtime of a short sequence of operations is faster than previously known. It is also simpler than the only other potential function known to give constant amortized time for insertion.

John Iacono, Mark Yagnatinsky

Amortized Efficiency of Ranking and Unranking Left-Child Sequences in Lexicographic Order

A new type of sequence called left-child sequences (LC-sequences for short) was recently introduced by Wu et al. [19] for representing binary trees. In particular, they pointed out that such sequences have a natural interpretation from the viewpoint of data structures and gave a characterization of them. Based on this characterization, there is an algorithm to generate all LC-sequences of binary trees with n internal nodes in lexicographic order. In this paper, we extend this study to the ranking and unranking problems. By integrating a measure called “left distances”, introduced by Mäkinen [8] to represent binary trees, we develop efficient ranking and unranking algorithms for LC-sequences in lexicographic order. With the help of aggregate analysis, we show that both the ranking and unranking algorithms run in amortized $$\mathcal {O}(n)$$ time and space.

Kung-Jui Pai, Ro-Yu Wu, Jou-Ming Chang, Shun-Chieh Chang

Combinatorial Optimization


Optimal Speed Scaling with a Solar Cell

(Extended Abstract)

We consider the setting of a sensor that consists of a speed-scalable processor, a battery, and a solar cell that harvests energy from its environment at a time-invariant recharge rate. The processor must process a collection of jobs of various sizes. Jobs arrive at different times and have different deadlines. The objective is to minimize the recharge rate, which is the rate at which the device has to harvest energy in order to feasibly schedule all jobs. The main result is a polynomial-time combinatorial algorithm for processors with a natural set of discrete speed/power pairs.

Neal Barcelo, Peter Kling, Michael Nugent, Kirk Pruhs

An Approximation Algorithm for the k-Median Problem with Uniform Penalties via Pseudo-Solutions

We present a $$(1+ \sqrt{3} + \epsilon )$$-approximation algorithm for the k-median problem with uniform penalties in polynomial time, extending the recent result by Li and Svensson for the classical k-median problem without penalties. One important difference of this work from that of Li and Svensson is a new definition of sparse instance to exploit the combinatorial structure of our problem.

Chenchen Wu, Donglei Du, Dachuan Xu

On-Line Pattern Matching on Uncertain Sequences and Applications

We study the fundamental problem of pattern matching in the case where the string data is weighted: for every position of the string and every letter of the alphabet, a probability of occurrence for this letter at this position is given. Sequences of this type are commonly used to represent uncertain data. They are of particular interest in computational molecular biology, as they can represent different kinds of ambiguities in DNA sequences: distributions of SNPs in genome populations; position frequency matrices of DNA binding profiles; or even sequencing-related uncertainties. A weighted string may thus represent many different strings, each with probability of occurrence equal to the product of the probabilities of its letters at subsequent positions. In this article, we present new average-case results on pattern matching on weighted strings and show how they are applied effectively in several biological contexts. A free open-source implementation of our algorithms is made available.

Carl Barton, Chang Liu, Solon P. Pissis
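To make the model concrete, here is a minimal sketch of occurrence probabilities in a weighted string. The 1/z threshold parameter in `match_positions` is our assumption of the usual convention for valid occurrences, not something stated in the abstract.

```python
def occurrence_probability(pattern, weighted, pos):
    """Probability that `pattern` occurs at `pos` in a weighted string:
    weighted[i] maps each letter to its probability at position i, and an
    occurrence's probability is the product over the matched positions."""
    if pos + len(pattern) > len(weighted):
        return 0.0
    prob = 1.0
    for i, letter in enumerate(pattern):
        prob *= weighted[pos + i].get(letter, 0.0)
    return prob

def match_positions(pattern, weighted, z):
    """Positions where the pattern occurs with probability at least 1/z
    (a commonly used validity threshold; `z` is our assumed parameter)."""
    return [i for i in range(len(weighted) - len(pattern) + 1)
            if occurrence_probability(pattern, weighted, i) >= 1.0 / z]
```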

Scheduling with Interjob Communication on Parallel Processors

Consider a scheduling problem in which a set of jobs with interjob communication, canonically represented by a weighted tree, needs to be scheduled on m parallel processors interconnected by a shared communication channel. In each time step, we may allow any processed job to use a certain capacity of the channel in order to satisfy (parts of) its communication demands to adjacent jobs processed in parallel. The goal is to find a schedule with minimum length in which the communication demands of all jobs are satisfied. We show that this problem is NP-hard in the strong sense even if the number of processors and the maximum degree of the underlying tree are constant. We then design and analyze simple approximation algorithms and bound their asymptotic approximation ratios, both for paths and for arbitrary trees.

Jürgen König, Alexander Mäcker, Friedhelm Meyer auf der Heide, Sören Riechers

Cost-Efficient Scheduling on Machines from the Cloud

We consider a scheduling problem where machines need to be rented from the cloud in order to process jobs. There are two types of machines available, which can be rented for machine-type dependent prices and for arbitrary durations. However, a machine-type dependent setup time is required before a machine is available for processing. Jobs arrive online over time, have machine-type dependent sizes and have individual deadlines. The objective is to rent machines and schedule jobs so as to meet all deadlines while minimizing the rental cost. Since we observe that the slack of jobs has a fundamental influence on the competitiveness, we study the model when instances are parameterized by their (minimum) slack. An instance is said to have a slack of $$\beta $$ if, for all jobs, the difference between the job's release time and the latest point in time at which it needs to be started is at least $$\beta $$. While for $$\beta < s$$ no finite competitiveness is possible, our main result is a competitive online algorithm for $$\beta = (1+\varepsilon )s$$, where s and c denote the largest setup time and the cost ratio of the machine types, respectively; it is complemented by a corresponding lower bound.

Alexander Mäcker, Manuel Malatyali, Friedhelm Meyer auf der Heide, Sören Riechers

Strategic Online Facility Location

In this paper we consider a strategic variant of the online facility location problem. Given is a graph in which each node serves two roles: it is a strategic client stating requests as well as a potential location for a facility. In each time step one client states a request which induces private costs equal to the distance to the closest facility. Before serving, the clients may collectively decide to open new facilities, sharing the corresponding price. Instead of optimizing the global costs, each client acts selfishly. The prices of new facilities vary between nodes and also change over time, but are always bounded by some fixed value $$\alpha $$. Both the requests as well as the facility prices are given by an online sequence and are not known in advance. We characterize the optimal strategies of the clients and analyze their overall performance in comparison to a centralized offline solution. If all players optimize their own competitiveness, the global performance of the system is $$\mathcal {O}(\sqrt{\alpha }\cdot \alpha )$$ times worse than the offline optimum. A restriction to a natural subclass of strategies improves this result to $$\mathcal {O}(\alpha )$$. We also show that for fixed facility costs, we can find strategies such that this bound further improves to $$\mathcal {O}(\sqrt{\alpha })$$.

Maximilian Drees, Björn Feldkord, Alexander Skopalik

An Efficient PTAS for Parallel Machine Scheduling with Capacity Constraints

In this paper, we consider the classical scheduling problem on parallel machines with capacity constraints. We are given m identical machines, where each machine k can process up to $$c_k$$ jobs. The goal is to assign the $$n\le \sum _{k=1}^{m}c_k$$ independent jobs to the machines subject to the capacity constraints such that the makespan is minimized. This problem is a generalization of the c-partition problem, where $$c_k=c$$ for each machine. The c-partition problem is strongly NP-hard for $$c\ge 3$$, and the best known approximation algorithm for it, due to Babel et al. [2], has a performance ratio of 4/3. For the general problem where machines may have different capacities, the best known result is a 1.5-approximation algorithm with running time $$O(n\log n+m^2n)$$ [14]. In this paper, we improve on the previous result substantially by establishing an efficient polynomial time approximation scheme (EPTAS). The key idea is to establish a non-standard ILP (Integer Linear Programming) formulation for the scheduling problem, in which a set of crucial constraints (called proportional constraints) is introduced. Such constraints, along with a greedy rounding technique, allow us to derive an integer solution from a relaxed fractional one without violating the constraints.

Lin Chen, Klaus Jansen, Wenchang Luo, Guochuan Zhang

A Pseudo-Polynomial Time Algorithm for Solving the Knapsack Problem in Polynomial Space

It is well known that the knapsack problem is NP-complete and can be solved in pseudo-polynomial time by dynamic programming. However, such a dynamic programming approach requires pseudo-polynomial space, which makes the resulting algorithm impractical: memory is exhausted long before the running time becomes intolerable. To remedy this, Lokshtanov and Nederlof recently presented the first algorithm to solve the knapsack problem in pseudo-polynomial time and polynomial space. This paper presents another algorithm for solving the knapsack problem in pseudo-polynomial time and polynomial space.

Noriyuki Fujimoto
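For reference, the memory-hungry baseline the paper improves on is the textbook knapsack DP. The sketch below computes only the optimal value in O(W) space; recovering the chosen item set with standard backtracking needs the full Θ(nW) table, which is the space bottleneck discussed above.

```python
def knapsack_value(weights, values, capacity):
    """Textbook pseudo-polynomial DP for 0/1 knapsack: O(n*W) time.
    dp[w] holds the best value achievable with total weight at most w."""
    dp = [0] * (capacity + 1)
    for wt, val in zip(weights, values):
        # iterate weights downwards so each item is used at most once
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[capacity]
```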

Game Theory


An Incentive Mechanism for Selfish Bin Covering

In this paper, we consider selfish bin covering problems, which can be viewed as bin covering problems in a game-theoretic setting. Our main contribution is an incentive mechanism with a better price of anarchy. Under this mechanism, for any instance with a Nash equilibrium (NE), we show that the price of anarchy is 2/3. For cases where an NE does not exist, we propose a modified notion of NE, named M-NE, which can be obtained in finitely many steps from any initial state. We further show that for M-NE, the price of anarchy is 1/2 and the price of stability is 1.

Weian Li, Qizhi Fang, Wenjing Liu

Congestion Games with Mixed Objectives

We study a new class of games which generalizes congestion games and its bottleneck variant. We introduce congestion games with mixed objectives to model network scenarios in which players seek to optimize for latency and bandwidths alike. We characterize the existence of pure Nash equilibria (PNE) and the convergence of improvement dynamics. For games that do not possess PNE we give bounds on the approximation ratio of approximate pure Nash equilibria.

Matthias Feldotto, Lennart Leder, Alexander Skopalik

An Optimal Strategy for Static Black-Peg Mastermind with Two Pegs

Mastermind is a famous two-player game which has attracted much attention in the literature in recent years. In this work we investigate the static (also called non-adaptive) variant of Mastermind. The principal rule is that the codemaker has to choose a secret consisting of p pegs and c colors for each peg, and the codebreaker gives a number of guesses at once, receiving information from the codemaker for each guess. Using this information, the codebreaker makes one final guess for the correct secret. The aim of the game is to minimize the number of guesses. Whereas Goddard investigated the static version of original Mastermind in 2003, we do such an investigation for its black-peg variant, where the received information consists only of a number of black pegs, which corresponds to the number of pegs matching between the corresponding question and the secret. As our main result we present a strategy for this game for $$p=2$$ pegs and an arbitrary number of colors $$c\ge 3$$, and prove its feasibility and optimality. Furthermore, by computer search we found optimal strategies for 9 other pairs (p, c).

Gerold Jäger
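To illustrate the setting, black-peg feedback and the feasibility of a static guess list can be checked mechanically. The helper below is our illustration, assuming the usual requirement that the feedback vector must identify the secret uniquely so that the final guess succeeds.

```python
def black_pegs(secret, guess):
    """Black-peg feedback: number of positions where guess and secret agree."""
    return sum(s == g for s, g in zip(secret, guess))

def is_feasible_static_strategy(guesses, secrets):
    """A static guess list works iff the vector of black-peg answers is
    distinct for every possible secret, forcing the final guess."""
    seen = set()
    for secret in secrets:
        key = tuple(black_pegs(secret, g) for g in guesses)
        if key in seen:
            return False  # two secrets share a feedback vector
        seen.add(key)
    return True
```

For p = 2 and c = 2 there are four possible secrets; two well-chosen guesses already separate them, while one guess cannot.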



The Incentive Ratio in Exchange Economies

The incentive ratio measures the utility gains from strategic behaviour. Without any restrictions on the setup, ratios for linear, Leontief and Cobb–Douglas exchange markets are unbounded, showing that manipulating the equilibrium is a worthwhile endeavour, even if it is computationally challenging. Such unbounded improvements can be achieved even if agents only misreport their utility functions. This provides a sharp contrast with previous results from Fisher markets. When the Cobb–Douglas setup is more restrictive, the maximum utility gain is bounded by the number of commodities. By means of an example, we show that it is possible to exceed a known upper bound for Fisher markets in exchange economies.

Ido Polak

w-Centroids and Least (w, l)-Central Subtrees in Weighted Trees

Let T be a weighted tree with a positive number w(v) associated with each of its vertices and a positive number l(e) associated with each of its edges. In this paper we show that each least (w, l)-central subtree of a weighted tree either contains a vertex of the w-centroid or is adjacent to a vertex of the w-centroid. Also, we show that any two least (w, l)-central subtrees of a weighted tree either have a nonempty intersection or are adjacent.

Erfang Shan, Liying Kang

Solving Dynamic Vehicle Routing Problem with Soft Time Window by iLNS and hPSO

The Vehicle Routing Problem (VRP) is widely studied under real logistics environments. Since customers' demands can appear dynamically and need to be served within soft time windows, the dynamic vehicle routing problem with soft time windows (DVRPSTW) is studied in this paper. We use an improved large neighborhood search algorithm (iLNS) and a hybrid Particle Swarm Optimization (hPSO) to solve the problem. Comparisons of both algorithms against benchmarks show that our methods can solve DVRPSTW efficiently, even for instances with many customers.

Xiaohan He, Xiaoli Zeng, Liang Song, Hejiao Huang, Hongwei Du

Convex Independence in Permutation Graphs

A set C of vertices of a graph is $$P_3$$-convex if every vertex outside C has at most one neighbor in C. The convex hull $$\sigma (A)$$ of a set A is the smallest $$P_3$$-convex set that contains A. A set M is convexly independent if for every vertex $$x \in M$$, $$x \notin \sigma (M-x)$$. We show that the maximal number of vertices that a convexly independent set in a permutation graph can have can be computed in polynomial time. (Due to space limits, the missing proofs are presented in the full paper.)

Wing-Kai Hon, Ton Kloks, Fu-Hong Liu, Hsiang-Hsuan Liu
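The P3-convex hull from the definition above can be computed by a simple fixpoint iteration: a vertex with two neighbours already inside must be absorbed. The sketch below is our brute-force illustration for arbitrary small graphs, not the paper's polynomial-time algorithm for permutation graphs.

```python
def p3_hull(vertices, edges, seed):
    """P3-convex hull of `seed`: repeatedly absorb any vertex that has at
    least two neighbours inside the current set, until a fixpoint."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    hull = set(seed)
    changed = True
    while changed:
        changed = False
        for v in vertices:
            if v not in hull and len(adj[v] & hull) >= 2:
                hull.add(v)
                changed = True
    return hull

def is_convexly_independent(vertices, edges, M):
    """M is convexly independent iff no x in M lies in the hull of M - {x}."""
    return all(x not in p3_hull(vertices, edges, set(M) - {x}) for x in M)
```

On the path 0-1-2, the endpoints {0, 2} force vertex 1 into their hull, yet they are still convexly independent; on a triangle, all three vertices together are not.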

The Connected p-Center Problem on Cactus Graphs

In this paper, we study a variant of the p-center problem on cactus graphs in which the p-center is required to be connected; this problem is called the connected p-center problem. For the connected p-center problem on cactus graphs, we propose a dynamic programming algorithm and show that its time complexity is $$O(n^2p^2)$$, where n is the number of vertices.

Chunsong Bai, Liying Kang, Erfang Shan

Comparison of Quadratic Convex Reformulations to Solve the Quadratic Assignment Problem

We consider the Quadratic Assignment Problem (QAP), which consists in minimizing a quadratic function of binary variables subject to assignment constraints. In this paper, we build two families of equivalent quadratic convex formulations of (QAP). The continuous relaxation of each equivalent formulation is then a convex problem and can be used within a B&B. In this work, we focus on finding the “best” equivalent formulation within each family, and we prove that it can be computed using semidefinite programming. Finally, we get two convex formulations of (QAP) that differ in their sizes and in the tightness of their continuous relaxation bounds. We present computational experiments that demonstrate the practical usefulness of quadratic convex formulations for solving medium-size instances of (QAP).

Sourour Elloumi, Amélie Lambert

Using Unified Model Checking to Verify Heaps

This paper addresses the problem of verifying heap evolution properties of pointer programs. To this end, a new unified model checking approach with MSVL (Modeling, Simulation and Verification Language) and PPTL$$^{\tiny \text{ SL }}$$ is presented. The former is an executable subset of PTL (Projection Temporal Logic) while the latter is an extension of PPTL (Propositional Projection Temporal Logic) with separation logic. MSVL is used to model pointer programs, and PPTL$$^{\tiny \text{ SL }}$$ to specify heap evolution properties. In addition, we implement a prototype in order to demonstrate our approach.

Xu Lu, Zhenhua Duan, Cong Tian

A Filtering Heuristic for the Computation of Minimum-Volume Enclosing Ellipsoids

We study heuristics to accelerate existing state-of-the-art algorithms for the minimum-volume enclosing ellipsoid problem. We propose a new filtering heuristic that can significantly reduce the number of distance computations performed in algorithms derived from Khachiyan’s first-order algorithm. Our experiments indicate that in high dimensions, the filtering heuristic is more effective than the elimination heuristic proposed by Harman and Pronzato. In lower dimensions, the elimination heuristic is superior.

Linus Källberg, Thomas Larsson

Relaxations of Discrete Sets with Semicontinuous Variables

A variable is said to be semicontinuous if its domain is given by the union of two disjoint nonempty closed intervals. As such, semicontinuous variables can be regarded as relaxations of binary or integrality constraints appearing in combinatorial optimization problems. For knapsacks and a class of single-node flow sets, we consider relaxations involving unbounded semicontinuous variables. We analyze the complexity of linear optimization over such relaxations and provide descriptions of their convex hulls in terms of linear inequalities and extended formulations.

Gustavo Angulo

Unfolding the Core Structure of the Reciprocal Graph of a Massive Online Social Network

Google+ (G+ in short) is a directed online social network where nodes have either reciprocal (bidirectional) edges or parasocial (one-way) edges. As reciprocal edges represent strong social ties, we study the core structure of the subgraph formed by them, referred to as the reciprocal network of G+. We develop an effective three-step procedure to hierarchically extract and unfold the core structure of this reciprocal network. This procedure builds up and generalizes ideas from the existing k-shell decomposition and clique percolation approaches, and produces higher-level representations of the core structure of the G+ reciprocal network. Our analysis shows that there are seven subgraphs (“communities”) consisting of dense clusters of cliques lying at the center of the core structure of the G+ reciprocal network, through which other communities of cliques are richly connected. Together they form the core to which “peripheral” sparse subgraphs are attached.

Braulio Dumba, Zhi-Li Zhang
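The k-shell decomposition mentioned above can be sketched with the classical peeling procedure; the function below is a generic illustration of that building block, not the paper's three-step G+ pipeline.

```python
def k_shell_indices(vertices, edges):
    """k-shell index of every vertex by iterative peeling: for k = 1, 2, ...,
    repeatedly delete all vertices whose remaining degree is below k; a
    vertex deleted in round k receives shell index k - 1."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    shell, remaining, k = {}, set(vertices), 0
    while remaining:
        k += 1
        while True:
            peel = {v for v in remaining if len(adj[v] & remaining) < k}
            if not peel:
                break
            for v in peel:
                shell[v] = k - 1
            remaining -= peel
    return shell
```

For a triangle with a pendant vertex attached, the pendant lands in shell 1 and the triangle's vertices in shell 2.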

Tackling Common Due Window Problem with a Two-Layered Approach

This work presents a polynomial algorithm to optimize any given job sequence for the Common Due-Window (CDW) Problem. The CDW problem consists of scheduling and sequencing a set of jobs against a due window to minimize the total weighted earliness/tardiness penalty. This due window is defined by the left and right common due-dates. Jobs that finish before (after) the left (right) due-date are termed early (tardy) jobs. We present an exact polynomial algorithm for optimally scheduling a given fixed job sequence on a single machine with a runtime complexity of O(n), where n is the number of jobs. The linear algorithm and a heuristic based on the V-shaped property are then incorporated into a modified Simulated Annealing (SA) algorithm to obtain optimal/near-optimal solutions. We carry out computational experiments to demonstrate the utility of our approach on benchmark instances and against previous work on this problem.

Abhishek Awasthi, Jörg Lässig, Thomas Weise, Oliver Kramer
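To make the objective concrete, the total penalty of a fixed job sequence can be evaluated in O(n). The sketch below assumes one common convention (jobs processed back-to-back from time 0, earliness/tardiness measured from completion times), which may differ in detail from the paper's model.

```python
def cdw_penalty(processing, weights, left_due, right_due):
    """Total weighted earliness/tardiness of a job sequence processed
    back-to-back from time 0: a job completing before `left_due` pays
    weighted earliness, after `right_due` weighted tardiness, and jobs
    completing inside the due window pay nothing."""
    total, t = 0, 0
    for p, w in zip(processing, weights):
        t += p                       # completion time of this job
        if t < left_due:
            total += w * (left_due - t)
        elif t > right_due:
            total += w * (t - right_due)
    return total
```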

A Polynomial Time Solution for Permutation Scaffold Filling

Recently, the scaffold filling problem has attracted a lot of attention due to its potential applications in processing incomplete genomic data. However, almost all the current research assumes that a scaffold is given as an incomplete sequence (i.e., missing genes can be inserted anywhere in the incomplete sequence). This differs significantly from most of the real genomic dataset, where a scaffold is given as a sequence of contigs. We show in this paper that when a scaffold is given as a sequence of contigs, and when the genome contains no duplication of genes, the corresponding scaffold filling problem, with the objective being maximizing the number of adjacencies between the filled scaffold and a complete reference genome, is polynomially solvable.

Nan Liu, Peng Zou, Binhai Zhu
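The adjacency objective can be illustrated directly: with no gene duplication, common adjacencies are simply shared unordered pairs of neighbouring genes. A minimal sketch of that count (our illustration, not the paper's filling algorithm):

```python
def common_adjacencies(genome_a, genome_b):
    """Number of common adjacencies between two genomes: unordered pairs of
    neighbouring genes occurring in both. With no gene duplication each
    adjacency is unique, so plain sets suffice."""
    def pairs(genome):
        return {frozenset(p) for p in zip(genome, genome[1:])}
    return len(pairs(genome_a) & pairs(genome_b))
```

For example, "abcd" and "abdc" share the adjacencies {a,b} and {c,d}, the latter in reversed orientation.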

