
About this Book

This Festschrift is in honor of Ker-I Ko, Professor at Stony Brook University, USA. Ker-I Ko was one of the founding fathers of computational complexity over the real numbers and computable analysis. He and Harvey Friedman devised a theoretical model for real-number computation by extending Turing machine computation. He contributed significantly to advancing the theory of structural complexity, especially on polynomial-time isomorphism, instance complexity, and the relativization of the polynomial-time hierarchy. Ker-I also made many contributions to the theory of approximation algorithms for combinatorial optimization problems. This volume contains 17 contributions in the areas of complexity and approximation. These articles are authored by researchers from around the world, including North America, Europe, and Asia. Most of them are co-authors, colleagues, friends, and students of Ker-I Ko.



In Memoriam: Ker-I Ko (1950–2018)

Ker-I Ko was a colleague and a friend; our friendship began in the mid-1980s. Ker-I passed away peacefully of lung failure on December 13, 2018, at a hospital in New York, with his wife Mindy Pien and all three of his children by his side. His passing is a great loss to the theoretical computer science community.
Ding-Zhu Du, Jie Wang

Ker-I Ko and the Study of Resource-Bounded Kolmogorov Complexity

Ker-I Ko was among the first people to recognize the importance of resource-bounded Kolmogorov complexity as a tool for better understanding the structure of complexity classes. In this brief informal reminiscence, I review the milieu of the early 1980s that caused an upwelling of interest in resource-bounded Kolmogorov complexity, and then I discuss some more recent work that sheds additional light on the questions related to Kolmogorov complexity that Ko grappled with in the 1980s and 1990s.
In particular, I include a detailed discussion of Ko’s work on the question of whether it is \({\mathsf{NP}}\)-hard to determine the time-bounded Kolmogorov complexity of a given string. This problem is closely connected with the Minimum Circuit Size Problem (\({\mathsf{MCSP}}\)), which is central to several contemporary investigations in computational complexity theory.
Eric Allender

The Power of Self-Reducibility: Selectivity, Information, and Approximation

This chapter provides a hands-on tutorial on the important technique known as self-reducibility. Through a series of "Challenge Problems" (theorems that the reader, after being given definitions and tools, will try to prove), the tutorial asks the reader not to read proofs that use self-reducibility, but rather to discover such proofs. In particular, the chapter seeks to guide the reader to discovering proofs of four interesting theorems from the literature, with focus areas ranging from selectivity to information to approximation, whose proofs draw on self-reducibility.
The chapter's goal is to allow interested readers to add self-reducibility to their collection of proof tools. The chapter simultaneously has a related but different goal: to provide a "lesson plan" (with a coordinated set of slides available online to support this use [13]) for a one- or two-lecture series for undergraduate students, even those with no background other than basic discrete mathematics and an understanding of what polynomial-time computation is, to immerse them in hands-on proving and, in doing so, to serve as an invitation to take courses on Models of Computation or Complexity Theory.
Lane A. Hemaspaandra
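Self-reducibility is classically illustrated by SAT: a decision oracle for satisfiability can be turned into a procedure that constructs a satisfying assignment by fixing variables one at a time and asking whether the simplified formula stays satisfiable. The following minimal Python sketch is illustrative, not from the chapter; the brute-force stand-in oracle, the clause encoding, and the function names are all assumptions.

```python
def brute_force_sat(clauses, n):
    """Stand-in decision oracle: is the CNF over variables 1..n satisfiable?
    Clauses are lists of nonzero ints; a negative literal means negation."""
    for bits in range(2 ** n):
        assign = {i + 1: bool(bits >> i & 1) for i in range(n)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def simplify(clauses, var, value):
    """Fix variable `var` to `value` and simplify the formula."""
    out = []
    for c in clauses:
        if (var if value else -var) in c:   # clause already satisfied: drop it
            continue
        out.append([l for l in c if abs(l) != var])
        if not out[-1]:                     # empty clause: branch unsatisfiable
            return None
    return out

def find_assignment(clauses, n, sat=brute_force_sat):
    """Self-reduction: turn a SAT *decision* oracle into a *search* procedure."""
    if not sat(clauses, n):
        return None
    assignment = {}
    for var in range(1, n + 1):
        for value in (True, False):
            reduced = simplify(clauses, var, value)
            if reduced is not None and sat(reduced, n):
                assignment[var] = value
                clauses = reduced
                break
    return assignment
```

The search procedure makes at most 2n oracle calls, each on a formula obtained from the original by substitution; this downward structure is exactly what the tutorial's self-reducibility arguments exploit.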

Who Asked Us? How the Theory of Computing Answers Questions about Analysis

Algorithmic fractal dimensions—constructs of computability theory—have recently been used to answer open questions in classical geometric measure theory, questions of mathematical analysis whose statements do not involve computability theory or logic. We survey these developments and the prospects for future such results.
Jack H. Lutz, Neil Lutz

Promise Problems on Probability Distributions

We consider probability distributions associated with the running times of probabilistic algorithms, given for algorithmic processing in symbolic form. The decision (and counting) problems considered deal with the question of whether a complete restart of the underlying probabilistic algorithm after some number of steps t gives an advantage. Since deciding whether a given symbolic formula indeed represents a probability distribution (either as a probability mass function or as a cumulative distribution function) is itself difficult, we discuss the issue in terms of promise problems.
Jan-Hendrik Lorenz, Uwe Schöning
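To make the restart question concrete, consider a runtime distribution given explicitly rather than symbolically. The sketch below is an illustration only, not the chapter's symbolic setting; the function names and the two-point example distribution are assumptions. It uses the standard identity for restarts at cutoff t: with p = P(T <= t), failed attempts cost t each, so the expected total time is (t(1-p) + E[T; T <= t]) / p.

```python
from fractions import Fraction

def expected_runtime(dist):
    """Expected running time; `dist` maps runtime t -> probability."""
    return sum(t * p for t, p in dist.items())

def expected_runtime_with_restart(dist, cutoff):
    """Expected total time when the algorithm restarts from scratch
    whenever it has not finished within `cutoff` steps."""
    p = sum(prob for t, prob in dist.items() if t <= cutoff)
    if p == 0:
        raise ValueError("restart cutoff can never succeed")
    truncated = sum(t * prob for t, prob in dist.items() if t <= cutoff)
    return (cutoff * (1 - p) + truncated) / p

# A heavy-tailed toy distribution: finish at step 1 or step 100, each w.p. 1/2.
dist = {1: Fraction(1, 2), 100: Fraction(1, 2)}
```

For this distribution the plain expectation is 50.5 steps, while restarting after step 1 brings the expectation down to 2 steps, showing how drastically a restart can help on heavy-tailed runtimes.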

On Nonadaptive Reductions to the Set of Random Strings and Its Dense Subsets

We explain our recent results [21] on the computational power of an arbitrary distinguisher for (not necessarily computable) hitting set generators. This work is motivated by the desire to show the limits of black-box reductions to some distributional \(\mathsf {NP}\) problem. We show that a black-box nonadaptive randomized reduction to any distinguisher for (not only polynomial-time but also) exponential-time computable hitting set generators can be simulated in \(\mathsf {AM}\cap \mathsf {co} \mathsf {AM}\); we also show an upper bound of \(\mathsf {S}_2^\mathsf {NP}\) even if there is no computational bound on the hitting set generator. These results provide additional evidence that the recent worst-case to average-case reductions within \(\mathsf {NP}\) shown by Hirahara (2018, FOCS) are inherently non-black-box. (We omit the detailed arguments and proofs, which can be found in [21].)
Shuichi Hirahara, Osamu Watanabe

Computability of the Solutions to Navier-Stokes Equations via Effective Approximation

As one of the seven open problems in the addendum to their 1989 book Computability in Analysis and Physics, Pour-El and Richards proposed “... the recursion theoretic study of particular nonlinear problems of classical importance. Examples are the Navier-Stokes equation, the KdV equation, and the complex of problems associated with Feigenbaum’s constant.” In this paper, we approach the question of whether the Navier-Stokes equation admits recursive solutions in the sense of Weihrauch’s Type-2 Theory of Effectivity. A natural encoding (“representation”) is constructed for the space of divergence-free vector fields on the 2-dimensional open square \(\varOmega = (-1, 1)^2\). This representation is shown to render uniformly computable first the mild solution to the Stokes Dirichlet problem and then a strong local solution to the nonlinear inhomogeneous incompressible Navier-Stokes initial value problem. Following classical approaches, the proofs rely on many subtle and intricate estimates, which are developed in the paper to establish the computability results.
Shu-Ming Sun, Ning Zhong, Martin Ziegler

AutoOverview: A Framework for Generating Structured Overviews over Many Documents

This article is an exposition of a recent study on the automatic generation of a structured overview (SOV) of a very large corpus of documents, where an SOV is organized into sections and subsections according to the latent hierarchy of topics contained in the documents. We present a new framework called AutoOverview that includes and extends our previous scheme called NDORGS (best-paper runner-up at ACM DocEng 2019) [47]. Unlike the standard NLP task of generating a coherent summary, typically over a handful of documents, AutoOverview must balance the two competing objectives of accuracy and efficiency over thousands of documents. It incorporates hierarchical topic clustering, single-document summarization, multi-document summarization, title generation, and other text mining techniques into a single platform. To assess the quality of an SOV generated over many documents, it is possible to rely on human annotators to judge its readability, but the sheer size of the input makes it infeasible for human judges to determine whether an SOV covers all the major points contained in the original texts. To overcome this obstacle, we present a text mining mechanism that evaluates the topic coverage of the SOV against the topics contained in the original documents. We use multi-attribute decision making to help determine a suitable suite of algorithms to implement AutoOverview and the parameter values needed to achieve a satisfactory SOV with respect to both accuracy and efficiency. We use NDORGS as an implementation example to address these issues and present evaluation results on a corpus of over 2,000 classified news articles and a corpus of over 5,000 unclassified news articles spanning 10 years, obtained from searches of the same keyword.
Jie Wang

Better Upper Bounds for Searching on a Line with Byzantine Robots

Searching on a line with Byzantine robots was first posed by Czyzowicz et al. in [13]: suppose there are n robots searching on an infinite line for a target whose location is unknown to the robots. At the beginning, all robots are at the origin, and they then search with maximum speed 1. Unfortunately, f of them are Byzantine, which means that they may ignore the target when passing it or lie that they have found it. The target is therefore considered found once at least \(f+1\) robots claim to have found it at the same location. The aim is to design a parallel algorithm minimizing the competitive ratio S(n, f), the worst-case ratio between the time needed by the n robots (f of which are Byzantine) to find the target and the distance from the origin to the target.
In this paper, our main contribution is a new algorithmic framework for solving the Byzantine robot searching problem when (n, f) is sufficiently large. Under this framework, we design two specific algorithms that improve the previous upper bounds of [13] when \(f/n\in (0.358,0.382)\cup (0.413,0.5)\). We also improve the upper bound on S(n, f) for some small (n, f): specifically, we improve the upper bound on S(6, 2) from 4 to 3.682, and the upper bound on S(3, 1) from 9 to 8.53.
Xiaoming Sun, Yuan Sun, Jialin Zhang
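The fault-free single-robot baseline of line search is the classical zig-zag strategy with doubling turn radii, whose competitive ratio is 9, the same value as the previous S(3, 1) upper bound mentioned above. The sketch below is illustrative only; it is the classical strategy, not the paper's improved algorithm, and the function name and starting radius are assumptions.

```python
def zigzag_search_time(target, start_radius=1.0):
    """Search time of the classical doubling zig-zag strategy for a single
    speed-1 robot on a line: sweep right to start_radius, return to the
    origin, sweep left to 2*start_radius, return, and so on, doubling the
    radius and alternating direction until the target is reached."""
    t = 0.0
    radius, direction = start_radius, 1
    while True:
        # Does the target lie within the current sweep's direction and reach?
        if direction * target > 0 and abs(target) <= radius:
            return t + abs(target)
        t += 2 * radius          # out to `radius` and back to the origin
        radius *= 2
        direction = -direction
```

For a target at distance d just beyond a turning point, the wasted back-and-forth travel approaches 8d, which is where the worst-case ratio of 9 comes from.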

A Survey on Double Greedy Algorithms for Maximizing Non-monotone Submodular Functions

Maximizing non-monotone submodular functions is one of the most important problems in submodular optimization. A breakthrough on the problem is the double-greedy technique introduced by Buchbinder et al. [7], which prior work has shown to be very effective. This paper surveys double-greedy algorithms for maximizing non-monotone submodular functions, from the discrete domains of sets and integer lattices to continuous domains.
Qingqin Nong, Suning Gong, Qizhi Fang, Dingzhu Du
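The deterministic double-greedy algorithm of Buchbinder et al. for unconstrained non-monotone submodular maximization is short enough to state in full: scan the elements once, maintaining a growing set X and a shrinking set Y, and resolve each element by comparing its two marginal gains (the deterministic variant is a 1/3-approximation; the randomized variant achieves 1/2). A minimal Python sketch follows; the cut-function example is an illustrative assumption, not from the survey.

```python
def double_greedy(ground, f):
    """Deterministic double greedy for unconstrained non-monotone
    submodular maximization. `f` maps a frozenset to a nonnegative value."""
    X, Y = frozenset(), frozenset(ground)
    for e in ground:
        a = f(X | {e}) - f(X)      # marginal gain of adding e to X
        b = f(Y - {e}) - f(Y)      # marginal gain of removing e from Y
        if a >= b:
            X = X | {e}
        else:
            Y = Y - {e}
    return X                       # X == Y when the scan finishes

# Example: the cut function of a graph is submodular and non-monotone.
edges = [(1, 2), (2, 3)]           # the path 1 - 2 - 3
def cut(S):
    return sum(1 for u, v in edges if (u in S) != (v in S))
```

On the path example, the scan returns the set {1, 3}, whose cut value 2 is optimal.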

Sequential Location Game on Continuous Directional Star Networks

We consider a sequential location game on a continuous directional star network, where a finite number of players (facilities) sequentially choose their locations to serve consumers who are uniformly and continuously distributed over the network. Each consumer patronizes all the closest chosen locations, bringing each an equal share of payoff. In turn, each location distributes the total payoff it receives evenly among the players who chose it. We study hierarchical Stackelberg equilibria (HSE), also known as subgame perfect equilibria of the game, under which every player chooses a location to maximize its payoff. We establish a universal lower bound on the payoff to a player under any HSE outcome. The lower bound is then strengthened with better estimates, and some HSE outcomes are presented explicitly, provided that the number of players and the network parameters satisfy certain relations.
Xujin Chen, Xiaodong Hu, Mengqi Zhang

Core Decomposition, Maintenance and Applications

Structures of large graphs have attracted much attention in recent years, including the k-clique, k-core, k-truss, and k-club, to name just a few. These structures can help detect the most cohesive or most influential subgraphs of social networks and other massive graphs. In this survey, we summarize research on the k-core, the maximal connected subgraph of a graph in which every vertex has degree at least k. We address the core decomposition problem, the core maintenance problem, and a few applications of the k-core.
Feiteng Zhang, Bin Liu, Qizhi Fang
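The core decomposition itself can be computed by the standard peeling algorithm: repeatedly delete a vertex of minimum degree and record the level at which each vertex is deleted; the level at deletion is the vertex's core number. A minimal Python sketch (illustrative; the vertex labels and the adjacency-dictionary encoding are assumptions):

```python
import heapq

def core_numbers(adj):
    """Core decomposition by peeling. `adj` maps vertex -> set of neighbors.
    The core number of v is the largest k such that v lies in a subgraph
    of minimum degree at least k."""
    degree = {v: len(ns) for v, ns in adj.items()}
    heap = [(d, v) for v, d in degree.items()]
    heapq.heapify(heap)
    removed, core = set(), {}
    k = 0
    while heap:
        d, v = heapq.heappop(heap)
        if v in removed or d != degree[v]:
            continue                  # stale heap entry; skip it
        k = max(k, d)                 # peeling level only ever increases
        core[v] = k
        removed.add(v)
        for u in adj[v]:
            if u not in removed:
                degree[u] -= 1
                heapq.heappush(heap, (degree[u], u))
    return core

# A triangle 1-2-3 with a pendant vertex 4 attached to 1:
adj = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2}, 4: {1}}
```

Here the pendant vertex 4 has core number 1, while the triangle vertices all have core number 2.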

Active and Busy Time Scheduling Problem: A Survey

We present an overview of recent research on the busy time and active time scheduling models, which have applications in energy-efficient scheduling for cloud computing systems, optical network design, and computer memories. The major feature of this type of scheduling problem is aggregating job execution into as few time slots as possible in order to save energy. The difference between busy time and active time is that the former refers to multiple machines while the latter refers to a single machine. After summarizing previous results on this topic, we propose a few potential future directions for each model.
Vincent Chau, Minming Li

A Note on the Position Value for Hypergraph Communication Situations

The position value, introduced by Meessen [3], is an important allocation rule for communication situations. An axiomatic characterization of the position value for arbitrary graph communication situations was given by Slikker [8]. However, characterizing the position value for arbitrary hypergraph communication situations has remained an open problem. This note provides an axiomatic characterization of the position value for arbitrary hypergraph communication situations by employing component efficiency and the partial balanced hyperlink contributions given in Shan et al. [6].
Erfang Shan, Jilei Shi, Wenrong Lv

An Efficient Approximation Algorithm for the Steiner Tree Problem

Given an arbitrary weighted graph, the Steiner tree problem seeks a minimum-cost tree spanning a given subset of the vertices (terminals). Byrka et al. proposed an iterative method that achieves an approximation ratio of \(1.3863+\epsilon \). Moreover, Goemans et al. showed that it is possible to achieve the same approximation guarantee while solving the hypergraphic LP relaxation only once. However, solving the hypergraphic LP relaxation is time-consuming. This article presents an efficient two-phase greedy heuristic that achieves an approximation ratio of 1.4295.
Chi-Yeh Chen, Sun-Yuan Hsieh
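For contrast with the ratios discussed above, the textbook baseline is the metric-closure 2-approximation: compute shortest-path distances between terminals and return a minimum spanning tree of the resulting complete terminal graph. The sketch below is illustrative only and is not the article's heuristic; the graph encoding and function names are assumptions.

```python
import heapq

def shortest_paths(graph, src):
    """Dijkstra from src; `graph` maps vertex -> {neighbor: edge weight}."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float("inf")):
            continue                          # stale entry
        for u, w in graph[v].items():
            if d + w < dist.get(u, float("inf")):
                dist[u] = d + w
                heapq.heappush(heap, (d + w, u))
    return dist

def steiner_2approx_cost(graph, terminals):
    """2-approximation: MST of the metric closure on the terminals,
    grown by Prim's algorithm."""
    dists = {t: shortest_paths(graph, t) for t in terminals}
    in_tree, cost = {terminals[0]}, 0
    while len(in_tree) < len(terminals):
        w, t = min((dists[a][b], b) for a in in_tree
                   for b in terminals if b not in in_tree)
        in_tree.add(t)
        cost += w

    return cost

# A star: center 'c' joined to the three terminals 'a', 'b', 'd'.
graph = {'c': {'a': 1, 'b': 1, 'd': 1},
         'a': {'c': 1}, 'b': {'c': 1}, 'd': {'c': 1}}
```

On the star, the metric-closure MST costs 4 while the optimal Steiner tree (through the center) costs 3, illustrating how the factor of up to 2 arises.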

A Review for Submodular Optimization on Machine Scheduling Problems

This paper reviews recent results on machine scheduling problems solved by methods of submodular optimization. We present basic definitions of submodular functions and their connection to scheduling models. Based on a classification of problem features, we survey different scheduling models, applications of these scheduling scenarios, submodular optimization approaches, and the performance of the corresponding algorithms. It is shown that these submodular optimization methodologies yield fast and efficient algorithms for specific scheduling models such as controllable processing times, unreliable job processing, and common-operation scheduling. By identifying trends in this field, we discuss some potential directions for future research.
Siwen Liu

Edge Computing Integrated with Blockchain Technologies

With the rapid increase in the number of devices connected to the Internet of Things (IoT), the traditional centralized cloud computing system is unable to satisfy the Quality of Service (QoS) requirements of many applications, especially those demanding real-time response, reliability, and security. Edge computing has been introduced as an extension of cloud computing; its strength lies in its ability to move sensitive data from the cloud to the edge, increasing network security and enabling high-frequency interaction and real-time transmission of data. However, the fact that edge servers maintain sensitive private information raises many important security issues for the edge computing network. Moreover, the data produced by IoT devices are separated into many parts and stored on edge servers in different locations, which makes it hard to guarantee data integrity in the face of data loss and incorrect storage on edge servers. With the emergence of blockchain technologies, the various security problems and the data integrity of edge computing can be addressed by integrating blockchain technologies. In this paper, we present a comprehensive overview of edge computing integrated with blockchain technologies. First, we introduce blockchain technologies and the architecture of edge computing. Second, we introduce the motivations for and architecture of edge computing integrated with blockchain. Third, we survey the related work on edge computing integrated with blockchain. Finally, we discuss the research challenges.
Chuanwen Luo, Liya Xu, Deying Li, Weili Wu

