
2021 | Book

Computer Science – Theory and Applications

16th International Computer Science Symposium in Russia, CSR 2021, Sochi, Russia, June 28–July 2, 2021, Proceedings


About this book

This book constitutes the proceedings of the 16th International Computer Science Symposium in Russia, CSR 2021, held in Sochi, Russia, in June/July 2021.

The 28 full papers were carefully reviewed and selected from 68 submissions. They cover a broad range of topics, such as formal languages and automata theory, geometry and discrete structures, and theory and algorithms for application domains, among others.

Table of Contents

Frontmatter
Computational Complexity of Multi-player Evolutionarily Stable Strategies
Abstract
In this paper we study the computational complexity of computing an evolutionarily stable strategy (ESS) in multi-player symmetric games. For two-player games, deciding existence of an ESS is complete for \(\mathrm {\Sigma }^\mathrm {p}_2\), the second level of the polynomial time hierarchy. We show that deciding existence of an ESS of a multi-player game is closely connected to the second level of the real polynomial time hierarchy. Namely, we show that the problem is hard for a complexity class we denote as \(\exists ^\mathrm {D}\cdot \forall \mathbb {R}\) and is a member of \(\exists \forall \mathbb {R}\), where the former class restricts the latter by having the existentially quantified variables be Boolean rather than real-valued. As a special case of our results it follows that deciding whether a given strategy is an ESS is complete for \(\forall \mathbb {R}\).
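To make the ESS conditions concrete, here is a minimal sketch (our own illustration, not the paper's algorithm, which concerns the full decision problem): it checks the two textbook ESS conditions of a candidate strategy in a symmetric two-player game against a finite list of mutant strategies. The Hawk-Dove payoff matrix used in the example is an assumption chosen purely for demonstration.

```python
def payoff(x, y, A):
    """Expected payoff of mixed strategy x played against mixed strategy y under payoff matrix A."""
    return sum(x[i] * A[i][j] * y[j] for i in range(len(x)) for j in range(len(y)))

def passes_ess_conditions(x, mutants, A, eps=1e-9):
    """Check the two ESS conditions of x against each supplied mutant y != x:
    either u(x,x) > u(y,x), or u(x,x) = u(y,x) and u(x,y) > u(y,y).
    This is only a necessary check over the listed mutants, not a full ESS decision procedure."""
    for y in mutants:
        if y == x:
            continue
        uxx, uyx = payoff(x, x, A), payoff(y, x, A)
        if uxx > uyx + eps:
            continue
        if abs(uxx - uyx) <= eps and payoff(x, y, A) > payoff(y, y, A) + eps:
            continue
        return False
    return True

# Hawk-Dove game with V = 2, C = 4: the mixed strategy (1/2, 1/2) is the classical ESS.
A = [[-1, 2], [0, 1]]
print(passes_ess_conditions([0.5, 0.5], [[1.0, 0.0], [0.0, 1.0]], A))  # True
print(passes_ess_conditions([1.0, 0.0], [[0.5, 0.5], [0.0, 1.0]], A))  # False
```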
Manon Blanc, Kristoffer Arnsfelt Hansen
Injective Colouring for H-Free Graphs
Abstract
A function \(c:V(G)\rightarrow \{1,2,\ldots ,k\}\) is a k-colouring of a graph G if \(c(u)\ne c(v)\) whenever u and v are adjacent. If any two colour classes induce a disjoint union of vertices and edges, then c is called injective. Injective colourings are also known as L(1, 1)-labellings and distance 2-colourings. The corresponding decision problem is denoted Injective Colouring. A graph is H-free if it does not contain H as an induced subgraph. We prove a dichotomy for Injective Colouring for graphs with bounded independence number. Then, by combining known results with further new results, we determine the complexity of Injective Colouring on H-free graphs for every H except for one missing case.
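As a small illustration of the definition (ours, not from the paper), the following sketch checks whether a given colouring is injective by verifying, equivalently, that the colouring is proper and that no two vertices with a common neighbour receive the same colour; the example graph is an assumption chosen for demonstration.

```python
def is_injective_colouring(adj, colour):
    """adj: dict mapping each vertex to a set of neighbours; colour: dict vertex -> colour.
    A proper colouring is injective iff no two vertices sharing a neighbour get the same colour,
    i.e. any two colour classes induce a disjoint union of vertices and edges."""
    for u in adj:
        for v in adj[u]:
            if colour[u] == colour[v]:          # must also be a proper colouring
                return False
    for w in adj:                               # vertices with a common neighbour w
        nbrs = sorted(adj[w])
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                if colour[nbrs[i]] == colour[nbrs[j]]:
                    return False
    return True

# The 4-cycle needs 4 colours injectively, although 2 suffice for a proper colouring.
c4 = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
print(is_injective_colouring(c4, {1: 1, 2: 2, 3: 1, 4: 2}))  # False: 1 and 3 share neighbour 2
print(is_injective_colouring(c4, {1: 1, 2: 2, 3: 3, 4: 4}))  # True
```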
Jan Bok, Nikola Jedličková, Barnaby Martin, Daniël Paulusma, Siani Smith
Variants of the Determinant Polynomial and the \(\textsf {VP}\)-Completeness
Abstract
The determinant is a canonical \(\textsf {VBP}\)-complete polynomial in the algebraic complexity setting. In this work, we introduce two variants of the determinant polynomial which we call \(\mathtt{StackDet}_n(X)\) and \(\mathtt{CountDet}_n(X)\) and show that they are \(\textsf {VP}\) and \(\textsf {VNP}\) complete respectively under p-projections. The definitions of the polynomials are inspired by a combinatorial characterisation of the determinant developed by Mahajan and Vinay (SODA 1997). We extend the combinatorial object in their work, namely clow sequences, by introducing additional edge labels on the edges of the underlying graph. The idea of using edge labels is inspired by the work of Mengel (MFCS 2013).
Prasad Chaugule, Nutan Limaye, Shourya Pandey
Dynamic Complexity of Expansion
Abstract
Dynamic Complexity was introduced by Immerman and Patnaik [25] (see also [14]). It has seen a resurgence of interest in the recent past; see [2, 4, 8–12, 24, 26, 30, 31] for some representative examples. Use of linear algebra has been a notable feature of some of these papers. We extend this theme to show that the gap version of spectral expansion in bounded degree graphs can be maintained in the class \({\mathsf{DynAC}^0}\) (also known as \(\mathsf {DynFO}\), for domain independent queries) under batch changes (insertions and deletions) of \(O(\frac{\log {n}}{\log {\log {n}}})\) many edges.
The spectral graph theoretic material of this work is based on the paper by Kale-Seshadhri [23]. Our primary technical contribution is to maintain up to logarithmic powers of the transition matrix of a bounded degree undirected graph in \({\mathsf{DynAC}^0}\).
Samir Datta, Anuj Tawari, Yadu Vasudev
Real \(\tau \)-Conjecture for Sum-of-Squares: A Unified Approach to Lower Bound and Derandomization
Abstract
Koiran’s real \(\tau \)-conjecture asserts that if a non-zero real univariate polynomial f can be written as \(\sum _{i=1}^{k}\prod _{j=1}^m\,f_{ij}\), where each \(f_{ij}\) contains at most t monomials, then the number of distinct real roots of f is polynomially bounded in k, m and t. Assuming the conjecture with parameter \(m=\omega (1)\), one can show that \({\mathsf {VP}}\ne {\mathsf {VNP}}\) (i.e. the symbolic permanent requires superpolynomial-size circuits). In this paper, we propose a \(\tau \)-conjecture for the sum-of-squares (SOS) model (equivalently, \(m=2\)).
For a univariate polynomial f, we study the sum-of-squares (SOS) representation, i.e. \(f = \sum _{i\in [s]} c_i f_i^2\), where the \(c_i\) are field elements and the \(f_i\)’s are univariate polynomials. The size of the representation is the number of monomials that appear across the \(f_i\)’s. Its minimum is the support-sum S(f) of f. We conjecture that any real univariate f can have at most O(S(f))-many real roots. A random polynomial satisfies this property. We connect this conjecture with two central open questions in algebraic complexity: matrix rigidity and \({\mathsf {VP}}\) vs. \({\mathsf {VNP}}\).
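For a tiny illustrative example (ours, not from the paper): \(x = \frac{1}{4}(x+1)^2 - \frac{1}{4}(x-1)^2\) is an SOS representation with \(s=2\) in which four monomials appear across the \(f_i\)’s, so \(S(x)\le 4\), while the polynomial x has a single real root, consistent with the conjectured O(S(f)) bound.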
The concept of matrix rigidity was introduced by Valiant (MFCS 1977) and independently by Grigoriev (1976) in the context of computing linear transformations. A matrix is rigid if it is far (in terms of Hamming distance) from any low-rank matrix. We know that rigid matrices exist, yet their explicit construction is still a major open question. Here, we show that the SOS-\(\tau \)-conjecture implies an explicit construction of such matrices. Moreover, the conjecture also implies Valiant’s famous hypothesis (Valiant, STOC 1979) that \({\mathsf {VNP}}\) is exponentially harder than \({\mathsf {VP}}\). Thus, this new conjecture implies answers to both of these fundamental problems posed by Valiant.
Furthermore, strengthening the conjecture to sum-of-cubes (SOC) implies that blackbox-PIT (Polynomial Identity Testing) is in \({\mathsf {P}}\). This is the first time a \(\tau \)-conjecture has been shown to give a polynomial-time PIT. We also establish some special cases of this conjecture, and prove tight lower bounds for restricted depth-2 models.
Pranjal Dutta
Dichotomy Result on 3-Regular Bipartite Non-negative Functions
Abstract
We prove a complexity dichotomy theorem for a class of Holant problems on 3-regular bipartite graphs. Given an arbitrary nonnegative weighted symmetric constraint function \(f = [x_0, x_1, x_2, x_3]\), we prove that the bipartite Holant problem \({\text {Holant}}\left( f\mid (=_3)\right) \) is either computable in polynomial time or #P-hard. The dichotomy criterion on f is explicit.
Austen Z. Fan, Jin-Yi Cai
Upper Bounds on Communication in Terms of Approximate Rank
Abstract
We show that any Boolean function with approximate rank r can be computed by bounded error quantum protocols without prior entanglement of complexity \(O( \sqrt{r} \log r)\). In addition, we show that any Boolean function with approximate rank r and discrepancy \(\delta \) can be computed by deterministic protocols of complexity O(r), and private coin bounded error randomized protocols of complexity \(O((\frac{1}{\delta })^2 + \log r)\). Our deterministic upper bound in terms of approximate rank is tight up to constant factors, and the dependence on discrepancy in our randomized upper bound is tight up to taking square-roots. Our results can be used to obtain lower bounds on approximate rank. We also obtain a strengthening of Newman’s theorem with respect to approximate rank.
Anna Gál, Ridwan Syed
Approximation Schemes for Multiperiod Binary Knapsack Problems
Abstract
An instance of the multiperiod binary knapsack problem (MPBKP) is given by a horizon length T, a non-decreasing vector of knapsack sizes \((c_1, \ldots , c_T)\) where \(c_t\) denotes the cumulative size for periods \(1,\ldots ,t\), and a list of n items. Each item is a triple \((r, q, d)\) where r denotes the reward or value of the item, q its size, and d its time index (or deadline). The goal is to choose, for each deadline t, which items to include to maximize the total reward, subject to the constraints that for all \(t=1,\ldots ,T\), the total size of selected items with deadlines at most t does not exceed the cumulative capacity of the knapsack up to time t. We also consider the multiperiod binary knapsack problem with soft capacity constraints (MPBKP-S), where the capacity constraints are allowed to be violated by paying a penalty that is linear in the violation. The goal of MPBKP-S is to maximize the total profit, which is the total reward of the selected items less the total penalty. Finally, we consider the multiperiod binary knapsack problem with soft stochastic capacity constraints (MPBKP-SS), where the non-decreasing vector of knapsack sizes \((c_1, \ldots , c_T)\) follows an arbitrary joint distribution, and the set of possible sample paths (realizations) of knapsack sizes, together with the probability of each sample path, is given to the algorithm.
For MPBKP, we exhibit a fully polynomial-time approximation scheme with runtime \(\tilde{\mathcal {O}}\left( \min \left\{ n+\frac{T^{3.25}}{\epsilon ^{2.25}},n+\frac{T^{2}}{\epsilon ^{3}},\frac{nT}{\epsilon ^2},\frac{n^2}{\epsilon }\right\} \right) \) that achieves a \((1+\epsilon )\) approximation; for MPBKP-S, the \((1+\epsilon )\) approximation can be achieved in \(\mathcal {O}\left( \frac{n\log n}{\epsilon }\cdot \min \left\{ \frac{T}{\epsilon },n\right\} \right) \). To the best of our knowledge, our algorithms are the first FPTAS for any multiperiod version of the Knapsack problem since its study began in the 1980s. For MPBKP-SS, we prove that a natural greedy algorithm is a 2-approximation when all items have the same size. Our algorithms also provide insights on how other multiperiod versions of the knapsack problem may be approximated.
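For intuition about the constraint structure (a minimal exact pseudo-polynomial baseline for tiny instances, our own sketch rather than the paper's FPTAS), one can process items in deadline order and prune partial solutions that exceed the cumulative capacity:

```python
from collections import defaultdict

def mpbkp_exact(items, capacities):
    """Exact pseudo-polynomial DP for tiny MPBKP instances (illustration only).
    items: list of (reward, size, deadline) triples with deadlines in 1..T.
    capacities: [c_1, ..., c_T], the non-decreasing cumulative capacities.
    Returns the maximum total reward of a feasible selection."""
    T = len(capacities)
    by_deadline = defaultdict(list)
    for r, q, d in items:
        by_deadline[d].append((r, q))
    dp = {0: 0}  # total size used so far -> best reward
    for t in range(1, T + 1):
        for r, q in by_deadline[t]:
            new_dp = dict(dp)
            for size, reward in dp.items():
                s2 = size + q
                if reward + r > new_dp.get(s2, float("-inf")):
                    new_dp[s2] = reward + r
            dp = new_dp
        # enforce the cumulative capacity constraint at time t
        dp = {s: v for s, v in dp.items() if s <= capacities[t - 1]}
    return max(dp.values())

items = [(3, 2, 1), (5, 3, 2), (4, 2, 2)]   # (reward, size, deadline)
print(mpbkp_exact(items, [2, 5]))            # 9: take items 2 and 3 (nothing is due by time 1)
```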
Zuguang Gao, John R. Birge, Varun Gupta
Limitations of Sums of Bounded Read Formulas and ABPs
Abstract
Proving super-polynomial size lower bounds for various classes of arithmetic circuits computing explicit polynomials is a very important and challenging task in algebraic complexity theory. We study representation of polynomials as sums of weaker models such as read once formulas (ROFs) and read once oblivious algebraic branching programs (ROABPs). We prove:
(1) An exponential separation between sum of ROFs and read-k formulas, for some constant k.
(2) A sub-exponential separation between sum of ROABPs and syntactic multilinear ABPs.
Our results are based on an analysis of the partial derivative matrix under different distributions. These results highlight the richness of bounded-read restrictions in arithmetic formulas and ABPs.
Finally, we consider a generalization of multilinear ROABPs known as strict-interval ABPs, defined in [Ramya-Rao, MFCS 2019]. We show that strict-interval ABPs are equivalent to ROABPs up to a polynomial blow-up in size. In contrast, we show that interval formulas are different from ROFs and also admit a depth reduction, which is not known in the case of strict-interval ABPs.
Purnata Ghosal, B. V. Raghavendra Rao
On the Computational Complexity of Reaction Systems, Revisited
Abstract
We study the computational complexity of some important problems on reaction systems (RSs), a biologically motivated model introduced by Ehrenfeucht and Rozenberg in [7], that have been overlooked in the literature. To this end we focus on the complexity of (i) equivalence and multi-step simulation properties, (ii) special structural and behavioural RS properties such as isotonicity, antitonicity, etc., and minimality with respect to reactant and/or inhibitor sets, and (iii) threshold properties. The complexities vary from deterministic polynomial-time solvability to coNP- and PSPACE-completeness. Finally, as a side result on the complexity of threshold problems, we improve the previously known threshold values for the no-concurrency, the comparability, and the redundancy property studied in [2].
Markus Holzer, Christian Rauch
Average-Case Rigidity Lower Bounds
Abstract
It is shown that there exists \(f : \{0,1\}^{n/2} \times \{0,1\}^{n/2} \rightarrow \{0,1\}\) in E\(^\mathbf {NP}\) such that for every \(2^{n/2} \times 2^{n/2}\) matrix M of rank \(\le \rho \) we have \(\mathbb {P}_{x,y}[f(x,y)\ne M_{x,y}] \ge 1/2-2^{-\varOmega (k)}\), whenever \(\log \rho \le \delta n/k(\log n + k)\) for a sufficiently small \(\delta > 0\) and n large enough. This generalizes recent results which bound the probability from below by \(1/2-\varOmega (1)\) or apply to constant-depth circuits.
Xuangui Huang, Emanuele Viola
Analysis of an Efficient Reduction Algorithm for Random Regular Expressions Based on Universality Detection
Abstract
In this article we study a very simple linear reduction algorithm that is specific to regular expressions. It aims to detect, in a bottom-up fashion, universal subtrees in regular expression trees and to replace them by the smallest expression equivalent to \(\varSigma ^*\). Of course, this does not detect every universal subtree, as the universality problem is PSPACE-complete. However, we prove that for the uniform random tree model, this simple algorithm detects a large proportion of universal trees. Furthermore, we prove that on average this algorithm reduces uniform random regular expressions to a constant size that is very small and can be computed efficiently. For example, for two letters the limit expected size is \(\sim \)77.8. Our theoretical constants are backed up by the experimental evidence. This confirms the phenomena reported in [13], and further discards the uniform distribution on regular expressions as a useful model.
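The abstract does not spell out the reduction rules, so the following is only our own sound (and deliberately incomplete) bottom-up detector in the same spirit; the tuple-based AST encoding and the particular rules are illustrative assumptions.

```python
def analyse(node, alphabet):
    """Bottom-up pass returning (nullable, letters, universal) for a regex AST node,
    where `letters` under-approximates the single letters in the language and
    `universal` is a sound (but incomplete) certificate that the language is Sigma*."""
    kind = node[0]
    if kind == "char":
        return False, {node[1]}, False
    if kind == "eps":
        return True, set(), False
    if kind == "union":
        n1, l1, u1 = analyse(node[1], alphabet)
        n2, l2, u2 = analyse(node[2], alphabet)
        return n1 or n2, l1 | l2, u1 or u2
    if kind == "concat":
        n1, l1, u1 = analyse(node[1], alphabet)
        n2, l2, u2 = analyse(node[2], alphabet)
        letters = (l1 if n2 else set()) | (l2 if n1 else set())
        return n1 and n2, letters, (u1 and n2) or (u2 and n1)
    if kind == "star":
        n, l, u = analyse(node[1], alphabet)
        return True, l, u or l >= alphabet   # (e)* is universal if L(e) contains every letter
    raise ValueError(kind)

def reduce_universal(node, alphabet):
    """Replace every detected universal subtree by the smallest expression for Sigma*."""
    if analyse(node, alphabet)[2]:
        sigma = ("char", min(alphabet))
        for a in sorted(alphabet)[1:]:
            sigma = ("union", sigma, ("char", a))
        return ("star", sigma)
    if node[0] in ("union", "concat"):
        return (node[0], reduce_universal(node[1], alphabet),
                reduce_universal(node[2], alphabet))
    if node[0] == "star":
        return ("star", reduce_universal(node[1], alphabet))
    return node

# ((a|b)* . a) | (a|b)*  reduces to (a|b)*
e = ("union",
     ("concat", ("star", ("union", ("char", "a"), ("char", "b"))), ("char", "a")),
     ("star", ("union", ("char", "a"), ("char", "b"))))
print(reduce_universal(e, {"a", "b"}))
```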
Florent Koechlin, Pablo Rotondo
Bit-Complexity of Solving Systems of Linear Evolutionary Partial Differential Equations
Abstract
Finite Elements are a common method for solving differential equations via discretization. Under suitable hypotheses, the solution \(\mathbf {u}=\mathbf {u}(t,\vec x)\) of a well-posed initial/boundary-value problem for a linear evolutionary system of PDEs is approximated up to absolute error \(1/2^n\) by repeatedly (exponentially often in n) multiplying a matrix \(\mathbf {A}_n\) with the vector from the previous time step, starting with the initial condition \(\mathbf {u}(0)\), approximated by the spatial grid vector \(\mathbf {u}(0)_n\). The dimension of the matrix \(\mathbf {A}_n\) is exponential in n, the number of bits of the output.
We investigate the bit-cost of computing exponential powers and inner products \(\mathbf {A}_n^K\cdot \mathbf {u}(0)_n\), \(K\sim 2^{\mathcal {O}(n)}\), of matrices and vectors of exponential dimension for various classes of such difference schemes \(\mathbf {A}_n\). Non-uniformly fixing any polynomial-time computable initial condition and focusing on single but arbitrary entries (instead of the entire vector/matrix) makes it possible to improve on the naïve exponential sequential runtime EXP: closer inspection shows that, given any time \(0\le t\le 1\) and space \(\vec x\in [0;1]^d\), the computational cost of evaluating the solution \(\mathbf {u}(t,\vec x)\) corresponds to the discrete class PSPACE.
Many partial differential equations, including the Heat Equation, admit difference schemes that are (tensor products of constantly many) circulant matrices of constant bandwidth; for these we show that exponential matrix powering, and the PDE solution, are computable in #P. This is achieved by calculating individual coefficients of the powers of the matrix’s multivariate companion polynomial using Cauchy’s Differentiation Theorem, and this is shown to be optimal for the Heat Equation. Exponentially powering two-band circulant matrices is shown to be feasible even in P; and under additional conditions, the solution to certain linear PDEs also becomes computable in P.
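To illustrate the companion-polynomial idea on a toy scale (our own sketch; the paper handles multivariate polynomials, Cauchy's Differentiation Theorem, and exponentially large powers K), note that the K-th power of a circulant matrix is again circulant, with first row given by the K-th power of its associated polynomial modulo \(x^N-1\); the heat-equation stencil and step parameter below are assumptions chosen for demonstration.

```python
def cyclic_mult(a, b):
    """Multiply two polynomials given by coefficient lists modulo x^N - 1 (N = len(a))."""
    n = len(a)
    c = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[(i + j) % n] += ai * bj
    return c

def circulant_power_first_row(first_row, K):
    """First row of C^K, where C is the circulant matrix with the given first row.
    Uses fast exponentiation of the companion polynomial modulo x^N - 1, so entries of C^K
    are obtained without ever forming the N x N matrix."""
    n = len(first_row)
    result = [1] + [0] * (n - 1)      # the constant polynomial 1, i.e. the identity matrix
    base = list(first_row)
    while K:
        if K & 1:
            result = cyclic_mult(result, base)
        base = cyclic_mult(base, base)
        K >>= 1
    return result

# Explicit heat-equation scheme on a cycle of 5 grid points:
# u_i <- (1 - 2r) u_i + r u_{i-1} + r u_{i+1}, here with r = 0.25.
r = 0.25
row = [1 - 2 * r, r, 0, 0, r]          # circulant first row: neighbours at offsets +1 and -1
print(circulant_power_first_row(row, 100))  # first row of A^100, i.e. 100 time steps at once
```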
Ivan Koswara, Gleb Pogudin, Svetlana Selivanova, Martin Ziegler
A Secure Three-Input AND Protocol with a Standard Deck of Minimal Cards
Abstract
Card-based protocols are used to perform cryptographic tasks such as secure multiparty computation using a deck of physical cards. While most of the existing protocols use a two-colored deck consisting of red cards and black cards, Niemi and Renvall in 1999 constructed protocols for securely computing two-input Boolean functions (such as secure logical AND and XOR computations) using a commonly available standard deck of playing cards. Since this initial investigation, two-input protocols with fewer cards and/or shuffles have been designed, and by combining them, one can perform a secure computation of any Boolean circuit. In this paper, we directly construct a simple card-based protocol for the three-input AND computation. Our three-input AND protocol requires fewer cards and shuffles compared to that required when applying any existing two-input AND protocol twice to perform the three-input AND computation. Our protocol is unique in the sense that it is card minimal if we use two cards to encode a single bit.
Hiroto Koyama, Daiki Miyahara, Takaaki Mizuki, Hideaki Sone
Upper Bound for Torus Polynomials
Abstract
We prove that all functions that have low-degree torus polynomials approximating them with small error also have \(\mathsf {MidBit}^+\) circuits computing them. This serves as a partial converse to the result of Bhrushundi, Hosseini, Lovett and Rao (ITCS 2019) that all \(\mathsf {ACC}\) functions have low-degree torus polynomials approximating them with small error.
Vaibhav Krishan
A PCP of Proximity for Real Algebraic Polynomials
Abstract
Let \(f:F_q^k \mapsto {\mathbb R}\) be a function given via its table of values, where \(F_q:=\{0,1,\ldots ,q-1\} \subset {\mathbb R}\) and \(k,q \in {\mathbb N}\). We design a randomised verification procedure in the BSS model of computation that verifies whether f is close to an algebraic polynomial of maximal degree \(d \in {\mathbb N}\) in each of its variables. If f is such an algebraic polynomial, there exists a proof certificate that the verifier accepts with certainty. If f has distance at least \(\epsilon > 0\) to the set of max-degree algebraic polynomials on \(F_q^k\), the verifier rejects any proof with probability at least \(\frac{1}{2}\) for large enough q. The verification procedure establishes a real-number PCP of proximity, i.e., it has access to both the values of f and the additional proof certificate via oracle calls. It uses \(O(k\log {q})\) random bits and reads O(1) many components of both f and the additional proof string, which is of length \(O((kq)^{O(k)})\). The paper is a contribution to the still little-developed area of designing PCPs of proximity in real-number complexity theory.
Klaus Meer
Predictions and Algorithmic Statistics for Infinite Sequences
Abstract
We combine Solomonoff’s approach to universal prediction with algorithmic statistics and suggest using, for prediction, the computable measure that provides the best “explanation” for the observed data (in the sense of algorithmic statistics). In this way we keep the expected sum of squares of prediction errors bounded (as it is for Solomonoff’s predictor) and, moreover, guarantee that the sum of squares of prediction errors is bounded along any Martin-Löf random sequence.
Alexey Milovanov
Lower Bounds and Hardness Magnification for Sublinear-Time Shrinking Cellular Automata
Abstract
The minimum circuit size problem (MCSP) is a string compression problem with a parameter s in which, given the truth table of a Boolean function over inputs of length n, one must answer whether it can be computed by a Boolean circuit of size at most \(s(n) \ge n\). Recently, McKay, Murray, and Williams (STOC, 2019) proved a hardness magnification result for MCSP involving (one-pass) streaming algorithms: For any reasonable s, if there is no \({\mathsf {poly}}(s(n))\)-space streaming algorithm with \({\mathsf {poly}}(s(n))\) update time for \({\mathsf {MCSP}}[s]\), then \(\mathsf {P}\ne \mathsf {NP}\). We prove an analogous result for the (provably) strictly less capable model of shrinking cellular automata (SCAs), which are cellular automata whose cells can spontaneously delete themselves. We show every language accepted by an SCA can also be accepted by a streaming algorithm of similar complexity, and we identify two different aspects in which SCAs are more restricted than streaming algorithms. We also show there is a language which cannot be accepted by any SCA in \(o(n / \log n)\) time, even though it admits an \(O(\log n)\)-space streaming algorithm with \(O(\log n)\) update time.
Augusto Modanese
Approximation Algorithms for Connectivity Augmentation Problems
Abstract
In Connectivity Augmentation problems we are given a graph \(H=(V,E_H)\) and an edge set E on V, and seek a min-size edge set \(J \subseteq E\) such that \(H \cup J\) has larger edge/node connectivity than H. In the Edge-Connectivity Augmentation problem we need to increase the edge-connectivity by 1. In the Block-Tree Augmentation problem H is connected and \(H \cup J\) should be 2-connected. In Leaf-to-Leaf Connectivity Augmentation problems every edge in E connects minimal deficient sets. For this version we give a simple combinatorial approximation algorithm with ratio 5/3, improving the 1.91 approximation of [6] (see also [23]), which applies to the general case. We also show by a simple proof that if the Steiner Tree problem admits approximation ratio \(\alpha \) then the general version admits approximation ratio \(1+\ln (4-x)+\epsilon \), where x is the solution to the equation \(1+\ln (4-x)=\alpha +(\alpha -1)x\). For the currently best value of \(\alpha =\ln 4+\epsilon \) [7] this gives ratio 1.942. This is slightly worse than the ratio 1.91 of [6], but has the advantage of using Steiner Tree approximation as a “black box”. In the Element Connectivity Augmentation problem we are given a graph \(G=(V,E)\), \(S \subseteq V\), and connectivity requirements \(r=\{r(u,v):u,v \in S\}\). The goal is to find a min-size set J of new edges on S such that for all \(u,v \in S\) the graph \(G \cup J\) contains \(r(u,v)\) uv-paths no two of which have an edge or a node in \(V \setminus S\) in common. The problem is NP-hard even when \(\displaystyle r_{\max } = \max _{u,v \in S} r(u,v)=2\). We obtain ratio 3/2, improving the previous ratio 7/4 of [22]. For the case of degree bounds on S we obtain the same ratio with just \(+1\) degree violation, which is tight, since deciding whether there exists a feasible solution is NP-hard even when \(r_{\max }=2\).
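As a quick numerical sanity check of the quoted constant (our own illustration, not part of the paper), one can solve the stated equation by bisection for \(\alpha =\ln 4\):

```python
import math

def ratio_from_steiner(alpha):
    """Solve 1 + ln(4 - x) = alpha + (alpha - 1) * x for x by bisection
    and return the resulting augmentation ratio 1 + ln(4 - x)."""
    def gap(x):
        return 1 + math.log(4 - x) - (alpha + (alpha - 1) * x)
    lo, hi = 0.0, 3.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if gap(mid) > 0:
            lo = mid
        else:
            hi = mid
    x = (lo + hi) / 2
    return 1 + math.log(4 - x)

print(ratio_from_steiner(math.log(4)))  # ~1.9413, consistent with the 1.942 quoted above
```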
Zeev Nutov
On Rooted k-Connectivity Problems in Quasi-bipartite Digraphs
Abstract
We consider the directed Rooted Subset k-Edge-Connectivity problem: given a digraph \(G=(V,E)\) with edge costs, a set \(T \subset V\) of terminals, a root node r, and an integer k, find a min-cost subgraph of G that contains k edge-disjoint rt-paths for all \(t \in T\). The case when every edge of positive cost has its head in T admits a polynomial-time algorithm due to Frank [9], and the case when all positive-cost edges are incident to r is equivalent to the k-Multicover problem. Recently, Chan et al. [2] obtained ratio \(O(\ln k \ln |T|)\) for quasi-bipartite instances, when every edge in G has an end (tail and/or head) in \(T+r\). We give a simple proof of the same ratio for the more general problem of covering an arbitrary T-intersecting supermodular set function by a minimum-cost edge set, and for the case when only every positive-cost edge has an end in \(T+r\).
Zeev Nutov
Input-Driven Pushdown Automata on Well-Nested Infinite Strings
Abstract
Automata operating on strings of nested brackets, known as input-driven pushdown automata, and also as visibly pushdown automata, have been studied since the 1980s. They were extended to the case of infinite strings by Alur and Madhusudan (“Visibly pushdown languages”, STOC 2004). This paper investigates the properties of these automata under the assumption that a given infinite string is always well-nested. This restriction enables a complete characterization of the corresponding \(\omega \)-languages in terms of classical \(\omega \)-regular languages and input-driven pushdown automata on finite strings. This characterization leads to a determinization construction for these automata, as well as to the first results on their topological classification.
Alexander Okhotin, Victor L. Selivanov
Large Clique is Hard on Average for Resolution
Abstract
We prove an \(\exp ({\varOmega (k^{(1-\epsilon )})})\) resolution size lower bound for the k-Clique problem on random graphs, for (roughly speaking) \(k<n^{1/3}\). Towards an optimal resolution lower bound for the problem (i.e. of type \(n^{\varOmega (k)}\)), we also extend the \(n^{\varOmega (k)}\) bound in [2] on regular resolution to a stronger model called a-irregular resolution, for \(k<n^{1/3}\). This model is interesting in that all known CNF families separating regular resolution from general resolution [1, 24] have short proofs in it.
Shuo Pang
On Closed-Rich Words
Abstract
A word is called closed if it has a prefix which is also its suffix and there are no internal occurrences of this prefix in the word. In this paper we study the maximal number of closed factors in a word of length n. We show that it is quadratic and give lower and upper bounds on the constant.
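A brute-force illustration of the definition (our own sketch, following the usual convention that the empty word and single letters are closed; we count distinct closed factors here):

```python
def occurrences(w, v):
    """Number of occurrences of v as a factor of w (the empty word occurs len(w)+1 times)."""
    if v == "":
        return len(w) + 1
    return sum(1 for i in range(len(w) - len(v) + 1) if w[i:i + len(v)] == v)

def is_closed(w):
    """w is closed if it is empty or has a proper prefix that is also a suffix
    and occurs exactly twice in w (i.e. a border with no internal occurrence)."""
    if w == "":
        return True
    for length in range(len(w)):
        v = w[:length]
        if w.endswith(v) and occurrences(w, v) == 2:
            return True
    return False

def closed_factors(w):
    """The set of distinct non-empty closed factors of w."""
    return {w[i:j] for i in range(len(w)) for j in range(i + 1, len(w) + 1) if is_closed(w[i:j])}

print(sorted(closed_factors("aabb")))  # ['a', 'aa', 'b', 'bb']
```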
Olga Parshina, Svetlana Puzynina
Shelah-Stupp’s and Muchnik’s Iterations Revisited
Abstract
Iteration is a model-theoretic construction that replicates a given structure in an infinite, tree-like way. There are two variants of iteration: basic iteration (a.k.a. Shelah-Stupp’s iteration), and Muchnik’s iteration. The latter has an additional unary predicate (not present in basic iteration), which makes the structure richer. These two variants lead to two hierarchies of relational structures, generated from finite structures using MSO-interpretations and either basic iteration or Muchnik’s iteration. Caucal and Knapik (2018) have shown that the two hierarchies coincide at level 1, and that every level of the latter hierarchy is closed under basic iteration (which in particular implies that the former hierarchy collapses at level 1). We prove the same results using a different, significantly simpler method.
Paweł Parys
On Separation Between the Degree of a Boolean Function and the Block Sensitivity
Abstract
In this paper we study the separation between two complexity measures: the degree of a Boolean function as a polynomial over the reals and its block sensitivity. We show that the upper bound on the largest possible separation between these two measures can be improved from \( d^2(f) \ge bs(f) \), established by Tal [19], to \( d^2(f) \ge (\sqrt{10} - 2)bs(f) \). As a corollary, we show that similar upper bounds between some other complexity measures are not tight either; for instance, we can improve the recent sensitivity conjecture result by Huang [10], \(s^4(f) \ge bs(f) \), to \(s^4(f) \ge (\sqrt{10} - 2)bs(f)\). Our techniques are based on the paper by Nisan and Szegedy [14] and include a more detailed analysis of a symmetrization polynomial.
In our next result we show the same type of improvement for the separation between the approximate degree of a Boolean function and the block sensitivity: we show that \(\deg _{1/3}^2(f) \ge \sqrt{6/101} bs(f)\), improving the previous result \( \deg _{1/3}(f) \ge \sqrt{bs(f)/6} \) by Nisan and Szegedy [14]. In addition, we construct an example showing that the gap between the constants in the lower bound and in the known upper bound is less than 0.2.
In our last result we study the properties of a conjectured fully sensitive function on 10 variables of degree 4, whose existence would improve the largest known gap between these two measures. Using a combination of interpolation and linear programming techniques, we prove that there is only one univariate polynomial that can be obtained by symmetrization of such a function.
Nikolay V. Proskurin
Approximation and Complexity of the Capacitated Geometric Median Problem
Abstract
In the Capacitated Geometric Median problem, we are given n points in d-dimensional real space and an integer m; the goal is to locate a new point in space (the center) and choose m of the input points so as to minimize the sum of Euclidean distances from the center to the chosen points. We show that this problem admits an “almost exact” polynomial-time algorithm in the case of fixed d and a PTAS in high dimensions. On the other hand, we prove that, if the dimension of the space is not fixed, Capacitated Geometric Median is strongly NP-hard and does not admit an FPTAS unless P \(=\) NP.
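For intuition only (our own toy baseline, not one of the paper's algorithms), restricting the center to be an input point and taking its m nearest input points already gives a reasonable answer on small instances; a standard triangle-inequality argument suggests this loses at most a factor of 2, whereas the paper's algorithms are far stronger.

```python
import math

def capacitated_median_baseline(points, m):
    """Try every input point as the center and select its m nearest input points.
    Returns (best cost, best center). Placing the center freely can do strictly better."""
    best = (math.inf, None)
    for c in points:
        dists = sorted(math.dist(c, p) for p in points)
        cost = sum(dists[:m])
        best = min(best, (cost, c))
    return best

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (10.0, 10.0)]
print(capacitated_median_baseline(pts, 3))  # (2.0, (0.0, 0.0))
```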
Vladimir Shenmaier
A Generic Convolution Algorithm for Join Operations on Tree Decompositions
Abstract
The fast subset convolution algorithm by Björklund et al. (STOC 2007) has made quite an impact on parameterised complexity. Amongst the many applications are dynamic programming algorithms on tree decompositions, where the computations in so-called join nodes are a recurring example in which convolution-like operations are used. As such, several generalisations of the original fast subset convolution algorithm have been proposed, all based on concepts that strongly relate to either Möbius transforms or to Fourier transforms.
We present a new convolution generalisation that uses both Möbius transforms and Fourier transforms on the same transformation domain. This results in new faster algorithms on tree decompositions for a broad class of vertex subset problems known as the \([\sigma ,\rho ]\)-domination problems. We solve them in \(\mathcal {O}(s^{t+2} t n^2 (t\log (s)+\log (n)))\) arithmetic operations, where t is the treewidth, s is the (fixed) number of states required to represent partial solutions of the specific problem, and n is the number of vertices in the graph. This improves the previous best bound of \(\mathcal {O}( s^{t+2} (st)^{2(s-2)} n^3 )\) arithmetic operations (van Rooij, Bodlaender, Rossmanith, ESA 2009). Specifically, this removes the dependence of the degree of the polynomial on s.
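For background, here is a sketch of the classical fast subset convolution of Björklund et al. cited above (not of this paper's new generalisation): subsets are encoded as bitmasks, and the ranked zeta/Möbius transforms yield \(\mathcal {O}(2^n n^2)\) ring operations.

```python
def zeta(a, n):
    """Subset-sum (zeta) transform: a[S] <- sum of a[T] over all subsets T of S."""
    a = a[:]
    for i in range(n):
        for S in range(1 << n):
            if S >> i & 1:
                a[S] += a[S ^ (1 << i)]
    return a

def moebius(a, n):
    """Inverse of the zeta transform."""
    a = a[:]
    for i in range(n):
        for S in range(1 << n):
            if S >> i & 1:
                a[S] -= a[S ^ (1 << i)]
    return a

def subset_convolution(f, g, n):
    """h[S] = sum over T subset of S of f[T] * g[S \\ T], in O(2^n * n^2) ring operations."""
    popcount = [bin(S).count("1") for S in range(1 << n)]
    fr = [[f[S] if popcount[S] == i else 0 for S in range(1 << n)] for i in range(n + 1)]
    gr = [[g[S] if popcount[S] == i else 0 for S in range(1 << n)] for i in range(n + 1)]
    fz = [zeta(x, n) for x in fr]
    gz = [zeta(x, n) for x in gr]
    h = [0] * (1 << n)
    for k in range(n + 1):
        hk = [sum(fz[i][S] * gz[k - i][S] for i in range(k + 1)) for S in range(1 << n)]
        hk = moebius(hk, n)
        for S in range(1 << n):
            if popcount[S] == k:
                h[S] = hk[S]
    return h

f = [1, 2, 3, 4]                       # functions on subsets of a 2-element ground set
g = [5, 6, 7, 8]
print(subset_convolution(f, g, 2))     # [5, 16, 22, 60]
```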
Johan M. M. van Rooij
A Generalization of a Theorem of Rothschild and van Lint
Abstract
A classical result of Rothschild and van Lint asserts that if every non-zero Fourier coefficient of a Boolean function f over \(\mathbb {F}_2^{n}\) has the same absolute value, namely \(|\hat{f}(\alpha )|=1/2^k\) for every \(\alpha \) in the Fourier support of f, then f must be the indicator function of some affine subspace of dimension \(n-k\). In this paper we slightly generalize their result. Our main result shows that, roughly speaking, Boolean functions whose Fourier coefficients take values in the set \(\{-2/2^k, -1/2^k, 0, 1/2^k, 2/2^k\}\) are indicator functions of two disjoint affine subspaces of dimension \(n-k\) or four disjoint affine subspaces of dimension \(n-k-1\). Our main technical tools are results from additive combinatorics which offer tight bounds on the affine span size of a subset of \(\mathbb {F}_2^{n}\) when the doubling constant of the subset is small.
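As a small brute-force illustration of the hypothesis in the classical theorem (our own example, not from the paper): the indicator of an affine subspace of codimension k has exactly \(2^k\) non-zero Fourier coefficients, each of absolute value \(1/2^k\).

```python
from itertools import product

def fourier(f, n):
    """Fourier coefficients over F_2^n: fhat(alpha) = 2^{-n} * sum_x f(x) * (-1)^{<alpha, x>}."""
    coeffs = {}
    for alpha in product((0, 1), repeat=n):
        total = sum(f(x) * (-1) ** sum(a * b for a, b in zip(alpha, x))
                    for x in product((0, 1), repeat=n))
        coeffs[alpha] = total / 2 ** n
    return coeffs

# Indicator of the affine subspace {x in F_2^4 : x0 + x1 = 1 and x2 = 0}, codimension k = 2.
def f(x):
    return 1 if (x[0] ^ x[1]) == 1 and x[2] == 0 else 0

nonzero = {a: c for a, c in fourier(f, 4).items() if abs(c) > 1e-12}
print(nonzero)   # 2^k = 4 coefficients, each of absolute value 1/2^k = 0.25
```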
Ning Xie, Shuai Xu, Yekun Xu
Backmatter
Metadata
Title
Computer Science – Theory and Applications
Edited by
Rahul Santhanam
Dr. Daniil Musatov
Copyright Year
2021
Electronic ISBN
978-3-030-79416-3
Print ISBN
978-3-030-79415-6
DOI
https://doi.org/10.1007/978-3-030-79416-3