
2021 | Book

Connecting with Computability

17th Conference on Computability in Europe, CiE 2021, Virtual Event, Ghent, July 5–9, 2021, Proceedings

Editors: Liesbeth De Mol, Prof. Dr. Andreas Weiermann, Florin Manea, David Fernández-Duque

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the proceedings of the 17th Conference on Computability in Europe, CiE 2021, organized by the University of Ghent in July 2021. Due to the COVID-19 pandemic, the conference was held virtually.

The 48 full papers presented in this volume were carefully reviewed and selected from 50 submissions. CiE promotes the development of computability-related science, ranging over mathematics, computer science and applications in various natural and engineering sciences, such as physics and biology, as well as related fields, such as philosophy and history of computing. CiE 2021 had as its motto Connecting with Computability, a clear acknowledgement of the connecting and interdisciplinary nature of the conference series, which is all the more important at a time when people are more than ever disconnected from one another due to the COVID-19 pandemic.

Table of Contents

Frontmatter
Searching for Applicable Versions of Computable Structures

We systematise notions and techniques needed for developing computable structure theory towards applications such as symbolic algorithms and on-line algorithms. Along the way, we introduce some new notions and investigate their relation to existing ones. In particular, in the context of polynomial-time presentability, this relation turns out to depend on complexity-theoretic conjectures such as P = NP.

P. E. Alaev, V. L. Selivanov
On Measure Quantifiers in First-Order Arithmetic

We study the logic obtained by endowing the language of first-order arithmetic with second-order measure quantifiers. This new kind of quantification allows us to express that the argument formula is true in a certain portion of all possible interpretations of the quantified variable. We show that first-order arithmetic with measure quantifiers is capable of formalizing simple results from probability theory and, most importantly, of representing every recursive random function. Moreover, we introduce a realizability interpretation of this logic in which programs have access to an oracle from the Cantor space.

Melissa Antonelli, Ugo Dal Lago, Paolo Pistone
Learning Languages with Decidable Hypotheses

In language learning in the limit, the most common type of hypothesis is to give an enumerator for a language, a W-index. These hypotheses have the drawback that even the membership problem is undecidable. In this paper, we use a different system which allows for naming arbitrary decidable languages, namely programs for characteristic functions (called C-indices). These indices have the drawback that it is now not decidable whether a given hypothesis is even a legal C-index. In this first analysis of learning with C-indices, we give a structured account of the learning power of various restrictions employing C-indices, also when compared with W-indices. We establish a hierarchy of learning power depending on whether C-indices are required (a) on all outputs; (b) only on outputs relevant for the class to be learned; or (c) only in the limit as final, correct hypotheses. We also analyze all these questions in relation to the mode of data presentation. Finally, we ask about the relation of semantic versus syntactic convergence and derive the map of pairwise relations for these two kinds of convergence coupled with various forms of data presentation.

Julian Berger, Maximilian Böther, Vanja Doskoč, Jonathan Gadea Harder, Nicolas Klodt, Timo Kötzing, Winfried Lötzsch, Jannik Peters, Leon Schiller, Lars Seifert, Armin Wells, Simon Wietheger
Robust Online Algorithms for Dynamic Choosing Problems

Semi-online algorithms that are allowed to perform a bounded amount of repacking achieve guaranteed good worst-case behaviour in a more realistic setting. Most previous work focused on minimization problems that aim to minimize some cost. In this work, we study maximization problems that aim to maximize their profit. We mostly focus on a class of problems that we call choosing problems, where a maximum-profit subset of a set of objects has to be maintained. Many known problems, such as Knapsack, MaximumIndependentSet and variations of these, are part of this class. We present a framework for choosing problems that allows us to transfer offline $$\alpha$$-approximation algorithms into $$(\alpha-\epsilon)$$-competitive semi-online algorithms with amortized migration $$O(1/\epsilon)$$. Moreover, we complement these positive results with lower bounds showing that our results are tight, in the sense that no amortized migration of $$o(1/\epsilon)$$ is possible.

Sebastian Berndt, Kilian Grage, Klaus Jansen, Lukas Johannsen, Maria Kosche
On the Degrees of Constructively Immune Sets

Xiang Li (1983) introduced what are now called constructively immune sets as an effective version of immunity. Such sets have been studied in relation to randomness and minimal indices, and we add another application area: numberings of the rationals. We also investigate the Turing degrees of constructively immune sets and the closely related $$\varSigma^0_1$$-dense sets of Ferbus-Zanda and Grigorieff (2008).

Samuel D. Birns, Bjørn Kjos-Hanssen
Fine-Grained Complexity Theory: Conditional Lower Bounds for Computational Geometry

Fine-grained complexity theory is the area of theoretical computer science that proves conditional lower bounds based on the Strong Exponential Time Hypothesis and similar conjectures. This area has been thriving in the last decade, leading to conditionally best-possible algorithms for a wide variety of problems on graphs, strings, numbers, etc. This article is an introduction to fine-grained lower bounds in computational geometry, with a focus on lower bounds for polynomial-time problems based on the Orthogonal Vectors Hypothesis. Specifically, we discuss conditional lower bounds for nearest neighbor search under the Euclidean distance and Fréchet distance.

Karl Bringmann
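As a minimal illustration of the Orthogonal Vectors problem underlying these lower bounds, the sketch below gives the naive quadratic-time check (the function name and encoding are my own, not from the article):

```python
def has_orthogonal_pair(A, B):
    """Naive O(n^2 * d) test: do the sets A, B of 0/1 vectors contain
    a pair a, b with dot product <a, b> = 0?  The Orthogonal Vectors
    Hypothesis asserts that, for dimension d = omega(log n), no
    algorithm solves this in time n^(2 - eps) for any eps > 0."""
    return any(all(x * y == 0 for x, y in zip(a, b))
               for a in A for b in B)
```

For example, `has_orthogonal_pair([(1, 0, 1), (0, 1, 1)], [(0, 1, 0), (1, 1, 1)])` holds via the pair (1, 0, 1) and (0, 1, 0).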
The Lost Melody Theorem for Infinite Time Blum-Shub-Smale Machines

We consider recognizability for Infinite Time Blum-Shub-Smale machines (ITBMs), a model of infinitary computability introduced by Koepke and Seyfferth. In particular, we show that the lost melody theorem (originally proved for ITTMs by Hamkins and Lewis), i.e. the existence of non-computable but recognizable real numbers, holds for ITBMs; that ITBM-recognizable real numbers are hyperarithmetic; and that both ITBM-recognizable and ITBM-unrecognizable real numbers appear at every level of the constructible hierarchy above $$\omega^{\omega}$$ and below $$L_{\omega_{1}^{\text{CK}}}$$.

Merlin Carl
Randomising Realizability

We consider a randomised version of Kleene’s realizability interpretation of intuitionistic arithmetic in which computability is replaced with randomised computability with positive probability. In particular, we show that (i) the set of randomly realizable statements is closed under intuitionistic first-order logic, but (ii) is different from the set of realizable statements; that (iii) “realizability with probability 1” is the same as realizability; and that (iv) the axioms of bounded Heyting arithmetic are randomly realizable, but some instances of the full induction scheme fail to be randomly realizable.

Merlin Carl, Lorenzo Galeotti, Robert Passmann
Restrictions of Hindman’s Theorem: An Overview

I give a short overview on bounds on the logical strength and effective content of restrictions of Hindman’s Theorem based on the family of sums for which monochromaticity is required, highlighting a number of questions I find interesting.

Lorenzo Carlucci
Complexity and Categoricity of Injection Structures Induced by Finite State Transducers

An injection structure $${\mathcal A} = (A,f)$$ is a set A together with a one-place one-to-one function f. $${\mathcal A}$$ is a Finite State Transducer (abbreviated FST) injection structure if A is a regular set, that is, the set of words accepted by some finite automaton, and f is realized by a deterministic finite-state transducer. We study the complexity of the character of an FST injection structure. We also examine the effective categoricity of such structures.

Richard Krogman, Douglas Cenzer
A Tale of Optimizing the Space Taken by de Bruijn Graphs

In the last decade in bioinformatics, many computational works have studied a graph data structure used to represent genomic data, the de Bruijn graph. It is closely tied to the problem of genome assembly, i.e. the reconstruction of an organism’s chromosomes using a large collection of overlapping short fragments. We start by highlighting this connection, noting that assembling genomes is a computationally intensive task, and then focus our attention on the reduction of the space taken by de Bruijn graph data structures. This extended abstract is a retrospective centered around my own previous work in this area. It complements a recent review [10] by providing a less technical and more introductory exposition of a selection of concepts.

Rayan Chikhi
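To make the central object concrete, the following toy sketch (my own encoding, not from the abstract) builds a node-centric de Bruijn graph from a set of reads, with k-mers as nodes and (k-1)-overlaps as edges:

```python
from collections import defaultdict

def de_bruijn_graph(reads, k):
    """Build a node-centric de Bruijn graph: nodes are the k-mers
    occurring in the reads; there is an edge a -> b whenever the
    (k-1)-suffix of a equals the (k-1)-prefix of b."""
    kmers = set()
    for read in reads:
        for i in range(len(read) - k + 1):
            kmers.add(read[i:i + k])
    graph = defaultdict(set)
    for a in kmers:
        for b in kmers:
            if a[1:] == b[:-1]:  # (k-1)-overlap
                graph[a].add(b)
    return kmers, dict(graph)
```

For reads ["ACGT", "CGTA"] and k = 3 this yields nodes {ACG, CGT, GTA} with edges ACG → CGT → GTA; walking the edges spells out the assembled sequence, which is the connection to genome assembly highlighted above. Space-efficient representations of exactly this structure are the topic of the retrospective.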
Formally Computing with the Non-computable

Church–Turing computability, which is the standard notion of computation, is based on functions for which there is an effective method for constructing their values. However, intuitionistic mathematics, as conceived by Brouwer, extends the notion of effective algorithmic constructions by also admitting constructions corresponding to human experiences of mathematical truths, which are based on temporal intuitions. In particular, the key notion of infinitely proceeding sequences of freely chosen objects, known as free choice sequences, regards functions as being constructed over time. This paper describes how free choice sequences can be embedded in an implemented formal framework, namely the constructive type theory of the Nuprl proof assistant. Some broader implications of supporting such an extended notion of computability in a formal system are then discussed, focusing on formal verification and constructive mathematics.

Liron Cohen
Mapping Monotonic Restrictions in Inductive Inference

In inductive inference we investigate computable devices (learners) learning formal languages. In this work, we focus on monotonic learners which, despite their natural motivation, exhibit peculiar behaviour. A recent study analysed the learning capabilities of strongly monotone learners in various settings. The differences it unveiled between explanatory (syntactically converging) and behaviourally correct (semantically converging) such learners motivate our study of monotone learners in the same settings. While the structure of the pairwise relations for monotone explanatory learning is similar to the strongly monotone case (and for similar reasons), a very different picture emerges for behaviourally correct learning. In the latter setup, we provide a self-learning class of languages showing that monotone learners, as opposed to their strongly monotone counterparts, rely heavily on the order in which the information is given, an unusual result for behaviourally correct learners.

Vanja Doskoč, Timo Kötzing
Normal Forms for Semantically Witness-Based Learners in Inductive Inference

In inductive inference, we study learners (computable devices) inferring formal languages. In particular, we consider semantically witness-based learners, that is, learners which are required to justify each of their semantic mind changes. This natural requirement deserves special attention as it is a specialization of various important learning paradigms. As such, it has already proven fruitful for gaining knowledge about other types of restrictions. In this paper, we provide a thorough analysis of semantically converging, semantically witness-based learners, obtaining normal forms for them. Most notably, we show that set-driven globally semantically witness-based learners are equally powerful as their Gold-style semantically conservative counterparts. Such results are key to understanding the as yet undiscovered mutual relation between various important learning paradigms of semantically converging learners.

Vanja Doskoč, Timo Kötzing
Walk-Preserving Transformation of Overlapped Sequence Graphs into Blunt Sequence Graphs with GetBlunted

Sequence graphs have emerged as an important tool in two distinct areas of computational genomics: genome assembly and pangenomics. However, despite this shared basis, subtly different graph formalisms have hindered the flow of methodological advances from pangenomics into genome assembly. In genome assembly, edges typically indicate overlaps between sequences, with the overlapping sequence expressed redundantly on both nodes. In pangenomics, edges indicate adjacency between sequences with no overlap—often called blunt adjacencies. Algorithms and software developed for blunt sequence graphs often do not generalize to overlapped sequence graphs. This effectively silos pangenomics methods that could otherwise benefit genome assembly. In this paper, we attempt to dismantle this silo. We have developed an algorithm that transforms an overlapped sequence graph into a blunt sequence graph that preserves walks from the original graph. Moreover, the algorithm accomplishes this while also eliminating most of the redundant representation of sequence in the overlap graph. The algorithm is available as a software tool, GetBlunted, which uses little enough time and memory to virtually guarantee that it will not be a bottleneck in any genome assembly pipeline.

Jordan M. Eizenga, Ryan Lorig-Roach, Melissa M. Meredith, Benedict Paten
On 3SUM-hard Problems in the Decision Tree Model

We describe subquadratic algorithms, in the algebraic decision-tree model of computation, for detecting whether there exists a triple of points, belonging to three respective sets A, B, and C of points in the plane, that satisfies a pair of polynomial equations. In particular, this has an application to detecting collinearity among three sets A, B, C of n points each, in the complex plane, when each of the sets A, B, C lies on some constant-degree algebraic curve. In another development, we present a subquadratic algorithm, in the algebraic decision-tree model, for the following problem: Given a pair of sets A, B, each consisting of n pairwise disjoint line segments in the plane, and a third set C of arbitrary line segments in the plane, determine whether $$A\times B\times C$$ contains a triple of concurrent segments. This is one of four 3SUM-hard geometric problems recently studied by Chan (2020). The results reported in this extended abstract are based on the recent studies of the author with Aronov and Sharir (2020, 2021).

Esther Ezra
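For orientation, the 3SUM problem over three sets, which anchors these hardness results, has an easy quadratic algorithm; the conjecture is that nothing substantially subquadratic exists. A toy sketch, with names of my own choosing:

```python
def has_zero_triple(A, B, C):
    """O(n^2) 3SUM check over three sets of integers: for each pair
    (a, b), test membership of -(a + b) in a hash set built from C,
    so a + b + c = 0 has a solution iff the test ever succeeds."""
    c_set = set(C)
    return any(-(a + b) in c_set for a in A for b in B)
```

The geometric problems above are "3SUM-hard" in the sense that a substantially subquadratic algorithm for them would yield one for this problem.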
Limitwise Monotonic Spectra and Their Generalizations

The current work studies the limitwise monotonic spectra introduced by Downey, Kach and Turetsky [6]. In the first part of the paper, we study the limitwise monotonic spectra of subsets and sequences of subsets of $$\mathbb{N}$$. In particular, we study their measure-theoretical and topological properties. Then we generalize them to the spectra of subsets and sequences of subsets of $$\mathbb{R}$$ and provide some new degree spectra of structures.

Marat Faizrahmanov
On False Heine/Borel Compactness Principles in Proof Mining

The use of certain false Heine/Borel compactness principles is justified in source theories of proof mining. The justification rests on the metatheorems of the theory of proof mining. Ulrich Kohlenbach recently produced a counterexample showing that the metatheorems do not apply unrestrictedly to Heine-Borel compactness principles. In this short note, we present a simpler counterexample than Kohlenbach’s, showing that the metatheorems can fail because the source theory is already inconsistent.

Fernando Ferreira
Placing Green Bridges Optimally, with a Multivariate Analysis

We study the problem of placing wildlife crossings, such as green bridges, over human-made obstacles to challenge habitat fragmentation. The main task herein is, given a graph describing habitats or routes of wildlife animals and possibilities of building green bridges, to find a low-cost placement of green bridges that connects the habitats. We develop three problem models for this task, which model different ways of how animals roam their habitats. We settle the classical complexity and parameterized complexity (regarding the number of green bridges and the number of habitats) of the three problems.

Till Fluschnik, Leon Kellerhals
A Church-Turing Thesis for Randomness?

We discuss the difficulties in stating an analogue of the Church-Turing thesis for algorithmic randomness. We present one possibility and argue that it cannot occupy the same position in the study of algorithmic randomness that the Church-Turing thesis does in computability theory. We begin by observing that some evidence comparable to that for the Church-Turing thesis does exist for this statement: in particular, there are other reasonable formalizations of the intuitive concept of randomness that lead to the same class of random sequences (the Martin-Löf random sequences). However, we consider three properties that we would like a random sequence to satisfy and find that the Martin-Löf random sequences do not necessarily possess these properties to a greater degree than other types of random sequences, and we further argue that there is no more appropriate version of the Church-Turing thesis for algorithmic randomness. This suggests that consensus around a version of the Church-Turing thesis in this context is unlikely.

Johanna N. Y. Franklin
Probabilistic Models of k-mer Frequencies (Extended Abstract)

In this article, we review existing probabilistic models for modeling the abundance of fixed-length strings (k-mers) in DNA sequencing data. These models capture the dependence of the abundance on various phenomena, such as the size and repeat content of the genome, heterozygosity levels, and the sequencing error rate. This in turn allows us to estimate these properties from k-mer abundance histograms observed in real data. We also briefly discuss the issue of comparing k-mer abundance between related sequencing samples and meaningfully summarizing the results.

Askar Gafurov, Tomáš Vinař, Broňa Brejová
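A k-mer abundance histogram of the kind these models are fitted to can be computed in a few lines. The sketch below is a toy version under my own naming, ignoring reverse complements and multiple reads:

```python
from collections import Counter

def kmer_abundance_histogram(sequence, k):
    """Count every k-mer's abundance in the sequence, then tally how
    many distinct k-mers share each abundance value; the result maps
    abundance -> number of distinct k-mers with that abundance."""
    counts = Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))
    return dict(Counter(counts.values()))
```

For instance, "ABABAB" with k = 2 contains "AB" three times and "BA" twice, giving the histogram {3: 1, 2: 1}; the shape of such histograms on real data is what the reviewed models try to explain.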
Defining Formal Explanation in Classical Logic by Substructural Derivability

Precisely framing a formal notion of explanation is a hard problem of great relevance for several areas of scientific investigation such as computer science, philosophy and mathematics. We study a notion of formal explanation according to which an explanation of a formula F must contain all and only the true formulae that concur in determining the truth of F. Even though this notion of formal explanation is defined by reference to derivability in classical logic, the relation that holds between the explained formula and the formulae explaining it has a distinct substructural flavour, due to the fact that no redundancy is admitted among the explaining formulae. We formalise this intuition and prove that this notion of formal explanation is essentially connected, in a very specific sense, to derivability in a substructural calculus.

Francesco A. Genco, Francesca Poggiolesi
Dedekind Cuts and Long Strings of Zeros in Base Expansions

In this paper, we study the complexity of irrational numbers under different representations. It is well known that they can be computably transformed from one into another, but in general not subrecursively (with respect to many natural subrecursive classes). Our present focus is mainly on Dedekind cuts and base-b expansions. There exists a simple algorithm which converts the Dedekind cut of an irrational number into its base-b expansion, but the opposite conversion is not subrecursively possible. We therefore enforce some natural conditions on the distribution of digits in the base-b expansion, under which we can obtain low complexity of the Dedekind cut. Our first theorem states that, for a subrecursive class $$\mathcal{S}$$ with natural closure properties, the Dedekind cut of an irrational number belongs to $$\mathcal{S}$$ whenever its base-b expansion has only very small chunks of non-zero digits, which can be generated by means of the class $$\mathcal{S}$$. Much more interesting is the case when long strings of zero digits alternate with long strings of non-zero digits. We give such an example, in which the Dedekind cut does not belong to $$\mathcal{S}$$, but after properly inserting zeros, its complexity lowers so as to belong to $$\mathcal{S}$$. We also give a construction of a real number which has similar long stretches of zeros, but whose Dedekind cut can be made arbitrarily complex.

Ivan Georgiev
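The easy direction, from a Dedekind cut to base-b digits, can be sketched as follows, assuming the cut of an irrational $$\alpha \in (0,1)$$ is given as a decidable predicate cut(q), true exactly when the rational q lies below $$\alpha$$ (the interface and names are my own illustration, not the paper's):

```python
from fractions import Fraction

def digits_from_cut(cut, b, n):
    """First n base-b digits of an irrational alpha in (0, 1), where
    cut(q) decides whether the rational q lies below alpha.  Each step
    shrinks [lo, lo + width) to the digit subinterval containing alpha."""
    lo, width = Fraction(0), Fraction(1)
    digits = []
    for _ in range(n):
        width /= b
        d = 0
        # push d up while the right endpoint of subinterval d is still below alpha
        while d < b - 1 and cut(lo + (d + 1) * width):
            d += 1
        digits.append(d)
        lo += d * width
    return digits
```

With cut(q) defined as q² < 1/2, i.e. $$\alpha = 1/\sqrt{2} \approx 0.70710\ldots$$, the first five decimal digits come out as 7, 0, 7, 1, 0. Irrationality of $$\alpha$$ matters: the predicate never has to decide equality, which is exactly why the cut determines the expansion so directly, while the reverse conversion is the subrecursively hard one.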
On the Impact of Treewidth in the Computational Complexity of Freezing Dynamics

An automata network is a network of entities, each holding a state from a finite set and evolving according to a local update rule which depends only on its neighbors in the network’s graph. It is freezing if there is an order on states such that the state evolution of any node is non-decreasing in any orbit. Freezing networks are commonly used to model epidemic propagation, diffusion phenomena like bootstrap percolation, and crystal growth. In this paper we establish how alphabet size, treewidth and maximum degree of the underlying graph are key parameters which influence the overall computational complexity of finite freezing automata networks. First, we define a general specification checking problem that captures many classical decision problems, such as prediction, nilpotency, predecessor, and asynchronous reachability. Then, we present a fast-parallel algorithm that solves the general problem when the three parameters are bounded, hence showing that the problem is in NC. Finally, we show that these problems are hard from two different perspectives. First, the general problem is W[2]-hard when taking either treewidth or alphabet size as single parameter and fixing the others. Second, the classical problems are hard in their respective classes when restricted to families of graphs with sufficiently large treewidth.

Eric Goles, Pedro Montealegre, Martín Ríos Wilson, Guillaume Theyssier
Towards a Map for Incremental Learning in the Limit from Positive and Negative Information

In order to model an efficient learning paradigm, iterative learning algorithms access data one by one, updating the current hypothesis without recourse to past data. Prior research investigating the impact of additional requirements on iterative learners left many questions open, especially in learning from informant, where the input is binary labeled. We first compare learning from positive information (text) with learning from informant. We provide different concept classes learnable from text but not by an iterative learner from informant. Further, we show that totality restricts iterative learning from informant. Towards a map of iterative learning from informant, we prove that strongly non-U-shaped learning is restrictive and that iterative learners from informant can be assumed canny for a wide range of learning criteria. Finally, we compare two syntactic learning requirements.

Ardalan Khazraei, Timo Kötzing, Karen Seidel
On Preserving the Computational Content of Mathematical Proofs: Toy Examples for a Formalising Strategy

Instead of using program extraction mechanisms in various theorem provers, I suggest that users opt to create a database of formal proofs whose computational content is made explicit. As libraries of formal mathematical proofs are constantly growing, this alternative approach would rely on future advances in automation and machine learning tools, so that as blocks of (sub)proofs get generated automatically, the preserved computational content would get recycled, recombined, and would eventually manifest itself in different contexts. To this end, I do not suggest restricting to only constructive proofs; rather, proof-mined, possibly also non-constructive proofs with some explicit computational content should be preferred where possible. To illustrate what kind of computational content in mathematical proofs may be of interest, I give several very elementary examples (to be regarded as building blocks of proofs) and some samples of formalisations in Isabelle/HOL. Given the state of the art in automation and machine learning tools currently available for proof assistants, my suggestion is rather speculative, yet starting to build a database of formal proofs with explicit computational content would be a potentially useful first step.

Angeliki Koutsoukou-Argyraki
In Search of the First-Order Part of Ramsey’s Theorem for Pairs

In reverse mathematics, determining the first-order consequences of Ramsey’s theorem for pairs and two colors is a long-standing open problem. In this paper, we give an overview of some recent developments related to this problem.

Leszek Aleksander Kołodziejczyk, Keita Yokoyama
On Subrecursive Representation of Irrational Numbers: Contractors and Baire Sequences

We study the computational complexity of three representations of irrational numbers: standard Baire sequences, dual Baire sequences and contractors. Our main results: irrationals whose standard Baire sequences are of low computational complexity might have dual Baire sequences of arbitrarily high computational complexity, and vice versa, irrationals whose dual Baire sequences are of low complexity might have standard Baire sequences of arbitrarily high complexity. Furthermore, for any subrecursive class $$\mathcal{S}$$ closed under primitive recursive operations, the class of irrationals that have a contractor in $$\mathcal{S}$$ is exactly the class of irrationals that have both a standard and a dual Baire sequence in $$\mathcal{S}$$. Our results imply that a subrecursive class closed under primitive recursive operations contains the continued fraction of an irrational number $$\alpha$$ if and only if there is a contractor for $$\alpha$$ in the class.

Lars Kristiansen
Learning Languages in the Limit from Positive Information with Finitely Many Memory Changes

We investigate learning collections of languages from texts by an inductive inference machine with access to the current datum and a bounded memory in the form of states. Such a bounded memory states ($$\mathbf{BMS}$$) learner is considered successful in case it eventually settles on a correct hypothesis while exploiting only finitely many different states. We give the complete map of all pairwise relations for an established collection of criteria of successful learning. Most prominently, we show that non-U-shapedness is not restrictive, while conservativeness and (strong) monotonicity are. Some results carry over from iterative learning by a general lemma showing that, for a wealth of restrictions (the semantic restrictions), iterative and bounded memory states learning are equivalent. We also give an example of a non-semantic restriction (strong non-U-shapedness) where the two settings differ.

Timo Kötzing, Karen Seidel
Compression Techniques in Group Theory

This paper gives an informal overview of applications of compression techniques in algorithmic group theory.

Markus Lohrey
Computable Procedures for Fields

This tutorial will introduce listeners to many questions that can be asked about computable processes on fields, and will present the answers that are known, sometimes with proofs. This is not original work. The questions in greatest focus here include decision procedures for the existence of roots of polynomials in specific fields, for the irreducibility of polynomials over those fields, and for transcendence of specific elements over the prime subfield. Several of these questions are related to the construction of algebraic closures, making Rabin’s Theorem prominent.

Russell Miller
Minimum Classical Extensions of Constructive Theories

Reverse constructive mathematics, based on the pioneering work of Kleene, Vesley, Kreisel, Troelstra, Bishop, Bridges and Ishihara, is currently under development. Bishop constructivists tend to emulate the classical reverse mathematics of Friedman and Simpson. Veldman’s reverse intuitionistic analysis and descriptive set theory split notions in the style of Brouwer. Kohlenbach’s proof mining uses interpretations and translations to extract computational information from classical proofs. We identify the classical content of a constructive mathematical theory with the Gentzen negative interpretation of its classically correct part. In this sense HA and PA have the same classical content, but intuitionistic and classical two-sorted recursive arithmetic with quantifier-free countable choice do not; $$\varSigma^0_1$$ numerical double negation shift expresses the precise difference. Other double negation shift and weak comprehension principles clarify the classical content of stronger constructive theories. Any consistent axiomatic theory S based on intuitionistic logic has a minimum classical extension S$$^{+g}$$, obtained by adding to S the negative interpretations of its classically correct consequences. Subsystems of Kleene’s intuitionistic analysis and supersystems of Bishop’s constructive analysis provide interesting examples, with the help of constructive decomposition theorems.

Joan Rand Moschovakis, Garyfallia Vafeiadou
Subrecursive Equivalence Relations and (non-)Closure Under Lattice Operations

The set of equivalence relations on any non-empty set is equipped with a natural order that makes it a complete lattice. The lattice structure only depends on the cardinality of the set, and thus the study of the lattice structure on any countably infinite set is (up to order-isomorphism) the same as studying the lattice of equivalence relations on the set of natural numbers. We investigate closure under the meet and join operations in the lattice of equivalence relations on the set of natural numbers. Among other results, we show that no set of co-r.e. equivalence relations that contains all logspace-decidable equivalence relations is a lattice.

Jean-Yves Moyen, Jakob Grue Simonsen
Interactive Physical ZKP for Connectivity: Applications to Nurikabe and Hitori

In recent years, many Physical Zero-knowledge Proof (ZKP) protocols for Nikoli’s puzzles have been designed. In this paper, we propose two ZKP protocols for two of Nikoli’s puzzles, Nurikabe and Hitori. These two puzzles have some similarities: in both, at least one rule requires that certain cells be connected to each other, horizontally or vertically. The novelty in this paper is to propose two techniques that allow us to prove such connectivity without leaking any information about a solution.

Léo Robert, Daiki Miyahara, Pascal Lafourcade, Takaaki Mizuki
Positive Enumerable Functors

We study reductions well suited to compare structures and classes of structures with respect to properties based on enumeration reducibility. We introduce the notion of a positive enumerable functor and study the relationship with established reductions based on functors and alternative definitions.

Barbara F. Csima, Dino Rossegger, Daniel Yu
Splittings and Robustness for the Heine-Borel Theorem

The Heine-Borel theorem for uncountable coverings has recently emerged as an interesting and central principle in higher-order Reverse Mathematics and computability theory, formulated as follows: $$\textsf{HBU}$$ is the Heine-Borel theorem for uncountable coverings given as $$\cup_{x\in [0,1]}(x-\varPsi(x), x+\varPsi(x))$$ for arbitrary $$\varPsi:[0,1]\rightarrow {\mathbb R}^{+}$$, i.e. the original formulation going back to Cousin (1895) and Lindelöf (1903). In this paper, we show that $$\textsf{HBU}$$ is equivalent to its restriction to functions continuous almost everywhere, an elegant robustness result. We also obtain a nice splitting $$\textsf{HBU} \leftrightarrow [\textsf{WHBU}^{+}+\textsf{HBC}_{0}+\textsf{WKL}]$$, where $$\textsf{WHBU}^{+}$$ is a strengthening of Vitali’s covering theorem and where $$\textsf{HBC}_{0}$$ is the Heine-Borel theorem for countable collections (and not sequences) of basic open intervals, as formulated by Borel himself in 1898.

Sam Sanders
Non-collapse of the Effective Wadge Hierarchy

We study the recently suggested effective Wadge hierarchy in effective spaces, concentrating on the non-collapse property. Along with hierarchies of sets, we study hierarchies of k-partitions which are interesting on their own. In particular, we establish sufficient conditions for the non-collapse of the effective Wadge hierarchy and apply them to some concrete spaces.

Victor Selivanov
Effective Inseparability and Its Applications

We survey some recent applications of the classical notion of effective inseparability to computably enumerable structures, formal systems and lattices of sentences.

Andrea Sorbi
Simple Betting and Stochasticity

A sequence of zeros and ones is called Church stochastic if all subsequences chosen in an effective manner satisfy the law of large numbers with respect to the uniform measure. This notion may be independently defined by means of simple martingales, i.e., martingales with restricted (constant) wagers (hence, simply random sequences). This paper is concerned with the generalization of Church stochasticity to arbitrary (possibly non-stationary) measures. We compare two ways of doing this: (i) via a natural extension of the law of large numbers (for non-i.i.d. processes) and (ii) via restricted martingales, i.e., by redefining simple randomness for arbitrary measures. It is shown that in the general case of non-uniform measures the respective notions of stochasticity do not coincide, but the first is contained in the second.

Tomasz Steifer
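The simple-martingale notion in the abstract above can be illustrated concretely. The sketch below is a hypothetical toy (not taken from the paper): a bettor stakes a fixed amount on each successive bit of a binary sequence, so the capital rises or falls by a constant per round; the `predictor` function and all names here are illustrative assumptions, and the sketch ignores the requirement that wagers never exceed the remaining capital.

```python
# Hypothetical illustration (not from the paper): a "simple" martingale
# bets a fixed stake on the next bit of a binary sequence.

def run_simple_martingale(bits, predictor, stake=1.0, capital=10.0):
    """Play a constant-wager betting strategy along `bits`.

    `predictor` maps the history (a tuple of bits seen so far) to a
    guess in {0, 1}; the capital moves by +/- `stake` per round.
    """
    history = []
    for b in bits:
        guess = predictor(tuple(history))
        capital += stake if guess == b else -stake
        history.append(b)
    return capital

# Against this sequence (6 ones, 4 zeros), always guessing 1 nets
# (6 - 4) * stake = 2, so the capital ends at 12.0.
seq = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
final = run_simple_martingale(seq, lambda h: 1)
```

On a sequence satisfying the law of large numbers, no such constant-wager strategy with an effective predictor can make the capital grow without bound, which is the intuition behind defining stochasticity via simple martingales.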
Péter on Church’s Thesis, Constructivity and Computers

The aim of this paper is to take a look at Péter’s talk Rekursivität und Konstruktivität delivered at the Constructivity in Mathematics Colloquium in 1957, where she challenged Church’s Thesis from a constructive point of view. The discussion of her argument and motivations is then connected to her earlier work on recursion theory as well as her later work on theoretical computer science.

Máté Szabó
Constructive Mathematics, Church’s Thesis, and Free Choice Sequences

We see the defining properties of constructive mathematics as being the proof interpretation of the logical connectives and the definition of function as rule or method. We sketch the development of intuitionist type theory as an alternative to set theory. We note that the axiom of choice is constructively valid for types, but not for sets. We see the theory of types, in which proofs are directly algorithmic, as a more natural setting for constructive mathematics than set theories like IZF. Church's thesis provides an objective definition of effective computability. It cannot be proved mathematically because it is a conjecture about what kinds of mechanisms are physically possible, for which we have scientific evidence but not proof. We consider the idea of free choice sequences and argue that they do not undermine Church's Thesis.

D. A. Turner
KL-Randomness and Effective Dimension Under Strong Reducibility

We show that the (truth-table) Medvedev degree KLR of Kolmogorov–Loveland randomness coincides with that of Martin-Löf randomness, MLR, answering a question of Miyabe. Next, an analogue of complex packing dimension is studied which gives rise to a set of weak truth-table Medvedev degrees isomorphic to the Turing degrees.

Bjørn Kjos-Hanssen, David J. Webb
An Algorithmic Version of Zariski’s Lemma

Zariski's lemma was formulated and used by Oscar Zariski to prove Hilbert's Nullstellensatz. This article gives an elementary and constructive proof of Zariski's lemma, using only basic facts about integral ring extensions, under the condition that each field is discrete. After this constructive proof, we turn to the computational side: we give a computational interpretation of Zariski's lemma and use our constructive proof to develop an algorithm which realises this interpretation. This is a typical approach in constructive mathematics.

Franziskus Wiesnet
Einstein Meets Turing: The Computability of Nonlocal Games

Quantum entanglement – the phenomenon where distant particles can be correlated in ways that cannot be explained by classical physics – has mystified scientists since the 1930s, when quantum theory was beginning to emerge. Investigation into fundamental questions about quantum entanglement has continually propelled seismic shifts in our understanding of nature. Examples include Einstein, Podolsky and Rosen's famous 1935 paper about the incompleteness of quantum mechanics, and John Bell's refutation of EPR's argument, 29 years later, via an experiment to demonstrate the non-classicality of quantum entanglement. More recently, the field of quantum computing has motivated researchers to study entanglement in information processing contexts. One question of deep interest concerns the computability of nonlocal games, which are mathematical abstractions of Bell's experiments. The question is simple: is there an algorithm to compute the optimal winning probability of a quantum game – or at least, approximate it? In this paper, I will discuss a remarkable connection between the complexity of nonlocal games and classes in the arithmetical hierarchy. In particular, different versions of the nonlocal games computability problem neatly line up with the problems of deciding $$\varSigma _1^0$$, $$\varPi _1^0$$, and $$\varPi _2^0$$ sentences, respectively.

Henry Yuen
Computability of Limit Sets for Two-Dimensional Flows

A classical theorem of Peixoto qualitatively characterizes, on the two-dimensional unit ball, the limit sets of structurally stable flows defined by ordinary differential equations. Peixoto's density theorem further shows that such flows are typical in the sense that structurally stable systems form an open dense set in the space of all continuously differentiable flows. In this note, we discuss the problem of explicitly finding the limit sets of structurally stable planar flows.

Daniel S. Graça, Ning Zhong
Backmatter
Metadata
Title
Connecting with Computability
Editors
Liesbeth De Mol
Prof. Dr. Andreas Weiermann
Florin Manea
David Fernández-Duque
Copyright Year
2021
Electronic ISBN
978-3-030-80049-9
Print ISBN
978-3-030-80048-2
DOI
https://doi.org/10.1007/978-3-030-80049-9