2016 | Book

Limits of Computation

From a Programming Perspective

About this book

This textbook discusses the most fundamental and puzzling questions about the foundations of computing. In 23 lecture-sized chapters it provides an exciting tour through the most important results in the field of computability and time complexity, including the Halting Problem, Rice's Theorem, Kleene's Recursion Theorem, the Church-Turing Thesis, Hierarchy Theorems, and the Cook-Levin Theorem. Each chapter contains classroom-tested material, including examples and exercises. Links between adjacent chapters provide a coherent narrative.

Fundamental results are explained lucidly by means of programs written in a simple, high-level imperative programming language, which only requires basic mathematical knowledge. Throughout the book, the impact of the presented results on the entire field of computer science is emphasised. Examples range from program analysis to networking, from database programming to popular games and puzzles. Numerous biographical footnotes about the famous scientists who developed the subject are also included.

"Limits of Computation" offers a thorough, yet accessible, introduction to computability and complexity for the computer science student of the 21st century.

Table of Contents

Frontmatter
Chapter 1. Limits? What Limits?
Abstract
This introductory chapter motivates the consideration of limits of computation. It contains a short summary of the physical limitations of building silicon-based computers. Then the focus shifts to the limitations inherent to computing that are independent of any hardware considerations. The main part consists of an overview of the content of the book, computability and time complexity.
Bernhard Reus

Computability

Frontmatter
Chapter 2. Problems and Effective Procedures
Abstract
This chapter provides the basic definition of what we mean by “computable” and what we mean by “problem.” A short historical perspective is given. The term “effective procedure” is introduced, describing a procedure that can be executed in a finite number of simple steps in a mechanical way. Sets and structures on sets, i.e. relations and functions, together with their basic operations, are defined, and some basic reasoning principles are reviewed.
Bernhard Reus
Chapter 3. The WHILE-Language
Abstract
Turing machines are historically important but tedious to program and thus not an ideal language for studying computability and complexity issues. We therefore prefer to program in a high-level language, on which the notions of computable and decidable can then be based. This chapter presents the imperative untyped language WHILE, which provides basic imperative control operators and binary trees as its built-in datatype. It is explained how various datatypes like natural numbers and lists can be encoded in WHILE.
Bernhard Reus
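
To illustrate the encoding idea (a minimal Python sketch, not the book's own code, assuming the Jones-style convention in which the number n is represented by the list of n nils):

    # Values of WHILE: binary trees over the single atom nil.
    # Here nil is modelled as None and cons(l, r) as a Python pair.
    nil = None

    def cons(left, right):
        return (left, right)

    def encode_nat(n):
        """Encode n as a list of n nils: 0 -> nil, n+1 -> cons(nil, <n>)."""
        tree = nil
        for _ in range(n):
            tree = cons(nil, tree)
        return tree

    def decode_nat(tree):
        """Recover the number by counting the length of the nil-list."""
        n = 0
        while tree is not nil:
            n, tree = n + 1, tree[1]
        return n

    assert decode_nat(encode_nat(5)) == 5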
Chapter 4. Semantics of WHILE
Abstract
If we want to take WHILE-programs as effective procedures, we had better make sure we understand exactly how to execute each command, and that each can be executed using finite resources. Normally, programs in high-level languages like WHILE are not directly executed on a machine the way Turing machine programs are. They are interpreted by a special program, an interpreter, that can then be executed on any given computer’s hardware. We want to abstract away from all the details of hardware, but still ensure that WHILE-programs qualify as effective procedures, so in this chapter we assign WHILE a formal semantics that withstands any scrutiny. In order to describe the operational effect of commands we use relations involving stores that record the current values of variables. The semantics of a WHILE-program is then its behaviour, which describes the result of the program given any input.
Bernhard Reus
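
For orientation, a semantics of this kind is typically presented as a judgement relating a command C, an initial store σ and a final store σ′. A plausible reconstruction of some such rules in standard operational-semantics notation (the book's exact judgement forms may differ; the loop test follows the WHILE convention that nil means false):

    \[
    \frac{\mathcal{E}[\![E]\!]\,\sigma = v}
         {\langle X := E,\ \sigma\rangle \downarrow \sigma[X \mapsto v]}
    \qquad
    \frac{\langle C_1,\sigma\rangle \downarrow \sigma' \qquad \langle C_2,\sigma'\rangle \downarrow \sigma''}
         {\langle C_1; C_2,\ \sigma\rangle \downarrow \sigma''}
    \]

    \[
    \frac{\mathcal{E}[\![E]\!]\,\sigma = \mathsf{nil}}
         {\langle \mathtt{while}\ E\ C,\ \sigma\rangle \downarrow \sigma}
    \qquad
    \frac{\mathcal{E}[\![E]\!]\,\sigma \neq \mathsf{nil} \qquad \langle C,\sigma\rangle \downarrow \sigma' \qquad \langle \mathtt{while}\ E\ C,\ \sigma'\rangle \downarrow \sigma''}
         {\langle \mathtt{while}\ E\ C,\ \sigma\rangle \downarrow \sigma''}
    \]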
Chapter 5. Extensions of WHILE
Abstract
Some language features not included in WHILE make the programmer’s life much easier and WHILE-programs more readable. Examples of such features are built-in equality, number literals, explicit syntactic support for writing lists, macro calls, or switch statements. These extensions are the topic of this chapter. They do not require a change to the semantics defined in the previous chapter, as programs that use them can be translated into equivalent programs in pure WHILE.
Bernhard Reus
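
A flavour of how such a translation works (a hypothetical miniature in Python over a toy tagged-tuple AST, not the book's translation): the list sugar [e1, ..., en] becomes nested cons applications ending in nil.

    def desugar(expr):
        """Translate the 'list' sugar into pure nested cons/nil syntax."""
        tag = expr[0]
        if tag == 'list':
            result = ('nil',)
            for element in reversed(expr[1]):
                result = ('cons', desugar(element), result)
            return result
        if tag == 'cons':
            return ('cons', desugar(expr[1]), desugar(expr[2]))
        return expr  # 'nil' and 'var' need no translation

    # [x, y] becomes cons x (cons y nil):
    assert desugar(('list', [('var', 'x'), ('var', 'y')])) == \
        ('cons', ('var', 'x'), ('cons', ('var', 'y'), ('nil',)))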
Chapter 6. Programs as Data Objects
Abstract
Many interesting programs take as input a program or a program and some other data. Three types of programs in particular use programs as input: compilers, interpreters, and specialisers, which are briefly explained in this chapter. In order to be able to write such programs in WHILE, we need to be able to treat other WHILE-programs as data. To achieve that, an encoding for abstract syntax trees of WHILE-programs as lists is presented. Such lists can in turn be represented as values in the datatype of WHILE.
Bernhard Reus
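
The crucial observation in miniature (names and tags here are hypothetical, not the book's fixed encoding): a program's abstract syntax tree is just a nested list, and nested lists are WHILE values.

    # A tiny WHILE-style program, say "read X; Y := cons X X; write Y",
    # written out as a nested list -- the very kind of value that
    # WHILE programs compute over.
    program_as_data = ['X',                                       # input
                       [['assign', 'Y',
                         ['cons', ['var', 'X'], ['var', 'X']]]],  # body
                       'Y']                                       # output

    # Since nested lists encode as binary trees (Chapter 3), this whole
    # object can be passed as ordinary input data to an interpreter,
    # compiler or specialiser that is itself written in WHILE.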
Chapter 7. A Self-interpreter for WHILE
Abstract
This chapter presents a detailed description of an interpreter for WHILE written in WHILE. The fact that we can write a WHILE-interpreter in WHILE itself is evidence that our WHILE language was an acceptable choice for “effective procedures”. The WHILE-language does not provide recursive procedures and therefore we have to implement a traversal of abstract syntax trees using stacks.
Bernhard Reus
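
The stack technique in miniature (a hypothetical Python sketch of the idea, not the book's interpreter): recursion over the syntax tree is replaced by explicit work and result stacks.

    def eval_expr(expr, store):
        """Evaluate a toy expression AST iteratively, using explicit
        stacks instead of recursion, as a WHILE-written interpreter must."""
        todo, results = [('eval', expr)], []
        while todo:
            kind, item = todo.pop()
            if kind == 'eval':
                tag = item[0]
                if tag == 'nil':
                    results.append(None)
                elif tag == 'var':
                    results.append(store.get(item[1]))
                else:  # ('cons', e1, e2): evaluate subtrees, then combine
                    todo.append(('combine', None))
                    todo.append(('eval', item[2]))
                    todo.append(('eval', item[1]))
            else:  # 'combine': pop the two subtree values, push the pair
                right = results.pop()
                left = results.pop()
                results.append((left, right))
        return results.pop()

    assert eval_expr(('cons', ('nil',), ('var', 'X')), {'X': None}) == (None, None)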
Chapter 8. An Undecidable (Non-computable) Problem
Abstract
This chapter discusses our first major result in computability theory, and our first ‘interesting’ problem. More precisely, we present a problem that is undecidable, i.e. one that no computer program can solve. We define the concept of WHILE-decidability and define the famous Halting problem: whether a given WHILE-program terminates when run on a given input. The concept of diagonalisation is explained and then used to prove that the Halting problem is WHILE-undecidable.
Bernhard Reus
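
The diagonal argument, sketched in Python (here `halts` is the decider that we assume, for contradiction, to exist; nothing in this sketch is the book's code):

    def diagonal(halts):
        """Given a purported total decider halts(program, input) -> bool,
        build the self-contradictory program of the diagonal argument."""
        def contrary(program_source):
            if halts(program_source, program_source):
                while True:      # halts says "halts"? Then loop forever.
                    pass
            return 'done'        # halts says "loops"? Then stop at once.
        return contrary

    # Running contrary on its own source would make halts wrong either
    # way, so no total, correct halts can exist.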
Chapter 9. More Undecidable Problems
Abstract
In this chapter we will look at more undecidable problems. We show a famous result, Rice's theorem, which states that any non-trivial, purely semantical property of programs is undecidable. The proof uses a reduction from the Halting problem. Such reductions and the reasoning principles they give rise to are investigated. The concept of a semi-decidable problem is introduced, and it is shown that the Halting problem is semi-decidable by means of the self-interpreter.
Bernhard Reus
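
Semi-decidability of halting in miniature (a Python sketch assuming a hypothetical interpreter run(program, data), playing the role of the Chapter 7 self-interpreter):

    def semi_decide_halting(run, program, data):
        """Answers True whenever program halts on data; otherwise it
        simply never returns, which is exactly what a semi-decider may do."""
        run(program, data)   # may loop forever
        return True

    # Rice-style reduction sketch: to decide "does q compute the constant
    # nil function?", build from (p, d) a program q that ignores its own
    # input, runs p on d, and then returns nil. Then q has the property
    # iff p halts on d, so a decider for the property would decide the
    # Halting problem.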
Chapter 10. Self-referencing Programs
Abstract
In WHILE there is no language feature that admits recursive macros or any direct reference to the program itself. This raises some questions. Can we write self-referencing programs in WHILE at all? More concretely, can we write a WHILE program that returns itself as data, i.e. its own abstract syntax tree? In this chapter Kleene’s Recursion Theorem, which answers those questions in the affirmative, is proved by one more application of the diagonalisation principle. Applications of the theorem are discussed and another theorem of Kleene’s, the parameterisation theorem, is shown, which provides the semantic justification of partial evaluation, a well-known program optimisation technique.
Bernhard Reus
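
The phenomenon of a program reproducing its own text is easiest to see in a classic Python quine (not from the book): run on its own, this two-line program prints exactly its own two lines of source.

    src = 'src = %r\nprint(src %% src)'
    print(src % src)

The Recursion Theorem guarantees that the same trick, and far more general forms of self-reference, can be carried out in WHILE.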
Chapter 11. The Church-Turing Thesis
Abstract
The computability results discussed so far used WHILE-programs as our choice of “effective procedures”. In this chapter we show that they do not depend on this particular choice. Various other (well-known) notions of computation are formally introduced. This list includes machine languages like Turing machines, random access memory machines and counter machines, for which a program consists of a sequence of labelled instructions, and jumps are the only control flow mechanism. We also consider a flow chart language and cellular automata. It is then argued, by means of compilation, that all these languages are equally powerful in the sense that anything that can be programmed in one can also be programmed in any other. This provides evidence for the so-called Church-Turing thesis: that all reasonable formalisations of the intuitive notion of effective computability are equivalent.
Bernhard Reus
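
The “labelled instructions and jumps” style of model, in a hypothetical miniature (a Python interpreter for a counter machine; not the book's definition):

    def run_counter_machine(program, counters):
        """Interpret a counter-machine program: a list of instructions
        ('inc', r), ('dec', r) or ('jz', r, target). Jumps are the only
        control flow; execution halts by falling off the end."""
        pc = 0
        while pc < len(program):
            op = program[pc]
            if op[0] == 'inc':
                counters[op[1]] += 1
                pc += 1
            elif op[0] == 'dec':
                counters[op[1]] = max(0, counters[op[1]] - 1)
                pc += 1
            else:  # ('jz', r, target): jump if counter r is zero
                pc = op[2] if counters[op[1]] == 0 else pc + 1
        return counters

    # Add counter 0 into counter 1 (counter 2 stays 0, so its 'jz' acts
    # as an unconditional jump): while c0 != 0: c0 -= 1; c1 += 1
    addition = [('jz', 0, 4), ('dec', 0), ('inc', 1), ('jz', 2, 0)]
    assert run_counter_machine(addition, {0: 3, 1: 4, 2: 0})[1] == 7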

Complexity

Frontmatter
Chapter 12. Measuring Time Usage
Abstract
A problem is feasible if it can be computed in a practically acceptable amount of time for large input. This is vague, and we will have to make the notion of “feasibility” more precise. Instead of measuring real time we will introduce some abstract notion of time. This chapter defines runtime measures for the languages considered so far and discusses fairness of those definitions. A programming language with a runtime measure is called a timed programming language. This concept admits comparisons between the runtime of programs of different languages via compilation from one (timed) programming language into another one.
Bernhard Reus
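
The idea of a timed language in a deliberately tiny Python sketch (hypothetical toy commands, not the book's measure): the interpreter charges one abstract time unit per command executed.

    def run_timed(commands, store):
        """Execute toy commands ('set', var, val) and ('copy', dst, src),
        counting one abstract time unit per executed command."""
        steps = 0
        for command in commands:
            steps += 1                    # the unit-cost time measure
            if command[0] == 'set':
                store[command[1]] = command[2]
            else:                         # ('copy', dst, src)
                store[command[1]] = store[command[2]]
        return store, steps

    store, steps = run_timed([('set', 'X', 1), ('copy', 'Y', 'X')], {})
    assert steps == 2 and store == {'X': 1, 'Y': 1}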
Chapter 13. Complexity Classes
Abstract
In this chapter the notion of runtime measure for programs is lifted to problems. We classify problems according to the time it takes to decide them. The notion of time bounds is introduced, which abstracts away from concrete inputs, expressing runtime simply in terms of the size of the input. The focus will be mainly on linear, polynomial and exponential time. The Big-O and little-o notations are introduced to describe the order of growth of a function, independent of constant factors. This is in line with asymptotic worst-case complexity, which we will be using throughout.
Bernhard Reus
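
For reference, the standard definitions behind the notation, in common textbook form:

    \[
    f \in O(g) \iff \exists c > 0.\ \exists n_0 \in \mathbb{N}.\ \forall n \ge n_0.\ f(n) \le c \cdot g(n)
    \]
    \[
    f \in o(g) \iff \forall c > 0.\ \exists n_0 \in \mathbb{N}.\ \forall n \ge n_0.\ f(n) \le c \cdot g(n)
    \]

So f ∈ O(g) says f grows at most as fast as g up to a constant factor, while f ∈ o(g) says f grows strictly more slowly than g.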
Chapter 14. Robustness of P
Abstract
Does it matter for a problem’s complexity which model of computation is used to decide it? In this chapter we show that the class of polynomially decidable problems is robust under compilation between the various languages we have encountered. Stipulating that all notions of computation can simulate each other up to a polynomial factor is known as the Extended Church–Turing Thesis. It can be weakened to sequential models and is then called Cook’s thesis. A third, more questionable, thesis, often called Cobham–Edmonds thesis, states that the class of polynomially decidable problems is the class of feasible problems. The above mentioned theses will be discussed and evaluated.
Bernhard Reus
Chapter 15. Hierarchy Theorems
Abstract
In this chapter we investigate the question of whether we can decide more problems if we have more time available to do so. The corresponding positive results are called Hierarchy theorems, as they establish a hierarchy of problem classes. We first treat the case of linear time bounds and show, for the WHILE language restricted to one variable, that larger constants allow more problems to be decided. The version beyond linear time establishes a hierarchy between classes with specific time bounds, provided one time bound grows much more quickly than the other.
Bernhard Reus
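
For orientation, the classical Turing-machine version of the time hierarchy theorem has roughly the following shape; the book's WHILE-specific statements differ in their exact conditions and constants:

    \[
    f \text{ time-constructible,}\quad g(n)\log g(n) \in o(f(n))
    \;\Longrightarrow\;
    \mathrm{TIME}(g) \subsetneq \mathrm{TIME}(f)
    \]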
Chapter 16. Famous Problems in P
Abstract
This chapter presents problems on the “right side” of the limit that can be decided efficiently. These are famous problems, all computable in time bounded by a polynomial of low degree. A problem that is solved billions of times a day in network devices is the so-called predecessor problem. Other examples are parsing, primality testing, linear programming, and various graph problems including shortest paths, maximal matchings, maximal flows and the Postman problem.
Bernhard Reus
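
One of the listed problems in code: single-source shortest paths, solvable in low-order polynomial time by Dijkstra's algorithm. A standard Python sketch (not from the book):

    import heapq

    def dijkstra(graph, source):
        """Shortest distances from source in a graph with non-negative
        edge weights, given as {node: [(neighbour, weight), ...]}."""
        distance = {source: 0}
        queue = [(0, source)]
        while queue:
            d, node = heapq.heappop(queue)
            if d > distance.get(node, float('inf')):
                continue                  # stale queue entry, skip it
            for neighbour, weight in graph.get(node, []):
                new_d = d + weight
                if new_d < distance.get(neighbour, float('inf')):
                    distance[neighbour] = new_d
                    heapq.heappush(queue, (new_d, neighbour))
        return distance

    graph = {'a': [('b', 1), ('c', 4)], 'b': [('c', 2)], 'c': []}
    assert dijkstra(graph, 'a')['c'] == 3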
Chapter 17. Common Problems Not Known to Be in P
Abstract
This chapter presents another set of useful and famous problems. This time, however, nobody knows yet whether they are in P: no algorithms have been discovered yet that solve these problems in polynomial time. For simplicity, all problems in this chapter will be formulated as decision problems. The problems include the Travelling Salesman Problem, the Graph Colouring Problem and the Max-Cut Problem, which are all graph problems, as well as the 0-1 Knapsack Problem and the Integer Programming Problem. The consequences of a problem “being decidable in polynomial time” are also discussed.
Bernhard Reus
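
Why these problems look hard: the obvious algorithms try exponentially many candidates. A brute-force decider for the Travelling Salesman decision problem in Python (correct, but useless for large n, since it inspects (n-1)! tours):

    from itertools import permutations

    def tsp_decision(dist, bound):
        """Is there a round trip through all n cities of total length
        at most bound? dist is an n x n distance matrix."""
        n = len(dist)
        for tour in permutations(range(1, n)):
            route = (0,) + tour + (0,)
            length = sum(dist[route[i]][route[i + 1]] for i in range(n))
            if length <= bound:
                return True
        return False

    dist = [[0, 2, 9], [2, 0, 6], [9, 6, 0]]
    assert tsp_decision(dist, 17) and not tsp_decision(dist, 16)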
Chapter 18. The One-Million-Dollar Question
Abstract
Although it is difficult to find efficient solutions to decision problems like the Travelling Salesman or Linear Integer Programming problems presented earlier, it turns out to be relatively easy to check whether a certain candidate solution actually is a solution for the given problem. As this is such a common pattern in optimisation problems, a complexity class will be defined accordingly. The question arises whether this new class is equal to the “good” class of polynomially decidable problems. This turns out to be the most famous open problem in Theoretical Computer Science. Not only that: it also appears to have significant impact on the world of computing and even physics.
Bernhard Reus
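
The “easy to check” half, made concrete: verifying a proposed tour takes only polynomial time, in contrast to finding one. A hypothetical Python verifier for the Travelling Salesman decision problem:

    def verify_tsp_certificate(dist, bound, tour):
        """Check in polynomial time that tour visits every city exactly
        once and that the round trip has total length at most bound."""
        n = len(dist)
        if sorted(tour) != list(range(n)):
            return False                  # not a permutation of the cities
        route = list(tour) + [tour[0]]
        length = sum(dist[route[i]][route[i + 1]] for i in range(n))
        return length <= bound

    dist = [[0, 2, 9], [2, 0, 6], [9, 6, 0]]
    assert verify_tsp_certificate(dist, 17, [0, 1, 2])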
Chapter 19. How Hard Is a Problem?
Abstract
Since no concrete lower bounds for problems in NP are known, we identify the “hardest” problems in this class. To express the “harder” (or rather “not simpler than”) relationship one uses the concept of reduction. In order to ensure that one does not get out of the class of polynomial time verification, the reduction function is required to be computable in polynomial time. One can then show that if P ≠ NP is assumed, none of those “hardest” problems in NP can actually have a polynomial time solution. In other words, to show that P = NP it suffices to find a polynomial time solution for any one of the “hardest” problems in NP.
Bernhard Reus
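
A polynomial-time reduction in miniature (a standard textbook example, not necessarily one used in the book): Clique reduces to Independent Set by complementing the graph, so a fast Independent Set algorithm would immediately give a fast Clique algorithm.

    def complement_graph(n, edges):
        """The reduction map: G has a clique of size k exactly when the
        complement of G has an independent set of size k."""
        edge_set = {frozenset(e) for e in edges}
        return [(u, v) for u in range(n) for v in range(u + 1, n)
                if frozenset((u, v)) not in edge_set]

    # The triangle {0, 1, 2} is a 3-clique; its complement has no edges,
    # so {0, 1, 2} is an independent set of size 3 there.
    assert complement_graph(3, [(0, 1), (0, 2), (1, 2)]) == []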
Chapter 20. Complete Problems
Abstract
In this chapter we consider various examples of NP-complete problems. A famous result by Cook and Levin provides us with a very first NP-complete problem. To this problem one can then apply reduction to show that other problems are NP-hard as well. Several such NP-complete problems are introduced: reachability of a (favourable) position in many interesting games, (unrestricted) database queries, and policy-based routing. It turns out that most optimisation problems that are not in P are NP-complete. But there are also so-called NP-intermediate problems, for which we know neither whether they are NP-complete nor whether they are in P. Finally, we briefly discuss the concept of completeness for other complexity classes, i.e. P and semi-decidable problems.
Bernhard Reus
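
The Cook-Levin theorem concerns satisfiability of Boolean formulas (SAT). That SAT lies in NP is witnessed by a simple polynomial-time certificate check; a Python sketch over a hypothetical DIMACS-style representation of CNF formulas:

    def satisfies(cnf, assignment):
        """Check a truth assignment against a CNF formula in polynomial
        time. cnf is a list of clauses; a clause is a list of non-zero
        integers, where literal v means variable v is true and -v means
        it is false. assignment maps variables to booleans."""
        return all(
            any(assignment[abs(lit)] == (lit > 0) for lit in clause)
            for clause in cnf
        )

    # (x1 or not x2) and (x2 or x3), satisfied by x1=x2=True, x3=False:
    assert satisfies([[1, -2], [2, 3]], {1: True, 2: True, 3: False})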
Chapter 21. How to Solve NP-Complete Problems
Abstract
There are hundreds of important problems that have been shown to be NP-complete. If we believe that P ≠ NP, then no efficient algorithms exist to solve them. In this chapter we discuss how this affects the working programmer. As many of these NP-complete problems are relevant to business and industry, one needs coping strategies, which are the topic of this chapter. It is discussed what exact and approximation algorithms can achieve, and how parallelism and randomisation might help. Some new complexity classes will be defined along the way. The effectiveness of the presented strategies is evaluated for the Travelling Salesman problem. Finally, it is discussed how the fact that there are no efficient algorithms known to factorise a number can be viewed as good news.
Bernhard Reus
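
One coping strategy in miniature: a greedy nearest-neighbour heuristic for the Travelling Salesman problem runs in polynomial time but may return a suboptimal tour (a standard heuristic, not necessarily one of the book's):

    def nearest_neighbour_tour(dist):
        """Greedy TSP heuristic: always travel to the closest unvisited
        city. Polynomial time, but with no optimality guarantee."""
        n = len(dist)
        tour, unvisited = [0], set(range(1, n))
        while unvisited:
            here = tour[-1]
            nearest = min(unvisited, key=lambda city: dist[here][city])
            tour.append(nearest)
            unvisited.remove(nearest)
        return tour

    dist = [[0, 2, 9], [2, 0, 6], [9, 6, 0]]
    assert nearest_neighbour_tour(dist) == [0, 1, 2]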
Chapter 22. Molecular Computing
Abstract
This chapter is dedicated to molecular computing. Molecular computing uses DNA, molecular biology and chemistry as its hardware. Historically, its beginnings are rooted in the attempt to solve NP-complete problems efficiently. Those attempts, the overall potential, and the major challenges of DNA computing are reviewed. One particular abstract model of molecular computing, Chemical Reaction Networks, is introduced in more detail and compared to other notions of computation. The programming techniques for Chemical Reaction Networks are very different from the traditional approaches presented earlier, as the programmer has no control over the order of the reactions executed. A concrete “synthetic chemistry” implementation of this abstract model that uses DNA strand displacement is briefly explained.
Bernhard Reus
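
A flavour of programming without control over reaction order (a hypothetical Python simulation, not the book's formalism): the network with reactions A → C and B → C computes C = A + B no matter in which order the reactions happen to fire.

    import random

    def simulate_crn(counts, reactions):
        """Fire applicable reactions in arbitrary (random) order until
        none applies. A reaction (consumed, produced) turns one molecule
        of the consumed species into one of the produced species."""
        while True:
            applicable = [r for r in reactions if counts[r[0]] > 0]
            if not applicable:
                return counts
            consumed, produced = random.choice(applicable)
            counts[consumed] -= 1
            counts[produced] += 1

    # The CRN { A -> C, B -> C } computes addition, order-independently:
    result = simulate_crn({'A': 3, 'B': 4, 'C': 0}, [('A', 'C'), ('B', 'C')])
    assert result['C'] == 7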
Chapter 23. Quantum Computing
Abstract
In this chapter we will look into a completely new way of computation based on quantum mechanics, harnessing the underlying parallelism of quantum effects and superposition in a controlled manner. Without going into too much detail, we will point out the role of probabilities and wave functions and mention some famous quantum algorithms. The difficulties of building quantum computers are outlined. It is explained to what extent quantum computing can provide speed-up and it is highlighted that quantum computers are unable to solve NP-complete problems in general (unless P = NP). Quantum computing also shows off the deep connections between complexity theory and physics.
Bernhard Reus
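
A glimpse of the amplitude picture (an illustrative Python sketch, far short of the chapter's scope): the Hadamard gate puts |0⟩ into an equal superposition, and measurement probabilities are the squared amplitude magnitudes.

    import math

    def hadamard(state):
        """Apply the Hadamard gate to a one-qubit state [amp0, amp1]."""
        a0, a1 = state
        s = 1 / math.sqrt(2)
        return [s * (a0 + a1), s * (a0 - a1)]

    def probabilities(state):
        """Measurement probabilities: squared amplitude magnitudes."""
        return [abs(amp) ** 2 for amp in state]

    plus = hadamard([1, 0])          # H|0> = (|0> + |1>) / sqrt(2)
    assert all(abs(p - 0.5) < 1e-12 for p in probabilities(plus))
    assert probabilities(hadamard(plus))[0] > 0.999   # H undoes itself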
Backmatter
Metadata
Title
Limits of Computation
Author
Bernhard Reus
Copyright Year
2016
Electronic ISBN
978-3-319-27889-6
Print ISBN
978-3-319-27887-2
DOI
https://doi.org/10.1007/978-3-319-27889-6
