2021 | Book

# Introduction to Computation

## Haskell, Logic and Automata

Authors: Donald Sannella, Michael Fourman, Haoran Peng, Philip Wadler

Publisher: Springer International Publishing

Book Series: Undergraduate Topics in Computer Science


Computation, itself a form of calculation, comprises arithmetical and non-arithmetical (logical) steps that follow a specific set of rules (an algorithm). This uniquely accessible textbook introduces students using a very distinctive approach, quite rapidly leading them into essential topics with sufficient depth, yet in a highly intuitive manner. From core elements like sets, types, Venn diagrams and logic, to patterns of reasoning, calculus, recursion and expression trees, the book spans the breadth of key concepts and methods that will enable students to progress readily with their studies in Computer Science.


An important aspect of things is that we’re able to tell the difference between one thing and another. Said another way, we can tell when two things are the same, or equal. Obviously, it’s easy to tell the difference between a person and a noodle. You’d probably find it difficult to tell the difference between two zebras, but zebras can tell the difference. If two things a and b are equal, we write $$a=b$$; if they’re different then we write $$a\not =b$$.
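In Haskell the corresponding tests are the operators == and /=; a minimal sketch (the names samePair and differentWords are illustrative):

```haskell
-- == tests whether two values are equal; /= tests whether they differ.
samePair :: Bool
samePair = (1 + 1) == 2                -- True

differentWords :: Bool
differentWords = "zebra" /= "noodle"   -- True
```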

A set is a collection of things. In programming, a type is also a collection of things. Types are used in logic and mathematics with a similar meaning but without the computational constraints imposed by their use in programming; see https://en.wikipedia.org/wiki/Type_theory .

Now that we have some values, classified into types, we can start to compute with them. Starting with something familiar, let’s use Haskell to do some simple arithmetic calculations.
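For instance, expressions like the following can be typed at the GHCi prompt; they are shown here as named definitions, with illustrative names:

```haskell
-- * binds more tightly than +, so this is 2 + (3 * 4).
sumExample :: Int
sumExample = 2 + 3 * 4      -- 14

-- div is integer division; mod gives the remainder.
divExample, remExample :: Int
divExample = 7 `div` 2      -- 3
remExample = 7 `mod` 2      -- 1
```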

It’s possible to work with sets by listing all of their elements—provided they are finite sets, of course—and then comparing sets by comparing the elements in these lists. But a visual representation provides much better support for human intuition.

So far, our computations have involved simple “atomic” data values, having types like Int and Bool. More interesting programs involve the use of “compound” data structures, which group together a number of data values and allow them to be handled as a unit.
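Two of the simplest compound structures are pairs and lists; a small sketch (names illustrative):

```haskell
-- A pair groups two values, possibly of different types.
person :: (String, Int)
person = ("Ada", 36)

-- A list groups any number of values of the same type,
-- and can be handled as a unit, e.g. by maximum.
ages :: [Int]
ages = [36, 41, 29]

oldest :: Int
oldest = maximum ages   -- 41
```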

Human language is often ambiguous, verbose and imprecise, and its structure is complex. Its ability to express layers and shades of meaning makes it well-suited to writing poetry, but not so good for applications that demand precision or when simplicity is important. You’re going to learn to use a language based on propositional logic for describing features of things, making statements about them, and deciding whether or not statements are true or false. Propositional logic is a very simple form of logic where the focus is on ways of building up complex statements from simpler ones using logical connectives, including conjunction ( $$\wedge $$ , or && in Haskell), disjunction ( $$\vee $$ , or ||) and negation ( $$\lnot $$ , or not).
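In Haskell these connectives act on values of type Bool; a minimal sketch with illustrative names:

```haskell
-- Conjunction: && is True only when both arguments are.
bothSunnyAndWarm :: Bool
bothSunnyAndWarm = True && False     -- False

-- Disjunction: || needs at least one True argument.
eitherSunnyOrWarm :: Bool
eitherSunnyOrWarm = True || False    -- True

-- Negation flips a truth value.
notRaining :: Bool
notRaining = not False               -- True
```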

As you will have discovered by now, it’s easy to make mistakes in your function definitions. Sometimes you won’t be paying enough attention. Sometimes your fingers will type something different from what your brain intended. Nobody’s perfect! Fortunately, Haskell will detect some of your mistakes automatically. These include syntax and type errors as well as typos, like misspelled function names.

Now that you know how to use logic for making statements about things and how to check whether or not a given statement is true, the next step is to study patterns of reasoning that allow true statements to be combined to give other statements that are guaranteed to be true. We’ll start with ideas that go all the way back to Aristotle, the founder of logic, who looked at simple patterns of logical argument with two premises and a conclusion, called syllogisms. The study of sound syllogisms and the relationships between them dominated the subject for the next 2000 years. Nowadays, using the modern notation of symbolic logic, we can express what scholars were studying for all this time in a much simpler way, and see it as a straightforward combination of a couple of simple ideas. Since we’re not historians, the point of looking at syllogisms isn’t to learn about Aristotle and the history of logic. Rather, it’s to learn those simple ideas and how they fit together, as a starting point for studying the rest of symbolic logic, and to demonstrate the power of a good notation.

So far you’ve seen how to use rules to combine valid sequents into simple deductive arguments about statements of the form “every a is b”. Sometimes the antecedents and/or consequent of a sequent involved negation, to allow statements like “every a is not b”. You’re now going to learn how to build arguments that also take account of invalid sequents, which relies on a different kind of negation. This provides new ways of reasoning, as well as allowing arguments about the existence of things that satisfy predicates (“some a is b”). Let’s start by looking at a simple example of commonsense reasoning. We’ll start by making an underlying assumption that Fiona is old enough to buy alcohol.

Recall that the list notation [4,6,8] is just shorthand for the expression 4:(6:(8:[])). So, every list can be written using : (cons) and [] (the empty list). In fact, every list can be written using : and [] in just one way. That fact is what makes pattern matching work on lists. Given a list l and a pattern p built using variables, [], : and literals, there is at most one way to match p against l. That match gives values to the variables in p. It might not match, if p requires a list that’s longer than l, or if p contains a literal that doesn’t match the value in the corresponding position in l.
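A brief sketch of how matching against : and [] plays out (firstTwo is an illustrative name, not from the book):

```haskell
-- Matches lists with at least two elements; when the match
-- exists it is unique and binds x and y.
firstTwo :: [Int] -> Maybe (Int, Int)
firstTwo (x:y:_) = Just (x, y)
firstTwo _       = Nothing     -- [] or a one-element list

-- [4,6,8] is 4:(6:(8:[])), so the first equation applies:
example :: Maybe (Int, Int)
example = firstTwo [4, 6, 8]   -- Just (4, 6)
```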

We’ll now look at more examples of recursively defined functions which demonstrate some points that didn’t arise in earlier examples. To start, recall the notation [1..10]. Underlying this notation is the following Prelude function, where [m..n] stands for enumFromTo m n. Here the recursion is on integers rather than lists. We’ve learned how important it is that recursion is well-founded: it’s okay to define the result of applying a function to a value in terms of its application to a smaller value. But here, we’re defining enumFromTo m n in terms of enumFromTo (m+1) n! Of course, m+1 is larger than m; how can that be right? And the first equation must be the base case, since there is no recursion, but it looks different from all of the previous examples, so what’s going on there?
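The Prelude function in question is essentially the following, renamed here to avoid clashing with the Prelude’s own enumFromTo (the book’s actual presentation may differ in detail):

```haskell
-- [m..n] stands for enumFromTo m n. The recursion is well-founded
-- because n - m shrinks at every step, even though m itself grows.
enumFromTo' :: Int -> Int -> [Int]
enumFromTo' m n
  | m > n     = []                       -- base case: empty range
  | otherwise = m : enumFromTo' (m+1) n
```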

The same patterns of computation keep coming up in function definitions. An example from the beginning of Chap. 11 was in the definitions of enumFromTo and prodFromTo:
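A sketch of prodFromTo in that shared shape (it may differ in detail from the book’s definition): only the base-case value and the combining operation distinguish it from enumFromTo, which is exactly the kind of common pattern a higher-order function can capture.

```haskell
-- Product of the integers from m to n, with the same recursive
-- shape as enumFromTo: a base case when m > n, then combine m
-- with a recursive call on m+1.
prodFromTo :: Int -> Int -> Int
prodFromTo m n
  | m > n     = 1                       -- empty product
  | otherwise = m * prodFromTo (m+1) n
```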

Sections are convenient for supplying functional arguments to higher-order functions. For instance, we can often replace a lambda expression by an equivalent section.
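A small sketch of such a replacement (doubled is an illustrative name):

```haskell
-- (2*) is the section form of the lambda \x -> 2 * x.
doubled, doubled' :: [Int]
doubled  = map (2*) [1, 2, 3]            -- [2, 4, 6]
doubled' = map (\x -> 2 * x) [1, 2, 3]   -- the same list
```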

Chapters 8 and 9 covered 2000 years of logic, up to the mid-nineteenth century, but using modern notation which makes things much simpler. We’re now going to study modern symbolic logic.

So far, we’ve done a lot using the types that come “out of the box” with Haskell. The type of lists has been particularly useful, and higher-order functions have revealed the power of the function type ->. Both of these actually provide an infinite number of types: there is a type [t] for every type t and a type s -> t for every s and t.
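A quick illustration of that nesting (names illustrative):

```haskell
-- [t] at t = [Char] gives lists of strings...
headlines :: [[Char]]
headlines = ["to", "be"]

-- ...and -> can be nested too: a function taking a function.
twice :: (Int -> Int) -> Int -> Int
twice f x = f (f x)
```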

Tree-like structures are ubiquitous in Informatics. They are used to provide conceptual models of situations and processes involving hierarchies, for representing the syntax of languages involving nesting, and for representing data in a way that is amenable to processing by recursive algorithms. We have already seen some pictures of trees in Chap. 12, in the explanation of foldr and foldl, and the sequent calculus proofs in Chap. 14 have a tree-like structure.
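A minimal sketch of such a structure as a Haskell data type (not the book’s exact definition):

```haskell
-- A binary tree with integer labels at the internal nodes.
data Tree = Leaf | Node Tree Int Tree

-- Recursive algorithms follow the shape of the type:
-- one equation per constructor.
size :: Tree -> Int
size Leaf         = 0
size (Node l _ r) = 1 + size l + size r
```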

Complex logical expressions like $$(a \wedge \lnot b \wedge (c \vee (d \wedge b)) \vee (\lnot b \wedge \lnot a)) \wedge c$$ are hard to understand and hard to work with. The much simpler expression $$\lnot b \wedge c$$ , to which it is equivalent, is obviously an improvement. When logical expressions are used to design hardware circuits, simpler expressions produce circuits that are cheaper because they have fewer components.
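One way to convince yourself of the claimed equivalence is a brute-force truth-table comparison; this checks the particular claim, and is not the simplification method itself (names illustrative):

```haskell
-- The two expressions, transcribed with && for ∧, || for ∨, not for ¬.
big, small :: Bool -> Bool -> Bool -> Bool -> Bool
big   a b c d = ((a && not b && (c || (d && b))) || (not b && not a)) && c
small a b c d = not b && c

-- True iff the two expressions agree on all 16 assignments.
equivalent :: Bool
equivalent = and [ big a b c d == small a b c d
                 | a <- bools, b <- bools, c <- bools, d <- bools ]
  where bools = [False, True]
```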

Our simple universe of discourse in Chap. 6 contained a number of things.

You’ve seen how to use sequent calculus to check whether a sequent is universally true or has a counterexample. We’re now going to look into the problem of checking satisfiability: whether a logical expression is true for at least one combination of values for the variables, or predicates, that the expression contains.
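For a propositional expression, satisfiability can in principle be checked by trying every assignment; a sketch for two variables (real satisfiability checkers are far cleverer):

```haskell
-- True iff f yields True for at least one assignment to its
-- two Boolean arguments.
satisfiable2 :: (Bool -> Bool -> Bool) -> Bool
satisfiable2 f = or [ f a b | a <- [False, True], b <- [False, True] ]
```

For instance, satisfiable2 (\a b -> a && not b) is True, while a contradiction like \a b -> a && not a is unsatisfiable.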

Whenever you write a program that requires a kind of data that isn’t already built into Haskell, you need to decide how to represent it in terms of the existing types. Sometimes you’ll be able to find something appropriate in one of Haskell’s library modules. Other times, you will decide to use some combination of existing types, or you’ll define a new algebraic data type.
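A sketch of the last option, defining a new algebraic data type (the type and constructor names are illustrative):

```haskell
-- A kind of data with no built-in Haskell representation.
data Shape = Circle Double | Rect Double Double
  deriving (Eq, Show)

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h
```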

So far you’ve been writing small programs. Imagine for a moment that—perhaps 10 years from now—you’re a member of a team building the software for an autonomous robotaxi. At that point, you’ll be working on much larger programs in collaboration with other team members.

You’ve learned several methods for converting logical expressions to conjunctive normal form (CNF), starting with Karnaugh maps in Chap. 17. We’ll look more closely at one of those methods, using the laws of Boolean algebra, later in this chapter.

We’re now going to look into a special case of CNF expressions: those with clauses containing no more than two literals each. Surprisingly, there’s a dramatic efficiency difference between satisfiability checking in this case and when the limit is increased, even to just three literals per clause.

You’ve seen that there are some common functions, like == and show, that are available for many but not all types. For instance, neither == nor show will work on function types. The type of a polymorphic function can require that it be applied only on types that have such functions, and these requirements can be inferred automatically.
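Such a requirement appears as a class constraint in the function’s type; a small sketch (allEqual is an illustrative name):

```haskell
-- The constraint Eq a says this function may only be used at
-- types that support ==; GHC infers it from the use of ==.
allEqual :: Eq a => [a] -> Bool
allEqual []     = True
allEqual (x:xs) = all (== x) xs
```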

Trees are a common data structure, as you’ve seen. They were used to represent the syntax of languages like propositional logic in Chap. 16, and as the basis for an efficient way of representing sets of integers in Chap. 20.

Some of the programs you have seen use clever algorithms or clever data representations to avoid unnecessary computation. But for some problems, there is no clever algorithm, or at least none that has been discovered yet.

As you saw in Chap. 13, everything in Haskell is ultimately based on lambda expressions, meaning that computation in Haskell is based on function application and substitution. That is Haskell’s model of computation: the basic mechanisms that underlie its way of producing results of computational problems.
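A tiny example of that model (illustrative names):

```haskell
-- Applying a lambda substitutes the argument for the bound variable:
-- (\x -> x + x) 3  reduces to  3 + 3,  then to  6.
double :: Int -> Int
double = \x -> x + x

result :: Int
result = double 3   -- 6
```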

The examples of deterministic finite automata in the last chapter were given in the form of simple diagrams, with states drawn as circles and transitions drawn as arrows between the circles. With a diagram, it’s pretty easy to try out examples of input strings to see what happens—provided the diagram isn’t so complicated that it resembles a bowl of spaghetti. We’re now going to look at a mathematical definition of DFAs that is equivalent to giving a diagram, but phrased in terms of symbols including Greek letters. But why bother?
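As a preview of why symbols pay off, here is one way such a definition can be transcribed into Haskell (a sketch; the record fields and function names are illustrative, not the book’s):

```haskell
-- A DFA as data rather than a diagram: the state set and alphabet
-- live in the types q and Char; delta is the transition function,
-- start the initial state, accept the accepting-state predicate.
data DFA q = DFA
  { delta  :: q -> Char -> q
  , start  :: q
  , accept :: q -> Bool
  }

-- Run the machine over the input, then check acceptance.
accepts :: DFA q -> String -> Bool
accepts m = accept m . foldl (delta m) (start m)

-- Example: accept strings containing an even number of 'a's.
evenAs :: DFA Bool
evenAs = DFA { delta  = \q c -> if c == 'a' then not q else q
             , start  = True
             , accept = id
             }
```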

Should you take the lemon cake or the crème brûlée? Or, consider a more important decision: you’re lost in the mountains in a blizzard without a GPS, and you’re wondering whether to take the path to the left or to the right.

All of the Haskell programs you’ve seen so far have been pure mathematical functions that consume input values and produce output values. Your interaction with these functions has been by applying them to inputs during an interactive Haskell session, and looking at the outputs that appear on your screen once computation is finished.

You’ve seen how to describe sets of strings—or languages—using different kinds of finite automata. An automaton describes a language by accepting the strings in the language and rejecting the rest. It decides whether or not to accept a string via a mechanical computation process based on the sequence of symbols in the string.

Having learned a lot about regular languages, different ways of describing them, and a little about their applications, you will have rightly come to understand that they are important. As you’ve seen, the union of two regular languages is a regular language, and the same goes for intersection, complement, concatenation, and iteration. So, given just a few simple regular languages—for example, $$\{a\}$$ and $$\{b\}$$ —you can build a large number of new regular languages.