
About this Book

This volume has a dual significance for the ESPRIT Basic Research effort towards forging strong links between European academic and industrial teams carrying out research, often interdisciplinary, at the forefront of Information Technology.

Firstly, it consists of the proceedings of the "Symposium on Computational Logic", held on the occasion of the 7th ESPRIT Conference Week in November 1990, whose organisation was inspired by the work of Basic Research Action 3012 (COMPULOG). This consortium has attracted world-wide interest, with requests for collaboration from throughout Europe, the US and Japan. The work of COMPULOG acts as a focal point of the symposium, which is broadened to cover the work of other eminent researchers in the field, thus providing not only a review of the state of the art in computational logic and new and important contributions to the field, but also a vision of the future.

Secondly, this volume is the first in an ESPRIT Basic Research Series of publications of research results. It is expected that the quality of content and broad distribution of this series will have a major impact in making the advances achieved accessible to the world of academic and industrial research alike. At this time, all ESPRIT Basic Research Actions have completed their first year, and it is most encouraging and stimulating to see the flow of results, such as the fine examples presented in this symposium.



Problems and Promises of Computational Logic

The ultimate goal of the Basic Research Action, Compulog, is to develop the use of logic for all aspects of computation. This includes not only the development of a single logic for representing programs, program specifications, databases, and knowledge representations in artificial intelligence, but also the development of logic-based management tools. I shall argue that, for these purposes, two major extensions of logic programming are needed — abduction and metalevel reasoning. I shall also argue that extensions of full first-order logic may not be necessary.
Robert A. Kowalski

An Introduction to Prolog III

The Prolog III programming language extends Prolog by redefining the fundamental process at its heart: unification. Into this mechanism, Prolog III integrates refined processing of trees and lists, number processing, and processing of two-valued Boolean algebra. We present the specification of this new language and illustrate its capabilities by means of varied examples. We also present the theoretical foundations of Prolog III, which in fact apply to a whole family of programming languages. The central innovation is to replace the concept of unification by the concept of constraint solving.
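The shift from unification to constraint solving can be sketched outside Prolog III itself. The following Python toy (the representation and variable names are my own, not Colmerauer's) keeps a store of linear equations over the rationals in solved form, which is the flavour of Prolog III's number processing: each new constraint is simplified against the store instead of being unified syntactically.

```python
from fractions import Fraction

def apply_subst(expr, subst):
    """Apply variable bindings to a linear expression {var: coeff};
    the key 1 holds the constant term."""
    out = {}
    for v, c in expr.items():
        if v in subst:
            for w, d in subst[v].items():
                out[w] = out.get(w, Fraction(0)) + c * d
        else:
            out[v] = out.get(v, Fraction(0)) + c
    return {v: c for v, c in out.items() if c != 0}

def solve(equations):
    """Accumulate constraints 'expr = 0' in solved form, mimicking how
    a constraint store replaces a substitution."""
    bindings = {}
    for eq in equations:
        eq = apply_subst({v: Fraction(c) for v, c in eq.items()}, bindings)
        var = next((v for v in eq if v != 1), None)
        if var is None:
            if eq:  # a non-zero constant is left over: unsatisfiable
                raise ValueError("inconsistent constraints")
            continue
        coeff = eq.pop(var)
        expr = {v: -c / coeff for v, c in eq.items()}
        for w in list(bindings):
            bindings[w] = apply_subst(bindings[w], {var: expr})
        bindings[var] = expr
    return bindings
```

For instance, the two constraints x + y = 10 and x − y = 2, written as expressions equal to zero, determine x = 6 and y = 4 without any syntactic unification taking place.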
Alain Colmerauer

On Open Defaults

In Reiter’s default logic, the parameters of a default are treated as metavariables for ground terms. We propose an alternative definition of an extension for a default theory, which handles parameters as genuine object variables. The new form of default logic may be preferable when the domain closure assumption is not postulated. It stands in a particularly simple relation to circumscription. Like circumscription, it can be viewed as a syntactic transformation of formulas of higher order logic.
Vladimir Lifschitz

On Asking What a Database Knows

The by now standard perspective on databases, especially deductive databases, is that they can be specified by sets of first order sentences. As such, they can be said to be claims about the truths of some external world; the database is a representation of that world.
Virtually all approaches to database query evaluation treat queries as first order formulas, usually with free variables whose bindings, computed during evaluation, define the answers to the query. Following Levesque [8, 9], we argue that, for greater expressiveness, queries should be formulas in an epistemic modal logic. Queries, in other words, should be permitted to address aspects of the external world as represented by the database, as well as aspects of the database itself, i.e. aspects of what the database knows about that external world. We shall also argue that integrity constraints are best viewed as sentences about what the database knows, not, as is usually the case, as first order sentences about the external world. On this view, integrity constraints are modal sentences and hence are formally identical to a strict subset of the permissible database queries. Integrity maintenance then becomes formally identical to query evaluation for a certain class of database queries.
We formalize these notions in Levesque’s language KFOPCE, and define the concepts of an answer to a query and of a database satisfying its integrity constraints. We also show that Levesque’s axiomatization of KFOPCE provides a suitable logic for reasoning about queries and integrity constraints. Next, we show how to do query evaluation and integrity maintenance for a restricted, but sizable class of queries/constraints. An interesting feature of this class of queries/constraints is that Prolog’s negation as failure mechanism serves to reduce query evaluation to first order theorem proving. This provides independent confirmation that negation as failure is really an epistemic operator in disguise. Finally, we provide sufficient conditions for the completeness of this query evaluator.
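The reading of negation as failure as an epistemic test — "the database does not know G" — can be illustrated with a toy evaluator, here in Python rather than Prolog, over ground Horn rules; the predicate names are invented for the example.

```python
def least_model(facts, rules):
    """Bottom-up fixpoint over ground Horn rules (head, [body atoms])."""
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

def ask(model, query):
    """'not A' succeeds exactly when A is absent from the least model:
    failure to prove A is treated as knowing that A is false."""
    if query.startswith("not "):
        return query[len("not "):] not in model
    return query in model
```

Here `ask(model, "not flies(sam)")` succeeds not because the database asserts that sam cannot fly, but because it cannot prove that he can — a statement about what the database knows, not about the world.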
Raymond Reiter

Two Kinds of Program Specifications

Almost all computer programs interact in some way with the world outside the computer, but this interaction is rather trivial for most programs whose specification or verification has been studied. Thus a program for computing a mathematical function generally has trivial interaction, whereas interaction with the outside world is the essence of a program for controlling the landings of airplanes.
John McCarthy

Exploration with Mathematica

During the Fall Semester of 1989 I presented a junior/senior mathematics course on projective geometry at my university, with a syllabus organized along quite traditional lines. I used a Macintosh II computer and the Mathematica symbolic computation program, a product of Wolfram Research, Inc. The lectures were delivered primarily from the console, using a screen attached to the computer and projected by an overhead projector. The students were expected to do their homework on a computer, and the final examination was a project on a topic selected individually by each student, again, to be done on the computer.
Dana S. Scott

Composition Operators for Logic Theories

Some basic meta-level operators for putting logic theories together are introduced, which relate to set-theoretic union, intersection and difference. Both a transformational and an interpretive characterization of the operators are provided and proved equivalent. The former definition says how to syntactically construct a new theory out of two given theories, the latter provides a meta-level interpretation of the same operators. A declarative — both model-theoretic and fixpoint — semantics of the operators is also provided, allowing one to assign the minimal model of the resulting theory as a function of the models of the argument theories. Some examples from default reasoning, knowledge assimilation, inheritance networks and hypothetical reasoning are presented to demonstrate the expressive power of the operators.
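The paper's operators have richer transformational and model-theoretic definitions than plain set operations; the toy below covers only the simplest case — union of propositional theories, with the representation mine, not the paper's — but it already shows why a compositional semantics is non-trivial.

```python
def minimal_model(clauses):
    """Minimal model of propositional definite clauses (head, body_set)."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if head not in model and body <= model:
                model.add(head)
                changed = True
    return model

# Two tiny theories as sets of clauses; union of theories is
# (in this simplest case) union of their clause sets.
t1 = {("p", frozenset()), ("q", frozenset({"p"}))}
t2 = {("r", frozenset({"q"}))}
combined = t1 | t2
```

Note that `minimal_model(t1 | t2)` derives r, yet r appears in neither `minimal_model(t1)` nor `minimal_model(t2)`: the model of the composed theory is not simply the union of the argument models, and characterising it as a function of those models is precisely what the paper's declarative semantics is designed to do.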
Antonio Brogi, Paolo Mancarella, Dino Pedreschi, Franco Turini

The Synthesis of Logic Programs from Inductive Proofs

We describe a technique for synthesising logic (Prolog) programs from non-executable specifications. This technique is adapted from one for synthesising functional programs as total functions. Logic programs, on the other hand, define predicates. They can be run in different input modes, they sometimes produce multiple outputs and sometimes none. They may not terminate. The key idea of the adaptation is that a predicate is a total function in the all-ground mode, i.e. when all its arguments are inputs (pred(+,...,+) in Prolog notation). The program is synthesised as a function in this mode and then run in other modes. To make the technique work it is necessary to synthesise pure logic programs, without the closed world assumption, and then compile these into Prolog programs. The technique has been tested on the OYSTER (functional) program development system.
Alan Bundy, Alan Smaill, Geraint Wiggins

Studies in Pure Prolog: Termination

We provide a theoretical basis for studying termination of logic programs with the Prolog selection rule. To this end we study the class of left terminating programs. These are logic programs that terminate with the Prolog selection rule for all ground goals. First we show that various ways of defining semantics coincide for left terminating programs. Then we offer a characterization of left terminating programs that provides us with a practical method of proving termination. The method is proven to be complete and is illustrated by giving simple proofs of termination of the quicksort, permutation and mergesort programs for the desired class of goals.
Krzysztof R. Apt, Dino Pedreschi

Concept Logics

Concept languages (as used in BACK, KL-ONE, KRYPTON, LOOM) are employed as knowledge representation formalisms in Artificial Intelligence. Their main purpose is to represent the generic concepts and the taxonomical hierarchies of the domain to be modeled. This paper addresses the combination of the fast taxonomical reasoning algorithms (e.g. subsumption, the classifier etc.) that come with these languages and reasoning in first order predicate logic. The interface between these two different modes of reasoning is accomplished by a new rule of inference, called constrained resolution. Correctness and completeness, as well as the decidability of the constraints (in a restricted constraint language), are shown.
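For the simplest fragment — concepts that are mere conjunctions of atomic concept names — subsumption reduces to set inclusion, which hints at why taxonomical reasoning can be fast. A minimal Python sketch follows; the fragment and the names are chosen purely for illustration, and real concept languages such as those of BACK or KL-ONE are far richer.

```python
def subsumes(c, d):
    """C subsumes D iff every conjunct of C is a conjunct of D."""
    return set(c) <= set(d)

def classify(hierarchy, concept):
    """Return the defined concepts that subsume the new one -- a crude
    stand-in for the classifier mentioned in the abstract."""
    return [name for name, defn in hierarchy.items()
            if subsumes(defn, concept)]
```

For example, a concept defined as {"person", "has_child"} is subsumed by one defined as {"person"}: every parent is a person, and the check is a single subset test.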
Franz Baader, Hans-Jürgen Bürckert, Bernhard Hollunder, Werner Nutt, Jörg H. Siekmann

Programming in 2010? A scientific and industrial challenge

The development of software technology in the industrial world is rather discouraging for researchers today. Languages and systems offered by software houses and computer vendors are still very poor compared to the state of the art in advanced research. Fortran and Cobol are still among the most widely used programming languages, and relational databases are just starting to penetrate the market. The breathtaking speed of transfer of new hardware technology from research labs into everyday life is not at all matched by a comparable development in the software area. It is very hard to explain this enormous discrepancy in acceptance between the hardware and the software domain. Is it a lack of flexibility on the user side, or a lack of insight into the importance of software by the manufacturers, or is it just a matter of complexity? Is a new piece of software really that much more complex than a new hardware component?
Gérard Comyn

Perspective on Computational Logic

The art of programming is going to be altered in radical ways in the next 20 years. Programming, as we know it today, will become the activity of more users and of fewer “programmers” by 2010. Computational Logic will have to fit in a wider context. For applications programming, and to some extent system programming, the context is likely to be the following: applications will mainly become cooperative and distributed, and distribution will be transparent to the programmer; parallelism will be hidden; programming will rely on libraries of reusable objects, both for code and data structures, with well defined application programming interfaces, supported by powerful CASE systems; applications will be written by combining existing modules, extending and specializing them; powerful event-based languages, the so-called scripts, workflow, or rule-based languages, will provide the “programming power”; the libraries and the event languages will depend on application domains; this will generalize the spirit of 4th Generation languages; examples of such languages will be Office Procedure Languages, Resource Management Languages, Business Analysis Languages, etc. This evolution is underway, through the convergence of research in programming languages, databases, artificial intelligence, object-based technology, through the emergence of distributed systems, software engineering, and the influence of all the bodies working on standards for open systems.
Hervé Gallaire

Programming in the year 2010

In trying to make predictions about a field so young as computing, it is useful to draw comparisons with a related and much older field such as law.
Robert Kowalski

It’s Past Time for Practical Computer Checked Proofs of Program Correctness

In 1961 I published my first paper on computer-checked proofs of program correctness. In it I proposed that soon buyers would not pay for programs until furnished with computer-checked proofs that they met their specifications. I was careful to point out that the buyer would have to be careful that the specification reflected his real requirements. Alas, almost 30 years later, computer-checked proofs are still not in practical use.
John McCarthy

Computational Logic needs Symbolic Mathematics

In a list entitled “Epigrams on Programming”, published a few years ago, the late and well-known computer pioneer Alan J. Perlis said (Epigram 65):
Make no mistake about it: Computers process numbers — not symbols. We measure our understanding (and control) by the extent to which we can arithmetize an activity.
Dana S. Scott