
2009 | Book

Neural-Symbolic Cognitive Reasoning

Authors: Dr. Artur S. d’Avila Garcez, Dr. Luís C. Lamb, Prof. Dov M. Gabbay

Publisher: Springer Berlin Heidelberg

Book series: Cognitive Technologies


About this book

Humans are often extraordinarily good at practical reasoning. There are cases where the human computer, slow as it is, is faster than any artificial intelligence system. Are we faster because of the way we perceive knowledge, as opposed to the way we represent it?

The authors address this question by presenting neural network models that integrate the two most fundamental phenomena of cognition: our ability to learn from experience, and our ability to reason from what has been learned. This book is the first to offer a self-contained presentation of neural network models for a number of computer science logics, including modal, temporal, and epistemic logics. By using a graphical presentation, it explains neural networks through a sound neural-symbolic integration methodology, and it focuses on the benefits of integrating effective robust learning with expressive reasoning capabilities.

The book will be invaluable reading for academic researchers, graduate students, and senior undergraduates in computer science, artificial intelligence, machine learning, cognitive science and engineering. It will also be of interest to computational logicians, and professional specialists on applications of cognitive, hybrid and artificial intelligence systems.

Table of Contents

Frontmatter
Chapter 1. Introduction
Abstract
The construction of robust computational models integrating reasoning and learning is a key research challenge for artificial intelligence. Recently, this challenge was also put forward as a fundamental problem in computer science [255]. Such a challenge intersects with another long-standing entry in the research agenda of artificial intelligence: the integration of its symbolic and connectionist paradigms. Such integration has long been a standing enterprise, with implications for and applications in cognitive science and neuroscience [51, 66, 130, 178, 179, 238, 240, 247, 248, 250]. Further, the importance of efforts to bridge the gap between the connectionist and symbolic paradigms of artificial intelligence has also been widely recognised (see e.g. [51, 66, 229, 242, 243]).
Chapter 2. Logic and Knowledge Representation
Abstract
This chapter introduces the basics of knowledge representation and of logic used throughout this book. The material on logic presented here follows the presentation given in [100]. Several books contain introductions to (logic-based) knowledge representation and reasoning, including [42, 87, 100, 106].
Chapter 3. Artificial Neural Networks
Abstract
This chapter introduces the basics of neural networks used in this book. Artificial neural networks have a long history in computer science and artificial intelligence. As early as the 1940s, papers were written on the subject [177]. Neural networks have been used in several tasks, including pattern recognition, robot control, DNA sequence analysis, and time series analysis and prediction [125]. Differently from (symbolic) machine learning, (numeric) neural networks perform inductive learning in such a way that the statistical characteristics of the data are encoded in their sets of weights, a feature that has been exploited in a number of applications [66]. A good introductory text on neural networks can be found in [127]. A thorough approach to the subject can be found in [125]. The book by Bose and Liang [35] provides a good balance between the previous two.
Chapter 4. Neural-Symbolic Learning Systems
Abstract
This chapter introduces the basics of neural-symbolic systems used throughout the book. A brief bibliographical review is also presented. Neural-symbolic systems have become a very active area of research in the last decade. The integration of neural networks and symbolic knowledge was already receiving considerable attention in the 1990s. For instance, in [250], Towell and Shavlik presented the influential model KBANN (Knowledge-Based Artificial Neural Network), a system for rule insertion, refinement, and extraction from neural networks. They also showed empirically that knowledge-based neural networks, trained using the backpropagation learning algorithm (see Sect. 3.2), provided a very efficient way of learning from examples and background knowledge. They did so by comparing the performance of KBANN with other hybrid, neural, and purely symbolic inductive learning systems (see [159, 189] for a comprehensive description of a number of symbolic inductive learning systems, including inductive logic programming systems).
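The rule-insertion idea behind such systems can be illustrated with a toy example (ours, not the KBANN algorithm itself): a propositional rule such as A ∧ B → C becomes a threshold neuron whose weights and bias are chosen so that the neuron fires exactly when every antecedent is active.

```python
# Illustrative sketch (not KBANN itself): encoding the rule
# "A AND B -> C" as a single step-activation neuron. The weights and
# bias guarantee the neuron fires only when all antecedents hold.

def rule_neuron(inputs, weights, bias):
    """Step-activation neuron: outputs 1 iff the weighted sum exceeds the bias."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > bias else 0

# Encode "A AND B -> C": weight 1 per antecedent, bias just below 2.
W = [1.0, 1.0]
bias = 1.5

assert rule_neuron([1, 1], W, bias) == 1  # A and B hold: C is derived
assert rule_neuron([1, 0], W, bias) == 0  # B missing: C is not derived
```

Refinement then amounts to training such a network with backpropagation, and extraction to reading the learned weights back as rules.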
Chapter 5. Connectionist Modal Logic
Abstract
This chapter introduces connectionist modal logic (CML). CML can be seen as a general model for the development of connectionist nonclassical reasoning based on modal logics [79]. CML uses ensembles of neural networks to represent possible worlds, which underpin the semantics of several modal logics. In what follows, we introduce the theoretical foundations of CML and show how to build an integrated model for computing and learning modal logics.
Chapter 6. Connectionist Temporal Reasoning
Abstract
In this chapter, following the formalisation of connectionist modal logics (CML) presented in Chap. 5, we show that temporal and epistemic logics can be effectively represented in and combined with artificial neural networks, by means of temporal and epistemic logic programming. This is done by providing a translation algorithm from temporal-logic theories to the initial architecture of a neural network. A theorem then shows that the given temporal theory and the network are equivalent in the usual sense that the network computes a fixed-point semantics of the theory. We then describe a validation of the Connectionist Temporal Logic of Knowledge (CTLK) system by applying it to a distributed time and knowledge representation problem, the full version of the muddy children puzzle [87]. We also describe experiments on learning in the muddy children puzzle, showing how knowledge evolution can be analysed and understood in a learning model.
Chapter 7. Connectionist Intuitionistic Reasoning
Abstract
In this chapter, we present a computational model combining intuitionistic reasoning and neural networks. We make use of ensembles of neural networks to represent intuitionistic theories, and show that for each intuitionistic theory and intuitionistic modal theory, there exists a corresponding neural-network ensemble that computes a fixed-point semantics of the theory. This provides a massively parallel model for intuitionistic reasoning. As usual, the neural networks can be trained from examples to adapt to new situations using standard neural learning algorithms, thus providing a unifying foundation for intuitionistic reasoning, knowledge representation, and learning.
Chapter 8. Applications of Connectionist Nonclassical Reasoning
Abstract
This chapter presents some benchmark distributed-knowledge-representation applications of connectionist modal and intuitionistic reasoning. It shows how CML can be used for distributed knowledge representation and reasoning, illustrating the capabilities of the proposed connectionist model. It also compares the CML representation of a distributed knowledge representation problem with the representation of the same problem in connectionist intuitionistic logic (CIL), the type of reasoning presented in Chap. 7. We begin with a simple card game, as described in [87].
Chapter 9. Fibring Neural Networks
Abstract
As we have seen in Chaps. 4 to 7, neural networks can deal with a number of reasoning mechanisms. In many applications these need to be combined (fibred) into a system capable of dealing with the different dimensions of a reasoning agent. In this chapter, we introduce a methodology for combining neural-network architectures based on the idea of fibring logical systems [101]. Fibring allows one to combine different logical systems in a principled way. Fibred neural networks may be composed not only of interconnected neurons but also of other networks, forming a recursive architecture. A fibring function then defines how this recursive architecture behaves by defining how the networks in the ensemble relate to each other, typically by allowing the activation of neurons in one network (A) to influence the change of weights in another network (B). Intuitively, this can be seen as training network B at the same time as network A is running. Although both networks are simple, standard networks, we can show that, in addition to being universal approximators like standard feedforward networks, fibred neural networks can approximate any polynomial function to any desired degree of accuracy, thus being more expressive than standard feedforward networks.
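The fibring function described above can be sketched in a few lines (a hypothetical illustration; the names and the particular update rule are ours, not the book's construction): network A runs on an input, and the fibring function uses A's activation to adjust network B's weights before B computes its own output.

```python
import numpy as np

# Hypothetical sketch of fibring: network A's activation modulates
# network B's weights, so B is effectively "trained" while A runs.

def forward(weights, x):
    """A minimal one-layer network with tanh activation."""
    return np.tanh(weights @ x)

def fibring_function(weights_b, activation_a, eta=0.1):
    """Shift B's weights in proportion to A's mean activation (illustrative rule)."""
    return weights_b + eta * activation_a.mean()

rng = np.random.default_rng(0)
W_a = rng.normal(size=(2, 3))   # network A: 3 inputs, 2 outputs
W_b = rng.normal(size=(2, 3))   # network B: same shape
x = np.ones(3)

a_out = forward(W_a, x)              # network A runs...
W_b2 = fibring_function(W_b, a_out)  # ...and its activation updates B
b_out = forward(W_b2, x)             # B's output now depends on A
```

The recursive aspect comes from allowing the embedded "network" to itself be a fibred ensemble rather than a single layer.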
Chapter 10. Relational Learning in Neural Networks
Abstract
Neural networks have been very successful as robust, massively parallel learning systems [125]. On the other hand, they have been severely criticised as being essentially propositional. In [176], John McCarthy argued that neural networks use unary predicates only, and that the concepts they compute are ground instances of these predicates. Thus, he claimed, neural networks could not produce concept descriptions, only discriminations.
Chapter 11. Argumentation Frameworks as Neural Networks
Abstract
Formal models of argumentation have been studied in several areas, notably in logic, philosophy, decision making, artificial intelligence, and law [25, 31, 39, 48, 83, 111, 153, 210, 212, 267]. In artificial intelligence, models of argumentation have been one of the approaches used in the representation of commonsense, nonmonotonic reasoning. They have been particularly successful in modelling chains of defeasible arguments so as to reach a conclusion [194, 209]. Although symbolic logic-based models have been the standard for the representation of argumentative reasoning [31, 108], such models are intrinsically related to artificial neural networks, as we shall show in this chapter.
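The connection can be previewed with a toy sketch (ours, not the construction developed in this chapter): each argument becomes a neuron, each attack an inhibitory connection, and repeated network updates settle into a stable assignment of accepted and defeated arguments.

```python
# Illustrative toy (not the book's translation): a Dung-style
# argumentation framework as a network in which each argument is a
# neuron and each attack is an inhibitory (negative) connection.

def evaluate(arguments, attacks, steps=10):
    """Iterate until stable: an argument is active iff no active argument attacks it."""
    active = {a: True for a in arguments}
    for _ in range(steps):
        new = {a: not any(active[b] for b, target in attacks if target == a)
               for a in arguments}
        if new == active:  # fixed point reached
            break
        active = new
    return active

# A attacks B, B attacks C: A survives, B is defeated, C is reinstated.
result = evaluate(["A", "B", "C"], [("A", "B"), ("B", "C")])
assert result == {"A": True, "B": False, "C": True}
```

The stable state reached here corresponds to the grounded extension of the framework, which is the kind of correspondence the chapter develops formally.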
Chapter 12. Reasoning about Probabilities in Neural Networks
Abstract
In this chapter, we show that artificial neural networks can reason about probabilities, and can therefore integrate reasoning about uncertainty with modal, temporal, and epistemic logics. These logics have found a large number of applications, notably in game theory and in models of knowledge and interaction in multiagent systems [87, 103, 207]. Artificial intelligence and computer science have made extensive use of decidable modal logics, including in the analysis and model checking of distributed and multiagent systems, in program verification and specification, and in hardware model checking. Finally, the combination of knowledge, time, and probability in a connectionist system provides support for integrated knowledge representation and learning in a distributed environment, dealing with the various dimensions of reasoning of an idealised agent [94, 202].
Chapter 13. Conclusions
Abstract
This chapter reviews the neural-symbolic approach presented in this book and provides a summary of the overall neural-symbolic cognitive model. The book deals with how to represent, learn, and compute expressive forms of symbolic knowledge using neural networks. We believe this is the way forward towards the provision of an integrated system of expressive reasoning and robust learning. The provision of such a system, integrating the two most fundamental phenomena of intelligent cognitive behaviour, has been identified as a key challenge for computer science [255]. Our goal is to produce computational models with integrated reasoning and learning capability, in which neural networks provide the machinery necessary for cognitive computation and learning, while logic provides practical reasoning and explanation capabilities to the neural models, facilitating the necessary interaction with the outside world.
Backmatter
Metadata
Copyright year
2009
Electronic ISBN
978-3-540-73246-4
Print ISBN
978-3-540-73245-7
DOI
https://doi.org/10.1007/978-3-540-73246-4