
About this book

Cognitive Science is a discipline that brings together research on natural and artificial systems, and this is clearly reflected in the diverse contributions to From Animals to Robots and Back.

In tribute to Aaron Sloman and his pioneering work in Cognitive Science and Artificial Intelligence, the editors have assembled a unique collection of cross-disciplinary papers that include work on:

· intelligent robotics;

· philosophy of cognitive science;

· emotion research;

· computational vision;

· comparative psychology; and

· human-computer interaction.

Key themes, such as the importance of taking an architectural view in approaching cognition, run through the text. Drawing on the expertise of leading international researchers, the book addresses contemporary debates in the study of natural and artificial cognition from complementary and contrasting perspectives, with key issues outlined at various levels of abstraction.

From Animals to Robots and Back will give readers with backgrounds in the study of both natural and artificial cognition an important window on the state of the art in cognitive systems research.

Table of Contents


Chapter 1. Bringing Together Different Pieces to Better Understand Whole Minds

This collection of chapters looks at a diverse set of topics in the study of cognition. These topics do not, of course, despite their broad range, include all the pieces needed for a whole mind. However, what this collection does provide is an exploration of how some very different research topics can be brought together through shared themes and outlook.

Dean Petters

Chapter 2. Aaron Sloman: A Bright Tile in AI’s Mosaic

When AI was still a glimmer in Alan Turing’s eye, and when (soon afterwards) it was the new kid on the block at MIT and elsewhere, it wasn’t regarded primarily as a source of technical gizmos for public use or commercial exploitation (Boden 2006, p. 10.i-ii). On the contrary, it was aimed at illuminating the powers of the human mind.

Margaret A. Boden

Chapter 3. Losing Control Within the H-CogAff Architecture

The paper describes progress towards producing deep models of emotion. It does this by working towards the development of a single explanatory framework to account for diverse psychological phenomena where control is lost. Emotion theories are reviewed to show how emotions can be described according to various emotional components and emotional phases. When applied in isolation, these components or phases can result in shallow models of emotion. The CogAff schema and H-CogAff architecture are presented as frameworks that can be used to organise and integrate these various separate ways in which emotion can be described. The examples of losing control discussed in this paper include: short-term emotional interrupts to ongoing processing; experiencing grief after the loss of an attachment figure; longer-term emotional strengthening of motivation; using strong emotions to guarantee keeping one’s own commitments; Freudian repression as a defensive loss of access to painful information; and self-deception as a general strategy in deceiving others.

Dean Petters

Chapter 4. Acting on the World: Understanding How Agents Use Information to Guide Their Action

Most animals navigate a dynamic and shifting sea of information provided by their environment, their food or prey, and other animals. How do they work out which pieces of information are the most important or of most interest to them, and gather information on those parts to guide their action later? In this essay, I briefly outline what we already know about how animals use information flexibly and efficiently. I then discuss a few of the unsolved problems relating to how animals collect information by directing their attention or exploration selectively, before suggesting some approaches which might be useful in unravelling these problems.

Jackie Chappell

Chapter 5. A Proof and Some Representations

Hilbert defined proofs as derivations from axioms via the modus ponens rule and variable instantiation (a definition with a certain parallel to the ‘recognise-act cycle’ in artificial intelligence): a pre-defined set of rules is applied to an initial state until a goal state is reached. Although this definition is very powerful, and it can be argued that nothing else is needed, the nature of proof turns out to be much more diverse; for instance, changes of representation are frequently made. We will explore some aspects of this through McCarthy’s ‘mutilated checkerboard’ problem and discuss the tension between the complexity and the power of mechanisms for finding proofs.

Manfred Kerber

Chapter 6. What Does It Mean to Have an Architecture?

In this chapter, I propose an approach to architectures that makes precise exactly what it means for an agent to ‘have’ an architecture, and allows us to establish properties of an agent expressed in terms of its architectural description. Using this approach, it is possible to verify whether two different agent programs really have the same architecture or the same properties, allowing a more precise comparison of architectures and the agent programs which implement them. I illustrate the approach with a number of examples which show both how to establish qualitative and quantitative properties of architectures and agent programs, and how to establish whether a particular agent has a particular architecture. I focus on architectures of software agents; however, a similar approach can be applied to robots and other kinds of intelligent systems.

Brian Logan

Chapter 7. Virtual Machines: Nonreductionist Bridges Between the Functional and the Physical

Various notions of supervenience have been proposed as a solution to the “mind–body problem” to account for the dependence of mental states on their realizing physical states. In this chapter, we view the mind–body problem as an instance of the more general problem of how a virtual machine (VM) can be implemented in other virtual or physical machines. We propose a formal framework for defining virtual machine architectures and how they are composed of interacting functional units. The aim is to define a rich notion of implementation that can ultimately show how virtual machines defined in different ontologies can be related by way of implementing one virtual machine in another virtual (or physical) machine, without requiring that the ontology in which the implemented VM is defined be reducible to the ontology of the implementing VM.
Matthias Scheutz

Chapter 8. Building for the Future: Architectures for the Next Generation of Intelligent Robots

In this article I explore two ideas. The first is that the idea of architectures for intelligent systems is ripe for exploitation given the current state of component technologies and available software. The second idea is that in order to encourage progress in architecture research, we must concentrate on research methodologies that prevent us from continually reinventing and reimplementing existing work. The two ideas I propose for this are building software toolkits that provide useful architectures for the way researchers currently develop systems, and focusing on architectural design patterns, rather than whole architectures.

Nick Hawes

Chapter 9. What Vision Can, Can’t and Should Do

Computer vision has come a long way since its beginnings. In this chapter, we review some of the recent successes, which seem to indicate that many aspects of vision have indeed been solved and that the way should now be paved for robotic systems that can operate freely in the real world. On closer inspection, though, that is not the case just yet. A set of specialised solutions in different sub-areas, however impressive individually, does not constitute a unified theory of vision. We point out some of the problems of current approaches, most notably the lack of abstraction and difficulties in dealing with uncertainty. Finally, we suggest what research should and should not focus on in order to advance on a broader basis.

Michael Zillich

Chapter 10. The Rocky Road from Hume to Kant: Correlations and Theories in Robots and Animals

This essay will address the problem of prediction. Prediction is at its root concerned with the idea of causation. The notion of how causal relationships can be represented in minds has been an important thread in Sloman’s work, and ongoing conversations with him have influenced my thinking. This paper will examine several aspects of prediction and causation. First, it examines reasons why animals and machines benefit from being able to predict, and the consequent requirements on prediction mechanisms. Next, it examines some actual machines that we have synthesised for predicting the effects a robot manipulator has on an object it pushes. These mechanisms contain varying amounts of prior knowledge. This leads to the issue of whether and how predicting machines benefit from prior knowledge, and whether a prediction mechanism is equivalent in any sense to the notion of a theory. The paper will reach a point where I claim that, given the constraints faced by animals and robots, it is often better to construct many micro-theories rather than one macro-theory. The idea of theories will also lead to an examination of the notion of levels of description in theory building. This will lead in turn to considering whether hierarchies of increasingly abstract prediction machines can lead to better robots, and to a better understanding of animals.

Jeremy L. Wyatt

Chapter 11. Combining Planning and Action, Lessons from Robots and the Natural World

Acting continuously and robustly in a complex environment is something that animals and people do every day, but it is something that has proved very difficult to engineer into robotic systems. This paper looks at developments in architectures for combining planning and acting over the past 20 years and discusses the strengths and weaknesses of this approach for industrial applications. Several examples are given of ways in which theories from the natural world have influenced the development of robotic applications. In particular, in line with the occasion for this symposium, the paper describes how the opinions of Aaron Sloman have influenced the author and his work. The paper discusses what steps still need to be taken to realise systems capable of interacting reliably with the natural world while still carrying out useful tasks. These future steps also have the potential to expand our understanding of the mechanisms used by biological systems.

Jeremy Baxter

Chapter 12. Developing Expertise with Objective Knowledge: Motive Generators and Productive Practice

Experts seek to derive manifold benefits from objective knowledge. Viewed as progressive problem solvers (Bereiter and Scardamalia), they are not immune to psychological and practical challenges to learning in depth, particularly given demands for breadth and a lack of cognitive productivity tools. What mental changes occur when one understands deeply and develops new skills, new attitudes and implicit knowledge? With a few scenarios, I propose that deep understanding of conceptual artifacts, in the sense of Bereiter, establishes and configures diverse motive generators that enable the valenced detection of gaps of understanding, cognitive infelicities and opportunities (cognitive itches). This proposal, derived from a designer-based approach to motivation (Sloman; Beaudoin and Sloman), is significantly different from how motivation is typically treated in psychology. It raises many questions about how motivational mechanisms develop and operate in the propensities of expertise. I suggest that experts facing great cognitive productivity demands can benefit from productive practice.

Luc P. Beaudoin

Chapter 13. From Cognitive Science to Data Mining: The First Intelligence Amplifier

This chapter gives an account of the nine Laws of Data Mining, and proposes two hypotheses about data mining and cognition. The nine Laws describe key properties of the data mining process, and their explanations explore the reasons behind these properties. The first hypothesis is that data mining is a kind of intelligence amplifier, because the data mining process enables the data miner to see things which they could not see unaided, as stated in the sixth law of data mining. The second hypothesis is that machine learning algorithms have a special value to data mining because they represent knowledge in a way which is cognitively plausible, and this makes them more suitable for intelligence amplification.

Tom Khabaza

Chapter 14. Modelling User Linguistic Communicative Competences for Individual and Collaborative Learning

In this article, an innovative framework for use in Intelligent Computer-Assisted Language Learning (henceforth, ICALL) systems (as developed by the ATLAS research group) is presented in terms of the different models that compose it. It is argued that such a general framework allows the design and development of ICALL systems in a technologically, pedagogically and linguistically robust fashion, thereby avoiding the use of ad hoc knowledge models, which prove difficult to move from one system to another. Such a framework has been designed to overcome three problems present in most second language learning systems: the oversimplification and reduction of the vastness and complexity of the learning domain to a few formal linguistic aspects (studied in closed and decontextualised activities), the lack of underlying pedagogic principles, and the complexity of automatic language parsing and speech recognition. The framework attempts to capture and model the relevant pedagogic, linguistic and technological elements for the effective development of second language (henceforth, L2) competence. One of the goals was that any ICALL system developed around this framework would structure the complex network of communicative language competences (linguistic, pragmatic and sociolinguistic) and processes (reception, production and interaction) within the L2 learning process in a causal, quantitative way, adapting this process to the progress made by a given student.

Timothy Read, Elena Bárcena

Chapter 15. Loop-Closing Semantics

How is semantic content possible? How can parts of the world refer to other parts? On what grounds (if any) can we claim that simple mechanisms, such as thermometers, thermostats, clocks and rulers, refer to features of the world in virtue of their causal powers rather than our intentional practices with respect to them? I introduce Sloman’s Tarskian-inspired “loop-closing theory” in order to answer these questions. Loop-closing theory reduces a subset of semantic properties to the causal properties of control systems. I develop Sloman’s account by specifying a metalanguage to describe the causal structure of loop-closing models, and then identify and define a control system’s manipulable feature, which is the subset of the world necessarily present for control success. Loop-closing theory identifies the referential content of a control system’s information-bearing substates with the manipulable feature. I conclude by applying loop-closing semantics to some illustrative test cases, such as the semantic properties of memory addressing in CPUs, the referential content of bacterial magnetosomes, the problem of misrepresentation, and connections to Ramsey-Whyte success semantics.

Ian Wright

