2013 | Book

Biologically Inspired Cognitive Architectures 2012

Proceedings of the Third Annual Meeting of the BICA Society

Editors: Antonio Chella, Roberto Pirrone, Rosario Sorbello, Kamilla Rún Jóhannsdóttir

Publisher: Springer Berlin Heidelberg

Book Series: Advances in Intelligent Systems and Computing

About this book

The challenge of creating a real-life computational equivalent of the human mind requires that we better understand at a computational level how natural intelligent systems develop their cognitive and learning functions. In recent years, biologically inspired cognitive architectures have emerged as a powerful new approach toward gaining this kind of understanding (here “biologically inspired” is understood broadly as “brain-mind inspired”). Still, despite impressive successes and growing interest in BICA, wide gaps separate different approaches from each other and from solutions found in biology. Modern scientific societies pursue related yet separate goals, while the mission of the BICA Society is to integrate the many efforts addressing the above challenge. The BICA Society therefore aims to bring together researchers from disjointed fields and communities who devote their efforts to solving the same challenge, even though they may “speak different languages”. This will be achieved by promoting and facilitating the transdisciplinary study of cognitive architectures and, in the long term, by creating a unifying, widely accepted framework for human-level cognitive architectures and their implementations.

This book is the proceedings of the Third Annual Meeting of the BICA Society, which was held in Palermo, Italy, from October 31 to November 2, 2012. The book describes recent advances and new challenges around the theme of understanding how to create general-purpose, humanlike artificial intelligence using inspiration from studies of the brain and the mind.

Table of Contents

Frontmatter
Back to Basics and Forward to Novelty in Machine Consciousness

Machine consciousness has emerged from the confusion of an oxymoron into an evolving set of principles which, by leaning on information integration theories, define and distinguish what is meant by ‘a conscious machine’. This paper reviews this process of emergence by indicating how it is possible to break away from Chalmers’ ‘hardness’ of a computational consciousness through a general concept of A becoming conscious of B, where both are formally described. We highlight how this differs from classical AI approaches by following through a simple example using the specific methodology of weightless neural nets, as an instance of a system that owes its competence to something that can be naturally described as ‘being conscious’ rather than depending on the use of AI algorithms structured by a programmer.

Igor Aleksander, Helen Morton
Characterizing and Assessing Human-Like Behavior in Cognitive Architectures

The Turing Test is usually seen as the ultimate goal of Strong Artificial Intelligence (Strong AI), mainly for two reasons: first, it is assumed that if we can build a machine that is indistinguishable from a human, it is because we have completely discovered how a human mind is created; second, such an intelligent machine could replace or collaborate with humans in any imaginable complex task. Furthermore, if such a machine existed it would probably surpass humans in many complex tasks (both physically and cognitively). But do we really need such a machine? Is it possible to build such a system in the short term? Do we have to settle for the now classical narrow AI approaches? Isn’t there a more reasonable medium-term challenge that the AI community should aim at? In this paper, we use the paradigmatic Turing test to discuss the implications of aiming too high in the AI research arena; we analyze key factors involved in the design and implementation of variants of the Turing test, and we also propose a plausible medium-term agenda towards the effective development of Artificial General Intelligence (AGI) from the point of view of artificial cognitive architectures.

Raúl Arrabales, Agapito Ledezma, Araceli Sanchis
Architects or Botanists? The Relevance of (Neuronal) Trees to Model Cognition

The only known cognitive architecture capable of human-level (or rat-level) performance is the human (rat) brain. Why have cognitive architectures equivalent to those instantiated by mammalian brains not been implemented in computers already? Is it just because we have not yet found the right ‘boxes’ to model in our data flow diagrams? Or is there a mysterious reason requiring the underlying hardware to be biologically-based? We surmise that the answer might lie in between: certain essential biological aspects of nervous systems, not (yet) routinely implemented in computational models of cognition, might prove to be necessary functional components. One such aspect is the tree-like structure of neuronal dendrites and axons, the branching inputs and outputs of nerve cells. The basic roles of dendritic and axonal arbors are respectively to integrate synaptic signals from, and to propagate firing patterns to, thousands of other neurons. Synaptic integration and plasticity in dendrites are mediated by their morphology and biophysics. Axons provide the vast majority of cable length, constituting the main determinant of network connectivity. Together, neuronal computation and circuitry provide the substrates for the activity dynamics instantiating cognitive function.

An additional consequence of the tree-shape of neurons is far less recognized. Axons and dendrites determine not only which neurons are connected to which, and thus the possible activation patterns (mental states), but also which new connections might form given the right experience. Specifically, new synapses can only form between axons and dendrites that are in close proximity. This fundamental constraint may constitute the neural correlate of the common observation that knowledge of appropriate background information is necessary for memory formation. Specifically, neurons that are near each other (in connectivity space) encode related information. This biologically inspired design feature also enables cognitive architectures (e.g. neural networks) to learn real associations better than spurious co-occurrences, resulting in greater performance and enhanced robustness to noise.

Giorgio Ascoli
Consciousness and the Quest for Sentient Robots

Existing technology allows us to build robots that mimic human cognition quite successfully, but would this make these robots conscious? Would these robots really feel something and experience their existence in the world in the style of the human conscious experience? Most probably not. In order to create truly conscious and sentient robots we must first consider carefully what consciousness really is; what exactly would constitute the phenomenal conscious experience. This leads to the investigation of the explanatory gap, the hard problem of consciousness, and the problem of qualia. This investigation leads to the essential requirements for conscious artifacts, which are: 1) the realization of a perception process with qualia and percept location externalization, 2) the realization of introspection of the mental content, 3) reporting allowed by seamless integration of the various modules, and 4) a grounded self-concept with the equivalent of a somatosensory system. Cognitive architectures that are based on perception/response feedback loops and associative sub-symbolic/symbolic neural processing would seem to satisfy these requirements.

Pentti O. A. Haikonen
Biological Fluctuation “Yuragi” as the Principle of Bio-inspired Robots

Current robotics and artificial intelligence, based on computer technology, differ from biological systems in principle: they suppress noise by spending much energy in order to obtain clear states such as on/off and 1/0. The biological system, on the other hand, exploits the noise naturally present in the system and in the environment. This noise is called biological fluctuation, “Yuragi” in Japanese. We are studying applications of Yuragi to robotic systems as a collaboration between robotics and biology at Osaka University.

Hiroshi Ishiguro
Active Learning by Selecting New Training Samples from Unlabelled Data

Humans utilize active learning to develop their knowledge efficiently; here, a new active learning model is presented and tested for automatic speech recognition systems. Humans can self-evaluate their knowledge systems to identify weak or uncertain topics, and seek answers by asking proper questions of experts (or teachers) or by searching books (or the web). The new knowledge is then incorporated into the existing knowledge system.

Recently, this kind of active learning has become very important for many practical applications such as speech recognition and text classification. On the internet abundant unlabelled data are available, but it is still difficult and time consuming to obtain accurate labels for the data. With the active learning paradigm, a few data points are identified, based on uncertainty analysis, to be included in the training database, and users are asked for the corresponding labels. The answers are incorporated into the current knowledge base by an incremental learning algorithm. This process is repeated, resulting in a high-accuracy classification system with a minimum number of labelled training data.

The active learning algorithm has been applied to both a simple toy problem and a real-world speech recognition task. We introduce an uncertainty measure for each unlabelled data point, calculated from the current classifier. The developed algorithm shows better recognition performance with fewer labelled data for classifier training. In the future we will also incorporate a smooth transition in the selection strategy based on the exploitation-exploration trade-off: at the early stage of learning humans utilize exploitation, while exploration is applied at the later stage.
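
As a rough illustration of the selection loop described above (score the unlabelled pool with the current classifier, query labels for the most uncertain samples, retrain), a minimal sketch follows; the entropy-based uncertainty score and the logistic-regression learner are illustrative assumptions, not the classifier used in the authors' speech recognition experiments.

```python
# Minimal pool-based active learning sketch. The entropy uncertainty measure
# and the logistic-regression learner are illustrative assumptions, not the
# authors' speech-recognition classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

def entropy_uncertainty(probs):
    """Predictive entropy of the class posteriors, one value per sample."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def active_learning_loop(X_lab, y_lab, X_pool, y_oracle, rounds=5, batch=10):
    model = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        model.fit(X_lab, y_lab)
        # Score the unlabelled pool with the current classifier.
        scores = entropy_uncertainty(model.predict_proba(X_pool))
        query = np.argsort(scores)[-batch:]        # most uncertain samples
        # "Ask the user" for labels; here an oracle array stands in.
        X_lab = np.vstack([X_lab, X_pool[query]])
        y_lab = np.concatenate([y_lab, y_oracle[query]])
        X_pool = np.delete(X_pool, query, axis=0)
        y_oracle = np.delete(y_oracle, query)
    return model.fit(X_lab, y_lab)
```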

Ho Gyeong Kim, Cheong-An Lee, Soo-Young Lee
Biologically Inspired beyond Neural: Benefits of Multiple Modeling Levels

Biologically inspired cognitive architectures can adopt distinct levels of abstraction, from symbolic theories to neural implementations. Despite or perhaps because of those widely different approaches, they can constrain and benefit from each other in multiple ways. The first type of synergy occurs when a higher-level theory is implemented in terms of lower-level mechanisms, bringing implementational constraints to bear on functional abstractions. For instance, the ACT-RN neural network implementation constrained future developments of the ACT-R production system cognitive architecture in biologically plausible directions. The second type of synergy is when cognitive architectures at distinct levels are combined, leading to capabilities that wouldn’t be readily available in either modeling paradigm in isolation. The SAL hybrid architecture, a Synthesis of the ACT-R cognitive architecture and the Leabra neural architecture, provides an illustration through its combination of high-level control and low-level perception. The third type of synergy results when the same task or phenomena are modeled at different levels, bringing insights and constraints across levels. Models of the sensemaking processes developed in both ACT-R and Leabra illustrate the deep correspondence between mechanisms at the symbolic, subsymbolic and neural levels.

Christian Lebiere
Turing and de Finetti Games: Machines Making Us Think

Turing proposed his game as an operational definition of intelligence. Some years earlier, the Italian mathematician Bruno de Finetti had shown how the concept of “bet” could be formalized by the notion of subjective probability. We will show how the two approaches exhibit strong analogies and how, behind the formalism, they hide the complex debate on subjectivity and the abductive facet of human cognition.

Ignazio Licata
How to Simulate the Brain without a Computer

The brain is fundamentally different from numerical information processing devices. On the system level it features very low power consumption, fault tolerance and the ability to learn. On the microscopic level it is composed of constituents with a high degree of diversity and adaptability forming a rather uniform fabric with universal computing capabilities.

Neuromorphic architectures attempt to build physical models of such neural circuits with the aim to capture the key features and exploit them for information processing. In the talk I will review recent work and discuss future developments.

Karlheinz Meier
Odor Perception through Network Self-organization: Large Scale Realistic Simulations of the Olfactory Bulb

Analysis of the neural basis for odor recognition may have significant impact not only for a better explanation of physiological and behavioral olfactory processes, but also for industrial applications in the development of odor-sensitive devices and applications to the fragrance industry. While the underlying mechanisms are still unknown, experimental findings have given important clues at the level of the activation patterns in the olfactory glomerular layer by comparing the responses to single odors and to mixtures. The processing rules supporting odor perception are controversial. A point of general importance in sensory physiology is whether sensory activity is analytic (or elemental) or synthetic (or configural). Analytic refers to the perception of individual components in a mixture, whereas synthetic refers to the creation of a unified percept. An example of analytic perception is taste, in that sweet and sour can be combined in a dish and tasted individually. The mixture of blue and yellow to make green is an example of synthetic perception in vision. Which one of these properties applies to any given odor mixture is unclear and often confusing. In recent papers this problem has been intensely investigated experimentally, but the underlying computational and functional processes are still poorly understood. The main reason is that the practical impossibility of recording simultaneously from a reasonable set of cells makes it nearly impossible to decipher and understand the emergent properties and behavior of the olfactory bulb network. We addressed this problem using a biologically inspired cognitive architecture: a large-scale realistic model of the olfactory bulb. We directly used experimental imaging data on the activation of 74 glomeruli by 72 odors from 12 homologous series of chemically related molecules to drive a biophysical model of 500 mitral cells and 10,000 granule cells (1/100th of the real system), to analyze the operations of the olfactory bulb circuits. The model demonstrates how lateral inhibition modulates the evolving dynamics of the olfactory bulb network, generating mitral and granule cell responses that support/explain several experimental findings, and suggests how odor identity can be represented by a combination of temporal and spatial patterns, with both feedforward excitation and lateral inhibition via dendrodendritic synapses as the underlying mechanisms facilitating network self-organization and the emergence of synchronized oscillations. More generally, our model provides the first physiologically plausible explanation of how lateral inhibition can act at the brain region level to form the neural basis of signal recognition. Through movies and real-time simulations, it will be shown how and why the dynamical interactions between active mitral cells through the granule cell synaptic clusters can account for a variety of puzzling behavioral results on odor mixtures and on the emergence of synthetic or analytic perception.
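
For readers unfamiliar with lateral inhibition in firing-rate terms, a toy sketch of mitral cells receiving feedforward glomerular excitation and granule-mediated inhibition is given below; it is only a schematic assumption and does not reproduce the detailed biophysical model described in the abstract.

```python
# Toy firing-rate sketch of mitral cells coupled by granule-mediated lateral
# inhibition (schematic assumption only; the paper's biophysical model of
# 500 mitral and 10,000 granule cells driven by imaging data is far richer).
import numpy as np

def mitral_step(rates, glom_input, W_inhib, dt=0.01, tau=0.1):
    """rates: mitral firing rates (N,); glom_input: feedforward excitation (N,);
    W_inhib: (N, N) strength of lateral inhibition between mitral cells."""
    drive = glom_input - W_inhib @ rates          # excitation minus inhibition
    return rates + dt / tau * (-rates + np.maximum(drive, 0.0))
```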

Michele Migliore
Extending Cognitive Architectures

Biologically inspired cognitive architectures (BICA) constitute a powerful new approach in cognitive modeling and intelligent agent design that may allow us to create, in the near future, general-purpose, real-life computational equivalents of the human mind that can be used for a broad variety of practical applications. As a first step toward this goal, state-of-the-art BICA need to be extended to enable advanced (meta-)cognitive capabilities, including social and emotional intelligence, human-like episodic memory, imagery, self-awareness, and teleological capabilities, to name just a few. Recent extensions of mainstream cognitive architectures claim to have many of these features. Yet their implementation remains limited compared to the human mind. This work analyzes limitations of existing extensions of popular cognitive architectures, identifies specific challenges, and outlines an approach that allows achieving a “critical mass” of a human-level learner.

Alexei V. Samsonovich
Babies and Baby-Humanoids to Study Cognition

Simulating and getting inspiration from biology is certainly not a new endeavor in robotics. However, the use of humanoid robots as tools to study human cognitive skills is a relatively new area of research, one which fully acknowledges the importance of embodiment and interaction (with the environment and with others) for the emergence of cognitive as well as motor, perceptual and social abilities.

The aim of this talk is to present our approach to investigating human cognitive abilities by explicitly addressing cognition as a developmental process through which the system becomes progressively more skilled and acquires the ability to understand events, contexts, and actions, initially dealing with immediate situations and increasingly acquiring a predictive capability.

During the talk I will also argue that, within this approach, robotics engineering and neuroscience research are mutually supportive, each providing its own complementary investigation tools and methods: neuroscience from an “analytic” perspective and robotics from a “synthetic” one. In order to demonstrate the synergy between neuroscience and robotics I will start by briefly reviewing a few aspects of neuroscience and robotics research in the last 30 years, to show that the two fields have indeed progressed in parallel even if, until recently, on almost independent tracks. I will then show that since the discovery of visuo-motor neurons our view of how the brain implements perceptual abilities has evolved, from a model where perception was linked to motor control for the only (yet fundamental) purpose of providing feedback signals, toward an integrated view where perception is mediated not only by our sensory inputs but also by our motor abilities. As a consequence, the implementation and use of perceptive, complex humanoid robots has become not only a very powerful tool for modeling human behavior (and the underlying brain mechanisms) but also an important source of hypotheses regarding the mechanisms used by the nervous system to control our own actions and to predict the goals of someone else’s actions. During the talk I will present results of recent psychophysical investigations in children and human adults as well as artificial implementations in our baby humanoid robot iCub.

Giulio Sandini
Towards Architectural Foundations for Cognitive Self-aware Systems

The main purpose of the BICA 2012 conference is to take a significant step forward towards the BICA Challenge: creating a real-life computational equivalent of the human mind. This challenge apparently calls for a global, multidisciplinary joint effort to develop biologically-inspired dependable agents that perform well enough to be fully accepted as autonomous agents by human society. We say “apparently” because we think that “biologically-inspired” needs to be re-thought due to the mismatch between natural and artificial agent organization and between their construction methods: the natural and artificial construction processes. Due to this constructive mismatch and the complexity of the operational requirements of world-deployable machines, the question of dependability becomes a guiding light in the search for the proper architectures of cognitive agents. Models of perception, cognition and action that render self-aware machines will become a critical asset that marks a concrete roadmap to the BICA challenge.

In this talk we will address a proposal concerning a methodology for extracting universal, domain-neutral, architectural design patterns from the analysis of biological cognition. This will yield a set of design principles and design patterns oriented towards the construction of better machines. Bio-inspiration cannot be a one-step process if we are going to build robust, dependable autonomous agents; we must build solid theories first, departing from natural systems, and supporting our designs of artificial ones.

Ricardo Sanz, Carlos Hernández
Achieving AGI within My Lifetime: Some Progress and Some Observations

The development of artificial intelligence (AI) systems has to date taken a largely constructionist approach, with manual programming playing a central role. After half a century of AI research, enormous gaps persist between artificial and natural intelligence. The differences in capabilities are readily apparent on virtually every scale we might want to compare them on, from adaptability to resilience, flexibility to robustness, to applicability. We believe the blame lies with a blind application of various constructionist methodologies for building AI systems by hand. Taking a fundamentally different approach, based on new constructivist principles, we have developed a system that goes well beyond many of the limitations of present AI systems. Our system can automatically acquire complex skills through observation and imitation. Based on new programming principles supporting deep reflection and auto-catalytic principles for maintenance and self-construction of architectural operation, the system is domain-independent and can be applied to a vast array of problem areas. We have tested the system on a challenging task: learning a subset of socio-communicative skills by observing humans engaged in a simulated TV interview. This presentation introduces the core methodological ideas and architectural principles, and shows early test scenarios of the system in action.

Kristinn R. Thórisson
Learning and Creativity in the Global Workspace

The key goal of cognitive science is to produce an account of the phenomenon of mind which is mechanistic, empirically supported, and credible from the perspective of evolution. In this talk, I will present a model based on Baars’ [1] Global Workspace account of consciousness, that attempts to provide a general, uniform mechanism for information regulation. Key ideas involved are: information content and entropy [4,8], expectation [3,7], learning multi-dimensional, multi-level representations [2] and data [5], and data-driven segmentation [6].

The model was originally developed for music, but can be generalised to language [9]. Most importantly, it can account not only for perception and action, but also for creativity, possibly serving as a model for original linguistic thought.

Geraint A. Wiggins
Multimodal People Engagement with iCub

In this paper we present an engagement system for the iCub robot that is able to arouse in human partners a sense of “co-presence” during human-robot interaction. This sensation is naturally triggered by simple reflexes of the robot, which speaks to the partners and gazes at the current “active partner” (e.g. the talking partner) in interaction tasks. The active partner is perceived through a multimodal approach: a commercial RGB-D sensor is used to recognize the presence of humans in the environment, using both 3D information and sound source localization, while iCub’s cameras are used to perceive the partner’s face.

Salvatore M. Anzalone, Serena Ivaldi, Olivier Sigaud, Mohamed Chetouani
Human Action Recognition from RGB-D Frames Based on Real-Time 3D Optical Flow Estimation

Modern advances in the area of intelligent agents have led to the concept of cognitive robots. A cognitive robot is not only able to perceive complex stimuli from the environment, but also to reason about them and to act coherently. Computer vision-based recognition systems serve the perception task, but they also go beyond it by finding challenging applications in other fields such as video surveillance, HCI, content-based video analysis and motion capture. In this context, we propose an automatic system for real-time human action recognition. We use the Kinect sensor and the tracking system in [1] to robustly detect and track people in the scene. Next, we estimate the 3D optical flow related to the tracked people from point cloud data only and we summarize it by means of a 3D grid-based descriptor. Finally, temporal sequences of descriptors are classified with the Nearest Neighbor technique and the overall application is tested on a newly created dataset. Experimental results show the effectiveness of the proposed approach.
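
The final classification stage described above, in which temporal sequences of 3D grid-based flow descriptors are labelled with a Nearest Neighbor rule, can be sketched as follows; fixed-length sequences and a Euclidean distance are simplifying assumptions not specified in the abstract.

```python
# 1-nearest-neighbour labelling of descriptor sequences (fixed sequence length
# T and Euclidean distance are simplifying assumptions, not the paper's setup).
import numpy as np

def nn_classify(train_seqs, train_labels, test_seq):
    """train_seqs: (N, T, D) training descriptor sequences; test_seq: (T, D)."""
    dists = np.linalg.norm(train_seqs - test_seq[None, :, :], axis=(1, 2))
    return train_labels[int(np.argmin(dists))]
```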

Gioia Ballin, Matteo Munaro, Emanuele Menegatti
Modality in the MGLAIR Architecture

The MGLAIR cognitive agent architecture includes a general model of modality and support for concurrent multimodal perception and action. It provides afferent and efferent modalities as instantiable objects used in agent implementations. Each modality is defined by a set of properties that govern its use and its integration with reasoning and acting. This paper presents the MGLAIR model of modalities and mechanisms for their use in computational cognitive agents.

Jonathan P. Bona, Stuart C. Shapiro
Robotics and Virtual Worlds: An Experiential Learning Lab

The aim of the study was to investigate the cognitive processes involved and stimulated by educational robotics (LEGO® robots and Kodu Game Lab) in lower secondary school students. Results showed that LEGO® and KGL artifacts involve specific cognitive and academic skills. In particular, the use of LEGO® is related to deductive reasoning, speed of processing visual targets, reading comprehension and geometrical problem solving; the use of KGL is related to visual-spatial working memory, updating skills and reading comprehension. Both technologies, moreover, are effective in improving visual-spatial working memory. Implications for Human-Robot Interaction and the BICA challenge are discussed.

Barbara Caci, Antonella D’Amico, Giuseppe Chiazzese
Comprehensive Uncertainty Management in MDPs

Multistage decision-making in robots engaged in real-world tasks is a process affected by uncertainty. The effects of the agent’s actions in a physical environment cannot always be predicted deterministically and precisely. Moreover, observing the environment can be too onerous for a robot, and hence not continuous. Markov Decision Processes (MDPs) are a well-known solution inspired by the classic probabilistic approach to managing uncertainty. On the other hand, including fuzzy logic and possibility theory has widened uncertainty representation. Probability, possibility, fuzzy logic, and epistemic belief allow treating different and not always superimposable facets of uncertainty. This paper presents a new extended version of the MDP, designed to manage all these kinds of uncertainty together in order to describe transitions between multi-valued fuzzy states. The motivation of this work is the design of robots that can make decisions over time in an unpredictable environment. The model is described in detail along with its computational solution.
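
As background for the extension described above, a minimal value-iteration solver for a classical, purely probabilistic MDP is sketched below; the fuzzy and possibilistic machinery over multi-valued fuzzy states introduced in the paper is not reproduced here.

```python
# Value iteration for a classical finite MDP (background only; the paper's
# fuzzy/possibilistic extension over multi-valued fuzzy states is not shown).
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """P: (A, S, S) transition probabilities; R: (A, S) expected rewards."""
    A, S, _ = P.shape
    V = np.zeros(S)
    while True:
        Q = R + gamma * (P @ V)             # (A, S) action values
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)  # optimal values and greedy policy
        V = V_new
```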

Vincenzo Cannella, Roberto Pirrone, Antonio Chella
A New Humanoid Architecture for Social Interaction between Human and a Robot Expressing Human-Like Emotions Using an Android Mobile Device as Interface

In this paper we illustrate a humanoid robot able to interact socially and naturally with a human by expressing human-like body emotions. The emotional architecture of this robot is based on an emotional conceptual space generated using the paradigm of Latent Semantic Analysis. The robot generates its overall affective behavior (Latent Semantic Behavior) taking into account the visual and phrasal stimuli of the human user, the environment and its own personality, all encoded in its emotional conceptual space. The robot determines its emotion according to all these parameters, which influence and orient the generation of behavior that is not predictable by the user. The goal of this approach is to obtain an affinity matching with humans. The robot exhibits a smooth, natural transition in its emotional changes during interaction with humans, also taking into account previously generated emotions. To validate the system, we implemented the distributed system on an Aldebaran NAO small humanoid robot and on an HTC Android phone, and we tested this social emotional interaction using the phone as an intelligent interface between human and robot in a complex scenario.

Antonio Chella, Rosario Sorbello, Giovanni Pilato, Giorgio Vassallo, Marcello Giardina
The Concepts of Intuition and Logic within the Frame of Cognitive Process Modeling

A possible interpretation of intuition and logic is discussed in terms of neurocomputing, dynamic information theory, and pattern recognition. Thinking is treated as a self-organizing process of recording, storing, processing, generating, and propagating information under no external control. The information levels formed successively during this process are described, and the transition between levels is shown to be accompanied by a reduction of information. The hidden information can be interpreted as a basis for intuition, whereas verbalized abstract concepts and relations refer to logical thinking.

O. D. Chernavskaya, A. P. Nikitin, J. A. Rozhilo
Do Humanoid Robots Need a Body Schema?

The concept of body schema is analysed, comparing the biological and artificial domains and emphasizing its logical necessity for the efficient coordination of redundant degrees of freedom. The implementation of the body schema by means of the passive motion paradigm is summarised, suggesting that a well-defined body schema may be the basic building block for developing a powerful cognitive architecture.

Dalia De Santis, Vishwanathan Mohan, Pietro Morasso, Jacopo Zenzeri
Simulation and Anticipation as Tools for Coordinating with the Future

A key goal in designing an artificial intelligence capable of performing complex tasks is a mechanism that allows it to efficiently choose appropriate and relevant actions in a variety of situations and contexts. Nowhere is this more obvious than in the case of building a general intelligence, where the contextual choice and application of actions must be done in the presence of large numbers of alternatives, both subtly and obviously distinct from each other. We present a framework for action selection based on the concurrent activity of multiple forward and inverse models. A key characteristic of the proposed system is the use of simulation to choose an action: the system continuously simulates the external states of the world (proximal and distal) by internally emulating the activity of its sensors, adopting the same decision process as if it were actually operating in the world, and basing subsequent choice of action on the outcome of such simulations. The work is part of our larger effort to create new observation-based machine learning techniques. We describe our approach, an early implementation, and an evaluation in a classical AI problem-solving domain: the Sokoban puzzle.
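
A stripped-down version of the simulate-then-select idea is sketched below: each candidate action is rolled out through a forward model and scored on the predicted outcome. This is only a schematic assumption; the concurrent forward/inverse models and sensor emulation of the actual framework are abstracted into user-supplied functions.

```python
# Schematic simulate-then-select loop: roll each candidate action through a
# forward model of the world and keep the action with the best predicted
# outcome. forward_model and evaluate are assumed, user-supplied functions.
def select_action(state, actions, forward_model, evaluate, depth=3):
    def rollout(s, a, d):
        s_next = forward_model(s, a)        # internally simulated next state
        if d == 0:
            return evaluate(s_next)
        return max(rollout(s_next, a2, d - 1) for a2 in actions)
    return max(actions, key=lambda a: rollout(state, a, depth))
```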

Haris Dindo, Giuseppe La Tona, Eric Nivel, Giovanni Pezzulo, Antonio Chella, Kristinn R. Thórisson
Solutions for a Robot Brain

Some problems arise when trying to build a cognitive architecture: it should be able to learn any activity, understand human words, learn from its own experiences, and manage a growing memory without slowing down. Here we show some possible solutions.

Walter Fritz
Exemplars, Prototypes and Conceptual Spaces

This paper deals with the problem of the computational representation of “non-classical” concepts, i.e. concepts that do not admit a definition in terms of necessary and sufficient conditions (sect. 1). We review some empirical evidence from the field of cognitive psychology suggesting that concept representation is not a unitary phenomenon. In particular, it seems likely that human beings employ (among other things) both prototype- and exemplar-based representations in order to deal with non-classical concepts (sect. 2). We suggest that a cognitively inspired, hybrid prototype-exemplar based approach could be useful also in the field of artificial computational systems (sect. 3). In sect. 4 we take into consideration conceptual spaces as a suitable framework for developing some aspects of such a hybrid approach (sect. 5). Some conclusions follow (sect. 6).

Marcello Frixione, Antonio Lieto
The Small Loop Problem: A Challenge for Artificial Emergent Cognition

We propose the Small Loop Problem as a challenge for biologically inspired cognitive architectures. This challenge consists of designing an agent that would autonomously organize its behavior through interaction with an initially unknown environment that offers basic sequential and spatial regularities. The Small Loop Problem demonstrates four principles that we consider crucial to the implementation of emergent cognition: environment-agnosticism, self-motivation, sequential regularity learning, and spatial regularity learning. While this problem is still unsolved, we report partial solutions that suggest that its resolution is realistic.

Olivier L. Georgeon, James B. Marshall
Crowd Detection Based on Co-occurrence Matrix

This paper describes a new approach to crowd detection based on the analysis of the gray level dependency matrix (GLDM), a technique already exploited for measuring image texture. New features for characterizing the GLDM are proposed, both AdaBoost and Bayesian classifiers are applied to the features introduced, and the system is tested on a real-case scenario inside a stadium.
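
Since the gray level dependency (co-occurrence) matrix is a standard texture tool, a minimal NumPy construction of it, together with two classical features, is sketched below; the crowd-specific features and classifiers proposed in the paper are not reproduced.

```python
# Gray-level co-occurrence matrix for a single pixel offset, with two classical
# texture features; the crowd-specific GLDM features and classifiers proposed
# in the paper are not reproduced here.
import numpy as np

def glcm(img, dx=1, dy=0, levels=256):
    """img: 2D array of integer gray levels in [0, levels); returns the
    normalized co-occurrence matrix for offset (dy, dx)."""
    h, w = img.shape
    src = img[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    dst = img[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    m = np.zeros((levels, levels))
    np.add.at(m, (src.ravel(), dst.ravel()), 1)   # count gray-level pairs
    return m / m.sum()

def contrast(m):
    i, j = np.indices(m.shape)
    return float(np.sum(m * (i - j) ** 2))

def homogeneity(m):
    i, j = np.indices(m.shape)
    return float(np.sum(m / (1.0 + np.abs(i - j))))
```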

Stefano Ghidoni, Arrigo Guizzo, Emanuele Menegatti
Development of a Framework for Measuring Cognitive Process Performance

Cognitive architectures are concerned with the design of intelligent systems that should be able to perform cognitive tasks. The current paper introduces a theoretical approach to develop a general measurement framework for intelligent systems performance. The framework is based on concepts from communication theory and assumes the activity of an intelligent system to be the result of the communication between the system and its environment. Using the different entropies involved in this communication, the framework defines the system communication state and it is argued that this state captures the technical performance of the system.
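
As a rough illustration of the entropy bookkeeping such a framework relies on, the sketch below estimates the environment entropy, the system entropy and their mutual information from a joint count table of (environment state, system response) pairs; the paper's specific definition of the system communication state is not reproduced.

```python
# Entropies of the system-environment "channel", estimated from a joint count
# table of (environment state, system response) pairs; the paper's specific
# communication-state definition is not reproduced here.
import numpy as np

def channel_entropies(joint_counts):
    p = joint_counts / joint_counts.sum()
    p_env, p_sys = p.sum(axis=1), p.sum(axis=0)
    h = lambda q: -np.sum(q[q > 0] * np.log2(q[q > 0]))   # Shannon entropy
    H_env, H_sys, H_joint = h(p_env), h(p_sys), h(p)
    return {"H_env": H_env, "H_sys": H_sys,
            "mutual_information": H_env + H_sys - H_joint}
```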

Wael Hafez
I Feel Blue: Robots and Humans Sharing Color Representation for Emotional Cognitive Interaction

The paper presents a representation of colors integrated into a cognitive architecture inspired by the Psi model. In the architecture, designed for a humanoid robot, the observation and recognition of humans and objects influence the emotional state of the robot. The representation of color is an additional feature that allows the robot to be “in tune” with humans and to share with them a physical space and interactions. This representation takes into account current hypotheses about how the human brain processes and manages colors, considering both universalist and linguistic approaches. The paper describes in detail the problems of color representation, the potential of a cognitive architecture able to associate colors with emotions, and how they can influence interactions with humans.

Ignazio Infantino, Giovanni Pilato, Riccardo Rizzo, Filippo Vella
Investigating Perceptual Features for a Natural Human - Humanoid Robot Interaction Inside a Spontaneous Setting

The present paper aims to validate our research on human-humanoid interaction (HHI) using the minimalistic humanoid robot Telenoid. We conducted human-robot interaction tests with 100 young people with no prior experience of interacting with this robot. The main goal is the analysis of two social dimensions (perception and believability) useful for increasing the naturalness of behavior between users and Telenoid. We administered our custom questionnaire to these subjects after a well-defined experimental setting (ordinary and goal-guided tasks). The analysis of the questionnaires provided evidence that perceptual and believability conditions are necessary social dimensions for successful and efficient HHI in everyday activities.

Hiroshi Ishiguro, Shuichi Nishio, Antonio Chella, Rosario Sorbello, Giuseppe Balistreri, Marcello Giardina, Carmelo Calí
Internal Simulation of an Agent’s Intentions

We present the Associative Self-Organizing Map (A-SOM) and propose that it could be used to predict an agent’s intentions by internally simulating the behaviour likely to follow initial movements. The A-SOM is a neural network that develops a representation of its input space without supervision, while simultaneously learning to associate its activity with an arbitrary number of additional (possibly delayed) inputs. We argue that the A-SOM would be suitable for predicting the likely continuation of the perceived behaviour of an agent by learning to associate activity patterns over time, and would thus provide a way to read its intentions.
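
As background, a single training step of a standard self-organizing map is sketched below; the A-SOM's associative connections to additional (possibly delayed) inputs, which are the paper's contribution, are not included.

```python
# One training step of a standard self-organizing map, as background; the
# A-SOM's associative connections to additional (possibly delayed) inputs are
# not included in this sketch.
import numpy as np

def som_step(weights, grid, x, lr=0.1, sigma=1.0):
    """weights: (K, D) node weights; grid: (K, 2) node coordinates; x: (D,)."""
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # best-matching unit
    dist2 = np.sum((grid - grid[bmu]) ** 2, axis=1)
    nbh = np.exp(-dist2 / (2 * sigma ** 2))                # neighbourhood kernel
    weights += lr * nbh[:, None] * (x - weights)           # pull toward input
    return weights
```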

Magnus Johnsson, Miriam Buonamente
A Model of Primitive Consciousness Based on System-Level Learning Activity in Autonomous Adaptation

Although many models of consciousness have been proposed from various viewpoints, these models have not been based on learning activities in a whole system with the capability of autonomous adaptation. By investigating the learning process of the whole system, consciousness is modeled essentially as a system-level learning activity that modifies both the system’s own configuration and its states during autonomous adaptation. The model not only explains the time delay observed in Libet’s experiment, but can also be positioned as an improved model of Global Workspace Theory.

Yasuo Kinouchi, Yoshihiro Kato
Decision-Making and Action Selection in Two Minds

This paper discusses the differences between decision-making and action selection. Human behavior can be viewed as the integration of the output of System 1, i.e., unconscious automatic processes, and System 2, i.e., conscious deliberate processes. System 1 activates a sequence of automatic actions. System 2 monitors System 1’s performance according to the plan it has created and, at the same time, activates future possible courses of action. Decision-making narrowly refers to System 2’s slow functions for planning for the future and related deliberate activities, e.g., monitoring. Action selection, on the other hand, refers to integrated activities including not only System 1’s fast activities but also System 2’s slow activities, not separately but integrally. This paper discusses the relationships between decision-making and action selection based on the architecture model the authors have developed for simulating human beings’ in situ action selection, the Model Human Processor with Real-Time Constraints (MHP/RT) [3], by extending the argument made in previous work [5].

Muneo Kitajima, Makoto Toyota
Cognitive Chrono-Ethnography: A Methodology for Understanding Users for Designing Interactions Based on User Simulation with Cognitive Architectures

A handful of cognitive architectures capable of simulating human behavior selection have been proposed within the BICA society. The purpose of this paper is to discuss the importance of designing the interactions between users and the information provided to them via PC displays, traffic road signs, or any other information devices, and to suggest that biologically inspired cognitive architectures (BICA) are useful for designing such interactions: they allow taking into account the variety of interactions that may happen and defining requirements that satisfy user needs through user simulation with a cognitive architecture. A new methodology for defining requirements based on user simulations using a cognitive architecture, Cognitive Chrono-Ethnography (CCE), is introduced, and a CCE study is briefly described.

Muneo Kitajima, Makoto Toyota
Emotional Emergence in a Symbolic Dynamical Architecture

We present a cognitive architecture based on a unified model of cognition, Stanovich’s tripartite framework, which provides an explanation of how reflective and adaptive human behaviour emerges from the interaction of three distinct cognitive levels (minds). In this paper, we focus on the description of our emotional model, which is deeply rooted in neuromodulatory phenomena. We illustrate the emergence of an emotional state using a psychological task: the emotional Stroop task.

Othalia Larue, Pierre Poirier, Roger Nkambou
An Integrated, Modular Framework for Computer Vision and Cognitive Robotics Research (icVision)

We present an easy-to-use, modular framework for performing computer vision related tasks in support of cognitive robotics research on the iCub humanoid robot. The aim of this biologically inspired, bottom-up architecture is to facilitate research on visual perception and cognition processes, especially their influence on robotic object manipulation and environment interaction. The icVision framework described provides capabilities for detecting objects in the 2D image plane and locating them in 3D space to facilitate the creation of a world model.

Jürgen Leitner, Simon Harding, Mikhail Frank, Alexander Förster, Jürgen Schmidhuber
Insertion Cognitive Architecture

The main goal of the paper is to demonstrate that the Insertion Modeling System (IMS), under development at the Glushkov Institute of Cybernetics, can be used as an instrument for the development and analysis of cognitive (intellectual) agents, that is, agents that model the human mind. The insertion cognitive architecture is a real-time insertion machine that realizes itself, has a center for evaluating the success of its behavior, and is committed to repeatedly achieving maximum success. As an agent, this machine is inserted into its external environment and has the means to interact with it. The internal environment of the cognitive agent creates and develops its own model and a model of the external environment. In order to achieve its goals it uses basic techniques developed in the field of biologically inspired cognitive architectures as well as software development techniques.

Alexander Letichevsky
A Parsimonious Cognitive Architecture for Human-Computer Interactive Musical Free Improvisation

This paper presents some of the historical and theoretical foundations for a new cognitive architecture for human-computer interactive musical free improvisation. The architecture is parsimonious in that it has no access to musical knowledge and no domain-general subsystems, such as memory or representational abilities. The paper first describes some of the features and limitations of the architecture. It then illustrates how this architecture draws on insights from cybernetics, artificial life, artificial intelligence and ecological theory by situating it within a historical context. The context presented consists of a few key developments in the history of biologically-inspired robotics, followed by an indication of how they connect to James Gibson’s ecological theory. Finally, it describes how a recent approach to musicology informed by ecological theory bears on an implementation of this architecture.

Adam Linson, Chris Dobbyn, Robin Laney
Cognitive Integration through Goal-Generation in a Robotic Setup

What brings together multiple sensory, cognitive, and motor skills? Experimental evidence shows that the interaction among the thalamus, cortex and amygdala is involved in the generation of elementary cognitive behaviours that are used to achieve complex goals and to integrate the agent’s skills. Furthermore, the interplay among these structures is likely to be responsible for a middle level of cognition that could fill the gap between high-level cognitive reasoning and low-level sensory processing. In this paper, we address this issue and outline a goal-generating architecture implemented on a NAO robot.

Riccardo Manzotti, Flavio Mutti, Giuseppina Gini, Soo-Young Lee
A Review of Cognitive Architectures for Visual Memory

Numerous cognitive architectures have been proposed for human cognition, ranging from perception and decision making to action and control. These architectures play a vital role as foundations for building intelligent systems, whose capabilities may one day be similar to those of the human brain. However, most of these architectures do not address the challenges and opportunities specific to visual perception and memory, which form important parts of our daily tasks and experiences. In this paper, we briefly review some of the cognitive architectures related to perception and memory. As studies of visual perception and memory are active research areas in cognitive science, we summarize what has been done so far, how neurobiology and psychology have identified different memory systems, and how different stages of memory processing are performed. We describe different types of visual memory, namely short-term memory, long-term memory, and episodic and semantic memories. Finally, we predict what should be done towards a visual memory architecture for enabling autonomous visual information processing systems.

Michal Mukawa, Joo-Hwee Lim
A Model of the Visual Dorsal Pathway for Computing Coordinate Transformations: An Unsupervised Approach

In humans, the problem of coordinate transformations is far from being completely understood. The problem is often addressed using a mix of supervised and unsupervised learning techniques. In this paper, we propose a novel learning framework which requires only unsupervised learning. We design a neural architecture that models the visual dorsal pathway and learns coordinate transformations in a computer simulation comprising an eye, a head and an arm (each entailing one degree of freedom). The learning is carried out in two stages. First, we train a posterior parietal cortex (PPC) model to learn different frames of reference transformations. And second, we train a head-centered neural layer to compute the position of an arm with respect to the head. Our results show the self-organization of the receptive fields (gain fields) in the PPC model and the self-tuning of the response of the head-centered population of neurons.
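
For the 1-DOF toy geometry described above, the target coordinate transformations reduce to simple angle additions, sketched below as an assumed ground truth; the unsupervised gain-field (PPC) learning that constitutes the paper's contribution is not reproduced here.

```python
# Assumed geometric ground truth for the 1-DOF toy setup: the head-centered
# direction of a target is its retinal (eye-centered) direction plus the
# eye-in-head angle; adding the head angle gives a body-centered direction.
# The unsupervised gain-field learning of the paper is not reproduced here.
def eye_to_head(retinal_angle, eye_angle):
    return retinal_angle + eye_angle

def head_to_body(head_centered_angle, head_angle):
    return head_centered_angle + head_angle
```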

Flavio Mutti, Hugo Gravato Marques, Giuseppina Gini
Multiagent Recursive Cognitive Architecture

A hypothesis about an invariant organizational structure of intelligent decision-making processes based on cognitive functions, the multiagent recursive cognitive architecture (MuRCA), is proposed. Its application to the creation of self-organizing emergent systems capable of goal-setting and adaptive behavior based on semantic models of reality is substantiated.

Zalimkhan V. Nagoev
A Biologically-Inspired Perspective on Commonsense Knowledge

Since the seminal papers by John McCarthy [1,2], the problem of designing intelligent systems able to handle common sense knowledge has become a real puzzle [3,6,7]. According to McCarthy and Hayes, “The first task [to construct a general intelligent computer program] is to define even a naive, common-sense view of the world precisely enough to program a computer to act accordingly. This is a very difficult task in itself” [5]: 6. Perhaps the frame problem, i.e., how a representational system can deal with the enormous amount of knowledge that is necessary for everyday behaviour, nowadays needs a new account. The BICA challenge, that is, the challenge to build a general-purpose computational equivalent of human intelligence by means of an approach based on biologically inspired cognitive architectures, can be considered an example of this kind of new perspective [1,8].

Pietro Perconti
Coherence Fields for 3D Saliency Prediction

In the coherence theory of attention [26], a coherence field is defined by a hierarchy of structures supporting the activities across the different stages of visual attention. At the interface between the low-level and mid-level attention processing stages are the proto-objects, generated in parallel and collecting features of the scene at a specific location and time. These structures fade away if the region is not further attended. We introduce a method to computationally model these structures on the basis of experiments made in dynamic 3D environments, where the only control is due to the Gaze Machine, a gaze measurement framework that can record pupil motion at the required speed and project the point of regard into 3D space [25],[24]. We also show how, from these volatile structures, it is possible to predict saliency in 3D dynamic environments.

Fiora Pirri, Matia Pizzoli, Arnab Sinha
Principles of Functioning of Autonomous Agent-Physicist

An interesting approach towards human-level intelligence was proposed at BICA 2011 [1]: namely, that an intelligent artificial BICA agent should be able to win a political election against human candidates.

The current work proposes another approach, based on the premise that the most serious cognitive processes are processes of scientific cognition. The background of this approach is the report by Modest Vaintsvaig at the Russian conference “Neuroinformatics-2011” [2], which considers models of an autonomous agent that tries to cognize elementary laws of mechanics. The agent observes movements and collisions of rigid bodies and forms its own knowledge about the interactions of bodies. Based on these observations, the agent can generalize its knowledge and cognize regularities of mechanical interactions. By modeling such autonomous agents, we can try to analyze how agents could discover (by themselves, without any human help) elementary laws of mechanics. Ultimately, such agents could discover Newton’s three laws of mechanics. Thus, we can investigate autonomous agents that could come to discover the laws of nature. It is natural to think that such agents have human-level intelligence.

Using our knowledge of the scientific activity of Isaac Newton, we can represent the intelligence of such an investigating agent in some detail. The agent should have an aspiration to acquire new knowledge and to transform its knowledge into compact form. The agent should have curiosity, directing it to ask questions about the external world and to resolve them by executing real physical experiments. The agent should take into account the interrelations between different kinds of scientific knowledge. It is natural to assume that a certain society of cognizing agents exists, in which each agent informs the other agents about its scientific results. For example, considering Isaac Newton as a prototype of the main agent, we can also consider agents analogous to Galileo Galilei, Rene Descartes, Johannes Kepler, Gottfried Wilhelm Leibniz, and Robert Hooke. The agent should have self-consciousness, an emotional estimation of the results of its cognitive activity, and the desire to reach the highest results within the society. Agents should have the tendency to arrive at clear, strong and compact knowledge, such as Newton’s laws or Euclidean axioms.

Vladimir G. Red’ko
Affect-Inspired Resource Management in Dynamic, Real-Time Environments

We describe a novel affect-inspired mechanism to improve the performance of computational systems operating in dynamic environments. In particular, we designed a mechanism, based on ideas from fear in humans, that dynamically reallocates operating-system-level resources to processes as they are needed to deal with time-critical events. We evaluated this system in MINIX and Linux in a simulated unmanned aerial vehicle (UAV) testbed. We found that the affect-based system was not only able to react more rapidly to time-critical events, as intended, but also that, since the dynamic processes handling these events did not need significant CPU when they were not in time-critical situations, the simulated UAV was able to perform even non-emergency tasks with higher efficiency and reactivity than was possible in the standard implementation.
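
A very rough user-space analogue of the reprioritisation idea is sketched below: boost a handler's scheduling priority while a time-critical event is being serviced, then restore it. This is only an illustrative assumption using POSIX nice values, not the MINIX/Linux kernel-level mechanism evaluated in the paper; lowering the nice value typically requires elevated privileges.

```python
# Rough user-space analogue of the reprioritisation idea: boost the scheduling
# priority of the current process while a time-critical handler runs, then
# restore it (POSIX nice values; lowering the nice value usually needs elevated
# privileges). This is not the kernel-level mechanism evaluated in the paper.
import os

def with_priority_boost(handler, boost=-10):
    normal = os.getpriority(os.PRIO_PROCESS, 0)
    os.setpriority(os.PRIO_PROCESS, 0, normal + boost)   # more urgent
    try:
        handler()
    finally:
        os.setpriority(os.PRIO_PROCESS, 0, normal)       # back to normal
```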

W. Scott Neal Reilly, Gerald Fry, Michael Reposa
An Approach toward Self-organization of Artificial Visual Sensorimotor Structures

Living organisms exhibit a strong mutual coupling between physical structure and behavior. For visual sensorimotor systems, this interrelationship is strongly reflected in the topological organization of a visual sensor and in how the sensor is moved with respect to the organism’s environment. Here we present an approach which addresses simultaneously and in a unified manner (i) the organization of visual sensor topologies according to given sensor-environment interaction patterns, and (ii) the formation of motor movement fields adapted to specific sensor topologies. We propose that for the development of well-adapted visual sensorimotor structures, the perceptual system should optimize available resources to accurately perceive observed phenomena and, at the same time, should co-develop sensory and motor layers such that the relationship between past and future stimuli is simplified on average. In a mathematical formulation, we implement this requirement as an optimization problem where the variables are the sensor topology, the layout of the motor space, and a prediction mechanism establishing a temporal relationship. We demonstrate that the same formulation is applicable to the spatial self-organization of both visual receptive fields and motor movement fields. The results demonstrate how the proposed principles can be used to develop sensory and motor systems with favorable mutual interdependencies.

Jonas Ruesch, Ricardo Ferreira, Alexandre Bernardino
Biologically Inspired Methods for Automatic Speech Understanding

Automatic Speech Recognition (ASR) and Understanding (ASU) systems heavily rely on machine learning techniques to solve the problem of mapping spoken utterances into words and meanings. The statistical methods employed, however, greatly deviate from the processes involved in human language acquisition in a number of key aspects. Although ASR and ASU have recently reached a level of accuracy that is sufficient for some practical applications, there are still severe limitations due, for example, to the amount of training data required and the lack of generalization of the resulting models. In our opinion, there is a need for a paradigm shift and speech technology should address some of the challenges that humans face when learning a first language and that are currently ignored by the ASR and ASU methods. In this paper, we point out some of the aspects that could lead to more robust and flexible models, and we describe some of the research we and other researchers have performed in the area.

Giampiero Salvi
Modeling Structure and Dynamics of Selective Attention

We present a cognitive architecture that includes perception, memory, attention, decision making, and action. The model is formulated in terms of an abstract dynamics for the activations of features, their binding into object entities, semantic categorization as well as related memories and appropriate reactions. The dynamical variables interact in a connectionist network which is shown to be adaptable to a variety of experimental paradigms. We find that selective attention can be modeled by means of inhibitory processes and by a threshold dynamics. The model is applied to the problem of disambiguating a number of theories for negative priming, an effect that is studied in connection to selective attention.
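
A minimal discrete-time sketch of feature activations with lateral inhibition and a response threshold, in the spirit of the dynamics described above, is given below; the form of the update and all parameter values are illustrative assumptions, not the paper's model.

```python
# Discrete-time feature-activation update with lateral inhibition and a
# response threshold; the form of the update and all parameter values are
# illustrative assumptions, not the paper's model.
import numpy as np

def attention_step(a, inp, W_inhib, dt=0.05, decay=1.0, theta=0.5):
    """a: current activations (N,); inp: external input (N,);
    W_inhib: (N, N) inhibitory coupling between features."""
    supra = np.maximum(a - theta, 0.0)        # only supra-threshold units inhibit
    da = -decay * a + inp - W_inhib @ supra
    return a + dt * da
```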

Hecke Schrobsdorff, Matthias Ihrke, J. Michael Herrmann
How to Engineer Biologically Inspired Cognitive Architectures

Biologically inspired cognitive architectures are complex systems in which different modules of cognition interact in order to reach the global goals of the system in a changing environment. Engineering and modeling this kind of system is a hard task due to the lack of techniques for developing and implementing features like learning, knowledge, experience, memory, and adaptivity in an inter-modular fashion. We propose a new concept of intelligent agent as an abstraction for developing biologically inspired cognitive architectures.

Valeria Seidita, Massimo Cossentino, Antonio Chella
An Adaptive Affective Social Decision Making Model

Social decision making under stressful circumstances may involve strong emotions and contagion from others, and requires adequate prediction and valuation capabilities. In this paper, an adaptive agent-based computational model grounded in principles from neuroscience is proposed to address these aspects in an integrative manner. Using this model, the adaptive decision making of an agent in an emergency evacuation scenario is explored. Through simulation, the computational learning mechanisms required for effective decision making by agents are identified.
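
Purely as an illustration of the ingredients named above (contagion from others plus valuation of options), the toy sketch below lets agents' emotion levels diffuse toward those of their neighbours and then lets one agent value a risky versus a safe evacuation option as a function of its current emotion. The contagion rule, the valuation formula, and all numbers are our own assumptions and not the authors' model.

```python
import numpy as np

def contagion_step(emotion, weights, dt=0.1):
    """One Euler step of absorption-style emotion contagion: each agent moves
    toward the weighted average emotion of its neighbours."""
    target = weights @ emotion / weights.sum(axis=1)
    return emotion + dt * (target - emotion)

def option_value(expected_outcome, risk, emotion, risk_weight=1.0):
    """Toy valuation: fear (high emotion) amplifies the penalty on risky options."""
    return expected_outcome - risk_weight * emotion * risk

emotion = np.array([0.9, 0.2, 0.1])          # agent 0 starts out highly fearful
weights = np.array([[0.0, 1.0, 1.0],
                    [1.0, 0.0, 1.0],
                    [1.0, 1.0, 0.0]])        # fully connected contagion channels
for _ in range(50):
    emotion = contagion_step(emotion, weights)

# Agent 1 now chooses between a risky fast exit and a safe slow exit.
options = {"fast_exit": option_value(1.0, 0.8, emotion[1]),
           "slow_exit": option_value(0.7, 0.1, emotion[1])}
print(max(options, key=options.get), {k: round(v, 2) for k, v in options.items()})
```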

Alexei Sharpanskykh, Jan Treur
A Robot Uses an Evaluation Based on Internal Time to Become Self-aware and Discriminate Itself from Others

The authors are attempting to clarify the nature of human consciousness by creating functions similar to that phenomenon in a robot. First of all, they focused on self-awareness, confirming a new hypothesis from an experiment on a robot using a neural network with the MoNAD structure, which they created based on the concept of a human neural circuit. The basis of this hypothesis is that “the entity receives an anticipated response within a certain evaluation standard based on internal time.” This paper discusses the theory of awareness in robots, related experiments, this hypothesis, and the prospects for the future.

Toshiyuki Takiguchi, Junichi Takeno
Why Neurons Are Not the Right Level of Abstraction for Implementing Cognition

The cortex accounts for 70% of the brain volume. The human cortex is made of micro-columns, arrangements of 110 cortical neurons (Mountcastle), grouped by the thousand into so-called macro-columns (or columns) which belong to the same functional unit, as exemplified by Nobel laureates Hubel and Wiesel with the orientation columns of the primary visual cortex. Cortical column activity does not exhibit the limitations of single neurons: activation can be sustained for very long periods (seconds) instead of being transient and subject to fatigue. The cortical column has therefore been proposed as the building block of cognition by several researchers, but to no effect, since explanations of how cognition works at the column level were missing. Thanks to the Theory of Neural Cognition, this is no longer the case.

Claude Touzet
Intertemporal Decision Making: A Mental Load Perspective

In this paper intertemporal decision making is analysed from a framework that defines the differences in value for decision options at present and future time points in terms of the extra amount of mental burden or work load that is expected to occur when a future option is chosen. It is shown how existing hyperbolic and exponential discounting models for intertemporal decision making both fit in this more general framework. Furthermore, a specific computational model for work load is adopted to derive a new discounting model. It is analysed how this intertemporal decision model relates to the existing hyperbolic and exponential intertemporal decision models. Finally, it is shown how this model relates to a transformation between subjective and objective time experience.
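
For reference, the two classical discounting models mentioned above take the forms V = A·e^(−kD) (exponential) and V = A/(1 + kD) (hyperbolic), where A is the reward amount, D the delay, and k the discount rate. The sketch below compares them numerically and adds a generic "load-based" value as a placeholder for the work-load-derived discounting discussed in the paper; that placeholder function is an assumption of ours, not the paper's model.

```python
import math

def exponential_value(amount, delay, k=0.1):
    """Classical exponential discounting: V = A * exp(-k * D)."""
    return amount * math.exp(-k * delay)

def hyperbolic_value(amount, delay, k=0.1):
    """Classical hyperbolic discounting: V = A / (1 + k * D)."""
    return amount / (1.0 + k * delay)

def load_based_value(amount, delay, load=lambda d: 0.05 * d):
    """Illustrative load-based discounting (our assumption): the value of a future
    option is reduced by the extra mental load expected while waiting for it."""
    return amount - load(delay)

for d in (0, 1, 5, 20):
    print(d, round(exponential_value(10, d), 2),
             round(hyperbolic_value(10, d), 2),
             round(load_based_value(10, d), 2))
```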

Jan Treur
A Non-von-Neumann Computational Architecture Based on in Situ Representations: Integrating Cognitive Grounding, Productivity and Dynamics

Human cognition may be unique in the way it combines cognitive grounding, productivity (compositionality) and dynamics. This combination imposes constraints on the underlying computational architecture. These constraints are not met in the von-Neumann computational architecture underlying forms of symbol manipulation. The constraints are met in a computational architecture based on ‘in situ’ grounded representations, consisting of (distributed) neuronal assembly structures. To achieve productivity, the in situ grounded representations are embedded in (several) neuronal ‘blackboard’ architectures, each specialized for processing specific forms of (compositional) cognitive structures, such as visual structures (objects, scenes), propositional (linguistic) structures and procedural (action) sequences. The architectures interact by the neuronal assemblies (in situ representations) they share. This interaction provides a combination of local and global information processing that is fundamentally lacking in symbolic architectures of cognition. Further advantages are briefly discussed.

Frank van der Velde
A Formal Model of Neuron That Provides Consistent Predictions

We define maximal specific rules that avoid the problem of statistical ambiguity and provide predictions with maximum conditional probability. We also define a special semantic probabilistic inference that learns these maximal specific rules and may be considered a special case of Hebbian learning. We present this inference as a formal model of a neuron and prove that this model provides consistent predictions.
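
As a loose, toy-level illustration of what "maximal specific rules with maximum conditional probability" can mean, the sketch below enumerates candidate rule premises that hold in a given case and keeps the one with the highest estimated conditional probability, preferring more specific (longer) premises on ties. This brute-force enumeration is only an illustration and is not the authors' semantic probabilistic inference.

```python
from itertools import combinations

def conditional_probability(data, premise, conclusion):
    """Estimate P(conclusion | premise) from a list of binary examples (dicts)."""
    covered = [x for x in data if all(x.get(a) == v for a, v in premise)]
    if not covered:
        return 0.0, 0
    hits = sum(1 for x in covered if x.get(conclusion[0]) == conclusion[1])
    return hits / len(covered), len(covered)

def most_specific_best_rule(data, case, conclusion):
    """Among premises that hold in `case`, return the one with the highest
    conditional probability, preferring more specific premises on ties."""
    attrs = [(a, v) for a, v in case.items() if a != conclusion[0]]
    best = (0.0, 0, ())
    for size in range(1, len(attrs) + 1):
        for premise in combinations(attrs, size):
            p, n = conditional_probability(data, premise, conclusion)
            if n and (p, size) > (best[0], len(best[2])):
                best = (p, n, premise)
    return best

data = [{"wet": 1, "cold": 1, "slip": 1},
        {"wet": 1, "cold": 0, "slip": 1},
        {"wet": 0, "cold": 1, "slip": 0},
        {"wet": 0, "cold": 0, "slip": 0}]
print(most_specific_best_rule(data, {"wet": 1, "cold": 1}, ("slip", 1)))
```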

E. E. Vityaev
Safely Crowd-Sourcing Critical Mass for a Self-improving Human-Level Learner/“Seed AI”

Artificial Intelligence (AI), the “science and engineering of intelligent machines”, has yet to create even a simple “Advice Taker” [11]. We argue that this is primarily because more AI researchers are focused on problem-solving or rigorous analyses of intelligence than on creating a “self” that can “learn” to be intelligent, and secondarily due to the excessive amount of time spent re-inventing the wheel. We propose a plan to architect and implement the hypothesis [19] that there is a reasonably achievable minimal set of initial cognitive and learning characteristics (called critical mass) such that a learner starting anywhere above that critical mass will acquire the vital knowledge that a typical human learner would be able to acquire. We believe that a moral, self-improving learner (“seed AI”) can be created today via a safe “sousveillance” crowd-sourcing process, and we propose a plan by which this can be done.

Mark R. Waser
Unconscious Guidance of Pedestrians Using Vection and Body Sway

In daily life our behavior is guided by various visual stimuli, such as the information on direction signs. However, our environmentally-based perceptual capacity is often challenged under crowded conditions, even more so in critical circumstances like emergency evacuations. In those situations, we often fail to pay attention to important signs. In order to achieve more effective direction guidance, we considered the use of unconscious reflexes in human walking. In this study, we experimented with vision-guided walking direction control by inducing subjects to shift their gaze direction using a vection stimulus combined with body sway. We confirmed that a shift in a subject’s walking direction and body sway could be induced by a combination of vection and vibratory stimulus. We propose a possible mechanism for this effect.

Norifumi Watanabe, Takashi Omori
The Analysis of Amodal Completion for Modeling Visual Perception

The challenge of creating a real-life computational equivalent of the human mind encompasses several aspects. Of fundamental relevance is understanding the cognitive functions of natural intelligent systems. Most of the human brain is devoted to perceptual tasks, which are not purely perceptual but also convey emotional competence.

Amodal completion is a widespread phenomenon in vision: it is the ability to perceive an object in its entirety even though some parts of the object are hidden by another entity, as in the case of occlusion. The aim of our study was to test whether certain characteristics of colour can influence the division of a bi-coloured rectangle into its two respective parts when the border between them is occluded by another rectangle and therefore seen amodally.

Liliana Albertazzi, James Dadam, Luisa Canal, Rocco Micciolo
Naturally Biased Associations between Colour and Shape: A Brentanian Approach

The real challenge for a biologically inspired cognitive architecture is understanding and being able to generate the human likeness of artifacts [3]. The basic characteristics of what it means to be human are still imprecise, however. For example, we are still struggling to understand the nature of awareness or meaning.

Vision studies are in no better shape, divided as they are between reduction to neurophysiology and the science of qualitative perceiving [2,8,9]. From the perspective of the latter, we have conducted a battery of studies on the relations between color and shape. Starting from the hypothesis of naturally biased associations in the general population, we tested whether shapes with varying perceptual characteristics lead to consistent choices of colors (for details, see Albertazzi et al. [2]).

Liliana Albertazzi, Michela Malfatti
Architecture to Serve Disabled and Elderly

We propose an architecture (discussed in the context of a dressing and cleaning application for impaired and elderly persons) that combines a cognitive framework generating motor commands with the MOSAIC architecture, which selects the right motor command according to the current context. The ambition is to have robots able to understand human intentions (dressing or cleaning intentions), to learn new tasks only by observing humans, and to represent the world around them by using conceptual spaces. The cognitive framework implements the learning-by-demonstration paradigm and solves the related problem of mapping the observed movement onto the robot's motor system. The framework is assumed to work in two operative modalities: observation and imitation. During observation the robot identifies the main actions and the properties of the involved objects; it then classifies, organizes and labels them in order to create a repertoire of actions and to represent the surrounding world. During imitation the robot selects the proper rules to reproduce the observed movement and generates the appropriate motor commands. The end goal is to connect the generated motor commands to the operative context (dressing or cleaning), and the MOSAIC architecture is a suitable solution for this problem. MOSAIC consists of multiple couplings of predictors, which predict the system's motor behaviour, and controllers, which select the motor command appropriate to the context. The proposed architecture has one controller for each context, and each controller is paired with one predictor. The motor commands generated by the framework are sent to the predictors, whose estimates are then compared with the current sensory feedback; the difference between them yields a prediction error. The smaller the prediction error, the more likely the context. Once the right context is identified, the paired controller is selected and used.
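
The core MOSAIC selection step described above (smaller prediction error implies a more likely context, whose paired controller is then used) can be sketched as a soft-max over negative prediction errors. The two-context setup, the feedback vectors, and the noise scale sigma below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def mosaic_select(sensory_feedback, predictions, sigma=0.1):
    """Soft-select the context whose paired predictor best matches the feedback:
    smaller prediction error -> higher responsibility -> its controller is used."""
    errors = np.array([np.sum((sensory_feedback - p) ** 2) for p in predictions])
    responsibilities = np.exp(-errors / (2 * sigma ** 2))
    responsibilities /= responsibilities.sum()
    return responsibilities, int(np.argmax(responsibilities))

# Two hypothetical contexts: "dressing" (index 0) and "cleaning" (index 1).
predicted = [np.array([0.9, 0.1]), np.array([0.2, 0.8])]   # predictor outputs per context
observed = np.array([0.85, 0.15])                           # current sensory feedback
resp, context = mosaic_select(observed, predicted)
print(np.round(resp, 3), "selected context:", context)
```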

Miriam Buonamente, Magnus Johnsson
Bio-inspired Sensory Data Aggregation

The Ambient Intelligence (AmI) research field focuses on the design of systems capable of adapting the surrounding environmental conditions so that they match the users' needs, whether those are consciously expressed or not [4][1].

In order to achieve this goal, an AmI system has to be endowed with sensory capabilities, in order to monitor environmental conditions and users' behavior, and with cognitive capabilities, in order to obtain full context awareness. AmI systems have to distinguish between ambiguous situations, to learn from past experience by exploiting feedback from the users and from the environment, and to react to external stimuli by modifying both their internal state and the external state.

Alessandra De Paola, Marco Morana
Clifford Rotors for Conceptual Representation in Chatbots

In this abstract we introduce an unsupervised, sub-symbolic procedure for encoding natural language sentences, aimed at capturing the concepts expressed by a user interacting with a robot and representing them in a chatbot Knowledge Base (KB).

The chatbot KB is coded in a conceptual space induced by applying the Latent Semantic Analysis (LSA) paradigm to a corpus of documents. LSA has the effect of decomposing the original relationships between elements into linearly independent vectors. Each basis vector can therefore be considered a “conceptual coordinate”, which can be tagged by the words that best characterize it. This tagging is obtained by applying a TF-IDF-like weighting scheme [3], which we call TW-ICW (term weight-inverse conceptual coordinate weight), to weigh the relevance of each term on each conceptual coordinate.
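
A minimal sketch of the LSA step described above: a truncated SVD of a toy term-by-document matrix yields a low-dimensional conceptual space, and each conceptual coordinate is tagged with the terms that weigh most heavily on it. The simple weight-magnitude tagging stands in for the TW-ICW scheme, which is not reproduced here; the tiny corpus and the dimensionality are assumptions.

```python
import numpy as np

def conceptual_space(term_doc_matrix, terms, k=2):
    """Build a k-dimensional 'conceptual space' via truncated SVD (LSA) and tag
    each conceptual coordinate with the terms that weigh most on it."""
    U, S, Vt = np.linalg.svd(term_doc_matrix, full_matrices=False)
    term_coords = U[:, :k] * S[:k]              # terms projected into the space
    tags = []
    for axis in range(k):
        order = np.argsort(-np.abs(term_coords[:, axis]))
        tags.append([terms[i] for i in order[:3]])
    return term_coords, tags

terms = ["robot", "arm", "motor", "music", "song", "melody"]
# Tiny toy term-by-document count matrix (documents as columns).
X = np.array([[3, 2, 0, 0],
              [2, 1, 0, 0],
              [1, 2, 0, 0],
              [0, 0, 2, 3],
              [0, 0, 3, 1],
              [0, 0, 1, 2]], dtype=float)
coords, tags = conceptual_space(X, terms)
print(tags)
```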

Agnese Augello, Salvatore Gaglio, Giovanni Pilato, Giorgio Vassallo
Neurogenesis in a High Resolution Dentate Gyrus Model

It has often been thought that adult brains are unable to produce new neurons. However, neurogenesis, or the birth of new neurons, is a naturally occurring phenomenon in a few specific brain regions. The well-studied dentate gyrus (DG) region of the hippocampus in the medial temporal lobe is one such region. Nevertheless, the functional significance of neurogenesis is still unknown. Artificial neural network models of the DG not only provide a framework for investigating existing theories, but also aid in the development of new hypotheses and lead to a greater understanding of neurogenesis.

Craig M. Vineyard, James B. Aimone, Glory R. Emmanuel
A Game Theoretic Model of Neurocomputation

Within brains, complex dynamic neural interactions are an aggregate of excitatory and inhibitory neural firings. This research considers some key neuron operating properties and functionality in order to analyze the dynamics through which neurons interact and respond collectively. Frameworks such as the Ising, sandpile, and Björner models are often developed as simplifications, but still provide an elegant basis for modeling complex interactions. Petri nets also serve as a rich mathematical framework for describing distributed systems. Building on this research, we present a game-theoretic neural chip-firing model that is based on the spreading activation of neurotransmitters and is capable of representing aspects of neural activity.
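
To illustrate the chip-firing substrate the abstract builds on (without the game-theoretic layer), the sketch below runs standard sandpile-style chip-firing on a small ring of "neurons": a node whose chip count reaches its firing threshold sends one chip along each outgoing edge. The graph, thresholds, and initial chip counts are illustrative assumptions.

```python
import numpy as np

def chip_firing(adjacency, chips, threshold, steps=100):
    """Sandpile / chip-firing dynamics on a neuron graph: a node whose chip count
    reaches its threshold 'fires', sending one chip to each neighbour."""
    chips = chips.copy()
    fired = np.zeros_like(chips)
    for _ in range(steps):
        active = np.flatnonzero(chips >= threshold)
        if active.size == 0:
            break                                # stable configuration reached
        for i in active:
            chips[i] -= adjacency[i].sum()       # lose one chip per outgoing edge
            chips += adjacency[i]                # each neighbour gains one chip
            fired[i] += 1
    return chips, fired

# Small ring of 5 'neurons'; node 0 starts over its firing threshold.
A = np.zeros((5, 5), dtype=int)
for i in range(5):
    A[i, (i + 1) % 5] = A[i, (i - 1) % 5] = 1
chips = np.array([4, 0, 0, 0, 0])
threshold = np.full(5, 2)
print(chip_firing(A, chips, threshold))
```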

Craig M. Vineyard, Glory R. Emmanuel, Stephen J. Verzi, Gregory L. Heileman
Backmatter
Metadata
Title: Biologically Inspired Cognitive Architectures 2012
Editors: Antonio Chella, Roberto Pirrone, Rosario Sorbello, Kamilla Rún Jóhannsdóttir
Copyright Year: 2013
Publisher: Springer Berlin Heidelberg
Electronic ISBN: 978-3-642-34274-5
Print ISBN: 978-3-642-34273-8
DOI: https://doi.org/10.1007/978-3-642-34274-5
