Introduction
Necessity of a multilevel approach to cognition
Roadmap towards a “middle-out” approach
Potential benefits
- a simulation of a functional model of a brain as a symbolic virtual machine
- a graphical formalism whose repetitive patterns could be identified as its neural circuits
- an experimental platform that allows for reproducing these simulations.
Materials and methods
Bottom up design of virtual circuits
A case of classical conditioning
A weak conditioned stimulus cs elicits a weak defensive reflex, and a strong noxious unconditioned stimulus us produces a massive withdrawal reflex. After a few pairings of stimuli cs and us, where cs slightly precedes us, a stimulus cs alone will trigger a significantly enhanced withdrawal reflex, i.e., the organism has learned a new behavior. This can be represented by a wiring diagram, or virtual circuit (Fig. 1), adapted from Carew et al. (1981) to allow for a one-to-one correspondence with symbolic expressions.
The components sense(us) and sense(cs) are coupled with sensors (not shown here) capturing external stimuli us and cs and correspond to sensory neurons. The components motor(us) and motor(cs) are coupled with action effectors (also not shown) and correspond to motor neurons. Finally, the component ltp embodies the mechanism of long term potentiation and acts as a facilitatory interneuron reinforcing the pathway (i.e., augmenting its weight) between sense(cs) and motor(cs). The interactions of these components are represented by the iconic symbols ->=>- and /|\ that correspond to a synaptic transmission (i.e., ->=>- represents a synapse) and to the modulation of a synapse, respectively. The symbols * and + stand for conjunctive and disjunctive operators (i.e., they are used to represent the convergence of incoming signals and the dissemination of an outgoing signal, respectively). Classical conditioning then follows from the application of a Hebbian learning principle, i.e., "neurons that fire together wire together" (Hebb 1949; Gerstner and Kistler 2002), applied here to the near-coincident stimuli cs and us, which in conclusion leads to implementing the ltp component as a detector of coincidence.

A simple case of operant conditioning
Each food item is represented by an input vector I = [I1, I2, …] of primitive features (e.g., vectors [mat,smooth] and [shiny,smooth]
could correspond to grains and pebbles, respectively). The generic circuit given in Fig. 2, where
I stands as a parameter, represents the wiring of four components sense(I), learn(accept(I)), accept(I) and reject(I), together with two ltp and two opposite ltd (for long term depression) components. In addition to the external stimuli captured by component sense(I), this circuit incorporates the two internal stimuli excite(accept(I)) and inhibit(accept(I)) that correspond to feedback from probing the food according to a set of accept elements I. At the beginning of the simulation, and for any I, the pathway from sense(I) to learn(accept(I)) is open, while the pathways to both accept(I) and reject(I) are closed. After a few trials, the pigeon will no longer probe its food, i.e., it will have learned to close the pathway to learn(accept(I)) and to open either accept(I) or reject(I), thus associating each input vector I with an action. With regard to the hypothetical neurological substrate corresponding to this scheme, let us just mention that this process matches some recent results from neuroscience, where emergent pictures of the brain are based on the existence of:
- two populations of neurons that have opposing spiking patterns in anticipation of movement, suggesting that these reflect neural ensembles engaged in a competition (Zagha et al. 2015)
- a fundamental principle in circuit neuroscience according to which inhibition in neuronal networks during baseline conditions allows in turn for disinhibition, which then stands as a key mechanism for circuit plasticity, learning, and memory retrieval (Letzkus et al. 2015).
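The pathway-gating dynamics just described can be sketched in a few lines of Python. This is a hypothetical rendering: the component names (learn, accept, reject) and the initial open/closed pathways follow the text, but the threshold value and the unit ltp/ltd weight updates are my assumptions, not the paper's virtual code.

```python
# Sketch of operant conditioning as threshold-gated pathways.
# A pathway is "open" when its weight stands above THRESHOLD (assumed value).
THRESHOLD = 1.0

class OperantCircuit:
    def __init__(self):
        # initially only the pathway to learn(accept(I)) is open
        self.w = {"learn": 2.0, "accept": 0.0, "reject": 0.0}

    def open_pathways(self):
        return {p for p, w in self.w.items() if w >= THRESHOLD}

    def trial(self, feedback):
        """One probing trial; feedback is 'excite' or 'inhibit'."""
        self.w["learn"] -= 1.0             # ltd: the pigeon stops probing
        target = "accept" if feedback == "excite" else "reject"
        self.w[target] += 1.0              # ltp: open the associated action

circuit = OperantCircuit()
# an edible item (grain) yields an excite(accept(I)) feedback on each probe
for _ in range(2):
    circuit.trial("excite")
print(circuit.open_pathways())             # the accept pathway is now open
```

After a few trials the learn pathway closes and the input vector is directly associated with an action, as in the text.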
Representing circuits by symbolic expressions
Two differences between threads and neurons are worth noting:
- contrary to a neuron that alternates roles in cycles, a thread can be simultaneously a source and a recipient by maintaining parallel communications
- contrary to traditional neuron models in which incoming signals are summed in some way into an integrated value, thread inputs can be processed individually.
The symbol ->=>- corresponding to a synaptic transmission is implemented by a send/receive pair, and the symbol /|\ corresponding to the modulation of a synapse is implemented by a join/merge pair. A thread named Thread will be represented by a symbolic expression having the format thread(Thread,Tree), where Tree is an instruction tree. Similarly, a named fiber in a named model will be represented by an expression threads(Model(Fiber):List), where List is a list of thread expressions. As an example, the circuit in Fig. 1 gives rise to the fiber expression given in Fig. 3.
Instruction trees are built from primitive instructions such as fire, send, merge, etc. As another example illustrated in Fig. 2, an instruction tree can contain an alternative (e.g., as in the thread try that has two branches commanded by a guard). Formally, symbolic expressions representing instruction trees belong to a language S whose syntax is defined by the production rules given in Fig. 4.
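The thread(Thread, Tree) and threads(Model(Fiber): List) formats can be mirrored by plain nested tuples. This is only an illustrative encoding: the tuple layout and the sample tree below are my assumptions, since the actual fiber expression of Fig. 3 is not reproduced here.

```python
# Minimal Python rendering of the symbolic formats thread(Thread, Tree)
# and threads(Model(Fiber): List) from the text.

def thread(name, tree):
    return ("thread", name, tree)

def threads(model, fiber, thread_list):
    return ("threads", (model, fiber), thread_list)

# a hypothetical fragment of the Fig. 1 fiber: sense(cs) fires and then
# signals motor(cs), which waits to receive from sense(cs)
fiber = threads("conditioning", "aplysia", [
    thread("sense(cs)", ("seq", ("fire", "motor(cs)"), ("send", "motor(cs)"))),
    thread("motor(cs)", ("receive", "sense(cs)")),
])

names = [t[1] for t in fiber[2]]
print(names)
```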
The terminal symbols of this grammar comprise the instructions fire, send, merge, etc. (see the "Appendix" for a definition of this instruction set). This language S of instruction trees is not to be confused with the language L that will be used to define virtual code implications (and more generally the state of a virtual machine, see "Top down construction of a virtual machine" section) into which instruction trees will then be compiled, as illustrated below.

Compiling instruction trees into virtual code implications
Virtual code implications have the generic format
- Guard => T:Instruction
A first example is the compilation of the thread sense(us) in Fig. 3. A second example is the thread learn(accept(I)) from Fig. 2, whose instruction tree contains an alternative; it will be compiled into virtual code implications whose successive clock values correspond to possible descents into the tree:
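Since the compiled listings are not reproduced here, the clock-indexing idea can be sketched as follows. This is a hedged reconstruction: the text only states the Guard => T:Instruction format and that successive clock values encode descents into the tree, so the concrete guard syntax and the "two branches of an alternative share one clock value" convention are assumptions.

```python
# Sketch of compiling an instruction tree into clock-indexed implications
# of the form (guard, T, instruction), i.e. "Guard => T:Instruction".

def compile_thread(name, instructions):
    """Flatten a list of instructions; an ('alt', guard, then, else) node
    yields two implications sharing the same clock value T."""
    rules, clock = [], 1
    for ins in instructions:
        if isinstance(ins, tuple) and ins[0] == "alt":
            _, guard, then_ins, else_ins = ins
            rules.append((guard, clock, then_ins))
            rules.append((f"not({guard})", clock, else_ins))
        else:
            rules.append(("true", clock, ins))
        clock += 1
    return rules

# hypothetical tree for learn(accept(I)): receive, then a guarded alternative
code = compile_thread("learn(accept(I))",
                      ["receive(sense(I))",
                       ("alt", "excite(accept(I))", "ltp", "ltd")])
for guard, t, ins in code:
    print(f"{guard} => {t}:{ins}")
```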
Top down construction of a virtual machine
The state of this machine comprises:
- a set of registers comprising, for each active thread, a local clock and four internal stimuli registers (i.e., fetch, catch, excite, inhibit) holding one value at a time
- a set of local signal queues attached to active threads and holding multiple values at a time
- a content addressable memory holding the virtual code implications attached to threads, as well as the sets of current weights and accept elements.
A few remarks are in order:
- contrary to traditional stored-program computers, this machine does not have an instruction register holding the current instruction being executed after its retrieval from an addressable memory; by interpreting code deduced just in time from virtual implications, themselves compiled from thread configurations that are akin to brain states, the overall architecture of this system could turn out to be closer to that of a brain
- virtual code implications are reminiscent of daemons that run as computer background processes and are triggered by foreground application software; daemons were in common use in the early days of the Artificial Intelligence paradigm, when neuroscience did not yet provide a neural substrate for models of perception and cognition (Powers 2015)
- similarly to machine code compiled from application software, this new kind of daemon is compiled from thread fibers that are thus akin to cognition software
- finally, as described above, these daemons are triggered by local deductions within a given stream; global deductions at the model level (to be introduced below) will give access, from within any stream, to previously active threads that will thus achieve the status of a global memory (see "Microcircuits implementing synaptic plasticity" sections).
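The machine state enumerated above can be pictured with small data classes. The field names (clock, fetch, catch, excite, inhibit, signal queues, content addressable memory) follow the text; the concrete data layout, including the keyed dictionary used as content addressable memory, is an assumption.

```python
# Sketch of the virtual machine state: per-thread registers and signal
# queues, plus a shared memory of implications, weights and accept elements.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class ThreadState:
    clock: int = 1
    # four internal stimuli registers, each holding one value at a time
    fetch: object = None
    catch: object = None
    excite: object = None
    inhibit: object = None
    signals: deque = field(default_factory=deque)   # holds multiple values

@dataclass
class Machine:
    threads: dict = field(default_factory=dict)     # active threads
    memory: dict = field(default_factory=dict)      # implications, weights, accepts

m = Machine()
m.threads["sense(cs)"] = ThreadState()
m.threads["sense(cs)"].signals.append("spike")
m.memory[("weight", "sense(cs)", "motor(cs)")] = 1
print(len(m.threads), m.threads["sense(cs)"].signals[0])
```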
Results
Microcircuits implementing synaptic plasticity
The following communication protocols will be used as building blocks:
- send/receive, denoted by the symbols ->=>- or -<=<-
- join/merge, denoted by /|\ or \|/, implementing long term potentiation/depression (ltp/ltd)
- push/pull, denoted by -<A>-, implementing a short term cache memory (stm)
- store/retrieve, denoted by -{P}-, implementing an associative long term memory (ltm) based on long term storage and retrieval (lts/ltr).
Synaptic transmission
The paired instructions send(Q) and receive(P) correspond to the transmission of a local signal by a pre-synaptic neuron P followed by its reception by a post-synaptic neuron Q, and are used to model local communications within a given stream. The firing of P is assumed to have occurred earlier, e.g., in reaction to the capture of an external stimulus. These expressions give rise to the communication protocol given in Fig. 5.
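This asynchronous, threshold-gated exchange can be sketched with a signal queue. The queue-based rendering and the numeric threshold are my assumptions; what the sketch preserves from the text is that no data is passed, that the sender proceeds immediately, and that the receiver proceeds only when the weight stands above the threshold.

```python
# Sketch of the send/receive protocol of Fig. 5.
from collections import deque

weights = {("P", "Q"): 1}      # predefined weight, adjustable by ltp/ltd
THRESHOLD = 1                  # assumed value
queues = {"Q": deque()}        # local signal queue of the receiver

def send(sender, receiver):
    """Sender posts a signal and immediately goes on (asynchronous)."""
    queues[receiver].append(sender)

def receive(receiver, sender):
    """Receiver proceeds only if a signal is queued and the weight
    between sender and receiver stands above the threshold."""
    if queues[receiver] and queues[receiver][0] == sender \
            and weights[(sender, receiver)] >= THRESHOLD:
        queues[receiver].popleft()
        return True            # no data is passed; Q merely proceeds
    return False

send("P", "Q")
print(receive("Q", "P"))
```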
The send/receive protocol corresponds to an asynchronous communication subject to a threshold. It involves a predefined weight between the sender P and the receiver Q. This weight can be incremented/decremented by an ltp/ltd thread. After firing thread Q and sending it a signal, thread P goes on executing its next instruction. On the other side, thread Q waits for the reception of a signal from thread P and proceeds only if the weight between P and Q stands above a given threshold. In any case no data is passed between the two threads, and the overall process just amounts to allowing Q to proceed on behalf (or at the demand) of P.

Long term potentiation/depression (ltp/ltd)
The join/merge pair is used in conjunction with the send/receive pair in order to implement the modulation of a synapse. The following microcircuit implementing ltp gives rise to the protocol given in Fig. 6.
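The coincidence detection performed by ltp can be sketched as a join thread waiting on a signal posted by a merge thread. This is a simplification under stated assumptions: the text describes threads communicating in parallel, whereas the sketch below serialises the two events; the set-based signal store is also mine.

```python
# Sketch of ltp as a coincidence detector: sense(us) fires ltp, which
# joins (waits) on a signal merged (posted) by sense(cs); on coincidence,
# the sense(cs) -> motor(cs) weight is incremented.

weights = {("sense(cs)", "motor(cs)"): 0}
posted = set()                  # signals posted by merge threads

def merge(source):
    """sense(cs) posts a signal for the waiting ltp thread."""
    posted.add(source)

def join_and_potentiate(waited_source, pre, post):
    """The ltp thread: if the waited signal is present, reinforce."""
    if waited_source in posted:
        weights[(pre, post)] += 1
        posted.discard(waited_source)   # the signal is consumed
        return True
    return False

merge("sense(cs)")                                   # cs arrives
fired = join_and_potentiate("sense(cs)",             # us arrives shortly after
                            "sense(cs)", "motor(cs)")
print(fired, weights[("sense(cs)", "motor(cs)")])
```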
In the case of classical conditioning, the role of ltp is to detect the coincidence of stimuli cs and us. Towards this end, sense(us) fires an ltp thread that in turn calls on a join thread to wait for a signal from sense(cs). In parallel, sense(cs) calls on a merge thread to post a signal for ltp and then executes a send(motor(cs)) command to motor(cs). When met by sense(cs), thread ltp eventually increments the weight between sense(cs) and motor(cs).

Short term cache memory (stm)
The push/pull pair implements a short term cache memory for a variable object A. This can be represented by a microcircuit which gives rise to the protocol in Fig. 7.
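The one-slot overwrite semantics of stm(A) can be sketched directly. This is a minimal rendering of the push/pull idea; the class-based encoding is my choice, not the paper's virtual code.

```python
# Sketch of the push/pull short term cache memory stm(A): a one-slot
# cache where a push makes the previous value unavailable.

class Stm:
    def __init__(self):
        self.value = None
    def push(self, value):
        self.value = value      # the previous value is no longer available
    def pull(self):
        return self.value

stm_a = Stm()
stm_a.push("flower(food)")
stm_a.push("flower([])")        # overwrites the earlier entry
print(stm_a.pull())
```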
An update of stm(A) means that the previous value of A is no longer available. Furthermore, broadcasting a path, which amounts to posting a global signal, means that it can be received by any thread Q attached to any stream.

Associative long term memory (ltm) based on long term storage and retrieval (lts/ltr)
This protocol allows for two threads P and Q attached to separate streams (and thus also possibly active at different times) to be associated in order to trigger a recall thread R. These two streams will be linked together via a long term memory ltm(P) thread embedded in a microcircuit driven by a double communication protocol depicted by -{P}-. This new protocol involves two complementary long term storage/retrieval (lts/ltr) threads that allow for the building of a storage trace and a later retrieval of previously active threads. This is well in line with results by Rubin and Fusi (2007) demonstrating that if the initial memory trace in neurons is below a certain threshold, then it cannot be retrieved immediately after the occurrence of the experience that created the memory. This can be represented by a microcircuit which gives rise to the communication protocol in Fig. 8.
Contrary to an ltp(Q,R) thread (which gets fired by P and waits for a local signal from Q in order to relate Q and R), an ltr(P,Q,R) thread is fired by Q and waits for a path to ltm(P) in order to relate Q and R.
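The lts/ltr association can be sketched with a dictionary of traces. This is a hedged simplification: the text describes threads possibly active at different times and a path-based retrieval, whereas the sketch collapses that into store and lookup operations; the trace format is my assumption.

```python
# Sketch of the store/retrieve (lts/ltr) association: a thread P stores a
# trace in ltm(P); a later thread Q retrieves it, triggering a recall R.

ltm = {}                        # long term memory traces

def lts(p, trace):
    """Long term storage: build a trace for thread P."""
    ltm[p] = trace

def ltr(p, q):
    """Long term retrieval: fired by Q, waits for a path to ltm(P)
    and, if one exists, relates Q to the recalled trace."""
    if p in ltm:
        return (q, ltm[p])      # the association triggers the recall
    return None

lts("P", "food(banana)")        # first phase: a category is memorised
recall = ltr("P", "Q")          # second phase: a later thread recalls it
print(recall)
```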
Computational architecture formal specifications
Functional signatures
Formal specifications
The following conventions are used:
- identifiers starting with capital letters represent variables
- the expression F(|X) represents a term with an arbitrary atomic functor F and any number of arguments, e.g., F(|X) can be unified with p(1), f(a,b), etc.
- the character " – " represents a blank variable whose instantiation is not required.
Most instructions (e.g., loop, interrupt, if then else, etc.) have an intuitive meaning and won't be developed here. The others represent an implementation of the formal notions of a context and of contextual deduction.

Implementing a context as a dynamic set of elements
Setting the value of a register in context
This is implemented by a set operation:
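Since the original listing is not reproduced here, a sketch of a set operation over a context held as a dynamic set of elements might look as follows. The set-of-tuples encoding is my assumption.

```python
# Sketch of setting a register in context: set(R, V) drops any previous
# binding of register R before inserting the new one.

context = set()

def set_register(register, value):
    stale = {e for e in context if e[0] == register}
    context.difference_update(stale)    # remove the stale binding
    context.add((register, value))      # insert the new one

set_register("excite", "accept(grain)")
set_register("excite", "accept(pebble)")   # replaces the previous value
print(sorted(context))
```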
Contextual deduction
The ist predicate, standing for "is true in this context", is defined as follows:
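As the original definition is not reproduced here, a hedged sketch of ist over the same set-of-elements contexts could be the following. Rendering variables as None wildcards, instead of full unification, is a deliberate simplification of mine.

```python
# Sketch of ist ("is true in this context"): a goal holds if it matches
# some element of the context; None plays the role of a variable.

def match(pattern, element):
    """Very small structural match; no full unification is attempted."""
    if pattern is None:
        return True
    if isinstance(pattern, tuple) and isinstance(element, tuple):
        return len(pattern) == len(element) and \
               all(match(p, e) for p, e in zip(pattern, element))
    return pattern == element

def ist(context, goal):
    return any(match(goal, element) for element in context)

ctx = {("excite", "accept(grain)"), ("clock", 2)}
print(ist(ctx, ("excite", None)), ist(ctx, ("inhibit", None)))
```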
Compiling virtual code implications
Loading a model
Running a model
Examples of mesoscale circuits
A model of the first level of animal awareness
The external stimuli are represented by the expressions a(I), b(J), c(K), where a, b, c correspond to the left button, the right button and the sample, respectively, and the parameters I, J, K take the values green or red. In addition to these external stimuli, the internal stimuli fetch(a), excite(peck(a(I),c(K))) and fetch(b), excite(peck(b(J),c(K))) first command the choice made by the pigeon (i.e., either "left" or "right", resulting from a random selection) and then provide a positive feedback when the choice was correct (i.e., the pigeon got rewarded). As an example, if the input configuration is a([green]), b([red]), c([green]), then the correct choice is fetch(a) leading to peck(a), i.e., pecking the button among a and b whose color I or J matches the color K of the sample c. Note that the pathways to peck(a) and peck(b) are opened by an ltp thread initiated by the middle layer whenever a trial ends with a reward.
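The matching-to-sample task above can be rendered as a toy simulation. The control flow is a sketch, not the paper's virtual code: the random fetch choice and the reward-triggered ltp follow the text, while the weight bookkeeping is my assumption.

```python
# Toy rendering of matching to sample: peck the button whose colour
# matches the sample c; a rewarded trial opens the pathway via ltp.
import random

weights = {"peck(a)": 0, "peck(b)": 0}

def trial(i, j, k):
    """i, j: button colours; k: sample colour."""
    choice = random.choice(["a", "b"])       # fetch(a) or fetch(b)
    colour = i if choice == "a" else j
    rewarded = (colour == k)
    if rewarded:                             # excite feedback -> ltp
        weights[f"peck({choice})"] += 1
    return choice, rewarded

random.seed(0)
for _ in range(20):
    trial("green", "red", "green")           # button a is always correct here
print(weights["peck(a)"] > 0, weights["peck(b)"])
```

Over repeated trials only the matching pathway accumulates weight, mirroring the text's claim that pathways are opened whenever a trial ends with a reward.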
A model of the second level of animal awareness
- an information stage can be initiated by one of two threads sense(A(I),B([])) and sense(A([]),B(J)), where the parameters A, B denote for example the left and right corner, the parameters I and J are the expression flower(food) signaling a flower with food, and "[]" signals a location without a flower
- the choice stage is initiated by a thread sense(A(I),B(J)), where I and J can be either one of two expressions flower(food) and flower([]) corresponding to the location of a flower with food and without food, respectively
- these two stages are interconnected via a new interaction protocol denoted by -<A>- or -<B>- allowing for the short term cache memory (or stm) of location A or B.
The model involves two parameters A and B allowing for the representation of various environments. They then add: "If, however, the organism were truly aware of using the rule, it would, when transferred to a win/shift lose/stay paradigm, readjust after only a few trials", which actually they do not. This is reflected in the model by the fact that implementing the converse win/shift lose/stay strategy requires considering negative inhibit feedbacks instead of the positive excite used above.

A model of the third level of animal awareness
This model relies on the protocol -{P}- implementing an associative long term memory (or ltm). This protocol involves two complementary long term storage/retrieval (lts/ltr) processes that allow for the building of a thread storage trace and a later retrieval from this trace. The first phase (Fig. 12), which starts with the categorization of each variable object X into edible and inedible items, will end up memorizing familiar objects as an association {food(X)} or {toy(X)}.
The second phase, involving locations A and B, starts with a recall from familiar objects. As a result of remembering the category of object X, the sorting process applies to all familiar objects without additional training.
Running a simulation
In the following log, inputs and outputs are prefixed by the symbols |: and >>, respectively. The log of a simulation run for sorting objects is as follows:
Discussion
Comparative approach
From single neurons to neural assemblies
Proposal characteristics
A central characteristic of this proposal is its associative long term memory (ltm) based on long term storage and retrieval (lts/ltr), as introduced in the "Associative long term memory (ltm) based on long term storage and retrieval (lts/ltr)" section. This whole approach relies on the direct mapping of perceived invariant structures. This mechanism reflects in particular the prime importance of vision as a means of first carving the brain to reflect the reality of the world, and then act on it in return (Barret 2008).

Potential benefits
- first, inducing plausible mesoscale circuits that represent the application of rules such as matching/oddity to sample, win/stay lose/shift, recall/sort, corresponding to the solution of elementary cognitive tasks such as association, cross-modal integration, etc.
- then, embedding these circuits in order to solve higher level tasks such as meta-cognition.
- neurons communicate through action potentials, or "spikes", i.e., asynchronous impulses whose height and width are largely invariant; consequently, information is conveyed only in the identity of the neuron that spiked and the time at which it spiked
- the information flow in a network can be represented as a time series of neural identifiers; this allows for the encoding of neural activity through the so-called address event representation (AER) information protocol (Boahen 2000).
Open perspectives
Just as complementary ltp/ltd processes drive synaptic plasticity, we put forward the hypothesis that our proposed complementary lts/ltr processes play a similar role for reentry. In support of this thesis, let us simply confront Edelman's (1987) own words: "One of the fundamental tasks of the nervous system is to carry on adaptive perceptual categorization (..). A necessary condition for such perceptual categorization is assumed to be reentry" with the very fact that lts/ltr associative processes were introduced in "Results" in order to implement the concept of a category. This suggests the following tentative scheme:
- potential conscious contents P might have first to be memorized (or directly produced) in {P} via an lts or some other equivalent process
- a triggering event Q might then be required in order to elicit the retrieval of {P}
- the association of {P} and Q could finally be made "conscious" in R via an ltr or another equivalent, possibly amplifying process.
Conclusion
In this perspective, the lts/ltr pair might be a candidate for the role of the canonical microcircuit looked for in Modha et al. (2011). Finally, if ltp/ltd threads have been explained in the light of the so-called spike-time dependent plasticity, or STDP (Markram et al. 1997; Brette et al. 2007), their extension into hypothetical lts/ltr threads raises the issue of their possible grounding in actual biological processes.