Abstract
This article is the second step in our research into the Symbol Grounding Problem (SGP). In a previous work, we defined the main condition that must be satisfied by any strategy in order to provide a valid solution to the SGP, namely the zero semantic commitment condition (Z condition). We then showed that all the main strategies proposed so far fail to satisfy the Z condition, although they provide several important lessons to be followed by any new proposal. Here, we develop a new solution to the SGP. It is called praxical in order to stress the key role played by the interactions between the agents and their environment. It is based on a new theory of meaning, Action-based Semantics (AbS), and on a new kind of artificial agent, the two-machine artificial agent (AM²). Thanks to their architecture, AM²s implement AbS, and this allows them to ground their symbols semantically and to develop some fairly advanced semantic abilities, including semantically grounded communication and the elaboration of representations, while still respecting the Z condition.
Notes
In the same sense in which "praxis" is used to refer to "theory in practice", we use "praxical" to qualify interactions that are information- or knowledge-oriented. An embodied and embedded agent has a praxical relation with its surroundings when it learns about, and operates on, its environment in ways that are conducive to the acquisition of implicit information or knowledge about it. In human agents, practical experience is non-theoretical, whereas praxical experience is pre- but also pro-theoretical, as it is conducive to theory.
For a robot with skills similar to fotoc's, see the Lego Wall Follower. It is equipped with a turret that rotates its sensor in the appropriate direction when a wall is detected; see http://www.techeblog.com/index.php/tech-gadget/lego-roverbot for a more detailed description.
Diurnal motion (whether of flowers or of leaves) is a response to the direction of the sun, performed by motor cells in flexible segments of the plant that are specialized in reversibly pumping potassium ions into nearby tissues, thus changing the turgor pressure.
In the theory of Levels of Abstraction (LoAs), discrete mathematics is used to specify and analyse the behaviour of information systems. The definition is as follows: given a well-defined set X of values, an observable of type X is a variable whose value ranges over X. A LoA is a collection of observables, each of a given type, that is, each with a well-defined set of possible values or outcomes. The LoA is determined by the way in which one chooses to describe, analyse and discuss a system and its context. Each LoA makes possible an analysis of the system, the result of which is called a model of the system. Evidently, a system may be described at a range of LoAs and so can have a range of models. More intuitively, a LoA is comparable to an 'interface', which consists of a set of features, the observables.
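The definition above can be made concrete in code. The following Python fragment is an illustrative sketch only (the names, the traffic-light system and its observables are our hypothetical example, not part of the original formalism): it models an observable as a variable with a well-defined set of possible values, a LoA as a collection of such observables, and a model as the system's state viewed through that collection.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Observable:
    """An observable of type X: a variable whose value ranges over
    a well-defined set X of possible values."""
    name: str
    values: frozenset  # the well-defined set X

    def is_valid(self, v) -> bool:
        return v in self.values

@dataclass(frozen=True)
class LevelOfAbstraction:
    """A LoA: a collection of observables. Choosing the observables
    fixes how the system is described and analysed."""
    observables: tuple

    def model(self, state: dict) -> dict:
        # A model of the system at this LoA: the state restricted to
        # the chosen observables, with each value checked against the
        # observable's set of possible outcomes.
        m = {}
        for o in self.observables:
            v = state.get(o.name)
            if v is not None and o.is_valid(v):
                m[o.name] = v
        return m

# Two LoAs for the same (hypothetical) traffic-light system:
colour = Observable("colour", frozenset({"red", "amber", "green"}))
powered = Observable("powered", frozenset({True, False}))

driver_loa = LevelOfAbstraction((colour,))            # what a driver observes
engineer_loa = LevelOfAbstraction((colour, powered))  # what an engineer observes

state = {"colour": "red", "powered": True, "internal_voltage": 12.1}
print(driver_loa.model(state))    # {'colour': 'red'}
print(engineer_loa.model(state))  # {'colour': 'red', 'powered': True}
```

The same state yields two different models because the two interfaces expose different observables, which is exactly the sense in which a system can have a range of models, one per LoA.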
Even if LoAs are not yet directly involved in the emergence of the elementary abilities required to overcome the SGP, a clear analysis of an agent’s LoAs is crucial in order to understand the development of advanced semantic abilities. Hence, it is important to introduce an explicit reference to them at this early stage in the description of the architecture of an AM².
Bacteria interact with the external environment, sending and receiving signals. The transmission of the signal is possible thanks to receptors, glycoproteins, on the membrane. Such receptors interact with the signal's molecules, the ligands. The interaction triggers a change that, in turn, produces a new behaviour in the bacterium.
Acknowledgements
A first draft of this paper was the topic of a seminar given by one of us (Mariarosaria) at the Department of Philosophy of the University of Padua, and we are grateful to the participants for their helpful discussions. We would also like to thank Massimiliano Carrara, Roberto Cordeschi and Jeff Sanders for their suggestions and comments on several versions of this paper. We are very grateful to the members of our research group on the philosophy of information, the IEG, at the University of Oxford for their useful comments; in particular, we would like to acknowledge the help and very valuable feedback from Sebastian Sequoiah-Grayson and Matteo Turilli. Filippo Menczer very kindly provided many suggestions about the use of ELSA and Local Selection in solving the Symbol Grounding Problem. None of the people listed above is responsible for any remaining mistakes.
Cite this article
Taddeo, M., Floridi, L. A Praxical Solution of the Symbol Grounding Problem. Minds & Machines 17, 369–389 (2007). https://doi.org/10.1007/s11023-007-9081-3