
Table of Contents

Frontmatter

Abduction, Problem Solving, and Practical Reasoning

Frontmatter

Virtuous Distortion

Abstraction and Idealization in Model-Based Science

The use of models in the construction of scientific theories is as widespread as it is philosophically interesting (and, one might say, vexing). Neither in philosophical analysis nor in scientific practice do we find a univocal concept of model; but there is an established usage in which a model is constituted, at least in part, by the theorist’s idealizations and abstractions. Idealizations are expressed by statements known to be false. Abstractions are achieved by suppressing what is known to be true. Idealizations, we might say, over-represent empirical phenomena, whereas abstractions under-represent them. Accordingly, we might think of idealizations and abstractions as one another’s duals. In saying what is false and failing to say what is true, idealization and abstraction introduce distortions into scientific theories. Even so, the received and deeply entrenched view of scientists and philosophers is that these distortions are both necessary and virtuous. A good many people who hold this view see the good of models as merely instrumental, in a sense intended to contrast with “cognitive”. Others, however, take the stronger and more philosophically challenging position that the good done by these aspects of scientific modeling is cognitive in nature. Roughly speaking, something has instrumental value when it helps produce a result that “works”. Something has cognitive value when it helps produce knowledge. Accordingly, a short way of making the cognitive-virtue claim is as follows: saying what is false and suppressing what is true is, for wide ranges of cases, indispensable to the production of scientific knowledge. Yet, given the sheer volume of traffic in the modeling literature, focused discussion of what makes these distortions facilitators of scientific knowledge has attracted comparatively slight analytical attention from philosophers of science and philosophically minded scientists.
This is perhaps less true of the distortions effected by abstraction than of those constituted by idealization. Still, relative to the scale on which the models methodology is used, these discussions are not remotely as widespread and, even when they do occur, are not particularly “thick”. The principal purpose of this paper is to thicken the analysis of the cognitive virtuosity of falsehood-telling and truth-suppression. The analysis will emphasize the influence of these factors on scientific understanding.

John Woods, Alirio Rosales

Naturalizing Peirce’s Semiotics: Ecological Psychology’s Solution to the Problem of Creative Abduction

The study of model-based reasoning (MBR) is one of the most interesting recent developments at the intersection of psychology and the philosophy of science. Although a broad and eclectic area of inquiry, one central axis by which MBR connects these disciplines is anchored at one end in theories of internal reasoning (in cognitive science), and at the other, in C.S. Peirce’s semiotics (in philosophy). In this paper, we attempt to show that Peirce’s semiotics actually has more natural affinity on the psychological side with ecological psychology, as originated by James J. Gibson and especially Egon Brunswik, than it does with non-interactionist approaches to cognitive science. In particular, we highlight the strong ties we believe to exist between the triarchic structure of semiotics as conceived by Peirce, and the similar triarchic structure of Brunswik’s lens model of organismic achievement in irreducibly uncertain ecologies. The lens model, considered as a theory of creative abduction, provides a concrete instantiation of at least one, albeit limited, interpretation of Peirce’s semiotics, one that we believe could be quite fruitful in future theoretical and empirical investigations of MBR in both science and philosophy.

Alex Kirlik, Peter Storkerson

Smart Abducers as Violent Abducers

Hypothetical Cognition and “Military Intelligence”

I will describe the so-called coalition enforcement hypothesis, which sees humans as self-domesticated animals engaged in a continuous hypothetical activity of building cooperation through morality while at the same time incorporating punishing policies: morality and violence are seen as strictly intertwined with social and institutional aspects, implicit in the activity of cognitive niche construction.

Hypothetical thinking (and so abduction) is in turn very often embedded in various linguistic kinds of so-called fallacious reasoning. Indeed, in evolution, coalition enforcement works through the building of social cognitive niches, seen as new ways of diverse human adaptation, where guessing hypotheses is central and occurs as it can, depending on the cognitive/moral options human beings adopt. I will also stress the moral and violent effects of human natural languages, focusing on the analysis of the relationships between language, logic, fallacies, and abduction. This “military” nature of abductive hypothetical reasoning in linguistic communication (military intelligence) is intrinsically “moral” (protecting the group by obeying shared norms) and at the same time “violent” (for example, harming or mobbing others, members of the group or not, still in order to protect the group itself). However, this “military” power can also be considered active at the level of model-based cognition: taking advantage of the naturalistic perspective on abductive “hypothesis generation” at the level of both instinctual behavior and representation-oriented behavior, where nonlinguistic features drive a “plastic” model-based cognitive role, cognition gains a fundamental semiotic, eco-physical, and “military” significance, which nicely furnishes further insight into a kind of “social epistemology”.

Lorenzo Magnani

Different Cognitive Styles in the Academy-Industry Collaboration

Previous studies on obstacles to technology transfer between universities and companies have emphasized economic, legal, and organizational aspects, focusing mainly on the transfer of patents and licences. Since research collaboration involves a complex phenomenon of linguistic and cognitive coordination and attuning among members of the research group, a deeper cognitive investigation of this dimension might give some interesting answers to the academy-industry problem. The main hypothesis is that there can be different cognitive styles in thinking, problem solving, reasoning and decision making that can hamper the collaboration between academic and industrial researchers. These different cognitive styles are linked to and mostly determined by different sets of values and norms that are part of background knowledge. Different background knowledge is also responsible for poor linguistic coordination and understanding, and for the difficulty of achieving a successful group psychology. The general hypotheses inferred in this paper represent a research programme of empirical tests to control for the effects on cognitive styles of different scientific and technological domains and geographical contexts.

Riccardo Viale

Abduction, Induction, and Analogy

On the Compound Character of Analogical Inferences

Analogical reasoning has been investigated by philosophers and psychologists, who have produced different approaches such as “schema induction” (Gick and Holyoak) or the “structure-mapping theory” (Gentner). What these approaches have in common, however, is that analogical reasoning involves processes of matching and mapping. Apart from the differences that exist between these approaches, one important problem appears to be the lack of inferential precision with respect to these processes of matching and mapping. And this is all the more problematic because analogical reasoning is widely conceived of as “inductive” reasoning. However, inductive reasoning, in a narrow and technical sense, is not creative, whereas analogical reasoning counts as an important source of human creativity. It is C. S. Peirce’s merit to have pointed out that induction can merely extrapolate and generalize something already at hand, and is not the kind of reasoning that leads to new concepts. Indeed, inventive reasoning is usually identified with abduction, and consequently abduction should play at least some role in analogy. Peirce claimed that analogy is a compound form of reasoning that integrates abduction and induction, but the intriguing question remains how these two inferences are to be reconstructed precisely. In the proposed paper I hold that analogical reasoning can indeed be analyzed in this way and that this helps us to reach a much more precise and differentiated understanding of the forms and processes of analogical reasoning. In particular, I hold that (at least) two forms of analogical reasoning have to be distinguished, because they represent different inferential paths. The underlying inferential processes will be explicated in detail and illustrated with various examples.

Gerhard Minnameier

Belief Revision vs. Conceptual Change in Mathematics

In his influential book Conceptual Revolutions (1992), Thagard asked whether the question of conceptual change is identical with the question of belief revision. One might argue that they are identical, because “whenever a concept changes, it does so by virtue of changes in the beliefs that employ that concept”. According to him, however, all those kinds of conceptual change that involve conceptual hierarchies (e.g., branch jumping or tree switching) cannot be interpreted as simple kinds of belief revision. What is curious is that Thagard’s interesting question has failed to attract any serious response from belief revision theorists. The silence of belief revision theorists may be due to both wings of their fundamental principle of informational economy, i.e., the principle of minimal change and the principle of entrenchment. Indeed, Gärdenfors and Rott conceded that their formal theory of belief revision “is concerned solely with small changes like those occurring in normal science” [8]. In this paper, I propose to re-examine Thagard’s question in the context of the problem of conceptual change in mathematics. First, I shall present a strengthened version of the argument for the redundancy of conceptual change by exploiting the notion of implicit definition in mathematics. If the primitive terms of a given mathematical structure are defined implicitly by its axioms, how could there be conceptual changes other than those effected by changing axioms? Secondly, I shall examine some famous episodes of domain extension in the history of numbers in terms of belief revision and conceptual change. Finally, I shall show that there are extensive and intricate interactions between conceptual change and belief revision in these cases.

Woosuk Park

Affordances as Abductive Anchors

In this paper we aim to explain how the notion of abduction may be relevant to describing some crucial aspects of the notion of affordance, which was originally introduced by the ecological psychologist James J. Gibson. The thesis we develop in this paper is that an affordance can be considered an abductive anchor. Hopefully, the notion of abduction will clear up some ambiguities and misconceptions still present in the current debate. Going beyond a merely sentential conception, we will argue that the role played by abduction is twofold. First of all, it is decisive in leading us to a better definition of affordance. Secondly, abduction turns out to be a valuable candidate for clarifying the various issues related to affordance detection.

Emanuele Bardone

A Model-Based Reasoning Approach to Prevent Crime

Within the field of criminology, one of the main research interests is the analysis of the displacement of crime. Typical questions that are important in understanding the displacement of crime are: When do hot spots of high crime rates emerge? Where do they emerge? And, perhaps most importantly, how can they be prevented? In this paper, an agent-based simulation model of crime displacement is presented, which can be used not only to simulate the spatio-temporal dynamics of crime, but also to analyze and control those dynamics. To this end, methods from Artificial Intelligence and Ambient Intelligence are used, which are aimed at developing intelligent systems that monitor human-related processes and provide appropriate support. More specifically, an explicit domain model of crime displacement has been developed, and, on top of that, model-based reasoning techniques are applied to the domain model in order to analyze which environmental circumstances result in which crime rates, and to determine which support measures are most appropriate. The model can be used as an analytical tool for researchers and policy makers to perform thought experiments, i.e., to shed more light on the process under investigation and possibly improve existing policies (e.g., for surveillance). The basic concepts of the model are defined in such a way that it can be directly connected to empirical information.
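As a rough illustration of the kind of dynamics such a model captures, the following sketch simulates offenders relocating away from guarded cells on a grid and tallies emergent hot spots. All specifics (grid size, offender count, guardianship rule, hot-spot criterion) are assumptions made for illustration and are not taken from the authors’ model.

```python
import random

GRID = 10          # the city as a 10x10 grid of locations (assumption)
OFFENDERS = 30
STEPS = 50

random.seed(42)
# guarded cells stand in for surveillance measures (assumed placement rule)
guarded = {(x, y) for x in range(GRID) for y in range(GRID) if (x + y) % 7 == 0}
offenders = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(OFFENDERS)]
crimes = {}

for _ in range(STEPS):
    moved = []
    for (x, y) in offenders:
        if (x, y) in guarded:
            # displacement: an offender on a guarded cell relocates at random
            x, y = random.randrange(GRID), random.randrange(GRID)
        else:
            crimes[(x, y)] = crimes.get((x, y), 0) + 1
        moved.append((x, y))
    offenders = moved

# a "hot spot" here is any cell whose crime count exceeds the mean
mean = sum(crimes.values()) / len(crimes)
hot_spots = [cell for cell, n in crimes.items() if n > mean]
print(len(hot_spots))
```

Even this toy version shows the displacement effect the abstract describes: crime never accumulates on guarded cells, so hot spots emerge only in the unguarded remainder of the grid, which is where a policy maker would redirect surveillance in a thought experiment.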

Tibor Bosse, Charlotte Gerritsen

Abducing the Crisis

Macroeconomic crises are events marked by “broken promises” that shatter the expectations that many agents had entertained about their economic prospects and wealth positions. Crises lead to reappraisals of the views of the world upon which agents had based their expectations, plans and decisions, and to a reconsideration of theories and models on the part of analysts. A crisis triggers widespread efforts of abduction in search of new hypotheses and explanations. In this paper we will explore, in particular, the abductions that analysts may apply after a crisis and see how they reveal the prevalence of “wrong” abductions at the onset of the crisis. In order to carry out this exercise, we study the general role of abduction in economic analysis, both theoretical and practical. Economic theory generally proceeds by constructing models, that is, mental schemes based on mental experiments. They are often written in mathematical language but, apart from their formal expression, they use metaphors, analogies and pieces of intuition to motivate their assumptions and to support their conclusions. We try to capture all these elements in a formal scheme and apply the ensuing model of abduction to the analysis of macroeconomic crises.

Ricardo F. Crespo, Fernando Tohmé, Daniel Heymann

Pathophysiology of Cancer and the Entropy Concept

Entropy may be seen both from the point of view of thermodynamics and from that of information theory, as an expression of system heterogeneity. Entropy, a system-specific entity, measures the distance between the present and the predictable end-stage of a biological system. It is based upon statistics of internal characteristics of the system. A living organism maintains its low entropy and reduces the entropy level of its environment thanks to communication between the system and its environment. Carcinogenesis is characterized by accumulating genomic mutations and is related to a loss of internal cellular information. The dynamics of this process can be investigated with the help of information theory. It has been suggested that tumor cells might regress to a state of minimum information during carcinogenesis and that information dynamics are integrally related to tumor development and growth. The great variety of chromosomal aberrations in solid tumors has limited their use as a variable to measure tumor aggressiveness or to predict prognosis. The introduction of Shannon’s entropy to express karyotypic diversity and the uncertainty associated with the sample distribution has overcome this problem. During carcinogenesis, mutations of the genome and epigenetic alterations (e.g. changes in methylation or protein composition) occur, which reduce the information content by increasing the randomness and raising the spatial entropy inside the nucleus. Therefore, we would expect a rise in the entropy of nuclear chromatin in cytological or histological preparations with increasing malignancy of a tumor. In this case, entropy is calculated from the co-occurrence matrix or the histogram of gray values of digitized images. Studies from different laboratories based on various types of tumors have demonstrated that entropy-derived variables describing chromatin texture are independent prognostic features. Increasing entropy values are associated with shorter survival.
In summary, the entropy concept helped us to create, in a parsimonious way, a theoretical model of carcinogenesis, as well as prognostic models regarding survival.
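The histogram-based variant of this measure can be computed directly. A minimal sketch follows, assuming 8-bit gray values (the choice of 256 levels is an illustration, not a detail of the cited studies):

```python
import math

def shannon_entropy(gray_values):
    """Shannon entropy (in bits) of the gray-value histogram of an image."""
    hist = [0] * 256                      # 8-bit gray levels (assumption)
    for g in gray_values:
        hist[g] += 1
    n = len(gray_values)
    # H = sum over non-empty bins of p * log2(1/p), with p = count/n
    return sum((c / n) * math.log2(n / c) for c in hist if c > 0)

print(shannon_entropy(list(range(256))))  # uniform gray values → 8.0 (maximal)
print(shannon_entropy([128] * 256))       # homogeneous image → 0.0 (minimal)
```

Higher values indicate a more heterogeneous chromatin texture; in the studies summarized above, such entropy-derived texture variables behaved as independent prognostic features, with higher entropy associated with shorter survival.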

Konradin Metze, Randall L. Adam, Gian Kayser, Klaus Kayser

A Pattern Language for Roberto Burle Marx Landscape Design

Patterns were developed by Christopher Alexander [2] to synthesize rules of good design practice. Although he does not tell us where he took his patterns from, it is possible to infer that they are the result of his sensible observation of existing situations in European cities. However, these solutions are not necessarily valid for situations in other countries, with different climates, economies and societies. The Brazilian landscape designer Roberto Burle Marx is considered to have achieved the highest level of excellence and success in his designs for private gardens and public open spaces. In other words, there is no doubt that he can be considered a “specialist”, in the AI (Artificial Intelligence) sense, in his field. The present paper proposes a systematization of the knowledge present in the work of Burle Marx as “patterns” that can be used by students to overcome difficulties related to their lack of professional experience.

Carlos Eduardo Verzola Vaz, Maria Gabriela Caffarena Celani

A Visual Model of Peirce’s 66 Classes of Signs Unravels His Late Proposal of Enlarging Semiotic Theory

In this paper I will present a visual model of Peirce’s 66 classes of signs, which I call the Signtree Model, and show how it helps in developing the enlarged semiotic system that Peirce left unfinished. Peirce’s best-known classification is that of 10 classes of signs. However, in his later years, when developing the sign process in much greater detail, Peirce proposed a classification of no fewer than 66 classes of signs. In contrast to the first classification, Peirce never worked out its details, making it a difficult topic that has received little attention from semioticians. For a better understanding of the 66 classes, I built the Signtree Model, which makes clear that the 66 classes work together, composing a single dynamic system. As the Signtree describes all 66 classes and visually shows how they are related in a dynamic system, the model can be a powerful tool for semiotic analysis, revealing details of a complex process composed of many elements and multiple relations, emphasizing semiosis and the growth of signs. More than that, the Signtree gives clues about philosophical issues such as the relation between semiotics and pragmatism, between semiotics and metaphysics, and the relations among the three branches of semiotics: speculative grammar, critical logic and methodeutic.

Priscila Borges

The Role of Agency Detection in the Invention of Supernatural Beings

An Abductive Approach

Over the last decade, a multidisciplinary approach (merging cognitive science, anthropology and evolutionary psychology) has stressed the fundamental importance of cognitive constraints concerning the complex phenomenon defined as “religion”. The main feature is the predominant presence of belief in agent-concepts that display a strong counterfactual nature, in spite of a minor degree of counterintuitiveness. Consistently with the major trend in cognitive science, we contend that the agents populating religious beliefs were generated by the same processes involved in inferring the presence of ordinary agents. Coherently with the Peircean framework, in which all cognitive performance is considered a sign-mediated activity, our main point is that those processes of agency detection are characterized at all levels, from the less conscious to the higher ones, by the inferential procedure called abduction. Hence, the very invention of supernatural agents seems to be the result of a particular series of abductive processes that previously served other purposes (i.e. the detection of predators and prey) and whose output was coherent with that of other abductive patterns. Eventually, they would be externalized and recapitulated in the well-known figures of deities, spirits, and so on: thoughts concerning supernatural beings, at first rather vague, were embodied in material culture, with the result of fixing them in more stable and sharable representations that could be manipulated and acquired back into the mind in definitive form.

Tommaso Bertolotti, Lorenzo Magnani

Formal and Computational Aspects of Model Based Reasoning

Frontmatter

Does Logic Count?

Deductive Logic as a General Theory of Computation

What is the relation of ordinary first-order logic to the general theory of computation? A hoped-for connection would be to interpret a computation of the value b of a function f(x) for the argument a as a deduction of the equation (b = f(a)) in a suitable elementary number theory. This equation may be thought of as being obtained by a computation in an equation calculus from a set of defining equations plus propositional logic plus substitution of terms for variables and substitution of identicals. Received first-order logic can be made commensurable with this equation calculus by eliminating predicates in terms of their characteristic functions and eliminating existential quantifiers in terms of Skolem functions. It turns out that not all sets of defining equations can be obtained in this way if the received first-order logic is used. However, they can all be obtained if independence-friendly logic is used. This turns all basic problems of computation theory into problems of logical theory.

Jaakko Hintikka

Causal Abduction and Alternative Assessment: A Logical Problem in Penal Law

Epidemiological investigations very often allow one to say with certainty that there is a relation between a macrophenomenon F and a certain increase or decrease in a certain pathology P. The abductive inference that leads to such a conclusion, however, does not allow one to establish which cases of the pathology P are actually caused by cases of F and which are not. Given that, in order to establish penal responsibility, the law in most Western countries requires that there be a causal relation among token-events (which here we will identify with so-called Kim-events), it is frequently argued that in such cases no causal relation, and a fortiori no penal responsibility, can properly be established. The problem will be examined with the tools of quantified conditional logic. The aim of the paper is to argue that identifying a causal relation in which causes and effects are at different levels of determination does not prevent establishing penal responsibilities.

Claudio Pizzi

On a Theoretical Analysis of Deceiving: How to Resist a Bullshit Attack

This paper intends to open a discussion on how certain dangerous kinds of deceptive reasoning can be defined, how such deception is achieved in a discussion, and what the strategies for defending against such deceptive attacks would be, in the light of some principles accepted as fundamental for rationality and logic.

Walter Carnielli

Using Analogical Representations for Mathematical Concept Formation

We argue that visual, analogical representations of mathematical concepts can be used by automated theory formation systems to develop further concepts and conjectures in mathematics. We consider the role of visual reasoning in human development of mathematics, and consider some aspects of the relationship between mathematics and the visual, including artists using mathematics as inspiration for their art (which may then feed back into mathematical development), the idea of using visual beauty to evaluate mathematics, mathematics which is visually pleasing, and ways of using the visual to develop mathematical concepts. We motivate an analogical representation of number types with examples of “visual” concepts and conjectures, and present an automated case study in which we enable an automated theory formation program to read this type of visual, analogical representation.

Alison Pease, Simon Colton, Ramin Ramezani, Alan Smaill, Markus Guhe

Good Experimental Methodologies and Simulation in Autonomous Mobile Robotics

Experiments have proved to be fundamental constituents of the natural sciences, and it is reasonable to expect that they can also play a useful role in engineering, for example when the behavior of an artifact and its performance are difficult to characterize analytically, as is often the case in autonomous mobile robotics. Despite their importance, experimental activities in this field are often carried out with low standards of methodological rigor. Along with some initial attempts to define good experimental methodologies, the role of simulation experiments has grown in recent years, as they are increasingly used instead of experiments with real robots and are now considered a good tool for validating autonomous robotic systems. In this work, we aim at investigating simulations in autonomous mobile robotics and their role in the experimental activities conducted in the field.

Francesco Amigoni, Viola Schiaffonati

The Logical Process of Model-Based Reasoning

Standard bivalent propositional and predicate logics are described as the theory of correct reasoning. However, the concept of model-based reasoning (MBR) developed by Magnani and Nersessian rejects the limitations of implicit or explicit dependence on abstract propositional, truth-functional logics or their modal variants. In support of this advance toward a coherent framework for reasoning, my paper suggests that complex reasoning processes, especially MBR, involve a novel logic of and in reality. At MBR04, I described a new kind of logical system, grounded in quantum mechanics (now designated as logic in reality; LIR), which postulates a foundational dynamic dualism inherent in energy and accordingly in causal relations throughout nature, including the cognitive and social levels of reality. This logic of real phenomena provides a framework for the analysis of physical interactions as well as theories, including the relations that constitute MBR, in which both models and reasoning are complex, partly non-linguistic processes. Here, I further delineate the logical aspects of MBR as a real process and the relation between it and its target domains. LIR describes 1) the relation between model theory - models and modeling - and scientific reasoning and theory; and 2) the dynamic, interactive aspects of reasoning, not captured in standard logics. MBR and its critical relations, e.g., between internal and external cognitive phenomena, are thus not “extra-logical” in the LIR interpretation. Several concepts of representation from an LIR standpoint are discussed, and the position taken that the concept may be otiose for an understanding of mental processes, including MBR. In LIR, one moves essentially from abduction as used by Magnani to explain processes such as scientific conceptual change to a form of inference implied by physical reality and applicable to it.
Issues in reasoning involving computational and sociological models are discussed that illustrate the utility of the LIR logical approach.

Joseph E. Brenner

Constructive Research and Info-computational Knowledge Generation

It is usual, when writing on research methodology in dissertations and thesis work within Software Engineering, to refer to Empirical Methods, Grounded Theory and Action Research. An analysis of Constructive Research Methods, which are fundamental for all knowledge production and especially for concept formation, modeling and the use of artifacts, is seldom given, so the relevant first-hand knowledge is missing. This article argues for introducing the analysis of Constructive Research Methods as crucial for understanding the research process and knowledge production. The paper provides a characterization of the Constructive Research Method and its relations to Action Research and Grounded Theory. Illustrative examples from Software Engineering, Cognitive Science and Brain Simulation are presented. Finally, the foundations of Constructive Research are analyzed within the framework of Info-Computationalism.

Gordana Dodig Crnkovic

Emergent Semiotics in Genetic Programming and the Self-Adaptive Semantic Crossover

We present SASC, Self-Adaptive Semantic Crossover, a new class of crossover operators for genetic programming. SASC operators are designed to induce the emergence of, and then preserve, good building-blocks, using meta-control techniques based on semantic compatibility measures. SASC performance is tested in a case study concerning the replication of investment funds.
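To make the idea of semantically guided crossover concrete, here is a minimal sketch for tiny arithmetic expression trees. The tree representation, the Euclidean semantic distance, and the acceptance threshold are all illustrative assumptions; they are not the actual SASC operators described in the chapter.

```python
def evaluate(tree, x):
    """Evaluate a program tree: 'x', a numeric constant, or (op, left, right)."""
    if tree == 'x':
        return x
    if isinstance(tree, (int, float)):
        return tree
    op, left, right = tree
    a, b = evaluate(left, x), evaluate(right, x)
    return a + b if op == '+' else a * b

SAMPLES = [0.0, 0.5, 1.0, 2.0]            # sample fitness cases (assumption)

def semantics(tree):
    """A (sub)tree's semantics: its outputs on the sample inputs."""
    return [evaluate(tree, x) for x in SAMPLES]

def distance(t1, t2):
    """Semantic compatibility as Euclidean distance over the samples."""
    return sum((a - b) ** 2 for a, b in zip(semantics(t1), semantics(t2))) ** 0.5

def semantic_crossover(parent1, parent2, threshold=2.0):
    """Replace parent1's right subtree with the most semantically
    compatible subtree of parent2, rejecting swaps that would
    disrupt the preserved building-block too much."""
    op, left, right = parent1
    candidates = [parent2]
    if isinstance(parent2, tuple):
        candidates += [parent2[1], parent2[2]]
    best = min(candidates, key=lambda c: distance(right, c))
    if distance(right, best) <= threshold:
        return (op, left, best)
    return parent1                        # incompatible: keep parent unchanged

child = semantic_crossover(('+', 'x', 2), ('*', 'x', ('+', 2, 0)))
print(child)                              # → ('+', 'x', ('+', 2, 0))
```

The point of the meta-control is visible even here: the operator picks the donor subtree (+ 2 0) because its semantics match the replaced subtree, so the offspring preserves the parent’s behavior instead of being disrupted by a random swap.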

Rafael Inhasz, Julio Michael Stern

An Episodic Memory Implementation for a Virtual Creature

This work deals with research on intelligent virtual creatures and the cognitive architectures that control them. In particular, we are interested in studying how the use of episodic memory could improve a cognitive architecture in such a task. Episodic memory is a neurocognitive mechanism for accessing past experiences that is naturally part of the human process of decision making, and usually enhances the chances of successful behavior. Even though there are already some initiatives on this path, we are still very far from this being a well-known technology to be widely embedded in our intelligent agents. In this work we report on our ongoing efforts to bring this technology about by building a cognitive architecture in which episodic memory is a central capability.

Elisa Calhau de Castro, Ricardo Ribeiro Gudwin

Abduction and Meaning in Evolutionary Soundscapes

The creation of an artwork named RePartitura is discussed here under the principles of Evolutionary Computation (EC) and the triadic model of thought, Abduction, Induction and Deduction, as conceived by Charles S. Peirce. RePartitura uses a custom-designed algorithm to map image features from a collection of drawings and an Evolutionary Sound Synthesis (ESSynth) computational model that dynamically creates sound objects. The output of this process is an immersive computer-generated sonic landscape, i.e. a synthesized Soundscape. The computer generative paradigm used here comes from the EC methodology, where the drawings are interpreted as a population of individuals, as they all have in common the characteristic of being similar but never identical. The set of specific features of each drawing is called its genotype. Interaction between different genotypes and sound features produces a population of evolving sounds. The evolutionary behavior of this sonic process entails the self-organization of a Soundscape, made of a population of complex, never-repeating sound objects in constant transformation, but always maintaining an overall perceptual self-similarity in order to keep a cognitive identity that can be recognized by any listener. In this article we present this generative and evolutionary system and describe the topics that permeate it, from its conceptual creation to its computational implementation. We underline the concept of self-organization in the generation of soundscapes and its relationship with computational evolutionary creation, abductive reasoning and musical meaning for the computational modeling of synthesized soundscapes.

Mariana Shellard, Luis Felipe Oliveira, Jose E. Fornari, Jonatas Manzolli

Consequences of a Diagrammatic Representation of Paul Cohen’s Forcing Technique Based on C.S. Peirce’s Existential Graphs

This article examines the forcing technique developed by Paul Cohen in his proof of the independence of the Generalized Continuum Hypothesis from the ZFC axioms of set theory in light of the theory of abductive inference and the diagrammatic system of Existential Graphs elaborated by Peirce. The history of the development of Cohen’s method is summarized, and the key steps of his technique for defining the extended model M[G] from within the ground model M are outlined. The relations between statements in M and their corresponding reference values in M[G] are modeled in Peirce’s Existential Graphs as the construction of a modal covering over the sheet of assertion. This formalization clarifies the relationship between Peirce’s EG-β and EG-γ and lays the foundation for theorizing the abductive emergence of the latter out of the former.

Gianluca Caterina, Rocco Gangle

Models, Mental Models, Representations

Frontmatter

How Brains Make Mental Models

Many psychologists, philosophers, and computer scientists have written about mental models, but have remained vague about the nature of such models. Do they consist of propositions, concepts, rules, images, or some other kind of mental representation? This paper will argue that a unified account can be achieved by understanding mental models as representations consisting of patterns of activation in populations of neurons. The fertility of this account will be illustrated by showing its applicability to causal reasoning and the generation of novel concepts in scientific discovery and technological innovation. I will also discuss the implications of this view of mental models for evaluating claims that cognition is embodied.

Paul Thagard
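The idea of concepts as activation patterns over neural populations can be made concrete with a toy illustration. This is my own sketch, not Thagard's model: two concepts are assumed to be vectors of activation levels over a small simulated population, similarity is pattern overlap, and a "novel concept" is generated by superposing two patterns, one simple reading of conceptual combination.

```python
import math

# Assumed activation patterns over a population of 8 simulated neurons.
bird = [0.9, 0.8, 0.1, 0.7, 0.0, 0.2, 0.9, 0.1]
fish = [0.1, 0.2, 0.9, 0.0, 0.8, 0.9, 0.1, 0.7]

def cosine(a, b):
    """Pattern similarity as the overlap of two activation vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def combine(a, b):
    """Generate a 'novel concept' by superposing two patterns and
    rescaling so the peak activation is 1 (e.g. a bird-fish blend)."""
    raw = [max(x, y) for x, y in zip(a, b)]
    peak = max(raw)
    return [x / peak for x in raw]

novel = combine(bird, fish)
```

On these numbers the blended pattern is more similar to each parent than the parents are to each other, which is the sense in which a superposed pattern can function as a new concept related to both sources.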

Applications of an Implementation Story for Non-sentential Models

The viability of the proposal that human cognition involves the utilization of non-sentential models is seriously undercut by the fact that no one has yet given a satisfactory account of how neurophysiological circuitry might realize representations of the right sort. Such an account is offered up here, the general idea behind which is that high-level models can be realized by lower-level computations and, in turn, by neural machinations. It is shown that this account can be usefully applied to deal with problems in fields ranging from artificial intelligence to the philosophy of science.

Jonathan Waskan

Does Everyone Think, or Is It Just Me?

A Retrospective on Turing and the Other-Minds Problem

It has been roughly 60 years since Turing wrote his famous article on the question, “Can machines think?” His answer was that the ability to converse would be a good indication of a thinking computer. This procedure can be understood as an abductive inference: That a computer could converse like a human being would be explained if it had a mind. Thus, Turing’s solution can be viewed as a solution to the other-minds problem, the problem of knowing that minds exist other than your own, applied to the special case of digital computers. In his response, Turing assumed that thinking is a matter of running a given program, not having a special kind of body, and that the development of a thinking program could be achieved in a simulated environment. Both assumptions have been undermined by recent developments in Cognitive Science, such as neuroscience and robotics. The physical details of human brains and bodies are indivisible from the details of human minds. Furthermore, the ability and the need of human beings to interact with their physical and social environment are crucial to the nature of the human mind. I argue that a more plausible solution to Turing’s question is an analogical abduction: An attribution of minds to computers that have bodies and ecological adaptations akin to those of human beings. Any account of human minds must take these factors into consideration. Any account of non-human minds should take human beings as a model, if only because we are best informed about the human case.

Cameron Shelley

Morality According to a Cognitive Interpretation: A Semantic Model for Moral Behavior

In recent years, research in the field of cognitive psychology has favored an interpretation of moral behavior primarily as the product of basic, automatic and unconscious cognitive mechanisms for the processing of information, rather than of some form of principled reasoning. This paper aims to undermine this view and to defend the old-fashioned thesis according to which moral judgments are produced by specific forms of reasoning. As a critical reference, our research specifically addresses the so-called Rawlsian model, which hinges on the idea that human beings produce their moral judgments on the basis of a modular moral faculty “that enables each individual to unconsciously and automatically evaluate a limitless variety of actions in terms of principles that dictate what is permissible, obligatory, or forbidden”.[25, p. 36] In this regard we try to show that this model is not able to account for the moral behavior of different social groups and different individuals in critical situations, when their own moral judgment disagrees with the moral position of their community. Furthermore, the critical consideration of the Rawlsian model constitutes the theoretical basis for the constructive part of our argument, which consists of a proposal for developing a semantic, quasi-rationalistic model to describe moral reasoning. This model aims to account for both moral reasoning and the corresponding emotions on the basis of the information of which morally relevant concepts consist.

Sara Dellantonio, Remo Job

The Symbolic Model for Algebra: Functions and Mechanisms

The symbolic mode of reasoning in algebra, as it emerged during the sixteenth century, can be considered as a form of model-based reasoning. In this paper we will discuss the functions and mechanisms of this model and show how the model relates to its arithmetical basis. We will argue that the symbolic model was made possible by the epistemic justification of the basic operations of algebra as practiced within the abbaco tradition. We will also show that this form of model-based reasoning facilitated the expansion of the number concept from Renaissance interpretations of number to the full notion of algebraic numbers.

Albrecht Heeffer

The Theoretician’s Gambits: Scientific Representations, Their Formats and Content

It is quite widely acknowledged, in the field of cognitive science, that the format in which a set of data is displayed (lists, graphs, arrays, etc.) matters to the agents’ performances in achieving various cognitive tasks, such as problem-solving or decision-making. This paper intends to show that formats also matter in the case of theoretical representations, namely general representations expressing hypotheses, and not only in the case of data displays. Indeed, scientists have limited cognitive abilities, and representations in different formats have different inferential affordances for them. Moreover, this paper shows that, once agents and their limited cognitive abilities get into the picture, one has to take into account both the way content is formatted and the cognitive abilities and epistemic peculiarities of agents. This paves the way to a dynamic and pragmatic picture of theorizing, as a cognitive activity consisting in creating new inferential pathways between representations.

Marion Vorms

Modeling the Epistemological Multipolarity of Semiotic Objects

For practitioners of semiotics, the most controversial questions concern the status and nature of the semiotic object, equated with the sign either separated from its object(s) of reference or encompassing its object(s) of reference, i.e., whether the sign is a unilateral entity, a plurilateral unit comprised of interrelated constituents, or a relation (a network of relations) between those constituents. Further questions refer to the manifestations of signs, namely whether they appear in material or spiritual (corporeal or intelligible, physical or mental), concrete or abstract, real or ideal forms of being, examined subjectively or objectively in their extraorganismic or intraorganismic manifestations. Accordingly, signs are approached either extra- or introspectively, through individual tokens or general types, as occurring in the realm of man only, in the realm of all living systems, or in the universe of creatures extraterrestrial and divine in nature. These varieties of sign conceptions exhibit differences not only in terminology but also in the formation of their visual presentations. Bearing in mind the need for their analysis and comparison, the practitioner of semiotic disciplines has to find a parameter or matrix containing the features and components characteristic of particular approaches to their forms of being and manifestation. Within the framework of this article, readers are provided with a theory- and method-related outlook on the token and type relationships between the mental and concrete existence modes of semiotic objects and their objects of reference. Having reviewed all hitherto known sign conceptions, it is demonstrated how their two main components, the signans and the signatum, may be modeled with their collective and individual properties as oscillating between four possible epistemological positions: logical positivism, rational empiricism, empirical rationalism, and absolute rationalism.

Zdzisław Wa̧sik

Imagination in Thought Experimentation: Sketching a Cognitive Approach to Thought Experiments

We attribute the capability of imagination to the madman as to the scientist, to the novelist as to the metaphysician, and last but not least to ourselves. The same, apparently, holds for thought experimentation. Ernst Mach was the first to draw an explicit link between these two mental acts; moreover, in his perspective, imagination plays a pivotal role in thought experimentation. Nonetheless, it is not clear what kind of imagination emerges from Mach’s writings. Indeed, heated debates among cognitive scientists and philosophers turn on the key distinction between sensory and cognitive imagination. Generally speaking, we can say that sensory imagination shares some processes with perception, cognitive imagination with the formation of belief. Both the vocabulary used in the literature on thought experiments and what I refer to as the “Machian tradition” indicate imagination as a notion of central importance in the reasoning involved in thought experiments. However, most authors have focused on sensory (in particular, visual) imagination, but have neglected the second kind. Moreover, some authors attribute to Mach the idea that it is visual imagery that is primarily at work in thought experiments. I claim another interpretation is possible, according to which Mach can be said to deal with cognitive imagination. The main aim of this paper is to retrace Mach’s original arguments and establish a connection with the cognitive literature on imagination. I will argue that imagination tout court could play a role in thought experimentation. Once imagination is seen as the key to the “cognitive black-box” of the thought experiment, we will have moved a step closer to a simulative imagining-based account of thought experimentation.

Margherita Arcangeli

Representations of Contemporaneous Events of a Story for Novice Readers

We are working on a story comprehension tool for novice readers, among them 6-8 year old children in Italy. The tool also asks them to reason about the temporal dimension of stories. In designing the tool, we stumbled on the following question: how can we render the qualitative temporal relations of a story with a visual representation that is conceptually adequate for novice readers? The question triggered the trans-disciplinary work reported on in this paper, written by a cognitive psychologist, an engineer and a logician. The work primarily consists of an experimental study with 6-8 year old novice readers, first and second graders at an Italian primary school. We read them a story, and then asked them to visually represent certain contemporaneous relations of the story. The results of the experiment shed light on the variety of strategies that such children employ. The results also triggered two novel experimental studies that are reported on in the conclusion to this paper.

Barbara Arfé, Tania Di Mascio, Rosella Gennari

Understanding and Augmenting Human Morality: An Introduction to the ACTWith Model of Conscience

Recent developments, both in the cognitive sciences and in world events, bring special emphasis to the study of morality. The cognitive sciences, spanning neurology, psychology, and computational intelligence, offer substantial advances in understanding the origins and purposes of morality. Meanwhile, world events urge the timely synthesis of these insights with traditional accounts that can be easily assimilated and practically employed to augment moral judgment, both to solve current problems and to direct future action. The object of the following paper is to present such a synthesis in the form of a model of moral cognition, the ACTWith model of conscience. The purpose of the model is twofold. First, the ACTWith model is intended to shed light on personal moral dispositions, and to provide a tool for actual human moral agents in the refinement of their moral lives. As such, it relies on the power of personal introspection, bolstered by the careful study of moral exemplars available to all persons in all cultures in the form of literary or religious figures, if not in the form of contemporary peers and especially leadership. Second, the ACTWith model is intended as a minimum architecture for fully functional artificial morality. As such, it is essentially amodal and implementation non-specific, and is developed in the form of an information-processing control system. The system specifies as few hard points as necessary for moral function, and these are themselves drawn from a review of actual human cognitive processes, thereby intentionally capturing as closely as possible what is expected of moral action and reaction by human beings. Only in satisfying these untutored intuitions should an artificial agent ever be properly regarded as moral, at least by the general population of existing moral agents. Thus, the ACTWith model is intended as a guide both for individual moral development and for the development of artificial moral agents as future technology permits.

Jeffrey White

Analog Modeling of Human Cognitive Functions with Tripartite Synapses

Searching for an understanding of how the brain supports conscious processes, cognitive scientists have proposed two main classes of theory: Global Workspace and Information Integration theories. These theories seem to be complementary, but both still lack grounding in terms of the brain mechanisms responsible for the production of coherent and unitary conscious states. Here we propose, following James Robertson’s “Astrocentric Hypothesis”, that conscious processing is based on analog computing in astrocytes. The “hardware” for these computations consists of calcium waves mediated by adenosine triphosphate signaling. Besides presenting our version of this hypothesis, we also review recent findings on astrocyte morphology that lend support to their functioning as Local Hubs (composed of protoplasmic astrocytes) that integrate synaptic activity, and as a Master Hub (composed, in the human brain, of a combination of interlaminar, fibrous, polarized and varicose projection astrocytes) that integrates whole-brain activity.

Alfredo Pereira, Fábio Augusto Furlan

The Leyden Jar in Luigi Galvani’s thought: A Case of Analogical Visual Modeling

In De viribus electricitatis in motu muscolari. Commentarius, Luigi Galvani offers a case of “analogical modeling” in which he “retrieves” the perceptual structure of the representation of the Leyden jar experiment from the domain of electricity. In this way Galvani’s suspicion and surprise about the existence of an “animal electricity” were strengthened. Using “model-based reasoning”, Galvani infers that what elicits nervous fluid in the frog is the application of the conductive arc to it, the arc itself also being a source of electricity.

Nora Alejandrina Schwartz

Modeling the Causal Structure of the History of Science

This paper is an overview of an approach in the philosophy of science that constructs causal models of the history of science. Units of scientific knowledge, called “advances”, are taken to be related by causal connections, which are modeled in computers by probability distribution functions. Advances are taken to have varying “causal strengths” through time. The approach suggests that it would be interesting to develop a causal model of scientific reasoning. We also discuss counterfactual histories of science, classifying three types of counterfactual analyses: (i) in economic and technological history, (ii) in the history of science and mathematics, and (iii) in social history and evolutionary biology.

Osvaldo Pessoa
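The abstract above describes advances linked by probabilistic causal connections with varying causal strengths. A minimal sketch under my own assumptions (not Pessoa's actual model) is a noisy-OR node: the probability that an advance occurs depends on which of its causal antecedents are present, and removing an antecedent gives a simple counterfactual query. All names and numbers below are hypothetical.

```python
def prob_advance(causes, strengths, present):
    """Noisy-OR combination: the advance fails only if every present
    cause independently fails to produce it. `strengths` maps each
    cause to a causal strength in [0, 1]."""
    p_none = 1.0
    for c in causes:
        if c in present:
            p_none *= (1.0 - strengths[c])
    return 1.0 - p_none

# Hypothetical toy history: one advance with two causal antecedents
# of differing causal strengths.
causes = ["blackbody_data", "statistical_mechanics"]
strengths = {"blackbody_data": 0.7, "statistical_mechanics": 0.4}

# Actual history: both antecedent advances occurred.
p_actual = prob_advance(causes, strengths,
                        {"blackbody_data", "statistical_mechanics"})
# Counterfactual history: one antecedent is removed.
p_counterfactual = prob_advance(causes, strengths,
                                {"statistical_mechanics"})
```

Here p_actual = 1 - (1 - 0.7)(1 - 0.4) = 0.82, while the counterfactual probability drops to 0.4, illustrating how such a model supports the counterfactual analyses the paper classifies.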