
About this Book

This edited volume explores the intersection between philosophy and computing. It features work presented at the 2016 annual meeting of the International Association for Computing and Philosophy. The 23 contributions to this volume neatly represent a cross section of 40 papers, four keynote addresses, and eight symposia as they cut across six distinct research agendas.

The volume begins with foundational studies in computation and information, epistemology and philosophy of science, and logic. The contributions next examine research into computational aspects of cognition and philosophy of mind. This leads to a look at moral dimensions of human-machine interaction as well as issues of trust, privacy, and justice.

This multi-disciplinary or, better yet, a-disciplinary investigation reveals the fruitfulness of erasing distinctions among and boundaries between established academic disciplines. This should come as no surprise. The computational turn itself is a-disciplinary and no former discipline, whether scientific, artistic, or humanistic, has remained unchanged. Rigorous reflection on the nature of these changes opens the door to inquiry into the nature of the world, what constitutes our knowledge of it, and our understanding of our place in it. These investigations are only just beginning. The contributions to this volume make this clear: many encourage further research and end with open questions.



Chapter 1. Introduction to This Volume

The 2016 meeting of the International Association for Computing and Philosophy brought together a highly interdisciplinary consortium of scholars eager to share their current research at the increasingly important intersection of a number of fields, including computer science, robotics engineering, artificial intelligence, logic, biology, cognitive science, economics, sociology, and philosophy. This introductory chapter serves to organize and connect the discussions across these domains of inquiry, describing both the insights and broad relevance of the research well represented in this volume.
Don Berkich

Computation and Information


Chapter 2. Computation in Physical Systems: A Normative Mapping Account

The relationship between abstract formal procedures and the activities of actual physical systems has proved to be surprisingly subtle and controversial, and there are a number of competing accounts of when a physical system can be properly said to implement a mathematical formalism and hence perform a computation. I defend an account wherein computational descriptions of physical systems are high-level normative interpretations motivated by our pragmatic concerns. Furthermore, the criteria of utility and success vary according to our diverse purposes and pragmatic goals. Hence there is no independent or uniform fact of the matter, and I advance the ‘anti-realist’ conclusion that computational descriptions of physical systems are not founded upon deep ontological distinctions, but rather upon interest-relative human conventions. Hence physical computation is a ‘conventional’ rather than a ‘natural’ kind.
Paul Schweizer

Chapter 3. The Notion of ‘Information’: Enlightening or Forming?

‘Information’ is a fundamental notion in the field of artificial intelligence, including sub-disciplines such as cybernetics, artificial life, and robotics. In practice the notion is often taken for granted and used naively, in an unclarified and philosophically unreflected manner, while philosophical attempts at clarifying ‘information’ have not yet found much consensus within the philosophy-of-science community. One particularly notorious example of this lack of consensus is the recent Fetzer-Floridi dispute about what ‘information’ is, a dispute which has remained basically unsettled to this day despite a sequence of follow-up publications on the topic. In this chapter our philosophical analysis reveals, with reference to Gottlob Frege’s classical semiotics, that the Fetzer-Floridi dispute cannot come to any solution at all, because the two competing notions of ‘information’ in that dispute are basically synonyms of what Frege called ‘sense’ (Sinn) and what Frege called ‘meaning’ (Bedeutung). As Frege convincingly distinguished sense and meaning very clearly from each other, it is obvious that ‘information’ understood as ‘sense’ and ‘information’ understood as ‘meaning’ are incompatible and cannot be reconciled with each other. Moreover, we also point in this chapter to the often-forgotten pragmatic aspects of ‘information’, which is to say that ‘information’ can always only be ‘information for somebody’ with regard to a specific aim, goal, or purpose. ‘Information’, so understood, is thus a teleological notion with a context-sensitive embedding into what the late Wittgenstein called a ‘language-game’ (Sprachspiel).
Shannon’s quantified notion of ‘information’, by contrast, which measures an amount of unexpected surprise and which is closely related to the number of yes-no questions that must be asked in order to obtain the desired solution of a given quiz puzzle, is not the topic of this chapter, although in Shannon’s understanding of ‘information’ too the quiz-puzzle scenario, within which those yes-no questions are asked and counted, is obviously purpose-driven and Sprachspiel-dependent. We conclude our information-philosophical analysis with some remarks about which notion of ‘information’ seems particularly suitable for an autonomic mobile robotics project which one of the two co-authors is planning as future work. To separate this suitable notion of ‘information’ from the others, a new word, ≪enlightation≫, is coined and introduced.
Francois Oberholzer, Stefan Gruner
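As an illustrative aside (not part of the chapter itself), Shannon's quantified notion mentioned above can be made concrete in a few lines: the self-information of an outcome with probability p is -log2(p) bits, and singling out one of n equally likely alternatives requires at least ceil(log2(n)) yes-no questions.

```python
import math

def self_information(p):
    """Shannon self-information, in bits, of an outcome with probability p."""
    return -math.log2(p)

def yes_no_questions(n):
    """Minimum number of yes-no questions needed to single out
    one of n equally likely alternatives (e.g. by binary search)."""
    return math.ceil(math.log2(n))

# A fair coin flip carries exactly one bit of 'surprise';
# identifying one of 8 equally likely suspects takes 3 questions.
print(self_information(0.5))   # 1.0
print(yes_no_questions(8))     # 3
```

The quiz-puzzle dependence the authors note shows up here too: the bit counts are only defined relative to a fixed set of alternatives and a fixed probability assignment, i.e. relative to the 'game' being played.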



Chapter 4. Modal Ω-Logic: Automata, Neo-Logicism, and Set-Theoretic Realism

This essay examines the philosophical significance of Ω-logic in Zermelo-Fraenkel set theory with choice (ZFC). The dual isomorphism between algebra and coalgebra permits Boolean-valued algebraic models of ZFC to be interpreted as coalgebras. The modal profile of Ω-logical validity can then be countenanced within a coalgebraic logic, and Ω-logical validity can be defined via deterministic automata. I argue that the philosophical significance of the foregoing is two-fold. First, because the epistemic and modal profiles of Ω-logical validity correspond to those of second-order logical consequence, Ω-logical validity is genuinely logical, and thus vindicates a neo-logicist conception of mathematical truth in the set-theoretic multiverse. Second, the foregoing provides a modal-computational account of the interpretation of mathematical vocabulary, adducing in favor of a realist conception of the cumulative hierarchy of sets.
Hasen Khudairi

Chapter 5. What Arrow’s Information Paradox Says (to Philosophers)

Arrow’s information paradox features the most radical kind of information asymmetry by diagnosing an inherent conflict between two parties inclined to exchange information. In this paper, we argue that this paradox is more richly textured than generally supposed in current economic discussion of it, and that its meaning encroaches on philosophy. In particular, we uncover the ‘epistemic’ and more genuine version of the paradox, which looms over our cognitive lives like a sort of tax on curiosity. Finally, we sketch the relation between Arrow’s information paradox and the notion of zero-knowledge proofs in cryptography: roughly speaking, zero-knowledge proofs are protocols that enable a prover to convince a verifier that a statement is true, without conveying any additional information.
Mario Piazza, Marco Pedicini
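As an illustrative aside (not drawn from the chapter), the zero-knowledge idea can be sketched with a toy Schnorr-style identification round: the prover convinces the verifier that she knows the discrete logarithm x of a public value y, while the transcript reveals nothing about x. The parameters below are deliberately tiny and offer no real security.

```python
import random

# Toy public parameters (far too small for any real security)
p = 23        # prime modulus
q = 22        # order of the generator g in the group mod p
g = 5         # generator of the multiplicative group mod 23

x = 7                 # prover's secret
y = pow(g, x, p)      # public key: prover claims to know log_g(y)

def prove_round():
    """One honest round of Schnorr identification. The verifier learns
    that the prover knows x, but (t, c, s) leaks nothing about x."""
    r = random.randrange(q)        # prover's fresh one-time nonce
    t = pow(g, r, p)               # prover's commitment
    c = random.randrange(q)        # verifier's random challenge
    s = (r + c * x) % q            # prover's response
    return pow(g, s, p) == (t * pow(y, c, p)) % p

# An honest prover passes every round.
assert all(prove_round() for _ in range(100))
```

This mirrors the resolution of the epistemic version of the paradox the authors point to: the verifier ends up justified in believing the claim without ever receiving the information itself.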

Epistemology and Science


Chapter 6. Antimodularity: Pragmatic Consequences of Computational Complexity on Scientific Explanation

This work is concerned with hierarchical modular descriptions, their algorithmic production, and their importance for certain types of scientific explanation of the structure and dynamical behavior of complex systems. Networks are taken into consideration as paradigmatic representations of complex systems. It turns out that algorithmic detection of hierarchical modularity in networks is a task plagued in certain cases by theoretical intractability (NP-hardness) and in most cases by the still-high computational complexity of most approximate methods. A new notion, antimodularity, is then proposed, which consists in the impossibility of algorithmically obtaining a modular description fitting the explanatory purposes of the observer, for reasons tied to the computational cost of typical algorithmic methods of modularity detection in relation to the excessive size of the system under assessment and to the required precision. It turns out that the occurrence of antimodularity hinders both mechanistic and functional explanation by damaging their intelligibility. Another newly proposed, more general notion, explanatory emergence, subsumes antimodularity under any case in which a system resists intelligible explanation because of the excessive computational cost of the algorithmic methods required to obtain the relevant explanatory descriptions from the raw data. The possible consequences, and the likelihood, of incurring antimodularity or explanatory emergence in actual scientific practice are finally assessed, concluding that this eventuality is possible, at least in disciplines based on the algorithmic analysis of big data. The present work aims to be an example of how certain notions of theoretical computer science can be fruitfully imported into the philosophy of science.
Luca Rivelli
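As a concrete illustration (not taken from the chapter), the standard quality function for modularity detection is Newman's modularity Q, which scores how much better a candidate partition captures edge density than chance would predict; it is exhaustive search over partitions for the best Q, and even its approximations at scale, that become computationally prohibitive in the way the abstract describes.

```python
def modularity(adj, partition):
    """Newman modularity Q of a partition of an undirected graph.

    adj: dict mapping each node to the set of its neighbours.
    partition: list of sets of nodes (the candidate modules).
    """
    two_m = sum(len(nbrs) for nbrs in adj.values())  # 2 * number of edges
    comm = {v: i for i, module in enumerate(partition) for v in module}
    q = 0.0
    for i in adj:                      # sum over ordered node pairs
        for j in adj:
            if comm[i] == comm[j]:     # only same-module pairs contribute
                a_ij = 1.0 if j in adj[i] else 0.0
                q += a_ij - len(adj[i]) * len(adj[j]) / two_m
    return q / two_m

# Two triangles joined by a single bridge edge: the 'obvious' modules.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(round(modularity(adj, [{0, 1, 2}, {3, 4, 5}]), 3))  # 0.357
```

Scoring one partition is cheap; the hardness lies in the search space, since the number of partitions of n nodes grows super-exponentially (the Bell numbers), which is one face of the intractability the chapter builds on.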

Chapter 7. A Software-Inspired Constructive View of Nature

In their review article on “Scientific Reduction” Van Riel and Van Gulick (Scientific reduction. In: Zalta EN (ed) The Stanford encyclopedia of philosophy (Spring 2016 edition). Stanford University, Stanford, 2016) write,
Saying that x reduces to y typically implies that x is nothing more than y or nothing over and above y.
The y to which an x reduces consists most often of x’s components. But virtually nothing can be reduced if to be “nothing more than” or “nothing over and above” its components means to have no properties other than those of its components, individually or aggregated. An atom has properties other than those of its quarks and electrons. A protein, a biological cell, and a hurricane—not to mention such man-made entities as houses, mobile phones, and automobiles—all have properties over and above their components. The properties of most entities depend on both those of the entity’s components and on how those components are put together. (That would seem obvious, but perhaps it’s not.)
One of the defining characteristics of what might be referred to as the creative disciplines—computer science, engineering, the creative arts, etc.—is a focus on understanding and using the effects of putting things together. They ask what new (and in human terms interesting and useful) properties can be realized by putting things together in new ways. Using software as an example I explore software construction, and I ask what, if anything, one gains by thinking of it reductively.
Reduction as nothing-more-than-ism tends to blind one to nature’s constructive aspects. I discuss nature’s tools for creating new phenomena, including negative interactive energy, means for creating and tapping stores of usable energy, autopoiesis, and biological evolution.
Russ Abbott

Chapter 8. Politics and Epistemology of Big Data: A Critical Assessment

In this paper I will discuss Big Data as a suite of new methods for social and political research. I will start by tracing a genealogy of the idea that machines can perform better than human beings in managing extremely large quantities of data, and that the quantity of information could change the quality of the questions posed to those data.
In the second part of the paper I will analyse Big Data as a social and rhetorical construction of the politics of research, arguing in favour of a more detailed account of the consequences of its progressive institutionalization. Without a serious methodological assessment of the changes that these new methods produce in the scientific epistemology of the social and political sciences, we risk underestimating the distortive or uncontrollable effects of the massive use of computational techniques. The challenge is how to avoid situations in which it is very difficult to reproduce the designed experiment, and arduous to explain the theories that can justify the output of research. As an illustration of the problem I will discuss the work on emotional contagion conducted by Facebook and published in PNAS in 2014.
Until now it has been difficult to explore all the consequences of Big Data projects for the perception of human intelligence and for the future of social research methods. The vision that there is no way to manage social data other than to follow the results of a machine learning algorithm operating on inaccessible, epistemologically opaque, and uncontrollable systems is rather problematic and deserves some extra consideration.
Teresa Numerico

Cognition and Mind


Chapter 9. Telepresence and the Role of the Senses

The telepresence experience can be evoked in a number of ways. A well-known example is a videogame player who reports a telepresence experience: a subjective experience of being in one place or environment even when physically situated in another. In this paper we set the phenomenon of telepresence into a theoretical framework. Because people react subjectively to stimuli in telepresence, empirical studies can provide further evidence about the phenomenon; our contribution is thus to bridge the theoretical with the empirical. We discuss theories of perception, with an emphasis on Heidegger, Merleau-Ponty, and Gibson, the role of the senses, and the Spinozian belief procedure. The aim is to contribute to our understanding of this phenomenon. A telepresence study incorporating the affordance concept is used to examine empirically how players report sense reactions to virtual sightseeing in two cities. We investigate and explore the interplay of the philosophical and the empirical. The findings indicate that it is not only the visual sense that plays a role in this experience, but all the senses.
Ingvar Tjostheim, Wolfgang Leister, J. A. Waterworth

Chapter 10. Ontologies, Mental Disorders and Prototypes

As has emerged from philosophical analysis and cognitive research, most concepts exhibit typicality effects and resist efforts to define them in terms of necessary and sufficient conditions. This also holds for many medical concepts. It is a problem for the design of computer science ontologies, since the knowledge representation formalisms commonly adopted in this field (first and foremost the Web Ontology Language, OWL) do not allow for the representation of concepts in terms of typical traits. However, the need to represent concepts in terms of typical traits concerns almost every domain of real-world knowledge, including medical domains. In this article we consider in particular the domain of mental disorders, starting from the DSM-5 descriptions of some specific mental disorders. In this respect, we favor a hybrid approach to the representation of psychiatric concepts, in which ontology-oriented formalisms are combined with a geometric representation of knowledge based on conceptual spaces.
Maria Cristina Amoretti, Marcello Frixione, Antonio Lieto, Greta Adamo

Chapter 11. Large-Scale Simulations of the Brain: Is There a “Right” Level of Detail?

A number of research projects have recently taken up the challenge of formulating large-scale models of brain mechanisms at unprecedented levels of detail. These research enterprises have raised lively debates in the press and in the scientific and philosophical literature, some of them revolving around the question whether the incorporation of so many details in a theoretical model, and in computer simulations of it, is really needed for the model to be explanatory. Is there a “right” level of detail? In this article I analyse the claim, made by two leading neuroscientists, according to which the content of the why-question addressed and the amount of computational resources available constrain the choice of the most appropriate level of detail in brain modelling. Based on the recent philosophical literature on (neuro)scientific explanation, I distinguish between two kinds of details, called here mechanistic-decomposition details and property details, and argue that the nature of the why-question provides only partial constraints on the choice of the most appropriate level of detail under the two interpretations of the term considered here.
Edoardo Datteri

Chapter 12. Virtual Information in the Light of Kant’s Practical Reason

In (D’Agostino M, Floridi L, Synthese 167:271–315, 2009) the authors face the so-called “scandal of deduction” (Hintikka J, Logic, language games and information. Kantian themes in the philosophy of logic. Clarendon Press, Oxford, 1973). This lies in the fact that the Bar-Hillel and Carnap theory of semantic information implies that tautologies carry no information. Given that any mathematical demonstration, and more generally every logical inference in a first-order language, can be reduced to a tautology, this would imply that demonstrations bring no fresh information at all.
Addressing this question, (D’Agostino M, Floridi L, Synthese 167:271–315, 2009) offers both (i) a logical model for strictly analytical reasoning, where the conclusions depend only on the information explicitly present in the premises, and (ii) a proposal for ranking the informativeness of deductions according to their increasing recourse to so-called “virtual information”, namely information that is temporarily assumed but not contained in the premises.
In this paper I will focus on the status of virtual information in its connection with the Kantian philosophical spirit. Exploiting the standard Kantian distinction between theoretical and practical reason, my aim is to show that access to virtual information is due to what Kant calls practical reason rather than to theoretical reason, even though the effects of its deployment are purely theoretical, i.e. they do not lead an agent to any moral action but only to acquiring new information.
Matteo Vincenzo d’Alfonso
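As an illustrative aside (a simplification, not the authors' formalism), the Bar-Hillel and Carnap measure behind the "scandal of deduction" can be sketched over propositional truth assignments: a sentence's information falls as the share of possible "worlds" it rules in grows, so a tautology, true in every world, carries zero information.

```python
import math
from itertools import product

def semantic_information(sentence, atoms):
    """Information of a sentence, in the Bar-Hillel-Carnap spirit:
    -log2 of the proportion of truth assignments ('worlds') satisfying it,
    assuming all worlds equally likely.

    sentence: a function from {atom: bool} dicts to bool.
    """
    worlds = [dict(zip(atoms, vals))
              for vals in product([True, False], repeat=len(atoms))]
    models = sum(1 for w in worlds if sentence(w))
    return math.log2(len(worlds) / models)

atoms = ["p", "q"]
# 'p and q' rules out 3 of 4 worlds: maximally informative here.
print(semantic_information(lambda w: w["p"] and w["q"], atoms))      # 2.0
# 'p or not p' is a tautology: zero bits, the 'scandal' in miniature.
print(semantic_information(lambda w: w["p"] or not w["p"], atoms))   # 0.0
```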

Chapter 13. A Kantian Cognitive Architecture

In this paper, I reinterpret Kant’s Transcendental Analytic as a description of a cognitive architecture. I describe a computer implementation of this architecture, and show how it has been applied to two unsupervised learning tasks. The resulting program is very data efficient, able to learn from a tiny handful of examples. I show how the program achieves data-efficiency: the constraints described in the Analytic of Principles are reinterpreted as strong prior knowledge, constraining the set of possible solutions.
Richard Evans

Moral Dimensions of Human-Machine Interaction


Chapter 14. Machine Learning and Irresponsible Inference: Morally Assessing the Training Data for Image Recognition Systems

Just as humans can draw conclusions responsibly or irresponsibly, so too can computers. Machine learning systems that have been trained on data sets that include irresponsible judgments are likely to yield irresponsible predictions as outputs. In this paper I focus on a particular kind of inference a computer system might make: identification of the intentions with which a person acted on the basis of photographic evidence. Such inferences are liable to be morally objectionable, because of a way in which they are presumptuous. After elaborating this moral concern, I explore the possibility that carefully procuring the training data for image recognition systems could ensure that the systems avoid the problem. The lesson of this paper extends beyond just the particular case of image recognition systems and the challenge of responsibly identifying a person’s intentions. Reflection on this particular case demonstrates the importance (as well as the difficulty) of evaluating machine learning systems and their training data from the standpoint of moral considerations that are not encompassed by ordinary assessments of predictive accuracy.
Owen C. King

Chapter 15. Robotic Responsibility

This paper considers the question of whether humanoid robots may legitimately be viewed as moral agents capable of participating in the moral community. I defend the view that, in a strict sense, i.e., one informed by the fundamental criteria for moral agency, they cannot, but that they may, nonetheless, be incorporated into the moral community in another way. Specifically, I contend that they can be considered to be responsible for moral action upon an expanded view of collective responsibility, which I develop in the paper.
Anna Frammartino Wilks

Chapter 16. Robots, Ethics, and Intimacy: The Need for Scientific Research

Intimate relationships between robots and human beings may begin to form in the near future. Market forces, customer demand, and other factors may drive the creation of various forms of robots to which humans may form strong emotional attachments. Yet prior to the technology becoming fully actualized, numerous ethical, legal, and social issues must be addressed. This could be accomplished in part by establishing a rigorous scientific research agenda in the realm of intimate robotics, the aim of which would be to explore what effects the technology may have on users and on society more generally. Our goal is not to resolve whether the development of intimate robots is ethically appropriate. Rather, we contend that if such robots are going to be designed, then an obligation emerges to prevent harm that the technology could cause.
Jason Borenstein, Ronald Arkin

Chapter 17. Applying a Social-Relational Model to Explore the Curious Case of hitchBOT

This paper applies social-relational models of moral standing of robots to cases where the encounters between the robot and humans are relatively brief. Our analysis spans the spectrum of non-social robots to fully-social robots. We consider cases where the encounters are between a stranger and the robot and do not include its owner or operator. We conclude that the developers of robots that might be encountered by other people when the owner is not present cannot wash their hands of responsibility. They must take care with how they develop the robot’s interface with people and take into account how that interface influences the social relationship between it and people, and, thus, the moral standing of the robot with each person it encounters. Furthermore, we claim that developers have responsibility for the impact social robots have on the quality of human social relationships.
Frances Grodzinsky, Marty J. Wolf, Keith Miller

Chapter 18. Against Human Exceptionalism: Environmental Ethics and the Machine Question

This paper offers an approach for addressing the question of how to deal with artificially intelligent entities, such as robots, mindclones, androids, or any other entity having human features. I argue that to this end we can draw on the insights offered by environmental ethics, suggesting that artificially intelligent entities ought to be considered not as entities extraneous to the human social environment, but as forming an integral part of that environment. In making this argument I take up a radical strand of environmental ethics, namely Deep Ecology, which sees all entities as existing in an inter-relational environment: I thus reject any “firm ontological divide in the field of existence” (Fox W, Deep ecology: A new philosophy of our time? In: Light A, Rolston III H (eds) Environmental ethics: An anthology. Blackwell, Oxford, pp 252–261, 2003) and on that basis I introduce the principles of biospherical egalitarianism, diversity, and symbiosis (Naess A, Inquiry 16(1):95–100, 1973). Environmental ethics makes the case that humans ought to “include within the realms of recognition and respect the previously marginalized and oppressed” (Gottlieb RS, Introduction. In: Merchant C (ed) Ecology. Humanity Books, Amherst, pp ix–xi, 1999). I thus consider (a) whether artificially intelligent entities can be described along these lines, as somehow “marginalized” or “oppressed,” (b) whether there are grounds for extending to them the kind of recognition that such a description would seem to call for, and (c) whether Deep Ecology could reasonably be interpreted in such a way that it applies to artificially intelligent entities.
Migle Laukyte

Chapter 19. The Ethics of Choice in Single-Player Video Games

Video games are a specific kind of virtual world which many engage with on a daily basis; as such, we cannot ignore the values they embody. In this paper I argue that it is possible to cause moral harm or benefit within a video game, specifically by drawing attention to the nature of the choices both players and designers make. I discuss ways in which games attempt to represent morality, arguing that while flawed, even games with seemingly superficial devices such as morality meters can attempt to promote moral reflection. Ultimately, I argue that the moral status of the actions depends on the effects of those actions on the player herself; if those actions make us less ethical then the actions are wrong. Unfortunately, it is not clear to me that players are always in a position to tell whether this is the case.
Erica L. Neely

Trust, Privacy, and Justice


Chapter 20. Obfuscation and Strict Online Anonymity

The collection, aggregation, analysis, and dissemination of personal information permit unnerving inferences about our characters, preferences, and future behavior that were inconceivable just a couple of decades ago. This paper looks primarily at online searching and the commercial harvesting of personal information there. I argue that our best hope for protecting privacy online is anonymity through obfuscation. Obfuscation attempts to throw data collectors off one’s digital trail by making personal data less useful. However, anonymous web searching has costs. I examine two of the most serious and urge that they are worth paying in the light of the heavy toll the commercial gathering and analysis of our information takes on privacy and autonomy. I close with some thoughts on (1) how individual, rational decisions have led to a surveillance regime that few would have chosen beforehand and (2) the alleged autonomy of information technology.
Tony Doyle

Chapter 21. Safety and Security in the Digital Age. Trust, Algorithms, Standards, and Risks

Security is a crucial issue for our society, which is accordingly defined as a risk society. In a complex risk society, however, citizens cannot tackle and manage the issue of risk by themselves. Risk is therefore increasingly delegated to processes and mechanisms that take care of risk management. Today, the risk against which society claims to be immunized (increasingly mediated by technologies and less and less politically legitimized) reemerges in new forms of fiduciary management, raising the possibility of weakening rights and diluting political responsibility.
Massimo Durante

Chapter 22. The Challenges of Digital Democracy, and How to Tackle Them in the Information Era

Scholars examine legal hard cases either in the name of justice, or in accordance with the principle of tolerance. In the case of justice, scholars aim to determine the purposes that all the norms of the system are envisaged to fulfil. In the second case, tolerance is conceived as the right kind of foundational principle for the design of the right kinds of norms in the information era, because such norms have to operate across a number of different cultures, societies and states vis-à-vis an increasing set of issues that concern the whole infrastructure and environment of current information and communication technology-driven societies. Yet the information revolution is triggering an increasing set of legal cases that spark general disagreement among scholars: Matters of accessibility and legal certainty, equality and fair power, protection and dispute resolution, procedures and compliance, are examples that stress what is new under the legal sun of the information era. As a result, justice needs tolerance in order to attain the reasonable compromises that at times have to be found in the legal domain. Yet, tolerance needs justice in order to set its own limits and determine whether a compromise should be deemed as reasonable.
Ugo Pagallo

