
About this Book

This book reports on the results of the third edition of the premier conference in the field of philosophy of artificial intelligence, PT-AI 2017, held on November 4-5, 2017 at the University of Leeds, UK. It covers: advanced knowledge on key AI concepts, including complexity, computation, creativity, embodiment, representation and superintelligence; cutting-edge ethical issues, such as the impact of AI on human dignity and society, the responsibilities and rights of machines, as well as AI threats to humanity and AI safety; and cutting-edge developments in techniques to achieve AI, including machine learning, neural networks and dynamical systems. The book also discusses important applications of AI, including big data analytics, expert systems, cognitive architectures, and robotics. It offers a timely and comprehensive snapshot of what is going on in the field of AI, especially at the interfaces between philosophy, cognitive science, ethics and computing.

Table of Contents

Frontmatter

Cognition - Reasoning - Consciousness

Frontmatter

Artificial Consciousness: From Impossibility to Multiplicity

How has multiplicity superseded impossibility in philosophical challenges to artificial consciousness? I assess a trajectory in recent debates on artificial consciousness, in which metaphysical and explanatory challenges to the possibility of building conscious machines lead to epistemological concerns about the multiplicity underlying ‘what it is like’ to be a conscious creature or be in a conscious state. First, I analyse earlier challenges which claim that phenomenal consciousness cannot arise, or cannot be built, in machines. These are based on Block’s Chinese Nation and Chalmers’ Hard Problem. To defuse such challenges, theorists of artificial consciousness can appeal to empirical methods and models of explanation. Second, I explain why this naturalistic approach produces an epistemological puzzle on the role of biological properties in phenomenal consciousness. Neither behavioural tests nor theoretical inferences seem to settle whether our machines are conscious. Third, I evaluate whether the new challenge can be managed through a more fine-grained taxonomy of conscious states. This strategy is supported by the development of similar taxonomies for biological species and animal consciousness. Although it makes sense of some current models of artificial consciousness, it raises questions about their subjective and moral significance.

Chuanfei Chin

Cognition as Embodied Morphological Computation

Cognitive science is considered to be the study of mind (consciousness and thought) and intelligence in humans. Under such a definition, a variety of unsolved/unsolvable problems appears. This article argues for a broad understanding of cognition, based on empirical results from, among others, the natural sciences, self-organization, artificial intelligence and artificial life, network science and neuroscience. Apart from the high-level mental activities of humans, this understanding includes sub-symbolic and sub-conscious processes, such as emotions, recognizes cognition in other living beings, and encompasses extended and distributed/social cognition. On this new view, cognition is a complex multiscale phenomenon that evolved in living organisms on the basis of bodily structures that process information; it links cognitivist and EEEE (embodied, embedded, enactive, extended) approaches to cognition with the idea of morphological computation (info-computational self-organisation) in cognizing agents, emerging in evolution through the interactions of a (living/cognizing) agent with its environment.

Gordana Dodig-Crnkovic

“The Action of the Brain”

Machine Models and Adaptive Functions in Turing and Ashby

Given the personal acquaintance between Alan M. Turing and W. Ross Ashby and the partial proximity of their research fields, a comparative view of Turing’s and Ashby’s works on modelling “the action of the brain” (in a 1946 letter from Turing to Ashby) will help to shed light on the seemingly strict symbolic/embodied dichotomy: while it is a straightforward matter to demonstrate Turing’s and Ashby’s respective commitments to formal, computational and material, analogue methods of modelling, there is no unambiguous mapping of these approaches onto symbol-based AI and embodiment-centered views respectively. Instead, it will be argued that both approaches, starting from a formal core, were at least partly concerned with biological and embodied phenomena, albeit in revealingly distinct ways.

Hajo Greif

An Epistemological Approach to the Symbol Grounding Problem

The Difference Between Perception and Demonstrative Knowledge and Two Ways of Being Meaningful

I propose a formal approach towards solving Harnad’s “Symbol Grounding Problem” (SGP) through epistemological analogy. (Sect. 1) The SGP and Taddeo and Floridi’s “Zero Semantical Commitment Condition” (z-condition) for its solution are both revisited using Frege’s philosophy of language, in such a way that the SGP is converted into two circumscribed tasks. (Sect. 2) The ground for studying these tasks within human cognition is that both the human mind and AI are conceivable as “physical symbol systems” (PSSs), in Newell’s sense, and that they share the core of the SGP: the problem of constructing an objective reference. (Sect. 3) After two forms of reference have been identified in the human mind, I show why the latter may constitute a model for facing the SGP.

Jodi Guazzini

An Enactive Theory of Need Satisfaction

In this paper, an enactive theory of need satisfaction, based on the predictive processing approach to cognition, is discussed. The theory can be seen as a first step towards a computational cognitive model of need satisfaction.

Soheil Human, Golnaz Bidabadi, Markus F. Peschl, Vadim Savenkov

Agency, Qualia and Life: Connecting Mind and Body Biologically

Many believe that a suitably programmed computer could act for its own goals and experience feelings. I challenge this view and argue that agency, mental causation and qualia are all founded in the unique, homeostatic nature of living matter. The theory was formulated for coherence with the concept of an agent, neuroscientific data and laws of physics. By this method, I infer that a successful action is homeostatic for its agent and can be caused by a feeling - which does not motivate as a force, but as a control signal. From brain research and the locality principle of physics, I surmise that qualia are a fundamental, biological form of energy generated in specialized neurons. Subjectivity is explained as thermodynamically necessary on the supposition that, by converting action potentials to feelings, the neural cells avert damage from the electrochemical pulses. In exchange for this entropic benefit, phenomenal energy is spent as and where it is produced - which precludes the objective observation of qualia.

David Longinotti

Dynamic Concept Spaces in Computational Creativity for Music

I argue for a formal specification as a working understanding of ‘computational creativity’. Geraint A. Wiggins proposed a formalised framework for ‘computational creativity’, based on Margaret Boden’s view of ‘creativity’ defined as searches in concept spaces. I argue that the epistemological basis for delineated ‘concept spaces’ is problematic: instead of Wiggins’s bounded types or sets, such theoretical spaces can represent traces of creative output. To address this problem, I propose a revised specification which includes dynamic concept spaces, along with formalisations of memory and motivations, which allow iteration in a time-based framework that can be aligned with learning models (e.g., John Dewey’s experiential model). This supports the view of computational creativity as a product of a learning process. My critical revision of the framework, applied to the case of computer systems that improvise music, achieves a more detailed specification and a better understanding of the potential of computational creativity.

René Mogensen
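A schematic rendering of the search-based picture this abstract starts from may help fix ideas; the following is a simplification for illustration, not Wiggins’s exact notation. Let $$\mathcal{U}$$ be a universe of possible artefacts, $$\mathcal{R}_t$$ a rule set delineating a concept space at time $$t$$, $$\mathcal{T}$$ a traversal strategy, and $$\mathcal{E}$$ an evaluation function. Boden’s exploratory creativity is then search within

$$C_t = \{\, x \in \mathcal{U} \mid \mathcal{R}_t(x) \,\}$$

and the dynamic revision sketched above would let each accepted output feed back into the space,

$$x_t = \mathcal{T}(C_t), \qquad \mathcal{R}_{t+1} = f\bigl(\mathcal{R}_t,\ x_t,\ \mathcal{E}(x_t)\bigr)$$

so that the concept space becomes a time-indexed trace of creative output rather than a fixed, bounded set.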

Creative AI: Music Composition Programs as an Extension of the Composer’s Mind

I discuss the question “Can a computer create a musical work?” in the light of recent developments in AI music generation. In attempting to provide an answer, further questions about the creativity and intentionality exhibited by AI will emerge. In the first part of the paper I propose to replace the question of whether a computer can be creative with questions about the intentionality displayed by the system. The notion of creativity is indeed embedded in our subjective judgement, and this prevents us from giving an objective evaluation of an idea or product as creative. In Sect. 2, I suggest shifting the focus of the inquiry to the autonomy possessed by the software. I finally argue that the application of generative adversarial networks to music generators provides the software with a level of autonomy sufficient to deem it able to create musical works.

Caterina Moruzzi

How Are Robots’ Reasons for Action Grounded?

This paper defends the view that (non-conscious) robots’ reasons for action can only be grounded externally, in the qualitative character of the conscious affective experience of their programmers or users. Within reasoning, reasons for action need to be evaluated in a way that provides immediate non-inferential justification of one’s reasons to oneself, in order to stop a potential regress of whys. Robots devoid of consciousness and thus incapable of feeling emotion cannot process information about reasons for action in a way that is subjectively meaningful. Different types of grounding will be discussed, together with the question of relativism about fundamentality in the context of grounding. The concluding discussion will consider the case of hypothetical conscious robots with internally grounded reasons for action, arguing that it would be unethical for such robots to be created, as they would either effectively be brought into slavery or, if developing AI rather than human-centred values, would potentially represent a threat to human life.

Bryony Pierce

Artificial Brains and Hybrid Minds

The paper develops two related thought experiments exploring variations on an ‘animat’ theme. Animats are hybrid devices with both artificial and biological components. Traditionally, ‘components’ have been construed in concrete terms, as physical parts or constituent material structures. Many fascinating issues arise within this context of hybrid physical organization. However, within the context of functional/computational theories of mentality, demarcations based purely on material structure are unduly narrow. It is abstract functional structure which does the key work in characterizing the respective ‘components’ of thinking systems, while the ‘stuff’ of material implementation is of secondary importance. Thus the paper extends the received animat paradigm, and investigates some intriguing consequences of expanding the conception of bio-machine hybrids to include abstract functional and semantic structure. In particular, the thought experiments consider cases of mind-machine merger where there is no physical Brain-Machine Interface: indeed, the material human body and brain have been removed from the picture altogether. The first experiment illustrates some intrinsic theoretical difficulties in attempting to replicate the human mind in an alternative material medium, while the second reveals some deep conceptual problems in attempting to create a form of truly Artificial General Intelligence.

Paul Schweizer

Huge, but Unnoticed, Gaps Between Current AI and Natural Intelligence

Despite AI’s enormous practical successes, some researchers focus on its potential as science and philosophy: providing answers to ancient questions about what minds are, how they work, how multiple varieties of minds can be produced by biological evolution, including minds at different stages of evolution, and different stages of development in individual organisms. AI cannot yet replicate or faithfully model most of these, including ancient, but still widely used, mathematical discoveries described by Kant as non-empirical, non-logical and non-contingent. Automated geometric theorem provers start from externally provided logical axioms, whereas for ancient mathematicians the axioms in Euclid’s Elements were major discoveries, not arbitrary starting points. Human toddlers and other animals spontaneously make similar but simpler topological and geometrical discoveries, and use them in forming intentions and planning or controlling actions. The ancient mathematical discoveries were not results of statistical/probabilistic learning, because, as noted by Kant, they provide non-empirical knowledge of possibilities, impossibilities and necessary connections. Can gaps between natural and artificial reasoning in topology and geometry be bridged if future AI systems use previously unknown forms of information processing machinery – perhaps “Super-Turing Multi-Membrane” machinery?

Aaron Sloman

Social Cognition and Artificial Agents

Standard notions in philosophy of mind have a tendency to characterize socio-cognitive abilities as if they were unique to sophisticated human beings. However, assuming that it is likely that we are soon going to share a large part of our social lives with various kinds of artificial agents, it is important to develop a conceptual framework providing notions that are able to account for various types of social agents. Recent minimal approaches to socio-cognitive abilities such as mindreading and commitment present a promising starting point from which one can expand the field of application not only to infants and non-human animals but also to artificial agents. Developing a minimal approach to the socio-cognitive ability of acting jointly, I present a foundation for future discussions about the question of how our conception of sociality can be expanded to artificial agents.

Anna Strasser

Computation - Intelligence - Machine Learning

Frontmatter

Mapping Intelligence: Requirements and Possibilities

New types of artificial intelligence (AI), from cognitive assistants to social robots, are challenging meaningful comparison with other kinds of intelligence. How can such intelligent systems be catalogued, evaluated, and contrasted, with representations and projections that offer meaningful insights? To catalyse the research in AI and the future of cognition, we present the motivation, requirements and possibilities for an atlas of intelligence: an integrated framework and collaborative open repository for collecting and exhibiting information of all kinds of intelligence, including humans, non-human animals, AI systems, hybrids and collectives thereof. After presenting this initiative, we review related efforts and present the requirements of such a framework. We survey existing visualisations and representations, and discuss which criteria of inclusion should be used to configure an atlas of intelligence.

Sankalp Bhatnagar, Anna Alexandrova, Shahar Avin, Stephen Cave, Lucy Cheke, Matthew Crosby, Jan Feyereisl, Marta Halina, Bao Sheng Loe, Seán Ó hÉigeartaigh, Fernando Martínez-Plumed, Huw Price, Henry Shevlin, Adrian Weller, Alan Winfield, José Hernández-Orallo

Do Machine-Learning Machines Learn?

We answer the present paper’s title in the negative. We begin by introducing and characterizing “real learning” ($$\mathcal{RL}$$) in the formal sciences, a phenomenon that has been firmly in place in homes and schools since at least Euclid. The defense of our negative answer pivots on an integration of reductio and proof by cases, and constitutes a general method for showing that any contemporary form of machine learning (ML) isn’t real learning. Along the way, we canvass the many different conceptions of “learning” in not only AI but psychology and its allied disciplines; none of these conceptions (with one exception, arising from the view of cognitive development espoused by Piaget) aligns with real learning. We explain in this context, in four steps, how to broadly characterize and arrive at a focus on $$\mathcal{RL}$$.

Selmer Bringsjord, Naveen Sundar Govindarajulu, Shreya Banerjee, John Hummel

Where Intelligence Lies: Externalist and Sociolinguistic Perspectives on the Turing Test and AI

Turing’s Imitation Game (1950) is usually understood to be a test for machines’ intelligence; I offer an alternative interpretation. Turing, I argue, held an externalist-like view of intelligence, according to which an entity’s being intelligent is dependent not just on its functions and internal structure, but also on the way it is perceived by society. He conditioned the determination that a machine is intelligent upon two criteria: one technological and one sociolinguistic. The Technological Criterion requires that the machine’s structure enables it to imitate the human brain so well that it displays intelligent-like behavior; the Imitation Game tests if this Technological Criterion was fulfilled. The Sociolinguistic Criterion requires that the machine be perceived by society as a potentially intelligent entity. Turing recognized that in his day, this Sociolinguistic Criterion could not be fulfilled due to humans’ chauvinistic prejudice towards machines; but he believed that future development of machines displaying intelligent-like behavior would cause this chauvinistic attitude to change. I conclude by discussing some implications Turing’s view may have in the fields of AI development and ethics.

Shlomo Danziger

Modelling Machine Learning Models

Machine learning (ML) models make decisions for governments, companies, and individuals. Accordingly, there is increasing concern about the lack of a rich explanatory and predictive account of the behaviour of these ML models relative to the users’ interests (goals) and (pre-)conceptions (ontologies). We argue that recent research trends in finding better characterisations of what an ML model does are leading to a view of ML models as complex behavioural systems. A good explanation of a model should depend on how well it describes the behaviour of the model in simpler, more comprehensible, or more understandable terms according to a given context. Consequently, we claim that a more contextual abstraction is necessary (as is done in systems theory and psychology), which makes this very much like a subjective mind-modelling problem. We present some research evidence of how this partial and subjective modelling of machine learning models can take place, suggesting that more machine learning is the answer.

Raül Fabra-Boluda, Cèsar Ferri, José Hernández-Orallo, Fernando Martínez-Plumed, M. José Ramírez-Quintana
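One standard way to “model a model”, in the spirit of the abstract above, is surrogate modelling (distillation): fit an interpretable model to the predictions of a black box, so that the surrogate describes the black box’s behaviour in simpler terms. The Python sketch below is only an illustration of that general idea, not the authors’ method; it assumes scikit-learn is available.

# Illustrative sketch of surrogate modelling ("modelling a model"):
# fit an interpretable tree to the predictions of a black-box model,
# so the tree describes the black box's behaviour in simpler terms.
# Not the authors' method; assumes scikit-learn is available.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate is trained on the black box's outputs, not the true labels:
# its target is "what does the model do?", not "what is true?".
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how well the simple description matches the complex behaviour.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(8)]))

The fidelity score measures how well the simple description matches the complex behaviour, which is the kind of behaviour-level, context-relative account the abstract calls for.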

Is Programming Done by Projection and Introspection?

Often people describe the creative act of programming as mysterious (Costa 2015). This paper explores the phenomenology of programming, and examines the following proposal: Programming is a log of actions one would imagine oneself to be doing (in order to achieve a task) after one projects oneself into a world consisting of software mechanisms, such as “the Python environment”. Programming is the formal logging of our imagined actions, in such an imagined world. Our access to our imagination is introspective.

Sam Freed

Supporting Pluralism by Artificial Intelligence: Conceptualizing Epistemic Disagreements as Digital Artifacts

A crucial concept in philosophy and social sciences, epistemic disagreement, has not yet been adequately reflected in the Web. In this paper, we call for development of intelligent tools dealing with epistemic disagreements on the Web to support pluralism. As a first step, we present Polyphony, an ontology for representing and annotating epistemic disagreements.

Soheil Human, Golnaz Bidabadi, Vadim Savenkov

The Frame Problem, Gödelian Incompleteness, and the Lucas-Penrose Argument: A Structural Analysis of Arguments About Limits of AI, and Its Physical and Metaphysical Consequences

The frame problem is a fundamental challenge in AI, and the Lucas-Penrose argument is supposed to show a limitation of AI if it is successful at all. Here we discuss both of them from a unified Gödelian point of view. We give an informational reformulation of the frame problem, which turns out to be tightly intertwined with the nature of Gödelian incompleteness in the sense that they both hinge upon the finitarity condition of agents or systems, without which their alleged limitations can readily be overcome, and that they can both be seen as instances of the fundamental discrepancy between finitary beings and infinitary reality. We then revisit the Lucas-Penrose argument, elaborating a version of it which indicates the impossibility of information physics or the computational theory of the universe. It turns out through a finer analysis that if the Lucas-Penrose argument is accepted then information physics is impossible too; the possibility of AI or the computational theory of the mind is thus linked with the possibility of information physics or the computational theory of the universe. We finally reconsider Penrose’s Quantum Mind Thesis in light of recent advances in quantum modelling of cognition, giving a structural reformulation of it and thereby shedding new light on what is problematic in the Quantum Mind Thesis. Overall, we consider it promising to link the computational theory of the mind with the computational theory of the universe; their integration would allow us to go beyond Cartesian dualism, giving, in particular, an incarnation of Chalmers’ double-aspect theory of information.

Yoshihiro Maruyama

Quantum Pancomputationalism and Statistical Data Science: From Symbolic to Statistical AI, and to Quantum AI

The rise of probability and statistics is striking in contemporary science, ranging from quantum physics to artificial intelligence. Here we discuss two issues: one is the computational theory of mind as the fundamental underpinning of AI, and the quantum nature of computation therein; the other is the shift from symbolic to statistical AI, and the nature of truth in data science as a new kind of science. In particular we argue as follows: if the singularity thesis is true, the computational theory of mind must ultimately be quantum, in light of recent findings in quantum biology and cognition; and data science is concerned with a new form of scientific truth, which may be called “post-truth”: whereas conventional science is about establishing idealised, universal truths on the basis of pure data carefully collected in a controlled situation, data science is about indicating useful, existential truths on the basis of real-world data gathered in contingent real-life situations and contaminated in various ways.

Yoshihiro Maruyama

Getting Clarity by Defining Artificial Intelligence—A Survey

Intelligence remains ill-defined. Theories of intelligence and the goal of Artificial Intelligence (A.I.) have been the source of much confusion, both within the field and among the general public. Studies are needed that contribute to a well-defined goal for the discipline and spread a stronger, more coherent message to the mainstream media, policy-makers, investors, and the general public, to help dispel myths about A.I. We present the preliminary results of our research survey “Defining (machine) Intelligence”, which gathers opinions from a cross-section of professionals to help create a unified message on the goal and definition of A.I.

Dagmar Monett, Colin W. P. Lewis

Epistemic Computation and Artificial Intelligence

AI research is continually challenged to explain cognitive processes as being computational. Whereas existing notions of computing seem to have their limits here, we contend that the recent, epistemic approach to computation may hold the key to understanding cognition from this perspective. In this approach, computations are seen as processes generating knowledge over a suitable knowledge domain, within the framework of a suitable knowledge theory. This machine-independent understanding of computation allows us to explain a variety of higher cognitive functions, such as accountability, self-awareness, introspection, free will, creativity, anticipation and curiosity, in computational terms. It also opens the way to understanding the self-improving mechanisms behind the development of intelligence. The argumentation does not depend on any technological analogies.

Jiří Wiedermann, Jan van Leeuwen

Will Machine Learning Yield Machine Intelligence?

This paper outlines the non-behavioral Algorithmic Similarity criterion for machine intelligence, and assesses the likelihood that it will eventually be satisfied by computers programmed using Machine Learning (ML). Making this assessment requires overcoming the Black Box Problem, which makes it difficult to characterize the algorithms that are actually acquired via ML. This paper therefore considers Explainable AI’s prospects for solving the Black Box Problem, and for thereby providing a posteriori answers to questions about the possibility of machine intelligence. In addition, it suggests that the real-world nurture and situatedness of ML-programmed computers constitute a priori reasons for thinking that they will not only learn to behave like humans, but that they will also eventually acquire algorithms similar to the ones that are implemented in human brains.

Carlos Zednik

Ethics - Law

Frontmatter

In Critique of RoboLaw: The Model of SmartLaw

This research develops a new regulatory framework for analyzing the probable upcoming technological singularity. First, an analysis of the standard regulation framework is provided, describing its elements and explaining its failures as applied to the singularity. Next, using a transaction-cost approach, a new conceptual regulation framework is proposed. This work contributes to the understanding of regulation in the context of technological evolution.

Paulius Astromskis

AAAI: An Argument Against Artificial Intelligence

The ethical concerns regarding the successful development of an Artificial Intelligence have received a lot of attention lately. The idea is that even if we have good reason to believe that it is very unlikely, the mere possibility of an AI causing extreme human suffering is important enough to warrant serious consideration. Others look at this problem from the opposite perspective, namely that of the AI itself. Here the idea is that even if we have good reason to believe that it is very unlikely, the mere possibility of humanity causing extreme suffering to an AI is important enough to warrant serious consideration. This paper starts from the observation that both concerns rely on problematic philosophical assumptions. Rather than tackling these assumptions directly, it proceeds to present an argument that if one takes these assumptions seriously, then one has a moral obligation to advocate for a ban on the development of a conscious AI.

Sander Beckers

Institutional Facts and AMAs in Society

Which moral principles should artificial moral agents (AMAs) act upon? This is not an easy problem. But even harder is the problem of identifying and differentiating the elements of any moral event, and then finding out how those elements relate to one’s preferred moral principle, if any. This is because of the very nature of morally relevant phenomena: social facts. As Searle points out, unlike brute facts about the physical world, the ontology of facts about social reality (which he calls institutional facts) is subjective, and such facts exist within a social environment. The appropriate way to learn these facts is by interaction. But what should this interaction be like, and with whom, especially in the case of artificial agents, before they become ‘mature’? This suggests that we face a problem very similar to that of raising a child.

Arzu Gokmen

A Systematic Account of Machine Moral Agency

In this paper, I develop a preliminary framework that permits groups (or ‘systems’) to be moral agents. I show that this has advantages over traditional accounts of moral agency when applied to cases where machines are involved in moral actions. I appeal to two thought experiments to show that the traditional account can lead us to counterintuitive consequences. Then I present what I call the ‘systematic account’ which I argue avoids these counterintuitive consequences. On my account, machines can be partial moral agents currently fulfilling some but not all of the conditions required for moral agency. Thus, when a machine is part of a group of agents, it can be part of a system that is a moral agent. This framework is a useful starting point as it preserves aspects of traditional accounts of moral agency while also including machines in our moral deliberations.

Mahi Hardalupas

A Framework for Exploring Intelligent Artificial Personhood

The paper presents a framework for examining the human use of, and the activities of, artificial persons. This paper applies Hobbesian methodology to ascribe artificial personhood to business organisations, professional persons and algorithmic artificial intelligence services. A modification is made to Heidegger’s ontological framework so that it can accommodate these artificial persons in a space between tools and human beings. The extended framework makes it possible not only to explore human uses of tools, but also to pose questions on the relationships, obligations and operations that transfer between humans and artificial persons.

Thomas B. Kane

Against Leben’s Rawlsian Collision Algorithm for Autonomous Vehicles

Suppose that an autonomous vehicle encounters a situation where (i) imposing a risk of harm on at least one person is unavoidable; and (ii) a choice about how to allocate risks of harm between different persons is required. What does morality require in these cases? Derek Leben defends a Rawlsian answer to this question. I argue that we have reason to reject Leben’s answer.

Geoff Keeling

Moral Status of Digital Agents: Acting Under Uncertainty

This paper addresses how to act towards digital agents while uncertain about their moral status. It focuses specifically on the problem of how to act towards simulated minds operated by an artificial superintelligence (ASI). This problem can be treated as a subset of the larger problem of AI safety (how to ensure a desirable outcome after the emergence of ASI) and also invokes debates about the grounds of moral status. The paper presents a formal structure for solving the problem by first constraining it as a sub-problem of the AI-safety problem, and then suggesting a decision-theoretic approach to how it can be solved under uncertainty about what the true grounds of moral status are and whether such simulations possess those grounds. The paper ends by briefly suggesting a way to generalize the approach.

Abhishek Mishra
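As a toy illustration of the decision-theoretic shape of this problem (not the paper’s own formalism), one can weight each action by one’s credence in each candidate ground of moral status and by the action’s moral value if that ground obtains; all hypotheses, actions and numbers in the Python sketch below are hypothetical.

# Toy sketch of acting under moral-status uncertainty; all hypotheses,
# actions and numbers are hypothetical, not the paper's formalism.
# Expected moral value of an action = credence-weighted value across
# hypotheses about what grounds moral status.
credences = {
    "sentience_suffices": 0.4,  # p(hypothesis about the true ground)
    "sapience_required": 0.5,
    "biology_required": 0.1,
}
values = {  # v(action | hypothesis): moral value if the hypothesis is true
    "pause_simulation": {"sentience_suffices": -5, "sapience_required": -1, "biology_required": 0},
    "run_unconstrained": {"sentience_suffices": -9, "sapience_required": -6, "biology_required": 0},
    "run_with_safeguards": {"sentience_suffices": -1, "sapience_required": 0, "biology_required": -2},
}

def expected_moral_value(action: str) -> float:
    return sum(p * values[action][h] for h, p in credences.items())

for action in values:
    print(f"{action}: {expected_moral_value(action):+.2f}")
print("chosen:", max(values, key=expected_moral_value))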

Friendly Superintelligent AI: All You Need Is Love

There is a non-trivial chance that sometime in the (perhaps somewhat distant) future, someone will build an artificial general intelligence that will surpass human-level cognitive proficiency and go on to become “superintelligent”, vastly outperforming humans. The advent of superintelligent AI has great potential, for good or ill. It is therefore imperative that we find a way to ensure—long before one arrives—that any superintelligence we build will consistently act in ways congenial to our interests. This is a very difficult challenge in part because most of the final goals we could give an AI admit of so-called “perverse instantiations”. I propose a novel solution to this puzzle: instruct the AI to love humanity. The proposal is compared with Yudkowsky’s Coherent Extrapolated Volition, and Bostrom’s Moral Modeling proposals.

Michael Prinzing

Autonomous Weapon Systems - An Alleged Responsibility Gap

In an influential paper, Sparrow argues that it is immoral to deploy autonomous weapon systems (AWS) in combat. The general idea is that nobody can be held responsible for wrongful actions committed by an AWS, because nobody can predict or control the AWS. I argue that this view is incorrect. The programmer remains in control of when and how an AWS learns from experience. Furthermore, the programmer can predict the non-local behaviour of the AWS. This is sufficient to ensure that the programmer can be held responsible. I also present a consequentialist argument in favour of using AWS: when an AWS misclassifies non-legitimate targets as legitimate less often than human soldiers do, then using the AWS can be expected to save lives. However, there are also a number of reasons, e.g. the risk of hacking, why we should still be cautious about introducing AWS to modern warfare.

Torben Swoboda

Backmatter
