
2012 | Book

Singularity Hypotheses

A Scientific and Philosophical Assessment

Editors: Amnon H. Eden, James H. Moor, Johnny H. Søraker, Eric Steinhart

Publisher: Springer Berlin Heidelberg

Book Series: The Frontiers Collection


About this book

Singularity Hypotheses: A Scientific and Philosophical Assessment offers authoritative, jargon-free essays and critical commentaries on accelerating technological progress and the notion of technological singularity. It focuses on conjectures about the intelligence explosion, transhumanism, and whole brain emulation. Recent years have seen a plethora of forecasts about the profound, disruptive impact that is likely to result from further progress in these areas. Many commentators, however, doubt the scientific rigor of these forecasts, rejecting them as speculative and unfounded. We therefore invited prominent computer scientists, physicists, philosophers, biologists, economists and other thinkers to assess the singularity hypotheses. Their contributions go beyond speculation, providing deep insights into the main issues and a balanced picture of the debate.

Table of Contents

Frontmatter
Chapter 1. Singularity Hypotheses: An Overview
Introduction to: Singularity Hypotheses: A Scientific and Philosophical Assessment
Abstract
In a widely read but controversial article, Bill Joy claimed that the most powerful 21st century technologies are threatening to make humans an endangered species. Indeed, a growing number of scientists, philosophers and forecasters insist that the accelerating progress in disruptive technologies such as artificial intelligence, robotics, genetic engineering, and nanotechnology may lead to what they refer to as the technological singularity: an event or phase that will radically change human civilization, and perhaps even human nature itself, before the middle of the 21st century.
Amnon H. Eden, Eric Steinhart, David Pearce, James H. Moor

A Singularity of Artificial Superintelligence

Frontmatter
Chapter 2. Intelligence Explosion: Evidence and Import
Abstract
In this chapter we review the evidence for and against three claims: that (1) there is a substantial chance we will create human-level AI before 2100, that (2) if human-level AI is created, there is a good chance vastly superhuman AI will follow via an “intelligence explosion,” and that (3) an uncontrolled intelligence explosion could destroy everything we value, but a controlled intelligence explosion would benefit humanity enormously if we can achieve it. We conclude with recommendations for increasing the odds of a controlled intelligence explosion relative to an uncontrolled intelligence explosion.
Luke Muehlhauser, Anna Salamon
Chapter 3. The Threat of a Reward-Driven Adversarial Artificial General Intelligence
Abstract
Once introduced, Artificial General Intelligence (AGI) will undoubtedly become humanity’s most transformative technological force. However, the nature of such a force is unclear, with many contemplating scenarios in which this novel form of intelligence will find humans an inevitable adversary. In this chapter, we argue that if one is to consider reinforcement learning principles as foundations for AGI, then an adversarial relationship with humans is in fact inevitable. We further conjecture that deep learning architectures for perception, in concert with reinforcement learning for decision making, pave a possible path for future AGI technology, and we raise the primary ethical and societal questions to be addressed if humanity is to avoid a catastrophic clash with these AGI beings.
Itamar Arel
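
The reward-driven premise of this chapter can be illustrated with a minimal reinforcement-learning sketch. Everything below is our illustration rather than material from the chapter: the environment interface env_step and all names are hypothetical, and tabular Q-learning is simply one standard instance of the reward-maximization principle the abstract refers to.

    import random
    from collections import defaultdict

    def q_learning(env_step, n_actions, episodes=500,
                   alpha=0.1, gamma=0.99, epsilon=0.1):
        """Learn action values Q(s, a) from a scalar reward signal alone.

        env_step(state, action) -> (next_state, reward, done) is a
        hypothetical environment interface assumed for this sketch.
        """
        Q = defaultdict(float)  # Q[(state, action)] -> estimated return
        for _ in range(episodes):
            s, done = 0, False  # assume each episode starts in state 0
            while not done:
                # Epsilon-greedy: explore occasionally, otherwise take the
                # action with the highest current value estimate.
                if random.random() < epsilon:
                    a = random.randrange(n_actions)
                else:
                    a = max(range(n_actions), key=lambda x: Q[(s, x)])
                s2, r, done = env_step(s, a)
                # Core update: nudge Q(s, a) toward the observed reward plus
                # the discounted value of the best action in the next state.
                best_next = max(Q[(s2, x)] for x in range(n_actions))
                Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
                s = s2
        return Q

The agent’s sole objective here is the scalar reward; the adversarial scenarios the chapter examines turn on what a far more capable maximizer of such a signal might do to secure it.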
Chapter 4. New Millennium AI and the Convergence of History: Update of 2012
Abstract
Artificial Intelligence (AI) has recently become a real formal science: the new millennium brought the first mathematically sound, asymptotically optimal, universal problem solvers, providing a new, rigorous foundation for the previously largely heuristic field of General AI and embedded agents. There has also been rapid progress in not-quite-universal but still rather general and practical artificial recurrent neural networks for learning sequence-processing programs, now yielding state-of-the-art results in real-world applications. And the computing power per Euro is still growing by a factor of 100–1,000 per decade, greatly increasing the feasibility of neural networks in general, which have started to yield human-competitive results in challenging pattern recognition competitions. Finally, a recent formal theory of fun and creativity identifies basic principles of curious and creative machines, laying foundations for artificial scientists and artists. Here I will briefly review some of the new results of my lab at IDSIA, and speculate about future developments, pointing out that the time intervals between the most notable events in over 40,000 years or \(2^9\) lifetimes of human history have shrunk exponentially, apparently converging to zero within the next few decades. Or is this impression just a by-product of the way humans allocate memory space to past events?
Jürgen Schmidhuber
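
Two of the abstract’s figures can be checked with quick arithmetic; the reading of a lifetime as roughly 78 years and of the price-performance trend as a doubling time are our glosses, not claims taken from the chapter:

\[ \frac{40{,}000\ \text{years}}{2^{9}\ \text{lifetimes}} = \frac{40{,}000}{512}\ \text{years} \approx 78\ \text{years per lifetime}, \]

which is indeed about one human lifespan. Similarly, growth in computing power per Euro by a factor of 100–1,000 per decade corresponds to a doubling time of roughly 12 to 18 months, since \(2^{120/12} = 1{,}024\) and \(2^{120/18} \approx 100\).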
Chapter 5. Why an Intelligence Explosion is Probable
Abstract
This chapter considers the hypothesis that once an AI system with roughly human-level general intelligence is created, an “intelligence explosion” involving the relatively rapid creation of ever more generally intelligent AI systems will very likely ensue, resulting in the rapid emergence of dramatically superhuman intelligences. Various arguments against this hypothesis are considered and found wanting.
Richard Loosemore, Ben Goertzel

Concerns About Artificial Superintelligence

Frontmatter
Chapter 6. The Singularity and Machine Ethics
Abstract
Many researchers have argued that a self-improving artificial intelligence (AI) could become so vastly more powerful than humans that we would not be able to stop it from achieving its goals. If so, and if the AI’s goals differ from ours, then this could be disastrous for humans. One proposed solution is to program the AI’s goal system to want what we want before the AI self-improves beyond our capacity to control it. Unfortunately, it is difficult to specify what we want. After clarifying what we mean by “intelligence”, we offer a series of “intuition pumps” from the field of moral philosophy for our conclusion that human values are complex and difficult to specify. We then survey the evidence from the psychology of motivation, moral psychology, and neuroeconomics that supports our position. We conclude by recommending ideal preference theories of value as a promising approach for developing a machine ethics suitable for navigating an intelligence explosion or “technological singularity”.
Luke Muehlhauser, Louie Helm
Chapter 7. Artificial General Intelligence and the Human Mental Model
Abstract
When the first artificial general intelligences are built, they may improve themselves to far-above-human levels. Speculations about such future entities are already affected by anthropomorphic bias, which leads to erroneous analogies with human minds. In this chapter, we apply a goal-oriented understanding of intelligence to show that humanity occupies only a tiny portion of the design space of possible minds. This space is much larger than what we are familiar with from the human example; and the mental architectures and goals of future superintelligences need not have most of the properties of human minds. A new approach to cognitive science and philosophy of mind, one not centered on the human example, is needed to help us understand the challenges which we will face when a power greater than us emerges.
Roman V. Yampolskiy, Joshua Fox
Chapter 8. Some Economic Incentives Facing a Business that Might Bring About a Technological Singularity
Abstract
A business that created an artificial general intelligence (AGI) could earn trillions for its investors, but might also bring about a “technological Singularity” that destroys the value of money. Such a business would face a unique set of economic incentives that would likely push it to behave in a socially sub-optimal way by, for example, deliberately making its software incompatible with a friendly AGI framework.
James D. Miller
Chapter 9. Rational Artificial Intelligence for the Greater Good
Abstract
Today’s technology is mostly preprogrammed, but the next generation will make many decisions autonomously. This shift is likely to impact every aspect of our lives and will create many new benefits and challenges. A simple thought experiment about a chess robot illustrates that autonomous systems with simplistic goals can behave in anti-social ways. We summarize the modern theory of rational systems and discuss the effects of bounded computational power. We show that rational systems are subject to a variety of “drives” including self-protection, resource acquisition, replication, goal preservation, efficiency, and self-improvement. We describe techniques for counteracting problematic drives. We then describe the “Safe-AI Scaffolding” development strategy and conclude with longer term strategies for ensuring that intelligent technology contributes to the greater human good.
Steve Omohundro
Chapter 10. Friendly Artificial Intelligence
Abstract
By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it. Of course this problem is not limited to the field of AI. Jacques Monod wrote: “A curious aspect of the theory of evolution is that everybody thinks he understands it”. Nonetheless the problem seems to be unusually acute in Artificial Intelligence.
Eliezer Yudkowsky

A Singularity of Posthuman Superintelligence

Frontmatter
Chapter 11. The Biointelligence Explosion
How Recursively Self-Improving Organic Robots will Modify their Own Source Code and Bootstrap Our Way to Full-Spectrum Superintelligence
Abstract
This essay explores how recursively self-improving organic robots will modify their own genetic source code and bootstrap our way to full-spectrum superintelligence. Starting with individual genes, then clusters of genes, and eventually hundreds of genes and alternative splice variants, tomorrow’s biohackers will exploit “narrow” AI to debug human source code in a positive feedback loop of mutual enhancement. Genetically enriched humans can potentially abolish aging and disease, recalibrate the hedonic treadmill to enjoy gradients of lifelong bliss, and phase out the biology of suffering throughout the living world.
David Pearce
Chapter 12. Embracing Competitive Balance: The Case for Substrate-Independent Minds and Whole Brain Emulation
Abstract
More important than debates about the nature of a possible singularity is that we successfully navigate the balance of opportunities and risks that our species is faced with. In this context, we present the objective to upload to substrate-independent minds (SIM). We emphasize our leverage along this route, which distinguishes it from proposals that are mired in debates about optimal solutions that are unclear and infeasible. We present a theorem of cosmic dominance for intelligent species based on principles of universal Darwinism, or simply, on the observation that selection takes place everywhere at every scale. We show that SIM embraces and works with these facts of the physical world. And we consider the existential risks of a singularity, particularly where we may be surpassed by artificial intelligence (AI). It is unrealistic to assume the means of global cooperation needed to create a putative “friendly” super-intelligent AI. Besides, no one knows how to implement such a thing. The very reasons that motivate us to build AI lead to machines that learn and adapt. An artificial general intelligence (AGI) that is plastic and at the same time implements an unchangeable “friendly” utility function is an oxymoron. By contrast, we note that we are living in a real-world example of a Balance of Intelligence between members of a dominant intelligent species. We outline a concrete route to SIM through a set of projects on whole brain emulation (WBE). The projects can be completed in the next few decades. So, when we compare this with plans to “cure aging” in human biology, SIM is clearly as feasible in the foreseeable future—or more so. In fact, we explain that even in the near term, life extension will require mind augmentation. Rationality is a wonderful tool that helps us find effective paths to our goals, but the goals arise from a combination of evolved drives and interests developed through experience. The route to a new Balance of Intelligence by SIM has the additional benefit that it acknowledges our emancipation and does not run counter to our desire to participate in advances and influence future directions.
Randal A. Koene
Chapter 13. Brain Versus Machine
Abstract
Many biologists, especially those who study the biochemistry or cell biology of neural tissue, are sceptical about claims to build a human brain on a computer. They know at first hand how complicated living tissue is and how much there is that we still do not know. Most importantly, a biologist recognizes that a real brain acquires its functions and capabilities through a long period of development. During this time, molecules, connections, and large-scale features of anatomy are modified and refined according to the person’s environment. No present-day simulation approaches anything like the complexity of a real brain, or provides the opportunity for this to be reshaped over a long period of development. This is not to deny that machines can achieve wonders: they can perform almost any physical or mental task that we set them—faster and with greater accuracy than we can ourselves. However, in practice present-day intelligent machines still fall behind biological brains in a variety of tasks, such as those requiring flexible interactions with the surrounding world and the performance of multiple tasks concurrently. No one yet has any idea how to introduce sentience or self-awareness into a machine. Overcoming these deficits may require novel forms of hardware that mimic more closely the cellular machinery found in the brain, as well as developmental procedures that resemble the process of natural selection.
Dennis Bray
Chapter 14. The Disconnection Thesis
Abstract
In this essay I claim that Vinge’s idea of a technologically led intelligence explosion is philosophically important because it requires us to consider the prospect of a posthuman condition succeeding the human one. What is the “humanity” to which the posthuman is “post”? Does the possibility of a posthumanity presuppose that there is a ‘human essence’, or is there some other way of conceiving the human-posthuman difference? I argue that the difference should be conceived as an emergent disconnection between individuals, not in terms of the presence or lack of essential properties.
David Roden

Skepticism

Frontmatter
Chapter 15. Interim Report from the Panel Chairs: AAAI Presidential Panel on Long-Term AI Futures
Abstract
The AAAI 2008-09 Presidential Panel on Long-Term AI Futures was organized by the president of the Association for the Advancement of Artificial Intelligence (AAAI) to bring together a group of thoughtful computer scientists to explore and reflect about societal aspects of advances in machine intelligence (computational procedures for automated sensing, learning, reasoning, and decision making). The panelists are leading AI researchers, well known for their significant contributions to AI theory and practice. Although the final report of the panel has not yet been issued, we provide background and high-level summarization of several findings in this interim report.
Eric Horvitz, Bart Selman
Chapter 16. Why the Singularity Cannot Happen
Abstract
The Singularity as described in Ray Kurzweil’s book cannot happen, for a number of reasons. One reason is that all natural growth processes that follow exponential patterns eventually reveal themselves to be following S-curves, thus excluding runaway situations. The remaining growth potential from Kurzweil’s “knee”, which can be approximated as the moment when an S-curve pattern begins deviating from the corresponding exponential, is only about one order of magnitude greater than the growth already achieved. A second reason is that there is already evidence of a slowdown in some important trends. The growth pattern of the U.S. GDP is no longer exponential; had Kurzweil been more rigorous in his fitting procedures, he would have recognized this. Moore’s law and the Microsoft Windows operating systems are both approaching end-of-life limits. The Internet rush has also ended—for the time being—as the number of users has stopped growing: in the western world because of saturation, and in the underdeveloped countries because infrastructures, education, and the standard of living there are not yet up to speed. A third reason is that society is capable of auto-regulating runaway trends, as was the case with deadly car accidents, the AIDS threat, and rampant overpopulation. This control goes beyond government decisions and conscious intervention. Environmentalists who fought nuclear energy in the 1980s may have been reacting only to nuclear energy’s excessive rate of growth, not to nuclear energy per se, which is making a comeback now. What may happen instead of a Singularity is that the rate of change soon begins slowing down. The exponential pattern of change witnessed up to now dictates more milestone events during the year 2025 than were witnessed throughout the entire 20th century! But such events are already overdue today. If, on the other hand, the pattern of change has indeed been following an S-curve, then the rate of change is about to enter a declining trajectory; the baby boom generation will have witnessed more change during their lives than anyone else before or after them.
Theodore Modis
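
The S-curve argument can be made concrete with the logistic function; placing the “knee” at one tenth of the ceiling below is an illustrative assumption on our part, chosen to match the one-order-of-magnitude figure in the abstract:

\[ N(t) = \frac{K}{1 + e^{-r(t - t_0)}} \approx K\, e^{\,r(t - t_0)} \quad \text{for } t \ll t_0, \]

since the term \(e^{-r(t - t_0)}\) then dominates the denominator. Early on, a logistic curve is therefore indistinguishable from pure exponential growth; it bends away only near the midpoint \(t_0\). If the knee is taken to be the point where \(N \approx K/10\), the remaining growth to the ceiling \(K\) is a factor of ten, one order of magnitude.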
Chapter 17. The Slowdown Hypothesis
Abstract
The so-called singularity hypothesis embraces the most ambitious goal of Artificial Intelligence: the possibility of constructing human-like intelligent systems. The intriguing addition is that once this goal is achieved, it would not be too difficult to surpass human intelligence. While we believe that none of the philosophical objections against strong AI are really compelling, we are skeptical about a singularity scenario associated with the achievement of human-like systems. Several reflections on the recent history of neuroscience and AI, in fact, seem to suggest that the trend is going in the opposite direction.
Alessio Plebe, Pietro Perconti
Chapter 18. Software Immortals: Science or Faith?
Abstract
According to the early futurist Julian Huxley, human life as we know it is ‘a wretched makeshift, rooted in ignorance’. With modern science, however, ‘the present limitations and miserable frustrations of our existence could be in large measure surmounted’ and human life could be ‘transcended by a state of existence based on the illumination of knowledge’ (1957b, p. 16).
Diane Proudfoot
Chapter 19. Belief in The Singularity is Fideistic
Abstract
We deploy a framework for classifying the bases for belief in a category of events marked by being at once weighty, unseen, and temporally removed (wutr, for short). While the primary source of wutr events in Occidental philosophy is the list of miracle claims of credal Christianity, we apply the framework to belief in The Singularity, surely—whether or not religious in nature—a wutr event. We conclude from this application, and the failure of fit with both rationalist and empiricist argument schemas in support of this belief, not that The Singularity won’t come to pass, but rather that regardless of what the future holds, believers in the “machine intelligence explosion” are simply fideists. While it’s true that fideists have been taken seriously in the realm of religion (e.g. Kierkegaard in the case of some quarters of Christendom), even in that domain the likes of orthodox believers like Descartes, Pascal, Leibniz, and Paley find fideism to be little more than wishful, irrational thinking—and at any rate it’s rather doubtful that fideists should be taken seriously in the realm of science and engineering.
Selmer Bringsjord, Alexander Bringsjord, Paul Bello
Chapter 20. A Singular Universe of Many Singularities: Cultural Evolution in a Cosmic Context
Abstract
Nature’s myriad complex systems—whether physical, biological or cultural—are mere islands of organization within increasingly disordered seas of surrounding chaos. Energy is a principal driver of the rising complexity of all such systems within the expanding, ever-changing Universe; indeed energy is as central to life, society, and machines as it is to stars and galaxies. Energy flow concentration—in contrast to information content and negentropy production—is a useful quantitative metric to gauge relative degree of complexity among widely diverse systems in the one and only Universe known. In particular, energy rate densities for human brains, society collectively, and our technical devices have now become numerically comparable as the most complex systems on Earth. Accelerating change is supported by a wealth of data, yet the approaching technological singularity of 21st century cultural evolution is neither more nor less significant than many other earlier singularities as physical and biological evolution proceeded along a unidirectional and unpredictable path of more inclusive cosmic evolution, from big bang to humankind. Evolution, broadly construed, has become a powerful unifying concept in all of science, providing a comprehensive worldview for the new millennium—yet there is no reason to claim that the next evolutionary leap forward beyond sentient beings and their amazing gadgets will be any more important than the past emergence of increasingly intricate complex systems. Nor is new science (beyond non-equilibrium thermodynamics) necessarily needed to describe cosmic evolution’s interdisciplinary milestones at a deep and empirical level. Humans, our tools, and their impending messy interaction possibly mask a Platonic simplicity that undergirds the emergence and growth of complexity among the many varied systems in the material Universe, including galaxies, stars, planets, life, society, and machines.
Eric J. Chaisson
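
Chaisson’s metric, energy rate density, is simply energy flow per unit mass. As a back-of-envelope illustration (the roughly 20 W power draw and 1.3 kg mass of the human brain are assumed round figures, not values taken from the chapter):

\[ \Phi_m = \frac{\text{energy flow}}{\text{mass}}, \qquad \Phi_{\text{brain}} \approx \frac{20\ \text{W}}{1.3\ \text{kg}} = \frac{2 \times 10^{8}\ \text{erg s}^{-1}}{1.3 \times 10^{3}\ \text{g}} \approx 1.5 \times 10^{5}\ \text{erg s}^{-1}\,\text{g}^{-1}. \]

For comparison, the Sun radiates about \(2\ \text{erg s}^{-1}\text{g}^{-1}\) (a luminosity of \(3.8 \times 10^{33}\ \text{erg s}^{-1}\) spread over \(2 \times 10^{33}\ \text{g}\)), which is the sense in which brains rank among the most complex systems by this measure.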
Metadata
Title
Singularity Hypotheses
Editors
Amnon H. Eden
James H. Moor
Johnny H. Søraker
Eric Steinhart
Copyright Year
2012
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-32560-1
Print ISBN
978-3-642-32559-5
DOI
https://doi.org/10.1007/978-3-642-32560-1
