2012 | OriginalPaper | Chapter

2. Intelligence Explosion: Evidence and Import

Authors: Luke Muehlhauser, Anna Salamon

Published in: Singularity Hypotheses

Publisher: Springer Berlin Heidelberg

Abstract

In this chapter we review the evidence for and against three claims: that (1) there is a substantial chance we will create human-level AI before 2100, that (2) if human-level AI is created, there is a good chance vastly superhuman AI will follow via an “intelligence explosion,” and that (3) an uncontrolled intelligence explosion could destroy everything we value, but a controlled intelligence explosion would benefit humanity enormously if we can achieve it. We conclude with recommendations for increasing the odds of a controlled intelligence explosion relative to an uncontrolled intelligence explosion.

Footnotes
1
We will define “human-level AI” more precisely later in the chapter.
 
2
Chalmers (2010) suggested that AI will lead to intelligence explosion if an AI is produced by an “extendible method,” where an extendible method is “a method that can easily be improved, yielding more intelligent systems.” McDermott (2012a, b) replies that if P≠NP (see Goldreich 2010 for an explanation) then there is no extendible method. But McDermott’s notion of an extendible method is not the one essential to the possibility of intelligence explosion. McDermott’s formalization of an “extendible method” requires that the program generated by each step of improvement under the method be able to solve in polynomial time all problems in a particular class—the class of solvable problems of a given (polynomially step-dependent) size in an NP-complete class of problems. But this is not required for an intelligence explosion in Chalmers’ sense (and in our sense). What intelligence explosion (in our sense) would require is merely that a program self-improve to vastly outperform humans, and we argue for the plausibility of this in the section “From AI to Machine Superintelligence” of our chapter. Thus while we agree with McDermott that it is probably true that P≠NP, we do not agree that this weighs against the plausibility of intelligence explosion. (Note that due to a miscommunication between McDermott and the editors, a faulty draft of McDermott (2012a) was published in Journal of Consciousness Studies. We recommend reading the corrected version at http://cs-www.cs.yale.edu/homes/dvm/papers/chalmers-singularity-response.pdf.)
 
3
This definition is a useful starting point, but it could be improved. Future work could produce a definition of intelligence as optimization power over a canonical distribution of environments, with a penalty for resource use—e.g. the “speed prior” described by Schmidhuber (2002). Also see Goertzel (2006, p. 48, 2010), Hibbard (2011).
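As a concrete but purely illustrative sketch of what such a definition might look like in code, the Python fragment below scores an agent across a simplicity-weighted set of environments and subtracts a resource-use penalty; the weighting scheme, the penalty form, and all names here are assumptions for illustration, not a published formalization.

    # Sketch: intelligence as simplicity-weighted optimization power,
    # penalized for resource use. All names and the penalty form are
    # illustrative assumptions.
    from typing import Callable, List, Tuple

    # Each environment is (complexity K, function that runs the agent and
    # returns (score in [0, 1], compute steps consumed)).
    Environment = Tuple[float, Callable[[Callable], Tuple[float, float]]]

    def measured_intelligence(agent: Callable,
                              environments: List[Environment],
                              resource_penalty: float = 1e-6) -> float:
        total = 0.0
        for complexity, run in environments:
            weight = 2.0 ** (-complexity)   # simplicity prior, 2^-K
            score, steps = run(agent)       # performance and its cost
            total += weight * (score - resource_penalty * steps)
        return total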
 
4
To take one of many examples, Simon (1965, p. 96) predicted that “machines will be capable, within twenty years, of doing any work a man can do.” Also see Crevier (1993).
 
5
Armstrong (1985), Woudenberg (1991), Rowe and Wright (2001). But see Parente and Anderson-Parente (2011).
 
6
Bostrom (2003), Bainbridge (2006), Legg (2008), Baum et al. (2011), Sandberg and Bostrom (2011), Nielsen (2011).
 
7
A software bottleneck may delay AI but create greater risk. If there is a software bottleneck on AI, then when AI is created there may be a “computing overhang”: large amounts of inexpensive computing power which could be used to run thousands of AIs or give a few AIs vast computational resources. This may not be the case if early AIs require quantum computing hardware, which is less likely to be plentiful and inexpensive than classical computing hardware at any given time.
 
8
We can make a simple formal model of this evidence by assuming (with much simplification) that every year a coin is tossed to determine whether we will get AI that year, and that we are initially unsure of the weighting on that coin. We have observed more than 50 years of “no AI” since the first time serious scientists believed AI might be around the corner. This “56 years of no AI” observation would be highly unlikely under models where the coin comes up “AI” on 90 % of years (the probability of our observations would be 10^-56), or even models where it comes up “AI” in 10 % of all years (probability 0.3 %), whereas it’s the expected case if the coin comes up “AI” in, say, 1 % of all years, or for that matter in 0.0001 % of all years. Thus, in this toy model, our “no AI for 56 years” observation should update us strongly against coin weightings in which AI would be likely in the next minute, or even year, while leaving the relative probabilities of “AI expected in 200 years” and “AI expected in 2 million years” more or less untouched. (These updated probabilities are robust to choice of the time interval between coin flips; it matters little whether the coin is tossed once per decade, or once per millisecond, or whether one takes a limit as the time interval goes to zero). Of course, one gets a different result if a different “starting point” is chosen, e.g. Alan Turing’s seminal paper on machine intelligence (Turing 1950) or the inaugural conference on artificial general intelligence (Wang et al. 2008). For more on this approach and Laplace’s rule of succession, see Jaynes (2003), Chap. 18. We suggest this approach only as a way of generating a prior probability distribution over AI timelines, from which one can then update upon encountering additional evidence.
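The arithmetic of this toy model can be checked directly. Below is a minimal Python sketch; the four coin weightings and the uniform prior over them are illustrative choices, not claims.

    # Toy model check: likelihood of 56 consecutive years of "no AI"
    # under several annual-probability hypotheses, plus the posterior
    # under a uniform prior over just these four (illustrative) values.
    coin_weights = [0.9, 0.1, 0.01, 0.000001]   # candidate annual P(AI)
    years = 56

    likelihoods = [(1 - p) ** years for p in coin_weights]
    total = sum(likelihoods)
    for p, like in zip(coin_weights, likelihoods):
        print(f"P(AI per year) = {p:>8.6f}: "
              f"P(56 years of no AI) = {like:.3e}, "
              f"posterior = {like / total:.3f}")
    # Matches the text: ~1e-56 for the 90% coin, ~0.3% for the 10% coin.

    # Laplace's rule of succession: with 0 successes in 56 trials and a
    # uniform prior on the weighting, the posterior mean is 1/(56 + 2).
    print(f"Laplace estimate of annual P(AI): {1 / (years + 2):.4f}")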
 
9
Relatedly, Good (1970) tried to predict the first creation of AI by surveying past conceptual breakthroughs in AI and extrapolating into the future.
 
10
The technical measure predicted by Moore’s law is the density of components on an integrated circuit, but this is closely tied to the price-performance of computing power.
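As a worked illustration of what sustained exponential growth in price-performance implies, consider the sketch below; the two-year doubling period is an assumed round number for the example, not a figure defended here.

    # Illustrative doubling arithmetic; the 2-year doubling period is an
    # assumption for the example, not a measured value from the chapter.
    doubling_period_years = 2.0
    for horizon in (10, 20, 40):
        factor = 2 ** (horizon / doubling_period_years)
        print(f"{horizon} years -> {factor:,.0f}x price-performance")
    # 10 years -> 32x, 20 years -> 1,024x, 40 years -> ~1,048,576x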
 
11
For important qualifications, see Nagy et al. (2010), Mack (2011).
 
12
Quantum computing may also emerge during this period. Early worries that quantum computing may not be feasible have been overcome, but it is hard to predict whether quantum computing will contribute significantly to the development of machine intelligence because progress in quantum computing depends heavily on relatively unpredictable insights in quantum algorithms and hardware (Rieffel and Polak 2011).
 
13
On the other hand, some worry (Pan et al. 2005) that the rates of scientific fraud and publication bias may currently be higher in China and India than in the developed world.
 
14
Also, a process called “iterated embryo selection” (Uncertain Future 2012) could be used to produce an entire generation of scientists with the cognitive capabilities of Albert Einstein or John von Neumann, thus accelerating scientific progress and giving a competitive advantage to nations which choose to make use of this possibility.
 
15
In our two quotes from Hutter (2012b) we have replaced Hutter’s AMS-style citations with Chicago-style citations.
 
16
The creation of AI probably is not, however, merely a matter of finding computationally tractable AIXI approximations that can solve increasingly complicated problems in increasingly complicated environments. There remain many open problems in the theory of universal artificial intelligence (Hutter 2009). For problems related to allowing some AIXI-like models to self-modify, see Orseau and Ring (2011), Ring and Orseau (2011), Orseau (2011), and Hibbard (Forthcoming). Dewey (2011) explains why reinforcement learning agents like AIXI may pose a threat to humanity.
 
17
Note that given the definition of intelligence we are using, greater computational resources would not give a machine more “intelligence” but instead more “optimization power”.
 
18
For example, see Omohundro (1987).
 
19
If the first self-improving AIs at least partially require quantum computing, the system states of these AIs might not be directly copyable due to the no-cloning theorem (Wootters and Zurek 1982).
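The obstacle is a one-line consequence of linearity; here is a sketch of the standard no-cloning argument, written in LaTeX for illustration.

    % Sketch of the no-cloning argument. Suppose a unitary U copied
    % arbitrary states: U|psi>|0> = |psi>|psi>. Writing |+> for the
    % superposition (|0> + |1>)/sqrt(2), linearity applied to the
    % |0> and |1> cases forces
    \[
      U\,|{+}\rangle|0\rangle
        = \tfrac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr)
        \;\neq\;
      |{+}\rangle|{+}\rangle
        = \tfrac{1}{2}\bigl(|00\rangle + |01\rangle + |10\rangle + |11\rangle\bigr),
    \]
    % so no single unitary clones all states, and the full quantum state
    % of such an AI could not simply be read off and copied.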
 
20
Something similar is already done with technology-enabled business processes. When the pharmacy chain CVS improves its prescription-ordering system, it can copy these improvements to more than 4,000 of its stores, for immediate productivity gains (McAfee and Brynjolfsson 2008).
 
21
Many suspect that the slowness of cross-brain connections has been a major factor limiting the usefulness of large brains (Fox 2011).
 
22
Bostrom (2012) lists a few special cases in which an AI may wish to modify the content of its final goals.
 
23
When the AI can perform 10 % of the AI design tasks and do them at superhuman speed, the remaining 90 % of AI design tasks act as bottlenecks. However, if improvements allow the AI to perform 99 % of AI design tasks rather than 98 %, this change produces a much larger impact than when improvements allowed the AI to perform 51 % of AI design tasks rather than 50 % (Hanson, forthcoming). And when the AI can perform 100 % of AI design tasks rather than 99 % of them, this removes altogether the bottleneck of tasks done at slow human speeds.
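This is essentially Amdahl's-law arithmetic. A short sketch, under the idealizing assumption that automated tasks take negligible time:

    # Amdahl-style arithmetic for the design-task bottleneck: if the
    # automated fraction runs at effectively infinite speed (idealized),
    # overall speedup is 1 / (1 - fraction_automated).
    for fraction in (0.10, 0.50, 0.51, 0.98, 0.99):
        speedup = 1.0 / (1.0 - fraction)
        print(f"{fraction:.0%} automated -> {speedup:6.2f}x overall")
    # Going from 50% to 51% barely helps (2.00x -> 2.04x), while going
    # from 98% to 99% doubles the overall speedup (50x -> 100x).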
 
24
This may be less true for early-generation WBEs, but Omohundro (2008) argues that AIs will converge upon being optimizing agents, which exhibit a strict division between goals and cognitive ability.
 
25
Hanson (2012) reframes the problem, saying that “we should expect that a simple continuation of historical trends will eventually end up [producing] an ‘intelligence explosion’ scenario. So there is little need to consider [Chalmers’] more specific arguments for such a scenario. And the inter-generational conflicts that concern Chalmers in this scenario are generic conflicts that arise in a wide range of past, present, and future scenarios. Yes, these are conflicts worth pondering, but Chalmers offers no reasons why they are interestingly different in a ‘singularity’ context.” We briefly offer just one reason why the “inter-generational conflicts” arising from a transition of power from humans to superintelligent machines are interestingly different from previous inter-generational conflicts: as Bostrom (2002) notes, the singularity may cause the extinction not just of people groups but of the entire human species. For a further reply to Hanson, see Chalmers (Forthcoming).
 
26
A utility function assigns numerical utilities to outcomes such that outcomes with higher utilities are always preferred to outcomes with lower utilities (Mehta 1998).
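As a toy illustration (values invented for the example), a utility function and the preference ordering it induces can be written in a few lines:

    # Toy utility function: outcomes mapped to numbers; preference just
    # compares utilities. The outcomes and values are made up.
    utility = {"flourishing": 100.0, "status quo": 0.0, "extinction": -1e9}

    def preferred(a: str, b: str) -> str:
        return a if utility[a] > utility[b] else b

    print(preferred("flourishing", "status quo"))  # -> flourishing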
 
27
It may also be an option to constrain the first self-improving AIs just long enough to develop a Friendly AI before they cause much damage.
 
28
Our thanks to Nick Bostrom, Steve Rayhawk, David Chalmers, Steve Omohundro, Marcus Hutter, Brian Rabkin, William Naaktgeboren, Michael Anissimov, Carl Shulman, Eliezer Yudkowsky, Louie Helm, Jesse Liptrap, Nisan Stiennon, Will Newsome, Kaj Sotala, Julia Galef, and anonymous reviewers for their helpful comments.
 
Literature
Armstrong, J. S. (1985). Long-range forecasting: From crystal ball to computer (2nd ed.). New York: Wiley.
Armstrong, S., Sandberg, A., & Bostrom, N. (Forthcoming). Thinking inside the box: Using and controlling an Oracle AI. Minds and Machines.
Ashby, F. G., & Helie, S. (2011). A tutorial on computational cognitive neuroscience: Modeling the neurodynamics of cognition. Journal of Mathematical Psychology, 55(4), 273–289. doi:10.1016/j.jmp.2011.04.003.
Bainbridge, W. S., & Roco, M. C. (Eds.). (2006). Managing nano-bio-info-cogno innovations: Converging technologies in society. Dordrecht: Springer.
Bellman, R. E. (1957). Dynamic programming. Princeton: Princeton University Press.
Berger, J. O. (1993). Statistical decision theory and Bayesian analysis (2nd ed.). Springer Series in Statistics. New York: Springer.
Bertsekas, D. P. (2007). Dynamic programming and optimal control (Vol. 2). Nashua: Athena Scientific.
Bostrom, N. (2003). Ethical issues in advanced artificial intelligence. In I. Smit & G. E. Lasker (Eds.), Cognitive, emotive and ethical aspects of decision making in humans and in artificial intelligence (Vol. 2). Windsor: International Institute of Advanced Studies in Systems Research/Cybernetics.
Bostrom, N. (2006). What is a singleton? Linguistic and Philosophical Investigations, 5(2), 48–54.
Bostrom, N. (2007). Technological revolutions: Ethics and policy in the dark. In M. Nigel, S. de Cameron, & M. E. Mitchell (Eds.), Nanoscale: Issues and perspectives for the nano century (pp. 129–152). Hoboken: Wiley. doi:10.1002/9780470165874.ch10.
Bostrom, N. (Forthcoming a). Superintelligence: A strategic analysis of the coming machine intelligence revolution. Manuscript in preparation.
Bostrom, N., & Ćirković, M. M. (Eds.). (2008). Global catastrophic risks. New York: Oxford University Press.
Brynjolfsson, E., & McAfee, A. (2011). Race against the machine: How the digital revolution is accelerating innovation, driving productivity, and irreversibly transforming employment and the economy. Lexington: Digital Frontier Press. Kindle edition.
Caplan, B. (2008). The totalitarian threat. In Bostrom & Ćirković 2008, 504–519.
Cartwright, E. (2011). Behavioral economics. Routledge Advanced Texts in Economics and Finance. New York: Routledge.
Chalmers, D. J. (1996). The conscious mind: In search of a fundamental theory. Philosophy of Mind Series. New York: Oxford University Press.
Chalmers, D. J. (Forthcoming). The singularity: A reply. Journal of Consciousness Studies, 19.
Crevier, D. (1993). AI: The tumultuous history of the search for artificial intelligence. New York: Basic Books.
Dennett, D. C. (1996). Kinds of minds: Toward an understanding of consciousness. Science Masters. New York: Basic Books.
Dewey, D. (2011). Learning what to value. In Schmidhuber, Thórisson, & Looks 2011, 309–314.
Dreyfus, H. L. (1972). What computers can’t do: A critique of artificial reason. New York: Harper & Row.
Eden, A., Søraker, J., Moor, J. H., & Steinhart, E. (Eds.). (2012). The singularity hypothesis: A scientific and philosophical assessment. Berlin: Springer.
Feldman, J. A., & Ballard, D. H. (1982). Connectionist models and their properties. Cognitive Science, 6(3), 205–254. doi:10.1207/s15516709cog0603_1.
Floreano, D., & Mattiussi, C. (2008). Bio-inspired artificial intelligence: Theories, methods, and technologies. Intelligent Robotics and Autonomous Agents. Cambridge: MIT Press.
Fox, D. (2011). The limits of intelligence. Scientific American, July, 36–43.
Fregni, F., Boggio, P. S., Nitsche, M., Bermpohl, F., Antal, A., Feredoes, E., et al. (2005). Anodal transcranial direct current stimulation of prefrontal cortex enhances working memory. Experimental Brain Research, 166(1), 23–30. doi:10.1007/s00221-005-2334-6.
Friedman, M. (1953). The methodology of positive economics. In Essays in positive economics (pp. 3–43). Chicago: Chicago University Press.
Friedman, J. W. (Ed.). (1994). Problems of coordination in economic activity (Vol. 35). Recent Economic Thought. Boston: Kluwer Academic Publishers.
Gödel, K. (1931). Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatshefte für Mathematik, 38(1), 173–198. doi:10.1007/BF01700692.
Goertzel, B. (2006). The hidden pattern: A patternist philosophy of mind. Boca Raton: BrownWalker Press.
Goertzel, B. (2010). Toward a formal characterization of real-world general intelligence. In E. Baum, M. Hutter, & E. Kitzelmann (Eds.), Artificial general intelligence: Proceedings of the third conference on artificial general intelligence, AGI 2010, Lugano, Switzerland, March 5–8, 2010 (Vol. 10, pp. 19–24). Advances in Intelligent Systems Research. Amsterdam: Atlantis Press. doi:10.2991/agi.2010.17.
Goldreich, O. (2010). P, NP, and NP-completeness: The basics of computational complexity. New York: Cambridge University Press.
Good, I. J. (1965). Speculations concerning the first ultraintelligent machine. In F. L. Alt & M. Rubinoff (Eds.), Advances in computers (Vol. 6, pp. 31–88). New York: Academic Press. doi:10.1016/S0065-2458(08)60418-0.
Good, I. J. (1982). Ethical machines. In J. E. Hayes, D. Michie, & Y.-H. Pao (Eds.), Machine intelligence (Vol. 10, pp. 555–560). Intelligent Systems: Practice and Perspective. Chichester: Ellis Horwood.
Hanson, R. (Forthcoming). Economic growth given machine intelligence. Journal of Artificial Intelligence Research.
Hibbard, B. (2011). Measuring agent intelligence via hierarchies of environments. In Schmidhuber, Thórisson, & Looks 2011, 303–308.
Hibbard, B. (Forthcoming). Model-based utility functions. Journal of Artificial General Intelligence.
Hutter, M. (2005). Universal artificial intelligence: Sequential decisions based on algorithmic probability. Texts in Theoretical Computer Science. Berlin: Springer. doi:10.1007/b138233.
Hutter, M. (2012b). One decade of universal artificial intelligence. In P. Wang & B. Goertzel (Eds.), Theoretical foundations of artificial general intelligence (Vol. 4). Atlantis Thinking Machines. Paris: Atlantis Press.
Jaynes, E. T., & Bretthorst, G. L. (Eds.). (2003). Probability theory: The logic of science. New York: Cambridge University Press. doi:10.2277/0521592712.
Kandel, E. R., Schwartz, J. H., & Jessell, T. M. (Eds.). (2000). Principles of neural science. New York: McGraw-Hill.
Krichmar, J. L., & Wagatsuma, H. (Eds.). (2011). Neuromorphic and brain-based robots. New York: Cambridge University Press.
Kurzweil, R. (2005). The singularity is near: When humans transcend biology. New York: Viking.
Legg, S., & Hutter, M. (2007). A collection of definitions of intelligence. In B. Goertzel & P. Wang (Eds.), Advances in artificial general intelligence: Concepts, architectures and algorithms—proceedings of the AGI workshop 2006 (Vol. 157). Frontiers in Artificial Intelligence and Applications. Amsterdam: IOS Press.
Lichtenstein, S., Fischoff, B., & Phillips, L. D. (1982). Calibration of probabilities: The state of the art to 1980. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgement under uncertainty: Heuristics and biases (pp. 306–334). New York: Cambridge University Press.
Marcus, G. (2008). Kluge: The haphazard evolution of the human mind. Boston: Houghton Mifflin.
McCorduck, P. (2004). Machines who think: A personal inquiry into the history and prospects of artificial intelligence (2nd ed.). Natick: A. K. Peters.
Mehta, G. B. (1998). Preference and utility. In S. Barbera, P. J. Hammond, & C. Seidl (Eds.), Handbook of utility theory (Vol. I, pp. 1–47). Boston: Kluwer Academic Publishers.
Modis, T. (2012). There will be no singularity. In Eden, Søraker, Moor, & Steinhart 2012.
Moravec, H. (1999). Rise of the robots. Scientific American, Dec., 124–135.
Muehlhauser, L., & Helm, L. (2012). The singularity and machine ethics. In Eden, Søraker, Moor, & Steinhart 2012.
Murphy, A. H., & Winkler, R. L. (1984). Probability forecasting in meteorology. Journal of the American Statistical Association, 79(387), 489–500.
Nagy, B., Farmer, J. D., Trancik, J. E., & Bui, Q. M. (2010). Testing laws of technological progress. Santa Fe Institute, NM, Sept. 2. http://tuvalu.santafe.edu/~bn/workingpapers/NagyFarmerTrancikBui.pdf.
Nilsson, N. J. (2009). The quest for artificial intelligence: A history of ideas and achievements. New York: Cambridge University Press.
Omohundro, S. M. (2008). The basic AI drives. In Wang, Goertzel, & Franklin 2008, 483–492.
Omohundro, S. M. (2012). Rational artificial intelligence for the greater good. In Eden, Søraker, Moor, & Steinhart 2012.
Orseau, L. (2011). Universal knowledge-seeking agents. In J. Kivinen, C. Szepesvári, E. Ukkonen, & T. Zeugmann (Eds.), Algorithmic learning theory: 22nd international conference, ALT 2011, Espoo, Finland, October 5–7, 2011, proceedings (Vol. 6925). Lecture Notes in Computer Science. Berlin: Springer. doi:10.1007/978-3-642-24412-4_28.
Orseau, L., & Ring, M. (2011). Self-modification and mortality in artificial agents. In Schmidhuber, Thórisson, & Looks 2011, 1–10.
Pan, Z., Trikalinos, T. A., Kavvoura, F. K., Lau, J., & Ioannidis, J. P. A. (2005). Local literature bias in genetic epidemiology: An empirical evaluation of the Chinese literature. PLoS Medicine, 2(12), e334. doi:10.1371/journal.pmed.0020334.
Pennachin, C., & Goertzel, B. (2007). Contemporary approaches to artificial general intelligence. In Goertzel & Pennachin 2007, 1–30.
Penrose, R. (1994). Shadows of the mind: A search for the missing science of consciousness. New York: Oxford University Press.
Plebe, A., & Perconti, P. (2012). The slowdown hypothesis. In Eden, Søraker, Moor, & Steinhart 2012.
Posner, R. A. (2004). Catastrophe: Risk and response. New York: Oxford University Press.
Proudfoot, D., & Copeland, B. J. (2012). Artificial intelligence. In E. Margolis, R. Samuels, & S. P. Stich (Eds.), The Oxford handbook of philosophy of cognitive science. New York: Oxford University Press.
Richards, M. A., & Shaw, G. A. (2004). Chips, architectures and algorithms: Reflections on the exponential growth of digital signal processing capability. Unpublished manuscript, Jan. 28. http://users.ece.gatech.edu/~mrichard/Richards&Shaw_Algorithms01204.pdf (accessed Mar. 20, 2012).
Rieffel, E., & Polak, W. (2011). Quantum computing: A gentle introduction. Scientific and Engineering Computation. Cambridge: MIT Press.
Ring, M., & Orseau, L. (2011). Delusion, survival, and intelligent agents. In Schmidhuber, Thórisson, & Looks 2011, 11–20.
Rowe, G., & Wright, G. (2001). Expert opinions in forecasting: The role of the Delphi technique. In J. S. Armstrong (Ed.), Principles of forecasting: A handbook for researchers and practitioners (Vol. 30). International Series in Operations Research & Management Science. Boston: Kluwer Academic Publishers.
Russell, S. J., & Norvig, P. (2009). Artificial intelligence: A modern approach (3rd ed.). Upper Saddle River: Prentice-Hall.
Sandberg, A. (2011). Cognition enhancement: Upgrading the brain. In J. Savulescu, R. ter Meulen, & G. Kahane (Eds.), Enhancing human capacities (pp. 71–91). Malden: Wiley-Blackwell.
Schierwagen, A. (2011). Reverse engineering for biologically inspired cognitive architectures: A critical analysis. In C. Hernández, R. Sanz, J. Gómez-Ramirez, L. S. Smith, A. Hussain, A. Chella, & I. Aleksander (Eds.), From brains to systems: Brain-inspired cognitive systems 2010 (Vol. 718, pp. 111–121). Advances in Experimental Medicine and Biology. New York: Springer. doi:10.1007/978-1-4614-0164-3_10.
Schmidhuber, J. (2002). The speed prior: A new simplicity measure yielding near-optimal computable predictions. In J. Kivinen & R. H. Sloan (Eds.), Computational learning theory: 5th annual conference on computational learning theory, COLT 2002, Sydney, Australia, July 8–10, 2002, proceedings (Vol. 2375, pp. 123–127). Lecture Notes in Computer Science. Berlin: Springer. doi:10.1007/3-540-45435-7_15.
Schmidhuber, J. (2007). Gödel machines: Fully self-referential optimal universal self-improvers. In Goertzel & Pennachin 2007, 199–226.
Schmidhuber, J., Thórisson, K. R., & Looks, M. (Eds.). (2011). Artificial general intelligence: 4th international conference, AGI 2011, Mountain View, CA, USA, August 3–6, 2011, proceedings (Vol. 6830). Lecture Notes in Computer Science. Berlin: Springer. doi:10.1007/978-3-642-22887-2.
Schoenemann, P. T. (1997). An MRI study of the relationship between human neuroanatomy and behavioral ability. PhD diss., University of California, Berkeley. http://mypage.iu.edu/~toms/papers/dissertation/Dissertation_title.htm.
Schwartz, J. T. (1987). Limits of artificial intelligence. In S. C. Shapiro & D. Eckroth (Eds.), Encyclopedia of artificial intelligence (Vol. 1, pp. 488–503). New York: Wiley.
Shulman, C., & Bostrom, N. (2012). How hard is artificial intelligence? Evolutionary arguments and selection effects. Journal of Consciousness Studies, 19.
Shulman, C., & Sandberg, A. (2010). Implications of a software-limited singularity. Paper presented at the 8th European Conference on Computing and Philosophy (ECAP), Munich, Germany, Oct. 4–6.
Simon, H. A. (1965). The shape of automation for men and management. New York: Harper & Row.
Solomonoff, R. J. (1985). The time scale of artificial intelligence: Reflections on social effects. Human Systems Management, 5, 149–153.
Sotala, K. (2012). Advantages of artificial intelligences, uploads, and digital minds. International Journal of Machine Consciousness, 4.
Stanovich, K. E. (2010). Rationality and the reflective mind. New York: Oxford University Press.
Tetlock, P. E. (2005). Expert political judgment: How good is it? How can we know? Princeton: Princeton University Press.
Trappenberg, T. P. (2009). Fundamentals of computational neuroscience (2nd ed.). New York: Oxford University Press.
Turing, A. M. (1951). Intelligent machinery, a heretical theory. A lecture given to the ‘51 Society’ at Manchester.
Van der Velde, F. (2010). Where artificial intelligence and neuroscience meet: The search for grounded architectures of cognition. Advances in Artificial Intelligence, no. 5. doi:10.1155/2010/918062.
Van Gelder, T., & Port, R. F. (1995). It’s about time: An overview of the dynamical approach to cognition. In R. F. Port & T. van Gelder (Eds.), Mind as motion: Explorations in the dynamics of cognition. Bradford Books. Cambridge: MIT Press.
Von Neumann, J., & Burks, A. W. (Eds.). (1966). Theory of self-replicating automata. Urbana: University of Illinois Press.
Wang, P., Goertzel, B., & Franklin, S. (Eds.). (2008). Artificial general intelligence 2008: Proceedings of the first AGI conference (Vol. 171). Frontiers in Artificial Intelligence and Applications. Amsterdam: IOS Press.
Williams, L. V. (Ed.). (2011). Prediction markets: Theory and applications (Vol. 66). Routledge International Studies in Money and Banking. New York: Routledge.
Yates, J. F., Lee, J.-W., Sieck, W. R., Choi, I., & Price, P. C. (2002). Probability judgment across cultures. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 271–291). New York: Cambridge University Press.
Yudkowsky, E. (2008a). Artificial intelligence as a positive and negative factor in global risk. In Bostrom & Ćirković 2008, 308–345.
Yudkowsky, E. (2011). Complex value systems in friendly AI. In Schmidhuber, Thórisson, & Looks 2011, 388–393.
Metadata
Title
Intelligence Explosion: Evidence and Import
Authors
Luke Muehlhauser
Anna Salamon
Copyright Year
2012
Publisher
Springer Berlin Heidelberg
DOI
https://doi.org/10.1007/978-3-642-32560-1_2
