Facing the Intelligence Explosion
Luke Muehlhauser
Luke Muehlhauser is the Executive Director of the Machine Intelligence Research Institute and the author of many articles on AI safety and the cognitive science of rationality.
Written by Luke Muehlhauser.
Published in 2013 by the
Machine Intelligence Research Institute,
Berkeley, CA 94705
United States of America
intelligence.org
Released under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported license.
CC BY-NC-SA 3.0
ISBN-10: 1939311012
ISBN-13: 978-1-939311-01-6
(EPUB)
The Machine Intelligence Research Institute gratefully acknowledges the generous support of all those involved in the publication of this book.
All images are copyright their respective owners. Avatar image is © 2009 Twentieth Century Fox. Special thanks to Ray Kurzweil for the use of his cartoon from The Age of Spiritual Machines. Cover created by Eran Cantrell, Stanislaw Boboryk, and Alex Vermeer.
Chapter 1
My Own Story
Sometime this century, machines will surpass human levels of intelligence and ability. This event, the "intelligence explosion," will be the most important event in our history, and navigating it wisely will be the most important thing we can ever do.
Luminaries from Alan Turing to Stephen Hawking have warned us about this. Why do I think Hawking and company are right, and what can we do about it?
Facing the Intelligence Explosion is my attempt to answer these questions.
Personal Background
I'll begin with my personal background. It will help to know who I am and where I'm coming from. That information is some evidence about how you should respond to the other things I say.
When my religious beliefs finally succumbed to reality, I deconverted and started a blog to explain atheism and naturalism to others. Common Sense Atheism became one of the most popular atheism blogs on the internet. I enjoyed translating the papers of professional philosophers into understandable English, and I enjoyed speaking with experts in the field for my podcast Conversations from the Pale Blue Dot. But losing my religion didn't tell me what I should believe or what I should be doing with my life, so I used my blog to search for answers.
I've also been interested in rationality, at least since my deconversion, during which I discovered that I could easily be strongly confident of things that I had no evidence for, things that had been shown false, and even total nonsense. How could the human brain be so incredibly misled? Obviously, I wasn't Aristotle's "rational animal." Instead, I was Gazzaniga's "rationalizing animal." Critical thinking was a major focus of Common Sense Atheism, and I spent as much time criticizing poor thinking in atheists as I did criticizing poor thinking in theists.
Intelligence Explosion
My interest in rationality inevitably led me (in mid-2010, I think) to a treasure trove of articles on the mainstream cognitive science of rationality: the website Less Wrong. It was here that I first encountered the idea of intelligence explosion:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an intelligence explosion, and the intelligence of man would be left far behind.... Thus the first ultraintelligent machine is the last invention that man need ever make.
I tell the story of my first encounter with this famous paragraph here. In short:
Good's paragraph ran me over like a train. Not because it was absurd, but because it was clearly true. Intelligence explosion was a direct consequence of things I already believed; I just hadn't noticed! Humans do not automatically propagate their beliefs, so I hadn't noticed that my worldview already implied intelligence explosion.
I spent a week looking for counterarguments, to check whether I was missing something, and then concluded that an intelligence explosion was likely (so long as scientific progress continued). And though I hadn't yet read Eliezer Yudkowsky on the complexity of value, I had read David Hume and Joshua Greene. So I already understood that an arbitrary artificial intelligence would almost certainly not share our values.
My response to this discovery was immediate and transforming:
I put my other projects on hold and spent the next month reading almost everything Eliezer had written. I also found articles by Nick Bostrom and Steve Omohundro. I began writing articles for Less Wrong and learning from the community. I applied to [the Machine Intelligence Research Institute's] Visiting Fellows program and was accepted. I quit my job in L.A., moved to Berkeley, worked my ass off, got hired, and started collecting research related to rationality and intelligence explosion.
As my friend Will Newsome once said, "Luke seems to have two copies of the Take Ideas Seriously gene."
Fanaticism?
Of course, what some people laud as taking serious ideas seriously, others see as an innate tendency toward fanaticism. Here's a comment I could imagine someone making:
I'm not surprised. Luke grew up believing that he was on a cosmic mission to save humanity before the world ended with the arrival of a superpowerful being (the return of Christ). He lost his faith and, with it, his sense of epic purpose. His fear of nihilism made him susceptible to seduction by something that felt like moral realism, and his need for an epic purpose made him susceptible to seduction by existential risk reduction.
One response would be to say that this is just psychologizing, and that it doesn't address the state of the evidence for the claims I now defend concerning intelligence explosion. That's true, but again: plausible facts about my psychology do provide some Bayesian evidence about how you should respond to the words I'm writing in this book.
Another response would be to explain why I don't think this is quite what happened, though elements of it are certainly true. (For example, I don't recall feeling that the return of Christ was imminent or that I was on a cosmic mission to save every last soul, though as an evangelical Christian I was theologically committed to those positions. But it's certainly the case that I am drawn to epic things, like the rock band Muse and the movie Avatar.) In any case, I don't want to make this chapter even more about my personal psychology.
A third response would be to appeal to social proof. There seems to be a class of Common Sense Atheism readers who have read my writing closely enough to develop a strong respect for my serious commitment to intellectual self-honesty and to changing my mind when I'm wrong. So when I started writing about intelligence explosion issues, they thought, "Well, I used to think this intelligence explosion stuff was pretty kooky, but if Luke is taking it seriously then maybe there's more to it than I'm realizing," and they followed me to Less Wrong (where I was now posting regularly). I'll also mention that a significant causal factor in my being made Executive Director of the Machine Intelligence Research Institute after so little time with the organization was that the staff could see I was seriously devoted to rationality and debiasing: seriously devoted to saying "oops," changing my mind, and responding to argument, and seriously devoted to acting on decision theory as often as I could, rather than on habit and emotion as I would otherwise be inclined to do.