To Marvin Minsky
My thanks to Peter Hubbard of HarperCollins and my agent, Max Brockman, for their continued encouragement. A special thanks, once again, to Sara Lippincott for her thoughtful attention to the manuscript.
In recent years, the 1980s-era philosophical discussions about artificial intelligence (AI), whether computers can really think, be conscious, and so on, have led to new conversations about how we should deal with the forms of artificial intelligence that many argue have already been implemented. These AIs, if they achieve superintelligence (per Nick Bostrom's 2014 book of that name), could pose existential risks, leading to what Martin Rees has termed "our final hour." Stephen Hawking recently made international headlines when he told the BBC that "the development of full artificial intelligence could spell the end of the human race."
THE EDGE QUESTION 2015
WHAT DO YOU THINK ABOUT MACHINES THAT THINK?
But wait! Shouldn't we also ask what machines that think might think about? Will they want, will they expect, civil rights? Will they have consciousness? What kind of government would an AI choose for us? What kind of society would they want to structure for themselves? Or is their society our society? Will we and the AIs include each other within our respective circles of empathy?
Numerous Edgies have been at the forefront of the science behind the various flavors of AI, either in their research or their writings. AI was front and center in conversations between Pamela McCorduck (Machines Who Think) and Isaac Asimov (Machines That Think) at our initial meetings in 1980. And such conversations have continued unabated, as is evident in the recent Edge feature "The Myth of AI," a conversation with Virtual Reality pioneer Jaron Lanier, whose explication of the fallacies involved in conceiving of computers as people, and the fears such thinking evokes, elicited rich and provocative commentaries.
Is AI becoming increasingly real? Are we now in a new era of intelligent machines? It's time to grow up as we consider this issue. This year's contributors to the Edge Question (there are close to 200 of them!) are a grown-up bunch and have eschewed mention of all that science fiction and all those movies: Star Maker, Forbidden Planet, Colossus: The Forbin Project, Blade Runner, 2001, Her, The Matrix, The Borg. And eighty years after Alan Turing introduced his Universal Machine, it's time to honor Turing and other AI pioneers by giving them a well-deserved rest. We know the history. (See, for instance, George Dyson's 2004 Edge feature, "Turing's Cathedral.") What's going on NOW?
So, once again, with appropriate rigor, the Edge Question, 2015: What do you think about machines that think?
JOHN BROCKMAN
Publisher & Editor, Edge
MURRAY SHANAHAN
Professor of cognitive robotics, Imperial College London; author, Embodiment and the Inner Life
Just suppose we could endow a machine with human-level intelligence, that is to say, with the ability to match a typical human being in every (or almost every) sphere of intellectual endeavor, and perhaps to surpass every human being in a few. Would such a machine necessarily be conscious? This is an important question, because an affirmative answer would bring us up short. How would we treat such a thing if we built it? Would it be capable of suffering or joy? Would it deserve the same rights as a human being? Should we bring machine consciousness into the world at all?
The question of whether a human-level AI would necessarily be conscious is also a difficult one. One source of difficulty is the fact that multiple attributes are associated with consciousness in humans and other animals. All animals exhibit a sense of purpose. All (awake) animals are, to a greater or lesser extent, aware of the world they inhabit and the objects it contains. All animals, to some degree or other, manifest cognitive integration, which is to say they can bring all their mental resources (perceptions, memories, and skills) to bear on the ongoing situation in pursuit of their goals. In this respect, every animal displays a kind of unity, a kind of selfhood. Some animals, including humans, are also aware of themselves, of their bodies and the flow of their thoughts. Finally, most, if not all, animals are capable of suffering, and some are capable of empathy with the suffering of others.
In (healthy) humans, all these attributes come together as a package. But in an AI they can potentially be separated. So our question must be refined. Which, if any, of the attributes we associate with consciousness in humans is a necessary accompaniment to human-level intelligence? Well, each of the attributes listed (and the list is surely not exhaustive) deserves a lengthy treatment of its own. So let me pick just two, namely awareness of the world and the capacity for suffering. Awareness of the world, I would argue, is indeed a necessary attribute of human-level intelligence.
Surely nothing would count as having human-level intelligence unless it had language, and the chief use of human language is to talk about the world. In this sense, intelligence is bound up with what philosophers call intentionality. Moreover, language is a social phenomenon, and a primary use of language within a group of people is to talk about the things they can all perceive (such as this tool or that piece of wood), or have perceived (yesterday's piece of wood), or might perceive (tomorrow's piece of wood, maybe). In short, language is grounded in awareness of the world. In an embodied creature or a robot, such an awareness would be evident from its interactions with the environment (avoiding obstacles, picking things up, and so on). But we might widen the conception to include a distributed, disembodied artificial intelligence equipped with suitable sensors.
To convincingly count as a facet of consciousness, this sort of world-awareness would perhaps have to go hand in hand with a manifest sense of purpose and a degree of cognitive integration. So perhaps this trio of attributes will come as a package even in an AI. But let's put that question aside for a moment and get back to the capacity for suffering and joy. Unlike world-awareness, there's no obvious reason to suppose that human-level intelligence must have this attribute, even though it's intimately associated with consciousness in humans. We can imagine a machine carrying out, coldly and without feeling, the full range of tasks requiring intellect in humans. Such a machine would lack the attribute of consciousness that counts most when it comes to according rights. As Jeremy Bentham noted, when considering how to treat nonhuman animals, the question is not whether they can reason or talk but whether they can suffer.
There's no suggestion here that a mere machine could never be capable of suffering or joy, that there's something special about biology in this respect. The point, rather, is that the capacity for suffering and joy can be dissociated from the other psychological attributes bundled together in human consciousness. But let's examine this apparent dissociation more closely. I already mooted the idea that worldly awareness might go hand in hand with a manifest sense of purpose. An animal's awareness of the world, of what the world affords for good or ill (in J. J. Gibson's terms), subserves its needs. An animal shows an awareness of a predator by moving away from it, and an awareness of potential prey by moving toward it. Against the backdrop of a set of goals and needs, an animal's behavior makes sense. And against such a backdrop, an animal can be thwarted, its goals unattained and its needs unfulfilled. Surely this is the basis for one aspect of suffering.