Introduction
By Ben Goertzel and Ted Goertzel
Ted Goertzel is professor emeritus of sociology at Rutgers University. He recently published "
Ben Goertzel is Chief Scientist of the financial prediction firm Aidyia Holdings; Chairman of AI software company Novamente LLC and bioinformatics company Biomind LLC; Chairman of the Artificial General Intelligence Society and the OpenCog Foundation; Vice Chairman of futurist nonprofit Humanity+; Scientific Advisor of biopharma firm Genescient Corp.; Advisor to the Singularity University and Singularity Institute; Research Professor in the Fujian Key Lab for Brain-Like Intelligent Systems at Xiamen University, China; and General Chair of the Artificial General Intelligence conference series. His research work encompasses artificial general intelligence, natural language processing, cognitive science, data mining, machine learning, computational finance, bioinformatics, virtual worlds, gaming, and other areas. He has published a dozen scientific books, 100+ technical papers, and numerous journalistic articles. He can be reached at .
The coming of a Technological Singularity is one of the most exciting and controversial predictions to emerge in recent decades. As posited by influential writers and thinkers such as Ray Kurzweil (2006), Vernor Vinge (1993), and Peter Diamandis (Diamandis and Kotler 2012), this will be a point in time when revolutionary advances in science and technology happen too rapidly for the human mind to comprehend. After the Singularity, these pundits predict, robots or other machines will have greater general intelligence than humans. These post-human intelligences would be able to 3D print any form of ordinary matter at low cost. They could cure diseases and perhaps even abolish aging. On the other hand, there are also darker possibilities: they could decide to wipe out human beings altogether, or keep just a few of us in a zoo for their amusement.
This sounds like science fiction, and one reasonable approach to exploring these issues is to write science fiction books or stories, or make science fiction movies. SF has proved remarkably prescient at foreseeing the advent of new technologies. However, fiction is generally shaped by the need to hold the reader's or viewer's attention and to manipulate their emotions. It is not necessarily the best lens through which to view the actual future. One finds, for instance, a preponderance of stories in which a human hero struggles valiantly against insidious or seductive machines, which appear suddenly on the scene as the result of an evil genius, or time travel, or some other story mechanism. The sudden emergence of radical new technologies moves a story along excitingly. In reality, though, we think it more likely that a Singularity, if it happens, will unfold step by step over a period of at least years and probably decades. A time-span of decades is effectively instantaneous on the time-scale of human history, let alone the scale of terrestrial geology or the evolution of the universe. And yet, it feels like a long time to human beings as they grow up and go through their lifespans. In terms of the advance of AI, robotics, nanotech, synthetic biology and other radical technologies, an unfolding over decades is long enough for a broad spectrum of humans and human institutions to study what's going on and take action.
In the past, humanity waited until a new technology was introduced before taking steps to adjust to it. No one really prepared for the effects of the steam engine, the telephone, the automobile or the personal computer. But if the Singularity is anything like the most enthusiastic futurists predict, that won't be possible. A Singularity, by its nature, would be too fast-moving and too complex for humans to control. Just as a cockroach has little chance of predicting the next step in consumer electronics, we humans, in our current form, would have little chance of predicting any of the particulars of post-Singularity intelligence and society.
But even if this is true we still have time, perhaps a generation or two, to use our human intelligence to shape the future. That is the challenge we posed to the authors in this book. We invited a group of leading scientists and scholars to share their thoughts about the years before the postulated Singularity. Even if the Singularity does not materialize quite as expected, which is altogether possible, few knowledgeable people doubt that the next few decades will involve very rapid advances in artificial intelligence (AI), biotechnology, nanotechnology, and other futuristic domains. The authors in this book approach this critical period from the perspectives of philosophy, biology, computer science, economics, politics, psychology, sociology and other fields of human inquiry. They do not always agree, and we have included dialogues with many of the authors to build on and learn from the differences.
Much of our reason for putting together this book is our awareness that, as Abraham Lincoln reputedly said, "The best way to predict the future is to create it." While certain historical trends may be hard to avoid, it's still true that we, together, are the ones creating the future. The Singularity we experience will be the Singularity we shape, and will be guided by the Singularity we envision. Exploring different visions of how the Singularity may unfold is one way of collectively guiding the process of creating our future.
Questions Addressed by the Chapters Herein
Some of the authors presented here articulate broad general visions of life on the brink of the Singularity. Others focus on specific topics within their areas of expertise and special interest. Topics addressed by many of the authors include:
- Computing and communication technologies with capabilities far beyond today's
- Technology enabling hybridization of humans and machines in various ways
- The drastic reduction of material scarcity, such that obtaining the goods and services we work to afford today will be a trivial matter for nearly everyone
- Medical advances that dramatically extend lifespan and reduce the prevalence of disease
- AI technology with problem-solving capability and general intelligence at least rivaling that of the human mind
The relatively rapid advent of these and other allied phenomena is what characterizes the core concept of a Technological Singularity, regardless of which future visionary you're listening to.
But beyond these commonalities, there are significant differences in the way the various chapter authors envision the Singularity and the path leading to it. Among other possibilities, some key questions on which the authors differ are:
- Will Artificial General Intelligence (AGI) fairly rapidly achieve massively superhuman intelligence, or will it remain somewhere in the vicinity of the human level?
- Will some sort of global AGI Nanny emerge, providing control or regulation of intelligence on the planet, or will governance remain in the hands of (some form of) humans?