Also by Guy Rundle
The Opportunist:
John Howard and the Triumph of Reaction
Your Dreaming (with Max Gillies):
Poets, Pontificators and Expatriates
Down to the Crossroads:
On the Trail of the 2008 US Election
The Shellacking:
Obama, the 2010 Midterms and the Rise of the Tea Party
Fifty People Who Stuffed Up Australia
Got Zip! (with First Dog on the Moon):
Australia's 2013 Election Live from the Campaign Trail
Introduction
The World Is About to Change. Again.
Two stories, both famous, to begin. The first is that of the telephone, and the widespread scepticism that greeted its introduction. In a world of telegrams, delivery boys and vacuum flasks – the world of the 1880s and 1890s – most people could not readily see how a message could be separated from its physical form. Talking was something done between people; messages were things, and therefore had to have physical form. What use, beyond novelty, could this new device have? But there were other, more visionary thinkers. A number of them were assembled for the 1893 World's Fair in Chicago. The World's Fair Committee were in no doubt that the telephone would be a big part of the future: a century hence, by 1993, they announced, there would be one telephone in every city.
Not everyone gets these things wrong, however. Some people understand when a technology is on the advance and changing the world. Like artificial intelligence: Marvin Minsky, one of the field's leading researchers, declared that those who believed AI would not make a breakthrough were foolish, and that the problem of artificial intelligence would be solved within a generation. Inspiring words – all the more so considering they were uttered in 1967.
Between these two poles – the total failure of the imagination to comprehend new technology's potential for transformation (both social and political), and the naive belief that the uptake of such new technology will be frictionless and constantly accelerating – this is where one will find the last half-century of the online and information revolution.
More stories and quotes: the head of Digital Equipment Corporation declaring, in 1977, that there was no market for home computers. At the other end, techno-guru Alvin Toffler's prophecies, in that 1970s classic Future Shock, that we would soon be living and working with robots. Through those decades we have teetered between hopes of total, rapid transformation of the way we live and the way we are, and a more pessimistic belief that nothing really changes and that any new technology is either frivolous or likely to be reabsorbed by the status quo. Such extremes of hope and despair were present at the beginning of the information revolution, and they have echoed through it since, making it a congenial environment for visionaries, practical men and women, charlatans and heroes alike.
Born before World War II, the IT revolution had its origins in mathematical philosophy: in Gödel's incompleteness theorem and Alan Turing's adaptation of it to a theory of computable problems. World War II and the Nazis' use of the Enigma machine gave early computers both the ideal test problem (to decode a cryptographic system beyond the ability of human decoders) and an unmatchable motivation (to save Britain from extinction in war). With massive funds, the Bletchley Park code breakers were able to create the first genuine digital computing machines: room-sized, vacuum-tube-powered beasts performing calculations that could today be done on a programmable calculator.
At the same time, in the sealed city of Los Alamos, New Mexico, where thousands of scientists were confined to create the atomic bomb, the European émigré John von Neumann (a one-time boy genius of the 1920s) supercharged the science of cybernetics – of information flow – with a host of new concepts drawn from his war work. Separately, Claude Shannon was developing his extraordinary insight, made in 1937, that electric circuits had the same form as diagrammed symbolic logic. Early proto-computers (analogue machines, because the world is, after all, continuous) could be replaced by digital ones, which chopped the world up into discrete units – thus making it possible to build what Charles Babbage and Ada Lovelace had envisioned in the nineteenth century with their difference engine, a purely mechanical prototype of the electronic computer.
Ten years later, at the labs of the telephone monopoly Bell (founded by the shyster who stole the hapless Meucci's invention), Shockley, Brattain and Bardeen created the first transistor, a switch based not on bulky and temperamental valves but on semiconductors: materials whose conductivity turned on or off depending on the amount of energy pulsing through them. Out of that discovery, Shockley founded his own company, and when his impossible temperament caused eight of his staff to jump ship and found another company – Fairchild Semiconductor, in Palo Alto, California – Silicon Valley was born. They weren't the first there, of course. Two Stanford grads, Hewlett and Packard, had set up an electronics company there in the late 1930s, but it had been slow going into the 1940s, tinkering with numerous products.
One thing the new entrepreneurs noticed was how damn fast the transistor and semiconductor development process was. The initial transistor had been some rough sheets of germanium held together with alligator clips. By the late 1950s, multiple transistors could fit on a single chip, making possible an exponential growth in computing power. Gordon Moore, one of the Fairchild eight, restated earlier observations that these chips doubled in capacity and halved in price every eighteen months to two years. This rough conjecture became Moore's Law, a self-fulfilling prophecy, and the industry began using it as a guide. It quickly became clear that there was the potential to develop computing power for decades to come.