Contents
Daniel C. Dennett
Philip Tetlock (with an introduction by Daniel Kahneman)
Gerd Gigerenzer (with an introduction by John Brockman)
Daniel Gilbert (with an introduction by John Brockman)
Vilayanur Ramachandran
Timothy D. Wilson (with an introduction by Daniel Gilbert)
Sarah-Jayne Blakemore (with an introduction by Simon Baron-Cohen)
Bruce Hood
Simon Baron-Cohen (with an introduction by John Brockman)
Gary Klein (with an introduction by Daniel Kahneman)
Simone Schnall
Nassim Nicholas Taleb (with an introduction by John Brockman)
Alva Noë
Daniel L. Everett
Jonathan Haidt, Joshua Greene, Sam Harris, Roy Baumeister, Paul Bloom, David Pizarro, Joshua Knobe (with an introduction by John Brockman)
Daniel Kahneman
Philosopher; Austin B. Fletcher Professor of Philosophy and Codirector of the Center for Cognitive Studies, Tufts University; author, Darwin's Dangerous Idea, Breaking the Spell, and Intuition Pumps.
I'm trying to undo a mistake I made some years ago, and rethink the idea that the way to understand the mind is to take it apart into simpler minds and then take those apart into still simpler minds until you get down to minds that can be replaced by a machine. This is called homuncular functionalism, because you break the whole person down into two or three or four or seven subpersons who are basically agents. They're homunculi, and this looks like a regress, but it's only a finite regress, because you take each of those in turn and you break it down into a group of stupider, more specialized homunculi, and keep going until you arrive at parts that you can replace with a machine, and that's a great way of thinking about cognitive science. It's what good old-fashioned AI tried to do and is still trying to do.
The idea is basically right, but when I first conceived of it, I made a big mistake. I was at that point enamored of the McCulloch-Pitts logical neuron. McCulloch and Pitts had put together the idea of a very simple artificial neuron, a computational neuron, which had multiple inputs and a single branching output and a threshold for firing, and the inputs were either inhibitory or excitatory. They proved that in principle a neural net made of these logical neurons could compute anything you wanted to compute. So this was very exciting. It meant that basically you could treat the brain as a computer and treat the neuron as a sort of basic switching element in the computer, and that was certainly an inspiring oversimplification. Everybody knew it was an oversimplification, but people didn't realize how much, and more recently it's become clear to me that it's a dramatic oversimplification, because each neuron, far from being a simple logical switch, is a little agent with an agenda, and they are much more autonomous and much more interesting than any switch.
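The logical neuron described here can be captured in a few lines of code. This is a sketch of one common threshold formalization, not anything from the original talk; the function name and weight convention are my own illustration. Each binary input is weighted +1 (excitatory) or -1 (inhibitory), and the unit fires when the weighted sum reaches the threshold:

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts logical neuron: binary inputs, each weighted
    +1 (excitatory) or -1 (inhibitory); outputs 1 (fires) when the
    weighted sum of inputs reaches the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# An AND gate: two excitatory inputs, threshold 2.
assert mp_neuron([1, 1], [1, 1], 2) == 1
assert mp_neuron([1, 0], [1, 1], 2) == 0

# A NOT gate: one inhibitory input, threshold 0.
assert mp_neuron([1], [-1], 0) == 0
assert mp_neuron([0], [-1], 0) == 1
```

Since AND and NOT gates (with branching outputs) suffice to build any Boolean function, a net of such units can in principle compute any function a digital computer can, which is the universality result the passage refers to.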
The question is, what happens to your ideas about computational architecture when you think of individual neurons not as dutiful slaves or as simple machines but as agents that have to be kept in line and properly rewarded and that can form coalitions and cabals and organizations and alliances? This vision of the brain as a sort of social arena of politically warring forces seems like sort of an amusing fantasy at first, but is now becoming something that I take more and more seriously, and it's fed by a lot of different currents.
Evolutionary biologist David Haig has some lovely papers on intrapersonal conflicts where he's talking about how, even at the level of the genetics, even at the level of the conflict between the genes you get from your mother and the genes you get from your father (the so-called madumnal and padumnal genes), those are in opponent relations, and if they get out of whack, serious imbalances can happen that show up as particular psychological anomalies.
We're beginning to come to grips with the idea that your brain is not this well-organized hierarchical control system where everything is in order, a very dramatic vision of bureaucracy. In fact, it's much more like anarchy with some elements of democracy. Sometimes you can achieve stability and mutual aid and a sort of calm united front, and then everything is hunky-dory, but then it's always possible for things to get out of whack and for one alliance or another to gain control, and then you get obsessions and delusions and so forth.
You begin to think about the normal well-tempered mind, in effect, the well-organized mind, as an achievement, not as the base state, something that is only achieved when all is going well. But still, in the general realm of humanity, most of us are pretty well put together most of the time. This gives a very different vision of what the architecture is like, and I'm just trying to get my head around how to think about that.
What we're seeing right now in cognitive science is something that I've been anticipating for years, and now it's happening, and it's happening so fast I can't keep up with it. We're now drowning in data, and we're also happily drowning in bright young people who have grown up with this stuff and for whom it's just second nature to think in these quite abstract computational terms, and it simply wasn't possible even for experts to get their heads around all these different topics 30 years ago. Now a suitably motivated kid can arrive at college already primed to go on these issues. It's very exciting, and they're just going to run away from us, and it's going to be fun to watch.
The vision of the brain as a computer, which I still champion, is changing so fast. The brain's a computer, but it's so different from any computer that you're used to. It's not like your desktop or your laptop at all, and it's not like your iPhone, except in some ways. It's a much more interesting phenomenon. What Turing gave us for the first time (and without Turing you just couldn't do any of this) is a way of thinking in a disciplined way about phenomena that have, as I like to say, trillions of moving parts. Until the late 20th century, nobody knew how to take seriously a machine with a trillion moving parts. It's just mind-boggling.
You couldn't do it, but computer science gives us the ideas, the concepts of levels: virtual machines implemented in virtual machines implemented in virtual machines and so forth. We have these nice ideas of recursive reorganization of which your iPhone is just one example, and a very structured and very rigid one, at that.
We're getting away from the rigidity of that model, which was worth trying for all it was worth. You go for the low-hanging fruit first. First, you try to make minds as simple as possible. You make them as much like digital computers, as much like von Neumann machines, as possible. It doesn't work. Now we know pretty well why it doesn't work. So you're going to have a parallel architecture because, after all, the brain is obviously massively parallel.
It's going to be a connectionist network. Although we know many of the talents of connectionist networks, how do you knit them together into one big fabric that can do all the things minds do? Who's in charge? What kind of control system? Control is the real key, and you begin to realize that control in brains is very different from control in computers. Control in your commercial computer is very much a carefully designed top-down thing.
You really don't have to worry about one part of your laptop going rogue and trying out something on its own that the rest of the system doesn't want to do. No, they're all slaves. If they're agents, they're slaves. They are prisoners. They have very clear job descriptions. They get fed every day. They don't have to worry about where the energy's coming from, and they're not ambitious. They just do what they're asked to do, and they do it brilliantly, with only the slightest tint of comprehension. You get all the power of computers out of these mindless little robotic slave prisoners, but that's not the way your brain is organized.