Expert Political Judgment
HOW GOOD IS IT? HOW CAN WE KNOW?
New Edition
Philip E. Tetlock
With a new preface by the author
PRINCETON UNIVERSITY PRESS
PRINCETON AND OXFORD
Copyright 2005 by Princeton University Press
Preface to the new edition copyright 2017 by Princeton University Press
Published by Princeton University Press, 41 William Street,
Princeton, New Jersey 08540
In the United Kingdom: Princeton University Press, 6 Oxford Street,
Woodstock, Oxfordshire OX20 1TR
All Rights Reserved
First published in 2005
New edition, with a new preface by the author, 2017
Cloth ISBN 978-0-691-17828-8
Paperback ISBN 978-0-691-17597-3
THE LIBRARY OF CONGRESS HAS CATALOGED THE FIRST EDITION OF THIS BOOK AS FOLLOWS
Tetlock, Philip.
Expert political judgment : how good is it? how can we know? / Philip E. Tetlock.
p. cm.
Includes bibliographical references and index.
1. Political psychology. 2. Ideology. I. Title.
JA74.5.T38 2005
320'.01'9 dc22 2004061694
British Library Cataloging-in-Publication Data is available
This book has been composed in Sabon
Printed on acid-free paper.
press.princeton.edu
Printed in the United States of America
10 9 8 7 6 5 4 3 2 1
To Jenny, so alive then;
So alive in our hearts now
Contents
CHAPTER 1 Quantifying the Unquantifiable
CHAPTER 2 The Ego-deflating Challenge of Radical Skepticism
CHAPTER 3 Knowing the Limits of One's Knowledge: Foxes Have Better Calibration and Discrimination Scores than Hedgehogs
CHAPTER 4 Honoring Reputational Bets: Foxes Are Better Bayesians than Hedgehogs
CHAPTER 5 Contemplating Counterfactuals: Foxes Are More Willing than Hedgehogs to Entertain Self-subversive Scenarios
CHAPTER 6 The Hedgehogs Strike Back
CHAPTER 7 Are We Open-minded Enough to Acknowledge the Limits of Open-mindedness?
CHAPTER 8 Exploring the Limits on Objectivity and Accountability
Technical Appendix Phillip Rescober and Philip E. Tetlock
Acknowledgments
THE JURY is out on just how much bad judgment I showed by undertaking the good-judgment project. The project dates back to the year I gained tenure and lost my generic excuse for postponing projects that I knew were worth doing, worthier than anything I was doing back then, but also knew would take a long time to come to fruition. As I write twenty years later, the data are still trickling in and the project now threatens to outlast not just my career but me. Some long-term forecasts that experts offered will not come due until 2026. But most of the data are tabulated, some surprising patterns have emerged, and I see no reason for delaying the write-up into my retirement.
Of course, a project of this duration requires the cooperation of many people over many years. My greatest collective debt is to the thoughtful professionals who patiently worked through the often tedious batteries of questions on what could have been, what is, and what might yet be. I told them at the outset that I did not intend to write a book that named names, or that, by exploiting hindsight bias, incited readers to glorify those who got it right or ridicule those who got it wrong. I promised strict confidentiality. The book that would emerge from this effort would be variable-centered, not person-centered. The focus would be on the links between how people think and what they get right or wrong, at various junctures, in a kaleidoscopically shifting world. I realize that the resulting cognitive portrait of expert political judgment is not altogether flattering, but I hope that research participants, even the hedgehogs among them, do not feel shabbily treated. I level no charges of judgmental flaws that do not also apply to me.
Another great debt is to the many colleagues who offered methodological and theoretical advice that saved me from making an even bigger fool of myself than I may have already done. Barbara Mellers, Paul Tetlock, and Phillip Rescober offered invaluable guidance on how to design measures of forecasting skill that were sensitive to the variety of ingenious objections that forecasters raised when either high probability events failed to materialize or low probability events did materialize. And colleagues from several disciplines (including psychology, political science, economics, history, and the hybrid field of intelligence analysis) made suggestions at various junctures in this long journey that, in my opinion at least, improved the final product. I cannot remember the source of every insightful observation at every stage of this project, but this list should include in roughly chronological order from 1984 to 2004: Peter Suedfeld, Aaron Wildavsky, Alexander George, George Breslauer, Danny Kahneman, Robyn Dawes, Terry Busch, Yuen Foong Khong, John Mercer, Lynn Eden, Amos Tversky, Ward Edwards, Ron Howard, Arie Kruglanski, James March, Joel Mokyr, Richard Herrmann, Geoffrey Parker, Gary Klein, Steve Rieber, Yaacov Vertzberger, Jim Goldgeier, Erika Henik, Rose McDermott, Peter Scoblic, Cass Sunstein, and Hal Arkes. In the final phases of this project, Paul Sniderman and Bob Jervis played a particularly critical role in helping to sharpen the central arguments of the book. Needless to say, though, none of the aforementioned bears responsibility for those errors of fact or interpretation that have persisted despite their perceptive advice.
I also owe many thanks to the many former and current students who have worked, in one capacity or another, on various components of this project. They include Charles McGuire, Kristen Hannum, Karl Dake, Jane Bernzweig, Richard Boettger, Dan Newman, Randall Peterson, Penny Visser, Orie Kristel, Beth Elson, Aaron Belkin, Megan Berkowitz, Sara Hohenbrink, Jeannette Porubin, Meaghan Quinn, Patrick Quinn, Brooke Curtiss, Rachel Szteiter, Elaine Willey, and Jason Mills. I also greatly appreciate the staff support of Deborah Houy and Carol Chapman.
Turning to institutional sponsors, this project would have been impossible but for generous financial and administrative support from the following: the John D. and Catherine T. MacArthur Foundation Program in International Security, the Open Philanthropy Project, the Institute of Personality and Social Research of the University of California, Berkeley, the Institute on Global Conflict and Cooperation at the University of California, the Center for Advanced Study in the Behavioral Sciences in Palo Alto, the Mershon Center of the Ohio State University, the Social Science Research Council, the National Science Foundation, the United States Institute of Peace, the Burtt Endowed Chair in the Psychology Department at the Ohio State University, and the Mitchell Endowed Chair at the Haas School of Business at the University of California, Berkeley.
Finally, I thank my family, especially Barb, Jenny, and Paul, for their infinite forbearance with my workaholic ways.
Preface
AUTOBIOGRAPHICAL exercises that explore why the researcher opted to go forward with one project rather than another have often struck me as self-dramatizing. What matters is the evidence, not why one collected it. Up to now, therefore, I have hewed to the just-the-facts conventions of my profession: state your puzzle, your methods, and your answers, and exit the stage.
I could follow that formula again. I have long been puzzled by why so many political disagreements (be they on national security or trade or welfare policy) are so intractable. I have long been annoyed by how rarely partisans admit error even in the face of massive evidence that things did not work out as they once confidently declared. And I have long wondered what we might learn if we approached these disputes in a more aggressively scientific spirit: if, instead of passively watching warring partisans score their own performance and duly pronounce themselves victorious, we presumed to take on the role of epistemological referees, soliciting testable predictions, scoring accuracy ourselves, and checking whether partisans change their minds when they get it wrong.