
Philip E. Tetlock - Expert Political Judgment: How Good Is It? How Can We Know?

Here you can read Philip E. Tetlock's Expert Political Judgment: How Good Is It? How Can We Know? online for free, or download the full text as a PDF or EPUB. Year: 2006; publisher: Princeton University Press; genre: Politics. A description of the work, its preface, and reader reviews are also available.

Philip E. Tetlock Expert Political Judgment: How Good Is It? How Can We Know?
  • Book:
    Expert Political Judgment: How Good Is It? How Can We Know?
  • Author:
    Philip E. Tetlock
  • Publisher:
    Princeton University Press
  • Genre:
    Politics
  • Year:
    2006
  • Rating:
    4 / 5

Expert Political Judgment: How Good Is It? How Can We Know?: summary, description and annotation

We offer an annotation, description, summary, or preface (depending on what the author of "Expert Political Judgment: How Good Is It? How Can We Know?" wrote himself). If you haven't found the information you need about the book, write in the comments and we will try to find it.

The intelligence failures surrounding the invasion of Iraq dramatically illustrate the necessity of developing standards for evaluating expert opinion. This book fills that need. Here, Philip E. Tetlock explores what constitutes good judgment in predicting future events, and looks at why experts are often wrong in their forecasts.

Tetlock first discusses arguments about whether the world is too complex for people to find the tools to understand political phenomena, let alone predict the future. He evaluates predictions from experts in different fields, comparing them to predictions by well-informed laity or those based on simple extrapolation from current trends. He goes on to analyze which styles of thinking are more successful in forecasting. Classifying thinking styles using Isaiah Berlin's prototypes of the fox and the hedgehog, Tetlock contends that the fox--the thinker who knows many little things, draws from an eclectic array of traditions, and is better able to improvise in response to changing events--is more successful in predicting the future than the hedgehog, who knows one big thing, toils devotedly within one tradition, and imposes formulaic solutions on ill-defined problems. He notes a perversely inverse relationship between the best scientific indicators of good judgment and the qualities that the media most prizes in pundits--the single-minded determination required to prevail in ideological combat.

Clearly written and impeccably researched, the book fills a huge void in the literature on evaluating expert opinion. It will appeal across many academic disciplines as well as to corporations seeking to develop standards for judging expert decision-making.

Philip E. Tetlock: author's other books


Who wrote Expert Political Judgment: How Good Is It? How Can We Know? Find out the author's name and browse a list of all of the author's works.

Expert Political Judgment: How Good Is It? How Can We Know? — read the complete book online for free

Below is the text of the book, divided by pages. The system saves your place on the last page read, so you can conveniently read "Expert Political Judgment: How Good Is It? How Can We Know?" online for free without having to search for where you left off each time. Set a bookmark, and you can return to the page where you stopped reading at any time.


Expert Political Judgment

HOW GOOD IS IT? HOW CAN WE KNOW?

Philip E. Tetlock

PRINCETON UNIVERSITY PRESS

PRINCETON AND OXFORD

Copyright © 2005 by Princeton University Press

Published by Princeton University Press, 41 William Street, Princeton, New Jersey 08540

In the United Kingdom: Princeton University Press, 3 Market Place, Woodstock, Oxfordshire OX20 1SY

All Rights Reserved

Sixth printing, and first paperback printing, 2006
Paperback ISBN-13: 978-0-691-12871-9
Paperback ISBN-10: 0-691-12871-5

THE LIBRARY OF CONGRESS HAS CATALOGED THE CLOTH EDITION OF THIS BOOK AS FOLLOWS

Tetlock, Philip.

Expert political judgment : how good is it? how can we know? / Philip E. Tetlock.

p. cm.

Includes bibliographical references and index.

ISBN-13: 978-0-691-12302-8 (alk. paper)

ISBN-10: 0-691-12302-0 (alk. paper)

1. Political psychology. 2. Ideology. I. Title.

JA74.5.T38 2005

320'.01'9dc22 2004061694

British Library Cataloging-in-Publication Data is available

This book has been composed in Sabon

Printed on acid-free paper.

pup.princeton.edu

Printed in the United States of America

10 9 8 7 6

To Jenny, Paul, and Barb

Contents

CHAPTER 1
Quantifying the Unquantifiable

CHAPTER 2
The Ego-deflating Challenge of Radical Skepticism

CHAPTER 3
Knowing the Limits of One's Knowledge: Foxes Have Better Calibration and Discrimination Scores than Hedgehogs

CHAPTER 4
Honoring Reputational Bets: Foxes Are Better Bayesians than Hedgehogs

CHAPTER 5
Contemplating Counterfactuals: Foxes Are More Willing than Hedgehogs to Entertain Self-subversive Scenarios

CHAPTER 6
The Hedgehogs Strike Back

CHAPTER 7
Are We Open-minded Enough to Acknowledge the Limits of Open-mindedness?

CHAPTER 8
Exploring the Limits on Objectivity and Accountability

Technical Appendix
Phillip Rescober and Philip E. Tetlock

Acknowledgments

THE JURY is out on just how much bad judgment I showed by undertaking the good-judgment project. The project dates back to the year I gained tenure and lost my generic excuse for postponing projects that I knew were worth doing, worthier than anything I was doing back then, but also knew would take a long time to come to fruition. As I write twenty years later, the data are still trickling in and the project now threatens to outlast not just my career but me. Some long-term forecasts that experts offered will not come due until 2026. But most of the data are tabulated, some surprising patterns have emerged, and I see no reason for delaying the write-up into my retirement.

Of course, a project of this duration requires the cooperation of many people over many years. My greatest collective debt is to the thoughtful professionals who patiently worked through the often tedious batteries of questions on what could have been, what is, and what might yet be. I told them at the outset that I did not intend to write a book that named names, or that, by exploiting hindsight bias, incited readers to glorify those who got it right or ridicule those who got it wrong. I promised strict confidentiality. The book that would emerge from this effort would be variable-centered, not person-centered. The focus would be on the links between how people think and what they get right or wrong, at various junctures, in a kaleidoscopically shifting world. I realize that the resulting cognitive portrait of expert political judgment is not altogether flattering, but I hope that research participants, even the hedgehogs among them, do not feel shabbily treated. I level no charges of judgmental flaws that do not also apply to me.

Another great debt is to the many colleagues who offered methodological and theoretical advice that saved me from making an even bigger fool of myself than I may have already done. Barbara Mellers, Paul Tetlock, and Phillip Rescober offered invaluable guidance on how to design measures of forecasting skill that were sensitive to the variety of ingenious objections that forecasters raised when either high-probability events failed to materialize or low-probability events did materialize. And colleagues from several disciplines (including psychology, political science, economics, history, and the hybrid field of intelligence analysis) made suggestions at various junctures in this long journey that, in my opinion at least, improved the final product. I cannot remember the source of every insightful observation at every stage of this project, but this list should include, in roughly chronological order from 1984 to 2004: Peter Suedfeld, Aaron Wildavsky, Alexander George, George Breslauer, Danny Kahneman, Robyn Dawes, Terry Busch, Yuen Foong Khong, John Mercer, Lynn Eden, Amos Tversky, Ward Edwards, Ron Howard, Arie Kruglanski, James March, Joel Mokyr, Richard Herrmann, Geoffrey Parker, Gary Klein, Steve Rieber, Yaacov Vertzberger, Jim Goldgeier, Erika Henik, Rose McDermott, Cass Sunstein, and Hal Arkes. In the final phases of this project, Paul Sniderman and Bob Jervis played a particularly critical role in helping to sharpen the central arguments of the book. Needless to say, though, none of the aforementioned bears responsibility for those errors of fact or interpretation that have persisted despite their perceptive advice.

I also owe many thanks to the many former and current students who have worked, in one capacity or another, on various components of this project. They include Charles McGuire, Kristen Hannum, Karl Dake, Jane Bernzweig, Richard Boettger, Dan Newman, Randall Peterson, Penny Visser, Orie Kristel, Beth Elson, Aaron Belkin, Megan Berkowitz, Sara Hohenbrink, Jeannette Porubin, Meaghan Quinn, Patrick Quinn, Brooke Curtiss, Rachel Szteiter, Elaine Willey, and Jason Mills. I also greatly appreciate the staff support of Deborah Houy and Carol Chapman.

Turning to institutional sponsors, this project would have been impossible but for generous financial and administrative support from the following: the John D. and Catherine T. MacArthur Foundation Program in International Security, the Institute of Personality and Social Research of the University of California, Berkeley, the Institute on Global Conflict and Cooperation at the University of California, the Center for Advanced Study in the Behavioral Sciences in Palo Alto, the Mershon Center of the Ohio State University, the Social Science Research Council, the National Science Foundation, the United States Institute of Peace, the Burtt Endowed Chair in the Psychology Department at the Ohio State University, and the Mitchell Endowed Chair at the Haas School of Business at the University of California, Berkeley.

Finally, I thank my family, especially Barb, Jenny, and Paul, for their infinite forbearance with my workaholic ways.

Preface

AUTOBIOGRAPHICAL exercises that explore why the researcher opted to go forward with one project rather than another have often struck me as self-dramatizing. What matters is the evidence, not why one collected it. Up to now, therefore, I have hewed to the just-the-facts conventions of my profession: state your puzzle, your methods, and your answers, and exit the stage.

I could follow that formula again. I have long been puzzled by why so many political disagreements, be they on national security or trade or welfare policy, are so intractable. I have long been annoyed by how rarely partisans admit error even in the face of massive evidence that things did not work out as they once confidently declared. And I have long wondered what we might learn if we approached these disputes in a more aggressively scientific spirit: if, instead of passively watching warring partisans score their own performance and duly pronounce themselves victorious, we presumed to take on the role of epistemological referees, soliciting testable predictions, scoring accuracy ourselves, and checking whether partisans change their minds when they get it wrong.


Similar books «Expert Political Judgment: How Good Is It? How Can We Know?»

Browse books similar to Expert Political Judgment: How Good Is It? How Can We Know?. We have selected literature similar in title and theme, in the hope of giving readers more options for finding new, interesting works they have not yet read.


Reviews about «Expert Political Judgment: How Good Is It? How Can We Know?»

Discussion and reviews of Expert Political Judgment: How Good Is It? How Can We Know?, along with readers' own opinions. Leave a comment and share what you think about the work, its meaning, or its main characters. Be specific about what you liked and what you didn't, and why.