Forty Years of Forecasting Tournaments: We Can Learn a Lot When We Get Really Serious About Keeping Score

Date: May 25, 2023
Location: Psychology 1312
Philip Tetlock, University of Pennsylvania

Description

Philip Tetlock is the Leonore Annenberg University Professor in Democracy and Citizenship, Professor of Management at the Wharton School, and Professor of Psychology at the University of Pennsylvania. After receiving his PhD in Psychology from Yale University, Dr. Tetlock held faculty positions at UC Berkeley and The Ohio State University before joining Penn in 2011. His research program over the last four decades has explored five themes:

  • the concept of good judgment (with special emphasis on the usefulness of forecasting tournaments in assessing one key component of good judgment: accuracy);
  • the impact of accountability on judgment and choice;
  • the constraints that sacred values place on the boundaries of the thinkable;
  • the difficult-to-define distinction between political and politicized psychology; and
  • the usefulness of hypothetical-society experiments in disentangling factual and value judgments about the impact of competing policy proposals.

Dr. Tetlock has published over 200 articles in peer-reviewed journals and has edited or written ten books, including Superforecasting: The Art and Science of Prediction (2015); Expert Political Judgment: How Good Is It? How Can We Know? (1st ed. 2005; 2nd ed. 2017); Unmaking the West: What-if Scenarios that Rewrite World History (2006); and Counterfactual Thought Experiments in World Politics (1996). He is also co-principal investigator of The Good Judgment Project, a multi-year study of the feasibility of improving the accuracy of probability judgments of high-stakes, real-world events. Dr. Tetlock has received awards from scientific societies and foundations, including the American Psychological Association, American Political Science Association, American Association for the Advancement of Science, International Society of Political Psychology, American Academy of Arts and Sciences, the National Academy of Sciences, and the MacArthur, Sage, Grawemeyer, and Carnegie Foundations.

Abstract

Thousands of forecasters have made millions of judgments in forecasting tournaments over the last four decades. I will describe: (a) the main methods for assessing the accuracy of probabilistic judgments of political, economic, military, and technological trends; (b) replicable individual-difference correlates of accuracy (the “superforecasting” profile); (c) experimental interventions that reliably boost accuracy; and (d) aggregation algorithms that distill wisdom from crowds while tamping down madness. I will also discuss critiques of this research, including the arguments that tournaments prioritize rigor over relevance and that their myopic short-range focus incentivizes insensitivity to tail risks. I will close by describing new types of tournaments that address these critiques, with special emphasis on Hybrid Persuasion-Forecasting Tournaments, which pose short- and long-range questions of undeniable policy relevance and which incentivize not only the making of accurate probability judgments but also the crafting of explanations with two properties: the power to persuade other forecasters to change their opinions, and the power to move those opinions in the right direction. This research program does not deny that human judgment is often noisy and biased, but it highlights ways of reducing these weaknesses, building on our cognitive strengths, and facilitating the flow of more accurate probability estimates to policy-makers.
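
For readers unfamiliar with how accuracy is scored in such tournaments, the sketch below illustrates two of the ideas the abstract points to: the Brier score, the squared-error measure commonly used to grade probability forecasts against realized outcomes, and a simple crowd-aggregation rule that averages forecasts in log-odds space and then "extremizes" the result, in the spirit of aggregation algorithms reported by the Good Judgment Project. The function names, the extremizing parameter alpha, and the example forecasts are illustrative assumptions, not details from the talk.

    import numpy as np

    def brier_score(p, outcome):
        """Brier score for a binary forecast: squared error between the stated
        probability p and the realized outcome (0 or 1). Lower is better."""
        return (p - outcome) ** 2

    def extremized_log_odds_mean(probs, alpha=2.0):
        """Aggregate a crowd of probability forecasts: average in log-odds space,
        then 'extremize' by multiplying by alpha (pushing the consensus away
        from 0.5) before mapping back to a probability.
        alpha=2.0 is an illustrative value, not a recommendation from the talk."""
        probs = np.clip(np.asarray(probs, dtype=float), 1e-6, 1 - 1e-6)
        mean_log_odds = np.mean(np.log(probs / (1.0 - probs)))
        return 1.0 / (1.0 + np.exp(-alpha * mean_log_odds))

    # Illustration: five hypothetical forecasters on one question that later resolved "yes".
    crowd = [0.62, 0.70, 0.55, 0.80, 0.65]
    consensus = extremized_log_odds_mean(crowd)
    print(f"aggregate forecast: {consensus:.2f}")
    print(f"Brier score vs. outcome = 1: {brier_score(consensus, 1):.3f}")

The rationale for extremizing is that a plain average of many individually cautious forecasts tends to be pulled toward 0.5, so pushing the pooled estimate away from the midpoint often yields a better-calibrated consensus.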