Over decades, Philip Tetlock painstakingly collected data on simple long-term forecasts in political economy. He showed that hedgehogs, who rely on one main analytical tool, are less accurate than foxes, who use a wide assortment of analytical tools. Since John Cochrane and Bruce Bueno De Mesquita are both prototypical hedgehogs, I was curious to see their responses to Tetlock’s data.
Cochrane argues that no one can do well at the unconditional forecasts that Tetlock studied, and that forecasting is less important than people presume. But he says that hedgehogs shine at important conditional forecasts, such as GDP change given a big stimulus, or the added tax revenue given higher tax levels.
De Mesquita says that while “government, business, and the media” prefer “experts” who know many specific facts, such as “the history, culture, mores, and language of a place,” “scientists” who know “statistical methods, mathematics, and systematic research design” are more accurate. He also notes that his hedgehoggy use of game theory is liked by the CIA and by peer review.
Of course even before Tetlock’s study we knew that both peer review and funding patrons bestowed plenty of approval on hedgehogs, who usually claim that they add important forecasting value. Tetlock’s new contribution is to use hard data to question such claims. Yes, Tetlock’s data is hardly universal, which leaves room for counter-claims that his study missed important ways in which hedgehogs are more accurate. But I find it disappointing, and also a bit suspicious, that neither Cochrane nor De Mesquita expresses interest in helping to design better studies, much less in participating in such studies.
De Mesquita is proud that his methods seem to achieve substantial accuracy, but he has not, to my knowledge, participated in open competitions that give a wide range of foxes a chance to show they could achieve comparable accuracy. Yes, academic journals are “open competitions,” but the competition there is not on the basis of forecast accuracy.
Regarding Cochrane’s conditional accuracy claims, it is certainly possible to collect and score accuracy on conditional forecasts. One need only look at the conditions that turned out to be true, and score the forecasts made for those conditions. The main problem is that forecasts whose conditions never obtain cannot be scored, so this approach requires collecting more forecasts to get the same statistical power in distinguishing forecast accuracy. Since Tetlock’s study was a monumental effort, a similar study with conditional forecasts would be even more monumental. Would Cochrane volunteer for such a study?
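To make the scoring idea concrete, here is a minimal sketch of how one might score conditional forecasts: keep only the forecasts whose stated conditions actually obtained, and apply an ordinary Brier score to those. The data structure, names, and numbers below are hypothetical, purely for illustration.

```python
# Minimal sketch: score only those conditional forecasts whose condition held,
# using a Brier score (mean squared error of the forecast probabilities).
# All names and example data are hypothetical.
from dataclasses import dataclass

@dataclass
class ConditionalForecast:
    condition_held: bool   # did the stated condition turn out to be true?
    probability: float     # probability the forecaster assigned to the outcome
    outcome: bool          # did the outcome actually occur?

def brier_score(forecasts):
    """Average squared error over the scorable (condition-held) forecasts."""
    scorable = [f for f in forecasts if f.condition_held]
    if not scorable:
        return None  # no conditions obtained, so nothing can be scored
    return sum((f.probability - float(f.outcome)) ** 2 for f in scorable) / len(scorable)

# Example: three conditional forecasts; only the first two conditions obtained,
# so only those two are scored.
forecasts = [
    ConditionalForecast(condition_held=True,  probability=0.8, outcome=True),
    ConditionalForecast(condition_held=True,  probability=0.3, outcome=True),
    ConditionalForecast(condition_held=False, probability=0.9, outcome=False),
]
print(brier_score(forecasts))  # 0.265, from the first two forecasts only
```

Note how the third forecast drops out entirely, which is exactly why a conditional-forecast study needs more raw forecasts to reach the same statistical power.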
I expect that, like most academics, both Cochrane and De Mesquita would demand high prices to publicly participate in an accuracy-scored forecasting competition in which foxes could also compete. Remember that Tetlock had to promise his experts anonymity to get them to participate in his study. The sad fact is that the many research patrons eager to fund hedgehoggy research by folks like Cochrane and De Mesquita show little interest in funding forecasting competitions at the scale required to get public participation by such prestigious folks. So hedgehogs like Cochrane and De Mesquita can continue to claim superior accuracy, with little fear of being proven wrong anytime soon.
All of which brings us back to our puzzling lack of interest in forecast accuracy, which was the subject of my response to the Gardner and Tetlock essay. I still suspect that most who pay to affiliate with prestigious academics care little about their accuracy, though they do like to claim to care.