I have found the book Superforecasting: The Art and Science of Prediction by Philip Tetlock and Dan Gardner to be interesting, provocative and useful. I strongly recommend it.
Philip Tetlock is on the faculty of Wharton’s Management Department, and Dan Gardner is a journalist and author.
The basic story is that Philip Tetlock and his colleagues formed the Good Judgment Project (or “GJP”) and entered a prediction competition sponsored by the Intelligence Advanced Research Projects Activity, or IARPA, which is the intelligence community’s version of DARPA. GJP recruited volunteer forecasters, gave them some basic training, and put them into teams. The GJP teams were so successful that eventually the competing groups, including those from Michigan and MIT, were shut down or merged with Tetlock’s group. As he identified his most successful participants, Tetlock dubbed them “superforecasters”.
There is an ever-growing corpus of popular books on some aspect of quantitative reasoning/decision science – “pop quant”, if you will – and Gardner, who I assume took on the role of making the book accessible, includes references to Surowiecki’s Wisdom of Crowds, Gleick’s Chaos, Zero Dark Thirty, Daniel Kahneman, Michael Mauboussin, Taleb, Robert Rubin, Atul Gawande, and more. The references are never completely gratuitous and will be informative for people unfamiliar with this particular shelf of the bookstore.
Tetlock’s previous high-profile work was Expert Political Judgment, a 19-year project where 284 experts made 28,000 predictions “bearing on a diverse array of geopolitical and economic outcomes. The results were sobering. One widely reported finding was that forecasters were often only slightly more accurate than chance, and usually lost to simple extrapolation algorithms. Also, forecasters with the biggest news media profiles tended to lose to their lower profile colleagues, suggesting a rather perverse inverse relationship between fame and accuracy.”
Tetlock did the rounds promoting Superforecasting when it came out, and both Russ Roberts and Stephen Dubner did informative interviews with him:
A trade is a prediction, so the book’s focus is clearly relevant for speculators. Here are some of the ideas, observations and results from the book that I found most interesting:
– Brier score: The GJP uses Glenn Brier’s scoring function to assess the accuracy of forecasts. While the Brier score itself may be useful, what motivated me to improve was the book’s general discussion of the importance of making measurable forecasts and then tracking their accuracy (a sketch of the calculation follows this list).
– Frequent updating: Bill Rafter’s Cassandra Portfolio puts forward the hypothesis that the more specific one’s predictions are, the more frequently they should be updated. Superforecasting fully supports that idea. The superforecasters updated their forecasts regularly and with decimal precision, and Tetlock shows that the forecasters’ accuracy improved as a result.
– The best forecasting teams had a diversity of experience and opinion. Tetlock goes so far as to say that without diversity, forecasting teams find it difficult to improve their accuracy: “Diversity trumps ability”.
– Extremizing: One of the algorithms they used for large-group forecasting was to take the average prediction of the group and then move it some distance away from 50%, e.g., if the group’s prediction for an event’s likelihood was 30%, the algorithm might “extremize” the forecast to 15%. The rationale is that in large groups, individual forecasters did not know what the other forecasters knew; if they did, they would be more confident in their predictions, which would push the values closer to 0% or 100%. The algorithm was very successful in the IARPA forecasting competition (see the sketch after this list).
– One technique for improving accuracy was for the forecaster to make a prediction, then assume that the first prediction was wrong, and then make a second prediction. This falls into the general category of techniques a forecaster might use to dislodge himself from cognitive attachments. Another technique is to invert the question, sometimes simply by inserting “not”. The example Tetlock uses is changing “What is the likelihood that South Africa will allow the Dalai Lama to visit the country?” to “What is the likelihood that South Africa will *not* allow the Dalai Lama to visit the country?” Superforecasting argues that forcing oneself to take different points of view on a prediction will improve results.
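To make the scoring concrete, here is a minimal Python sketch of the Brier score in its original two-category form, which runs from 0.0 (perfect) to 2.0 (maximally wrong) – the scale the book uses:

```python
def brier_score(forecasts, outcomes):
    """Brier (1950) score for binary-event forecasts.

    forecasts: probabilities assigned to the event occurring (0.0 to 1.0)
    outcomes:  1 if the event occurred, 0 if it did not
    Lower is better: 0.0 is a perfect record, 2.0 is maximally wrong.
    """
    return sum(
        # squared error on the "event" side plus the "no event" side
        (f - o) ** 2 + ((1 - f) - (1 - o)) ** 2
        for f, o in zip(forecasts, outcomes)
    ) / len(forecasts)

# A forecaster who said 70% and the event happened, then 30% and it didn't:
print(brier_score([0.7, 0.3], [1, 0]))  # ≈ 0.18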
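The book doesn’t spell out the exact extremizing transform GJP used, but a common functional form in the forecast-aggregation literature pushes the pooled probability away from 0.5 with an exponent. A quick sketch (the exponent value here is my own illustration, not the book’s):

```python
def extremize(p, a=2.0):
    """Push a group-average probability p away from 0.5.

    One common form from the aggregation literature; a > 1 controls how
    aggressively the pooled forecast is extremized (a = 1 leaves it
    unchanged). The exact transform GJP used isn't given in the book.
    """
    return p ** a / (p ** a + (1 - p) ** a)

# The book's example: a pooled 30% forecast pulled down toward 15%.
print(extremize(0.30))  # ≈ 0.155 with a = 2.0
```

With a = 2.0 this happens to land very close to the book’s 30% to 15% example; in practice the exponent would be tuned to the group’s observed calibration.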
There’s much more in the book, of course, and it is well written and accessible. Again, strong recommendation, especially for those in the “Counting 101” class.