This is a very interesting book, not just for those interested in forecasting and how different groups tend to use it, but also as general guidance for how to think if you want to contribute to change (not just be visible in media, something many seem to confuse with actual change).
I would describe the overall theme of the book as a discussion of the value of critical/scientific thinking. These days this alone makes it an unusual and valuable contribution. Even basic advice, like keeping an open mind and learning from those you don’t like, is described in a surprisingly good way. These are things that are easy to say and might sound obvious if you have never been in the situation, but Tetlock does a good job of explaining why it is difficult to be open and critical, both on an individual level and within the structures we have, and then provides some guidance on how to address this.
There are a lot of areas that Tetlock covers, but I will focus on four that I found especially interesting.
First, an area that Tetlock covers too briefly, but that I really would like someone to explore further, is the role of "forecasting experts" and media. Tetlock is unusually clear: the media are a big problem when it comes to presenting important issues, as they look for simplicity and drama. They therefore don’t let people hear from experts (people who actually know what they are talking about and make scientific predictions); instead, the media provide a platform for “experts” (people with strong opinions and a capacity to present those opinions in simple and dramatic terms).
I should probably emphasize that by media I - and I think also Tetlock - mean mainstream media, especially the big newspapers and TV channels. There is a lot of sophisticated and elaborate discussion in more specialized media, even if the simplicity seems to be spreading, as we have seen in cases like Foreign Affairs and the Financial Times.
If it were only the media that engaged in this simplicity circus together with the pundits, it would not be much to care about. The problem is that it is mainstream media that influence much of policymaking and business decision-making, as well as public opinion. Much of the discussion focuses on individual issues, but an even bigger problem is the long-term trend in which actual knowledge and science play a less and less important role. It would be interesting to explore how a scientific forecasting approach could be used to guide better governance structures, especially for global governance.
Second, another related area that I find interesting is that there are almost never any evaluations of statements and conclusions by different experts, even when it comes to reports and studies. The experts who are asked to comment on different issues, and who also suggest actions that influence decision makers, are almost never evaluated. Such an evaluation process would show that the experts visible in media are very often wrong (more so than those with a more scientific approach to different areas) or, even more commonly, make statements so vague and sweeping that they are impossible to evaluate.
I think Bjørn Lomborg is a very good example (much better than Friedman, whom Tetlock uses) of the kind of problem Tetlock highlights with media, experts and evaluation. Lomborg pretends to be scientific, and media often treat him as if he has something serious to say. A good example of his simplistic rhetoric is his approach to climate change. He never clarifies what probabilities he assumes for different climate scenarios. He just keeps claiming that too much is being done and in the wrong way, but any serious person would first clarify what probabilities they assume for different impacts and then talk about reasonable measures. He also changes his message without clarifying how and why, but one can assume it is to make donors happy and to make sure that he fits within the existing media narrative.
Third, I like that Tetlock discusses “black swans”, as this is one of the buzzwords that have influenced policymaking and the general discussion about risk in a way I think is problematic. The term “black swan” usually refers to an event that was (almost) impossible to predict. A closer look makes it obvious that very few of the events called “black swans” are anything of the sort. They might be low-probability, or might not fit the models influential people like to use, but they are not black swans.
Creating a culture in which society accepts that there are many “black swan events” we can never foresee is a dangerous (and in many ways very unscientific) path to walk down. In reality there are very few black swans, and I would have liked Tetlock to spend some more time on how to address this challenge (beyond showing that few black swan events exist). I guess this might be because Tetlock tries to expand a quite traditional approach to also include events that are harder to foresee. My focus is more on low-probability, high-impact events, and in that area black swans are a significant problem.
The idea of using “dragon kings” to refer to events that are known outliers is, I think, a much more interesting and fruitful starting point. With such an approach it is also possible to discuss how we can gather more data and improve our ways of assessing data to ensure that black swan events are kept to a minimum. There are some earlier potential black swan events that are now well understood and for which systems exist to reduce their probabilities and impacts. Often research is the best cure, but we need to create systems through which we better understand what data to look for and how to process it.
I should point out that I think there is a lot of merit in the approach of the main “brain” behind the black swan concept, Taleb. Especially his idea of the “antifragile”, as a way to describe a system that can "thrive and grow when exposed to volatility, randomness, disorder, and stressors and love adventure, risk, and uncertainty". This is a way to approach risk management as an opportunity and to think about how we can turn problems into something positive - especially how we can incorporate such an approach into our urge for efficiency, as efficiency usually results in very fragile and unstable systems.
Finally, the fact that Tetlock also spends a significant part of the book discussing different tools to evaluate and improve forecasting makes the book very valuable. Evaluating experts/foresights is one of the most important areas moving forward. In this process a key priority should be to update/reform the Brier score (and other similar tools) to address the challenge posed by those who claim 100% certainty, as well as risks that are unique in their impacts (e.g. risks with potentially infinite impacts).
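For readers who have not met the Brier score before, here is a minimal sketch in Python (the function name and the toy numbers are my own illustration, not from the book): the score is simply the mean squared difference between the probabilities a forecaster stated and what actually happened, which is exactly why a pundit who always claims 100% certainty is punished hard by a single miss.

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities (0..1)
    and realized outcomes (0 or 1). Lower is better; 0 is perfect."""
    assert len(forecasts) == len(outcomes) and forecasts
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Illustrative numbers: a hedged forecaster vs. one who always claims certainty.
hedged   = [0.7, 0.8, 0.6, 0.9]   # probabilities assigned to "event happens"
certain  = [1.0, 1.0, 1.0, 1.0]   # always 100% sure
happened = [1, 1, 0, 1]           # what actually occurred

print(brier_score(hedged, happened))   # → 0.125
print(brier_score(certain, happened))  # → 0.25
```

Note how the single event that did not happen is enough to make the "always certain" forecaster score worse than the hedged one, even though the hedged forecaster was never fully confident about anything.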
There are a few things that I think are problematic with this book, and it becomes clear that Tetlock’s strength is research on groups doing forecasting in controlled environments. As soon as he starts discussing practical implementation of forecasting, the book starts to lose focus and/or feel outdated, and sometimes is even wrong. Most of the concrete examples suffer from this. For example, even though Tetlock makes a point of learning from those we do not sympathise with, it is strange that he only includes companies like Walmart and 3M as examples of good forecasting. These are companies that might have been of interest 10-20 years ago, when they broke new ground.
What I like is that this weakness, when it comes to actual implementation and broader implications, actually serves as inspiration to take things further. It would have been better if Tetlock had made it clear that others must take this further, but for anyone interested in more than academic work it is clear that he has provided us with a very good framework to develop.
To conclude, this is a book that rests on many years of serious research and is presented in a way that makes it very easy to challenge its different assumptions - the way good scientific work should be conducted and presented. The fact that you can actually create structures/cultures to facilitate better forecasting is very interesting. Even more interesting are the structures we have today that work against a structured and scientific approach to the future. I would have liked to see more discussion of the challenge that low-probability, extremely high-impact events present, particularly as much of the focus in media and media-focused institutions (like WEF and TED) is on the short term and on the aspects that are easy to measure (in economic terms). Hopefully this will be addressed in the next book.