Today’s recommended reading is contributed by Rob Waller, president of the Protect Heritage Corporation. Prior to PHC, Rob spent thirty-three years at the Canadian Museum of Nature, including periods as chief of the Conservation Section and as managing director of the Collection Services Division. In 2005 he and I had a wonderfully paranoid time co-teaching “Museum Facilities and Risk Management,” and Rob contributed a chapter to the AAM publication “Covering Your Assets: Facilities and Risk Management in Museums.”
Future Babble is Dan Gardner’s second popular science book. Different publishers’ editions carry various subtitles, including: Why Expert Predictions Fail – and Why We Believe Them Anyway; Why Expert Predictions Are Next to Worthless, and You Can Do Better; Why Pundits Are Hedgehogs and Foxes Know Best; and How to Stop Worrying and Love the Unpredictable.
With this follow-up to his earlier book, Risk: The Science and Politics of Fear, Gardner is emerging as a master popular science writer in the vein of Daniel Goleman and Malcolm Gladwell. Future Babble is both entertaining and well researched, with references to original sources. It is important reading for any budding futurist. Much of it is built on and around the work of Philip Tetlock, especially as reported in his book Expert Political Judgment: How Good Is It? How Can We Know? Future Babble is, however, more accessible to lay readers and is supplemented with anecdotes, stories, and additional references.
The first chapter is an overture to the book, sampling content from the main chapters. The second chapter, “That unpredictable world,” demonstrates the impossibility of accurate prediction in complex systems, drawing on examples such as trying to predict the price of oil next year or your local weather on the 15th of next month.
Chapters 3 and 4 provide treatments of heuristics and biases now offered in so many popular psychology books. Still, Gardner’s explanations of these are so well woven into the theme of this book that they are interesting rather than tiresome even for someone who has read extensively in this area. Here we learn:
- the illusion of control leads us to think we are more able to control outcomes than we can possibly be;
- overconfidence has us believing our understanding is better than it is;
- hindsight bias causes us, after the fact, to revise our memory of our predictions so that we believe we were more prescient than we really were;
- groupthink allows our opinions to be shaped by common opinion, thereby reducing the diversity of our ideas about the future;
- the availability heuristic has us estimating the significance of something simply by how easily we can recall an example of it—certainly not a benefit for understanding a future that differs from our past;
- similarly, our near-ubiquitous use of the anchoring and adjustment heuristic means we allow a number already in mind, even an unrelated random number, to influence our estimate of any quantity, such as the number of museum visitors or the cost of preservation;
- finally, through the representativeness heuristic we allow detailed scenarios, built on “typical” situations and characteristics, to seem more likely to occur than more generally described scenarios. This last heuristic demands great care in the use of scenarios: we need to temper their excellent value for expanding our thinking with the understanding that contemplating an overly detailed scenario can reduce our ability to contemplate alternatives.
Chapters 5 and 6 explore our anxiety in the face of uncertainty, our craving for certainty, and how that craving leads us to seek out predictions, sometimes even from astrologers or from superstitions and omens, as well as from experts. We do so even though none of those sources, experts included, provides useful predictions so much as it simply reduces our internal sense of uncertainty. Unconsciously we let our sense of certainty serve as a proxy for a sense of accuracy. We also let our sense of someone else’s confidence serve as a proxy for our sense of their accuracy. The clearer and more certain the expert, the better the settling effect on us. Unfortunately, the opposite is closer to the truth: the more uncertain an expert is, the more likely he or she is to be near the truth. As Bertrand Russell said, “The whole problem with the world is that fools and fanatics are always so certain of themselves, with wiser people so full of doubts.”
At this point you might think, “Come on, it can’t be that bad—surely we would notice all those wrong predictions.” Just in time, Chapter 7, “When prophets fail,” illustrates how we notice fulfilled predictions, helped along both by experts who want as many people as possible to know they made a correct prediction and by media seeking out experts whose predictions succeeded. In contrast, we immediately forget failed predictions, aided by media that do not report on them, since a failed prediction amounts to “the absence of news.”
Finally, Chapter 8, titled “The end,” could have been titled “So what?” We crave the relief from uncertainty that predictions can provide but, importantly, we also need predictions as a basis for planning. We can make better predictions. Better predictions are made by:
- being skeptical of our ability to predict,
- seeking as much information as possible,
- taking many divergent perspectives into account, and
- ensuring predictions include stated uncertainties so that decisions and plans can be made as robust and resilient as possible.
So what does all this mean for AAM’s Center for the Future of Museums?
Elizabeth Merritt’s introduction to the Center’s first publication, Museums & Society 2034: Trends and Potential Futures, stated:
“humanity has always been obsessed with predicting the future. The unknown scares the pants off us, as well it might! Knowledge is power, and knowing what is coming around the corner would be immensely reassuring. Unfortunately, that isn’t going to happen. And predicting the future is not, in fact, the goal of futurism. We can’t determine what will happen, but we can take a thoughtful look at what might happen, and the attendant consequences. This awareness of potential futures enables us to choose which future we most want to live in, and figure out how to bring it into being.”
One of the concluding remarks of MaryJo Lelyveld’s Beyond Swabs and Solvent Gels: Using scenarios to generate, evaluate and navigate conservation futures (reviewed in Merritt’s Dec. 8, 2011 entry, “Navigating Conservation Futures”) is:
“The intention of scenarios planning is not to prepare for one future but to develop policies and processes that will be robust in many futures and create specific plans to deal with potential risks or take advantage of opportunities.”
In these examples we see that those who are thinking seriously about the future of museums are aware of the need to avoid the trap of believing in any particular scenario or prediction. They are sure to place a prominent caveat to that effect at the start or end (or both) of any document that discusses the future. This is also the main lesson of Future Babble. It is such an important lesson that, in my humble opinion, the book is well worth the time to read. That it is also a pleasant and entertaining read is a bonus.