The rise of the futurists: The perils of predicting with futurethink
By Alexander H. Montgomery and Amy J. Nelson
The Brookings Institution
Dec. 1, 2020 —
Policymakers, facing increasingly uncertain contemporary and future security and technology environments, are engaging in futurethink — using fictional scenarios to make predictions about the results of introducing artificial intelligence (AI) and other emerging technologies into these environments. Futurists engage in this process by providing scenarios to ameliorate uncertainty, drawing on a suite of tools that include simulations, worst-case planning, war-gaming, and even science fiction narratives.
A common futurethink tactic is to switch from risk-based probabilistic thinking, which is vulnerable to various decisionmaking pathologies, to possibilistic thinking — creatively generating scenarios outside of expected outcomes with a focus on impacts rather than probabilities. This move avoids some pathologies but is still subject to many biases and must be implemented judiciously.
Possibilistic thinking can be usefully harnessed if futurists and policymakers avoid:
- rounding off probabilities, using heuristics as knowledge, and only exploring known outcomes.
- engaging in excessive deviations from reality and exotic or emotionally fraught scenarios.
- anchoring on specific scenarios, allowing embedded assumptions, and making hasty generalizations.

At the same time, they should:
- be creative enough to spark new ideas.
- make explicit the ideas inspired by fiction and embed them in specific scenarios.
- seek out expert contributions and integrate existing threats into scenarios.