Adam Elkus just posted a new piece over at Slate on what artificial intelligence does and does not mean for security and geopolitics. It’s a good piece, and professional futurists and history buffs like myself always enjoy reviewing history to help contemplate our futures. His piece is an articulate reminder that: a) we often get our predictions wrong about the impact a new physical technology will have on the world; and b) history provides useful reminders as to the limits of our ability to accurately predict the future states of complex systems.
Discrete Events versus Open-Ended Uncertainties
Elkus’ piece, dealing as it does with our (in)ability to predict the systemic impacts of even a single technology, reminded me of another piece I just read, this one by Max Nisen in The Atlantic, about the research results arising from a recent geopolitical forecasting tournament. The researchers report some intuitive and some genuinely interesting conclusions, but the important point for current purposes is that the original contest asked teams to predict 199 world events.
This is where we can examine an important difference in the types of prediction and forecasting activities in which different professionals are engaged. Elkus in his post is talking about big, systemic uncertainties for which there is no serious way for us to develop a quantitative model that would be worth anything. Elkus is questioning our ability to accurately foretell the ultimate impact that artificial intelligence – whatever it ends up looking like – will have on global security and geopolitics. In contrast, the geopolitical forecasting tournament was asking people to predict discrete events, such as a particular country successfully conducting a nuclear weapons test.
At first blush these two activities might seem very similar to one another, but in fact they are not. The type of forecasting that Elkus is talking about, in which we try to anticipate the “downstream” effects of new technologies as they are developed, discarded, diffused, co-evolved, and laterally evolved across multiple social, economic, and political systems on multiple scales, is inherently problematic, if not downright impossible. Discrete events, on the other hand, for which we might have multiple historical precedents and which often involve an identifiable cast of characters, are phenomena for which it makes somewhat more sense to believe we might develop a better ability to forecast. Events such as military coups and weapons tests are better subjects for gathering “good” data, developing statistical models, and applying other logics to anticipate the decision making and reactions of specific individuals and definable groups of people.
Digital Fabrication versus Industry Consolidation
Why is this important? In part because it is with issues like this that academically trained futurists are constantly wrestling. Because we deal with “the future,” clients and audiences often expect that we can and will speak authoritatively on anything about the future. That, unfortunately, is simply not the case. Our subject matter expertise, if you will, lies in studying change (broadly speaking) and in constantly exploring the broader contexts of societal change within which our clients’ work occurs. To put it another way, people don’t come to us to ask about the likelihood of a very specific, fairly near-term event (a particular piece of legislation passing or one competitor buying another). Clients generally come to us to explore the big, open-ended uncertainties confronting them, the landscape shifts that they can feel but can’t yet map. And these uncertainties are the type of forecasting challenge articulated by Elkus in his article, rather than the event-specific type of predictions being made in the forecasting tournament reviewed by Nisen.
To frame it again, this time in more specific terms, it is the difference between being asked to forecast the impacts that digital fabrication will have first on economic life, then on social and political life, in the US and across the globe, and being asked to forecast the likelihood of one 3D printer manufacturer attempting to acquire another in the next year. The first is a massive question dealing with a level of complexity that most of us simply shy away from, while the second is much more amenable to creating a forecast on which you might immediately place bets. The two are very different questions, and to engage each one you need to first understand that difference and then understand the different types of approaches you can usefully deploy for each.
Ultimately, it would benefit all of us who deal explicitly with forecasting and anticipating the future to develop more nuanced frameworks for distinguishing the different types of forecasting being attempted and the different tools (and approaches, such as true crowdsourcing) useful for each.