The Mismatch Between Conflict Forecasts & The Needs Of Policymakers: Ukraine

I recently received a press blast from a high-profile conflict forecasting organization widely touted by the US Government. It concludes with 100% probability (not even a fraction of a percent of disagreement) that Russia will continue to occupy at least 10% of Ukrainian lands in four months, and offers a 78% chance that at least 15% of Ukrainian lands will remain in Russian hands. These are fairly definitive results as conflict forecasts go, presented in a crisp, precise bar chart that eliminates any ambiguity of interpretation.

Yet buried in the dense textual narrative below, which most readers will likely skim right over, is a caveat: "[we] will change [the forecast] if there are signs of a major Ukrainian breakthrough or collapse of Russian troops’ morale." A further note states that "if a breakthrough can be achieved, Ukraine can expect to regain a lot more territory more quickly."

Such caveats abound in the DC geopolitical forecasting landscape. One company's forecast of a less than 5% chance of a Russian invasion of Ukraine is proudly touted as evidence of "success," since the firm did technically state that it was possible Russia would invade, and the eventual invasion fell within that sub-5% probability.

Similarly, a US Government agency argued that its flagship forecasting initiative had 100% (not 99%) accuracy at forecasting all national elections worldwide. Yet nowhere did it publicly acknowledge that this figure was based on just a handful of inevitable outcomes, like the reelection of the leaders of Russia, Syria, and other strongly autocratic states whose elections hold few surprises. When pressed about the rest of the world, the agency responded that it would not be fair to count those forecasting failures, since such elections are difficult to predict.
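The arithmetic behind such a headline is easy to reproduce. Here is a minimal sketch (all data hypothetical) of how scoring only the "fair" cases manufactures a perfect record:

```python
# Hypothetical election-forecast ledger: two easy autocratic races called
# correctly, two competitive races called wrong.
elections = [
    {"country": "Autocracy A", "called_correctly": True,  "competitive": False},
    {"country": "Autocracy B", "called_correctly": True,  "competitive": False},
    {"country": "Democracy C", "called_correctly": False, "competitive": True},
    {"country": "Democracy D", "called_correctly": False, "competitive": True},
]

def accuracy(rows):
    """Fraction of forecasts that matched the outcome."""
    return sum(r["called_correctly"] for r in rows) / len(rows)

# Dropping the "difficult to predict" races is what turns 50% into 100%.
easy_only = [r for r in elections if not r["competitive"]]
print(f"headline accuracy: {accuracy(easy_only):.0%}")  # 100%
print(f"actual accuracy:   {accuracy(elections):.0%}")  # 50%
```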

Companies routinely tout their forecasting wins, but quietly neglect to note that those wins typically come amid a landscape of losses. When pressed for a complete and exhaustive inventory of every forecast made over the past 12 months and how closely each matched the actual outcome, few forecasting firms are willing to provide such a list, and those that do respond dismiss the request by pointing to their accurate forecasts as outweighing their failures. Yet if a firm is wrong as often as it is right, how can a policymaker depend on any given forecast when there is a 50/50 chance it will be wrong?
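A complete ledger is precisely what proper scoring rules are designed for. The sketch below (all numbers invented for illustration) scores two imaginary firms with a Brier score, a standard measure of probabilistic forecast accuracy: a firm that issues confident calls but is wrong half the time scores worse than one that hedges honestly, a difference no highlight reel of wins can reveal.

```python
def brier_score(forecasts):
    """Mean squared error between stated probability and outcome (1 or 0).
    0.0 is perfect; always saying 50/50 earns 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Each pair: (probability the firm assigned, what actually happened).
overconfident = [(0.95, 1), (0.90, 0), (0.85, 1), (0.92, 0)]  # bold, half wrong
calibrated    = [(0.95, 1), (0.60, 1), (0.30, 0), (0.10, 0)]  # modest, consistent

print(f"overconfident firm: {brier_score(overconfident):.2f}")  # ~0.42, worse than a coin flip
print(f"calibrated firm:    {brier_score(calibrated):.2f}")     # ~0.07
```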

The great challenge in forecasting the future of Ukraine lies not in asking whether Russia will continue to occupy Ukrainian lands if the counteroffensive stalls, but whether Russia will continue to occupy those lands, period, with all dependencies, such as the counteroffensive, troop morale, weapons and logistics supplies, and sanctions, taken into account. Policymakers care about the final outcome, whether Russia will continue to occupy Ukrainian lands; the myriad factors that will help decide that outcome are what they expect to be built into the forecast.

Like most of the geopolitical forecasts that DC is awash in, this email blast fits a standard pattern: answering a carefully constructed and caveated question (what happens specifically if the counteroffensive stalls and everything else remains static) rather than the question policymakers actually care about: what the trajectory of the invasion will be in the coming months, taking all known factors into consideration.