A data science case study applying scenario modelling to the 2026 English local elections across 64 authorities. Six scenarios are tested against a historical baseline, revealing that even the strongest scenario shock (a +4pp challenger surge) amounts to only 13% of the median calibrated uncertainty band. The key insight: scenario shocks are smaller than historical forecast error, so rankings presented without uncertainty intervals are misleading. The methodology uses backtest residuals as an empirical bootstrap distribution for uncertainty bands, mean-centred at the tier level. Two asymmetric scenarios illustrate design principles: isolate one mechanism per scenario for falsifiability (S4), and log guardrails even when they don't bind (S5). The model is frozen, hashed, and reproducible, with a public accuracy audit planned post-election.
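To make the method concrete, here is a minimal sketch of the backtest-residual bootstrap described above: residuals are mean-centred within each authority tier, then resampled around a point forecast to form percentile bands. The function name `uncertainty_band`, the residual values, the tier labels, and the column names are illustrative assumptions, not the author's actual pipeline.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

def uncertainty_band(point_forecast, residuals, n_boot=10_000, level=0.90):
    """Empirical bootstrap band: resample backtest residuals, add them
    to the point forecast, and take percentile bounds."""
    draws = point_forecast + rng.choice(residuals, size=n_boot, replace=True)
    lo, hi = np.percentile(draws, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return lo, hi

# Hypothetical backtest residuals (forecast minus actual, in pp) by tier;
# in the real model these would come from the historical backtests.
backtests = pd.DataFrame({
    "tier": ["district"] * 5 + ["county"] * 5,
    "residual": [3.1, -4.2, 7.5, -1.0, 2.4, -6.3, 5.0, 1.8, -2.2, 4.1],
})

# Mean-centre residuals within each tier so the band reflects spread,
# not any systematic tier-level bias.
backtests["centred"] = (
    backtests["residual"] - backtests.groupby("tier")["residual"].transform("mean")
)

point = 34.0  # hypothetical vote-share forecast (pp) for one authority
district = backtests.loc[backtests["tier"] == "district", "centred"].to_numpy()
lo, hi = uncertainty_band(point, district)
print(f"90% band: [{lo:.1f}, {hi:.1f}] pp; width {hi - lo:.1f} pp vs a +4pp scenario shock")
```

Comparing the band width printed here against a +4pp shock is the same comparison the case study makes: if the shock is a small fraction of the band, a scenario-driven reranking is not distinguishable from forecast noise.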
Table of contents
- What was modelled
- Method: backtest errors as the empirical uncertainty distribution
- The result: shocks smaller than uncertainty
- Reading the dashboard: geography and rankings
- Two asymmetric scenarios, two design lessons
- Reproducibility and limitations
- What scenario analysis teaches us
- What happens after May 7