When you say “70% probability,” do those events actually happen 70% of the time? This page tracks your probabilistic predictions, scored with the Brier score (lower is better) and visualized as a calibration curve.
Total forecasts
1
Resolved
0
Brier score
0.000
(none resolved)
Bias
+0.0%
(none resolved)
A second pure-play PIM acquisition by NVIDIA, AMD, or a hyperscaler within 18 months of Qualcomm's UPMEM deal (by 2026-12-31)
30%
unresolved
How calibration works
Every forecast is a probability assignment. When a forecast resolves, it contributes (p − outcome)² to the Brier score, where outcome is 1 if the event happened and 0 if it did not. Lower is better: a perfect predictor scores 0.0, and always guessing 50% scores 0.25. The calibration curve bins forecasts by predicted probability and shows whether, say, your 70% forecasts actually resolve true about 70% of the time. Record forecasts via scripts/research/forecast.ts and resolve them as outcomes land.