1. Abstract
Residuals (the differences between actual outcomes and a model's predictions) show how closely a model fits the data. Reviewing recent residual behavior helps confirm that a model remains reliable and has not drifted away from current business conditions.
2. Context
Use this best practice after running Predict and before moving a model to Production, as well as during periodic model reviews.
3. Content
3.1 Why It Matters
A model can perform well historically but fail in the most recent periods. Residuals from the last several months provide the clearest signal of whether relationships are still holding.
Large errors far in the past are often less concerning than persistent recent bias, which may indicate:
- Structural business changes
- Missing or mistimed drivers
- Shifts in market conditions
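Persistent recent bias can be made concrete with a quick calculation: average the residuals over the most recent periods and see whether they sit clearly above or below zero. The sketch below is illustrative only; the window length, threshold, and data are assumptions, not part of any specific tool.

```python
# Illustrative sketch: quantify persistent recent bias in residuals.
# The window length and sample data are assumptions for this example.

def recent_bias(actuals, predictions, window=6):
    """Mean residual (actual - predicted) over the most recent `window` periods."""
    residuals = [a - p for a, p in zip(actuals, predictions)]
    recent = residuals[-window:]
    return sum(recent) / len(recent)

actuals     = [100, 102, 101, 99, 103, 98, 95, 94, 93, 92, 91, 90]
predictions = [100, 101, 102, 100, 102, 100, 99, 98, 98, 97, 96, 95]

bias = recent_bias(actuals, predictions, window=6)
# A recent mean residual well below zero suggests the model is consistently
# over-predicting in recent periods, even if overall historical fit is strong.
print(round(bias, 2))
```

Here the historical fit looks reasonable, but the recent mean residual is clearly negative, which is exactly the pattern the indicators above describe.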
3.2 How to Apply
- Open the model and navigate to the Diagnostics tab.
- View the Residuals chart.
- Focus primarily on the last 6–9 months of training data.
- Look for:
  - Residuals centered around zero
  - No sustained upward or downward trend
  - No recurring seasonal pattern
- If recent residuals show drift, investigate:
  - Missing explanatory variables
  - Incorrect lead times
  - Need for seasonal adjustment or controls
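The first two visual checks above (centered around zero, no sustained trend) can be sketched in plain Python. This is a minimal illustration, not a tool-specific feature; the tolerances and the least-squares slope test are arbitrary choices for the example.

```python
# Minimal sketch of two of the residual checks above: mean near zero and
# no sustained trend over the recent window. Tolerances are illustrative.

def residual_checks(residuals, window=9, mean_tol=0.5, slope_tol=0.1):
    recent = residuals[-window:]
    n = len(recent)
    mean = sum(recent) / n

    # Least-squares slope of residuals against time: sustained drift
    # shows up as a slope meaningfully different from zero.
    xs = range(n)
    x_mean = sum(xs) / n
    slope = (sum((x - x_mean) * (r - mean) for x, r in zip(xs, recent))
             / sum((x - x_mean) ** 2 for x in xs))

    return {
        "centered": abs(mean) <= mean_tol,
        "no_trend": abs(slope) <= slope_tol,
        "mean": mean,
        "slope": slope,
    }

# Residuals drifting steadily downward should fail both checks.
drifting = [0.2, -0.1, 0.1, -0.4, -0.8, -1.1, -1.5, -1.9, -2.4]
print(residual_checks(drifting))
```

A recurring seasonal pattern is harder to test mechanically on short windows, which is why the visual inspection in the Diagnostics chart remains the primary check.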
3.3 Example
A revenue model shows strong historical fit, but residuals trend negative over the most recent quarters. Adding a new interest-rate variable corrects the bias, and recent residuals stabilize around zero.
3.4 Common Pitfalls
- Judging model quality only by overall R²
- Ignoring recent residual trends
- Overreacting to isolated one-month errors
- Attempting to “fix” random noise
3.5 Expected Results
- Early detection of model drift
- More stable production models
- Greater confidence that forecasts reflect current realities