1. Abstract
All explanatory indicators included in a Workbench should cover the full selected analysis window. Ensuring consistent time coverage prevents unintended truncation of the training period and preserves the statistical strength and interpretability of results.
2. Context
Apply this best practice after setting the Workbench start date and before running Predict. This is especially important when adding newly uploaded internal data or external indicators with shorter histories.
3. Content
3.1 Why It Matters
When one explanatory variable starts later than the Workbench start date, the effective training window is shortened to match that variable’s availability. This reduction in usable observations can:
- Decrease statistical power
- Weaken coefficient stability
- Distort correlation strength
- Create misleading impressions of model performance
Similarly, if an indicator ends earlier than the primary variable, it cannot meaningfully contribute to forecasting and may introduce structural inconsistencies.
The integrity of your analysis depends not only on the quality of individual indicators but also on their temporal alignment.
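The mechanism is simple: a model can only train on dates where every series has data, so the usable window is the intersection of all coverage ranges. Below is a minimal sketch of that intersection, assuming each indicator is held as a pandas Series indexed by date; the function and variable names are illustrative and not part of the Workbench or Predict.

```python
import pandas as pd

# Minimal sketch (outside the platform): the usable training window is the
# intersection of every series' coverage range, i.e. the span on which all
# explanatory variables have observations.
def effective_window(series_by_name):
    """Return (latest start, earliest end) across all series."""
    starts = [s.dropna().index.min() for s in series_by_name.values()]
    ends = [s.dropna().index.max() for s in series_by_name.values()]
    return max(starts), min(ends)
```

One late-starting series pushes the "latest start" forward for the whole model, which is exactly the truncation described above.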
3.2 How to Apply
After defining your Workbench time window:
- Verify start dates for all explanatory indicators.
Confirm that each indicator begins on or before the selected Workbench start date. - Verify end dates.
Ensure each indicator extends through the most recent observation of the dependent variable. - Adjust or remove truncated indicators.
If an indicator:- Begins too late
- Ends too early
- Requires excessive backfilling
Consider removing or replacing it.
- Review lead-time adjustments.
After setting lead times, confirm that shifting an indicator forward or backward does not unintentionally create coverage gaps. - Re-check observation counts in Predict.
Confirm that the model is using the full intended history.
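The following is a minimal sketch of such a check performed outside the platform, assuming the indicators sit in a pandas DataFrame with a DatetimeIndex. The function names, column handling, and the lead-time convention (positive lead = shift forward) are assumptions for illustration, not Predict functionality.

```python
import pandas as pd

def check_coverage(df, wb_start, wb_end):
    """Flag indicators that start after the Workbench start date
    or end before its last required observation."""
    wb_start, wb_end = pd.Timestamp(wb_start), pd.Timestamp(wb_end)
    rows = []
    for col in df.columns:
        obs = df[col].dropna()
        rows.append({
            "indicator": col,
            "first_obs": obs.index.min(),
            "last_obs": obs.index.max(),
            "starts_late": obs.index.min() > wb_start,
            "ends_early": obs.index.max() < wb_end,
        })
    return pd.DataFrame(rows)

def shifted_start(series, lead):
    """First usable date after applying a lead of `lead` periods.
    Shifting pushes NaNs into the front of the history, so the
    effective start can move later than the raw first observation."""
    return series.shift(lead).dropna().index.min()
```

Running `check_coverage` after every data upload, and `shifted_start` after setting lead times, makes truncation and hidden gaps visible before the model is re-estimated.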
3.3 Example
A Workbench is set to begin in January 2018. One explanatory income indicator starts in January 2020. Because of this, Predict shortens the effective training window to 2020 onward. Removing the truncated indicator restores the full five-year training window, improving coefficient stability.
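In concrete terms, assuming monthly observations and a window running January 2018 through December 2022 (a hypothetical frequency and end date for illustration), the truncation costs the model almost half of its history:

```python
import pandas as pd

# Hypothetical monthly cadence; the actual data frequency may differ.
full = pd.date_range("2018-01-01", "2022-12-01", freq="MS")       # 60 observations
truncated = pd.date_range("2020-01-01", "2022-12-01", freq="MS")  # 36 observations
print(len(full), len(truncated))  # 60 36
```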
3.4 Common Pitfalls
- Overlooking truncated indicators when adding new data
- Relying on partial history because “the model still runs”
- Extending indicators artificially without documented justification
- Forgetting that lead-time adjustments can create hidden coverage gaps
3.5 Expected Results
- Full utilization of the intended historical window
- Stronger and more stable coefficient estimates
- Reduced risk of overfitting
- Cleaner and more interpretable model diagnostics
- Improved confidence in forecast reliability