A demand planner is similar to a bike mechanic in that the value they add to a system generated forecast is proportional to how proficient they are at fine-tuning that forecast. The problem is that fine-tuning forecasts isn’t a subject that you hear all that much about. To change that, I thought I would provide a few tips that might come in handy when trying to fine-tune a system generated statistical forecast.
The most basic, and often most difficult, step in fine-tuning a forecast is determining the correct forecast model to use. A model that fits one set of demand patterns well may fit others poorly. To further complicate matters, the best forecast model may change over the life cycle of a product. In addition, the most appropriate model type may change as you create forecasts at different levels of aggregation. For example, one model may be more appropriate at the SKU-Location-Customer level while another will be best for the same SKU at the SKU-Location level or at the SKU level alone. Advanced demand planning solutions can be set up to automatically select and switch to the most appropriate model.
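To make the idea of automatic model selection concrete, here is a minimal sketch in Python. The three candidate models and the holdout-based scoring are illustrative stand-ins for what an advanced planning system would evaluate, not any vendor's actual algorithm.

```python
def naive(history):
    """Forecast next period as the last observed value."""
    return history[-1]

def moving_average(history, window=3):
    """Forecast next period as the mean of the last `window` periods."""
    return sum(history[-window:]) / window

def exp_smoothing(history, alpha=0.2):
    """Simple exponential smoothing; the final level is the forecast."""
    level = history[0]
    for actual in history[1:]:
        level = alpha * actual + (1 - alpha) * level
    return level

def select_model(history, holdout=4):
    """Score each candidate one step ahead over the holdout periods
    and return the (model name, mean absolute error) that wins."""
    candidates = {"naive": naive,
                  "moving_average": moving_average,
                  "exp_smoothing": exp_smoothing}
    scores = {}
    for name, model in candidates.items():
        errors = [abs(history[i] - model(history[:i]))
                  for i in range(len(history) - holdout, len(history))]
        scores[name] = sum(errors) / holdout
    best = min(scores, key=scores.get)
    return best, scores[best]

# A steadily trending item: the naive model happens to track it best here.
demand = [100, 105, 98, 110, 120, 115, 125, 130, 128, 135, 140, 138]
best_model, error = select_model(demand)
```

A real system would re-run this selection periodically so the chosen model can switch as the product's demand pattern changes.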
The second area to consider when fine-tuning your forecast is the accuracy of your demand history data. Proactively managing history is a critical step toward generating an accurate forecast. Most companies use shipment history to create their statistical forecast, which can inject significant accuracy challenges into the process. For example, products often ship in a different period than a customer requested, or from an alternative location due to shortages. When this happens, inaccuracies as to what was requested and when are introduced into the stream of data used to calculate future demand. Losing or adding customers is another reason to adjust history. The demand planner should manage history adjustments when there are anomalies that would otherwise create unreasonable forecasts. The ability to automatically identify and eliminate anomalies using filters is critical to letting planners keep their focus on strategic initiatives that drive profitable growth. Alternatively, when the cause of an anomaly is known, the better practice is to manually correct it and capture the reason for the correction.
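A simple standard-deviation filter of the kind described above might look like the following sketch. The threshold, the replace-with-clean-mean correction, and the sample data are all illustrative; a real system would let the planner review and annotate each adjustment.

```python
import statistics

def filter_anomalies(history, filter_factor=2.0):
    """Flag periods whose demand deviates from the mean by more than
    `filter_factor` standard deviations, then replace flagged periods
    with the mean of the remaining periods (a common simple correction)."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    flagged = [i for i, d in enumerate(history)
               if stdev > 0 and abs(d - mean) > filter_factor * stdev]
    if flagged:
        clean_mean = statistics.mean(
            [d for i, d in enumerate(history) if i not in flagged])
    else:
        clean_mean = mean
    adjusted = [clean_mean if i in flagged else d
                for i, d in enumerate(history)]
    return adjusted, flagged

# One-time spike in period 4 (e.g., a large one-off order) gets flagged.
history = [100, 95, 102, 98, 480, 101, 97, 103]
adjusted, flagged = filter_anomalies(history)
```

Note that blindly replacing outliers discards information; as the paragraph above says, when the cause is known it is better to correct the value manually and record why.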
A very powerful way to fine-tune your forecasts is by adjusting the parameters used to create the statistical forecast. In addition to automatically selecting the best model with the correct parameter set, an advanced solution will allow you to adjust the parameters as needed. The best way to do this is with “what-if” capabilities that let you compare alternative parameter settings and see their effect on forecast reasonableness and accuracy. A few parameters that you should be able to adjust include:
- Demand Filter Factor – based on standard deviation and used to evaluate and filter historical demand anomalies.
- Reasonableness Limit Factor – calculated by dividing the System Forecast by the Adjusted Demand and used to gauge the reasonableness of the forecast. Typically, a reasonableness limit factor outside of a range of 0.5 to 2.0 is suspect.
- Smoothing Factor – used to adjust the relative weight of a seasonal model’s components (permanent, trend, and seasonality). For example, a trend smoothing factor of 0.20 means that 20% of the new trend estimate comes from the most recent data and 80% comes from the prior estimate of trend. Advanced systems will use a configurable set of component smoothing factors, automatically evaluate the various combinations of factors in the set, and then select the set that produces the lowest forecast error.
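The smoothing-factor search described above can be sketched with Holt’s level-and-trend model (the seasonal component is omitted here for brevity). The grid values are illustrative, not a recommendation, and the scoring rule is a plain one-step-ahead absolute error.

```python
def holt_mean_error(history, alpha, beta):
    """Run Holt's linear (level + trend) method over the history and
    return the mean absolute one-step-ahead error."""
    level, trend = history[0], history[1] - history[0]
    errors = []
    for actual in history[2:]:
        forecast = level + trend
        errors.append(abs(actual - forecast))
        last_level = level
        level = alpha * actual + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
    return sum(errors) / len(errors)

def best_factors(history, grid=(0.1, 0.2, 0.3, 0.5)):
    """Evaluate every (alpha, beta) pair in the grid and keep the pair
    that produces the lowest forecast error -- a tiny what-if comparison."""
    scored = {(a, b): holt_mean_error(history, a, b)
              for a in grid for b in grid}
    return min(scored, key=scored.get)

demand = [100, 104, 109, 115, 118, 124, 130, 133, 139, 145]
alpha, beta = best_factors(demand)
```

This is the same idea as the “what-if” comparison described earlier: rather than guessing at factors, evaluate each combination against history and keep the one with the lowest error.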
A thorough forecast fine-tuning process should always include the use of multiple forecast accuracy measures. Forecast error is the difference between actual demand and forecasted demand. A common and very useful error measure is Mean Absolute Percentage Error (MAPE) or a weighted version (WMAPE) that includes a way to prioritize and focus on items that are more important (revenue, profit, volume, etc.). However, using MAPE as a forecast accuracy measurement has some drawbacks. For example, MAPE cannot be used if there are history periods with zero values, which often happens with low volume products. MAPE calculations are also subject to bias: when MAPE is used to compare the accuracy of forecast models, it will systematically favor a method whose forecasts are too low. Therefore, other forecast error/accuracy measures need to be used to evaluate and fine-tune the forecast, including Mean Absolute Deviation (MAD), Mean Squared Error (MSE), and Root Mean Squared Error (RMSE), to name a few. Forecast accuracy measures should also be evaluated across different time lags and horizons to truly evaluate the appropriateness of a forecast model. Usually production or acquisition lead time dictates what forecast lag to use when computing accuracy. On the other hand, evaluating different lag times (e.g., 1 month, 2 months, 3 months) can lead to interesting insights into forecast stability and appropriateness. Evaluating forecast accuracy over longer time horizons (quarterly, yearly, etc.) can also help to select the appropriate model and parameter settings.
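The measures above are straightforward to compute side by side. This sketch shows why a zero-demand period breaks plain MAPE while WMAPE, MAD, MSE, and RMSE still work; the weighting scheme and sample values are illustrative.

```python
import math

def forecast_errors(actual, forecast, weights=None):
    """Compute MAD, MSE, RMSE, MAPE, and WMAPE for paired actuals and
    forecasts. Weights (e.g., revenue) let WMAPE emphasize important items."""
    errs = [a - f for a, f in zip(actual, forecast)]
    n = len(errs)
    mad = sum(abs(e) for e in errs) / n
    mse = sum(e * e for e in errs) / n
    rmse = math.sqrt(mse)
    # MAPE divides by each actual, so it is undefined when any actual is 0.
    if all(a != 0 for a in actual):
        mape = sum(abs(e) / a for e, a in zip(errs, actual)) / n * 100
    else:
        mape = None
    w = weights or [1.0] * n
    wmape = (sum(wi * abs(e) for wi, e in zip(w, errs))
             / sum(wi * a for wi, a in zip(w, actual)) * 100)
    return {"MAD": mad, "MSE": mse, "RMSE": rmse,
            "MAPE": mape, "WMAPE": wmape}

actual = [100, 120, 0, 140]      # the zero-demand period breaks plain MAPE
forecast = [110, 115, 10, 130]
metrics = forecast_errors(actual, forecast)
```

Running this across several lags (forecasts made 1, 2, and 3 months ahead of each actual) rather than a single lag is what surfaces the stability insights mentioned above.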
You are probably thinking that this seems like a lot to know and a significant amount of work. You would be correct if you were performing all of this fine-tuning manually. However, by using system-enabled ABC analysis techniques and customized alerts, a planner can focus on what is most important. For example, alerts could tell you when forecast accuracy falls outside of predetermined limits, when a Demand Filter Factor is applied, or when a Reasonableness Limit Factor is exceeded. Don’t get lost in the weeds; let the system handle the heavy lifting while you navigate the course and take advantage of opportunities to drive success.
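An exception-style alert rule like those just described can be sketched in a few lines. The SKU names, sample ratios, and the 0.5–2.0 band (taken from the Reasonableness Limit Factor discussion above) are illustrative.

```python
def reasonableness_alerts(items, low=0.5, high=2.0):
    """Return only the items whose reasonableness limit factor
    (system forecast / adjusted demand) falls outside [low, high],
    so the planner reviews exceptions instead of every item."""
    alerts = []
    for sku, system_forecast, adjusted_demand in items:
        if adjusted_demand == 0:
            alerts.append((sku, None))   # no ratio possible; flag for review
            continue
        ratio = system_forecast / adjusted_demand
        if not (low <= ratio <= high):
            alerts.append((sku, round(ratio, 2)))
    return alerts

items = [("SKU-001", 120, 100),   # ratio 1.2 -> within limits, no alert
         ("SKU-002", 300, 100),   # ratio 3.0 -> alert
         ("SKU-003", 40, 100)]    # ratio 0.4 -> alert
alerts = reasonableness_alerts(items)
```

In practice the same pattern extends to accuracy thresholds and demand-filter events, with different limits per ABC class so A items get the tightest scrutiny.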