- Accuracy, transparency, thoroughness of analytical options and results
- Ability to ingest and use a broad range of data; a system that is ‘greedy’ for data that yield new insights
- Ability to update constantly on the most recent data, and models that quickly ‘find their level’ and adapt to regime changes
- Processing speed, intuitive speed / accuracy trade-offs, and workflows that yield data science insights on what data to curate, and which feature extraction methods add lift
- Overall operational impact on the business, driven by powerful algorithms that are easy to configure; this ensures robust results when implemented in a well-managed SQL / R environment
Let’s take a closer look at each one.
Accuracy and transparency
Stakeholders who care about forecasting in demand planning care about accuracy, and usually will not accept a new forecasting method unless it is rigorously validated against known forecasting benchmarks with proven accuracy. Even when accuracy to the second decimal place is not critical, accuracy is the benchmark because it is an objective measure, and demand planning executives know the economic impact of inaccuracy. Machine learning for demand forecasting is highly accurate, as has been demonstrated repeatedly in Kaggle competitions and model benchmarking studies.
For the more curious data scientist, machine learning forecasting also offers stable accuracy / bias trade-offs that can be adjusted along an ‘efficient frontier’ of the data science workflow, so that an accurate machine learning forecasting solution can be implemented quickly and then refined over time to further improve the forecast. Furthermore, machine learning forecasting is not a black box; the influence of model inputs can be weighed and understood so that the forecast is intuitive and transparent. Stated simply, accuracy, rigor and speed to solution are three characteristics of the Logility digital supply chain platform, which includes machine learning-based demand planning solutions.
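To make the transparency point concrete, here is a minimal sketch of permutation importance, one common way the influence of model inputs can be weighed: shuffle one input column and measure how much forecast error grows. The toy model, predictor names and coefficients here are illustrative assumptions, not Logility's implementation.

```python
import random

def mae(y_true, y_pred):
    """Mean absolute error between actuals and forecasts."""
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

# Toy "fitted model": demand driven strongly by price, weakly by temperature.
def predict(rows):
    return [100 - 5.0 * price + 0.5 * temp for price, temp in rows]

random.seed(42)
rows = [(random.uniform(1, 10), random.uniform(10, 30)) for _ in range(200)]
actuals = [100 - 5.0 * p + 0.5 * t + random.gauss(0, 1) for p, t in rows]

baseline = mae(actuals, predict(rows))
importance = {}
for i, name in enumerate(["price", "temperature"]):
    # Shuffle one column, re-score, and record the error increase.
    shuffled_col = [r[i] for r in rows]
    random.shuffle(shuffled_col)
    permuted = [tuple(shuffled_col[k] if j == i else v for j, v in enumerate(r))
                for k, r in enumerate(rows)]
    importance[name] = mae(actuals, predict(permuted)) - baseline

# Shuffling price should hurt accuracy far more than shuffling temperature.
print(importance)
```

Because the error increase is measured in the same units as the forecast error itself, a planner can read the result directly: the bigger the jump, the more the model relies on that input.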
Greed for more data
An important contributor to machine learning forecasting accuracy is the ability of machine learning to ingest disparate data and leverage that information at a granular level to improve SKU-level forecasting. Simply stated, if data can be matched to the SKU at the point of sale or the point of distribution, the data can be leveraged with machine learning forecasting. Here, it is important to distinguish between predictive algorithms and the pre-processing of data that feeds them. Forecasting data, at its simplest level, has four data columns: Case ID; Time Series Member (like SKU at point of sale, which is the most granular level of forecasting); date of transaction; and transaction amount (in units or dollars or volume) or transaction event (if the forecast focuses on events rather than amounts).
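The four-column layout described above can be sketched as a tiny dataset; the column names and the naive per-SKU average "forecast" below are illustrative assumptions for clarity, not a Logility schema or algorithm.

```python
import csv
import io

# Minimal four-column forecasting dataset: case ID, time series member (SKU),
# transaction date, and transaction amount in units.
raw = """case_id,sku,transaction_date,units
1,SKU-001,2023-01-02,40
2,SKU-001,2023-01-09,38
3,SKU-002,2023-01-02,15
4,SKU-002,2023-01-09,17
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# Group history per time-series member (SKU) -- the most granular level.
history = {}
for r in rows:
    history.setdefault(r["sku"], []).append((r["transaction_date"], int(r["units"])))

# A naive baseline forecast per SKU: the average of observed demand.
forecast = {sku: sum(u for _, u in obs) / len(obs) for sku, obs in history.items()}
print(forecast)  # {'SKU-001': 39.0, 'SKU-002': 16.0}
```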
Logility ingests this information, quickly building highly accurate and highly granular forecasts. All other available data, such as prices and discounts, distribution networks, weather information, social media ‘voice of the consumer’ and advertising impressions that can be correlated with the data at a SKU or location or date level can be blended into the modeling database. These extra predictors are often important drivers of improved forecast accuracy and bias reduction, and machine learning forecasting incorporates these disparate data without manual data exploration and human intervention in the mathematical forecasting process. Again, for that curious data scientist, this is an example of a greedy modeling process because a locally optimal solution is attained for every SKU at every location, with an end result that is accurate at a global level. But even more importantly, Logility’s process extracts signals from disparate data in a highly automated yet transparent manner.
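Blending extra predictors into the base sales data amounts to matching each record on keys such as (SKU, date). The sketch below shows that join for hypothetical price and weather feeds; all field names and values are assumptions for illustration.

```python
# Base sales records at the SKU / date grain.
sales = [
    {"sku": "SKU-001", "date": "2023-01-02", "units": 40},
    {"sku": "SKU-001", "date": "2023-01-09", "units": 55},
]

# Hypothetical external predictors, keyed at the grain they can be matched on.
prices = {("SKU-001", "2023-01-02"): 9.99, ("SKU-001", "2023-01-09"): 7.49}
weather = {"2023-01-02": 3.0, "2023-01-09": 8.5}  # daily mean temperature, C

# Blend predictors into the modeling table by matching on (SKU, date) and date.
modeling_rows = [
    {**row,
     "price": prices.get((row["sku"], row["date"])),
     "temp_c": weather.get(row["date"])}
    for row in sales
]
print(modeling_rows)
```

Any feed that can be keyed to SKU, location or date can be folded in the same way, which is what makes the blended modeling database "greedy" for new data sources.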
Rapid adaptation to change and supply chain disruption
Another quality of machine learning forecasting is the ability to be ‘always on’ in the sense that the forecast can be programmed to update automatically on the most recent data. Typically, this means updating the forecast based on aggregate data on a daily or weekly basis, refreshing the data warehouse with each forecast refresh, and regenerating a running forecast based on the most recent actuals. New forecast accuracy and bias metrics can be calculated, the base and running forecast can be compared, and the updated results presented for review through Logility dashboards.
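The re-scoring step in such a refresh cycle can be sketched as follows: when the newest actuals arrive, the running forecast is compared against them and accuracy (here MAPE) and bias metrics are recomputed. The function name, metric choices and sample numbers are illustrative assumptions, not Logility's dashboard logic.

```python
def refresh_metrics(forecast, actuals):
    """Score a running forecast against the most recent actuals."""
    errors = [f - a for f, a in zip(forecast, actuals)]
    # Mean absolute percentage error: average relative miss per period.
    mape = sum(abs(e) / a for e, a in zip(errors, actuals)) / len(actuals)
    # Bias: signed average error; positive means systematic over-forecasting.
    bias = sum(errors) / len(errors)
    return {"mape": round(mape, 4), "bias": round(bias, 2)}

# Latest weekly actuals arrive; the prior running forecast is re-scored.
running_forecast = [100, 110, 95, 120]
latest_actuals = [98, 115, 100, 118]
metrics = refresh_metrics(running_forecast, latest_actuals)
print(metrics)  # {'mape': 0.0327, 'bias': -1.5}
```

Tracking these two numbers with every refresh is what turns a static forecast into the 'always on' monitoring loop described above.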
In this way, forecast accuracy trends can be leveraged to adjust demand planning. This ‘always on’ forecast monitoring, combined with dynamic and customer-level pricing and promotions, can be tuned to identify price sensitivity among customer segments and the products that form a market basket, and thus to build the foundations of an online recommender system. Once a daily forecast and customer history are merged with a transactional website recommender system, the value of the recommender system in driving incremental consumer purchases can be unlocked. This is where forecasting truly becomes an automated learning cycle, nearing AI capabilities, and the method is ideally suited to large-scale Fulfillment By Amazon (FBA) businesses.
Analytical processing speed and accelerated corporate learning
An additional advantage of machine learning is data processing speed. Modern machine learning packages in R have been designed to capitalize on Intel CPU and GPU architectures, squeezing out more calculations per second, making the best use of in-memory storage, and propelling machine learning forecasting to light-speed results. Logility has tested machine learning forecast generation processing speeds and has found that 1.5 million forecasts per hour (data extraction, machine learning model fitting, scoring and storage of output scores) are easily attainable with commodity hardware.
Logility scales to the largest modeling tasks in expandable Azure cloud computing environments. When even higher levels of accuracy matter, machine learning can incorporate additional predictors and Deep Learning, with the analytical results demonstrating the speed / accuracy trade-off as an efficient frontier. From that point, decision makers are well informed in making necessary trade-offs on where to invest – more data, faster data processing, a larger computational cluster, and so forth – with all of those issues addressable in the highly scalable Microsoft cloud.
Borrowing from the four Vs of Big Data, Logility enables superior forecast accuracy, based on a greater volume and variety of data that are arriving at higher velocity. Within a well-managed data warehouse, this means higher veracity of the forecast data.
This creates the foundation for rapid acceleration in advanced analytics, the development of proprietary insights into how data can be leveraged to improve business performance, and real dollar-denominated improvements in the bottom line.
Daniel Bachar is a Product Marketing Director for Advanced Analytics for Logility. Daniel brings more than 10 years of experience in sales, marketing, supply chain planning, and advanced analytics. He provides a unique blend of business and industry knowledge, leading successful efforts to integrate new technologies into effective supply chain solutions. His experience includes development, design and go-to-market strategy of supply chain and advanced analytics products, helping clients with complex business problems to achieve complete visibility into their supply chain operations.