When I think about the opportunities and challenges around data analytics in the supply chain, I envision two farmers: one with a couple of small packets of seeds, the other sitting on top of a barn full of them. Which one do you think is more likely to produce a larger crop, and therefore make more money? Most people would assume the farmer with the seed stockpile would naturally yield the bigger, more successful harvest. Most people would be wrong. The farmer with all the seeds ends up far less productive because he never took the time to define the rules and parameters needed to organize his seeds by type, growing season, and so on. He spends so much time sifting through his cache searching for the right seeds that he often misses the optimal planting window.
Likewise, many enterprises today are discovering that more data does not necessarily equate to better analytics. In fact, it has been argued that the predictive power of data analytics diminishes when the data pool is too dilute. Yet the virtually limitless capacity of today's cloud storage solutions has created a generation of data hoarders. Since supply chains are largely data driven (plans, units, POs and prices, schedules, invoices, and inventory), it's easy to see how enterprises can find themselves holding too much data while still struggling to identify the right information.
Many data management solutions today focus on common platforms and implementations aimed at reducing operational pain by delivering more interconnected capabilities. This, in turn, requires investing in people so they can navigate new processes and systems and learn a new way of using information. These implementations often take longer and cost more than originally projected.
One reason for this, I believe, is the failure of many supply chain managers to think through the foundational aspects of how data is managed before they make decisions on what database software and analytics tools to deploy. This assessment must include consideration of three critical upstream factors in the data management process:
- Policies. Our ability to leverage the cloud is limited by our ability to decide what data should be used and how. Managers need to determine up front which policies are in place, and areas such as legal, HR, finance, and IT need to agree on the expected behaviors of every organization involved in data management.
- Rules. Data is often cumbersome because nomenclature and part-numbering systems are not consistent across industries, within industries, or even within a single company. Managers need to publish a set of rules to be followed, and each organization needs to agree, both independently and collectively, on the procedures to be used across the enterprise. Agreements on part-number management, parameters, and protocols are essential: for example, what MOQ (minimum order quantity) should be held in inventory, the number of days payable, and pricing by geography. These choices drive how much data is captured and how it is used.
- Processes. Managers need to produce processes that follow the rules. This effort amounts to a deliberate reconstruction of how data moves between departments, functions, and locations.
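To make the "rules" idea concrete, here is a minimal sketch of what encoding such agreements in software might look like. All names and thresholds (the `OrderRule` fields, the sample part numbers, the MOQ values) are illustrative assumptions, not details from any real system:

```python
from dataclasses import dataclass

# Hypothetical rule set: field names and values are illustrative only.
PART_NUMBER_SEPARATORS = "-_ ."

@dataclass
class OrderRule:
    moq: int            # minimum order quantity to hold in inventory
    days_payable: int   # agreed payment terms
    region: str         # pricing geography

def normalize_part_number(raw: str) -> str:
    """Apply one shared part-numbering rule: uppercase, no separators."""
    cleaned = raw.upper()
    for sep in PART_NUMBER_SEPARATORS:
        cleaned = cleaned.replace(sep, "")
    return cleaned

def validate_order(part_number: str, qty: int, rules: dict) -> list:
    """Return a list of rule violations for one purchase-order line."""
    issues = []
    pn = normalize_part_number(part_number)
    rule = rules.get(pn)
    if rule is None:
        issues.append(f"{pn}: no rule on file for this part")
    elif qty < rule.moq:
        issues.append(f"{pn}: qty {qty} below MOQ {rule.moq}")
    return issues

rules = {"AB1234": OrderRule(moq=50, days_payable=45, region="EMEA")}
print(validate_order("ab-12 34", 20, rules))  # qty below the agreed MOQ
```

The point is not the code itself but the design choice it represents: once part-number formats and order parameters are agreed upon and published as explicit rules, every system and department validates data the same way, instead of each one improvising its own conventions.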
These early decisions have a great impact on data outcomes: no matter how much technology you throw at the supply chain, if you keep operating with business and process rules established years, or even decades, ago, you will not be able to effect the kind of improvements that today's data analytics programs make possible. A proactive data management strategy can mean the difference between counterproductive data overload and the ability to turn raw data into actionable insights that differentiate your business.