How Not to Make the Cut for Supply Chain Disasters

The term “big data” gets thrown around a lot these days, but one of the areas where it truly applies is large industrial units: manufacturing facilities, refineries, vehicle assembly plants, and the like. With the advent of digital technologies and advanced sensors, the amount of data collected every day is astounding. That scale brings a challenge of its own: these datasets are prone to numerous errors and issues.

Let’s step back a little. At its core, any industrial unit is meant to do one thing: produce goods. Hidden behind that simple description are hundreds of decisions, each based on a number of factors, and each of those factors is calculated from information gathered across a wide variety of data sources. Even a simple product decision sits at the end of that chain of information.

The profitability of a business depends on getting these decisions right, and the quality of these decisions depends on the quality of the data available to decision makers. This is what makes data quality oversight such a critical part of business.

Let’s take a look at one aspect of this process: the Supply Chain.

It’s a complicated process that determines the quantities and materials that need to be ordered to ensure sales goals are met. In today’s global supply chain, the systems in this process rely heavily on automated data collection from various warehouses, stores, and stock points. Take a look at some of the stories here.

Some useful insights from these scenarios include:

  • FoxMeyer’s distribution issues following warehouse automation could have been avoided with a system that instantly highlighted anomalies such as partially filled orders, or that tracked when a software failure occurred and left transactions incomplete (a minimal version of such a check is sketched after this list).
  • Hershey’s Halloween Nightmare likewise points to the problem of automating systems without enough checks in place, checks that would have caught the errors and glitches and highlighted problems almost immediately.
  • Nike’s supply chain planning system highlights the importance of systems that automatically monitor incoming data and flag deviations from the older, established systems. This can help find bugs in new deployments and streamline adoption of new technologies, not to mention prevent a $100 million shortfall in revenue.
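
To make this concrete, here is a minimal sketch of a check that would catch partially filled orders marked as complete. It is illustrative Python with hypothetical record fields and function names, not FoxMeyer’s actual system or the Qualytics product.

```python
from dataclasses import dataclass

@dataclass
class OrderLine:
    order_id: str
    sku: str
    qty_ordered: int
    qty_shipped: int
    status: str  # e.g. "OPEN" or "COMPLETED"

def find_fill_anomalies(lines: list[OrderLine]) -> list[str]:
    """Flag lines that claim completion but shipped less than ordered,
    or that shipped more than ordered (both suggest a software fault)."""
    anomalies = []
    for line in lines:
        if line.status == "COMPLETED" and line.qty_shipped < line.qty_ordered:
            anomalies.append(f"{line.order_id}/{line.sku}: partial fill "
                             f"({line.qty_shipped}/{line.qty_ordered}) marked COMPLETED")
        elif line.qty_shipped > line.qty_ordered:
            anomalies.append(f"{line.order_id}/{line.sku}: over-shipment "
                             f"({line.qty_shipped}/{line.qty_ordered})")
    return anomalies

# Example: one clean line, one partial fill slipping through as "COMPLETED"
lines = [
    OrderLine("PO-1001", "SKU-A", 100, 100, "COMPLETED"),
    OrderLine("PO-1002", "SKU-B", 50, 30, "COMPLETED"),
]
for issue in find_fill_anomalies(lines):
    print(issue)
```
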
60% of CPOs cite poor master data quality, standardization, and governance as one of the top five challenges to procurement. (Deloitte)

In most cases, the problems arose from the difficulty of monitoring the high volume of data generated by automated systems. One way to work around this is to have subject matter experts (SMEs) monitor, clean, or report on the data. But that is an expensive, mundane, and time-consuming proposition, not to mention hard to scale across multiple workflows.

For example, a data pipeline built to optimize predictions of new customer acquisition will involve extensive input from the sales team and product managers. Taking that solution and applying it to inventory management or to reducing manufacturing faults will not work without equally extensive input from experts in those fields.

Fortunately, we are entering a time when the importance of data quality is widely recognized. With an automated system like Qualytics, you can have confidence in your data without a team responsible for reviewing each and every data point. Imagine a system that infers checks from your data streams and also lets your SMEs write checks based on their domain knowledge. As a result, it becomes easier and more efficient to detect anomalies hidden in millions of lines of data, and catching them can keep you from ending up on the next top supply chain disaster list.
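
To make the idea concrete, here is a minimal sketch of the two kinds of checks in plain Python. This is not the Qualytics API; the function names, thresholds, and sample values are all hypothetical.

```python
import statistics

def infer_range_check(history: list[float], k: float = 3.0):
    """Infer a simple range check from historical values: flag anything
    more than k standard deviations from the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    lo, hi = mean - k * stdev, mean + k * stdev
    return lambda value: lo <= value <= hi

# Inferred check: learned from recent daily order counts
inferred_ok = infer_range_check([980, 1010, 995, 1023, 990, 1005, 1012, 998])

# SME-authored check: a domain rule no statistical profile would know
def sme_ok(record: dict) -> bool:
    # Business rule: a shipped quantity can never exceed the ordered quantity
    return record["qty_shipped"] <= record["qty_ordered"]

record = {"daily_orders": 140, "qty_shipped": 60, "qty_ordered": 50}
if not inferred_ok(record["daily_orders"]):
    print("Inferred check failed: daily order volume is anomalous")
if not sme_ok(record):
    print("SME check failed: shipped more than ordered")
```
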

There are many things that can go wrong when running a large, complex enterprise, but data quality does not need to be one of them.

A Simple Example Highlighting the Benefits of Qualytics:

Let’s take the example of Asos (here). The e-commerce business deployed new software to track inventory and update the website accordingly. Asos has a very active list of SKUs: over 85,000, with roughly 5,000 added every week.

The new software was sending out inventory but neither replenishing it nor taking in new items. As a result, a huge amount of “ghost” inventory built up in the system, severely restricting available stock in Germany, France, and the US. The company took a hit to margins, and profits fell 68% year over year in FY2019.

This problem could easily have been avoided by a simple comparison between incoming and outgoing inventory, something that could be configured and scheduled in Qualytics to run daily. Further, volumetric shape checks would be inferred automatically, catching differences in shape and pattern relative to historic records.
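
For illustration, here is a hand-rolled sketch of that daily reconciliation plus a volumetric comparison against history. The function names and numbers are hypothetical; in Qualytics the equivalent checks would be configured or inferred rather than written by hand.

```python
import datetime

def reconcile(opening: int, received: int, shipped: int, closing: int) -> int:
    """Return the ghost-inventory gap: units the system claims to hold
    that inbound/outbound movements cannot account for."""
    expected_closing = opening + received - shipped
    return closing - expected_closing

def volumetric_alert(today_rows: int, history: list[int], tolerance: float = 0.5) -> bool:
    """Flag a feed whose daily row count deviates sharply from its history,
    e.g. a replenishment feed that silently stopped arriving."""
    baseline = sum(history) / len(history)
    return abs(today_rows - baseline) > tolerance * baseline

# Daily run: the receipts feed went quiet, so closing stock no longer reconciles
gap = reconcile(opening=10_000, received=0, shipped=1_200, closing=10_000)
if gap != 0:
    print(f"{datetime.date.today()}: {gap:+d} units unexplained; investigate the feeds")
if volumetric_alert(today_rows=0, history=[5_000, 5_100, 4_950, 5_020]):
    print("Receipts feed volume collapsed versus its historical pattern")
```
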

In this scenario, automated checks would have caught the differences between data stores in near real time, giving the team the opportunity to rectify the issue ahead of any downstream implications.

Qualytics is the complete solution to instill trust and confidence in your enterprise data ecosystem. It seamlessly connects to your databases, warehouses, and source systems, proactively improving data quality through anomaly detection, signaling, workflow, and enrichment. Check out our other blogs to learn more about how you can start trusting your data, or contact us at hello@qualytics.co.
