April 12, 2019
Manufacturing Challenges Blog Series: Data Overload
Welcome to the seventh installment in the Manufacturing Challenges blog series. Over the previous weeks in this series, we’ve looked at a number of challenges facing manufacturers: audits, defects and recalls, operator engagement, waste reduction, and efficiency. In this blog, I’m going to focus on data overload. Right off the bat, I’d like to state that, in today’s manufacturing environment, there is often too much noise to see the signals.
The signals are the messages buried in the data we capture, the ones we hope will tell the story of our manufacturing processes. The noise is everything else that gets in the way of that, and it includes too much of the signal itself.
The Future is Now
We all see it every day in our manufacturing facilities: the future is now. Technology, in general, is advancing at a breathtaking pace. And manufacturing technology is no different. In fact, in many ways, it is probably leading the way—utilizing the cloud, mobility, AI, and more.
In my years in this industry, I’ve witnessed many changes: the slow demise of analog dials; the advent of computers; the progression from computer monitors to laptops, tablets, and phones; and the explosion of types of manufacturing equipment, including data collection devices and databases. It’s all happening fast, and it’s everywhere in today’s manufacturing world.
Heads are Spinning
In terms of data, it’s been a strange, sometimes confusing progression. At some point, equipment manufacturers started coming out with their own versions of data output; sometimes these new machines even came with their own databases to which they could write. Others had the ability to do a bit of configuration and export spreadsheet files. In a nutshell: data began to exist in abundant formats. And that’s a challenge.
So, it is becoming clear that integration is going to be an issue if these various machines and databases don’t all speak the same language or output the same format. I mean, even the smallest devices seem to have some sort of data output. That’s what every equipment manufacturer is pushing: stream your data from this new device into your system.
As if all that weren’t enough, manufacturers are also dealing with all these different generations of equipment, and they all speak their own language, too.
It’s easy to see why quality professionals are scratching their heads and wondering what to do with all this data—from all these disparate machines. Can we make sense of it all?
Enter the Process Model
There’s an aspect of Enact®, the InfinityQS Quality Intelligence platform, that interconnects all these disparate data streams: the process model. My colleagues have spoken in great depth about the process model; most recently, Eric Weisbrod, VP of Product Management, dove into the subject in his blog, The Joys of a Process Model.
What’s a process model, you ask? In short, the Enact process model helps you visualize your processes, make sense of the sequence of things like data collections, and see very clearly the interconnectivity of your data streams.
To quote Eric: “It’s functional, it allows you to build things out in step-by-step fashion (which is convenient), it’s expandable, it helps you error-proof your processes, and it centralizes and simplifies the ways in which you manage your operations. We designed Enact’s process model functionality to greatly benefit end-users, and that is a major part of the game in today’s software world.”
Lacking a tool like a process model, it’s easy to fall into a sort of manufacturing “paralysis,” whereby you’re overwhelmed by the data coming in and not sure what’s useful and what isn’t.
Can Does Not Equal Should
An aspect of data collection that is very prevalent in today’s manufacturing world is collecting all the data you can—without really considering if you should. Another colleague, Doug Fair, our COO, has written extensively on this topic, most recently for Food Quality & Safety: Data Collection: Is Too Much Data a Bad Thing?
Check it out.
Doug discusses how “data gluttony” is an expensive, bad habit for many organizations. There are hidden costs associated with it, and it distracts you from the important things you can discover about your processes if you focus on the right data. Indeed, can does not equal should when it comes to data collection.
The key is to bypass data gluttony and separate the noise from the signal, to collect only what you need and not everything you want, just because you can.
Where Do We Start?
All this data overload talk reminds me of a story. The big question many manufacturers face when they think about simplifying things, focusing on their data, and getting to the root of their problems is this: Where do we start?
Years ago, I think it was in the 80s, Dr. W. Edwards Deming—the master of continuous quality improvement—was speaking at an event I attended. This was back in the time of buzzwords like Total Quality Management (TQM), zero defects, and the like. Six Sigma wasn’t really even a thing yet. He was talking about something fairly new at the time: the plethora of data coming in from all these many directions and devices, and what a challenge it was for manufacturers. He took some questions at the end.
“Where do we start?” asked one participant. His response was “Start with the data. Look at what the data is telling you. Analyze the data and it will tell you what direction to go.”
Pick Up the Stream
I interpreted that to mean the following: It doesn’t matter where you start, just pick up a stream of data.
Pump it into something, anything, that can help you make sense of it. And then, if there’s anything of interest there, follow it. Follow that path to where maybe something good can come of it.
Or, if nothing is interesting there, stop looking at it…and don't worry about it. Go pick up another piece of data or stream of data and look at that.
That’s a roundabout way of saying I think Deming is right. If you have no idea where to start, then just grab some data, look at it, and see if that tells you something. At the very least, it will help you to start asking better questions than you could without the data.
It really doesn’t matter where you start; just get started. The data will then drive the direction in which you end up going.
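Deming’s advice can be made concrete with a small sketch. The function and the fill-weight numbers below are entirely hypothetical, but they show what “just grab a stream and look at it” might look like in practice: pump one measurement stream into a quick summary and see whether anything interesting falls out.

```python
# A minimal sketch of "grab a stream and look at it": summarize one
# measurement stream and flag whether it deserves a closer look.
# The fill-weight data here are made up for illustration.
from statistics import mean, stdev

def quick_look(stream, target, tolerance):
    """Return basic facts about a data stream: its center, its spread,
    and how many points fall outside the stated tolerance."""
    avg = mean(stream)
    spread = stdev(stream)
    out_of_tol = [x for x in stream if abs(x - target) > tolerance]
    return {
        "mean": round(avg, 3),
        "std_dev": round(spread, 3),
        "pct_out_of_tolerance": round(100 * len(out_of_tol) / len(stream), 1),
    }

weights = [501.2, 499.8, 500.5, 503.1, 498.9, 502.4, 500.1, 504.0]
print(quick_look(weights, target=500.0, tolerance=2.5))
```

If the summary shows nothing of interest, set that stream aside and pick up the next one, exactly as described above.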
So, you’ve got all this data coming in, and you’ve started analyzing it. Well, how do you know what’s important? How do you know you’re starting with the right data and not wasting your time on something that’s not so important?
It’s common knowledge that fixing a problem upstream is the cheapest way to go. Fix the problem before it gets further and further down the value-add stream. This mindset is “let’s start at the beginning.”
Start at the End?
I prefer to start at the end. That way, I can see where the problems are that might cause escapes, that might cause issues with customer relationships. Start there, grab the data, analyze it, and it will probably drive you upstream to the right spots to fix.
For example, you’ve got an overweight issue at final inspection. What’s going on upstream that keeps adding unwanted weight? You pick the spots during production where weight in some form is added to the product—then you analyze input/output delta weight changes to isolate the most likely sources of the problem. This advice is fundamentally obvious but, in reality, methodical problem-solving does not happen as much as it should.
Once you’ve implemented some changes that affect weight, watch what happens downstream and see whether you fixed the problem. You start at the end, and the data inevitably drives you upstream. Pick a place to begin analyzing the data; it will drive you where it drives you. The important thing is simply to get started.
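The overweight example above can be sketched in a few lines. The station names, expected additions, and in/out weight pairs are all hypothetical; the point is the method: compute each station’s average input/output weight delta and compare it against what the recipe says that station should add.

```python
# A hedged sketch of the upstream weight hunt: for each value-add station
# (names and numbers are illustrative), compare the average weight it
# actually adds against the weight it is supposed to add.
from statistics import mean

# (station, expected_add_g, [(weight_in, weight_out), ...])
stations = [
    ("filler", 450.0, [(50.1, 502.3), (49.8, 503.0), (50.0, 501.5)]),
    ("topper",  30.0, [(502.3, 531.9), (503.0, 533.4), (501.5, 531.6)]),
    ("glazer",  15.0, [(531.9, 547.2), (533.4, 548.3), (531.6, 546.8)]),
]

for name, expected, pairs in stations:
    actual = mean(out_w - in_w for in_w, out_w in pairs)
    print(f"{name:8s} expected +{expected:.1f} g, "
          f"actual +{actual:.2f} g, delta {actual - expected:+.2f} g")
```

In this made-up run, the filler’s delta stands out, so that is the spot the data would drive you to investigate first.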
Too Much Noise to Hear the Signal
Getting started shouldn’t be too much of an issue for quality pros. They love the data. They usually can’t wait to get their hands on it, so they can begin figuring things out. But I think it’s safe to say that sometimes getting started seems daunting because there is just so darned much data.
As each new generation of measurement device is introduced, they seem more and more complex. These days, it’s all about automation, the Internet of Things (IoT), and the flow of data that these devices can create. It all adds up fast. And before you know it, you’ve got all this data: the health of the equipment, and the health of the parts running on or being measured by that equipment. And you have to look at it all. It’s all important. And you don’t want to miss anything. Ah, the noise!
So, our job is to help customers pull the vital information out from all this data. That’s the job.
In the End, You Have to Trust the Statistics
If you trust the data, trust the statistics, then you will get past all the noise around the signal. And the way you get to a point where you trust the data is to make sure that all the underlying elements are in place that will ensure that the data is trustworthy: all the data are collected for the right things, by the right devices, in the right sequence, and on time.
If you believe in the science of statistical analysis, then you are going to be able to get past all the noise of all that data. Sounds a little preachy, but it’s true.
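The statistics being trusted here are the classic ones of statistical process control. As a hedged sketch (the data are invented, and this is a bare-bones illustration rather than anything Enact-specific), a Shewhart-style 3-sigma check is one way the math separates common-cause noise from a signal worth chasing:

```python
# Minimal signal-vs-noise sketch: points inside 3-sigma control limits
# are treated as common-cause noise; points outside are the signal.
# Baseline and new data are illustrative.
from statistics import mean, stdev

def control_limits(baseline):
    """Compute the center line and 3-sigma limits from stable baseline data."""
    cl = mean(baseline)
    sigma = stdev(baseline)
    return cl - 3 * sigma, cl, cl + 3 * sigma

baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 9.9, 10.0]
lcl, cl, ucl = control_limits(baseline)

new_points = [10.05, 9.95, 10.8, 10.0]   # 10.8 is the planted signal
signals = [x for x in new_points if not (lcl <= x <= ucl)]
print(f"limits: [{lcl:.2f}, {ucl:.2f}]  signals: {signals}")
```

Everything inside the limits is noise you can safely ignore; the one point outside is the message worth following upstream.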
Everyone’s goal in manufacturing is to make a better product. It’s why we look at the numbers in the first place. And if/when you get past all the noise in the data, to a point at which you really understand all the nuances of your manufacturing processes, then you—and your organization—will make better business decisions based on process knowledge.
Please read the other blogs in the Manufacturing Challenges series.