Tales from the Trenches – 3: Data Entry Errors

By Ian Farrell | May 18, 2021
Lead Consultant, Key Performance Quality Consulting, LLC
Tales from the Trenches is an ongoing series of blogs and videos designed to help manufacturing quality professionals deal with the issues that arise on the plant floor...no matter what industry you are in.

In the first and second blogs in this series we discussed reducing customer complaints and using assignable cause and corrective action codes, respectively. In this entry in the series, we’ll discuss reducing data entry errors.
 
Do data entry errors plague your operations? Is there an easy fix? Are you sure you’re collecting the right data? Do you have a data collection plan?
 
Knowing the right data to collect and the right way to analyze it is critical. Without a focused, intentional data collection process, you can end up with a lot of data, but nothing very useful.
 
As alluded to above, for this article, I’ll be focusing on how I improved existing data collection projects to reduce data entry errors—specifically, how those efforts yielded better data analysis, improved efficiency, and ultimately led to a higher-quality product for my consumers. We’ll look at how, by putting in a little bit of front-end work to prevent data entry errors, you too can save yourself from manual data cleansing work on the back end, just as I did.
 
And, lastly, we’ll look at how I used InfinityQS statistical process control (SPC) software to collect accurate data, thus improving turnaround time on product quality and process efficiency decisions.
Reducing Data Entry Errors: The Key to Process Improvement

You cannot re-invent the wheel. How many times have you heard that one? In today’s manufacturing industry that old adage rings true. And, even if you could re-invent it, there just isn’t time to re-invent data collection and data analysis for every new roadblock; efficient manufacturing relies on a quick turnaround from data collection to data analysis to process improvement.
 
As we’ve all seen, the continuous improvement cycle (to which I alluded in the last blog) is expected to be repeated time and time again, each time faster and more efficiently than the last. It’s a fever pitch, and you’ve got to keep up…or get left behind.

DMAIC

Because of this expectation, it’s important that we quality professionals work to streamline data collection and data analysis, so we don’t become the “rate-limiting step” in our companies’ DMAIC cycles [Remember this acronym? Six Sigma’s “Define, Measure, Analyze, Improve, Control.” Ring a bell?].
 
In all the years I’ve been in manufacturing, I’ve never had “extra” time to perform data cleansing while the rest of the team sits idle, waiting to complete a root cause analysis or process improvement.
 
On the contrary, the “analyze” phase of a DMAIC process is often the most frustrating for action-oriented manufacturing professionals. Where “define,” “measure,” “improve,” and “control” are active, collaborative, and decisive routines, “analyze” is more passive, and often relies on one person or a small team working with data sets in software such as Minitab or Microsoft Excel. 
 

Data Errors are the Worst…and So Avoidable

If you’ve never had the experience of having your plant manager sitting in your office, impatiently waiting while you analyze a set of data, consider yourself lucky! Nerve-wracking doesn’t begin to describe it. If you have had that experience, then you know that the last thing you want to do in that situation is add extra time by manually sorting and cleansing the data you’re working with.
 
Another situation I’ve seen, and hopefully one you’ve not had the pleasure of experiencing, is pulling up an automated report in a meeting and finding that the data entered since the last time you pulled it up is clearly erroneous.
 
I’ve seen horribly skewed graphs, auto-scaled y-axes so large that your spec range seems like it’s only a few nanometers wide on the screen, and sometimes graphs that are just blank, leading to equally blank stares from your coworkers (and upper management). That’s embarrassing stuff.
 
All of these scenarios are easily preventable, and with just a few easy tips and tricks you can go from data goat to data guru.
 

We Need a Hero

When thinking about data entry errors and the resulting delays and embarrassment they can bring, a couple of scenarios come to mind.
 
The first type of data error I want to relate to you has to do with recorded data that has to be weight-corrected, like a metric measuring defects per pound. In popcorn production, the dreaded “un-popped kernel” is a defect to be avoided. 
When a defect is either plentiful or rare, the sample size won’t necessarily match up with the units in which the defect is reported. For instance, those un-popped kernels are quite rare, so a one-pound sample of popcorn is statistically unlikely to contain any.
 
As a result, five pounds of popcorn must be sampled to get a statistically valid measurement. The resulting count is then divided by five to report the result in units of defects-per-pound. 
 
The opposite situation is a count that is plentiful but time-consuming. Inclusions in cereal, such as nuts or raisins, need to be counted, but that takes a long time. Counting the raisins in a pound of raisin bran could be a full-time job, so a smaller sample is taken and the count is multiplied up to a raisins-per-pound value.
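 
To make the arithmetic concrete, here is a minimal sketch in Python (purely illustrative; the function name, counts, and sample weights are made up, and this is not InfinityQS code). Both the popcorn and the raisin bran cases reduce to dividing the raw count by the sample weight in pounds:

# Illustrative only: normalize a raw defect or inclusion count to a per-pound value.
def per_pound(raw_count, sample_weight_lb):
    return raw_count / sample_weight_lb

# Rare defect: 3 un-popped kernels found in a 5 lb popcorn sample -> 0.6 per pound
print(per_pound(3, 5.0))

# Plentiful count: 42 raisins counted in a quarter-pound sample -> 168 per pound
print(per_pound(42, 0.25))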
 
When a test result has a simple calculation as part of the test, operators can sometimes do the math in their head, rather than relying on the calculation features built into quality management software. But if the software then applies the same correction factor to a value the operator has already corrected, the factor gets applied twice, producing values that not only don’t make sense but can lead to erroneous data analysis and incorrect process improvement decisions.
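 
Here is what that doubling looks like in a hedged sketch, assuming a hypothetical data entry configuration that expects the raw count and applies the correction on its own (illustrative Python, not ProFicient code):

SAMPLE_WEIGHT_LB = 5.0

# Hypothetical: the data entry configuration expects the RAW count
# and applies the sample-weight correction itself.
def software_corrected(entered_value):
    return entered_value / SAMPLE_WEIGHT_LB

raw_count = 3  # un-popped kernels found in a 5 lb sample
print(software_corrected(raw_count))                     # 0.6 defects per pound (correct)
print(software_corrected(raw_count / SAMPLE_WEIGHT_LB))  # operator pre-divided: 0.12, corrected twice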
 
In another situation, a simple mix-up between metric and standard units can cause more than just data analysis errors. When coupled with automated reports, I’ve seen this sort of mix-up generate Certificates of Analysis that show product as out of spec. Granted, it’s not as critical as when NASA crashed a Mars probe by mixing up their units of measure, but I wouldn’t use that as an excuse when your boss asks you to explain why a customer thinks they received out-of-spec product!
 

Error-Proof, Don’t Data Cleanse

In all of these situations where time was of the essence, the delays were due to a reliance on data cleansing instead of error-proofing. Known by a variety of names, the concept of mistake-proofing has been around for a long time. Mistake-proofing aims to first prevent defects from occurring, and if not possible, to detect defects upon occurrence.
 
In 1960s Japan, Shigeo Shingo applied the name “poka yoke,” which translates almost literally as “mistake-proofing,” to quality assurance practices. Frustrated by operators forgetting to install a spring into the small electrical components they were building, Shingo realized that if he modified the workflow, the lack of the spring would become immediately apparent to the operators, resulting in a drop in defective switches.
 
The same principle of a physical barrier to mistakes—or an automatically triggered warning notification—has become pervasive in manufacturing. From the two-handed operation of a machine press that prevents injury to jigs designed to fit in only one orientation, error-proofing shows up throughout factories across the world.
 
From a quality perspective, poka yoke supports a reduction of defects and a transition from quality control’s reliance on final inspection to quality assurance’s built-in quality and “zero defects” mentality.
 
With the help of electronic sensors, PLCs, and computer software, the principles of poka yoke continue to expand in their applications throughout the manufacturing industry.
 

Error-Proofing with InfinityQS Quality Management Solutions

InfinityQS software solutions make it easy to error-proof the data entry process. In ProFicient, from the Specification Limits window, you can input reasonable limits for your data collection. The software will even suggest reasonable limit values for your data, but you’re free to manually input your own. After that, check the boxes to trigger when a violation occurs, and you’re all set!
Specification Limits 
Now, whenever an operator enters data for that part/test combination, the system automatically confirms that the data is within the reasonable limits. Any data outside those limits brings up a warning screen, asking the operator to confirm or correct the suspect data.
Spec Limit Violation 
With reasonable limits enabled, your data is now much more reliable. You no longer have to struggle with charts that attempt to auto-scale to fat-fingered results. You will no longer be spending your time checking your data line by line, looking for that one entry of 1,250 grams in a long row of 1.250-gram results.
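 
Conceptually, a reasonable limit is just a band of believable values checked at the moment of entry. The sketch below shows the idea in illustrative Python (the limits, names, and workflow are hypothetical, not ProFicient’s API; the real software pops a warning screen instead of printing a message):

# Hypothetical reasonable limits for a nominal 1.250 g measurement.
REASONABLE_LOW_G = 1.0
REASONABLE_HIGH_G = 1.5

def passes_reasonable_limits(value_g):
    return REASONABLE_LOW_G <= value_g <= REASONABLE_HIGH_G

for entry in [1.248, 1.251, 1250.0, 1.249]:
    if not passes_reasonable_limits(entry):
        # The fat-fingered 1250.0 g entry is caught at entry time,
        # before it can wreck a chart or a report.
        print(f"Suspect entry: {entry} g -- confirm or correct before saving")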
 
Just like Shingo’s spring insertion solution, you’re shifting your quality efforts further and further upstream in the process, catching and correcting defects in real time, not with added labor after the fact.
 
Another nice feature of reasonable limits and error-proofing your data entry is that it’s a win-win for you and your operators. Whereas sometimes finesse is required to get operators on board with an idea, I suspect they will quickly agree to have this feature enabled, because it serves as an easy double-check on their data entry. With reasonable limits, there’s only upside!
 
Professional embarrassment and duplication of effort are both powerful motivators for process improvement. Because of this, the decision to enable reasonable limit detection on our processes was truly a no-brainer.

ProFicient Makes Error-Proofing Easy

In another instance of error-proofing, discussions of our manufacturing processes made it clear that we were not interested in removing the automatic calculation features from our data entry configurations. Sure, the math was easy, but every calculation handed to a human instead of a computer is just another place for data errors to occur.
 
Rather than solve the problem by removing error-proofing, we doubled down, utilizing both the calculation and reasonable limit features in our ProFicient implementation. 
 
After that, when an over-achieving operator did the math in their head or found a stray calculator in the lab (removing calculators: another example of error-proofing!), they were greeted with a bright yellow warning on the data entry screen, asking them to confirm or correct the value.
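 
In concept, the combination behaves like the following sketch, assuming hypothetical limits and names (illustrative Python, not ProFicient’s implementation): the software applies the correction factor itself and evaluates the reasonable limits against the corrected result, so a pre-divided entry lands far outside the band and is flagged immediately.

SAMPLE_WEIGHT_LB = 5.0
REASONABLE_LOW = 0.2   # defects per pound (hypothetical limits)
REASONABLE_HIGH = 2.0

def needs_confirmation(entered_count):
    # The software does the math; the operator only enters the raw count.
    corrected = entered_count / SAMPLE_WEIGHT_LB
    return not (REASONABLE_LOW <= corrected <= REASONABLE_HIGH)

print(needs_confirmation(3))    # raw count entered: 0.6 defects/lb -> False, accepted
print(needs_confirmation(0.6))  # operator already divided: 0.12 defects/lb -> True, flagged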
 
I think it’s clear that ProFicient quality management software can make error-proofing much easier to implement in your quality program.
 
To see details of the ProFicient software in action, please check out the Tales from the Trenches video series here.
 

Feel free to check out the other blogs in this series.
Take advantage of the technology at your fingertips today: contact one of our account managers (1.800.772.7978 or via our website) for more information.
 
