Ian Farrell Series

Reasonable Limits & Data Entry Errors

Part 3 of 5

Videos in the Ian Farrell Series

Episode 1 - Reducing Customer Complaints (Watch Time: 9:00)
Episode 2 - Assignable Cause & Corrective Action – Good Data In, Good Data Out (Watch Time: 10:56)
Episode 3 - Reasonable Limits & Data Entry Errors (Watch Time: 11:29) – Now Playing
Episode 4 - Overfill & Product Giveaway (Watch Time: 11:20)
Episode 5 - 5S Your SPC Projects (Watch Time: 13:40)

 
Video Transcript:

SPC Quality Program: Reasonable Limits and Data Entry Errors - Episode 3

Welcome to the InfinityQS “Tales from the Trenches.”  In this video series we present real-life quality professionals discussing how they solved important quality and process problems at their facilities. These quality professionals get into the details and show how they leveraged InfinityQS software to solve the problem, including how they identified the root cause.

Accurate and timely data is fundamental to a successful quality program. In today’s episode, quality manager Ian Farrell will discuss methods he has used to improve data collection efficiency, accuracy, and data analysis results.

We hope you enjoy the video.

Hello, I’m Ian Farrell, and for the last 18 years I have worked as a quality manager in the food and manufacturing industries.

In this “Tales from the Trenches” episode, I will show how I improved existing data collection projects to reduce data entry errors, yielding better data analysis, improved efficiency, and ultimately a higher-quality product for my consumers. I’ll also show how putting in a little bit of front-end work to prevent data entry errors can save you exponentially more work on the back end performing manual data cleansing.

Finally, I will show you how I used InfinityQS SPC tools to collect accurate data, improving turnaround time on product quality and process efficiency decisions. Knowing the right data to collect and the right way to analyze it is critical. Without a focused, intentional data collection process, you can end up with a lot of data but nothing very useful. Because of this, a lot of effort goes into building data collection and analysis systems.

In manufacturing today there isn’t time to re-invent data collection and data analysis for every new roadblock; efficient manufacturing relies on a quick turnaround from data collection to data analysis, to process improvement. 

As we’ve all seen, the expectation is that this continuous improvement cycle be repeated time and time again, each time faster and more efficiently than the last. Because of this expectation, it’s important that we, as quality professionals, work to streamline our data collection and data analysis so we don’t become the rate-limiting step in our companies’ DMAIC cycles.

In all the years I’ve been in manufacturing, I’ve never had ‘extra’ time to perform data cleansing while the team waits for me to complete a root cause analysis or process improvement.

On the contrary, the ‘analyze’ phase of a Six Sigma DMAIC process is often the most frustrating for action-oriented manufacturing professionals.

Where ‘define,’ ‘measure,’ ‘improve,’ and ‘control’ are active, collaborative, and decisive routines, ‘analyze’ is more passive, and often relies on one person or a small team working with data sets in software such as Minitab or Excel. 

If you’ve never had the experience of having your plant manager sitting in your office, impatiently waiting while you analyze a set of data, consider yourself lucky! If you have had that experience, you know the last thing you want to do in that situation is add extra time by manually sorting and cleansing the data you’re working with.  Especially with an audience.

Another situation I’ve seen, and hopefully one you’ve not had the pleasure of experiencing, is pulling up an automated report in a meeting, only to find that the data entered since the last time you pulled it up is clearly erroneous.

I’ve seen horribly skewed graphs, auto-scaled y-axes so large that your spec range seems like it’s only a few nanometers wide on the screen, and sometimes graphs that are just blank, leading to equally blank stares from my coworkers.

All of these scenarios are easily preventable, and with just a few easy tips and tricks you can go from data goat to data guru.

When thinking about data entry errors and the resulting delays and embarrassment they can bring, a couple of scenarios come to mind.

The first type of data error I want to relate to you has to do with recorded data that has to be weight-corrected, like measuring defects per pound.

When a defect is either plentiful or rare, the sample size won’t necessarily match up with the units the defect is reported in. For instance, un-popped kernels are quite rare, so a one-pound sample of popcorn is statistically unlikely to contain any un-popped kernels. As a result, 5 pounds of popcorn must be sampled to get a statistically valid measurement. The resulting count is then divided by 5 to report the result in units of defects-per-pound.

Another way to think about the problem is a count that is plentiful but time-consuming. Inclusions in cereal, such as nuts or raisins, need to be counted, but the counting takes a long time. Counting the raisins in a pound of raisin bran could be a full-time job, so a smaller sample is taken and the count is multiplied up to get a raisins-per-pound value.

When a test result has a simple calculation as part of the test, operators can often do the math in their head rather than relying on the calculation features built into software such as ProFicient or Enact. When the data is entered, this can result in the correction factor being applied twice, producing values that not only don’t make sense but can also lead to erroneous data analysis and incorrect process improvement decisions.
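To make the arithmetic concrete, here is a minimal sketch in plain Python (not ProFicient or Enact code) with invented counts and sample weights, showing the per-pound correction and the distortion you get when that correction is applied twice:

```python
# Illustrative only: per-pound weight correction, and what happens when an
# operator pre-corrects a value that the software then corrects again.

def defects_per_pound(raw_count: float, sample_weight_lb: float) -> float:
    """Normalize a raw defect count to defects per pound."""
    return raw_count / sample_weight_lb

# Un-popped kernels counted in a 5 lb popcorn sample (numbers are invented).
raw_count = 12
correct = defects_per_pound(raw_count, 5.0)                 # 2.4 defects/lb

# Operator divides by 5 in their head, enters 2.4, and the software divides
# by 5 again -- the reported value is now off by a factor of 5.
double_corrected = defects_per_pound(raw_count / 5.0, 5.0)  # 0.48 defects/lb

# Plentiful-but-slow counts work the same way in the other direction:
# raisins counted in a quarter-pound cereal sample, scaled up to per pound.
raisins_per_lb = defects_per_pound(30, 0.25)                # 120 raisins/lb
```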

In another situation, a simple mix-up between metric and standard units doesn’t just cause data analysis errors; when coupled with automated reports, I’ve seen it generate Certificates of Analysis that show good product as being out of spec. It’s not as critical as when NASA crashed a Mars probe by mixing up units of measure, but I wouldn’t reach for that comparison when your boss asks you to explain why a customer received ‘out-of-spec’ product!
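Here is an equally minimal sketch, again in plain Python and with an invented net-weight spec, of why a value recorded in ounces looks hopelessly out of spec when the field expects grams:

```python
# Illustrative only: a metric/standard mix-up against a hypothetical spec.
GRAMS_PER_OUNCE = 28.3495

lsl, usl = 448.0, 458.0              # invented net-weight spec, in grams

measured_oz = 16.0                   # operator reads 16.0 oz off the scale...
entered_as_is = measured_oz          # ...and types it into a field expecting grams
converted_g = measured_oz * GRAMS_PER_OUNCE   # 453.6 g

print(lsl <= entered_as_is <= usl)   # False: "16.0 g" looks hopelessly light
print(lsl <= converted_g <= usl)     # True: the product was in spec all along
```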

In all of these situations, where time was of the essence, the delays came from a reliance on after-the-fact data cleansing instead of proactive error-proofing.

Known by a variety of names, the concept of mistake-proofing has been around for a long time. Mistake-proofing aims first to prevent defects from occurring and, where prevention isn’t possible, to detect defects as soon as they occur.

In 1960s Japan, Shigeo Shingo applied the name poka-yoke to quality assurance. Translated literally, it became known in the English-speaking world as mistake-proofing.

Frustrated by operators forgetting to install a spring into the small electrical components they were building, Shingo realized that if he modified the workflow, the lack of the spring would become immediately apparent to the operators, resulting in a drop in defective switches.

The same principle of a physical barrier to mistakes or an automatically triggered warning has become pervasive in manufacturing. From two-handed machine press controls that keep hands clear of the press to jigs designed to fit in only one orientation, error-proofing shows up in factories across the world.

From a quality perspective, error-proofing supports a reduction of defects and a transition from quality control’s reliance on final inspection to quality assurance’s built-in quality program and zero-defects mentality.

With the help of electrical sensors, PLCs, and computer software, the principles of poka-yoke continue to expand in their applications within the manufacturing sphere.

InfinityQS software solutions make it easy to error-proof the data entry process. In ProFicient, from the Specification window, you can add reasonable limits to your data collection. The software will even suggest reasonable limit values for your data, but you’re free to enter your own values manually. After that, check the boxes to trigger an alarm upon violation and you’re all set!

Now, whenever an operator enters data for that part/test combination, the system automatically confirms that the data is within the reasonable limits.  Any data outside those limits brings up a warning screen, asking the operator to confirm or correct the suspect data.

With reasonable limits enabled, your data is now much more reliable.  Gone are the struggles with charts attempting to auto-scale to fat-fingered results.  No longer will you be spending your time checking your data line-by-line looking for that one entry of 1,250 grams in a long row of 1.250-gram results.
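The logic behind that gate is simple. Here is a minimal sketch in plain Python (not the ProFicient feature itself, and the limits are invented) showing how a reasonable-limits check catches that 1,250-gram entry before it ever reaches a chart:

```python
# Illustrative only: a reasonable-limits gate that holds suspect entries
# for confirmation instead of letting them flow into the data set.

def check_reasonable_limits(value: float, low: float, high: float) -> str:
    """Return 'accept' for in-range data, 'confirm' for suspect data."""
    return "accept" if low <= value <= high else "confirm"

reasonable_low, reasonable_high = 1.0, 1.5    # invented fill-weight range, in grams

for entry in [1.248, 1.252, 1250.0, 1.249]:   # 1250.0 is the fat-fingered value
    if check_reasonable_limits(entry, reasonable_low, reasonable_high) == "confirm":
        print(f"Warning: {entry} g is outside {reasonable_low}-{reasonable_high} g; "
              "please confirm or correct this entry.")
```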

Just like Shingo’s spring insertion solution, you’re shifting your quality control efforts further and further upstream in the process, catching and correcting defects in real time rather than with added labor after the fact.

Another nice feature of reasonable limits and error-proofing your data entry is that it’s a win-win for you and your operators.  Whereas some of the other topics we’ve presented in “Tales from the Trenches” require finesse to get operators on board, I suspect you’ll find operators will be happy to have this feature enabled, serving as an easy double-check on their data entry.  With reasonable limits, there’s only upside!

Professional embarrassment and duplication of effort are both powerful motivators for process improvement.  Because of this, the decision to enable reasonable limit detection on our processes was truly a no-brainer.

Remember the calculated test results for un-popped popcorn and the operators’ tendency to perform the calculation in their heads? The automatic calculation is itself an instance of error-proofing, so we were not interested in removing those features from our data entry configurations.

Sure, the math was easy, but every calculation handed to an operator instead of a computer is just another place for data errors to occur. Rather than solve the problem by removing error-proofing, we doubled down, utilizing both the calculation and reasonable limit features in our ProFicient implementation.

After that, when an over-achieving operator did the math in their head, or found a stray calculator in the lab (removing calculators: another example of error-proofing!), they were greeted with a bright yellow warning on their data entry screen, asking them to confirm or correct the value.
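Layered together, the two features behave something like this minimal Python sketch (again illustrative only, with invented sample weights and limits):

```python
# Illustrative only: automatic calculation layered on a reasonable-limits check,
# so a value the operator already corrected by hand gets flagged.

SAMPLE_WEIGHT_LB = 5.0
REASONABLE_LOW, REASONABLE_HIGH = 1.0, 6.0   # invented defects/lb range

def defects_per_pound(entered_count: float) -> float:
    """The software applies the weight correction automatically."""
    return entered_count / SAMPLE_WEIGHT_LB

def needs_confirmation(value: float) -> bool:
    return not (REASONABLE_LOW <= value <= REASONABLE_HIGH)

# Operator enters the raw count; the software does the division.
print(needs_confirmation(defects_per_pound(12)))    # 2.4 defects/lb -> False

# Over-achiever enters a pre-divided number (12 / 5 = 2.4) instead.
print(needs_confirmation(defects_per_pound(2.4)))   # 0.48 defects/lb -> True, warn
```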

By utilizing all the layers of error-proofing ProFicient has to offer, we saw improvement from data entry through to data analysis and into process improvement.

We hope this episode of “Tales from the Trenches” has shown you ways to improve data collection when using InfinityQS software at your company.

InfinityQS is always there to help. We have application engineers ready to assist you in getting the most value out of your InfinityQS implementation. Reach out to your sales representative for information and pricing.

Thank you for watching and check back soon for our next episode of “Tales from the Trenches.”

Take the first step from quality to excellence