Assignable Cause & Corrective Action – Good Data In, Good Data Out

Part 2 of 5

Videos in the Ian Farrell Series

Episode 1 - Reducing Customer Complaints (Watch Time: 9:00)
Episode 2 - Assignable Cause & Corrective Action – Good Data In, Good Data Out (Watch Time: 10:56) (Now Playing)
Episode 3 - Reasonable Limits & Data Entry Errors (Watch Time: 11:29)
Episode 4 - Overfill & Product Giveaway (Watch Time: 11:20)
Episode 5 - 5S Your SPC Projects (Watch Time: 13:40)

 
Video Transcript:

Quality Assurance Skills - Assignable Cause and Corrective Action - Episode 2

Welcome to the InfinityQS “Tales from the Trenches.”  In this video series we present real-life quality professionals discussing how they solved important quality and process problems at their facilities. These quality professionals get into the details and show how they leveraged InfinityQS software to solve the problem, including how they identified the root cause.

Every manufacturer wants to improve their processes to get better results. In today’s episode, quality manager Ian Farrell will discuss how he used Assignable Cause and Corrective Action Codes and other tools found in our InfinityQS software to improve processes and to reduce defects and rework.

We hope you enjoy the video.

Hello, I am Ian Farrell, and for the last 18 years I have worked as a quality manager in the food and manufacturing industries.

In this “Tales from the Trenches” episode, I will show how I have used Assignable Cause and Corrective Action Codes to collect useful information for process improvement.

Today I want to talk about how I’ve used Assignable Cause. I will also review Pareto Charts and how I used them to focus our continuous improvement projects to gain the most value from our work. Finally, I will show how I used InfinityQS SPC software as a tool to reduce defects and rework, lowering my product costs.

Relevant, accurate data is the basis for a solid quality assurance system. That data provides the knowledge needed to recognize areas for improvement and quantify the success of your organization’s continuous improvement activities. The flip side is the old adage: Garbage In, Garbage Out.

So, when given an opportunity to make meaningful process improvements that drive customer satisfaction and cost efficiency, nobody wants to invest effort in a project that, at the end of the day, cannot prove it was worth the expense.

I know first-hand how frustrating it can be to make improvements when the baseline and improved state data cannot be evaluated in a meaningful way.

This is something I have seen time and time again in the food industry. But for every unclear result, there are many examples of successful projects that used existing data collection methods to prove that improvements did occur.

Take, for example, a problem I encountered with heat sealing equipment and the data I was able to collect using InfinityQS ProFicient software.

We have all seen consumer goods in plastic containers with a foil lid. Picture yogurt, sour cream, or even a can of potato chips. They all rely on similar technology, using heat, time, and pressure to affix the foil lid to the container, preserving the freshness of the product until it reaches the consumer.

When the sealing equipment is working well, things work great, but in my experience, these are finicky machines that don’t handle variation well at all.  Whether it’s a change in foil lots, an unplanned downtime event, or any one of dozens of other variables, something is always causing poor seals, requiring operator and maintenance resources to bring the system back into control.

Besides the frustrations at the operator level, poor seals lead to rework, scrapped packaging materials, and of course, customer complaints. In my case the customer complaints were often about seals that were too strong, sometimes impossible to open without damaging the container.

Something had to be done, but where to start?  Each operator and mechanic had their own theory about why the machine was breaking down and their own unique way to correct the issue.  Because of this, there was no consensus on how to improve the process.

What I needed was some collated, curated data. What I needed was a good Pareto chart to guide me.

When trying to find the root cause of a problem, or when selecting the appropriate corrective action, a Pareto chart is an indispensable tool.

Applied to quality assurance by Dr. Joseph Juran, the Pareto principle is named in honor of the Italian economist Vilfredo Pareto, who observed that roughly 80% of the land in late-1800s Italy was owned by approximately 20% of the population.

This idea of an “80/20” rule has become commonplace and highlights that 80% of outcomes result from only 20% of the potential causes.

The Pareto chart is simply a combination of a bar chart and a line chart, but don’t let the simplicity fool you, it’s a powerful tool.

Pareto Chart

Whereas many bar charts are sorted by time, date, alphabetical order, or some other categorization, a Pareto chart’s bars are ordered by their value, with the tallest bars at the left (or top) and decreasing toward the right (or bottom). This order lets you quickly see which categories make up the ‘vital few’ in whatever you’re measuring.

The line graph helps to highlight how ‘vital’ those few are by acting as a running total for the chart. In a typical Pareto chart, where the 80/20 rule is evident, the line takes on a logarithmic or asymptotic shape, rising quickly and then leveling off as it nears 100%. This again reinforces that most of the ‘activity’ in the chart is typically in the first few categories.

When your data is plotted on a Pareto chart, your biggest bang for your buck happens when you address the issues in those tallest bars first.
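The mechanics described above can be sketched in a few lines of Python. The cause categories and counts below are invented for illustration; the point is the sort-by-count ordering of the bars and the cumulative-percentage running total that forms the line graph.

```python
from collections import Counter
from itertools import accumulate

# Hypothetical tallies of failed-seal assignable causes (illustrative only)
causes = Counter({
    "Seal temperature set too high": 42,
    "Foil lot change": 11,
    "Unplanned downtime restart": 7,
    "Film misalignment": 3,
    "Other": 2,
})

# Sort categories by count: tallest bar first, as in a Pareto chart
ordered = causes.most_common()
total = sum(causes.values())

# Running (cumulative) totals -- the Pareto chart's line graph
cum = list(accumulate(count for _, count in ordered))
for (name, count), running in zip(ordered, cum):
    print(f"{name:32s} {count:3d}  {100 * running / total:5.1f}%")
```

With these invented numbers, the first two categories alone account for more than 80% of the failures, which is exactly the ‘vital few’ pattern the chart is designed to expose.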

Inspecting product packaging seals is not something that, at first glance, seems like a good fit for SPC tracking. It is a pass/fail inspection, and because every package was inspected, end-user complaints were rare.

The data we gathered could alert us to a potential problem, but nothing more. Notes generated by operators were sporadic and unstructured, making them nearly useless. A Pareto chart of that data was of no value.

However, we had the ability to improve the structure of operator notes by using the Assignable Cause Code and Corrective Action Code functionality of ProFicient.  Leveraging the rules and structure of those components yielded data that would have made Vilfredo Pareto and Joseph Juran proud.  More importantly, it yielded the type of data that could be used to quantify the improvements we were making to the sealing process, confirming that our focus on operator adjustments was the key to improving the sealing process.

To get the data I needed, I first interviewed operators and mechanics to see what they thought were the root causes and best solutions to each seal problem. This was the most time-intensive part of the process, since it was critical to interview a majority of the affected employees to capture a full set of scenarios.

Using that data, I created a set of Assignable Cause Codes and Corrective Action Codes.  Once created, and communicated to the team, I enabled the alarm notification rules to require an Assignable Cause and Corrective Action for each failed seal inspection.

Once this was complete, I was rewarded for my efforts with a stream of useful, quantitative data. When a baseline of data was established, we set to work identifying the 20% of problems that were causing 80% of our defects.

By using the Assignable Cause/Corrective Action feature in ProFicient, I was able to make it quick and easy for operators to enter their codes. Operators appreciated both the lack of extra documentation and the knowledge that they were actively participating in a project that would improve their work by reducing rework and improving customer satisfaction.

One thing we kept in mind while implementing our assignable cause and corrective action codes was operator satisfaction. I had to be mindful that these codes not be seen as adding barriers or applying restrictions to operators’ work. It’s important that operators be respected as the process experts they are, but it’s also important to have some guardrails on the data they record. Even professional bowlers use lanes with gutters, right?

The key to balancing these opposing forces is a high level of operator input and two-way communication around the implementation. Everyone needs to know the goals, the timing, and the purpose of the work. Even more than that, operators need to see their inputs showing up in the final product. Taking all of those items into consideration led to a successful implementation of the assignable cause and corrective action code functionality.

In the end, our root cause was the idea that if a strong product seal is good, then a VERY strong product seal must be better. However, our seal inspection process focused on identifying poor seals but wasn’t very good at finding seals that were too strong and difficult to open, causing customer dissatisfaction.

Using the data, we quickly found that the most common corrective action was to increase seal temperature, which was driving the over-sealing condition we were seeing. Backed by this data, we implemented stricter controls on seal temperature adjustments and implemented a training campaign focused on educating operators and mechanics on the customer impacts of seal quality and seal temperature.

There’s real power in this feature of the software. By funneling operator responses into a pre-set list of options, your data becomes much more useful. Rather than 40 categories with one or two occurrences each, structured assignable causes and corrective actions yield a few categories with many responses apiece. Often, when you’re looking at 40 categories, you’re not actually seeing 40 different root causes; you’re seeing something more akin to 40 eyewitness accounts of a single incident. Harnessing the ProFicient software, you can convert that scatter-shot data into focused data that has value and purpose.

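To make the “40 eyewitness accounts” idea concrete, here is a minimal Python sketch of the same funneling principle. This is not ProFicient’s actual mechanism; the operator notes, keywords, and code names below are all hypothetical.

```python
from collections import Counter

# Illustrative sketch only -- not ProFicient's actual mechanism.
# Free-text operator notes: many wordings, but mostly one underlying action.
raw_notes = [
    "turned heat up a bit",
    "bumped sealer temp",
    "raised the temperature setpoint",
    "cleaned seal head",
    "increased dwell time",
]

# A hypothetical pre-set list of corrective action codes, keyed by keyword.
CODE_MAP = {
    "heat": "CA-01 Increase seal temperature",
    "temp": "CA-01 Increase seal temperature",  # also matches "temperature"
    "clean": "CA-02 Clean seal head",
    "dwell": "CA-03 Adjust dwell time",
}

def to_code(note: str) -> str:
    """Funnel a free-text note into the first matching pre-set code."""
    lowered = note.lower()
    for keyword, code in CODE_MAP.items():
        if keyword in lowered:
            return code
    return "CA-99 Other"

# Five differently-worded notes collapse into three structured categories.
tally = Counter(to_code(n) for n in raw_notes)
```

Five scattered notes become three countable categories, and the dominant one (increasing seal temperature) is immediately visible, which is exactly what a Pareto chart needs as input.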
After all this work, we went back to our Assignable Cause and Corrective Action Code data, again leveraging a Pareto Chart, and were able to quantify the improvements we had implemented.

Using the built-in data collection and analysis tools of the InfinityQS software, we turned the qualitative into the quantitative and isolated the root cause of our problems.

We hope this episode of “Tales from the Trenches” has opened your eyes to the many tools in InfinityQS software that can help you improve the manufacturing processes at your facility.

InfinityQS is always there to help. We have application engineers ready to assist you in getting the most value out of your InfinityQS implementation. Reach out to your sales representative for information and pricing.

Thank you for watching and check back soon for our next episode of “Tales from the Trenches.”
 
