Defects - The Story Continues

By InfinityQS Blog | April 16, 2012

In our first installment on collecting attribute data, we went over how to set up a ProFicient project to collect defectives-type data in a single data entry configuration. Our attribute data was separated into three categories: critical, major, and minor. Because we created the tests as defectives, finding any item on the defect code list meant our part was considered defective. In this installment we will continue to collect attribute data in the same three categories, but this time we will treat the data as counts of items, i.e., defects. We will once again use a subgroup size of 50, with different limits attached, and we will want ProFicient to alarm and send emails when those limits are exceeded.

Let’s start by discussing the difference between the two types of attributes: defectives and defects. We use defectives when we are trying to determine whether an item is good or bad, passes or fails, or meets some other either/or condition. We use defects, on the other hand, to capture the count of conditions found on an item, such as spots, contamination, or a smudged logo. The presence of a single defect does not necessarily make the item defective.
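To make the distinction concrete, here is a minimal sketch in Python (not ProFicient - the class and field names are hypothetical) showing that a part can carry several defects yet still not be defective, as long as no defect count exceeds its allowance:

```python
from dataclasses import dataclass, field

@dataclass
class InspectedPart:
    """One part from a visual inspection: a count per defect code."""
    defects: dict = field(default_factory=dict)  # defect code -> count found

    @property
    def total_defects(self) -> int:
        """Defects are simply counted."""
        return sum(self.defects.values())

    def is_defective(self, allowances: dict) -> bool:
        """A part becomes defective only when some code exceeds its allowance."""
        return any(count > allowances.get(code, 0)
                   for code, count in self.defects.items())

part = InspectedPart(defects={"smudged logo": 2, "spots": 1})
allowances = {"smudged logo": 10, "spots": 10}  # minor codes allow up to 10

print(part.total_defects)          # 3 defects counted...
print(part.is_defective(allowances))  # ...but the part is not defective: False
```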

Since we are collecting defects with a fixed sample size of 50 pieces per sample, we will utilize a C chart and a Pareto chart to capture the results. If the sample size were not fixed, we would utilize a U chart instead, because it plots the data as a proportion (defects per unit), negating the effect of differing sample sizes. We also have multiple limits for the subgroup data: one limit on the maximum count of any single defect code on a piece, and a second limit on the total number of defects of a given type across the subgroup.
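The C chart and U chart formulas behind this choice are standard SPC (ProFicient computes them for you); a quick Python sketch makes the difference visible - the C chart has one set of limits, while the U chart's limits vary with each subgroup's size:

```python
import math

def c_chart_limits(counts):
    """C chart: raw defect counts from fixed-size subgroups.
    Center line is the average count; limits are center +/- 3*sqrt(center)."""
    c_bar = sum(counts) / len(counts)
    half_width = 3 * math.sqrt(c_bar)
    return c_bar, max(0.0, c_bar - half_width), c_bar + half_width

def u_chart_limits(counts, sizes):
    """U chart: defects per unit for varying subgroup sizes.
    The center line is shared, but each subgroup gets its own limits."""
    u_bar = sum(counts) / sum(sizes)
    limits = []
    for n in sizes:
        half_width = 3 * math.sqrt(u_bar / n)
        limits.append((u_bar, max(0.0, u_bar - half_width), u_bar + half_width))
    return limits

# Fixed samples of 50: one set of C chart limits covers every subgroup.
center, lcl, ucl = c_chart_limits([4, 6, 5, 5])
print(center)  # 5.0

# Varying sample sizes: the U chart limits differ per subgroup.
print(u_chart_limits([10, 12], [50, 60]))
```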

We will create a data entry configuration with the three types of visual results: Minor, Major, and Critical. Each test will point to its own defect list. The piece specification limits are set to allow any given part to have up to 10 minor defects, up to 5 major defects, or 0 critical defects. For this data entry configuration, the order of the tests on the list is important because of those piece specification limits. If the critical defect test were the first test on the list, with an upper specification limit of 0 and set to alarm, entering any value would cause the warning message to display to the user. While we could simply disable the warning message on the Basic Options tab of the data entry configuration, another way to avoid the warning is to make the minor visual test - the test with the highest upper specification limit - the first test on the list. With this approach, the user only gets the warning message if the value entered for a given defect exceeds the piece upper specification limit for the minor defect test, i.e., 11 or more defects. As with the defectives example, we will also enable Option B1 - Combine Multiple Code Lists - on the Advanced Options tab.

Next, we need to define when the alarms will happen. There are two definitions used to define the alarms: one for the individual count of a given defect code and one for the total number of defects of a given type. For the individual counts we will utilize the piece upper specification limit, which defines the number of times a specific defect code can appear. For our example we will use 10 as the USL for the minor defect test, 5 as the USL for the major defect test, and 0 as the USL for the critical defect test.

The second limit is set up as the total for the defect type, a subgroup limit, and is defined as the maximum value allowed for the subgroup as a percentage of the sample size. To calculate the maximum percentage, we divide the maximum allowable defects per part by the sample size. For this example we divide the USL for each defect type by the sample size of 50, giving upper subgroup limits of 0.2 for minor defects, 0.1 for major defects, and 0.0 for critical defects. While we used the piece USL for this example, the maximum value could be different from the USL if we are willing to allow more defects in total within a given defect group.
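The arithmetic above is just USL divided by sample size; a two-line Python check confirms the subgroup limits (the dictionary names are illustrative, not ProFicient settings):

```python
sample_size = 50
piece_usl = {"minor": 10, "major": 5, "critical": 0}

# Each subgroup limit is the piece USL expressed as a fraction of the sample.
subgroup_limit = {t: usl / sample_size for t, usl in piece_usl.items()}
print(subgroup_limit)  # {'minor': 0.2, 'major': 0.1, 'critical': 0.0}
```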

When we enter the results into the project, we will input the number of defects found along with the appropriate defect code from the defect code list. When the defect code list opens, the user can choose from all three defect groups, and each code will appear on the list with its group name in parentheses. The user enters how many of each type of defect they find as a result of their visual inspection. If the entries exceed either the maximum count for a given defect code - 10 for minor defects, 5 for major defects, or a single critical defect - or the maximum percentage of allowable defects - 0.2 for minor defects, 0.1 for major defects, or a single critical defect - ProFicient will create an alarm and an event for the subgroup. Once the data is collected, the results will be displayed using a C chart, which will tell us the average number of defects found in a subgroup. If there are either too many instances of one defect code (the USL marker on subgroup 1) or too many defects of one type (the USG marker on subgroup 4), ProFicient will alarm and notify the user.
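The two-level alarm logic described above can be sketched as a short Python function. This is an illustration of the rules, not ProFicient's implementation - the function and parameter names are assumptions:

```python
def check_subgroup(counts_by_code, code_group, piece_usl, subgroup_limit,
                   sample_size=50):
    """Return a list of alarm reasons; an empty list means the subgroup passes."""
    alarms = []

    # Limit 1: any single defect code exceeding its group's piece USL.
    for code, count in counts_by_code.items():
        group = code_group[code]
        if count > piece_usl[group]:
            alarms.append(f"{code}: {count} exceeds {group} piece USL "
                          f"{piece_usl[group]}")

    # Limit 2: total defects of a type exceeding the subgroup percentage limit.
    totals = {}
    for code, count in counts_by_code.items():
        group = code_group[code]
        totals[group] = totals.get(group, 0) + count
    for group, total in totals.items():
        if total / sample_size > subgroup_limit[group]:
            alarms.append(f"{group} total {total} exceeds subgroup limit "
                          f"{subgroup_limit[group]:.0%} of {sample_size}")

    return alarms

piece_usl = {"minor": 10, "major": 5, "critical": 0}
subgroup_limit = {"minor": 0.2, "major": 0.1, "critical": 0.0}
code_group = {"spots": "minor", "scratch": "minor",
              "dent": "major", "crack": "critical"}

print(check_subgroup({"spots": 6, "dent": 2}, code_group,
                     piece_usl, subgroup_limit))  # [] - no alarm
print(check_subgroup({"spots": 6, "scratch": 6}, code_group,
                     piece_usl, subgroup_limit))  # minor total 12 > 20% of 50
```

Note that a single critical defect trips both limits at once: the count of 1 exceeds the piece USL of 0, and 1/50 exceeds the 0.0 subgroup limit.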

We have now expanded our data collection process to include two ways to collect attribute data for our visual inspection processes as part of our ProFicient deployment. This portion of our deployment will provide our company with the tools - C charts, P charts, and Pareto charts - to improve the portion of our process that monitors the visual appearance of our products and help us make the best possible first impression with our products.
