What is Gauge Reliability?
Gauge Reliability is the probability that, when calibration is due, the “before” check will find the as-found condition to be “IN” (usable condition) rather than “OUT.” If found “IN,” the gauge will undergo a preventive maintenance procedure to restore it to high reliability. Then the “after” check will be performed and the gauge will be placed back in service.
If we continue to describe the rest of the calibration process as a reliability process, it goes like this: As soon as the gauge is back in service, reliability begins to deteriorate. Reliability will continue to deteriorate until the next calibration due date. From a reliability-engineering perspective, the calibration frequency interval is a tool that we use to control gauge reliability. The problem is that most organizations either accept whatever variety of reliabilities they happen to get, or they work to a target chosen arbitrarily, such as 90% or 80%.
Figure 1 shows what happens when the optimum reliability target is 95.2%, and you have a gauge type where the calibration frequency is only producing 80% reliability: You would get about a 25% increase in annual calibration cost for that gauge type. If the reliability had been a little too high instead, the cost increase could be 50%.
Optimum Reliability Target (ORT)
All we need to know to calculate optimum gauge reliability is a couple of cost estimates that form what we might call the basic cost structure of calibration.
- What does it cost when the as-found condition is “IN”?
Let’s say we do a “before” check, do some preventive maintenance, do the “after” check and suppose we estimate that this costs the company $200.00 on average.
- What does it cost when the as-found condition is “OUT”?
Let’s say that means we just learned that we have been checking product with a “bad” gauge for an unknown amount of time and the company may have to pay for a lot of expensive consequences, such as problem-solving, more elaborate maintenance, quarantine, re-inspection, potential warranty costs or product recall. Suppose we estimate that this costs the company $4,000.00 on average.
The formula we will use for Optimum Reliability Target (ORT) is designed to produce a “balanced cost” result.
ORT = $ “OUT”/ ( $ “IN” + $ “OUT” )
= 4,000 / ( 200 + 4,000 ) = 0.952
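The calculation above is simple enough to sketch in a few lines of code. This is just the article's formula with the article's example costs; substitute your own cost estimates.

```python
# Optimum Reliability Target from the two cost estimates
# (illustrative values from the article; adjust to your own cost structure)
cost_in = 200.0     # average cost when the as-found condition is "IN"
cost_out = 4000.0   # average cost when the as-found condition is "OUT"

ort = cost_out / (cost_in + cost_out)
print(f"ORT = {ort:.3f}")  # 0.952
```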
If we round off to 95% as our optimum reliability target (ORT), then when we get the calibration frequencies adjusted right, the cost will be:
Balanced Cost = 0.95 x 200 + 0.05 x 4,000
= 190 + 200
= $390.00 average per calibration
The cost of “IN” and “OUT” is approximately balanced, and would have been perfectly balanced if we hadn’t rounded off.
If all of your gauges use the same reliability target, your average annual cost would be $390.00 times the number of calibrations per year. For 2000 calibrations per year that would be $780,000.00 per year. The cost curve in Figure 1 is based on average annual cost per day.
Is This an Approximation?
The balanced cost method will be precisely correct for some of your gauges. If we develop a solution for an individual gauge type, however, it is possible that we could find a lower-cost solution for that particular gauge type. This would be worth trying for a gauge that is especially critical to your product or has a very high cost structure.
To find out if the gauge has a better solution available, we need to find a parameter called the “Weibull Shape Factor.” (Dr. Weibull was a Swedish engineer who developed mathematical tools for reliability engineering.) Each gauge type will have a probability distribution that describes the probability of failing before a given time, and each distribution will have a shape. Figure 2 shows what some of those distributions look like. Generally, your gauges will have shapes somewhere between 2 and 8.
Referring back to Figure 1, the shape of the cost curve is based on a particular Weibull Shape Factor. Changing the shape for an individual gauge type may cause the low point of the cost curve to move right or left, suggesting a better reliability target for that gauge type. If that happens, the annual calibration cost for that gauge will go down.
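The effect is easy to see with a simplified cost model. This is an illustration only, not the math behind Figure 1: it assumes time-to-failure is Weibull-distributed with shape `beta` and scale `eta` (both hypothetical values here), so that running out to reliability R implies a calibration interval of eta * (-ln R)^(1/beta) days.

```python
import math

# Simplified annual-cost model (an illustration under assumed Weibull
# failure behavior; eta = 180 days is a hypothetical scale factor)
cost_in, cost_out = 200.0, 4000.0

def annual_cost(r, beta, eta=180.0):
    interval = eta * (-math.log(r)) ** (1.0 / beta)  # days between calibrations
    per_cal = r * cost_in + (1 - r) * cost_out       # expected cost per calibration
    return per_cal * 365.0 / interval

# Scan reliability targets: the curve is U-shaped, and its low point
# moves to the right as the Weibull shape factor increases.
best_target = {}
for beta in (2, 4, 8):
    best_target[beta] = min(range(800, 999),
                            key=lambda r: annual_cost(r / 1000, beta)) / 1000
    print(f"shape {beta}: lowest annual cost near R = {best_target[beta]:.3f}")
```

Under these assumptions, a shape factor of 2 favors a noticeably lower reliability target than a shape factor of 8, which is why re-fitting an individual gauge type can pay off.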
For those who want some detail on how we measure the Weibull Shape Factor, we use an indirect method. First, you would need to perform two or more tests of time-to-failure. For example, if the calibration frequency were 6 months, you would divide that interval into approximately 25 parts and test once per week. Start with a fresh calibration, then keep checking, with no preventive maintenance, until the gauge fails (i.e., until the as-found condition is “OUT”). You can test two identical gauges simultaneously, or you can test the same gauge twice (which will take longer). Your gauge management software would probably call these “two stability studies.”
The two time-to-failure estimates will be converted into two Weibull parameters: the Weibull Shape Factor and a time variable called Weibull Scale Factor. (The scale factor would be used to predict the calibration frequency.)
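One standard way to do this conversion is median-rank regression (Benard's approximation), which linearizes the Weibull CDF and fits a line through the two failure points. The failure times below (20 and 26 weeks) are hypothetical, and your software may use a different fitting method.

```python
import math

# Hypothetical example: two stability studies on the same gauge type
# failed at 20 and 26 weeks (fresh calibration, no PM in between)
failures = sorted([20.0, 26.0])   # weeks
n = len(failures)

# Median ranks (Benard's approximation), then linearize the Weibull CDF:
#   ln(-ln(1 - F)) = beta * ln(t) - beta * ln(eta)
xs = [math.log(t) for t in failures]
ys = [math.log(-math.log(1 - (i + 1 - 0.3) / (n + 0.4))) for i in range(n)]

beta = (ys[1] - ys[0]) / (xs[1] - xs[0])   # Weibull Shape Factor (slope)
eta = math.exp(xs[0] - ys[0] / beta)       # Weibull Scale Factor, in weeks

# The scale factor then predicts the calibration interval that meets
# a given reliability target:
target = 0.95
interval = eta * (-math.log(target)) ** (1.0 / beta)
print(f"shape={beta:.2f}, scale={eta:.1f} weeks, interval={interval:.1f} weeks")
```

With only two data points the fit is rough, which is why testing more gauges (or the same gauge more times) tightens the estimate.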
The Weibull conversion is often found in general purpose statistical software and, of course, in reliability engineering software. You can also use other tools often found in commercial calibration management software to estimate and control ORT in your current calibration efforts.
Using Calibration Management Software to Estimate Optimum Reliability Target and Control the Result
Let’s look at some features that focus on different areas of reliability and how we can use them to estimate ORT and control gauge reliability. Below are a series of features currently found in GAGEtrak software by CyberMetrics that can provide a starting point.
This module is found under Calibration Utilities. It stores simple formulas and will calculate them on demand. It can store the formulas for Optimum Reliability Target, Balanced Cost and Annual Cost. It even keeps the input variables until you change them.
- “Calibration Performance by Gage Type Summary” Report
This report will keep track of % “OUT.” (Of course, targeting for 5% “OUT” would be the same as targeting for 95% reliability.) This can be used to spot gauges that are in need of an adjustment to calibration frequency. Keep in mind that new gauges will show an illusion of zero % “OUT” until they have been on the job long enough to have failures. The report also tracks % PASS, but that is not useful for our purpose, because it is based on the “after” check.
- “Method A3”
This method of controlling gauge reliability is published by NCSLI.org. It is designed to avoid making calibration frequency changes that aren’t really necessary and to help you get on target quickly when changes are necessary. It uses statistical methods to do this. You can enter the optimum reliability target and the significance threshold in the Setup area. It can be turned on for individual gauges using the Gage table, Standards tab. When “A3” decides a change is necessary, it will make a suggestion during the gauge’s next calibration. Only gauges that have “A3” turned on will receive suggestions, but all gauges of that type will be used to calculate reliability.
- Status Report for Method A3
This is available in Calibration Utilities, under the label Gage Frequency Adjusting Interval. (It is not located with other reports, because it is an on-screen report. It also has a re-calculate button, as it does not re-calculate automatically.)
- Stability Studies
This module is found in MSA Suite. It can be used to get two estimates of time-to-failure. These will be needed when you want to determine an optimum reliability target (ORT) for an individual gauge type, for example, a gauge that has a high cost structure or measures something especially critical to your process.
The Bottom Line
Optimum Reliability Target (ORT) is not only an important measure for achieving reliability with your critical measurement and test equipment; it is also an important financial tool for reducing your calibration costs. Many companies that see the cost curve for the first time are astonished at its impact on their overall calibration costs. In the end, reducing unnecessary calibration costs and, more importantly, rework or recall costs is good for everyone's bottom line.
About The Author
Gary Phillips has been in the quality field for nearly 50 years. Previously with GM’s Cadillac division, Gary has now been a consultant for over 30 years and has trained well over 20,000 people worldwide, primarily in technical subjects related to quality and reliability engineering, such as designed experiments, engineering testing, statistical process control and measurement systems analysis.