Calibration Tolerance: Every calibration should be performed to a specified tolerance. The words tolerance and accuracy are often misused. In ISA’s The Automation, Systems, and Instrumentation Dictionary, their definitions are as follows:

Accuracy:

The ratio of the error to the full-scale output, or the ratio of the error to the output, expressed in percent of span or percent of reading, respectively.

Tolerance:

Permissible deviation from a specified value; may be expressed in measurement units, percent of span, or percent of reading.

As you can see from the definitions, there are subtle differences between the terms. It is recommended that the tolerance, specified in measurement units, be used for the calibration requirements performed at your facility. By specifying an actual value, mistakes caused by calculating percentages of span or reading are eliminated. Also, the tolerance should be specified in the units measured for the calibration.
For example, you are assigned to calibrate the previously mentioned 0-to-300 psig pressure transmitter with a specified calibration tolerance of ±2 psig. The corresponding output tolerance would be:

2 psig ÷ 300 psig × 16 mA = 0.1067 mA
The calculated tolerance is rounded down to 0.10 mA, because rounding up to 0.11 mA would exceed the calculated tolerance. It is recommended that both the ±2 psig and ±0.10 mA tolerances appear on the calibration data sheet whenever the milliamp signal is recorded.
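As a quick sanity check, the same arithmetic can be scripted. The following Python sketch reproduces the calculation above; the function name and the two-decimal round-down convention are illustrative assumptions, not a requirement of any standard.

```python
import math

def output_tolerance_ma(tolerance_eng, span_eng, output_span_ma=16.0):
    """Convert an input-side calibration tolerance (in engineering units,
    e.g. psig) into the equivalent tolerance on a 4-20 mA output span."""
    exact = tolerance_eng / span_eng * output_span_ma  # 2 / 300 * 16 = 0.1067 mA
    # Round DOWN to two decimals so the stated tolerance never exceeds
    # the calculated tolerance (0.1067 -> 0.10, not 0.11).
    return math.floor(exact * 100) / 100

# Worked example from the text: 0-300 psig transmitter, ±2 psig tolerance
print(output_tolerance_ma(2.0, 300.0))  # -> 0.1, i.e. ±0.10 mA
```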

Accuracy ratio:

This term has been used in the past to describe the relationship between the accuracy of the test standard and the accuracy of the instrument under test. The term is still used by those who do not understand uncertainty calculations (uncertainty is discussed below). A good rule of thumb is to maintain a 4:1 accuracy ratio when performing calibrations. This means the instrument or standard used should be four times more accurate than the instrument being checked. Therefore, the test equipment (such as a field standard) used to calibrate the process instrument should be four times more accurate than the process instrument, the laboratory standard used to calibrate the field standard should be four times more accurate than the field standard, and so on.
With today’s technology, a 4:1 accuracy ratio is becoming more difficult to achieve. Why is a 4:1 ratio recommended? Maintaining a 4:1 ratio minimizes the effect of the accuracy of the standard on the overall calibration accuracy. If a higher-level standard is later found to be out of tolerance, for example, the calibrations performed using that standard are not necessarily significantly degraded.

Suppose we use our previous example of the test equipment with a tolerance of ±0.25°C and it is found to be 0.5°C out of tolerance during a scheduled calibration. Since we took into consideration an accuracy ratio of 4:1 and assigned a calibration tolerance of ±1°C to the process instrument, it is less likely that our calibration performed using that standard is compromised.
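A minimal sketch of the 4:1 rule of thumb, using the numbers from this example, might look like the following. The function and its threshold argument are hypothetical conveniences for illustration, not part of any cited standard.

```python
def meets_accuracy_ratio(instrument_tolerance, standard_tolerance, required_ratio=4.0):
    """Return True if the standard is at least `required_ratio` times more
    accurate (tighter tolerance) than the instrument being calibrated."""
    return instrument_tolerance / standard_tolerance >= required_ratio

# Process instrument tolerance ±1 °C, test standard tolerance ±0.25 °C -> 4:1
print(meets_accuracy_ratio(1.0, 0.25))  # True

# If the standard is later found to be 0.5 °C in error, the effective margin
# against the ±1 °C instrument tolerance shrinks to 2:1. The calibrations are
# likely still good, but the situation warrants review.
print(meets_accuracy_ratio(1.0, 0.5))   # False
```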

The out-of-tolerance standard still needs to be investigated by reverse traceability of all calibrations performed using the test standard. However, our assurance is high that the process instrument is within tolerance. If we had arbitrarily assigned a calibration tolerance of ±0.25°C to the process instrument, or used test equipment with a calibration tolerance of ±1°C, we would not have the assurance that our process instrument is within calibration tolerance. This leads us to traceability.

Traceability:

All calibrations should be performed traceable to a nationally or internationally recognized standard. For example, in the United States, the National Institute of Standards and Technology (NIST), formerly the National Bureau of Standards (NBS), maintains the national standards. Traceability is defined by ANSI/NCSL Z540-1-1994 (which replaced MIL-STD-45662A) as the property of a measurement result whereby it can be related to appropriate standards, generally national or international standards, through an unbroken chain of comparisons. Note that this does not mean a calibration shop needs to have its standards calibrated directly against the national standard. It means that the calibrations performed are traceable to NIST through all of the standards used to calibrate the standards, no matter how many levels exist between the shop and NIST.
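To make the idea of an unbroken chain of comparisons concrete, here is a small illustrative Python model. The level names and the helper function are hypothetical and are not taken from ANSI/NCSL Z540-1-1994.

```python
# Hypothetical calibration hierarchy, listed from the shop floor up to the
# national standard. Each level is calibrated against the one after it.
traceability_chain = [
    "process instrument",
    "field standard",
    "laboratory standard",
    "NIST national standard",
]

def is_traceable(chain, national_standard="NIST national standard"):
    """True if the chain is unbroken (no missing levels) and terminates at
    the national standard, regardless of how many levels it contains."""
    return all(chain) and chain[-1] == national_standard

print(is_traceable(traceability_chain))  # True
```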

Uncertainty:

Parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand. Uncertainty analysis is required for calibration labs conforming to ISO 17025 requirements. Uncertainty analysis is performed to evaluate and identify factors associated with the calibration equipment and the process instrument that affect the calibration accuracy.
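As an illustration of what an uncertainty analysis produces, the sketch below combines hypothetical standard-uncertainty components by root-sum-square and applies a coverage factor of 2, the common approach described in the GUM. The component values are invented for the example; the budget for any real calibration will differ.

```python
import math

# Hypothetical standard-uncertainty components for a temperature calibration (°C)
components = {
    "reference standard": 0.05,
    "resolution of the unit under test": 0.03,
    "environmental effects": 0.02,
    "repeatability": 0.04,
}

# Combined standard uncertainty: root-sum-square of the components
u_c = math.sqrt(sum(u ** 2 for u in components.values()))

# Expanded uncertainty at roughly 95 % confidence (coverage factor k = 2)
U = 2 * u_c
print(f"u_c = {u_c:.3f} °C, expanded U (k=2) = {U:.3f} °C")
```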
