Calibration is the act of comparing a device under test (DUT) of an unknown value with a reference standard of a known value.
A calibration is typically performed to determine the error of the DUT’s reading or to verify the accuracy of its unknown value.
As a basic example, you could calibrate a DUT thermometer by measuring the temperature of water at its known boiling point (212 degrees Fahrenheit at standard atmospheric pressure) to learn the thermometer’s error. Because visually determining the exact moment the water reaches boiling can be imprecise, you could achieve a more accurate result by placing a calibrated reference thermometer, with a precisely known value and uncertainty, into the water and comparing its reading against the DUT thermometer.
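In code form, the comparison comes down to simple subtraction. Here is a minimal sketch, assuming hypothetical readings in degrees Fahrenheit:

```python
# Hypothetical thermometer calibration check: compare the DUT reading
# against a calibrated reference thermometer at the same point.

reference_reading_f = 212.0   # reference thermometer (known, small uncertainty)
dut_reading_f = 213.4         # device under test (hypothetical value)

# Calibration error of the DUT at this point: its indication minus the reference value.
error_f = dut_reading_f - reference_reading_f
print(f"DUT error at boiling point: {error_f:+.1f} °F")
```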
A logical next step in the calibration process may be to make a corrective adjustment, or true-up the instrument, to reduce measurement error. Technically, corrective adjustment is a separate step from calibration. (Correction and compensation are covered in more detail in the Calibration Steps section below.)
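To illustrate how correction differs from the calibration itself, here is a hedged sketch that applies the error found during calibration to later readings (the function name and values are hypothetical):

```python
# Hypothetical correction step, performed separately from the calibration:
# once the DUT's error is known, subtract it from future indications.

known_error_f = 1.4  # error determined during calibration (hypothetical)

def corrected_reading(dut_indication_f: float) -> float:
    """Apply the correction determined at calibration time."""
    return dut_indication_f - known_error_f

print(corrected_reading(213.4))  # prints 212.0
```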
For a more formal definition of calibration, we turn to the BIPM (Bureau International des Poids et Mesures, or International Bureau of Weights and Measures, www.bipm.org), based in France, which coordinates the worldwide measurement system and is tasked with ensuring the international unification of measurements.
BIPM produces a list of definitions for important technical terms commonly used in measurement and calibration. This list, referred to as the VIM (International Vocabulary of Metrology), defines calibration as an “operation that, under specified conditions, in a first step, establishes a relation between the quantity values with measurement uncertainties provided by measurement standards and corresponding indications with associated measurement uncertainties and, in a second step, uses this information to establish a relation for obtaining a measurement result from an indication.” This definition builds on the basic definition of calibration by clarifying that the measurement standards used in calibration must be of known uncertainty (amount of possible error). In other words, the known value must carry a clearly stated uncertainty so that the instrument owner or user can determine whether the standard is accurate enough for the calibration at hand.
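To make the VIM’s two steps concrete, here is a minimal Python sketch (the values and the choice of a simple straight-line fit are illustrative assumptions; a real calibration would also propagate the measurement uncertainties the definition refers to). Step one establishes a relation between the standard’s known values and the DUT’s indications; step two uses that relation to turn a later indication into a measurement result.

```python
# Two-step calibration in the VIM's sense (all values hypothetical).
# Requires Python 3.10+ for statistics.linear_regression.

from statistics import linear_regression

standard_values = [0.0, 50.0, 100.0]   # known values provided by the measurement standard
dut_indications = [0.8, 51.1, 101.5]   # what the DUT indicated at each calibration point

# Step 1: establish the relation (here, a straight-line fit mapping
# indication -> quantity value).
slope, intercept = linear_regression(dut_indications, standard_values)

# Step 2: use that relation to obtain a measurement result from a new indication.
indication = 75.3
result = slope * indication + intercept
print(f"measurement result: {result:.2f}")
```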