Scaling and linearization: Intro and Scaling of linear response curves



The task of a data-acquisition program is to determine values of one or more physical quantities, such as temperature, force or displacement. We have seen in Section 3 that this is accomplished by reading digitized representations of those values from an ADC. In order for the user, as well as the various elements of the data-acquisition system, to correctly interpret the readings, the program must convert them into appropriate 'real-world' units. This obviously requires a detailed knowledge of the characteristics of the sensors and signal-conditioning circuits used.


The relationship between a physical variable to be measured (the measurand) and the corresponding transduced and digitized signal may be described by a response curve such as that shown in Figure 9.1.

Each component of the measuring system contributes to the shape and slope of the response curve. The transducer itself is, of course, the principal contributor, but the characteristics of the associated signal-conditioning and ADC circuits also have an important part to play in determining the form of the curve.


In some situations the physical variable of interest is not measured directly: it may be inferred from a related measurement instead. We might, for example, measure the level of liquid in a vessel in order to determine its volume. The response curve of the measurement system would, in this case, also include the factors necessary for conversion between level and volume.

Most data-acquisition systems are designed to exhibit linear responses. In these cases either all elements of the measuring system will have linear response curves, or they will have been carefully combined so as to cancel out any non-linearities present in individual components.

Some transducers are inherently non-linear. Thermocouples and resistance temperature detectors are prime examples, but many other types of sensor exhibit some degree of non-linearity. Non-linearities may, occasionally, arise from the way in which the measurement is carried out. If, in the volume-measurement example mentioned above, we had a cylindrical vessel, the quantity of interest (the volume of liquid) would be directly proportional to the level.

If, on the other hand, the vessel had a hemispherical shape, there would be a non-linear relationship between fluid level and volume.

In these cases, the data-acquisition software will usually be required to compensate for the geometry of the vessel when converting the ADC reading to the corresponding value of the measurand.

To correctly interpret digitized ADC readings, the data-acquisition software must have access to a set of calibration parameters that describe the response curve of the measuring system. These parameters may exist either as a table of values or as a set of coefficients of an equation that expresses the relationship between the physical variable and the output from the ADC. In order to compile the required calibration parameters, the system must usually sample the ADC output for a variety of known values of the measurand. The resulting calibration reference points can then be used as the basis of one of the scaling or linearization techniques described in this section.


Figure 9.1 Response curves for typical measuring systems: (a) linear response and (b) non-linear response

Scaling of linear response curves

The simplest and, fortunately, the most common type of response curve is a straight line. In this case the software need only be programmed with the parameters of the line for it to be able to convert ADC readings to a meaningful physical value. In general, any linear response curve may be represented by the equation

y - y0 = s(x - x0) (9.1)

where y represents the physical variable to be measured and x is the corresponding digitized (ADC) value. The constant y0 is any convenient reference point (usually chosen to be the lower limit of the range of y values to be measured), x0 is the value of x at the intersection of the line y = y0 with the response curve (i.e. the ADC reading at the lower limit of the measurement range) and s represents the gradient of the response curve.

Many systems are designed to measure over a range from zero up to some predetermined maximum value. In this case, y0 can be chosen to be zero. In all instances y0 will be a known quantity. The task of calibrating and scaling a linear measurement system is then reduced to determining the scaling factor, s, and offset, x0.
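As a minimal sketch of how Equation 9.1 might be applied in software (the function and variable names here are illustrative, not taken from any particular listing):

/* Convert a raw ADC reading to physical units using Equation 9.1.
   Scale (s), XOffset (x0) and YOffset (y0) are assumed to have been
   determined during calibration. */
double ScaleReading(unsigned int ADCReading, double Scale,
                    double XOffset, double YOffset)
{
    return YOffset + Scale * ((double)ADCReading - XOffset);
}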

The offset

The offset, x0, can arise in a variety of ways. One of the most common is due to drifts occurring in the signal-conditioning circuits as a result of variations in ambient temperature. There are many other sources of offset in a typical measuring system. For example, small errors in positioning the body of a displacement transducer in a gauging jig will shift the response curve and introduce a degree of offset.

Similarly, a poorly mounted load cell might suffer transverse stresses which will also distort the response curve.

As a general rule, x0 should normally be determined each time the measuring system is calibrated. This can be accomplished by reading the ADC while a known input is applied to the transducer.

If the offset is within acceptable limits it can simply be subtracted from subsequent ADC readings as shown by Equation 9.1. Very large offsets are likely to compromise the performance of the measuring system (e.g. limit its measuring range) and might indicate faults such as an incorrectly mounted transducer or maladjusted signal-conditioning circuits. It is wise to design data-acquisition software so that it checks for this eventuality and warns the operator if an unacceptably large offset is detected.
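Such a check might look like the following sketch; the 10 per cent threshold and the 12-bit ADC range are arbitrary illustrative values, not figures from the text:

#include <stdio.h>

#define ADC_FULL_SCALE      4095    /* 12-bit converter assumed */
#define MAX_OFFSET_FRACTION 0.10    /* example acceptance limit */

/* Returns 1 if the measured offset is acceptable, or 0 (with a
   warning) if it consumes too much of the ADC's measuring range. */
int OffsetAcceptable(unsigned int XOffset)
{
    if ((double)XOffset / ADC_FULL_SCALE > MAX_OFFSET_FRACTION) {
        printf("Warning: offset of %u counts exceeds %.0f%% of ADC range\n",
               XOffset, MAX_OFFSET_FRACTION * 100.0);
        return 0;
    }
    return 1;
}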

Some signal-conditioning circuits provide facilities for manual offset adjustment. Others allow most or all of the physical offset to be cancelled under software control. In the latter type of system the offset might be adjusted (or compensated for) by means of the output from a digital-to-analogue converter (DAC). The DAC voltage might, for example, be applied to the output from a strain-gauge bridge device (e.g. a load cell) in order to cancel any imbalances present in the circuit.

Scaling from known sensitivities

If the characteristics of every component of the measuring system are accurately known it might be possible to calculate the values of s and x0 from the system design parameters. In this case the task of calibrating the system is almost trivial. The data-acquisition software (or calibration program) must first establish the value of the ADC offset, x0, as described in the preceding section, and then determine the scaling factor, s. The scaling factor can be supplied by the user via the keyboard or a data file but, in some cases, it's simpler for the software to calculate s from a set of measuring-system parameters typed in by the operator.

An example of this method is the calibration of strain-gauge bridge transducers such as load cells. The operator might enter the design sensitivity of the load cell (in millivolts output per volt input at full scale), the excitation voltage supplied to the input of the bridge and the full-scale measurement range of the sensor. From these parameters the calibration program can determine the voltage that would be output from the bridge at full scale, and knowing the characteristics of the signal-conditioning and ADC circuits it can calculate the scaling factor.
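A sketch of this calculation is shown below. The parameter names, and the assumption of a simple linear amplifier-plus-ADC chain, are illustrative rather than taken from the text:

/* Calculate the scaling factor of a strain-gauge bridge transducer
   from design parameters.
     Sensitivity   : bridge output at full scale, in mV per V of excitation
     Excitation    : bridge excitation voltage, in volts
     FullScale     : full-scale value of the measurand (e.g. load in kg)
     Gain          : signal-conditioning amplifier gain
     CountsPerVolt : ADC counts produced per volt at its input        */
double ScaleFromSensitivity(double Sensitivity, double Excitation,
                            double FullScale, double Gain,
                            double CountsPerVolt)
{
    double BridgeFS = Sensitivity * Excitation / 1000.0; /* volts out at full scale */
    double CountsFS = BridgeFS * Gain * CountsPerVolt;   /* ADC counts at full scale */
    return FullScale / CountsFS;       /* measurand units per ADC count */
}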

In some instances it may not be possible for the gain (and other operating parameters) of the signal-conditioning amplifier(s) to be determined precisely. It is then necessary for the software to take an ADC reading while the transducer is made to generate a known output signal. The obvious (and usually most accurate) method of doing this is to apply a fixed input to the transducer (e.g. force in the case of a load cell). This method, referred to as prime calibration, is the subject of the following section. Another way of creating a known transducer output is to disturb the operation of the transducer itself in some way. This technique is adopted widely in devices, such as load cells, which incorporate a number of resistive strain gauges connected in a Wheatstone bridge. A shunt resistor can be connected in parallel with one arm of the bridge in order to temporarily unbalance the circuit and simulate an applied load.

This allows the sensitivity of the bridge (change in output voltage divided by the change in 'gauge' resistance) to be determined, and then the ADC output at this simulated load can be measured in order to calculate the scaling factor. In this way the scaling factor will encompass the gain of the signal-conditioning circuit as well as the conversion characteristics of the ADC and the sensitivity of the bridge itself.
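The resistance change simulated by the shunt follows from simple parallel-resistance arithmetic, as in this illustrative fragment (the function name is an assumption):

/* Change in arm resistance produced by connecting a shunt resistor
   Rs in parallel with a gauge of resistance Rg. The result is
   negative because the combined resistance is lower than Rg. The
   bridge sensitivity is then the measured change in output voltage
   divided by this DeltaR. */
double ShuntDeltaR(double Rg, double Rs)
{
    return (Rg * Rs) / (Rg + Rs) - Rg;
}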

This calibration technique can be useful in situations, as might arise with load measurement, where it's difficult to generate precisely known transducer inputs. However, it does not take account of factors, resulting from installation and environmental conditions, which might affect the characteristics of the measuring system. In the presence of such influences this method can lead to serious calibration errors.

To illustrate this point we will continue with the example of load cells. The strain gauges used within these devices have quite small resistances (typically less than 350 ohms). Consequently, the resistance of the leads which carry the excitation supply can result in a significant voltage drop across the bridge and a proportional lowering of the output voltage. Some signal-conditioning circuits are designed to compensate for these voltage drops, but without this facility it can be difficult to determine the magnitude of the loss. If not corrected for, the voltage drop can introduce significant errors into the calibration.

In order to account for every factor which contributes to the response of the measurement system it's usually necessary to calibrate the whole system against some independent reference. These methods are described in the following sections.

Two- and three-point prime calibration

Prime calibration involves measuring the input, y, to a transducer (e.g. load, displacement or temperature) using an independent calibration reference and then determining the resulting output, x, from the ADC. Two (or sometimes three) points are obtained in order to calculate the parameters of the calibration line. In this way the calibration takes account of the behavior of the measuring system as a whole, including factors such as signal losses in long cables.

By determining the offset value, x0, we can establish one point on the response curve - i.e. (x0, y0). It is necessary to obtain at least one further reference point, (x1, y1), in order to uniquely define the straight-line response curve. The scaling factor may then be calculated from

s = (y1 - y0) / (x1 - x0) (9.2)

Some systems, particularly those which incorporate bipolar transducers (i.e. those which measure either side of some zero level) don't use the offset point, (x0, y0), for calculating s. Instead, they obtain a reading on each side of the zero point and use these values to compute the scaling factor. In this case, y0 might be chosen to represent the centre (zero) value of the transducer's working range and x0 would be the corresponding ADC reading.
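In code, the two-point calculation of Equation 9.2 amounts to the following (illustrative names; the zero-divisor check guards against two coincident reference points):

/* Derive the scaling factor from two calibration reference points
   (X0, Y0) and (X1, Y1), as in Equation 9.2. Returns 0 if the two
   ADC readings coincide and no factor can be calculated. */
int CalcScalingFactor(double X0, double Y0, double X1, double Y1,
                      double *Scale)
{
    if (X1 == X0)
        return 0;
    *Scale = (Y1 - Y0) / (X1 - X0);
    return 1;
}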

Accuracy of prime calibration

The values of s and x0 determined by prime calibration are needed to convert all subsequent ADC readings into the corresponding 'real-world' value of the measurand. It is, therefore, of paramount importance that the values of s and x0, and the (x0, y0) and (x1, y1) points used to derive them, are accurate.

Setting aside any sampling and digitization errors (see Sections 3 and 4) there are several potential sources of inaccuracy in the (x, y) calibration points. Random variations in the ADC readings might be introduced by electrical noise or instabilities in the physical variable being measured (e.g. positioning errors in a displacement-measuring system).

Electrical noise can be particularly problematic where low-level transducer signals (and high amplifier gains) are used. This is often the case with thermocouples and strain-gauge bridges, which generate only low-level signals (typically several mV). Noise levels should always be minimized at source by the use of appropriate shielding and grounding techniques. Small amplitudes of residual noise may be further reduced by using suitable software filters (see Section 4). A simple 8x averaging filter can often reduce noise levels by a factor of 3 or more, depending, of course, upon the sampling rate and the shape of the noise spectrum.
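An 8x averaging filter of the kind mentioned can be as simple as the following sketch, where ReadADC() is a placeholder for whatever acquisition routine the system provides:

extern unsigned int ReadADC(void);   /* placeholder acquisition routine */

/* Return the mean of eight consecutive ADC samples. Averaging n
   samples reduces uncorrelated noise by roughly a factor of sqrt(n). */
double AveragedReading(void)
{
    unsigned long Sum = 0;
    int i;

    for (i = 0; i < 8; i++)
        Sum += ReadADC();
    return Sum / 8.0;
}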

An accurate prime calibration reference is also essential. Inaccurate reference devices can introduce both systematic and random errors. Systematic errors are those arising from a consistent measurement defect in the reference device, causing, e.g., all readings to be too large. Random errors, on the other hand, result in readings that have an equal probability of being too high or too low and arise from sources such as electrical noise. Any systematic inaccuracies will tend to be propagated from the calibration reference into the system being calibrated and steps should, therefore, be taken to eliminate all sources of systematic inaccuracy. In general, the reference device should be considerably more precise (preferably at least 2 to 5 times more precise) than the required calibration accuracy.

Its precision should be maintained by periodic recalibration against a suitable primary reference standard.

When calibrating any measuring system it's important to ensure that the conditions under which the calibration is performed match, as closely as possible, the actual working conditions of the transducer.

Many sensors (and signal-conditioning circuits) exhibit changes in sensitivity with ambient temperature. LVDTs, for example, have typical sensitivity temperature coefficients of about 0.01 per cent/°C or more. A temperature change of about 10°C, which is not uncommon in some applications, can produce a change in output comparable to the transducer's non-linearity. Temperature gradients along the body of an LVDT can have an even more pronounced effect on the sensitivity (and linearity) of the transducer.

Most transducers also exhibit some degree of non-linearity, but in many cases, if the device is used within prescribed limits, this will be small enough for the transducer to be considered linear. This is usually the case with LVDTs and load cells. Thermocouples and resistance temperature detectors (RTDs) are examples of non-linear sensors, but even these can be approximated by a linear response curve over a limited working range. Whatever the type of transducer, it's always advisable to calibrate the measuring system over the same range as will be used under normal working conditions in order to maximize the accuracy of calibration.

Multiple-point prime calibration

If only two or three (x, y) points on the response curve are obtained, any random variations in the transducer signal due to noise or positioning uncertainties can severely limit calibration accuracy.

The effect of random errors can be reduced by statistically averaging readings taken at a number of different points on the response curve. This approach has the added advantage that the calibration points are more equally distributed across the whole measurement range. Transducers such as the LVDT tend to deviate from linearity more towards the end of their working range, and with two- or three-point calibration schemes this is precisely where the calibration reference points are usually obtained. The scaling factor calculated using Equation 9.2 can, in such cases, differ slightly (by up to about 0.1 per cent for LVDTs) from the average gradient of the response curve. This difference can often be reduced by a significant factor if we are able to obtain a more representative line through the response curve.

In order to fit a representative straight line to a set of calibration points we will use the technique of least-squares fitting. This technique can be used for fitting both straight lines and non-linear curves. The straight-line fit which is discussed below is a simple case of the more general polynomial least-squares fit described later in this section.

It is assumed in this method that there will be some degree of error in the yi values of the calibration points and that any errors in the corresponding xi values will be negligible, which is usually the case in a well-designed measuring system. The basis of the technique is to mathematically determine the parameters of the straight line that passes as closely as possible to each calibration point. The best-fit straight line is obtained when the sum of the squares of the deviations between all of the yi values and the fitted line is least. A simple mathematical analysis shows that the best-fit straight line, y = sx + h, is described by the following well-known equations.


s = (n Σxiyi - Σxi Σyi) / Δ

h = (Σyi Σxi² - Σxi Σxiyi) / Δ

δs = σ √(n / Δ)        δh = σ √(Σxi² / Δ)        (9.3)

where Δ = n Σxi² - (Σxi)², σ² = Σ(yi - sxi - h)² / (n - 2), and each summation runs over the n calibration points (xi, yi).

In these equations s is the scaling factor (or gradient of the response curve) and h is the transducer input required to produce an ADC reading (x) of zero. The δs and δh values are the uncertainties in s and h, respectively. It is assumed that there are n of the (xi, yi) calibration points.


Listing 9.1 C function for performing a first order polynomial (linear) least-squares fit to a set of calibration reference points
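The listing itself is not reproduced in this transcription. The following sketch is consistent with the description below and with Equation 9.3; the parameter list, return convention and variable names (other than those mentioned in the text) are assumptions rather than the original code:

#include <math.h>

/* Sketch of a first-order (linear) least-squares fit, after the
   description of Listing 9.1. X and Y hold the NumPoints calibration
   points. The fitted line is y = Slope * x + Intercept; ErrSlope and
   ErrIntercept are the statistical uncertainties in the parameters,
   and RMSDev and WorstDev measure the conformance between the fitted
   line and the points. Returns 1 on success, 0 on a degenerate fit. */
int PerformLinearFit(const double X[], const double Y[], int NumPoints,
                     double *Slope, double *Intercept,
                     double *ErrSlope, double *ErrIntercept,
                     double *RMSDev, double *WorstDev)
{
    double SumX = 0, SumY = 0, SumXX = 0, SumXY = 0;
    double Delta, SumSqDev = 0, Sigma;
    int i;

    if (NumPoints < 3)
        return 0;                 /* need n > 2 for the error estimates */

    /* Perform the various summations first */
    for (i = 0; i < NumPoints; i++) {
        SumX  += X[i];
        SumY  += Y[i];
        SumXX += X[i] * X[i];
        SumXY += X[i] * Y[i];
    }

    Delta = NumPoints * SumXX - SumX * SumX;
    if (Delta == 0.0)
        return 0;                 /* all x values identical */

    /* Parameters of the best-fit straight line (Equation 9.3) */
    *Slope     = (NumPoints * SumXY - SumX * SumY) / Delta;
    *Intercept = (SumY * SumXX - SumX * SumXY) / Delta;

    /* Deviations between the fitted line and the calibration points */
    *WorstDev = 0.0;
    for (i = 0; i < NumPoints; i++) {
        double Dev = Y[i] - (*Slope * X[i] + *Intercept);
        SumSqDev += Dev * Dev;
        if (fabs(Dev) > *WorstDev)
            *WorstDev = fabs(Dev);
    }

    Sigma   = sqrt(SumSqDev / (NumPoints - 2));
    *RMSDev = sqrt(SumSqDev / NumPoints);

    *ErrSlope     = Sigma * sqrt((double)NumPoints / Delta);
    *ErrIntercept = Sigma * sqrt(SumXX / Delta);

    return 1;
}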


These formulae are the basis of the PerformLinearFit() function in Listing 9.1. The various summations are performed first and the results are then used to calculate the parameters of the best-fit straight line. The Intercept variable is equivalent to the quantity h in the above formulae while Slope is the same as the scaling factor, s. The ErrIntercept and ErrSlope variables are equivalent to δh and δs, and may be used to determine the statistical accuracy of the calibration line. The function also determines the conformance between the fitted line and the calibration points, calculating the root-mean-square (rms) deviation (essentially the quantity σ in Equation 9.3) and the worst deviation between the line and the points.

It is always advisable to check the rms and worst deviation figures when the fitting procedure has been completed, as these provide a measure of the accuracy of the fit. The rms deviation may be thought of as the average deviation of the calibration points from the straight line.

The ratio of the worst deviation to the rms deviation can indicate how well the calibration points can be modeled by a straight line. As a rule-of-thumb, if the worst deviation exceeds the rms deviation by more than a factor of about 3 this might indicate one of two possibilities: either the true response curve exhibits a significant non-linearity or one (or more) of the calibration points has been measured inaccurately. Any uncertainties from either of these two sources will be reflected in the ErrIntercept and ErrSlope variables.
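Expressed as code (the names follow the sketch of Listing 9.1 above; the factor of 3 is the rule-of-thumb from the text):

#include <stdio.h>

/* Warn if the worst deviation exceeds the rms deviation by more than
   a factor of about 3, suggesting either a non-linear response or an
   inaccurately measured calibration point. */
void CheckFitQuality(double RMSDev, double WorstDev)
{
    if (WorstDev > 3.0 * RMSDev)
        printf("Warning: possible non-linearity or bad calibration point\n");
}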

Although there is a potential for greater accuracy with multiple point calibration, it should go without saying that the comments made in the preceding section, concerning prime-calibration accuracy, also apply to multiple-point calibration schemes.

To minimize the effect of random measurement errors, multiple-point calibration is generally to be preferred. However, it does have one considerable disadvantage: the additional time required to carry out each calibration. If a transducer is to be calibrated in situ (while attached to a machine on a production line, for example) it can sometimes require a considerable degree of effort to apply a precise reference value to the transducer's input.

Some applications might employ many tens (or even hundreds) of sensors and recalibration can then take many hours to complete, resulting in project delays or lost production time. In these situations it may be beneficial to settle for the slightly less accurate two- or three-point calibration schemes. It should also be stressed that two- and three-point calibrations do often provide a sufficient degree of precision and that multiple-point calibrations are generally only needed where highly accurate measurements are the primary concern.

Applying linear scaling parameters to digitized data

Once the scaling factor and offset have been determined they must be applied to all subsequent digitized measurements. This usually has to be performed in real time and it's therefore important to minimize the time taken to perform the calculation. Obviously, high speed computers and numeric coprocessors can help in this regard, but there are two ways in which the efficiency of the scaling algorithm can be enhanced.

First, floating-point multiplication is generally faster than division.

For example, Borland Pascal's floating-point routines will multiply two real-type variables in about one-third to one-half of the time that they would take to carry out a floating-point division. A similar difference in execution speeds occurs with the corresponding 80x87 numeric coprocessor instructions. Multiplicative scaling factors should, therefore, always be used -- i.e. always multiply the data by s, rather than dividing by s^-1 -- even if the software specification requires that the inverse of the scaling factor is presented on displays and printouts etc.
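The point can be illustrated with a simple scaling loop in which the single division is hoisted out of the real-time path (the names are illustrative):

/* Scale a block of raw ADC readings. If calibration naturally yields
   a divisor (e.g. counts per engineering unit), invert it once,
   outside the loop, so that only multiplications are performed on
   each reading. */
void ScaleReadings(const unsigned int Raw[], double Scaled[],
                   int NumReadings, double CountsPerUnit, double XOffset)
{
    double Scale = 1.0 / CountsPerUnit;   /* one division, done once */
    int i;

    for (i = 0; i < NumReadings; i++)
        Scaled[i] = Scale * ((double)Raw[i] - XOffset);
}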

Second, the scaling routines can be coded in assembly language.

This is simpler if a numeric coprocessor is available; otherwise, floating-point routines will have to be specially written to perform the scaling.

In very high speed applications, the only practicable course of action might be to store the digitized ADC values directly into RAM and to apply the scaling factor(s) after the data-acquisition run has been completed, when timing constraints may be less stringent.

