Plug-in data acquisition boards: A/D Boards




Analog input (A/D) boards convert analog voltages from external signal sources into a digital format, which can be interpreted by the host computer. The functional diagram of a typical A/D board is shown in ill. F.1 and comprises the following main components:

• Input channel sample-and-hold circuits (for simultaneous sampling)

• Input multiplexer

• Input signal amplifier

• Sample and hold circuit

• A/D converter (ADC)

• FIFO buffer

• Timing system

• Expansion bus interface

Each of these components plays an important role in determining how fast and how accurately the A/D board can acquire data.

ill. F.1 Functional diagram of a typical A/D board




F.2.1 Multiplexers

A multiplexer is a device that switches one of its analog inputs (typically up to 16 single-ended inputs) through to its output; the selected input channel is determined by the binary code applied to the multiplexer's address lines. The number of address lines required is determined by the number of input channels to be multiplexed. A 16-channel multiplexer therefore requires four address lines.
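
As a quick illustration of the relationship between channel count and address lines, the following minimal Python sketch (illustrative only, not tied to any particular board) computes how many address lines a multiplexer needs.

    import math

    def address_lines(num_channels: int) -> int:
        """Number of binary address lines needed to select one of num_channels inputs."""
        return math.ceil(math.log2(num_channels))

    # A 16-channel multiplexer needs 4 address lines; 8 channels would need 3.
    print(address_lines(16))  # 4
    print(address_lines(8))   # 3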

On A/D boards, multiplexers facilitate the sampling of multiple inputs on a time-multiplexed basis. The A/D converter samples one channel, switches to the next channel, samples it, switches to the next channel, samples it, and so on. This eliminates the need for a signal amplifier and an A/D converter for each input channel, thereby reducing the costs of A/D boards.

Important parameters:

Two parameters that particularly affect the rate at which the multiplexer can switch between channels, and therefore its throughput rate, are the settling time and the switching time.

Settling time:

The settling time is the time the multiplexer output takes to settle within a predetermined error margin of the input when the input signal on the channel swings from -FS (negative full scale input voltage) to +FS (positive full scale input voltage) or from +FS to -FS.

Switching time:

The switching time specifies how long the multiplexer output takes to settle to the input voltage, when it's switched from one channel to another.

Throughput rate:

The throughput rate is limited by the longer of the settling time and the switching time, since the voltage on one channel can be at -FS while the voltage on the next channel switched to is at +FS. The throughput rate of the multiplexer is one factor that determines the total throughput of the A/D board.




F.2.2 Input signal amplifier

To achieve the greatest resolution in the measurement of an analog input signal, its amplified range should match the input range of the A/D converter. Consider a low level signal of the order of a fraction of an mV, fed directly to a 12-bit A/D converter with full-scale voltage of 10 V. A loss of precision will result since the A/D converter has a resolution of only 2.44 mV. Some form of amplification is needed.
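
As a check of the figure quoted above, this minimal Python sketch computes the ideal LSB size (code width) of a converter from its resolution and full-scale voltage; the values used are those from the text.

    def lsb_size(full_scale_volts: float, bits: int) -> float:
        """Voltage represented by one least-significant bit of an ideal ADC."""
        return full_scale_volts / (2 ** bits)

    # A 12-bit converter with a 10 V full-scale range resolves about 2.44 mV per code.
    print(lsb_size(10.0, 12))  # 0.00244140625 V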

This is usually provided by a high performance instrumentation amplifier, typified by:

• Balanced differential inputs

• High input impedance

• Low input bias currents

• Low offset drift

• High common mode rejection ratio

Two types of amplifier are commonly included on A/D boards, depending on the cost and quality of the A/D board selected. Some A/D boards provide on-board amplification, where the amplifier gain is adjustable using hardware, while boards that provide programmable gain amplifiers (PGAs) make it possible to select, using software, different gains for different channels.

Adjustable on-board fixed gain amplifier:

The gain of these amplifiers is commonly adjusted using a potentiometer or selected via on-board links. A/D boards with a fixed gain amplifier should only be used where the signal levels on each of the input channels to be sampled have comparable ranges and lie within the full scale input range of the A/D converter.

Signals with greatly different signal levels will require external signal conditioning and amplification to enable them to be used on boards utilizing fixed gain amplification. A more flexible alternative is the programmable gain amplifier discussed below.

Programmable gain amplifier (PGA):

Programmable gain amplifiers (PGAs) make it possible to program the gain of the input amplifier using software, requiring a once-only write to an on-board register to select the gain for an individual channel. This is especially advantageous where input signals on different channels have very different signal levels and input ranges. The amplifier gain for each channel can be set accordingly, so that the input range of the incoming signal is matched with the full scale range of the A/D converter, thus resulting in higher resolution and accuracy.

It's usual practice that the amplifier gain, though programmable, is selected from a specified range of gain settings, thereby maintaining the amplifier within its operating range without saturation. In some high performance boards, the gain is automatically adjusted depending on the level of the input signal.
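
The register-level details differ from board to board, but the idea of a once-only write to select a channel's gain can be sketched as below. The register offset, the port-write routine and the gain codes here are purely hypothetical, chosen only to show the shape of the operation.

    # Hypothetical gain codes for a PGA supporting gains of 1, 10, 100 and 500.
    GAIN_CODES = {1: 0b00, 10: 0b01, 100: 0b10, 500: 0b11}

    def write_register(offset: int, value: int) -> None:
        # Placeholder for a board-specific port/register write.
        print(f"write reg 0x{offset:02X} <- 0x{value:02X}")

    def set_channel_gain(channel: int, gain: int, gain_reg_offset: int = 0x02) -> None:
        """Select the PGA gain for one channel with a single register write
        (assumed layout: upper nibble = channel, lower nibble = gain code)."""
        code = GAIN_CODES[gain]
        write_register(gain_reg_offset, (channel << 4) | code)

    set_channel_gain(channel=3, gain=100)   # channel 3 amplified 100x before the ADC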

Important signal amplifier parameters

Two parameters that particularly affect the accuracy and the rate at which the signal amplifier can amplify the input signals are amplifier drift, and the settling time.

• Calibration and drift: Calibration of an amplifier to eliminate offset and gain errors is only valid at the temperature at which the calibration was made. Over time, and with variations in temperature, the characteristics of the amplifier change, or drift, causing offset and gain errors known as offset drift and gain drift respectively. Offset drift and gain drift, specified in parts per million per degree Centigrade (ppm/°C), indicate the sensitivity of the amplifier to changes in temperature.

Compounding the natural tendency of the amplifier characteristics to drift is the fact that the potentiometer settings of fixed gain amplifiers also tend to drift with time and temperature.

• Settling time: Amplifier settling time is defined as the time elapsed from the application of a perfect step input to the time the amplifier output settles within a predetermined error margin of the required output value.

A characteristic of amplifiers is that throughput decreases with increasing gain. This is because at higher gains the signal output changes by a greater amount, therefore increasing the settling time. This applies to both fixed gain and programmable gain amplifiers. If the A/D converter samples the amplified input signal before the amplifier output has settled correctly (i.e. the time between samples is less than the settling time of the signal amplifier), then an incorrect data value may be sampled. Poor settling time is a major problem, because the level of inaccuracy varies with the gain and sampling rate and can't be reported to the host computer.

To allow for the lowest possible input ranges, and therefore the highest required gains, A/D boards adjust their internal timing to allow a longer settling time for the output of the amplifier. This means that the longest required settling time, and the reduction in throughput it causes, is imposed for all amplifier gain settings. More advanced A/D boards take into account the input range and amplifier gain actually required, thereby increasing throughput at higher signal level input ranges where lower gain settings are used.

F.2.3 Channel-gain arrays

On the original A/D boards the address of the channel to be sampled was written to the multiplexer, the gain setting was sent to the programmable gain amplifier (PGA), and once the signal had settled an A/D conversion was initiated. The data was subsequently read and transferred to the PC's memory. This incurred a large software overhead: background operation using interrupts is difficult and slower than polled I/O, while accurately timed samples and higher-speed data transfer methods, such as DMA and repeat string instructions, are impossible in either case.

The use of channel-gain arrays (CGA) on many A/D boards overcomes these limitations.

The channel/gain array is a programmable memory buffer on the A/D board, which contains the channel address and gain setting for each input channel to be sampled. The gain of the amplifier for a particular channel is set by the internal hardware preceding the sampling of the channel, based on the gain value read from the channel/gain array. Where a single PGA is provided for all channels, the gain required for each channel is stored in a channel/gain array.

If there are individual PGAs for each input channel, the gains for each input amplifier are stored in a gain array. The gain of each remains the same until overwritten by the software.

Channel-gain arrays vary in size from a few channel/gain pairs (one for each channel), to many thousands of pairs.
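
Conceptually, a channel-gain array is just a small table of (channel, gain) pairs that the board's hardware steps through before each conversion. The sketch below is a software-side model of that idea; the names and values are illustrative and do not correspond to any real driver API.

    # Illustrative model of a channel/gain array: the board steps through these
    # pairs in order, setting the multiplexer address and PGA gain before each sample.
    channel_gain_array = [
        (0, 1),    # channel 0, gain 1   (e.g. a 0-10 V signal)
        (1, 100),  # channel 1, gain 100 (e.g. a millivolt-level signal)
        (2, 10),   # channel 2, gain 10
    ]

    def next_scan_entry(index: int):
        """Return the channel address and gain for the next conversion, wrapping around."""
        channel, gain = channel_gain_array[index % len(channel_gain_array)]
        return channel, gain

    for i in range(5):
        print(next_scan_entry(i))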

F.2.4 Sample and hold circuits

As shown in ill. F.2, a sample-and-hold (S/H) device consists of an analog signal input and input buffer, an analog signal output and output buffer, a charge-storing device, usually a capacitor, and a control input that controls the switching circuitry, which in turn, connects the input to the output.

ill. F.2 Functional diagram of a sample-and-hold device

As its name implies, a S/H has two operating states. When in sample mode, a sample command applied to the control input closes the internal switch, thereby causing the output to track the input as closely as possible. In this mode, the hold capacitor charges to the voltage level applied at the input. When a hold command is applied to the control input, the switch opens, disconnecting the output from the input. With the switch open and the high impedance of the output amplifier preventing the premature discharging of the capacitor, the hold capacitor retains the value of the input signal at the stage the hold command was applied to the control input.

With the exception of some flash A/D converters, which are very fast, most A/D converters require a fixed time period during which the input signal to be converted remains constant.

When used at the input to an A/D converter, a sample and hold circuit performs this function, acquiring an analog signal at the precise time its control input is made active. The A/D converter can then convert the voltage held at the output of the sample and hold - minimizing inaccuracies in the conversion due to changes in the signal during the conversion process.

Important signal parameters

• Hold settling time: The time that elapses from the occurrence of the sample command, to the point where the output has settled within a given error band of the input, is known as the acquisition time or hold settling time.

• Aperture time: The time required to switch from the sample state, measured from the 50% point of the mode control signal, to the hold state (the time the output stops tracking the input), is known as the aperture time.

• Aperture uncertainty: This value represents the difference between the maximum and minimum aperture times.

• Droop rate: A practical sample and hold can't maintain its output voltage indefinitely while in the hold mode. The rate at which the held voltage decays is known as the decay or droop rate.

• Aperture matching: Data acquisition boards capable of performing simultaneous sampling (see Simultaneous sampling) require sample and hold devices on each channel.

The smaller the aperture time and aperture uncertainty for each of these devices, the narrower the time range over which all the simultaneous samples will be taken. For a data acquisition board this is known as aperture matching. The lower the value, the more closely matched in time the simultaneous samples will be.

As a point of note, A/D boards that perform simultaneous sampling still have the sample and hold circuit that precedes the A/D converter, as each channel sample still has to be multiplexed to the A/D converter. Some A/D converters have built-in sample and hold circuitry, and where this is the case, the preceding sample and hold circuit is not required.

F.2.5 A/D converters

Real-world signals are analog signals, representing some measured physical parameter for every instant in time. They must be converted to a discrete time signal to be interpreted and processed by computers. As their name would imply, analog-to-digital converters (A/D converters, ADCs) measure an analog input voltage and convert this into a digital output format.

A/D converters therefore represent the heart of an A/D board or a data acquisition system.

The main types of A/D converters used and the specific and important parameters relating to their operation are detailed in the following sections.

Successive approximation A/D converters:

Successive approximation is the most common and popular direct A/D conversion method used in data acquisition systems because it allows high sampling rates and high resolution while still being reasonable in terms of cost. Throughput of a few hundred kHz for 12-bit ADCs is common, while 16-bit ADCs employing a hybrid conversion method (i.e. successive approximation plus a much faster method such as flash) are capable of throughput up to 1 MHz, while still being reasonable in cost. One clear advantage of this device is that it has a fixed conversion time proportional to the number of bits, n, in the digital output. If the approximation period is T, then an n-bit converter will have a conversion time of around n × T. Each successive bit, which doubles the ADC's resolution, increases the conversion time by the period T only. The functional diagram of an n-bit successive approximation A/D converter is shown in ill. F.3.

ill. F.3 Functional diagram of an n-bit successive approximation A/D converter

The successive approximation technique generates each bit of the output code sequentially, starting with the most significant bit (MSB). The operation is similar to a binary search and is based on successively closer comparisons between the analog input signal and the analog output from an internal D/A converter.

The A/D converter starts the procedure by setting the digital input to the D/A converter, so that its analog output voltage is half the full-scale voltage of the A/D device. A comparator is used to compare the D/A analog output to the analog input signal being measured.

If the analog input signal is greater, the most significant bit (MSB) of the D/A converter input is set to logic 1 and the next most significant bit of the D/A converter input is set to logic 1, setting the analog output of the D/A at 3/4 of full scale voltage. If the analog input signal was less, the MSB of the D/A input is cleared to logic 0 and the next most significant bit of the D/A input is set to logic 1, setting the analog output of the D/A at 1/4 of full-scale voltage.

Each step effectively divides the remaining fraction of the input range in half and again compares it to the analog input signal. This is repeated until all n bits of the A/D conversion have been determined. It's obviously important that the analog input signal to the A/D does not change during the conversion process, hence the use of sample and hold circuits.
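
The binary-search nature of the conversion can be modeled in a few lines of Python. This is an idealized software model of the technique (perfect comparator and internal DAC), not real converter hardware or firmware.

    def sar_convert(v_in: float, v_fs: float, bits: int) -> int:
        """Idealized successive approximation: test each bit from MSB to LSB,
        keeping it if the trial DAC output does not exceed the input."""
        code = 0
        for bit in range(bits - 1, -1, -1):
            trial = code | (1 << bit)
            v_dac = v_fs * trial / (1 << bits)   # internal DAC output for this trial code
            if v_in >= v_dac:                    # comparator decision
                code = trial                     # keep the bit
        return code

    # 6 V on a 0-10 V, 12-bit converter converges to code 0x999 after 12 comparisons.
    print(hex(sar_convert(6.0, 10.0, 12)))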

Flash A/D converters:

Flash A/D converters are the fastest available A/D converters, operating at speeds up to hundreds of MHz. This type of device is used where extremely high speeds of conversion are required with lower resolution, e.g., 8-bits.

ill. F.4 shows the functional diagram of an n-bit flash A/D converter. Each of the 2^n - 1 comparators simultaneously compares the input signal voltage to a reference voltage determined by its position in the resistor series, and corresponding to the output code of the device. Flash A/D conversion is quicker than other methods of A/D conversion because every bit of the output code is determined simultaneously, irrespective of the number of bits of resolution.

However, the greater the resolution of the device, the greater the number of comparators required to perform the conversion. In fact, each additional bit doubles the number of comparators, and therefore increases the size and cost of the chip.
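
The exponential growth in comparator count is easy to see numerically; a short illustrative calculation:

    def flash_comparators(bits: int) -> int:
        """An n-bit flash converter needs 2^n - 1 comparators."""
        return 2 ** bits - 1

    for n in (4, 8, 10, 12):
        print(n, "bits ->", flash_comparators(n), "comparators")
    # 8 bits needs 255 comparators; 12 bits would already need 4095.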

ill. F.4 Functional diagram of an n-bit flash A/D converter

Flash A/D converters tend to be found in specialist boards, such as digital oscilloscopes, real-time digital signal processing applications and general high-frequency applications.

Integrating A/D converters:

Integrating A/D converters use an indirect method of A/D conversion, whereby the analog input voltage is converted to a time period that is measured by a counter. The functional diagram of a dual-slope integrating A/D converter is shown in ill. F.5(a).

ill. F.5(a) Functional diagram of an n-bit dual slope integrating A/D converter

ill. F.5(b) Voltage appearing at V0

The operation of a dual slope integrating A/D converter is based on the principle that the output of an integrating amplifier to a constant voltage input is a ramp whose slope is negative and proportional to the magnitude of the input voltage.

At the start of the A/D conversion, the counter is cleared to zero and the unknown analog input voltage is applied to the input of the integrating amplifier. As soon as the output of the integrating amplifier reaches zero, a fixed-interval count begins. After a predetermined count period, T, the count is stopped. For a positive analog input voltage, the output of the integrating amplifier will have reached a negative value proportional to the magnitude of the input analog signal. This is shown in ill. F.5(b). If the analog input varies during the fixed count time interval, then the output of the integrating amplifier is proportional to the average value of the input over the fixed time interval. This is especially useful for elimination of cyclical noise and/or mains hum appearing at the input.

At this point, the count register is again cleared. A negative fixed voltage reference is now applied to the input of the integrating amplifier and the count begins. When the output of the integrating amplifier again returns to zero the count is stopped. The average value of the input analog signal is equal to the ratio of the counts multiplied by the reference voltage. This is very effective in averaging and therefore eliminating cyclical noise appearing at the analog input.
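
In other words, the average input is recovered as Vin = Vref × (rundown count / fixed count). The tiny sketch below uses assumed, idealized count values purely to show the arithmetic.

    def dual_slope_vin(v_ref: float, fixed_count: int, rundown_count: int) -> float:
        """Average input voltage recovered from a dual-slope conversion:
        Vin = Vref * (rundown count / fixed integration count)."""
        return v_ref * rundown_count / fixed_count

    # With a 10 V reference, a fixed count of 10000 and a rundown count of 2440,
    # the average input over the integration period was about 2.44 V.
    print(dual_slope_vin(10.0, 10000, 2440))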

Integrating A/D converters generally include an additional, preceding phase, during which the device carries out a self-calibrating, auto-zero operation. The stability, accuracy and speed of the clocking mechanism, the duration of the count period, and the accuracy and stability of the voltage reference determine the accuracy of the device.

These devices are low speed, typically a few hundred hertz maximum. However, they are capable of high accuracy and resolution at low cost. For this reason they are principally used in low frequency applications, such as temperature measurement, in digital multimeters and instrumentation.

Important A/D parameters

Analog to digital conversion is essentially a ratio operation, whereby the analog input signal is compared to a reference (full-scale voltage), converted to a fraction of this value, and then represented by a digital number. In approximating an analog value, two operations are performed.

Firstly, the quantization or mapping of the analog input into one of several discrete ranges, and secondly, the assignment of a binary code to each discrete range. ill. F.6(a) shows the ideal transfer function of a 3-bit A/D converter with a unipolar (0 V to +FSV) input.

The horizontal axis represents the analog input signal as a fraction of full-scale voltage (FSV) and the vertical axis represents the digital output. An n-bit A/D converter has 2^n distinct output codes. While not used in practical DAQ systems, a 3-bit A/D converter represents a convenient example since it divides the analog input range into 2^3 = 8 divisions, each division representing a binary code between 000 and 111. ill. F.6(b) shows the ideal transfer function of a 3-bit A/D converter with a bipolar (-FSV to +FSV) input. This is equivalent to the unipolar transfer function except that it's offset by -FSV.

With regard to ill. F.6, some of the important parameters of A/D converters are discussed below.

ill. F.6 Ideal transfer function of a 3-bit A/D converter: (a) transfer function with unipolar input; (b) transfer function with bipolar input

Code width:

This is the fundamental quantity for A/D converter specifications, and is defined as the range of analog input values for which a single digital output code will occur. The nominal value of a code width, for all but the first and last codes in the ideal transfer characteristic, is the voltage equivalent of 1 LSB of the full-scale voltage. Therefore, for an ideal 12-bit A/D converter with a full-scale voltage of 10 V the code width is 2.44 mV. Noise and other conversion errors may cause variations in code width; however, the code width should not generally be less than 1/2 LSB or greater than 3/2 LSB for practical A/D converters.

Resolution:

Resolution defines the number of discrete ranges into which the full scale voltage (FSV) input range of an A/D converter can be divided to approximate an analog input voltage. It's usually expressed as the number of bits the A/D converter uses to represent the analog input voltage (i.e. n-bit), or as a fraction of the maximum number of discrete levels that can be used to represent the analog signal (i.e. 1/2^n). The resolution only provides a guide to the smallest input change that can be reliably distinguished by an ideal A/D converter, or in effect its ideal code width.

For example, when measuring a 0-10 V input signal, the smallest voltage change an A/D converter with 12-bit resolution can reliably detect is equal to 1/4096 × FSV = 10/4096 = 2.44 mV. Therefore, each 2.44 mV change at the input would change the output by ±1 LSB, or ±0x001. 0 V would be represented by 0x000, while the maximum voltage, represented by 0xFFF, would be 9.9976 V. Due to the staircase nature of the ideal transfer characteristic, a much smaller change in the input voltage can still cause the A/D converter to make a transition to the next digital output level, but this will not reliably be the case. Changes smaller than 2.44 mV will therefore not be reliably detected. If the same 12-bit A/D converter is used to measure an input signal ranging from -10 V to +10 V, then the smallest detectable voltage change increases to 4.88 mV.
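
The resolution arithmetic above can be captured in two small helper functions. This is an idealized model of a 12-bit converter that ignores noise and offset/gain errors; the function names are illustrative only.

    def volts_to_code(v_in: float, v_min: float, v_max: float, bits: int = 12) -> int:
        """Ideal quantization of an input voltage to an output code."""
        lsb = (v_max - v_min) / (2 ** bits)
        code = int((v_in - v_min) / lsb)
        return min(max(code, 0), 2 ** bits - 1)   # clamp to the valid code range

    def code_to_volts(code: int, v_min: float, v_max: float, bits: int = 12) -> float:
        """Nominal voltage at the bottom of a code's range."""
        return v_min + code * (v_max - v_min) / (2 ** bits)

    print(hex(volts_to_code(9.9976, 0.0, 10.0)))  # 0xfff, the top code on a 0-10 V range
    print(code_to_volts(0x001, 0.0, 10.0))        # 0.00244... V, one LSB above zero
    print((10.0 - (-10.0)) / 4096)                # 4.88 mV LSB on a +/-10 V range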

Input range:

Range refers to the maximum and minimum input voltages that the A/D converter can quantize to a digital code. Typical A/D converters provide convenient selection of a number of analog input ranges, including unipolar input ranges (e.g. 0 to +5 V or 0 to +10 V) and bipolar input ranges (e.g. -5 V to +5 V or -10 V to +10 V). On A/D boards, the input range is usually selectable by on-board jumpers.

Note that the transfer functions of ill. F.6 show that the maximum input voltage is 1 LSB less than the nominal full-scale voltage (FSV). If it's essential that the A/D's input range go from 0 to FSV, then for some A/D converters it may be possible to adjust the voltage reference to slightly above nominal FSV so that this can be achieved. This increases the real full-scale range and the LSB value by a small amount. For an input range of 0-10 V, a code of 0x000 now represents 0 V while 0xFFF represents 10 V.

Data coding:

While most A/D converters express unipolar ranges (i.e. 0-10 V) in straight binary, some return complementary binary, which is just the binary code with each bit inverted. Where A/D converters are used to measure voltages in bipolar ranges (i.e. -10 V to +10 V) there is an increased number of ways of representing the coded output (offset binary, sign and magnitude, one's complement and two's complement).

Most commonly, and for simplicity, A/D converters return offset binary values.

This means that the most negative voltage in a bipolar range (-5 V for a range of -5 V to +5 V) is returned as 0x000, while the highest digitally coded value of 0xFFF (for a 12-bit ADC) represents 4.9976 V. 0x800 represents the mid-scale voltage of 0 V.
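
Decoding an offset binary value back to a bipolar voltage is just a shift by half the code range. The short helper below assumes the 12-bit, ±5 V example from the text.

    def offset_binary_to_volts(code: int, v_fs: float = 5.0, bits: int = 12) -> float:
        """Decode an offset binary value from a bipolar +/- v_fs input range."""
        lsb = 2 * v_fs / (2 ** bits)
        return (code - 2 ** (bits - 1)) * lsb

    print(offset_binary_to_volts(0x000))  # -5.0 V (most negative input)
    print(offset_binary_to_volts(0x800))  #  0.0 V (mid-scale)
    print(offset_binary_to_volts(0xFFF))  #  4.99755859375 V (~4.9976 V)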

Conversion time:

The conversion time of an A/D converter is defined as the time taken from the initiation of the conversion process to valid digital data appearing at the output. For most A/D converters, the conversion rate is simply the inverse of the conversion time. Therefore, an A/D converter with a conversion time of 25 µs is able to continuously convert analog input signals at a rate of 40,000 per second. For some high-speed A/D converters, pipelining allows new conversions to be initiated before the results of prior conversions have been determined. An example of this would be an A/D converter that could perform conversions at a rate of 5 MHz (one conversion started every 200 ns), but actually took 675 ns (a 1.48 MHz conversion rate) to complete each individual conversion.
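
The throughput figures quoted above follow directly from this reciprocal relationship; a quick numeric check using the values from the text:

    def rate_from_conversion_time(t_seconds: float) -> float:
        """Continuous conversion rate of a non-pipelined ADC is 1 / conversion time."""
        return 1.0 / t_seconds

    print(rate_from_conversion_time(25e-6))   # 40000.0 conversions per second
    print(rate_from_conversion_time(675e-9))  # ~1.48 million per second (latency-limited)
    print(rate_from_conversion_time(200e-9))  # 5 million per second (pipelined start rate)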

Errors in A/D converters:

Errors that may occur in A/D converters are defined and measured in terms of the location of the actual transition points in relation to their locations in the ideal transfer characteristic.

These are discussed below.

Quantization uncertainty:

Unlike a D/A converter, where there exists a unique analog value for each digital code, each digital output code of an A/D converter is valid over a range of analog input values. Analog inputs within a given discrete range are represented by the same digital output code, usually assigned to the nominal mid-range analog value. There is, therefore, an inherent quantization uncertainty of ±1/2 LSB (least significant bit), in addition to any other actual conversion errors. This is shown in ill. F.6(b).

Unipolar offset:

Note that in the ideal transfer function, the first transition should ideally occur 1/2 LSB above analog common. The unipolar offset is the deviation of the actual transition point from the ideal first transition point. This is shown in ill. F.7(a).

Bipolar offset:

As seen in ill. F.7(b), the transfer function for an ideal bipolar ADC resembles the unipolar transfer function, except that it's offset by the negative full-scale voltage (-FSV). Offset adjustment of a bipolar A/D converter is set so that the first transition occurs 1/2 LSB above -FSV, while the last transition occurs 3/2 LSB below +FSV. Because of non-linearity, a device with perfectly calibrated end points may have an offset error at analog common. This is known as the bipolar offset error and is shown in ill. F.7(b).

ill. F.7 3-bit A/D converter transfer functions with offset errors: (a) unipolar offset error; (b) bipolar offset error

Unipolar and bipolar gain errors:

The gain, or scale, factor is the number that establishes the basic conversion relationship between the analog input values and the digital output codes, e.g. 10 V full-scale. It represents the straight-line slope of the ideal transfer characteristic. The gain error is defined as the difference in full-scale values between the ideal and the actual transfer function when any offset errors are adjusted to zero. It's expressed as a percentage of the nominal full-scale value or in LSBs. Gain error affects each code in an equal ratio. Unipolar and bipolar gain errors are shown in ill. F.8.

ill. F.8 3-bit A/D converter transfer functions with gain errors: (a) unipolar gain error; (b) bipolar gain error

Offset and gain drift

Offset and gain errors are usually adjustable to zero with calibration; however, this calibration is only valid at the temperature at which it was made.

Changes in temperature result in non-zero offset and gain errors, known as offset drift and gain drift. These values, specified in ppm/°C, represent the ADC's sensitivity to temperature changes.

Linearity errors:

With most ADCs the gain and offset specifications are not the most critical parameters that determine an A/D converter's usefulness for a particular application, since in most cases they can be calibrated out in software and/or hardware. The most important error specifications are those that are inherent in the device and can't be eliminated. Ideally, as the analog input voltage of an A/D converter is increased, the digital codes at the output should also increase linearly; the ideal transfer function of analog input voltage versus digital output code would plot as a straight line. Deviations from this straight line are specified as non-linearities.

The most important of these (because they are errors that can't be removed) are integral non-linearity and differential non-linearity errors. The transfer characteristics of a 4-bit A/D converter showing differential and integral linearity errors are shown in ill. F.9(a) and ill. F.9(b).

ill. F.9(a) Transfer function of a 4-bit A/D converter with integral non-linearity and differential non-linearity errors; integral non-linearity errors specified as low-side transition

ill. F.9(b) Transfer function of a 4-bit A/D converter with integral non-linearity and differential non-linearity errors; integral non-linearity errors specified as center-of-code transition

Integral non-linearity (INL):

This is the deviation of the actual transfer function from the ideal straight line. This ideal line may be drawn through the points where the codes begin to change (low-side transition or LST), as shown in ill. F.9(a), or through the center of the ideal code widths (center-of-code or CC), as shown in ill. F.9(b). Most A/D converters are specified by low-side-transition INL. Thus, the line is drawn from the point 1/2 LSB on the vertical axis at zero input to the point 3/2 LSB beyond the last transition at full-scale input. The deviation of any transition from its corresponding point on that straight line is the INL of the transition. In ill. F.9(a), the transition to code 0100 is shifted to the right by 1 LSB, meaning that the LST of code 0100 has an INL of +1 LSB. In the same figure the transition to code 1101 is shifted left by 1/2 LSB, meaning that the LST of code 1101 has an INL of -1/2 LSB.

When the ideal transfer function is drawn for center-of-code (CC) integral non-linearity specification, as shown in ill. F.9(b), the INL of each transition may be different. Where the digital code 1101 previously had -1/2 LSB of LST INL, it now has 0 LSB of CC INL.

Similarly, the code 1011 has -1/8 LSB of CC INL, where it previously had 0 LSB of LST INL.

The INL is an important figure because the accurate translation from the binary code to its equivalent voltage is then only a matter of scaling.

Differential non-linearity (DNL):

In an ideal A/D converter, the midpoints between code transitions should be 1 LSB apart.

Differential non-linearity is defined as the deviation in code width from the ideal value of 1 LSB. Therefore, an ideal A/D converter has a DNL of 0 LSB, while a practical one would typically be within ±1/2 LSB. If DNL errors are large, the output code widths may represent excessively large or small ranges of input voltages. Since a code can't have a code width less than 0 LSB, the DNL can never be less than -1 LSB. In the worst case, where the code width is equal to or very near zero, a missing code may result. This means that there is no voltage in the entire full-scale voltage range that can cause the code to appear. In ill. F.9(a) and F.9(b), the code width of code 0110 is 2 LSBs, resulting in a differential non-linearity of +1 LSB. As the code width of code 1001 is 1/2 LSB, this code has a DNL of -1/2 LSB. In addition, the code 0111 does not exist for any input voltage. This means that code 0111 has a DNL of -1 LSB and the A/D converter has at least one missing code.
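
DNL can be computed directly from measured code-transition points. The sketch below assumes an ideal LSB of 1 (arbitrary units) and hypothetical transition values chosen to reproduce the +1 LSB and -1/2 LSB cases described above.

    def dnl_from_transitions(transitions, ideal_lsb: float):
        """DNL of each code, in LSBs: (actual code width - ideal width) / ideal width.
        `transitions` holds the input values at which the output code changes."""
        dnl = []
        for lower, upper in zip(transitions, transitions[1:]):
            width = upper - lower
            dnl.append(width / ideal_lsb - 1.0)
        return dnl

    # Hypothetical transition points (ideal LSB = 1.0): the second code is 2 LSB wide
    # (DNL = +1), the third is 0.5 LSB wide (DNL = -0.5).
    print(dnl_from_transitions([0.5, 1.5, 3.5, 4.0, 5.0], ideal_lsb=1.0))
    # [0.0, 1.0, -0.5, 0.0]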

Often, instead of a maximum DNL specification, there will simply be a specification of monotonicity or no missing codes. For a device to be monotonic, the output must either increase or remain constant as the analog input increases. Monotonic behavior requires that the differential non-linearity be more positive than -1 LSB. However, the differential non-linearity error may still be more positive than +1 LSB. Where this is the case, the resolution for that particular code is reduced.

F.2.6 Memory (FIFO) buffer

A characteristic of high-speed A/D boards is the inclusion of on-board memory or I/O in the form of a FIFO (first in first out) buffer or a pair of buffers. These range in size from 16 bytes to 64 Kbytes.

The FIFO buffer(s) form a fast temporary memory area. For small FIFOs, the buffer is addressed as I/O. Larger FIFOs are actually mapped into the memory address space of the host PC. Samples can therefore be collected up to the maximum size of the buffer without actually having to perform any data transfers. Where more samples are required, existing data in the buffer must be transferred to other parts of the main memory, or written to hard disk, before it's overwritten.

FIFO buffering is particularly useful in situations where the host computer is using polled I/O or interrupts to transfer data, and might not be able to respond quickly enough to transfer the current sample, before it's overwritten by a subsequent sample. This would typically occur when the host computer is running a multitasking operating system such as Windows or OS/2, where there are inherent interrupt latencies or a large number of tasks being performed. The FIFO buffer also has the effect of evening-out variations in DMA response times, helping to guarantee full-speed operations even with substandard PC/AT clones. On-board FIFOs, when used in conjunction with specific data techniques such as polled I/O, interrupts, DMA or repeat string instructions, can greatly improve the throughput of A/D boards.

F.2.7 Timing circuitry

To perform multiple analog-to-digital conversions automatically, at precisely defined time intervals, A/D boards are equipped with timing circuitry whose principal responsibility is to generate the strobe signals that allow the components of the analog input circuitry to perform their respective functions efficiently and correctly.

Clocking circuits are made up of a frequency source, which is either an on-board oscillator between 400 kHz and 10 MHz or an external user-supplied signal, and a prescaler/divider network, typically a counter/timer chip that slows the clock signal down to more usable values. The clock frequency can be as low as 1 Hz or up to the maximum throughput of the board.
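
The relationship between the on-board oscillator, the divider network, and the resulting sample rate is straightforward; a small sketch with assumed values:

    def divider_for_rate(clock_hz: float, desired_rate_hz: float) -> int:
        """Integer divisor a counter/timer would need to approximate the desired sample rate."""
        return round(clock_hz / desired_rate_hz)

    clock = 10_000_000                      # assumed 10 MHz on-board oscillator
    divisor = divider_for_rate(clock, 100_000)
    print(divisor, clock / divisor)         # 100 -> exactly 100 kHz sampling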

A/D conversions are started by triggers; either by a software trigger (writing to an on-board register), or an external hardware trigger. Data conversions can be synchronized with external events by using external clock frequency sources and external triggers. The external trigger event is usually in the form of a digital or analog signal, and will begin the acquisition depending on the active edge if the trigger is a digital signal, or the level and slope, if the trigger is an analog signal.

In performing an analog-to-digital conversion cycle on a single input channel, the timing circuitry must ensure that the following steps are performed (a simplified software model of this sequence is sketched after the list):

• Once the channel/gain array has been initialized, the timing circuitry increments to the next channel/gain pair. The next channel to be sampled is output to the address lines of the input multiplexer and the required gain setting is output to the programmable gain amplifier (PGA). The sample-and-hold (S/H) is put into sample mode.

• The timing circuitry must wait for the input multiplexer to settle, then for the PGA output delay time and lastly for any S/H delay.

• The S/H is put into hold mode. The timing circuitry must wait for the duration of the aperture time of the S/H for the signal to become stable at the output of the S/H.

• A start conversion trigger is issued to the A/D converter.

• The timing circuitry waits for the end of conversion signal from the A/D converter to become active.

• The available data is then strobed from the A/D converter into a data buffer or a FIFO, from where it's usually accessible by the host computer.

• If simultaneous sampling is available on the A/D board, the timing circuitry generates the necessary sequence of strobes to the input S/H devices, so that all channels are sampled at the beginning of the sampling cycle, before the data is passed to the rest of the analog input circuitry.
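
As referenced above, the single-channel sequence can be summarized as a simplified software model. On a real board every step below is performed by hardware; the function, the delays and the fake ADC here are placeholders used only to show the ordering of events.

    import time

    def wait(seconds: float) -> None:
        """Stand-in for the hardware delays enforced by the timing circuitry."""
        time.sleep(seconds)

    def conversion_cycle(channel_gain_array, index, adc_convert):
        """Idealized model of one conversion cycle on a single channel."""
        channel, gain = channel_gain_array[index % len(channel_gain_array)]
        print(f"mux -> channel {channel}, PGA gain {gain}, S/H in sample mode")
        wait(1e-6)                        # multiplexer settling + PGA delay + S/H delay
        print("S/H in hold mode")
        wait(0.05e-6)                     # aperture time
        result = adc_convert(channel)     # start conversion, wait for end-of-conversion
        print(f"result 0x{result:03X} strobed into FIFO")
        return result

    # Usage with a fake ADC that just returns a fixed code per channel.
    fake_adc = lambda ch: 0x800 + ch
    conversion_cycle([(0, 1), (1, 100)], 0, fake_adc)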

Total throughput, for multiple conversions on different channels, is often increased by overlapping parts of this cycle. For example, while the A/D converter is busy converting the S/H output, the next channel/gain pair can be output to the multiplexer and PGA, so that their settling and delay times are overlapped with the A/D conversion time.

The timing circuitry may also include a block-sampling mode, which allows blocks of samples to be collected at regular intervals at the A/D board's maximum sampling rate. This is discussed in the section on Sampling techniques, p. 151.

F.2.8 Expansion bus interface


The bus interface provides the control circuitry and signals used to transfer data from the board to the PC's memory or for sending configuration information (e.g. channel/gain pairs) or other commands (e.g. software triggers) to the board.

It includes:

• The plug-in connector, which provides the hardware interface for connecting all control and data signals to the expansion bus (e.g. ISA, EISA, etc.) of the host computer.

• The circuitry, which determines the base address of the board. This is usually a selectable DIP switch and defines the addresses of each memory and I/O location on the A/D board.

• The source and level of interrupt signals generated. Interrupt signals can be programmed to occur at the end of a single conversion or a DMA block. The configuration of the interrupt levels used is commonly selected by on-board links.

• DMA control signals and the configuration of the DMA level(s) used. The configuration of the DMA levels used is typically selected by on-board links.

• Normal I/O to and from I/O address-locations on the board.

• Wait state configuration for use in machines with high bus speeds or with non-standard timing. The number of wait states is usually configurable by on-board links.
