Matching amplifiers with A/D converters
Selecting the right amplifier to drive an A/D can dramatically improve system performance, especially at higher resolutions
BY IAN BRUCE and WALT KESTER
Analog Devices, Inc.
Norwood, MA
Many engineers, faced with designing or upgrading a data acquisition system, are tempted by the recent proliferation of low-cost monolithic high-resolution A/D converters. After all, why live with 12 bits when you can afford 14- or 16-bit converters with all the attendant improvements in performance? Designers have grown used to thinking of the A/D as the element that limits performance in the data acquisition chain and, by extension, to assuming that upgrading the converter will upgrade the system. In the past, the circuitry ahead of the converter was usually more than a match for the job.
A designer who has grown familiar with, for example, the AD574 industry-standard 12-bit A/D, but now desires greater resolution, may be tempted to simply substitute a 14- or 16-bit converter with a similar architecture. But caveat emptor: selecting the appropriate A/D requires careful thought, and in any case is only part of the problem. Just as important is the drive amplifier for the converter, which will dramatically affect system performance and should be selected with equal care.

Selecting an A/D

Before we investigate drive amplifiers for A/Ds, it is worth reviewing the performance specifications of the converters themselves. Here we should distinguish between sampling A/Ds (that is, converters with an internal sample-and-hold amplifier function at the input) and A/Ds designed to operate on signals that remain constant during the encoding interval.
Successive-approximation A/Ds, such as the previously mentioned AD574, require the analog input to be held constant during the entire conversion cycle in order to fix the input value and the time at which it was measured. External sample-and-hold amplifiers are often used ahead of this type of converter when encoding rapidly changing ac signals. However, this S/H amp/A/D combination is often difficult to specify; many designers have upgraded their designs to use new, pin-compatible replacement converters that include internal S/H amps and guaranteed ac specifications. Sampling converters are almost mandatory in applications involving digital signal processing (DSP), where a uniform sampling rate is required and the ac characteristics of the converter are critical.
Many factors go into the selection of a particular A/D, but most relate to the character of the signal being processed. Signal bandwidth usually dictates the minimum sampling rate: Nyquist's criterion states that, for no loss of information, the signal being processed must be sampled at a rate equal to at least twice the maximum signal bandwidth of interest. Selecting the resolution of the converter will determine how finely the analog input signal can be resolved, but it won't necessarily determine accuracy. Here it is worth considering the case of the theoretically perfect converters shown in the table. The table shows the weight of the least significant bit (LSB) relative to a 10-V full-scale input signal. Also included in the table is the theoretical rms quantization noise for a sine-wave input to an ideal n-bit A/D with no dc or ac errors. As the input to an A/D increases smoothly, the output moves up in a series of steps. If the digital output is reconstructed with an ideal digital-to-analog converter and subtracted from the analog input, the result is an error waveform that can be considered a type of white noise (the quantization noise). Its rms value is q/√12, where q is the weight of one LSB, and the corresponding theoretical signal-to-noise ratio (SNR) for a full-scale sine-wave input to an ideal n-bit converter is 6.02n + 1.76 dB.
No real converter is ideal. All exhibit errors of one sort or another, which show up as noise and distortion added to the ideal quantization noise. As a result, the SNR is always less than the theoretical value, a shortfall captured by a specification called the effective number of bits (ENOB), which converts the measured SNR back into an equivalent resolution. For example, the 16-bit AD676 A/D has an SNR of 88 dB, which corresponds to an ENOB of about 14.3 bits.
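If you want to check these figures yourself, the short Python sketch below reproduces the ideal-converter arithmetic–LSB weight, quantization noise, and theoretical SNR–and converts the AD676's 88-dB SNR into an ENOB. Only the 10-V full scale and the 88-dB figure come from the text; the formulas are the standard ones.

import math

def lsb_weight(n_bits, full_scale=10.0):
    """Weight of one LSB for an n-bit converter with the given full-scale range."""
    return full_scale / 2 ** n_bits

def quantization_noise_rms(n_bits, full_scale=10.0):
    """Theoretical rms quantization noise q/sqrt(12) of an ideal n-bit converter."""
    return lsb_weight(n_bits, full_scale) / math.sqrt(12)

def ideal_snr_db(n_bits):
    """Theoretical SNR for a full-scale sine wave into an ideal n-bit A/D."""
    return 6.02 * n_bits + 1.76

def enob(snr_db):
    """Effective number of bits implied by a measured SNR."""
    return (snr_db - 1.76) / 6.02

for n in (12, 14, 16):
    print(f"{n} bits: LSB = {lsb_weight(n)*1e6:7.1f} uV, "
          f"qn = {quantization_noise_rms(n)*1e6:6.1f} uV rms, "
          f"ideal SNR = {ideal_snr_db(n):5.1f} dB")

# The 16-bit AD676's 88-dB SNR quoted in the text:
print(f"AD676 ENOB at 88 dB SNR: {enob(88.0):.1f} bits")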
The above analysis isn't just an academic exercise; it shows the best we can achieve, and how, as converter resolution is increased, the noise (which includes distortion) must shrink dramatically. Consider the 12-bit case: here an LSB is 244 ppm of full scale (2.44 mV of a 10-V signal) and the best SNR we can achieve is about 74 dB. In the case of 16 bits, the LSB has diminished to 15 ppm, or only 153 microvolts of a 10-V signal–a difficult resolution to achieve even under the best of circumstances. Here lies the problem for the designer trying to develop a high-resolution data acquisition system: he or she may take great pains in selecting the appropriate A/D, but unless equal care is taken in selecting and implementing the circuitry that precedes the converter, especially the drive amplifier, the signal is likely to be corrupted. For example, a buffer amplifier that exhibits a few millivolts of offset over the operating temperature range will reduce the effective dc performance of the system to the 12-bit level, no matter what the resolution of the converter is.

A/D drive amplifiers
The primary functions of the drive amplifier are to provide signal buffering, gain (if required), and perhaps level shifting. (Drive amplifiers frequently perform a sample-and-hold function too, but that is beyond the scope of this article.) A/Ds usually present an active and passive load to the amplifier, and the amplifier output drive must match these load requirements. Also, of course, the amp must be stable under the required gain and load conditions.
Low-frequency specifications
Selecting an op amp based on dc parameters is relatively straightforward. Specifications to consider are gain accuracy and gain drift, and voltage offset and voltage offset drift. Here the dc open-loop gain must be sufficient to provide the necessary closed-loop accuracy for the application at hand. For voltage-feedback amplifiers, calculating the dc gain error is relatively simple: the gain error is approximately equal to 1/(AO × B), where AO is the open-loop voltage gain and B is the feedback factor (AO × B is therefore the dc loop gain). For example, if 16-bit accuracy (0.00075%) is required for a unity-gain inverter (in which case 1/B = 2), then the dc open-loop gain must be at least 262,000, or 108 dB. For a 12-bit system, the dc open-loop gain must be at least 16,400, or about 84 dB. When checking the dc open-loop gain characteristics of an op amp on the manufacturer's datasheet, be sure the specification is given at an output load equal to the A/D's input resistance: in most cases dc open-loop gain gets worse with heavier loading (decreasing load resistance). With voltage-feedback amplifiers, shifts in open-loop gain of a factor of two over temperature aren't uncommon; such a decrease in gain produces a doubling of the gain error. Selecting an amp–such as the industry-standard OP-27–with high open-loop gain and/or low gain drift will help, as will increasing the feedback factor, if that is practical. Similar expressions can be derived for current-feedback amplifiers, for which the closed-loop gain of an inverting stage is:
ACL = (R2/R1) × 1/(1 + (RS/T) × (R2/RS + R2/R1 + 1))
In the above equation, T is the open-loop transimpedance, RS is the inverting-input impedance (usually between 10 and 100 ohms), R1 is the input resistor, and R2 the feedback resistor. This expression can be manipulated in the same manner as the voltage-feedback equation to determine gain accuracy and gain drift: a first-order approximation of the gain error is R2/T. Some current-feedback amplifiers are designed to be extremely stable with small signal swings, offering excellent gain characteristics over wide bandwidths.
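As a rough numerical check, the sketch below evaluates both gain-error estimates–the voltage-feedback approximation 1/(AO × B) and the current-feedback expression above. The current-feedback component values (1-kΩ resistors, 50-Ω inverting-input impedance, 1-MΩ transimpedance) are illustrative assumptions, not taken from any particular datasheet.

def vfb_gain_error(open_loop_gain, noise_gain):
    """Fractional gain error of a voltage-feedback stage: ~1/(AO*B), with 1/B = noise gain."""
    return noise_gain / open_loop_gain

def cfb_closed_loop_gain(R1, R2, Rs, T):
    """Closed-loop gain magnitude of an inverting current-feedback stage (equation in the text)."""
    ideal = R2 / R1
    error_term = 1.0 + (Rs / T) * (R2 / Rs + R2 / R1 + 1.0)
    return ideal / error_term

# Voltage feedback: unity-gain inverter (noise gain 1/B = 2), 16-bit target of ~0.00075%.
AO = 262_000
print(f"VFB gain error: {vfb_gain_error(AO, 2) * 100:.5f} %")

# Current feedback: illustrative values (R1 = R2 = 1 kohm, Rs = 50 ohm, T = 1 Mohm).
R1, R2, Rs, T = 1e3, 1e3, 50.0, 1e6
acl = cfb_closed_loop_gain(R1, R2, Rs, T)
print(f"CFB closed-loop gain: {acl:.6f} (ideal 1.0)")
print(f"First-order error R2/T: {R2 / T * 100:.3f} %")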
Calculating the dc error components that make up the output voltage offset and drift is possible with the equations shown in Fig. 1. These equations are applicable for both voltage- and current-feedback amplifiers, although for voltage-feedback devices, the bias currents are approximately equal. It is common to adjust for gain and offset to eliminate the need for extremely tight resistor tolerances. Nevertheless, metal film resistors should be used in order to provide good ratio tracking and stability over time and temperature. Offset must also be considered when scaling the system dynamic range, otherwise the A/D input may exceed the full-scale range and result in clipping.
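Figure 1 is not reproduced here, but the familiar textbook offset model leads to an expression of the following form; treat the formula and the example values below as a generic sketch rather than a transcription of the figure.

def output_offset(vos, ib_plus, ib_minus, R1, R2, Rp):
    """Output-referred offset for the standard op amp offset model.

    vos       input offset voltage (V)
    ib_plus   bias current into the non-inverting input (A)
    ib_minus  bias current into the inverting input (A)
    R1, R2    gain-setting and feedback resistors (ohms)
    Rp        resistance seen by the non-inverting input (ohms)
    """
    noise_gain = 1.0 + R2 / R1
    return vos * noise_gain + ib_plus * Rp * noise_gain - ib_minus * R2

# Illustrative numbers: 25-uV offset, 40-nA bias currents, unity-gain inverter with 2-kohm resistors.
# With Rp equal to R1 || R2 and matched bias currents, the bias-current terms cancel,
# leaving only the offset voltage multiplied by the noise gain.
print(f"{output_offset(25e-6, 40e-9, 40e-9, 2e3, 2e3, 1e3) * 1e6:.1f} uV at the output")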
Ac op amp specifications
When considering ac specifications, bandwidth is usually the first to come to mind. It can be specified in numerous ways–such as unity-gain bandwidth, gain-bandwidth product, -3-dB bandwidth–and with innumerable conditional restrictions. A key point that is often confused when selecting voltage-feedback amplifiers on the basis of bandwidth is that the closed-loop bandwidth is determined by the noise gain, 1/B, and not by the signal gain. For example, if an op amp has a gain-bandwidth product of 10 MHz, the closed-loop bandwidth in a non-inverting unity-gain configuration is 10 MHz, while that of a unity-gain inverter (for which 1/B = 2) is only 5 MHz. Another problem with gain-bandwidth product is that some amplifiers are designed to be stable only over a limited range of gain settings, so that gain-bandwidth is meaningful only over this restricted range. For some applications, the relationship between gain and operating bandwidth is spelled out with specifications such as gain flatness, which defines the bandwidth over which the gain response remains within a specified band.
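The noise-gain point is easy to verify numerically. The sketch below uses the 10-MHz gain-bandwidth figure from the example; the gain-of-10 case is an added illustration.

def closed_loop_bw(gbw_hz, noise_gain):
    """Approximate -3-dB closed-loop bandwidth of a voltage-feedback op amp: GBW / (1/B)."""
    return gbw_hz / noise_gain

GBW = 10e6  # 10-MHz gain-bandwidth product, as in the text
print(f"Non-inverting unity gain (1/B = 1):  {closed_loop_bw(GBW, 1)/1e6:.1f} MHz")
print(f"Unity-gain inverter      (1/B = 2):  {closed_loop_bw(GBW, 2)/1e6:.1f} MHz")
print(f"Gain-of-10 non-inverting (1/B = 10): {closed_loop_bw(GBW, 10)/1e6:.1f} MHz")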
Current-feedback amplifiers, in contrast to voltage-feedback devices, have bandwidths that are relatively independent of closed-loop gain. For these devices, gain-bandwidth product isn't particularly relevant, and most current-feedback amplifiers are optimized for a particular value of feedback resistor. The closed-loop signal bandwidth for current-feedback amplifiers can be determined from curves on the product datasheet.
While bandwidth is a key specification, it isn't the whole story. Distortion, for example, can be just as important. Indeed, for most amplifiers that are considered suitable as A/D drivers, manufacturers provide harmonic distortion versus frequency plots. The drive amplifier must have less total harmonic distortion than the A/D, so that the amp does not limit the system's spurious-free dynamic range (SFDR). The SFDR is the dynamic range (given in decibels) between the fundamental and the largest noise spur in the frequency band of interest. Settling time is also important, especially in applications where the amplitude of pulses must be measured accurately, or in multiplexer buffering and charge-coupled-device imaging. Note that settling time is usually specified for a step input to within a certain percentage of the final output value. For example, a settling time of 1 microsecond to 0.0015% means that the op amp will settle to the 16-bit level within the stated time period; such an amplifier might actually reach the 16-bit level in 800 ns, and the 0.01% (roughly 12-bit) level in about 780 ns–hardly any sooner. There is usually little simple relation between settling time and the various error levels (or between settling time and specifications such as rise time, slew rate, or bandwidth).
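Whether a given settling tolerance corresponds to the "16-bit level" depends on whether one counts a whole LSB or half an LSB of full scale; the small sketch below simply expresses the two tolerances from the example in LSBs at the stated resolutions.

def lsb_fraction(n_bits):
    """One LSB as a fraction of full scale."""
    return 1.0 / 2 ** n_bits

def tolerance_in_lsbs(tolerance_percent, n_bits):
    """Express a settling tolerance (percent of full scale) in LSBs at n bits."""
    return (tolerance_percent / 100.0) / lsb_fraction(n_bits)

for pct, bits in ((0.0015, 16), (0.01, 12)):
    print(f"{pct}% of full scale = {tolerance_in_lsbs(pct, bits):.2f} LSB at {bits} bits")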
Op amp noise is usually specified in terms of input voltage noise spectral density and input current noise spectral density. In addition, Johnson noise generated by external resistors may be a consideration. In the general case, these noise spectral densities vary with frequency. Because A/Ds are used for wide-bandwidth signals, however, the amplifier noise over the bandwidth of interest is usually white, so the total rms noise can be estimated by multiplying the broadband spectral noise density (the value at 1 kHz is often quoted on datasheets) by the square root of the noise bandwidth. The noise characteristics of an amplifier have a significant effect on the real resolution of a data acquisition system. Consider a system with a full-scale input of 10 mV that requires amplification by 1,000 to bring it up to the full-scale level of a 12-bit A/D. Referred to the input, the LSB weighting is 2.5 microvolts, so if the amplifier contributes even a modest 1 microvolt of noise–or 100 pA of current noise in 10 kΩ–accuracy and effective resolution will be significantly affected. An example of calculating noise performance for an amplifier/A/D combination is given below.
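Before getting to that example, here is a rough sketch of the 10-mV, gain-of-1,000 case just described. The noise densities and the 10-kHz bandwidth are assumptions chosen only to illustrate how quickly amplifier noise approaches the LSB; they are not taken from any particular amplifier.

import math

def total_rms_noise(voltage_density, current_density, source_res, bandwidth_hz, temp_k=300.0):
    """Combine amplifier voltage noise, current noise flowing in the source resistance, and the
    resistor's own Johnson noise over a flat (white) bandwidth. Densities are per root-hertz."""
    k = 1.38e-23  # Boltzmann's constant
    en = voltage_density
    in_r = current_density * source_res
    johnson = math.sqrt(4 * k * temp_k * source_res)
    return math.sqrt(en**2 + in_r**2 + johnson**2) * math.sqrt(bandwidth_hz)

# Illustrative numbers for the 10-mV full-scale, gain-of-1,000, 12-bit example in the text:
full_scale_in = 10e-3
lsb_rti = full_scale_in / 2**12                      # ~2.4 uV referred to the input
noise = total_rms_noise(10e-9, 1e-12, 10e3, 10e3)    # assumed 10 nV/rtHz, 1 pA/rtHz, 10-kHz bandwidth
print(f"LSB (RTI): {lsb_rti*1e6:.2f} uV,  amplifier noise: {noise*1e6:.2f} uV rms")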
Driving a precision sampling A/D
Let's consider a specific example of selecting a drive amplifier for a high-performance 16-bit A/D, in this case the AD676, a 100-ksample/s converter. The AD676 has a modified successive-approximation architecture that uses a capacitor array instead of the more traditional laser-trimmed resistor ladder. Because capacitors store charge, the sample-and-hold function is inherent, with no need for additional circuitry. Performance is optimized by digitally correcting nonlinearities through on-chip autocalibration.
Designing with high-resolution converters requires careful attention to board layout. Typical PC copper track–0.25 mm (0.010-in.) wide and 0.038 mm (0.0015-in.) thick–will have a room temperature resistance of about 0.018 ohms/cm, or 0.18 ohms for a 10-cm run. In these circumstances a 16-bit converter with 5-k Ω input impedance will have a gain error of 2 LSB at full-scale input. To overcome such board layout problems, many high-resolution converters, including the AD676, provide an analog ground-sense pin that can be used to compensate for small drops in the analog input signal return line. The ground-sense connection remotely detects the ground potential of the signal source without drawing current; it is especially useful if the signal has been carried some distance to the A/D.
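The track-resistance arithmetic is easy to generalize; the sketch below uses the figures quoted above and lands at roughly the 2-LSB error mentioned.

def trace_gain_error_lsb(ohms_per_cm, length_cm, adc_input_res, n_bits):
    """Gain error, in LSBs at full scale, caused by track resistance in series with the A/D input."""
    fractional_error = (ohms_per_cm * length_cm) / adc_input_res
    return fractional_error * 2 ** n_bits

# Figures from the text: 0.018 ohm/cm, 10-cm run, 5-kohm converter input, 16 bits.
print(f"{trace_gain_error_lsb(0.018, 10.0, 5e3, 16):.1f} LSB of gain error")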
In all other respects the drive amplifier must live up to the converter specifications. It must have a fast settling time, distortion better than -95 dBc, and 16-bit dc performance. Noise is also a key consideration. Figure 2 shows the calculations for the total noise of an op amp–in this case Analog Devices' AD797–over the 1-MHz input bandwidth of the converter used in this example. For a gain of -1, assuming a full-scale input of 10 V, the total noise is computed to be 7.3 microvolts rms, which is considerably below the theoretical quantization noise of 44 microvolts rms.
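Figure 2 itself is not reproduced here, but the following sketch shows the shape of such a calculation. The 0.9-nV/√Hz and 2-pA/√Hz densities are typical published AD797 figures, while the 1-kΩ resistors and the 1.57 single-pole noise-bandwidth factor are generic assumptions, so the result lands in the same ballpark as, but not exactly at, the 7.3-µV figure quoted above.

import math

K_BOLTZMANN = 1.38e-23

def johnson_density(res_ohms, temp_k=300.0):
    """Thermal noise voltage density of a resistor, V per root-hertz."""
    return math.sqrt(4 * K_BOLTZMANN * temp_k * res_ohms)

def inverting_stage_output_noise(en, inn, R1, R2, bw_hz, brickwall=1.57):
    """Total rms output noise of a simple inverting stage (non-inverting input grounded),
    integrating white noise over a single-pole noise bandwidth (brickwall * bw)."""
    noise_gain = 1.0 + R2 / R1
    density = math.sqrt((en * noise_gain) ** 2                       # amplifier voltage noise
                        + johnson_density(R1) ** 2 * (R2 / R1) ** 2  # R1 noise gained to the output
                        + johnson_density(R2) ** 2                   # feedback resistor noise
                        + (inn * R2) ** 2)                           # inverting-input current noise
    return density * math.sqrt(brickwall * bw_hz)

# Assumed values: ~0.9 nV/rtHz and ~2 pA/rtHz for the AD797, 1-kohm resistors, 1-MHz bandwidth.
total = inverting_stage_output_noise(0.9e-9, 2e-12, 1e3, 1e3, 1e6)
quantization = (10.0 / 2 ** 16) / math.sqrt(12)   # 16-bit converter, 10-V full scale
print(f"Amplifier noise ~{total * 1e6:.1f} uV rms vs quantization noise {quantization * 1e6:.1f} uV rms")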
Driving a sigma-delta A/D
Sigma-delta (also known as delta-sigma) modulation techniques have been in existence for many years, but their application to A/Ds has become popular only in the last few years. Sigma-delta devices employ a digitally intensive oversampling technique, using a high-frequency bit stream and digital filters, that can produce digital outputs of more than 20 bits. Oversampling usually occurs at a frequency well away from the bandwidth of interest, requiring only a simple single-pole antialiasing filter. For example, the AD1879 18-bit dual A/D, optimized for audio applications, delivers output data at 48 ksamples/s but has an input oversampling frequency of 3.072 MHz. Most audio-bandwidth sigma-delta converters present a switched-capacitor input to the drive amplifier, and this can pose a special set of problems because of signal-dependent transient input currents. The input capacitor, C, is switched at the oversampling rate, fS, and acts as a resistor having a resistance equal to 1/(C × fS). Looking back toward the source, however, signal-dependent charge is injected into the drive amplifier's output stage. For optimum common-mode rejection of these transient load currents, the differential inputs of converters like the AD1879 should be driven differentially. In addition, series resistors at the drive op amps' outputs should be used to isolate the remaining transient currents and the capacitive loads from the op amps. The value of these resistors must be kept small, however, to avoid distortion resulting from the signal-dependent, charge-injection transients.
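A quick way to get a feel for the switched-capacitor load is to evaluate 1/(C × fS) for a few plausible input capacitances. The 3.072-MHz rate is the AD1879 figure from the text; the capacitance values below are assumptions for illustration only.

def switched_cap_resistance(cap_farads, fs_hz):
    """Equivalent resistance of a capacitor switched at rate fs: R = 1 / (C * fs)."""
    return 1.0 / (cap_farads * fs_hz)

fs = 3.072e6  # AD1879 oversampling rate
for C in (2e-12, 10e-12, 47e-12):
    print(f"C = {C*1e12:4.0f} pF -> equivalent resistance ~{switched_cap_resistance(C, fs)/1e3:6.1f} kohm")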
Finally, in selecting drive amplifiers for the sigma-delta A/D it is important to consider the application needs as well as the performance match with the A/D. Robustness (the ability to drive capacitive loads) is just as critical as total harmonic distortion.
Driving a wide dynamic range A/D
A new generation of data converters has emerged over the last few years that combines exceptionally wide dynamic range with high resolution. These devices have been designed for applications such as spectrum analysis and imaging, and often combine all necessary functions–single-ended-to-differential amplifier, sample-and-hold amp, A/D, and timing and control circuitry–in a single package. Such 14-bit converters are able to sample at 10 Msamples/s with an SFDR of 72 dB, rising to 90 dB at 2.3 Msamples/s. Selecting a suitable op amp to drive these converters is critical to maintaining these performance levels; harmonic distortion specifications must be better than those of the A/D across the complete frequency spectrum. The op amp must also have excellent bandwidth and settling time. A current-feedback amplifier such as the AD9617, which has second-harmonic distortion of -67 dBc at 20 MHz and a 190-MHz small-signal bandwidth, is a good choice.

If gain is required ahead of the A/D, the ultra-low-distortion circuit in Fig. 3 is recommended; the configuration works well for signals through 10 MHz without introducing distortion spurs that degrade the A/D's dynamic range. At 2.3 MHz and 2-Vp-p output, all spurs generated in the drive circuit are less than -100 dBc. The signal path through U3 and U4 is a series inverting configuration that cancels the even-order harmonics generated as the loop gain diminishes with frequency. U1 and U2 reduce the drive current demanded of U3 and U4, respectively. Since U1 and U2 are set up at gains twice those of U3 and U4, the net effect is that the output stages of U3 and U4 are unloaded, minimizing the odd-order harmonics generated in those output stages. The output of the amplifier circuit is set to drive either 2 Vp-p into 74 ohms or 4 Vp-p into 150 ohms by selecting RP.

Driving high-speed flash converters

Although pure flash converters rarely exceed 10 bits of resolution (the number of required comparators becomes unmanageable beyond that point), it is useful to review their drive requirements here. Driving high-speed flash converters presents another set of challenges. Most flash converters are designed on bipolar digital processes, which makes the addition of low-distortion on-chip buffers difficult. In addition, most flash converters have a fairly large input capacitance, which is usually signal dependent (that is, nonlinear). This capacitance is primarily a result of the large number of comparators on the chip. Wideband, low-distortion current-feedback amplifiers are ideal for driving this type of flash converter. However, a series resistor on the order of 50 ohms may be required to isolate the amplifier from the flash input capacitance in order to prevent peaking and maintain stability. Because of the series resistor and the signal-dependent capacitance, some harmonic distortion will result (see Fig. 4). The series isolation resistor should therefore be as low as possible while still maintaining op amp stability; large resistor values will increase distortion and limit the input bandwidth. Application information provided on both the amplifier and the data converter datasheets should be of assistance in selecting the proper resistor value.

Opening photo: AD797 THD performance.

Table. Theoretical converter performance.
Fig. 1. The dc error components that make up the output voltage offset and drift of an op amp can be determined using the equation derived from the accompanying op amp output offset voltage model.
Fig. 2. Total noise output for an op amp–in this case the AD797–is calculated over the input bandwidth of the converter–here, the AD676.
Fig. 3. If gain is needed, a low-distortion drive circuit–such as the one shown here for the AD9014 14-bit A/D using AD9617 op amps–can be implemented. This circuit holds all distortion products below -100 dBc.
Fig. 4. Shown is the simulated THD resulting from signal-dependent converter input capacitance for a typical op amp/flash converter combination.