Bits & Pieces (Jan. 1988)


Bits & Pieces -- The real reason to want an 18-bit CD player.

by DAVID RANADA

The Two-Bit Difference

The latest round of bit battles has just begun. Several recently announced high-end players now sport 18-bit digital-to-analog converter (DAC) integrated circuits. That's right: Despite the 16-bit data stream encoded on a Compact Disc, player manufacturers are designing in chips that, at least in theory, are four times more precise than needed. Furthermore, these units are receiving their share of digital-filter technology, with several companies promising eight-times-oversampling filters to feed their newfangled DACs. But before you pronounce your first-generation 16-bit analog-filter CD player hopelessly obsolete and rush out to plunk down hundreds of dollars on a new machine, you should be aware of how an 18-bit DAC can--and cannot--improve performance.
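The "four times" figure is simple level counting: two extra bits quadruple the number of output steps a converter can distinguish. A minimal arithmetic sketch (in Python, purely for illustration; the figures, not the code, come from the article):

    levels_16 = 2 ** 16            # 65,536 steps in the CD's 16-bit data stream
    levels_18 = 2 ** 18            # 262,144 steps in an 18-bit DAC
    print(levels_18 // levels_16)  # 4 -- "four times more precise than needed"

    fs_cd = 44_100                 # Hz, the Compact Disc sampling rate
    print(8 * fs_cd)               # 352,800 Hz, the rate an eight-times-oversampling filter feeds the DAC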

An entire 18-bit industry has sprung up around a specific device--the PCM-64 18-bit D/A integrated circuit from Burr-Brown, a Phoenix-based company that is probably the world's largest supplier of 16-bit audio DACs. At least half of the CD players tested by HIGH FIDELITY in the last couple of years have used Burr-Brown DACs. As of this writing, the PCM-64 is the only 18-bit DAC being mass-produced for audio use, although it was originally developed for other applications. Burr-Brown's Joel M. Halbert (co-author with Mark A. Shill of a paper describing the DAC that was presented at last fall's Audio Engineering Society convention) told me the chip was developed as part of a very fast, very accurate analog-to-digital converter. He says applications such as medical full-body scanners using X rays require this sort of device: The varied absorption properties of bone and tissue, plus the necessity of reducing X-ray exposure time, require a fast converter with the very wide dynamic range an 18-bit system can span (approximately 108 dB, in theory). The chip is a superb piece of engineering, with performance approaching the theoretical ideals. The Halbert/Shill paper cites harmonic distortion of an 18-bit-encoded 1-kHz sine wave as around 0.0008 percent, when the chip is used with its full factory-recommended trimming circuit. Even without trimming, the device can be expected to produce performance linear to within 16 bits.
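The "approximately 108 dB" follows from the standard rule of thumb that each bit of resolution is worth about 6 dB of dynamic range (20 log10 2, or about 6.02 dB per bit). A quick sketch of that arithmetic--standard converter theory, not something taken from the Halbert/Shill paper:

    from math import log10

    def dynamic_range_db(bits):
        # Each added bit doubles the number of levels, adding 20*log10(2) dB.
        return 20 * log10(2) * bits

    print(round(dynamic_range_db(16), 1))  # 96.3 dB for the CD's 16 bits
    print(round(dynamic_range_db(18), 1))  # 108.4 dB -- the "approximately 108 dB" cited above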

That level of performance is a clue as to what you can expect from the new 18-bit CD players. For the moment, you can safely ignore any and all claims to "18-bit resolution and accuracy" you may see. What the Burr-Brown chips will be used to obtain is, in effect, true 16-bit resolution with 16-bit linearity. This is only proper, considering that a CD contains, at best, a 16-bit signal--and you can't get more information out of a CD than was put in. This is important because, until recently, such 16-bit performance was rarely delivered by so-called 16-bit CD players.

A digression is in order here, since "resolution," "accuracy," and "linearity" are commonly mistaken for each other and consequently misused. A DAC can be thought of as containing an internal scale or ruler. The number of gradations on the ruler is the converter's resolution or precision. Going from 16 to 18 bits of resolution is equivalent to a fourfold increase in the fineness of the gradations. The actual placement of each gradation compared to where it should be is the ruler's accuracy, and the evenness of the spacing is its linearity. The latter is the most important characteristic of a DAC in audio applications.
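Put into numbers, the analogy looks like the toy example below (illustrative only; a 4-bit "ruler" is used so the figures stay readable, and the perturbations are invented). Resolution is the number of gradations, accuracy is the worst-case placement error of any gradation, and linearity is the worst-case unevenness in the spacing between neighboring gradations.

    BITS = 4                                              # toy converter; a real DAC has 16 or 18
    ideal = [i / (2**BITS - 1) for i in range(2**BITS)]   # evenly spaced gradations from 0 to 1

    actual = list(ideal)                                  # mimic component tolerances inside the DAC
    actual[5] += 0.010
    actual[11] -= 0.015

    resolution = 2**BITS                                  # gradations on the ruler
    accuracy = max(abs(a - i) for a, i in zip(actual, ideal))
    ideal_step = ideal[1] - ideal[0]
    linearity = max(abs((actual[k + 1] - actual[k]) - ideal_step) for k in range(len(actual) - 1))

    print(resolution, round(accuracy, 3), round(linearity, 3))   # 16 gradations; 0.015 worst placement and spacing error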

From this analogy, you can see why a DAC with 16-bit resolution can have only 14- or 15-bit linearity and, therefore, higher distortion than theory predicts: The gradations don't all fall where they should. You can also sense why an 18-bit converter will give better 16-bit performance: In order to fit four times as many gradations on the ruler, the ones already there must be moved closer to their ideal locations. To obtain 16-bit linearity with the new Burr-Brown device, the two extra bits don't even have to be connected! For the most part, the move from quasi-16- to true 16-bit performance made possible by 18-bit DACs will result in an inaudible gain in sound quality, since the improvements will be masked by noise and distortion far earlier in the recording chain. Measured distortion and linearity performance should improve remarkably, however, as will signal-to-noise ratio. But the latter specification should be viewed with suspicion, since methods for obtaining it are quite unrepresentative of playing an actual music disc.
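One way to see how misplaced gradations become distortion is to pass a pure tone through a simulated converter whose output levels are slightly in error. The sketch below is strictly illustrative: the level errors are invented, the test tone is arbitrary, and an FFT stands in for a distortion analyzer. It compares an ideal 16-bit quantizer against one whose gradations have been randomly nudged by about one step.

    import numpy as np

    fs, n = 44_100, 32_768
    f0 = fs * 1023 / n                                  # tone placed exactly on an FFT bin
    tone = 0.9 * np.sin(2 * np.pi * f0 * np.arange(n) / fs)

    def through_dac(x, bits=16, level_error=0.0, seed=0):
        codes = np.round((x + 1) / 2 * (2**bits - 1)).astype(int)   # 16-bit codes, as on the disc
        levels = np.linspace(-1.0, 1.0, 2**bits)                    # the DAC's output gradations
        levels += level_error * np.random.default_rng(seed).standard_normal(2**bits)
        return levels[codes]                                        # reconstruct with (mis)placed gradations

    def noise_plus_distortion_db(x):
        spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
        peak = int(np.argmax(spectrum))
        tone_power = spectrum[peak - 2:peak + 3].sum()              # power in and around the tone
        return 10 * np.log10((spectrum.sum() - tone_power) / tone_power)

    print(noise_plus_distortion_db(through_dac(tone)))                          # ideal levels: roughly -97 dB
    print(noise_plus_distortion_db(through_dac(tone, level_error=2 / 2**16)))   # ~1-step errors: markedly worse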

The Halbert/Shill paper does hold out some hope for improved music performance using 18-bit converters--but not with presently existing technology. Halbert and Shill point out that if the audio signal remains considerably below the Compact Disc system's upper limit of about 22 kHz, the signal is being oversampled by the original 44.1-kHz sampling rate and contains information below the 16-bit level, which can be extracted by as-yet-undeveloped digital signal processing (DSP). "To take full advantage of 18-bit converters in a 16-bit CD or DAT player, more sophisticated, signal-adaptive DSP hardware will be required. Although particular specifications can be improved this way, the subjective benefits with real musical signals are yet to be documented. It is also quite possible that the extra bits can be used to produce effects which subjectively improve the sound quality, even if the measured total error power is not reduced." How nice it is to see a concise engineering prediction about a possible avenue of progress in digital audio--stated in phraseology as yet uncontaminated by commercial hyperbole.
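The idea behind that prediction is standard converter theory: quantization noise is spread evenly from DC to half the sampling rate, so a signal confined to a narrower band shares its band with only a fraction of that noise--roughly one extra bit of effective resolution for every factor-of-four reduction in bandwidth. The sketch below illustrates the principle only; it is not the signal-adaptive DSP hardware the paper anticipates, and the function name and bandwidth figures are invented for the example.

    from math import log10

    def extra_bits(fs, signal_bandwidth):
        # The fraction of the quantization noise falling inside the signal band
        # determines how far below the nominal 16-bit floor information survives.
        oversampling_ratio = (fs / 2) / signal_bandwidth
        return 10 * log10(oversampling_ratio) / (20 * log10(2))     # dB gain converted to bits

    fs = 44_100
    print(round(extra_bits(fs, 22_050), 2))   # 0.0 -- signal occupies the full CD band, no gain
    print(round(extra_bits(fs, 5_512.5), 2))  # 1.0 -- a quarter of the band is worth about one bit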


