Digital Domain (Sept. 1984)

BIT BY BIT


In essence, the task of recording and reproducing music can be simply summarized--we want to form a representation of the music. Obviously, the closer our representation is to the original, the better. Unfortunately, reality is stubborn in its ability to defy recreation, and we are left with a challenging endeavor as we attempt to create an approximation of the original event. The essential problem lies in the complexity of even the simplest acoustic waveform and the dual nature of the information it carries. No matter which recording system we employ, our task is the same: To completely characterize an acoustic event, we must store both correlated time and amplitude information. Thus, a vinyl LP has a groove, the length of which implicitly encodes time, and lateral variations which encode amplitude. In a digital system, both time (implicitly again) and amplitude are stored as discrete pieces of binary information.

We've discussed sampling, a method of periodically taking a measurement. Of course, taking a measurement is meaningful only if both the time and the value of the measurement are stored. Sampling represents the time of the measurement, while quantization represents its value or, in the case of audio, the amplitude of the waveform. Sampling and quantization are thus the twin pillars of digital audio. Together, at least in theory, they can completely characterize any acoustic event.

Of course, as I've mentioned, reality isn't prone to duplication; our goal is thus one of approximation. Both sampling and quantization become variables which determine the accuracy of that approximation. An originally analog waveform is mapped into a series of pulses; the amplitude of each yields a number which represents the analog value at that instant. It is apparent that the greater the sampling rate, the better the representation of the waveform. Counterintuitively, the sampling of a band-limited signal is a lossless process; so long as we sample at a rate at least twice the highest frequency in the signal, sampling is a perfect technique. But choosing the amplitude value is a different story. An analog waveform has an infinite number of amplitude values, whereas we can only choose from a finite number of increments, so our chosen value will be only an approximation of the actual one. In other words, there is an error.
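The distinction can be sketched in a few lines of Python (a modern illustration, not anything from the original hardware discussion): quantizing means rounding each sample to the nearest available increment, and the rounding error can never exceed half an increment.

```python
def quantize(value, step):
    """Round an analog value to the nearest available increment."""
    return round(value / step) * step

step = 0.1                        # hypothetical increment size, in volts
analog = 1.278                    # the "actual" analog amplitude
digital = quantize(analog, step)  # 1.3, the nearest increment
error = abs(analog - digital)     # roughly 0.022, under half an increment
assert error <= step / 2
```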

Let's use an example to illustrate that, yes, even digital has error, and differentiate this error from the one inherent in an analog system. Suppose that I've connected two voltmeters, one analog and one digital, to my recording console, and, at the instant of the last cannon shot in the "1812 Overture," I read both meters to find the voltage corresponding to the acoustic input signal. Given a good meter face and a sharp eye, I read the analog needle as showing 1.278 V. My digital meter, a rather cheap model, has only two digits, and thus I read 1.3 V. If I pay a little more for a three-digit meter, I might read 1.28 V, and a four-digit meter would give 1.278 V. Now, both types of meters are always in error.

The analog meter errs because of the ballistics of the mechanism and my difficulty in reading the needle. Even under ideal conditions, at some point any analog measurement is lost in the device's own noise. With the digital meter, the nature of the error is different. My accuracy is limited by the resolution of the meter, that is, by the number of digits displayed. The more digits, the greater the accuracy, but the last digit will always be rounded off relative to the actual value (1.278 was rounded to 1.3). Under the best conditions the last digit is completely accurate (1.300 shown as 1.3), and under the worst conditions it is a full half increment away (1.250 rounds off to either 1.2 or 1.3). The philosophical question of which is better, analog or digital, has already been decided in the marketplace, at least as far as voltmeters are concerned. For equal or lower cost, we can build digital meters with greater resolution, ease of use, and reliability. A digital readout is an inherently more robust kind of measurement; we gain more information about an analog event when it is characterized in terms of digital data. The analog voltmeter has gone the way of the slide rule.

Back to audio digitization: Quantization is the technique of incrementing an analog event to form a discrete number. In terms of the quantizing hardware, the number of increments is determined by the length of the data word, in bits; just as the number of digits in our voltmeter determined our resolution, the number of bits in our digitization equipment determines resolution. As we will see in a future column, that decision (primarily an economic one) takes place in the design of the analog-to-digital converter. An 8-bit word accommodates 2^8 = 256 increments; a 16-bit word maps 2^16 = 65,536 increments. The more bits the better, but there will always be an error associated with quantization, because the limited number of amplitude choices contained in the binary word can never completely map an infinite number of analog possibilities. No matter how many increments are available, there can always be an analog amplitude in between. At some point the quantization error becomes audibly indistinguishable, but exactly what word length provides that luxury still isn't clear.
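The count of increments follows directly from the word length; a one-line sketch:

```python
def increments(bits):
    """Number of amplitude increments an n-bit word can distinguish."""
    return 2 ** bits

print(increments(8))    # 256
print(increments(16))   # 65536
```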

Word length determines the resolution of our digitizing system, and hence provides an important specification to measure the system's performance.

Sometimes our chosen increment will fall exactly at the analog value; usually it won't be quite exact. At worst, the analog level we wish to encode will be one half increment away; that is, there will be an error of one half the least significant bit of the quantization word. For example, suppose the binary word 011000 maps the analog increment of 1.20 V, 011001 maps 1.30 V, and the actual analog value at sample time is, unfortunately, 1.25 V. Since 011000-and-a-half isn't available, I would have to round up to 011001 or down to 011000; either way, I would be in error by one half of an increment.
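The worked example can be checked numerically. This small Python sketch adopts the hypothetical mapping above (code 011000 standing for 1.20 V, with 0.1-V increments) and confirms that a 1.25-V input misses either neighboring code by exactly half an increment:

```python
STEP = 0.1   # volts per increment in the example's hypothetical scheme

def code_to_volts(code):
    # 0b011000 (decimal 24) stands for 1.20 V; each code above or
    # below it moves the mapped value by one 0.1-V increment.
    return 1.20 + (code - 0b011000) * STEP

actual = 1.25                   # falls exactly between two codes
low = code_to_volts(0b011000)   # 1.20 V
high = code_to_volts(0b011001)  # 1.30 V
# Rounding either way leaves an error of one half increment:
assert abs(abs(actual - low) - STEP / 2) < 1e-9
assert abs(abs(actual - high) - STEP / 2) < 1e-9
```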

In characterizing digital hardware performance, we may form the ratio of the total number of increments covered by our quantization scheme to the maximum incremental error. This ratio of maximum expressible amplitude to error determines the signal-to-noise ratio of the digitization system. For example, a 16-bit system yields a ratio of 65,536/0.5 = 131,072. In terms of the more familiar dB measurement, an analysis based on the RMS values of a full-scale signal and of the quantization error gives the convenient expression S/N (dB) = 6.02n + 1.76, where "n" is the number of bits. Using the formula, a 16-bit system works out to about 98 dB, whereas a 14-bit system is slightly inferior at about 86 dB.

[Editor's Note: The common approximation is to multiply the bits by 6, yielding 96 dB for 16 bits and 84 dB for 14. --- Ivan Berger.]
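Both the formula and the rougher 6-dB-per-bit rule are easy to tabulate; a brief Python sketch:

```python
def snr_db(bits):
    """S/N from word length, per the formula in the text."""
    return 6.02 * bits + 1.76

def snr_db_rough(bits):
    """The common back-of-the-envelope approximation: 6 dB per bit."""
    return 6 * bits

for n in (14, 16):
    print(n, round(snr_db(n)), snr_db_rough(n))
# 14 bits: about 86 dB (rough: 84); 16 bits: about 98 dB (rough: 96)
```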

In a later column we'll see that a digital S/N specification is a slightly different animal than an analog S/N specification.

Quantization is more than just word length; it is also a question of hardware design. There are many techniques available to accomplish quantization, and different strategies determine how the analog signal is mapped onto the increments; we could use linear or nonlinear distribution, monotonic or magnitude-and-sign coding, or many-to-one or one-to-many mapping. Those algorithm decisions influence the efficiency of the available bits as well as the relative effects of the error. For example, a linear quantizer produces a relatively high error with low-level signals, which span only a few increments.
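That last point can be demonstrated numerically. The sketch below (my own construction, not a circuit from the article) quantizes one cycle of a sine wave with a fixed 16-bit linear step and compares the resulting signal-to-error ratio at full scale against a low-level signal spanning only a few increments:

```python
import math

def quantized_snr_db(amplitude, step, n=1000):
    """Quantize one sine cycle; return signal-to-error power ratio in dB."""
    sig = err = 0.0
    for i in range(n):
        x = amplitude * math.sin(2 * math.pi * i / n)
        q = round(x / step) * step          # linear quantizer
        sig += x * x
        err += (x - q) ** 2
    return 10 * math.log10(sig / err)

step = 2.0 / 65536                          # 16-bit quantizer spanning +/-1
loud = quantized_snr_db(1.0, step)          # full-scale signal
quiet = quantized_snr_db(0.001, step)       # signal spanning few increments
assert loud > quiet   # low-level signals suffer relatively more error
```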

A nonlinear system such as a floating-point converter could be used to amplify low-level signals to utilize the fullest possible incremental span; while that improves overall S/N ratio, the noise modulation byproduct is undesirable and requires special masking.

Sampling and quantizing are, therefore, the two fundamental design criteria for a digitization system. Sample rate determines band-limiting and thus frequency response, and word length determines S/N ratio. Although band-limited sampling is a lossless process, quantizing is one of approximation. Incidentally, quantization error is akin to noise; fortunately, with 14 or 16 bits it lies below most analog noise floors, and its effects can be further masked in the recording chain with a purposefully introduced noise called dither, which makes quantization noise perceptually benign. In general, a word length of 14 or 16 bits with a sample rate of 44.1 kHz or 48 kHz yields remarkable fidelity, comparable to, or better than, the best analog systems.
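Dither's benefit can be glimpsed in a toy experiment (my own sketch; real converters use carefully shaped dither): a signal smaller than half an increment quantizes to pure silence, but adding a small random dither before quantizing lets the signal's presence survive, smeared into noise rather than lost outright.

```python
import math, random

STEP = 0.1
rng = random.Random(1)   # fixed seed so the sketch is repeatable

def quantize(v):
    return round(v / STEP) * STEP

# A sine wave smaller than half an increment vanishes when quantized plain...
small = [0.03 * math.sin(2 * math.pi * i / 100) for i in range(400)]
plain = [quantize(v) for v in small]
assert all(q == 0.0 for q in plain)

# ...but with triangular-PDF dither added first, some samples come through,
# carrying the signal's energy as benign noise instead of silence.
dithered = [quantize(v + (rng.random() - rng.random()) * STEP) for v in small]
assert any(q != 0.0 for q in dithered)
```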

And that statement isn't meant to disparage the quality of analog audio--rather, it pays dear tribute to analog recording. It required an extremely sophisticated digital system, capable of processing 1.5 million bits per second, to rival Edison's wiggling groove.
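The 1.5-million figure checks out as sample rate times word length times channels, assuming the two-channel, 16-bit, 48-kHz case mentioned above:

```python
def bit_rate(sample_rate_hz, bits_per_sample, channels=2):
    """Raw audio data rate in bits per second (no error-correction overhead)."""
    return sample_rate_hz * bits_per_sample * channels

print(bit_rate(48_000, 16))   # 1536000, about 1.5 million bits per second
print(bit_rate(44_100, 16))   # 1411200
```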

(adapted from Audio magazine, Sept. 1984; KEN POHLMANN )
