I've been asked to contribute a series of articles on some of the details of digital audio - a subject that can always use some clarification. I'd like to start with a look at the early days of digital audio, then move on to other questions about current technology and its applications. I've simplified some things and kept the jargon to a minimum, and I hope the content is helpful.
For this article, I'd like to talk about the early days of analog to digital conversion for Compact Disc, sometimes known as B.O. or "Before Oversampling," where complex analog filters were combined with converter systems re-purposed from medical or other industries.
There were few converters made just for audio at the time.
This doesn't mean that good A/D converters weren't available. After all, industry had turned to digital long ago - and for applications far more sensitive than audio. But in the early 1980s it was still a challenge to integrate the elements in an audibly transparent way. And cost was always a factor, even in professional gear.
Some Background and The Basics
Before sampling, the audio must always be filtered to remove frequencies greater than half the sampling rate. For Compact Disc, this means 20kHz must pass unaffected, yet everything above about 22kHz must be excluded completely! Doing this requires complicated analog filters whose extreme cutoff rates give them poor phase response. Even the best examples were far from optimum, and many factors made the proper response essentially impossible to achieve.
Many of the original analog to digital converters for CD used the Successive Approximation technique. This is where you just take a guess ("maybe it's halfway up") at the input level, then play a "too high, too low" game until you settle on a value that matches the input. Sounds crazy I know, but it works...
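That "too high, too low" game is really just a binary search. Here's a minimal sketch in Python of an idealized successive-approximation conversion - no analog error sources, just the guessing logic (the function name and voltage range are my own choices for illustration):

```python
def sar_convert(vin, vref=1.0, bits=16):
    """Successive approximation: binary-search for the code whose
    reconstructed voltage best matches the (held) input vin."""
    code = 0
    for bit in range(bits - 1, -1, -1):
        trial = code | (1 << bit)              # guess: "maybe it's halfway up"
        if trial / (1 << bits) * vref <= vin:  # comparator says "too low"? keep the bit
            code = trial
    return code

print(sar_convert(0.5))   # half of full scale -> only the top bit set: 32768
```

A 16-bit conversion needs only 16 of these guesses - one per bit - which is exactly why the input must sit perfectly still while the game is played.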
Because the conversion relies on looking repeatedly at the sample and comparing it with the latest guess, it's necessary to hold the input steady. This stage is cleverly called the "sample-and-hold" stage. But in these circuits it's quite a task to forget everything about the last sample and then acquire the next one.
Sometimes we hear about the effects of "dielectric absorption" in capacitors for audio, and in general it's not really much of a factor, but in the sample-and-hold it was a significant one. The sample-and-hold stage works by taking the sampled audio and "holding" it in a capacitor during the conversion interval. Even excellent quality capacitors may "absorb" a few percent of the signal when asked to go from fully charged to zero. An error of 2% might not sound like much, but CD is accurate to about 0.002% if all is working correctly...
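To put those two percentages side by side, a quick back-of-envelope calculation (using the 2% figure from above) shows how many 16-bit quantization steps that capacitor error would span:

```python
bits = 16
levels = 1 << bits              # 65,536 possible 16-bit codes
lsb = 1 / levels                # one step: roughly 0.0015% of full scale
droop = 0.02                    # the 2% "absorbed" signal from the text
steps_of_error = droop / lsb    # how many quantization steps that error spans
print(round(steps_of_error))    # -> 1311
```

An error spanning some 1,300 steps, in a system whose whole promise is one-step accuracy.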
Before Oversampling, just that one little section - the sample-and-hold capacitor and its associated circuitry, holding the signal for only 20-some microseconds - had all kinds of problems! And that was before trying to tell one signal level from 65,535 others, which is what every 16 bit converter must do.
All combined, these errors made it essentially impossible to achieve the 16-bit capabilities of the Compact Disc at the time it was released. This doesn't mean great sounds weren't possible, but the real performance of the converters fell short of a true 16 bits, whose distortion would be very low indeed. One particularly weak area was accuracy at low signal levels, where distortion increased as the level of the music decreased. This was quite noticeable in some early converters as a loss of reverberation. It is also the complete opposite of analog recording, where distortion grows with increasing signal level.
Oversampling - Where Less Is More
Enter oversampling, where we sample with fewer bits, meaning more error per sample, but make that error happen much more often! Paradoxically, the coarser signal - but at a very high rate - is ultimately more accurate than a greater number of slower bits.
The original CD sampling rate of forty-odd thousand times per second is raised to a dizzying several million times per second... But at only 1 to 6 bits accuracy!
But why oversample in the first place? The answer is accuracy. In our original 16 bit converters for CD, we needed to make each sample accurate to about 0.002%, roughly one part in 65,000, which is a very small amount of error. Converters with this level of performance either didn't exist or were impossibly expensive. However, it's easy to make a perfect 6 bit converter - after all, there are only 64 possible answers!
Oversampling works by starting with a very accurate low-bit converter and spreading the inevitable quantization error over a very wide frequency range. When this multi-megahertz signal is digitally filtered down to the audio spectrum, most of that error - now sitting far above the audio band - is removed along with everything else above 20kHz, and the resulting audio is converted to very high accuracy - much more faithfully than any previous technology could achieve.
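As a toy illustration of the principle - this is a dithered coarse converter plus a plain average, not a real delta-sigma modulator, and the signal level is an arbitrary made-up value - here is how many coarse 6-bit looks can beat a single 6-bit one:

```python
import random

def quantize(x, bits):
    """Round x (in the range 0..1) to the nearest of 2**bits levels."""
    steps = (1 << bits) - 1
    return round(x * steps) / steps

def oversampled_estimate(x, bits, n, rng):
    """Average n dithered coarse conversions. The dither spreads the
    quantization error out so that averaging recovers resolution the
    coarse converter doesn't have on its own."""
    steps = (1 << bits) - 1
    total = 0.0
    for _ in range(n):
        dither = rng.uniform(-0.5, 0.5) / steps   # +/- half an LSB of noise
        total += quantize(x + dither, bits)
    return total / n

rng = random.Random(1)
x = 0.3123                                  # an arbitrary test level
single = quantize(x, 6)                     # one 6-bit conversion
averaged = oversampled_estimate(x, 6, 4096, rng)
# The averaged estimate lands far closer to x than any single
# 6-bit sample possibly can.
```

A real oversampling converter shapes the error upward in frequency and removes it with a proper digital filter rather than a simple average, but the idea - many coarse looks beating one fine one - is the same.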
Unlike before, because we are now sampling at megahertz speeds, instead of having to allow 20kHz and reject 22kHz, the filtering requirements are considerably relaxed. As a result, the phase response (all notes arrive at the same time) is now essentially perfect. That sample-and-hold capacitor we worried about earlier? At 6 MHz, its performance is now hardly a factor.
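A quick calculation shows just how much the filter's job relaxes. Taking the CD figures, and a nominal 6 MHz oversampling rate as mentioned above (actual rates vary by design), we can compare the transition band each filter is given, measured in octaves:

```python
from math import log2

passband = 20_000        # highest audio frequency that must pass unaffected
stop_cd = 22_050         # half of CD's 44.1kHz sampling rate
stop_os = 3_000_000      # half of a nominal 6 MHz oversampled rate

octaves_cd = log2(stop_cd / passband)   # room the brick-wall analog filter gets
octaves_os = log2(stop_os / passband)   # room the oversampled design gets
print(f"{octaves_cd:.2f} vs {octaves_os:.2f} octaves")  # -> 0.14 vs 7.23 octaves
```

Roughly a seventh of an octave versus more than seven full octaves - a gentle, well-behaved filter can now do what once demanded an analog brick wall.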
Thanks to oversampling analog to digital converters, by 1988 conversion accuracy at both high and low signal levels was excellent and 18 bit performance was common in professional recorders.
Next time we will talk about R/2R converters, the non-oversampling converters of today, marketing bits, and some other popular misconceptions about digital audio.