In this next installment, I'd like to go into some aspects of sigma-delta conversion with an overview of the general layout of these types of converters. The actual nuts-and-bolts of the converter are pretty boring - arcane symbols and other hieroglyphics; stuff that is great if you're a beard-scratcher, but not so great for the general interest I'm hoping for in these articles, so we'll skip over some of the gory details.
The basic gist of sigma-delta is this: take a beautiful, smooth sine wave (in green) and slice it into this unlikely-looking, square-ish thing (in blue).
"Ruined, I tell you!"
If you look closely, you'll see that the blue waveform actually does follow the green sine. When the voltage is high, a larger proportion of the time is spent on, and vice versa. Instead of giving each individual sample a number, the "density" percentage is changed. While one-bit conversion is really just one-bit Pulse Code Modulation, it is sometimes also referred to as "pulse-density modulation" or PDM.
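To make that "density" idea concrete, here is a toy first-order sigma-delta modulator in Python. This is my own illustration, not code from any real converter: the one-bit output spends more of its time "on" where the sine is high, and more of its time "off" where it is low.

```python
import math

def sigma_delta_1bit(samples):
    """Toy first-order sigma-delta modulator: the pulse density of the
    one-bit output tracks the input level (a minimal sketch)."""
    integ, bit, out = 0.0, 1.0, []
    for x in samples:
        integ += x - bit                    # integrate input minus feedback
        bit = 1.0 if integ >= 0 else -1.0   # one-bit quantizer
        out.append(bit)
    return out

# A heavily oversampled sine: 4096 one-bit samples per cycle
sine = [0.8 * math.sin(2 * math.pi * n / 4096) for n in range(4096)]
bits = sigma_delta_1bit(sine)

# Pulse density near the positive peak vs. near the negative trough
peak = bits[896:1152]     # window around the sine's crest (~+0.8)
trough = bits[2944:3200]  # window around the sine's trough (~-0.8)
print(sum(peak) / len(peak), sum(trough) / len(trough))
```

Averaging the bits over each window recovers roughly +0.8 and -0.8: the density is the signal.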
In the second article we talked about the concept of 'linearity' of a converter, and guess what? The one-bit converter can have good linearity - due to the fact that it only has two states to worry about. This is a boon for some aspects of performance, and a deal-killer for others. The primary issue is that you need a very high sampling rate to represent the audio signal with only one bit. For quality sound, the speed must be at least sixty-four times higher than for conventional PCM. And it's not 64 times better, either; this is just to get roughly equivalent performance in a one-bit system.
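To put numbers on "sixty-four times higher" - these figures aren't spelled out in the article, but they are the standard SACD ones:

```python
cd_rate = 44_100            # CD sampling rate, Hz
dsd64_rate = 64 * cd_rate   # SACD's "DSD64" one-bit rate
print(dsd64_rate)           # 2822400 Hz - about 2.8 MHz
```

That 2.8 MHz figure is the "3 megahertz" ballpark rate mentioned later on.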
The whole one-bit converter craze came about when a chipmaker made the raw delta-sigma signal available on a normally unused test-pin of a converter. The upshot was that you could essentially connect that digital pin to the analog world and sound would come right out! Talk about cost-effective. And it would sound pretty good, too. No digital filters were used, and not even enough analog filtering, as we shall see later.
That sound you just heard was me skipping over vast tracts of noise-shaping theory, and other Greek letters, but here is an easy way to look at it: In PCM each bit is "worth" about 6dB of volume range. Regular CD is 16 x 6 = 96dB, which is a lot; loud parts are loud, quiet parts are quiet. Yet one-bit has only about 6dB of volume range, making it very noisy! Too noisy to be usable, in fact... Fortunately, this noise is like a balloon: you can squeeze it down in one frequency area and let it stick up in another. And so the noise is shaped away from areas you can hear and into areas that you can't. In one-bit as used in SACD, the inaudible range was considered to be anything above roughly 20kHz.
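The "6dB per bit" rule of thumb comes from the textbook formula for quantization noise - roughly 6.02 dB per bit plus a 1.76 dB constant for a full-scale sine:

```python
def quantization_snr_db(bits):
    """Textbook signal-to-noise ratio for an N-bit converter driven
    by a full-scale sine: about 6.02*N + 1.76 dB (before any
    noise shaping or oversampling)."""
    return 6.02 * bits + 1.76

print(round(quantization_snr_db(16), 1))  # ~98.1 dB - the "16 x 6 = 96" above
print(round(quantization_snr_db(1), 1))   # ~7.8 dB - hopeless without noise shaping
```

Noise shaping is what bridges that enormous gap for the one-bit system, by pushing the noise out of the audio band.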
There's our audio signal right there at 1kHz, and everything looks great, but of course the balloon snaps right back above 20kHz, eventually overtaking everything else. Some say sounds above 20kHz are audible, and some say they aren't, but in any event the noise is coming for them and will ultimately win.
When the balloon comes back with all of this high-frequency noise, much like the non-oversampled and unfiltered audiophile D/A converter, the results can be unpredictable. In the early days of SACD, there were some notable examples of high-end power amplifiers overheating and shutting down during demonstrations as a result of trying to amplify this noise. It was decided it would be to everyone's benefit to let the noise rise up to 50kHz and then remove it for SACD playback.
In multi-bit PCM, with no signal the digital value is just that: zero. But in the one-bit system, no signal is represented by this sequence: 1010101010...etc. - an equal number of ones and zeros that average out to zero. Unfortunately, for some of those boring technical issues we are totally overlooking in this article, it's not a perfect 1010101010 sequence in the real world. It may go 10101011111110101010 or something like that, and this creates tones where there is supposed to be silence. These are called 'idle tones' or 'limit cycles', if you want to search for more information. There are various other issues with one-bit systems with regard to the ability to add adequate dither, or even to keep the system out of overload. At any rate (Har!), these are difficult technical problems that ultimately drove high-quality audio (as well as industrial and scientific applications) to the use of more than one bit of quantization.
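You can see the idle-pattern behavior in a toy model. The function below is my own sketch of a first-order one-bit loop fed a constant input level - real modulators add dither for exactly this reason. With zero input you get the ideal alternating pattern; with a small DC level the loop locks into a short repeating cycle, which plays back as a tone:

```python
def one_bit_stream(dc, n):
    """Toy first-order one-bit modulator fed a constant input level
    (a sketch to expose idle patterns, not a real converter design)."""
    integ, bit, out = 0.0, 1.0, []
    for _ in range(n):
        integ += dc - bit                   # integrate input minus feedback
        bit = 1.0 if integ >= 0 else -1.0   # one-bit quantizer
        out.append(int(bit))
    return out

print(one_bit_stream(0.0, 8))   # [-1, 1, -1, 1, -1, 1, -1, 1]: the ideal idle pattern
print(one_bit_stream(0.25, 8))  # [-1, 1, -1, 1, 1, -1, 1, 1]: repeats every 8 samples -
                                # a periodic 'limit cycle' that shows up as a tone
```

The average of the second stream is exactly 0.25, so the level is encoded correctly; the trouble is that the pattern is perfectly periodic, putting a spurious tone where there should be noise-like silence.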
But once we have this one-bit signal, what can we do with it? We can keep it at this 3-megahertz sampling rate and reproduce it just like that, and this is what is done in the SACD format. But in the studio we like to do things like adjust the volume, tone, or many other manipulations. And this presents a fundamental problem. While there have been some attempts made at directly processing the one-bit signal, they were fraught with compromises. This is why many SACDs were released with essentially no processing applied. But for those that want to change the sound in some way, the signal must be converted to regular PCM. This process goes by the unfortunate name of "decimation." Like "They decimated them!" Nevertheless, through the miracle of mathematics, our one-bit signal can be turned into a regular 24-bit PCM signal for processing.
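The crudest possible decimator is just a block average - trade 64 one-bit samples for one multi-bit sample. This sketch of mine uses that boxcar average purely for illustration; real decimators use proper low-pass filters, but the rate-for-word-length trade is the same:

```python
def decimate(bits, factor=64):
    """Crude decimator sketch: average each block of `factor` one-bit
    samples into a single multi-bit PCM sample. Real designs replace
    the boxcar average with a proper low-pass filter."""
    return [sum(bits[i:i + factor]) / factor
            for i in range(0, len(bits) - factor + 1, factor)]

# A stream where 3 of every 4 bits are '1' has a pulse density of 0.5,
# and that density becomes the PCM sample value
stream = [1, 1, 1, -1] * 64   # 256 one-bit samples
print(decimate(stream))       # [0.5, 0.5, 0.5, 0.5]
```

Each output sample can take 65 distinct values here, so the stream has gained word length exactly as it lost sampling rate.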
But here is the sticky part - if we want to go back to one-bit (perhaps even for marketing or philosophical reasons) we have to further process the signal, and all of this back and forth comes with the potential for trouble because of that lurking noise we talked about earlier. This condition can be treated by raising the intermediate sampling rate for processing, but it certainly raises the question: "Why not just use regular PCM in the first place?" Today's modern converters all use between 3 and 5 bits internally. And they can be decimated to virtually any sampling rate and word length, with none of the problems inherent in the "pure" one-bit implementation.