I thought we would start this next installment with a discussion of some different converter types and their particular pros and cons. Just to backtrack a little, all the D/A converter does is take a number and turn it into a voltage. A larger number applied to the converter generates a higher voltage at the output. Regardless of the type of converter, this is all that ever happens.

Originally - and this type of converter dates back to the 1920's - you had one resistor for each level you wanted to convert, and the appropriate resistors were switched in to generate the corresponding analog output. This presents an immediate practical problem: in PCM we need 2-raised-to-the-number-of-bits' worth of resistors, so a paltry 10 bits means over a thousand parts. There has to be a better way! Numerous clever approaches were taken to either reduce the parts count, improve the accuracy, or both.

A popular D/A design that evolved from these problems was called the R/2R converter. Instead of 2^(number of bits) resistors, it requires only 2 *times* the number of bits' worth of resistors. So, 10 bits uses 20 resistors, and so on. The name R/2R comes from the use of only two resistor values, one twice the other - say, 1000 Ohms and 2000 Ohms. This ratio comes from the fact that each bit added is worth twice the one before. We count this as 1, 2, 4, 8, 16, etc., in binary.

This doubling of the numerical value is also the same as 6dB in analog, for those keeping score at home.
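For those who like to check the arithmetic, a doubling of amplitude works out to 20·log10(2), just over 6 dB, and each added bit buys another such step. A quick sketch:

```python
import math

# A doubling of amplitude is 20*log10(2), about 6.02 dB, so each extra bit
# of word-length adds the same ~6 dB step the text describes.
for bits in (1, 2, 4, 8, 16, 24):
    print(bits, "bits ->", round(20 * math.log10(2**bits), 1), "dB")
```

This is where the familiar rule-of-thumb figures come from: roughly 96 dB for 16 bits, roughly 144 dB for 24.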

All digital audio uses this system, from SACD's single bit up to 24 bits in other formats.

Here is a conceptual 4-bit R/2R converter:

Our four digital bits come in on the top B's. Let's say five volts for a one, and zero volts for a zero. As we work from the MSB (B3) to the LSB (B0), the contribution of each bit is half of the previous one, because the voltage has to travel through more resistors. The more bits that are at five volts, the more voltage comes out - in that 1, 2, 4, 8 ratio we talked about earlier.

Looks simple enough, right?
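The arithmetic the ladder performs can be sketched in a few lines (the 5 V logic level and the B3-down-to-B0 weighting follow the description above; this models the ideal math, not the actual resistor network):

```python
# Ideal 4-bit converter arithmetic: each bit contributes half of the one above it.
VREF = 5.0   # "five volts for a one", per the description above
BITS = 4

def dac_out(code):
    """Voltage out of an ideal 4-bit ladder for a digital code 0..15."""
    # B3 (the MSB) contributes VREF/2, B2 VREF/4, B1 VREF/8, B0 VREF/16
    return sum(((code >> b) & 1) * VREF / 2**(BITS - b) for b in range(BITS))

for code in (0b0001, 0b0010, 0b0100, 0b1000, 0b1111):
    print(f"{code:04b} -> {dac_out(code):.4f} V")
```

Stepping each single bit in turn shows the 1, 2, 4, 8 weighting directly: each code is worth exactly twice the one before it.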

All D/A converters are prone to various types of errors such as:

**Linearity:**

This is just a fancy word for distortion, or lack of faithfulness to the original input. It can take many forms, but all of them create new notes that weren't present in the original recording! For instance, in the R/2R converter, the fact that all the resistors are not exactly matched affects the linearity and generates distortion.
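A toy numerical sketch (not a circuit simulation) of how a single mismatched part bends the transfer curve - here a hypothetical 2% error on the MSB weight of the ideal 4-bit converter from above:

```python
# Toy illustration: perturb the MSB weight of an ideal 4-bit converter
# by a hypothetical 2% and look at the worst-case output error.
BITS, VREF = 4, 5.0
ideal  = [VREF * 2**-(BITS - b) for b in range(BITS)]   # weights for B0..B3
skewed = ideal[:]
skewed[BITS - 1] *= 1.02                                # 2% MSB mismatch

def out(code, weights):
    return sum(((code >> b) & 1) * weights[b] for b in range(BITS))

worst = max(abs(out(c, skewed) - out(c, ideal)) for c in range(16))
print(f"worst code error: {worst*1000:.1f} mV")
```

Because the error appears only on codes with the MSB set, it rides up and down with the music itself - which is exactly how a static component mismatch turns into signal-dependent distortion.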

**Monotonicity:**

This is another departure from linearity. Monotonicity simply means that when the digital value increases, the output must also increase, and vice-versa. Obviously it's bad for the input to go up and the output to go down! A related fault is "missing codes," where the output remains the same even though the input has changed level - called "sticks" and "jumps" in the trade.
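Here is a small sketch of how that happens: if the MSB weight comes up short enough, the step from 0111 to 1000 actually goes *down*. (The 15% error below is deliberately exaggerated for illustration.)

```python
# Illustration: an MSB that is 15% low breaks monotonicity at the 0111 -> 1000 step.
BITS, VREF = 4, 5.0
weights = [VREF * 2**-(BITS - b) for b in range(BITS)]  # ideal B0..B3 weights
weights[BITS - 1] *= 0.85                               # hypothetical low MSB

def out(code):
    return sum(((code >> b) & 1) * weights[b] for b in range(BITS))

steps = [(c, out(c + 1) - out(c)) for c in range(15)]   # output change per code step
for c, delta in steps:
    if delta < 0:
        print(f"non-monotonic at {c:04b} -> {c+1:04b}: output falls {-delta*1000:.1f} mV")
```

The failure shows up at the biggest carry in the code, where one heavy weight has to stand in for all the smaller ones at once.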

**MSB Transition Noise:**

In PCM audio, our music is always represented as a series of digits ranging from the Most Significant Bit (MSB) to the Least Significant Bit (LSB). Whether 1 or 24 bits, it always runs from the MSB to the LSB. As the word-length is increased, more LSB's are added. The MSB is also the "sign bit," describing whether the music is a positive or negative voltage for that particular sample. But when the signal *crosses* zero - at its lowest and quietest point - all the bits change state at once. Having every bit switch simultaneously can cause interference between the analog and digital portions of the D/A converter, and that is a very difficult engineering problem.
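A quick way to see the zero-crossing problem, assuming the usual two's-complement coding of PCM samples:

```python
# In 16-bit two's complement, the quietest possible zero crossing
# (sample value -1 to sample value 0) flips every single bit.
def bits16(x):
    return x & 0xFFFF                     # two's-complement view of a 16-bit sample

flips = bin(bits16(-1) ^ bits16(0)).count("1")   # XOR marks the bits that change
print(f"-1 -> 0 flips {flips} of 16 bits")
```

So the largest digital disturbance in the whole converter happens precisely where the analog signal is at its smallest and most fragile.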

The R/2R converter is subject to all three of these errors, and many more. Some of them can be treated, but even simple things like the matching of resistors (as a reminder, 20 bits is one part-per-million!) and compensating for aging over the life of the product are very difficult. Even 16 bits of actual performance is difficult to achieve in R/2R converters.
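To put that part-per-million figure in perspective, here is the size of one LSB relative to full scale at a few common word-lengths:

```python
# One LSB as a fraction of full scale: the precision the parts must hold
# for the converter to be honest at that word-length.
for bits in (16, 20, 24):
    lsb = 1 / 2**bits
    print(f"{bits} bits: 1 LSB = {lsb:.2e} of full scale ({lsb*1e6:.2f} ppm)")
```

Ordinary precision resistors are specified in tenths of a *percent* - thousands of times too coarse - which is why honest R/2R performance gets so expensive so quickly.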

Let's put on our lab-coats and compare some real-world read-outs. First up is a state-of-the-art multi-bit R/2R design. The price is around USD$40,000 and it's highly regarded in the industry.

But before that, a few words about these graphs. The test contains only two notes - nineteen and twenty kilohertz. These are very high frequencies, likely not even audible to listeners when played on their own, without distortion.

But there is an interesting property of these two notes: It's a sort of "torture test" for converters. A perfect system would just show two vertical lines at 19 and 20kHz. When distortion is present, many more notes will appear! Even the difference of 1k (20k minus 19k) will be displayed.
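We can mimic that torture test numerically. The sketch below feeds the 19 kHz + 20 kHz pair through a made-up, mildly nonlinear transfer curve (a stand-in for converter distortion, not any particular DAC) and measures what lands at the 1 kHz difference frequency:

```python
import math, cmath

fs, n = 96_000, 9_600          # 0.1 s of samples; each test tone lands exactly on a bin
t = [i / fs for i in range(n)]
clean = [math.sin(2*math.pi*19_000*x) + math.sin(2*math.pi*20_000*x) for x in t]
# Hypothetical second-order nonlinearity standing in for converter distortion
bent  = [s + 0.1 * s*s for s in clean]

def level(sig, f):
    """Magnitude at frequency f (one DFT bin), scaled so a full tone reads 1.0."""
    return abs(sum(s * cmath.exp(-2j*math.pi*f*i/fs) for i, s in enumerate(sig))) / (n/2)

for f in (1_000, 19_000, 20_000):
    print(f"{f:>6} Hz: clean {level(clean, f):.3f}, distorted {level(bent, f):.3f}")
```

The clean pair shows essentially nothing at 1 kHz, while the distorted version grows a brand-new 1 kHz note out of thin air - the 20k-minus-19k difference tone the article describes, and one that falls squarely in the audible range.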

And in contrast, here is an equally state-of-the-art sigma-delta design using integrated circuits. Also well regarded, and approximately USD$2,000.

This all looks very esoteric, I know, but all that red stuff in the first graph really represents new notes that were not present in the original music!

As we talked about in the first article, the use of oversampled sigma-delta converters has brought greatly increased accuracy and fidelity to the original recording, as well as being a much more affordable approach.

In the next article, we'll go into more detail about sigma-delta, as well as some practical aspects of modern designs.