As I discovered after my recent article “You DO have to sweat the small stuff” was published, the debate about whether cables make any difference to the sound of an audio system is apparently never going to go away. In writing that article, I specifically stated that it was NOT about cables, and that I was only using one young man’s comment about cables to introduce my REAL subjects, which were that: 1) In audio, improvement is incremental and, although you may not always get what you pay for, you definitely DO pay for what you get. 2) Improvement is subject to the “Law of Diminishing Returns”, so that each additional dollar spent will bring less new improvement, and 3) When you finally DO get to a certain point, just like focusing a camera or fitting a key to a lock, that tiny last increment can make a huge difference.
So what happened? Virtually all of the comments that came in were about cables and few, if any, were in response to what I had actually written about.
Okay, never let it be said that I am not capable of learning a lesson: This article IS about cables, although NOT about the issue of “DO they work” ― as the designer of all XLO cables from 1990 through 2002, a recognized contributor to the knowledge base accessed even by today’s most advanced cable companies, and a finalist for DuPont’s prestigious Plunkett Award for technological innovation, I think my position on that subject must be pretty obvious. Instead, it will be about just one single aspect of HOW they work, and there may, depending on how well this one is received, be more similar articles to come. (Trolls: get your notepads and both of your typing fingers limbered up and ready to go!)
In pushing their position, the people out there who would have you believe that cables don’t make any difference to system performance usually start by assuring you that there are only three (or four, depending on which orthodoxy they come from) factors that affect the performance of a cable: RESISTANCE (“R”) [or its converse, CONDUCTANCE (“G”)], CAPACITANCE (“C”), INDUCTANCE (“L”) and, if they come from that group, CHARACTERISTIC IMPEDANCE (“ZO”). Those are the factors considered important to cable performance in conventional Electrical Engineering theory and practice and, as far as they go and for their intended purpose, they are (surprise!) both correct and sufficient. The problem’s not with them, but with that word “only”. In fact, there are a great number of other things that also affect cable performance, and even of just the four given (R, C, L, and ZO), two are, FOR MOST AUDIO APPLICATIONS, of lesser significance than the others and one may make absolutely no difference at all!
Resistance is one of those factors that is usually of less importance for audio cables. In power transmission systems, however (remember that power lines all over the world run at 50 or 60 Hz, and that those are definitely audio [not-very-deep-bass] frequencies), that’s NOT the case, and, in fact, the reason that power lines carry AC of any frequency at all has specifically to do with resistance: The original Edison power lines for electrifying the cities for electric lighting ran on direct current (DC), and the ONLY thing that affected their performance (other than a “short” or a break in the line) was resistance. The problem was that cities, even in the beginning, when there weren’t all that many users and power wasn’t being conveyed any great distance, still used HUGE amounts of electricity, and transporting it as DC current (amperage) meant dealing with one or both of two concerns: Either the cables had to be colossally thick (and HIDEOUSLY expensive and difficult to work with) to reduce their resistance, or the electric companies had to accept significant losses of otherwise salable power to simply heating the cables!
The solution came from Nikola Tesla who, backed by George Westinghouse, developed an AC (alternating current) system that allowed tremendous amounts of power to be transported through relatively small lines as low current at very high voltage (up to 220,000 Volts) without heating the lines and then, using step-down transformers, converted it to high current at much lower (120 or 240) voltage for home or business use.
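The arithmetic behind Tesla’s solution is simple enough to sketch: the power lost to heating a line goes as the square of the current times the line’s resistance, so raising the voltage (and thereby lowering the current needed to deliver the same power) slashes the loss. The numbers below are purely hypothetical, chosen only to illustrate the principle:

```python
# Why high-voltage AC transmission beats DC at household voltage:
# resistive line loss is P_loss = I^2 * R, and raising the voltage
# lowers the current needed for the same delivered power.
# All numbers below are hypothetical, chosen only for illustration.

def line_loss_watts(power_w, volts, line_resistance_ohms):
    """Power dissipated heating the line itself (P_loss = I^2 * R)."""
    current = power_w / volts          # I = P / V
    return current ** 2 * line_resistance_ohms

POWER = 1_000_000      # 1 MW delivered to the city (hypothetical)
R_LINE = 5.0           # total line resistance in Ohms (hypothetical)

low_v_loss = line_loss_watts(POWER, 240, R_LINE)       # Edison-style DC voltage
high_v_loss = line_loss_watts(POWER, 220_000, R_LINE)  # Tesla/Westinghouse AC

print(f"Loss at 240 V:     {low_v_loss:,.0f} W")       # tens of megawatts!
print(f"Loss at 220,000 V: {high_v_loss:,.1f} W")      # about a hundred watts
```

At 240 Volts the line would burn off more power than it delivers; at 220,000 Volts the same line loses only about a hundred watts — which is exactly why the cables didn’t have to be colossally thick.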
For an audio interconnecting cable, the signal is, as with power lines, an alternating current, but obviously of a vastly broader frequency range and vastly lower power. In fact, for an “unbalanced” (“single-ended”) audio line connecting two components, one with an output impedance of typically 50 to 250 Ohms and the other with an input impedance of typically 10,000 to 47,000 Ohms, the amount of current actually carried is negligible – in the milliamp or fractional-milliamp range – and resistance as an element of current loss is not a consideration. “Balanced line” interconnects (typically fitted with XLR connectors), although connecting matched impedances (the original standard was 600 Ohms) and therefore not having “loading” issues, still typically carry only very tiny amounts of current, so, other than affecting their own characteristic impedance (ZO), their internal resistance is of similarly little importance.
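To put a number on “negligible,” Ohm’s law applied to the whole series loop (source output impedance, cable resistance, load input impedance) shows the current and the cable’s share of the voltage drop. The sketch below uses hypothetical but typical values from the figures above; none of them are measurements of any particular product:

```python
# How little current an unbalanced interconnect actually carries.
# Hypothetical but typical values: a 2 V RMS line-level signal, a
# 100 Ohm source output impedance, a 47 kOhm input impedance, and
# a small series resistance for a 1 m interconnect. (All assumed.)

V_SIGNAL = 2.0        # line-level signal, volts RMS (assumed)
Z_OUT = 100.0         # source output impedance, Ohms (assumed)
Z_IN = 47_000.0       # load input impedance, Ohms (assumed)
R_CABLE = 0.05        # series resistance of a 1 m interconnect, Ohms (assumed)

# Ohm's law for the series loop: I = V / (Z_out + R_cable + Z_in)
current_amps = V_SIGNAL / (Z_OUT + R_CABLE + Z_IN)
print(f"Current: {current_amps * 1000:.4f} mA")   # a small fraction of a milliamp

# Fraction of the signal voltage dropped across the cable's own resistance:
cable_drop_fraction = R_CABLE / (Z_OUT + R_CABLE + Z_IN)
print(f"Cable voltage drop: {cable_drop_fraction * 100:.6f} % of the signal")
```

The current works out to a few hundredths of a milliamp, and the cable’s resistance eats around a millionth of the signal voltage — which is why resistance simply isn’t the issue for line-level interconnects.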
In speaker cables, it may be different. For one thing, most speakers are electromagnetic in their operation (as opposed to “electrostatic”) and, as such, are “high current-low voltage” devices. (That’s why good High End amplifiers are known for, among other things, their ability to produce a high current output.) With a potentially high amperage current flowing through it, a speaker cable CAN, if its resistance is too high (meaning that its overall AWG wire gauge is too thin), present the possibility of signal loss due to cable heating. Except in extreme circumstances, though, this is highly unlikely, and even if there were signal losses, because they would affect all frequencies equally, they would likely have no effect on the sound at all except for a slight lowering of its volume level.

The more likely way for an excessively resistive speaker cable to affect the sound of the system would be by lowering the amplifier’s effective “damping factor”, which is its ability to resist spurious driver motion by electrically opposing driver “back EMF” – the electrical energy produced by a driver acting as a generator when it keeps on moving after the signal driving it has stopped or changed direction. Damping factor itself is, however, like cables, a hotly debated subject, with the trolls and a goodly portion of the engineering community declaring it to be of little or no consequence, and much of the High-End community, including many electronics manufacturers (and even such a decidedly non-“Tweak” pro audio company as Crown), taking the other side and considering it to be of major consequence.

Another issue of controversy relative to cables and resistance is that of silver versus copper. Remembering that Conductivity (“G”) is the converse of Resistivity, and that both are exactly the same thing, just seen from opposite directions, consider this: According to Lehigh University, high-purity silver has a conductivity of “106” and is the best natural metallic conductor known.
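The damping-factor effect is easy to quantify. Damping factor is conventionally the speaker’s nominal impedance divided by the total source impedance the driver “sees” – the amplifier’s output impedance plus the cable’s round-trip resistance. A sketch, using hypothetical example values rather than measurements of any real amplifier or cable:

```python
# How speaker-cable resistance erodes an amplifier's effective damping
# factor: DF_eff = Z_speaker / (Z_amp_out + R_cable_round_trip).
# All values below are hypothetical examples, not measurements.

def effective_damping_factor(z_speaker, z_amp_out, r_cable_round_trip):
    """Damping factor as seen at the driver, including the cable."""
    return z_speaker / (z_amp_out + r_cable_round_trip)

Z_SPEAKER = 8.0      # nominal speaker impedance, Ohms (assumed)
Z_AMP_OUT = 0.02     # amp output impedance giving a spec'd DF of 400 into 8 Ohms

for r_cable in (0.01, 0.1, 0.5):   # round-trip cable resistance, Ohms (assumed)
    df = effective_damping_factor(Z_SPEAKER, Z_AMP_OUT, r_cable)
    print(f"cable R = {r_cable:4.2f} Ohm -> effective DF = {df:6.1f}")
```

Note how a cable with half an Ohm of round-trip resistance drags a spec-sheet damping factor of 400 down to the teens at the driver’s terminals — whether or not that matters audibly being, as noted, exactly the point of contention.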
Even the very lowest grade of copper, however, is conductivity-rated at “89.5”, only about 15% worse, and the conductivity rating of the very best copper is “100”, just 5.6% less than that of pure silver. So: 1 – if, as the trolls and unbelievers contend, cables don’t have a sound; 2 – if, as discussed above, resistance makes little, if any, difference in the performance of cables; and 3 – if, even at its greatest, there’s not much difference between silver and copper in most audio applications, why the battle? Especially when YOU CAN CUT ANY CABLE’S RESISTANCE IN HALF JUST BY MOVING YOUR COMPONENTS SO YOU CAN USE A CABLE HALF AS LONG! To carry the question even further, why (all of the following percentages are quoted from here) is gold, which is 24% less conductive than copper, used on the connectors of the great majority of High-End audio cables? Or rhodium, at 74% – yes, SEVENTY-FOUR PERCENT – less conductive than copper, used on audio connectors claimed to be an ultra-premium alternative to gold? Could it be that, for audio cable applications, resistance (unless wildly excessive, or as one of the half-dozen elements of characteristic impedance) really doesn’t matter very much at all? Of the three or four so-called “significant” cable factors, that is one down. If you want more on these subjects, please let me know.
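Both points — the modest silver-versus-copper gap and the “just use a shorter cable” argument — fall out of the standard formula for the resistance of a uniform conductor, R = ρ·L/A (resistivity times length over cross-sectional area). The sketch below uses standard handbook resistivity figures and an assumed 12 AWG conductor; the cable lengths are arbitrary examples:

```python
# Silver vs. copper resistance for the same wire, plus the length effect:
# R = rho * L / A. Resistivities are standard handbook figures for pure
# metals at 20 C; the 12 AWG diameter and lengths are assumed examples.
import math

RHO_SILVER = 1.59e-8   # resistivity of pure silver, Ohm-meters
RHO_COPPER = 1.68e-8   # resistivity of annealed copper, Ohm-meters

def wire_resistance(rho, length_m, diameter_m):
    """Resistance of a uniform round conductor: R = rho * L / A."""
    area = math.pi * (diameter_m / 2) ** 2
    return rho * length_m / area

D_12AWG = 2.05e-3      # 12 AWG conductor diameter, about 2.05 mm

r_cu_2m = wire_resistance(RHO_COPPER, 2.0, D_12AWG)
r_ag_2m = wire_resistance(RHO_SILVER, 2.0, D_12AWG)
r_cu_1m = wire_resistance(RHO_COPPER, 1.0, D_12AWG)

print(f"2 m copper: {r_cu_2m * 1000:.2f} mOhm")
print(f"2 m silver: {r_ag_2m * 1000:.2f} mOhm  (only a few percent lower)")
print(f"1 m copper: {r_cu_1m * 1000:.2f} mOhm  (half the length, half the R)")
```

Switching metals from copper to silver buys about a 5% reduction in resistance; halving the length buys 50% — a few milliohms either way, in a circuit where, as shown above, milliohms hardly register.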