You know how some guys won’t ever believe anything until they’ve tested and measured it? What if your ears could be scientifically proven to be the very best test equipment of all? Wouldn’t that bend a few noses and drive a few trolls back under their bridge?
Think about this for a moment: Do you remember when you were just getting into Hi Fi? Do you remember how you tried to learn everything you could about it? How you collected product literature, and memorized product “specs” just the way sports fans memorize baseball statistics? Do you remember how shocked you were when you finally went into a Hi Fi shop and listened to the stuff you had lusted after for so long, only to discover that flat frequency response and .00001% THD don’t really have a whole lot to do with whether a product sounds any good? Or whether its sheer realism will set you grinning and grabbing for your wallet?
The fact is that a product’s specifications and its ability to convince you that you’re listening to the real thing may or may not be related, and the only way you’re ever going to really find out is by listening! And here’s a surprise for you: those ears that you’re going to listen with are ― not just subjectively, but in measurable fact ― VERY MUCH better than even the very best electronic test equipment:
The normal threshold-to-threshold range of human hearing is typically described as encompassing roughly 100dB: from about 30dB absolute, the “threshold of perception” (where sound first becomes audible to most people), to the “threshold of pain” at about 130dB absolute (imagine the sound of a Boeing 747 taking off just 50 feet away). Because decibels are not an absolute but a relative measurement, calculated logarithmically to represent the ratio of one sound level to another, and because each tenfold increase in sound intensity over a prior level adds 10dB, a 100dB increase over an original reference level (in this case, 30dB absolute, also called 30dBSPL) indicates a 10,000,000,000 to 1 (ten BILLION to one) increase in sound intensity ― acoustic power ― over the reference level!
To put it differently, the “threshold of perception” (30dBSPL) reference level is just 0.0000000001 times the intensity of the 1.0 level (130dBSPL) at the threshold of pain. How many test devices do you know of that can even read that? How many have single-scale resolution to ten significant digits?
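If you want to check that decibel arithmetic yourself, here’s a short Python sketch (the function names are my own, just for illustration) that converts a dB difference into both an intensity (power) ratio and a sound-pressure (amplitude) ratio ― note that the ten-billion-to-one figure is the intensity ratio; the pressure ratio over the same 100dB span is “only” one hundred thousand to one:

```python
def db_to_intensity_ratio(db):
    """A level difference in dB, expressed as an intensity (power) ratio."""
    return 10 ** (db / 10)

def db_to_pressure_ratio(db):
    """The same level difference, expressed as a sound-pressure (amplitude) ratio."""
    return 10 ** (db / 20)

# The 100 dB span from the 30 dBSPL "threshold of perception"
# to the 130 dBSPL "threshold of pain":
span_db = 130 - 30
print(db_to_intensity_ratio(span_db))  # 10,000,000,000 : 1 in intensity
print(db_to_pressure_ratio(span_db))   # 100,000 : 1 in pressure
```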
Following through on that word “significant” is where it starts to get interesting: What if we were back in those earlier days when we believed and cared about product specs, and we saw that (for example) a new amplifier had just been measured at one one-thousandth of one percent eighth harmonic distortion (0.001%)? Would we be impressed? Would we want to rush right out and buy one?
To figure that out, let’s try amplifying just two single tones – sine waves of 200 and 1600Hz, both well within the frequency range of human hearing (or even the average old-time telephone). We’re going to play them through our amplifier with the 200Hz tone at a loud but not uncomfortable 100dBSPL and the 1600Hz tone much quieter, at just 50dBSPL. Because both levels are well within the range of normal human hearing, and because the tones are significantly different in frequency, even though one is 50dB louder than the other, there should be no “masking” effect and we should still be easily able to hear two separate and distinct tones.
Now let’s take those same tones, played at the same relative volume levels through that same amplifier, and run the amp’s output into a Distortion Analyzer. What will it show? Well, remember that the 200Hz tone is 50dB ― 100,000 TIMES, in intensity ― louder than the other. Or maybe it’s the other way around; maybe the 1600Hz tone is 100,000 times QUIETER than the other, being just 0.00001 times (0.001%, figured as a power ratio) as loud as the 200Hz tone. Whichever you want to call it, because the tones are exactly three octaves apart and because of the way test instruments work, the Distortion Analyzer WON’T “see” the amplifier’s output as two separate tones at all, but will indicate a 200Hz tone with one one-thousandth of one percent eighth harmonic (1600Hz) distortion.
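The numbers in the two-tone example work out like this (a back-of-envelope Python sketch, with variable names of my own choosing). One caveat worth flagging: the 0.001% figure treats the 50dB difference as a power ratio; expressed as the amplitude ratio that distortion meters conventionally display, a tone 50dB down reads as roughly 0.3%:

```python
loud_db = 100.0    # the 200 Hz tone, in dB SPL
quiet_db = 50.0    # the 1600 Hz tone, in dB SPL
diff_db = loud_db - quiet_db             # the tones sit 50 dB apart

power_ratio = 10 ** (diff_db / 10)       # 100,000 : 1 in power
amplitude_ratio = 10 ** (diff_db / 20)   # about 316 : 1 in amplitude

# The quiet tone, expressed as "distortion" of the loud one:
print(f"{1 / power_ratio:.5%} as a power ratio")          # 0.00100%
print(f"{1 / amplitude_ratio:.3%} as an amplitude ratio")  # 0.316%

# Sanity check: 1600 Hz is three octaves (2**3 = 8x) above 200 Hz,
# so the analyzer reports it as the eighth harmonic.
print(1600 / 200)  # 8.0
```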
Is that 0.001% distortion “significant”? IT BETTER BE! If we were to rely on just the test instrument reading, ignoring the evidence of our ears, and if we were to insist that just 0.001% distortion can’t possibly be significant, we might find ourselves listening to just one tone and losing the other one, entirely!
Which do you trust? Your ears or your instruments? If there’s ever any doubt, I’ll go with my ears! How about you?