Back in the ancient days of Hi-Fi, even before there was any such thing as the "High-End", there was a man named Julian Hirsch who had a company called Hirsch-Houck Laboratories which tested, over time, more than four thousand products for the leading consumer hi-fi magazines of that day.
That Hirsch and his company should have done so, and that the results of his testing should have had a powerful influence on the buying public, is absolutely reasonable: Hi-Fi was a new and, to most people, completely unknown field that was not only highly technical but often extremely -- possibly even excessively -- expensive. The James B. Lansing (now JBL) "Paragon", for example, was a one-piece stereo speaker system that, at $1,895 in 1957 (if I remember correctly), was fully as expensive as a well-equipped Chevrolet.
With products like that and the ongoing success and growing public reliance on other publications like (since 1936) Consumer Reports, it was only natural that people should look for expert guidance on their hi-fi purchases, and that they should expect that the test results they read would help them in choosing among the products tested.
That certainly seemed to be a reasonable expectation, but in reality there were problems: Consumer Reports, for example, concentrated its testing on durability and build quality, and paid little attention, if any, to customer satisfaction (how much people actually liked the products) or any other subjective factor.
It was the same with hi-fi, too: Readers of test reports (they could never really be called "reviews") done by Julian Hirsch would learn a product's frequency response; plus, if it was an electronics component, its signal-to-noise ("S/N") ratio; its total harmonic distortion ("THD"); its intermodulation distortion ("IM"); and later, when it became briefly fashionable, its "transient intermodulation distortion" ("TIM"). If the product was a turntable, they would also learn its "rumble" and its "wow and flutter" figures (but never, as I recall, would they find out anything at all about tonearms or record mats, which in the days before "Platter Matter" or my favorite, the "Glasmat", were apparently assumed to have no sound of their own).
This lack of concern for (or even recognition of) the sound of the things tested wasn't limited to any one product or even any one category of products. Although Julian Hirsch can certainly not be faulted for the amount of testing he lavished on the products he reported on, or even, by the standards of his era, on the quality, rigor, or nature of his tests, the one thing that he apparently never concerned himself with was what anything actually sounded like. In fact, the story goes (although, not having read all 4,000 of his test reports, I can't personally vouch for it) that the word "sound" never once appeared in any of them.
In today's world where, frankly, there are so many new products (and even so many new formats) coming down the line that even I would appreciate a little help from time to time, testing of hi-fi equipment and related products and systems is still something that most people -- even most audiophiles -- would really like to have available to them when it comes time to buy new toys. And, reflecting their wishes, reviews are now available from more sources and are better read than ever before. The difference is that the old-style test reports that just give specifications and features are no longer sufficient, and, ever since subjective reviewing was invented by the likes of Harry Pearson and J. Gordon Holt, what the product under review actually sounds like has become, for most reviewers, the most important consideration.
That hasn't stopped people from objective testing, of course, but -- to the likely confusion of the ordinary non-audiophile consumer who just wants to buy a new "hi-fi" or home theater system -- it has resulted in a bifurcation or even polyfurcation (Do you like that? Huh? Huh? Do I get points for it?) of even the most informed opinion about testing: One camp (accused of being Luddites by some others) strongly believes that objective testing is the ONLY way to come to any sort of usable conclusion, and that simple listening, even by trained "expert" listeners, is never to be trusted, because of the "placebo effect" and any number of other psychological and psychoacoustic effects. Another major camp ("Voodoo-believers" and "Snake-Oil Buyers", in the opinion of the "Luddites") holds 1) that most testing is meaningless to the average buyer; 2) that "blind" or "double-blind" testing -- the darling of the test set -- while certainly of value in other areas of research, is of utterly no value for testing hi-fi hardware or software, either because of inherent problems with the test itself or with the nature of the test materials (the music being listened to); and 3) that listening is both the purpose of our equipment and the only thing we should ever place our faith in. A third group is more selective, and believes that testing -- provided that it's testing the right thing in the right way, to determine answers to the right questions -- is a good way to "rule out" inappropriate products before listening and making a final selection "by ear".
To illustrate this: if you love deep bass and you're shopping for a subwoofer, reading a test report showing that one model offered to you only goes down to 50Hz can definitely help you to move on to a different one without the need for any actual listening. Consider also, however, that the absolute opposite can be true: Distortion readouts, just as one example, may be of no value at all, and a tube amplifier with VASTLY higher distortion figures may, on listening, actually sound BETTER to some people than a solid-state unit with much lower distortion measurements.
And what do I personally think? Into which "camp" do I fall? I believe that where testing can provide meaningful data to support more informed decision-making it should always be done and, when I was a manufacturer (XLO cables), there were certain tests that we always did and that we found to be of great value. There were also other kinds of testing that we found from experience to be anything from unnecessary, to irrelevant, to outright misleading. Critical listening, on the other hand, was something that we did at every possible opportunity.
Testing is a good idea and at XLO, when we heard something new, either good or bad, we always tested to try to learn the reason for it so that we could either try to get more or try to reduce or eliminate it, as appropriate. That testing wasn't always successful: There are things that anyone who listens will hear for which no solid explanation yet exists, but that DOESN'T mean that they don't hear them. Although imagination or "placebo effect" is certainly possible (as the testing-only fans insist), it's at least equally possible that the tests that have been done to determine and document their reality were faulty; that they were testing the wrong thing; that the right thing was being tested, but in the wrong way; or that the test results were both correct and meaningful but were interpreted incorrectly.
Testing to see if something is real can be a good idea, but testing the tests can be of at least equal value and all too little of that seems to be done. Critical listening by people who actually know how to do it is a great way to start.