I started this continuing series on audio reviews and reviewers by asking the question “Do we really need reviews and reviewers?” and concluding that we do – if only because there’s so much stuff out there, combinable in so many possible ways, that NOBODY, no matter how much time he spent, could ever audition even the tiniest meaningful part of it for himself. We must therefore rely on the help of others – if only as pre-screeners – to be able to make informed decisions regarding the toys and goodies to buy for our hi-fi hobby.
I then wrote that just about anything that anybody says about a system or a piece of hi-fi gear is, and should be recognized and judged as a “review”; that the most important thing about any review is whether or not we should believe it; and set down some basic qualifications that I think are necessary for the creation and communication of believable and worthwhile reviews.
The three requirements I listed are the ability to judge products’ performance; the ability to apply some (consistent) standard for rating them; and the ability to communicate one’s judgments and ratings to other people in a way that they will be able to understand and benefit from.
That first requirement – the ability to judge product performance – had three requirements of its own, I wrote, that needed to be satisfied in order for any “review” to be meaningful and valid: The “reviewer” had to have hearing good enough to allow him to discern differences; had to have a reference system good enough to present those differences for him to hear; and had to audition enough program material, of enough different kinds, over a long enough time that all of the potential differences would have been exposed for him to hear.
If any one of those things was lacking, I would dismiss the review out of hand, but even if all of them were present in full measure, I would still question the reviewer’s conclusions unless I knew what standard he had applied in deciding whether the product under review was good, bad, or indifferent. That question was looked at in my last installment of this series, and one standard that I suggested was one that my Sounds Like… Magazine reviewing colleague Tom Miiller and I came up with years ago – the idea of “believability” – not of the review, this time, but of the sound of the music played.
Put most simply, Tom and I agreed that a “good” product or system was one that would — regardless of the actual recording circumstances, or even of the fact (if that were the case) that there had never really been an original performance with all of the artists together and performing at the same time — cause us to believe that what we were hearing was real musicians playing live music at a real venue. With that as our standard, we could always, on comparing one or more products or systems, tell absolutely which was/were better; which worse; and which were not believable at all.
That gave us a good standard for reviewing equipment, and Jack English, another reviewer colleague of ours (mine and Tom’s) from the old Sounds Like… days, was kind enough to post a comment to Part 2 of this series that addresses another crucial issue regarding the believability of reviews and reviewers.
What Jack said was that, because he and other audiophiles have different tastes and preferences, different systems, different kinds of music that they will use for evaluating systems or components, and different rooms to listen to those things in: “…the key (for me [he said]) was to read reviews of pieces of equipment I was familiar with. If the reviewer and I had similar impressions, I would read more by that reviewer. Conversely, if I clearly felt differently than a reviewer about a given piece of equipment, I would purposefully try to find other reviews by that reviewer of equipment I knew. I was trying to either confirm that he/she and I perceived things differently or if some other thing (e.g., system compatibility) was involved. Over the years, I found there were many reviewers I trusted. I also found a number of reviewers that clearly didn’t hear what I heard or value what I valued…”
In short, what I think Jack was suggesting was that a good way to judge a reviewer as a potential guide is to read what he has to say about things that you have personally heard and already have your own opinions about. If you find that you and he agree, put him on your list of people who can provide you with valuable input; if not, not. Do this also with the fellow Hi-Fi Crazies you hang out with, discussing the joys or flaws of the latest hi-fi goodies: If you and one of them agree about things that you both know or may have heard together, obviously the man has Golden Ears, and his opinion about stuff you may NOT have heard could be valuable, too. But if you find that the two of you consistently disagree, you should certainly keep him as a pal, but you probably shouldn’t buy stuff based on just his recommendation.
For reviews of stuff that you haven’t heard, or reviews by a friend or reviewer whose opinions or conclusions on other gear you aren’t familiar with, one clue as to whether or not to give credence to what he’s saying might come from learning what he listens on. If his system is something you obviously wouldn’t want, that may be true of his opinion, too. And even if his system is a wonder and a glory, it still might not mean you should believe what he says. If he’s running huge speakers, for example, that might indicate that he has a huge listening room, and what sounds good in it might not sound good in a more normal-sized room. It might even be, on the other hand, that he’s using his huge speakers in a small room – possibly even smaller than yours – and if that’s the case, the sound he’s getting might not be all that it could be, and you might, again, not want to accept his opinion as indicative of your own. Carrying this to extremes, there was even the time, some years ago, when I was in Asia on a business trip for XLO and was taken to see and hear two extraordinarily expensive systems, both in huge and heavily acoustically-treated listening rooms, and each consisting only of Stereophile Magazine “Class A” rated components, cables, and accessories. Both systems – put together based only on the magazine’s ratings, without any attention whatsoever paid to “what works well with what” – sounded utterly awful, and I would certainly neither seek nor accept audio advice from the owner of either one. Would you?
In the next installment of this continuing series, I’ll tell you about other situations where two different highly qualified Hi-Fi industry professionals tested and evaluated exactly the same products and (Who should you believe?) came to wildly different conclusions – TWICE!
See you then.