Some audiophiles are convinced that all audio reviewers are corrupt. This article isn’t for them. I know that no matter what I write their minds are already firmly closed. No, this is for everyone else. And what follows has always been painfully obvious to me, and by the end of the article I hope that you, too, will see the light.
I’m not being flippant. I know that when I write a review it will be read by other reviewers, some of whom may even have reviewed the same product. And while we can have different opinions based on our own tastes, there are some parts of a review (besides the specifications) that, if incorrect or performed in an intellectually lazy manner, will be noticed and usually commented upon by others.
Let me give you what I consider a “classic” example of this phenomenon. In a recent review in TAS, several products produced by one manufacturer that can be used together were reviewed. The power amplifier was tested with only one set of speakers during the review. The reviewer heard a “too bright” harmonic balance and attributed it to an intrinsic character of the amplifier, which is only ONE possible reason for what he heard. Another possible cause was a mismatch between amplifier and loudspeaker. The way to determine whether that is the case is to use an alternative pair of loudspeakers. The reviewer failed to mention in the review whether he did this (he may have, and it ended up on the cutting-room floor), but the result was that there could be some question in a reader’s mind as to the validity of his conclusion. It didn’t take more than half a day before the first comment appeared, and sure enough it was about whether the conclusions were correct, since no second set of loudspeakers was used.
I have no problems with opinions that differ from my own, as long as the reviewer bolsters them with information. An example of factual info that supports a sonic opinion would be a comparison between two similar products indicating that one has better dimensional retention than the other. This gives other audiophiles, who are familiar with one or both components, a solid “fact point” to compare with their own observations. Opinions, when not backed by direct comparisons, carry far less intellectual weight, in my humble opinion…
Back in the good old days, one “rave” review from Harry Pearson and a company was destined for overflowing coffers from sales of that now-prized component. Those days are gone. In the present day, no one review from anyone, no matter who they are (I don’t mean to bruise any egos here), will launch a brand. These days it takes a multiplicity of rave reviews for a product to go heavenward. An example would be the 1More Triple IEM, which, due to great reviews from every single person who wrote about them, has surged up the market-share ladder.
Since it does take a multitude of great reviews for a product to go skyward, sales-wise, any attempt to subvert the process by corrupting one “star” reviewer into writing an overly positive review is not only unlikely to succeed, it is more than likely to backfire! When I see what I consider an overly rosy review with little in the way of comparisons or detailed sonic descriptions, my warning light goes off, as it does for other experienced audiophiles. That is not a good thing.
Is this “system” foolproof? Name one human-based social system that is. But given the competitive nature of audio journalism, I know that if I try to “game” the system by writing an overly positive (or overly negative) review there will very likely be someone who is savvy enough to catch me at it. For me, that is a strong motivating factor, as I believe it is for other audio reviewers…and that keeps us all honest.