Objectivist audiophiles have, from their first data dump, loved A/B/X tests because they believe them to be the “gold standard” for testing. But we need to clarify one basic premise of an A/B/X or any other “blind” test right from the start – if no discernible difference is found, it only means that for that one test no difference was found, which means either there was no difference or that the system under test was not sufficiently resolving for the difference to be heard. That is ALL that one A/B/X or A/B test tells you. It is only through many repeated tests with different subjects and systems that ONE particular component can be said to “sound different” by A/B/X standards. That is quite a high bar, and perhaps there are tests that, while not quite so rigorous, can reveal some useful information about the audio components in our systems.
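To put a number on that bar: a single A/B/X run is scored against chance, so the arithmetic below is a minimal sketch (my illustration, not part of any test discussed here) of how unlikely a given score is if the listener is only guessing. The 12-of-16 criterion in the example is a commonly cited convention, not a universal standard.

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of scoring at least `correct` out of `trials`
    A/B/X trials by pure guessing (null hypothesis: p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# A commonly used criterion: at least 12 correct out of 16 trials.
print(f"p = {abx_p_value(12, 16):.4f}")  # ~0.0384, just under the usual 0.05 cutoff
```

And even a “passing” run like that only speaks for one listener, one system, and one day, which is why the repeated-tests requirement above is such a high bar.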
Let’s look at what can be gleaned from one test currently available on the Archimago’s Musings site on the Internet. It involves downloading a large file that contains four song samples “captured” by an RME ADI-2 Pro ADC. For the capturing, the signal chain ran from each of the four digital playback devices’ line-level analog outputs to the inputs of the RME. So, the recording went from its original studio recording (which had its own ADC processing) to a file, which was then processed by each device’s D/A circuits and sent to its analog outputs, where it was received by the RME’s analog inputs, re-digitized by its ADC, and then packaged into a zip file for easy transfer.
The four tracks all began as 16/44 Redbook-level files, which were then “captured” by the RME at 24/96. According to the webpage, “Afterwards I tweaked the files to make sure that the average loudness was almost identical.” A foobar2000 1.4.1 dynamic range meter was used for the comparisons; the device or app used for the actual volume adjustments was not noted in the test’s description.
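Since the description doesn’t name the tool used for the volume adjustments, here is a minimal sketch, in Python with NumPy, of the general idea: apply a single gain to each capture so their average levels line up. The function name and the RMS-based approach are my own assumptions; a real workflow would more likely match a proper loudness measure such as ReplayGain or LUFS, but the principle is the same.

```python
import numpy as np

def match_rms(samples: np.ndarray, target_rms: float) -> np.ndarray:
    """Scale a float PCM signal (values in [-1.0, 1.0]) so its RMS
    level equals target_rms. Hypothetical helper, not the tool
    actually used for the test's loudness matching."""
    current_rms = np.sqrt(np.mean(np.square(samples)))
    return samples * (target_rms / current_rms)

# Hypothetical usage: bring all four captures to the same average level
# so listeners aren't swayed by loudness differences.
# matched = [match_rms(capture, target_rms=0.1) for capture in captures]
```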
If you read farther down the page you will find this, which, in a more standard scientific study, might have been the hypothesis: “What is more important I believe is the direct recording of the analogue output derived from 16/44 playback into the same high-resolution ADC at 24/96 is capable of capturing all the significant audible differences between the devices: or at least hopefully enough for your playback system to reproduce with high fidelity.”
So, from this sentence it appears that the goal of the test is to show that differences exist. But can that really happen with this test? I see several areas where a critic could take exception to this test, and where additional variables and barriers could skew the results.
While this test is blind (at least until April 30th, when the results are revealed), it is not an A/B/X test, since there is no X, merely A/B/C/D, which is not wrong, merely different from A/B/X. Here the emphasis seems to be on hearing differences rather than not hearing them, which is diametrically opposed to the usual A/B/X bias.
For me the main issue in this test was the use of 16/44 files; not because they are intrinsically inferior to higher-resolution versions (which can be the case) but because the chain began with 16/44 files that were output to analog before being re-digitized at 24/96. I suppose the rationale for using 16/44 files is that they are all Redbook standard and playable on everything, including CD players, but it could be argued that using the 16/44 CD tracks introduces issues due to playback errors compared to the same data on a digital file. Since this is a blind test, it was not noted whether the original source was a digital file or a CD, since that would give away whether the device was a CD player or a DAC.
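For what it’s worth, the capture side of that chain is not the weak link. A quick back-of-the-envelope check (my illustration, using the standard figure of roughly 6.02 dB of dynamic range per bit for an ideal quantizer) shows the 24/96 recording has far more headroom than the 16/44 source it is capturing:

```python
import math

def ideal_dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of an ideal n-bit PCM quantizer:
    20 * log10(2^n), or roughly 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

print(f"16-bit source:  {ideal_dynamic_range_db(16):.1f} dB")  # ~96.3 dB
print(f"24-bit capture: {ideal_dynamic_range_db(24):.1f} dB")  # ~144.5 dB
```

So any audible artifact of 16/44 playback, disc read errors included, should sit well above the 24/96 capture’s noise floor, which supports the hypothesis quoted above.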
Obviously, a big part of this test’s usefulness and validity will depend on the quality of the gear and ears of those individuals who take the test and report their findings. Unlike many A/B tests I’ve seen, this one has at least a chance of arriving at something other than “no differences heard” due to the nature of the test – its original hypothesis is that there are sonic differences…
My initial response to this online test was decidedly negative, but upon analyzing the test more closely I found it does have value – it can certainly help listeners determine whether their system is sufficiently resolving to hear differences between the analog outputs of different “real world” digital sources. So, as I’ve suggested in other posts, take the test…
Here’s a link to the test: click here
I think the link is broken FYI.
That’s better. Thanks.
Great. Thanks for letting folks know of the test…
In the audiophile world, where reviews and opinions are plentiful, there are precious few tests that try to gather the opinions and experiences of the many. I have not looked at the test results in great detail at this point, but I do see that I have respondents of all ages, both speaker and headphone listeners, and what I would certainly consider to be excellent systems – many of which cost well over five figures.
I’ll close off the test by April 30th, so to anyone interested in participating and contributing to the data set, I would be most appreciative.
Thumbs up for the test and this article. What I’m pining for, though, is a test procedure that focuses not only on whether there is a difference, but whether the difference is worth caring about. It always seems to be assumed that every little difference (whether verified and demonstrable or merely perceived) is important and worth paying for. But I would argue that our attention is better spent on exploring and understanding music than on ferreting out subtle differences that alter the sound in a way that we can’t know is right or wrong.
Good point, Brent.
This is why, in the blind test, I specifically asked whether listeners – at least those who heard a difference – could tell me the magnitude of the difference heard and whether they would spend money to achieve it.