By now, you probably know what a "double-blind" test is. In a "single-blind" test like the famous "Pepsi Challenge", a tester who knows which sample is which asks a person who doesn't which of two soda pops he likes better; in a double-blind test, neither the tester nor the "testee" has any clue as to which glass holds the Pepsi.
The claimed benefit of doing it that way is greater objectivity: If the tester doesn't know what's in which glass, he can't bias the test, whether purposely, through body language, or in some other way.
Sometimes it even works, and in many fields of science, where one single factor can be effectively identified and isolated, double-blind testing is the preferred method and really can make for more reliable data. In audio, though, except under very special circumstances, that's simply not the case.
Unlike a whole lot of other hobbies and pastimes, audio is either plagued by or -- if you like that sort of thing -- blessed with constant dispute. In baseball, although the umpire can be wrong about some things, you can always tell for certain whether the batter actually hit the ball and whether the fielder actually caught it. Those are indisputable, as are the measured speed and quarter-mile elapsed time of a dragster. And in the case of a "photo finish", whether of cars or horses or anything else, you can accurately record and measure the results, and you can ALWAYS rely on what the measurements say. The same is true for photography, another favorite hobby of many audiophiles: nobody would ever even consider arguing that different lenses, different film, or a different pixel count (for digital photography) don't make a difference. In audio, though, argument about the basics has been going on ever since the hobby started, and there's no sign that it's ever going to stop.
The reason for this ISN'T that Hi-Fi Crazies are inherently irrational or that their perceptive senses are either flawed or easily fooled. It's that, unlike all of those other fields, disciplines, or pastimes, in audio it's almost impossible to isolate any one single criterion to test; even if that could be done, it would still be almost impossible to get any two or more people to opine on only that one thing; and even if that COULD happen, the odds are that their opinions would have little value.
To show you what I mean, let's first take a look at a double-blind audio test that might actually work. (Remember, I DID say that "under very special circumstances" that could happen.) For our test material, let's play a 2 kHz sine wave in 20-second samples, repeating the samples at two volume levels precisely 2 dB apart. If all of our testees listen to the same sample at the same time, through identical headphones; if the sample volume levels are selected by a computer controlled by a random-number generator; and if we record them for proof of sequence but tell neither the subjects nor the researchers which volume level is actually playing at any given time, we will have what ought to satisfy even the most rigorous researcher as a genuine double-blind test.
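The blinding step described above can be sketched in a few lines of Python. This is only an illustration of the idea, not anything from the article itself: the number of trials, the reference levels, and the names here are all assumptions.

```python
import random

# Hypothetical sketch of the level selector described above: for each
# 20-second sample, a computer picks one of two playback levels exactly
# 2 dB apart, at random, and keeps a log so the true sequence can be
# verified after the test. Neither the testers nor the listeners see
# this log while the test is running.

LEVELS_DB = (-2.0, 0.0)  # two presentation levels, 2 dB apart (assumed reference)
NUM_TRIALS = 40          # assumed number of samples played

def generate_blind_sequence(num_trials=NUM_TRIALS, seed=None):
    """Return the randomized list of levels actually played, in order."""
    rng = random.Random(seed)
    return [rng.choice(LEVELS_DB) for _ in range(num_trials)]

# The sealed log: recorded for proof of sequence, opened only for scoring.
sequence = generate_blind_sequence(seed=42)
```

Because the sequence comes from the random-number generator rather than from anyone in the room, neither party can know, or signal, which level is playing.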
If the purpose of our testing is to determine whether people can identify 2 dB differences in volume, and if each time a sample is played the testees are asked to state whether it is at the greater or the lesser volume level, then comparing what they tell us they hear with what was actually presented should be enough to let us reach some reasonably "high confidence" conclusions. The reason is that we will have isolated one single changing factor -- the volume level -- and held every other element of the test absolutely constant: all of the testees will have listened to exactly the same test material at exactly the same time, through exactly the same sound sources, spaced exactly the same distance from their ears, in exactly the same acoustical environment (because the headphones will effectively eliminate any room acoustics), and, other than the relative volume levels, there will have been no variable factors at all.
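The scoring step above -- comparing each listener's "louder" or "softer" answers against the logged sequence -- can also be sketched briefly. Again, the function name, the example log, and the answers are illustrative assumptions, not data from any real test.

```python
# Hypothetical scoring sketch: each listener answers "louder" or
# "softer" for every sample; we compare the answers with the sealed
# log of levels actually played and report the fraction correct.

def score_listener(actual_levels, responses, louder_db=0.0):
    """Return the fraction of trials the listener identified correctly."""
    correct = sum(
        1 for level, answer in zip(actual_levels, responses)
        if (answer == "louder") == (level == louder_db)
    )
    return correct / len(actual_levels)

# Example: a listener who gets every trial right over a short log.
log = [0.0, -2.0, -2.0, 0.0]
answers = ["louder", "softer", "softer", "louder"]
print(score_listener(log, answers))  # 1.0
```

A score well above the 0.5 you'd expect from guessing, across enough trials, is what would let the researchers claim with high confidence that the 2 dB difference is audible.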