In the first two parts of this series, I told you how your ears are able to “pinpoint” the direction of, and distance to, a sound source by using the differences in arrival time and phase between your two ears: unless the sound is directly in front of you, it reaches first one ear and then the other.
I also told you that, because both of your ears, and both of the microphones in a (minimalist) recording system, will hear all of the sound coming at you from wherever its source might be (instead of the right ear or mic “hearing” only right-side sound and the left ear or mic hearing only left-side sound), there will be delays – again, except for sounds originating directly in front of (or behind) you – in both the arrival time and the phase of the sound at the two ears or two microphones.

The time delay can be calculated by dividing the effective difference in distance between the sound source and each of the two ears or microphones by the speed of sound (nominally 1,125 feet [343 meters] per second). The phase difference can be calculated by dividing that same difference in distance by the wavelength of the sound’s frequency and multiplying the quotient by 360 (degrees per wavelength) to get the number of degrees of phase difference upon arrival at the two ears or the two microphones.

For example, if two microphones are used, the effective difference in distance between them (how much farther it is from the source to one than to the other) is eight feet, and we’re talking about a single-frequency tone of 1 kHz (the wavelength of which is 13.5″ [34.3 cm]), the time delay will be 0.0071 seconds (8′ ÷ 1,125′ per second = 0.0071 seconds) and the phase difference will be 2,560 degrees (8′ x 12″ per foot = 96″; 96″ ÷ 13.5″ wavelength = 7.111 wavelengths; 7.111 wavelengths x 360° = 2,560° of phase difference, which can also be stated as 7 complete waveforms plus 40° of difference).
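For readers who like to check the arithmetic themselves, here is a minimal sketch of the two calculations above (the function names are my own, and the round figures – 1,125 ft/s for the speed of sound, an eight-foot path difference, a 1 kHz tone – are just the ones from the example):

```python
SPEED_OF_SOUND_FT_S = 1125.0  # nominal speed of sound, feet per second

def arrival_delay_s(path_difference_ft):
    """Time delay: path-length difference divided by the speed of sound."""
    return path_difference_ft / SPEED_OF_SOUND_FT_S

def phase_difference_deg(path_difference_ft, frequency_hz):
    """Phase difference: path difference divided by wavelength, times 360 degrees."""
    wavelength_ft = SPEED_OF_SOUND_FT_S / frequency_hz  # 1 kHz -> 1.125 ft (13.5 in)
    return (path_difference_ft / wavelength_ft) * 360.0

delay = arrival_delay_s(8.0)               # about 0.0071 seconds
phase = phase_difference_deg(8.0, 1000.0)  # 2,560 degrees (7 full cycles plus 40)
print(f"delay = {delay:.4f} s, phase = {phase:.0f} degrees")
```

Note that the same path difference produces a different phase difference at every frequency, which is part of why spaced-mic stereo imaging is so frequency-dependent.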
Finally, I said that because the spacing between your two ears (about 6″) and the spacing between two stereo microphones are never likely to be precisely the same, and because differences in distance translate into differences in arrival time, and differences in time into differences in phase, the sound that you would hear, even if you were exactly centered between two classically placed (“A-B”) microphones, could never be the same as what the microphones would “hear”, and thus no recording from those microphones could ever sound exactly the same as what you would hear in person.
Of course there are many different microphone techniques that can be used in stereo recording, some of which may even provide mic-to-mic spacing similar to the spacing of our ears. Even so, the simple fact that stereo is intended to be played through speakers, and that the speakers will normally be placed very much farther apart than our ears are, still means that what we hear from the recording will not even be close to what we would have heard if we had been there at the recording session, listening “live”, because the microphones and the speakers are at different distances from each other.
“Aha!” you say. “But what if the (recording) microphones and the (playback) speakers are exactly the same distance apart? Wouldn’t that work perfectly to record and reproduce the music or whatever else we might want to listen to?”
Sorry; good try, but no. Let’s suppose that we’re talking about a barbershop quartet, and that the four guys singing are positioned (just to make it all as easy as possible) facing forward in a straight line, so that the mouth of the guy on the left is exactly eight feet from the mouth of the guy on the right. Two mics are used, also exactly eight feet apart, one exactly four feet in front of the mouth of the guy on the left and the other exactly four feet in front of the mouth of the guy on the right. And to make it all consistent, let’s suppose that a listener at the session is seated exactly four feet in front of the singers and exactly between the two microphones; that once it’s all been recorded, the playback speakers are exactly eight feet apart; and that a listener to the playback is exactly between the speakers, at a distance back from them such that the sound of each speaker arrives at both of his ears at exactly the same time.
Given absolutely perfect equipment and absolutely perfect acoustics, both in the studio and in the playback listening room, will the studio listener and the playback listener hear the same thing?
No, of course not. If the mics are eight feet apart and the listener’s ears are six inches apart, then although the sound of the left singer will have exactly the same spatial relationship to the left microphone as the right singer’s will to the right microphone, everything else will be different. The time delays from the right singer to the left microphone, and from there back out to the 6″-spaced ears of the playback listener (or vice-versa), will still be greatly different from what they would be for the 6″-spaced ears of the studio listener, and the stereo effect, while enjoyable, still won’t be accurate.
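The difference in scale is easy to see with a little coordinate geometry. Here is a rough sketch, using hypothetical coordinates (in feet) matching the layout described above, that compares how far apart in time the right singer’s voice reaches the two mics versus the two ears of the studio listener:

```python
import math

SPEED_OF_SOUND_FT_S = 1125.0  # nominal speed of sound, feet per second

def delay_s(source, receiver):
    """Straight-line propagation delay between two (x, y) points, in seconds."""
    return math.dist(source, receiver) / SPEED_OF_SOUND_FT_S

# Layout from the example: singers along y = 0, mics and listener along y = 4.
right_singer = (8.0, 0.0)
left_mic, right_mic = (0.0, 4.0), (8.0, 4.0)
# Studio listener centered 4 ft in front, ears 6 inches (0.5 ft) apart.
left_ear, right_ear = (3.75, 4.0), (4.25, 4.0)

# The right singer reaches the two mics roughly 4.4 ms apart...
mic_gap = delay_s(right_singer, left_mic) - delay_s(right_singer, right_mic)
# ...but reaches the studio listener's two ears only about 0.3 ms apart.
ear_gap = delay_s(right_singer, left_ear) - delay_s(right_singer, right_ear)
print(f"mic delay gap: {mic_gap * 1000:.2f} ms, ear delay gap: {ear_gap * 1000:.2f} ms")
```

The eight-foot mic spacing bakes in inter-channel delays more than ten times larger than anything a pair of six-inch-spaced ears would ever produce on its own, and no choice of speaker spacing can undo that.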
There is one case, though, where the spacing of the microphones, the spacing of the studio listener’s ears, and the spacing of the playback transducers are all essentially identical, and that’s binaural sound: the microphones are mounted in a human-like dummy head, the same distance apart as a real listener’s ears, placed in the studio or other recording location at the same spot a listening person might occupy, and the recording is played back through headphones spaced, obviously enough, the same distance apart as their listener’s ears.
That gets you sound recorded and played back exactly as a real listener would hear it “live”, and with exactly the same “front-back-side-over-and-under” imaging and precise sense of ambient space and “soundstage” as we experience in real life. It’s also right in line with the “less is more” philosophy common to “purist” audiophiles: While editing and even equalization can certainly be done, multi-mic’ing and multi-channel recording – both popular (and possibly even required) for stereo recording — simply aren’t possible in binaural, and the need for “mastering” and even for a “Mastering Engineer”, both standard for most stereo recordings, may be eliminated entirely.
The fact that mastering for anything other than purely aesthetic purposes exists at all may be down to a weakness of stereo as compared to binaural sound. I’ll tell you more about it next time.
See you then.