
Are Older Recordings Really Better?

Steven Stone looks at some of the whys behind good and bad recording quality…


Steve Guttenberg wrote some time ago in one of his daily blogs, “Take any well recorded music from the 1970s on forward, it sounds as good or better than a similar contemporary recording. The ‘Stones’ Beggars Banquet and Bigger Bang for example. But almost all 1970s vintage films look and sound rather drab next to their contemporary equivalents. My point is, sound quality never really improved over the decades, and sure, today’s over-compressed music is partly to blame. So, compare a great 1950s recording, i.e. Belafonte at Carnegie Hall 1959, and any equivalent from the 1990s. Look at 1950s films, they look old…”

I won’t dispute Steve’s claim that many ’70s recordings, both rock and classical, sounded worse than some made in earlier times. But blaming those recording-quality gaffes on the gear itself would be a mistake. The reason sound recording did not improve was far more a matter of human priorities than of hardware.

But before I delve into the multiplicity of reasons recordings can suck, let’s make sure we’re comparing apples to apples. In the ’50s ALL recording chains were simple, minimalist affairs. Mercury’s famous “Living Presence” series used three channels for all their recordings, one more channel than most of their competition, but that additional channel isn’t what made their recordings sound so good – it was that Bob Fine (the principal engineer) and Wilma Cozart (the producer) knew how to use those three channels for maximum fidelity. When multi-channel recording became an option (thank you, Les Paul), most classical labels added microphones and subtracted recording time, figuring they could fix any channel or instrument imbalances “in the mix” later on. The result was always worse sound.


Why? Because the more microphones and channels you use, the more chance that human error will invade the recording process. And I’m not talking gross human error, such as “they forgot to turn on that last mic,” but small, subtle, yet pervasive systematic errors that degrade the overall sonics. Phase alterations, piled one on top of another as each microphone adds its own unique viewpoint of the sonic event, confuse our natural, built-in phase detectors to the point where the result becomes a homogeneous wall of sound.
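To put a rough number on that, here’s a small Python sketch of the simplest case – one source picked up by two mics whose arrival times differ by about a millisecond. The sample rate, the delay, and the broadband “music” are my own assumptions for illustration, not measurements from any real session, but the summed mix shows the periodic spectral notches (comb filtering) that multiply as more mics get piled on.

```python
import numpy as np

fs = 48_000                       # sample rate in Hz (assumed)
delay_ms = 1.0                    # assumed extra path to the second mic, roughly 34 cm of air
delay_samples = int(fs * delay_ms / 1000)

rng = np.random.default_rng(0)
source = rng.standard_normal(fs)  # one second of broadband noise standing in for "music"

mic_a = source
mic_b = np.concatenate([np.zeros(delay_samples), source[:-delay_samples]])  # same source, arriving later
mix = mic_a + mic_b               # the naive two-mic sum

# Compare spectra: the mix has periodic notches the single mic does not.
spectrum_single = np.abs(np.fft.rfft(mic_a))
spectrum_mix = np.abs(np.fft.rfft(mix))
freqs = np.fft.rfftfreq(len(source), d=1 / fs)

# The first cancellation lands where the delay equals half a period.
first_notch = fs / (2 * delay_samples)   # ~500 Hz for a 1 ms offset
print(f"First cancellation notch near {first_notch:.0f} Hz")
```

Plot the two spectra and the single-mic curve is flat while the mix dips every kilohertz or so; every additional mic adds another set of those dips at its own spacing.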

Compare a modern recording made with a minimalist technique – no more than two mics, spaced for maximum phase coherence – with a vintage one, and the newer one will win. Listen to a 2L or Chesky classical recording. They are at least the sonic equals of any of the RCA or Mercury “golden era” recordings. They use similar techniques, but with modern, lower-distortion gear, and the results bear out that good technique combined with good gear equals a good recording.


For many years J. Gordon Holt and I made two-channel recordings of symphony orchestra concerts using a single stereo pair. Not only do they sound very convincing in terms of three-dimensional imaging in stereo, but because the phase information is intact and not confused by multiple sonic viewpoints, they also decode nicely to Trifield, derived surround, and Ambisonics.

What about rock and pop music? In the early days, when rock and roll was a glimmer in Bill Haley’s (or Ike Turner’s) eye, recording engineers tried to use the same techniques they used for classical – minimal mics and tracks. And this worked well for all-acoustic groups such as Ian & Sylvia or the Jim Kweskin Jug Band, but the minute the drum kits started growing and the instruments got louder, getting everything in balance without bleed from one channel or instrument to another became a problem. The solution was close-mic’ing, multiple channels, and, in the early days of four-channel pro recorders, “bouncing” – combining two tracks into one to free up a track for another instrument. Obviously, much of the phase information gets lost when you close-mic and multi-mic. And while you can add an artificial ambient field over the whole recording and pan your mono channels so it sounds more cohesive, it will never be as phase-correct as a minimally mic’d recording.


I was surprised that Steve chose the Rolling Stones’ Beggars Banquet as an example of a “good” classic rock recording, because from my point of view this particular release personifies some of early rock recording’s worst issues. First, there’s wow and flutter. Listen to Keef’s acoustic guitar track on “Salt of the Earth.” If your system is good enough you will hear what an acoustic guitar recorded onto a cassette deck sounds like. There is an “underwater,” fluttery quality to the acoustic’s pitch caused by the deck’s speed variations. Also, if you compare the original pressings with the latest re-releases (and remixes) you will hear some parts that got buried in the original mix. And while some may argue that the original mix is by far the best, its appeal is not due to its clarity but to its murky mystery…
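For the curious, here’s a hedged little Python sketch of what wow and flutter do to pitch – a steady 220 Hz note played back through a transport whose speed wobbles. The flutter rate and depth are assumptions chosen purely for illustration, not measurements of Keef’s cassette deck, but the few-cents pitch wobble they produce is exactly that “underwater” quality.

```python
import numpy as np

fs = 48_000               # playback sample rate
duration = 2.0
t = np.arange(int(fs * duration)) / fs

flutter_rate_hz = 6.0     # assumed wobble rate of the tape transport
flutter_depth = 0.003     # assumed +/-0.3% speed variation

# Instantaneous playback speed, and the resulting phase of a steady 220 Hz note
# played through that wobbling transport.
speed = 1.0 + flutter_depth * np.sin(2 * np.pi * flutter_rate_hz * t)
phase = 2 * np.pi * 220.0 * np.cumsum(speed) / fs
note = np.sin(phase)      # the "fluttery" guitar stand-in

# Peak pitch deviation in cents: 1200 * log2 of the maximum speed ratio.
max_cents = 1200 * np.log2(1 + flutter_depth)
print(f"Peak pitch deviation: about {max_cents:.1f} cents, "
      f"wobbling {flutter_rate_hz:.0f} times a second")
```

A deviation of a few cents, several times a second, is well within what a good system (and a good ear) can resolve on a solo acoustic instrument.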

And why then did the Beatles’ classic recordings sound so good? Three words – Sir George Martin. The Beatles’ recordings were made by well-trained engineers who knew how to use their equipment to its fullest capabilities for maximum fidelity. Also, even the Beatles, in their early days, could not tell the engineers what to do. Most of those engineers began as “tea boys” – gofers who watched and learned before they were allowed to touch a dial. As budgets and egos grew, performers and producers became the dominant decision-makers on fidelity. And since your average ’70s and ’80s producer knew next to nothing about good sound, the results were, at times, just short of dismal.


On many sessions it got to the point where the audio engineers were there only to make sure the stuff worked, not to monitor fidelity or shape the recording process. The audio engineer became merely an insurance policy, with little say as to the hows, wheres, and whys of the process… and audio fidelity suffered as a result.

Comprehending a piece of gear’s capabilities and functions is often not an intuitive process. You have to understand how something works to get the best out of it. As recording studios became more and more complex during the ’70s and ’80s, the number of engineers and producers who bothered to learn – or cared – how everything worked, so they could use the gear for maximum fidelity, diminished. In some famous sessions the soberest person in the recording studio was probably the janitor…


Even nowadays, on modern pop recordings it’s pretty easy to tell who knows their gear and who’s just randomly flipping switches. The well-recorded stuff sounds good – clear, dynamic, with each instrument or voice given its own special spot in an aurally articulate mix. While things haven’t reverted to the point where the audio engineers are in charge, on many pop recordings you can hear that a lot of time and money went into the production values and that sound was not an afterthought.

The final sonic quality of a recording is not a function of when it was made, nor, to a lesser extent, of how it was made, but of who made it and how much attention was paid to the sound quality of the release.

A skilled, sensitive recording engineer can do wonders with minimal gear, while, conversely, in the wrong hands even state-of-the-art gear can turn any recording into a disposable commodity…
