
Why 44.1/16?

Steven Stone debunks the debunkers of high-resolution audio


Lately I’ve seen a lot of posts from the usual suspects about 44.1/16 being more than enough bandwidth for humans to completely enjoy music. The reasons given are ones I’ve seen 1000 times before, yet they are no more correct now than when originally put forward. All these arguments are based on the faulty logic that 44.1/16 has been “scientifically” proven to be more than adequate for the average person listening to music. The plain fact is that 44.1/16 was never chosen because it was sonically transparent or optimal. It was chosen because it was technically feasible.

The following is the most elegant description I've found of why 44.1 kHz was chosen as the sampling rate, courtesy of Richard Murison in issue 3 of Copper Magazine: "The earliest implementations of digital audio used VCR cassette tapes and transports. It was the only available technology that could manage the data and bandwidth requirements. It was cost-effective to have these VCRs run at the same speeds as they would run at for video applications. At the time there were 2 dominant video standards, PAL and NTSC. PAL's video format used 625 lines at 50Hz refresh rate, and NTSC's used 525 lines at 60Hz refresh rate. If the transport would write 3 audio fields in place of every video line, the required audio sample rate for PAL (set to use 588 active lines out of 625) would be (588/2) x 50 x 3 = 44,100, and for NTSC (set to use 490 active lines out of 525) would be (490/2) x 60 x 3 = 44,100. Either format of VCR transport could therefore be used as digital audio transports with minimal modification."
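The arithmetic in the quoted passage is easy to check. This minimal Python sketch (the function name `audio_rate` is mine, not from the source) reproduces both the PAL and NTSC figures:

```python
def audio_rate(active_lines, field_rate_hz, samples_per_line=3):
    """Audio sample rate when samples are stored in place of video lines.

    Each video field carries half the active lines, and the transport
    writes 3 audio samples per line, as described in the quoted passage.
    """
    return (active_lines // 2) * field_rate_hz * samples_per_line

pal = audio_rate(588, 50)   # (588/2) x 50 x 3
ntsc = audio_rate(490, 60)  # (490/2) x 60 x 3
print(pal, ntsc)  # both work out to 44100
```

Both video standards, despite their different line counts and refresh rates, converge on exactly 44,100 samples per second, which is why the same rate could serve transports of either format.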


Instead of analyzing the capabilities of the human ear/brain and basing a sampling rate on that research, the 44.1 standard was based on nothing more profound than "What's the best we can do, right now?" It was then justified by tests conducted after technical limitations had already determined that 44.1 was what they needed to use. Rather than a "gold standard" for digital inaudibility, 44.1 was (and still is) merely what was technologically convenient.

Nowadays we can do better.

But there are some folks who write about audio who would have you believe that any need for a higher sampling rate and bit depth is merely a form of snake oil designed to induce audiophiles to spend money on stuff that will make no audible difference. Of course, they base all their arguments on the false premise that 44.1 is "good enough" because it was grounded in scientific research. That research was done AFTER the standard was formulated, and its primary purpose was to justify the establishment and acceptance of an arbitrary standard.

In retrospect, it's unfortunate that audibility tests on 44.1 gave researchers a positive (for them) result. If the tests had been more rigorous, perhaps the whole digital rollout would have been postponed for a couple of years until the technology needed to produce better, higher-resolution DACs was in place.

In 2016 there is no earthly reason, besides historical precedent, why we should still be saddled with 44.1/16 as any kind of benchmark for audio quality. Perhaps it never should have been the standard. As of today there is no logical reason for anyone who cares about audio quality to argue for retaining it as anything but a MINIMUM standard.
