
MQA – It’s About Time, Not Frequency!

Andy Schaub explains what he thinks MQA does better…


MQA’s claim to fame is that it can fold a high-resolution music stream into a 24-bit, 44.1 or 48kHz pipeline. That’s great and quite an accomplishment, particularly since, as I previously mentioned, it is not actually lossy given a full end-to-end MQA implementation. However, the real magic of MQA is that it preserves the integrity of the time domain in a way far closer to analog than virtually any other method of sampling analog signals into digital data streams and reconstructing them. This is important because, as I’ve said before, human hearing is extremely sensitive to distortions in the time domain, distortions that simply make the recording sound less authentic.

So how does MQA do what it does? That’s a very good question, and if you look online for answers you can only find hints and guesses. However, it’s not difficult to deduce how it probably works given the currently available information about MQA.

Being an end-to-end solution, MQA takes into account the sound of the ADC and the DAC and uses different kinds of “filters” to compensate for that gear’s sound; but they aren’t really filters in the conventional sense, because there isn’t necessarily any bandwidth limiting in an MQA ADC or DAC.

Bob Stuart, the cofounder of MQA, is of the opinion that aliasing isn’t such a bad thing and is far less audible than the time domain distortion that lowpass or brick wall antialiasing filters create. MQA may, though, use some adaptive bandwidth filtering based on the musical content as it varies over time, which is similar to how some video compression algorithms work. The application is far more obvious in video compression, where the information is inherently organized into frames with a lot of overlap between any two adjacent frames; in video you can often reduce a given frame to the difference between itself and the previous frame, as in the sketch below.
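
As a rough illustration of that inter-frame idea (a generic sketch of delta encoding in Python, not anything taken from the MQA format or any particular video codec), an encoder can store a frame as its difference from the previous one:

```python
import numpy as np

# Toy "frames": made-up pixel values for two adjacent video frames.
prev_frame = np.array([10, 10, 12, 200, 200, 11], dtype=np.int16)
next_frame = np.array([10, 11, 12, 205, 200, 11], dtype=np.int16)

# The delta frame is mostly zeros, so it compresses far better than
# storing the second frame in full.
delta = next_frame - prev_frame
print(delta)  # [0 1 0 5 0 0]

# The decoder rebuilds the frame exactly from the previous frame plus
# the delta (lossless in this toy example).
reconstructed = prev_frame + delta
assert np.array_equal(reconstructed, next_frame)
```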

Analog audio is continuous, so you can’t really use the video approach, but you can adjust the necessary bandwidth for sampling based on the amplitude and frequency content of the signal, minimizing time or phase smear if you apply any filtering at all (see the sketch below). It’s similar in some ways to how many modern digital cameras, apart from those made by Canon, have eliminated optical low pass filters: the slight blur those filters introduce can’t be fully repaired by sharpening algorithms, so the occasional aliasing artifact is accepted as the lesser evil.
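
Here is one hedged sketch of what content-adaptive bandwidth estimation could look like in principle; the block size, window, 60dB threshold, and test signal are all assumptions made for illustration, not MQA’s published method:

```python
import numpy as np

fs = 96_000                           # sample rate (assumed)
t = np.arange(4096) / fs              # one analysis block
# Made-up block: a loud 1 kHz tone plus a quiet 20 kHz component.
block = np.sin(2 * np.pi * 1_000 * t) + 0.01 * np.sin(2 * np.pi * 20_000 * t)

# Look at the block's spectrum and find the highest frequency that
# still carries meaningful energy.
spectrum = np.abs(np.fft.rfft(block * np.hanning(len(block))))
freqs = np.fft.rfftfreq(len(block), d=1 / fs)

threshold = spectrum.max() * 10 ** (-60 / 20)   # 60 dB below the peak
occupied = freqs[spectrum > threshold]
print(f"highest significant frequency ~ {occupied.max():.0f} Hz")

# Any filtering (if applied at all) could then be tailored per block,
# rather than brick-walled at one fixed cutoff for the whole recording.
```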

In the realm of the Shannon-Nyquist theorem, lowpass filtering in the DAC is mathematically equivalent to reconstructive interpolation and far simpler to implement. But that equivalence only holds in the frequency domain, not the time domain! The math is based on the reproduction of a pure sine wave or test tone, not a complex, musical signal.
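
For the curious, the reconstruction the theorem describes can be written directly as a sum of sinc pulses (the Whittaker-Shannon interpolation formula). This toy Python sketch, with made-up sample values, shows that “ideal lowpass filtering” and “sinc interpolation” are the same operation:

```python
import numpy as np

fs = 8                                   # toy sample rate, 8 samples/second
n = np.arange(16)                        # sample indices
x = np.sin(2 * np.pi * 1.0 * n / fs)     # samples of a 1 Hz tone

# Dense time axis for the reconstructed ("interpolated") waveform.
t = np.linspace(0, (len(n) - 1) / fs, 500)

# Whittaker-Shannon: x(t) = sum over n of x[n] * sinc(fs*t - n).
# Each sample becomes a scaled sinc pulse; their sum is exactly what an
# ideal (brick wall) lowpass reconstruction filter would produce.
reconstructed = np.array([np.sum(x * np.sinc(fs * ti - n)) for ti in t])

# The finite, 16-term sum deviates most near the edges; away from the
# boundaries the summed sinc pulses follow the underlying 1 Hz tone.
mid = (t > 0.5) & (t < 1.4)
print(np.max(np.abs(reconstructed[mid] - np.sin(2 * np.pi * 1.0 * t[mid]))))
```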

MQA takes a more modern and mathematically sophisticated approach to reconstructing the waveform: a kind of geometric interpolation based on B-splines, smooth curves defined by sparse control points, which can recreate a complex curve or signal more faithfully without creating distortion in the time domain. It is not a filter that happens to be mathematically equivalent to interpolation in the frequency domain alone and only under limited conditions; MQA delivers true, sophisticated, reconstructive interpolation. One could regard this as a filter in the sense that the interpolation can be customized based on the sound of a specific DAC, but it’s not actually a filter.
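
MQA has not published its actual reconstruction kernels, so the following is only a generic sketch of the idea using SciPy: fit a cubic B-spline through sparse sample points and evaluate it on a dense grid, in contrast to the sinc-based reconstruction sketched above. The sample data are the same made-up 1 Hz tone.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

fs = 8
n = np.arange(16)
x = np.sin(2 * np.pi * 1.0 * n / fs)          # sparse sample points

# Cubic B-spline passing through the samples. Unlike the sinc kernel,
# each B-spline basis function is short (compact support), so a local
# wiggle in the data never "rings" far away in time.
spline = make_interp_spline(n / fs, x, k=3)

t = np.linspace(0, (len(n) - 1) / fs, 500)
reconstructed = spline(t)
print(np.max(np.abs(reconstructed - np.sin(2 * np.pi * 1.0 * t))))
```

The trade-off the article points to is visible in the kernels themselves: the spline basis is localized in time, whereas the ideal sinc kernel rings before and after every transient.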

The math behind this MQA “trick” can get very complex and I won’t try to explain it here, but B-spline interpolation is comparatively modern, while the foundations of the Shannon-Nyquist theorem go back to Nyquist’s 1928 paper, a long time before Sony and Philips developed the Compact Disc.

Here’s an exaggerated analogy… If you’re talking on the phone and the higher frequencies suddenly get limited, you can still understand what the other person is saying. But if, when you talk, you hear an echo of your own voice, no matter how brief it is, you will suddenly become almost unable to complete a sentence due to the echo. That’s how sensitive human hearing is to time distortion.

The claim that, because of the integrity of the time domain, MQA files will sound more natural and appealing even if you play them without MQA unfolding and decoding may be true with some tracks (depending on their level of MQA processing) and with some non-MQA DACs that don’t use brick wall filtering, or any explicit filtering at all, such as the 2.0 version of the Vinnie Rossi LIO DAC and all Audio Note DACs. On that kind of DAC, the preservation of the time domain can, and likely will, sound better than what a brick wall antialiasing filter does to the signal.

Some companies that develop their own DAC firmware, such as dCS, use FFTs (Fast Fourier Transforms) to create visual displays of an audio file to help fine-tune their approach or algorithms. But those displays only describe information in the frequency domain! So while two waveforms can appear identical to one another on an oscilloscope or through FFT-based visualization, you only see the overall waveform or envelope of the sound, which is actually many individual waveforms combined. If you could see all of those individual parts, they would stay time-aligned with an approach like MQA; with brick wall filtering based on Shannon-Nyquist, the individual frequencies become temporally unaligned, shifted in time with respect to one another, which we hear as a smearing of the sound.
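
A small, self-contained demonstration of that point, using made-up test tones: two signals whose components differ only in their relative timing produce the same magnitude spectrum, so an FFT magnitude display cannot tell them apart.

```python
import numpy as np

fs = 48_000
t = np.arange(4800) / fs     # 0.1 s of signal

# Same two components (1 kHz and 3 kHz) at the same levels; in the
# second signal the 3 kHz component is shifted in time (phase).
aligned = np.sin(2 * np.pi * 1_000 * t) + 0.5 * np.sin(2 * np.pi * 3_000 * t)
shifted = np.sin(2 * np.pi * 1_000 * t) + 0.5 * np.sin(2 * np.pi * 3_000 * t + np.pi / 2)

mag_aligned = np.abs(np.fft.rfft(aligned))
mag_shifted = np.abs(np.fft.rfft(shifted))

print(np.allclose(mag_aligned, mag_shifted, atol=1e-6))  # True: magnitudes match
print(np.allclose(aligned, shifted))                     # False: the waveforms differ
# The difference lives entirely in the phase, i.e., in the time domain,
# which a magnitude-only display never shows.
```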

There’s really no way to measure such information in the time domain now, at least not to the accuracy of human perception, so we have to rely on our ears, which we really should always do anyway. 

Some DACs, like the LIO and those by Audio Note, won’t cause time smearing at the end of the reproduction chain because they don’t filter the sound; but all currently known ADCs, except those designed for MQA encoding, do use brick wall filtering to prevent aliasing, so the time smear is already in the signal and you can’t get rid of it. That’s why MQA is intended to be an end-to-end solution: to avoid any such filtering, based on the observation that temporal smearing sounds much worse than aliasing. It also supports the observation that even the best, highest-resolution digital formats that use such filtering sound less natural than analog media (vinyl, tape, and FM radio, for example). Mytek is in the late stages of development on their first MQA ADC.

At some point in the relatively near future, the average amount of bandwidth available to the consumer will increase to the point that no compression will be necessary for a 24/192 or higher bitstream; however, given the model that I’ve posited, MQA encoding and decoding would still be advantageous because it would not smear information in the time domain, much like analog media, even without the folding and unfolding or “audio origami.” 

More to follow …
