“Sounds as good as the master”, “Master Quality!”, “Reproduces all the original master’s sound quality” are all statements I’ve heard or read during the last couple of months. And all this buzz about “masters” got me thinking about what is and is not a “master” in my small dusty corner of the audiophile universe.
Since I make recordings, and have for the past 50 years or so, I have what I call “masters” in more formats than you’ve got fingers…½″ tape, ¼″ tape, cassette tape, PCM-Beta, PCM-VHS, DAT, CD-R, micro SD card, hard drive, full-sized SD card, and USB stick…these are the original, first-generation, real-time recordings of musical events. Often, I’ve had to edit these raw recordings. I call those edited versions “production mixes.”
Sometimes the edit is simply removing dead space, but even in that case it is a different file, one generation removed from my master. In the bad old days of cassettes, that difference could be rather substantial…but nowadays a 192/24 or DSD 5.6 production mix made from my original DSD 5.6 recordings is sonically indistinguishable from my master 99.9% of the time (gotta allow for that black swan). But, strictly speaking, these production mixes are not the same as the masters.
Just to muddy the waters a bit, in the pro world the term “masters” is routinely applied to what I call a production mix. So, if these production mixes are indeed indistinguishable from the masters, why not call them “master quality?” Why indeed…
To call a recording “master quality” requires a sonic or technical value judgement by someone. And that raises the question “judged by whom?”
And I think the “judged by whom” issue is what most alienates MQA critics. MQA is short for Master Quality Authenticated, the name of a data compression/reprocessing scheme used by the TIDAL streaming service for its higher-than-Red Book CD recordings.
The “Master Quality” part is determined by the record label that supplies MQA with “masters” for MQA processing. While MQA does process these higher-resolution files, they can’t process what they don’t have. It’s the same for every streaming service – they can’t stream what they haven’t gotten from the record labels. So, the first layer of trust is with the record labels themselves, who can, since they own the rights, do whatever they want.
If “they” say it’s master quality, it’s master quality…maybe…Amazon has further muddied the water by calling anything that is not compressed MP3 “HD” which is short, one assumes, for high definition, which is definitely not what everyone else in the industry considers high resolution…
The second layer of mistrust is with the MQA technical process itself. Some critics question the reason it exists, while others try to duplicate its filters to show how it’s merely another, easily duplicatable, digital filter scheme. All of these are examples of not trusting that MQA is what it claims to be, even though there are “White Papers” by Bob Stuart that demystify the process.
This double layer of mistrust has inspired many critics to make some strong claims about the truthfulness of MQA as well as the master-quality claims from major record labels. Some critics have pointed to measurements they believe show that a recording has been upsampled from a lower-rez production mix rather than made from a true high-resolution original master.
Obviously, knowing what we all know about humans, there are very likely some recordings in streaming services’ libraries that are guilty of this sonic offence. But, also knowing what we know about humans, most of the recordings marked “Hi-Rez” in any of the premium streaming services’ libraries are, indeed, high-resolution recordings that were made with all the best intentions for maximum fidelity by the artists and record companies involved.
As I write this I’m streaming Jimi Hendrix playing “Ezy Ryder” from the just-released set Songs For Groovy Children – Fillmore East Live Concerts. It had to be sourced from an analog tape, so by some critics’ standards it technically can’t be a high-rez digital recording no matter what rate it’s been A/D’d at. But it’s a much better sound than most of the folks in the Fillmore heard…I used to go to the Fillmore East…I know how easily the house sound turned to mush…but I digress.
As far as my world is concerned, 2019 delivered more great music to my ears than any year before, much of it in true high-resolution, and occasionally, I’m sure, some upsampled from lower resolution-sources, and I expect that trend to continue into 2020…but for now you’re free to think and argue among yourselves…and, I hope, listen to some music you thoroughly enjoy.
A short comment from Bob Stuart –
As an industry we seem to be plagued with insecure, inadequate or misleading definitions – starting with ‘High-Resolution’. ‘Masters’ is another term that has taken different meanings according to era or position in the supply chain. ‘Master’ has evolved over the era of recorded music; in earlier days there were typically fewer ‘stems’, ‘mixes’ or ‘deliverables’ and fewer chances for tampering between release and the customer.
The main confusion arises at the point where the creative team have finished mixing and preparing a recording into one ‘definitive Master’, which is normally approved directly by the producer/artist. In very broad terms, from the 50s up to the mid-80s, this would be auditioned in the studio and then written to a magnetic tape archive (usually duplicated). In the late 80s and up to 2000, the typical production process used 44.1 kHz at 16 or 24 bits, sometimes with analog mixing or EQ as an intermediate step. Occasionally this final result was stored on analog tape as well as in digital form. Latterly we see a similar process, except the workstations are more powerful and occasionally higher sample rates prevail; but also, we often see many more ‘stems’ and layers.
Nevertheless, at the end of this process, there is a sound in the studio which is considered definitive and the capture of it is a definitive ‘Master’.
By the way, this is the ‘Master’ that MQA seeks because our goal is to deliver the sound from as close to this point in time as can be released – because it is what was heard at the end of the production process and generally approved by key stakeholders. It could be an analog tape (if still playable) or a competent digital archive capture with metadata about that process; it could be a digital tape (same caveat) or a digital file from this moment. The security of this data and quality of associated information varies wildly with the Label – e.g. depending if the content has changed ownership or remained in a good system. Some labels have great archives, sometimes all that remains is a CD….
The term is in fact diluted by the fact that ‘Mastering’ is also a subsequent process that originates in traditional workflow, where the sound had to be changed to ‘match’ it to delivery carriers. A good example is where the recording was to be released on vinyl; here, often, dynamics and/or high frequencies need to be constrained and low-frequency information tamed to be sure that the disc was playable or pressable. And as you suggest, sometimes this involved changes to the mix (e.g. mono version). So, a Mastering Engineer would make an ‘LP Master’ and cut a ‘vinyl Master’ (‘mother’) on a lathe. Incidentally, it was quite normal to have an analog tape of this LP Master and so, as one goes through a Label’s archive, it is important to follow the documentation – we don’t want to encode a 2nd- or 3rd-generation copy with vinyl EQ. A similar process of ‘taming’ was used to make a ‘cassette master’ and later, level-up-shifting or compression to make a ‘CD Master’.
In your article you refer to these derivatives as ‘production mixes’. These days the processes to make a CD Master or vinyl disc continue. But today, digital distribution has complicated matters considerably with up to 50 different deliverables that may either be ‘human-mastered’ or ‘published’ by a scripted export process in a workstation.
In that list of deliverables may be versions at various bitrates exported in AAC, MP3, WMA as well as ‘special versions’ for CD, ‘download’, ‘Mfi’, as well as up- or down-scaled derivatives such as DSD64, DSD128, or 24-bit PCM in both families (e.g. 352.8, 176.4, 88.2, 44.1 kHz and 192, 96, 48 kHz).
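The sample rates in that list are not arbitrary: they fall into two families, each an integer multiple of one of two base rates. A quick illustrative sketch (Python, purely for clarity – not part of any MQA tooling):

```python
# The PCM deliverable rates above belong to two families,
# each a power-of-two multiple of a base rate.
base_44k1 = 44_100  # CD / Red Book family
base_48k = 48_000   # video / broadcast family

family_44k1 = [base_44k1 * m for m in (1, 2, 4, 8)]
family_48k = [base_48k * m for m in (1, 2, 4)]

print(family_44k1)  # [44100, 88200, 176400, 352800]
print(family_48k)   # [48000, 96000, 192000]
```

These are exactly the 44.1/88.2/176.4/352.8 kHz and 48/96/192 kHz rates named in the list above, which is why derivatives within a family involve only integer-ratio resampling.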
Clearly, none of these deliverables can be as pure as the definitive ‘Master’ – album publishing is not lossless, and it is hard to imagine that there can be several definitive versions – they will not sound the same as the source. Plus there are also no guarantees that the data make it all the way through the delivery chain to the end consumer without further processing (if only in the playback equipment, OS and DAC chips).
Nevertheless, there is another complication, in that we (and others) can only distribute from assets which are ‘approved for release’ by the artist/producer/responsible adult.
In modern recordings, where labels or artists are working with MQA during the production process (recording, mixing and mastering), MQA can work with the real definitive Master at point of creation, including completely bypassing the export process by taking the native 32b-floating-point or DSD128/256 directly from the workstation – doing so gives real purity in sound and avoids uncertain provenance. Everything downstream is degraded to a greater or lesser extent. Of course, sometimes we have to use the ‘best version available’ but progress is steady.
So, to your question ‘judged by whom’, our answer is simple. We have close relationships with the archive and supply teams in the labels, and we have converged on process and workflow to give the very best shot at encoding the ‘definitive Master’. We include considerable QA analysis in the encoder that tries to catch technical errors including up/down-sampling or scaling, watermarks, etc. When invited to do so (for important releases or re-releases), we have an armoury of tools that can help us identify or actually ‘drill back’ to the ‘definitive sound’ and, in some cases, remove technical artefacts of the historical processes used. We trust the Labels; mistakes are found and quickly corrected. No-one is trying to be evil. Furthermore, we make no judgment on the master, but we do address technical flaws and use modern sampling methods so that the sound in the studio is reproduced.
Of course, these days, anyone can express anonymous opinions and undermine trust.
We are very clear that it is impossible to judge a recording’s provenance by technical measurement. More information is needed, and this information is owned by the label and is not ours to reveal. But as MQA, we set out, and continue, to do the best job of identifying archive assets, bringing their sound directly and efficiently to a music listener in many contexts, and confirming the process. We believe that clearer sound quality makes it easier to enjoy the music. MQA has always been clear about our process to determine and encode ‘Masters’. Each recording is a special case; below are links to Q&A materials and descriptive examples of specific workflows.
Coming to your final point, MQA uses a new, modern approach to sampling that best matches the properties of the content and of human hearing. That makes it efficient and transparent. In order to deliver the original sound exactly, MQA also calibrates the playback so that the analog output very closely matches the input (because sound in air is analog). The underlying principles have been developed by members of the MQA team over more than three decades and described in professional peer-reviewed papers. (See the open-access papers listed below and their references.)
There is strong support from professional recording and mastering engineers for the sonic benefits and ‘Authentication’ in MQA. Finally, MQA is not a ‘filter’; this has been refuted many times in Q&A materials, and perpetuating the myth reflects either a lack of expertise, an inability to read, or just mischief!
Provenance and Archive
[1] J. R. Stuart, “Soundboard: High-Resolution Audio,” J. Audio Eng. Soc., vol. 63, pp. 831–832 (2015 Oct.). Open access: http://www.aes.org/e-lib/browse.cfm?elib=18046
[2] J. R. Stuart and P. G. Craven, “A Hierarchical Approach for Audio Capture, Archive and Distribution,” J. Audio Eng. Soc., vol. 67 (2019 May). Open access: https://doi.org/10.17743/jaes.2018.0062
[3] J. R. Stuart, “Coding for High-Resolution Audio Systems,” J. Audio Eng. Soc., vol. 52, pp. 117–144 (2004 Mar.). http://www.aes.org/e-lib/browse.cfm?elib=12986
[4] J. R. Stuart and P. G. Craven, “The Gentle Art of Dither,” J. Audio Eng. Soc., vol. 67 (2019 May). Open access: https://doi.org/10.17743/jaes.2019.0011
Hi Steven, Ironically, the commercial music world is disconnected from the real music world. Using the word “Master” is extremely misleading, as in the commercial world it really means something recorded which is then “over-processed” to suit consumer playback…To me, a Master is what I have when I’ve recorded concerts on my R2R tape recorders. If I’ve done my job right, then it’s “done”…That stopped happening in the 1960s, once recording left the live venues and ended up in studios. So they shouldn’t call them “Masters”; they should call them something else – “commercially manipulated, overprocessed, compressed melodies”…
Why go through all the MQA nonsense? And why is Bob Stuart harping on about “anonymous critics of MQA.” The extensive criticism I’ve read is from very real, very technical, very public sources. I don’t trust corporate shills, wasting our time and attention on their profit margin. FLAC files are perfect and free.
When you call it “nonsense” and “fairy dust” that pretty much negates everything else you’ve written IMHO…
Fair enough. I edited it. If nonsense is too strong a word for you, try unnecessary.
The most important feature is that MQA works from original master recordings. After that, I’m not sure this MQA processing creates a reproduction better than other lossless processing. Anyway, I do like the sound as much as that of other good files.
A “master” to me is a version of a musical performance that has been worked on by a mastering engineer.
“MQA” to me is a piece of audio that has been processed by proprietary DSP developed by MQA, the firm. This apparently has several effects: (1) It reduces bandwidth, (2) it generates revenue for MQA, (3) it requires a special decoder, (4) it makes user DSP on the digital signal difficult or impossible, and (5) it produces a sonic effect that some find enjoyable. It is something I’d rather do without, but tastes vary.
A short comment from Bob? There are more words in his comment than in the original article!
Steven, as you know I’ve made my living as an audio engineer, record producer, university professor, and label owner for over 40 years. During that time, I’ve worked extensively with analog tape (stereo and multichannel – I owned a 3M 56 2″ 16-track tape machine and still have my Nagra and Ampex 440-C), early standard-resolution PCM digital, and high-resolution PCM digital. I was one of the first to record, mix, and release true high-resolution native recordings when DVD-Audio was introduced in 2000. As many in the industry still refuse to properly address the issue of provenance (a standard-resolution recording derived from analog tape can never be classified as hi-res music because the original recording formats weren’t capable of hi-res capture), it’s important to be consistent and accurate when discussing the issue of “masters” with regard to hi-res audio.
In actuality, there can be several – or even many – “masters” in the production and release of a music album. Like you Steven, I’ve recorded hundreds of live musical events directly to 2-channel stereo (analog or digital), duped them onto cassettes or burned CD-Rs and called it a day. When I worked with the pop band Ambrosia in the late 70s and early 80s at Mama Jo’s studios, we recorded to a Stephens 2″ 24-track tape machine. The 2″ tapes that resulted from those sessions were rightfully called the “multitrack masters”. They were supposed to be delivered to the labels along with at least two other “masters” – the stereo mixdown and ultimately the final mastered “gold master”. The very best consumers could hope for during the analog era were vinyl LPs or cassettes – and they were 3rd or 4th generation from the “multitrack master”.
I was fortunate enough to be a participant in sessions held at Battery Studios in New York City some years ago. According to the engineers working on new restorations and digitizations for streaming and disc release, they rarely have access to the final mastered “master”. When preparing a so-called hi-res music version of an album from the classic analog period, they are often forced to search track by track for the “best” surviving copy of the analog master. In the case of a Harry Nilsson release they were working on, that meant using a duplication safety copy located in a German vault. My friends at the WB and Universal mastering rooms concur that the real masters are NOT typically available. The digital releases audiophiles download and stream are not actually masters.
After a preliminary review of the HD-Audio Challenge submissions (which is ongoing and open to all looking to participate), no one can pick out a high-res audio track over a CD version of the same track. The high-res marketers and other promoters of getting back to the original master should be honest and recognize that any problems we have with the fidelity of classic analog-era records and the new digitally produced ones are not the fault of the CD format or standard-resolution sample rates and word lengths. The failure to deliver fidelity is due primarily to poor engineering choices, inadequate equipment, analog decks that are not calibrated or capable of delivering 100% of the fidelity on a master, and the demand by labels (not artists) for ever louder releases.
The very best “masters” we can hope for are files obtained from the original 2-channel mixed master, digitized at 96 kHz/24-bit PCM using state-of-the-art analog tape machines and ADCs, and delivered without ANY additional processing of any kind – this includes MQA.
Mr. Waldrep makes an interesting point: “The very best ‘masters’ we can hope for are files obtained from the original 2-channel mixed master, digitized at 96 kHz/24-bit PCM using state-of-the-art analog tape machines and ADCs, and delivered without ANY additional processing of any kind – this includes MQA.”
Assuming this is correct, and I suspect that it is, I have a few questions:
1) If we were to compare precisely the same Title (from the same “Master”), one instance on Qobuz at 96/24, and the other instance a Master Quality Tidal recording that unfolds (sorry if this is not the precise term, Mr. Stuart and Mr. Stone) to 96/24 – would there be any perceptible difference?
2) If not, is the primary benefit of the MQA technology to shrink the bandwidth required to deliver these recordings? Would this be primarily of interest to the streaming services rather than to their subscriber base?
3) If there is a perceptible difference, to which element(s) of the MQA technology chain can we look towards to explain the difference?
Thanks for the lively conversation!
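Question 2 above invites some back-of-the-envelope arithmetic. The sketch below (Python, purely illustrative) compares raw, uncompressed stereo PCM bitrates; note that the 48 kHz/24-bit “folded container” line is an assumption about how MQA is typically carried, and all figures are before FLAC compression reduces the actual streamed rate:

```python
# Rough comparison of raw stereo PCM bitrates (pre-FLAC).
# The "folded" figure assumes MQA travels in a 48 kHz/24-bit
# container -- an assumption for illustration, not a spec value.

def pcm_bitrate_kbps(sample_rate_hz, bit_depth, channels=2):
    """Raw PCM bitrate in kilobits per second."""
    return sample_rate_hz * bit_depth * channels / 1000

cd = pcm_bitrate_kbps(44_100, 16)      # Red Book CD
hires = pcm_bitrate_kbps(96_000, 24)   # 96/24 stereo download
folded = pcm_bitrate_kbps(48_000, 24)  # assumed MQA-style container

print(f"CD 44.1/16        : {cd:7.1f} kbps")
print(f"PCM 96/24         : {hires:7.1f} kbps")
print(f"48/24 container   : {folded:7.1f} kbps")
```

On these raw numbers, a 96/24 stream needs roughly three times the bits of CD, while a 48 kHz/24-bit container sits at about half the 96/24 rate, which is the sense in which a bandwidth saving could accrue mainly to the streaming service.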
“micro SD card, hard drive, full-sized SD card, and USB stick” audio formats?????? Well at least you said it early so I didn’t have to read the rest of the article. jesus, hard drive as an audio format…..