MQA discussion at Denver RMAF
Old 2 weeks ago | Show parent
  #781
Gear Head
 
Quote:
Originally Posted by lucey ➡️
Again you’re wrong, it’s really boring…

MQA was marketed for years and is still marketed as being approved by mastering engineers and artists and being the same thing that they hear in the mastering room… your interpretation of that as rightsholders is creative yet another con

Mastering Studio authentication is the central claim with MQA and intended to appeal to the perfectionistic interest of the audiophile

You sound like a lawyer not a music maker… Clueless

Maybe you should go work for them, or maybe you already do?

If you don’t have integrity, I certainly can’t show it to you in an online conversation… If you haven’t had first-hand experience with manipulative narcissists in business, I certainly can’t show you the signs either

Good luck. You’re blocked.
You made a comment about what is heard in the mastering room... If you mean what is heard before the nasty signal processing done to a consumer release, then starting from that clean copy, MQA can sound better. If you start from what is put on CDs or distributed to consumers, even in 24-bit form, then MQA will always be worse, even though often inaudibly worse.
It makes me wonder if MQA is intended to be a format that diminishes quality in a technical sense, but allows the use of something closer to the 2trk master that was originally mixed. I doubt the consumer will EVER get the 'good stuff'.
By far, the biggest damage is done in the last phase before distribution, even in consumer 24-bit high-res material.
I am NOT pro-MQA AT ALL, because it solves a problem that is already solved with legacy methods -- either straight data compression or more aggressive mp3/opus-type schemes. And using mp3/opus, or even FLAC-type compression, for downloads is much less necessary than it was even 10 years ago. We mostly have mega-bandwidth and mega-storage, so why garbage up the signal further just to save some space?

Bottom line: if MQA is yet another layer of obfuscation on top of what is normally distributed, then it is balderdash with today's technology. If it will be used to unleash something closer to a true 2trk master without further manipulation, then it might be helpful to the consumer.

Alas, I doubt that MQA will be used to free the less-processed original material; it will instead be added to the already existing layers of obfuscation. We already get recordings that are damaged enough. MQA is just not helpful with today's technology, even though it is partially enabled by today's technology.

Just my 2 cents.
Old 2 weeks ago | Show parent
  #782
Gear Guru
 
lucey's Avatar
 
Verified Member
1 Review written
🎧 15 years
Quote:
Originally Posted by John Dyson ➡️
You made a comment about what is heard in the mastering room... If you mean what is heard before the nasty signal processing done to a consumer release, then starting from that clean copy, MQA can sound better. If you start from what is put on CDs or distributed to consumers, even in 24-bit form, then MQA will always be worse, even though often inaudibly worse.
It makes me wonder if MQA is intended to be a format that diminishes quality in a technical sense, but allows the use of something closer to the 2trk master that was originally mixed. I doubt the consumer will EVER get the 'good stuff'.
By far, the biggest damage is done in the last phase before distribution, even in consumer 24-bit high-res material.

I am NOT pro-MQA AT ALL, because it solves a problem that is already solved with legacy methods -- either straight data compression or more aggressive mp3/opus-type schemes. And using mp3/opus, or even FLAC-type compression, for downloads is much less necessary than it was even 10 years ago. We mostly have mega-bandwidth and mega-storage, so why garbage up the signal further just to save some space?

Bottom line: if MQA is yet another layer of obfuscation on top of what is normally distributed, then it is balderdash with today's technology. If it will be used to unleash something closer to a true 2trk master without further manipulation, then it might be helpful to the consumer.

Alas, I doubt that MQA will be used to free the less-processed original material; it will instead be added to the already existing layers of obfuscation. We already get recordings that are damaged enough. MQA is just not helpful with today's technology, even though it is partially enabled by today's technology.

Just my 2 cents.
I have no idea what you're talking about in every word of this near-gibberish post

Bold print on the boldest gibberish

Consumers are able to hear the actual master from the mastering room. 24-bit masters are just fine, thank you

MQA is 16/44.1 or 16/48 plus artifacts
Old 2 weeks ago | Show parent
  #783
Lives for gear
 
David Rick's Avatar
 
🎧 15 years
How I learned to stop worrying about aliasing

Quote:
Originally Posted by sax512 ➡️
This is interesting. Why would using an R2R architecture avoid the need to upsample at the DAC, or oversample at the ADC, without the risk of creating unnecessarily high levels of aliasing?
No, you still need a guard band for a realizable anti-alias filter.

Quote:
I find it interesting that you complain (rightfully so) about the sinc^n type of frequency response, when that's exactly what happens with the B-spline kernel.
A B-spline kernel of order n is the convolution of (n+1) rects, so its frequency response is the product of (n+1) sincs.
...which decays (n+1) times as fast. But you're actually onto something here, Marco, in that there's never a sharp frequency cutoff like in Nyquist-style PCM. The filters aren't required to do that anymore. In Nyquist-type systems, both the A/D and D/A filters are required to remove all of the energy beyond a certain bandwidth. If they fail, we call that aliasing. But in bi-orthogonal systems, the unwanted stuff doesn't get filtered, it gets cancelled by the combination of the encoding and decoding filters. It gets a little bit fuzzy just trying to specify what the bandwidth of the resulting system actually is because there's not a single point of delineation between passband and stopband. The corresponding question is "what rate of signal innovation can this system reproduce?"
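To put rough numbers on that decay claim, here is a small numpy sketch of my own (nothing taken from MQA or Unser's papers): it builds B-spline kernels of order n by convolving (n+1) rects and reports the peak level of the first few spectral sidelobes. Each additional order should deepen the first sidelobe by roughly another 13 dB relative to the bare boxcar's -13 dB.

Code:
import numpy as np

L = 64          # samples per rect (finely sampled so the spectrum is smooth)
NFFT = 1 << 16  # zero-padded FFT length

box = np.ones(L) / L                      # order-0 kernel: a single rect
for order in range(4):                    # B-spline orders 0..3
    kernel = box
    for _ in range(order):                # order n = (n+1) rects convolved together
        kernel = np.convolve(kernel, box)
    H = np.abs(np.fft.rfft(kernel, NFFT))
    H /= H[0]                             # unity gain at DC
    f = np.fft.rfftfreq(NFFT, d=1.0 / L)  # frequency in cycles per rect length
    # peak of each of the first three sidelobes (nulls sit at f = 1, 2, 3, 4)
    peaks = [20 * np.log10(H[(f > k) & (f < k + 1)].max()) for k in (1, 2, 3)]
    print(f"order {order}: sidelobe peaks " + ", ".join(f"{p:7.1f} dB" for p in peaks))

Nothing MQA-specific there; it's just the sinc^(n+1) arithmetic made visible.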

I'm snipping some of your further musings about aliasing because they're still anchored in Nyquist-type thinking. It took me a long time to get over this mindset myself.

Perhaps it will help to realize that aliasing is basically a mismatch between the original signal and the subspace you're projecting into. But if the original signal is already in that subspace, then there can't be a mismatch. Can we assume that? In a Nyquist-type system, we'd have to assume that the original signal was strictly band-limited. It really isn't, so we apply a pre-filter to make it so and from there on the process can be "lossless", but that's because we hid the loss at the front and forgot about it.

In a post-Nyquist system, we don't assume (or enforce) a hard bandwidth limit on the original signal. Instead we assume that the signal has a "finite rate of innovation" -- that it can change only so fast -- which is a much safer assumption for signals originating in the physical world. Then if we "sample" at a rate capable of representing that rate of innovation, then we haven't lost anything.
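Here's a toy illustration of that "already in the subspace" point, using nothing MQA-specific -- just numpy and the order-1 (triangular) B-spline, whose reconstruction rule is ordinary linear interpolation. The "analog" signal below is piecewise linear with its breakpoints on the sampling grid; it is not band-limited in the Fourier sense, yet plain point sampling with no anti-alias filter, followed by reconstruction with the matching kernel, recovers it exactly.

Code:
import numpy as np

rng = np.random.default_rng(0)
knots = np.linspace(0.0, 1.0, 11)          # the sampling grid (10 intervals)
vals = rng.standard_normal(knots.size)     # breakpoint amplitudes

def x_of(t):
    # the "analog" signal: piecewise linear between the knots,
    # i.e. a member of the order-1 B-spline (triangle) subspace
    return np.interp(t, knots, vals)

t = np.linspace(0.0, 1.0, 5001)            # dense grid standing in for continuous time
x = x_of(t)

samples = x_of(knots)                      # point samples, no pre-filter at all
x_hat = np.interp(t, knots, samples)       # reconstruct with the matching spline kernel

print("max reconstruction error:", np.max(np.abs(x - x_hat)))   # prints 0.0

A signal that isn't in that subspace (say, a sine that bends between the knots) would leave a residual -- which is exactly the "mismatch" described above.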

Quote:
But maybe what bothers me the most about the paper you linked to is the fact that, to correct for its sinc^(n+1) type of frequency response, a complementary pre-filter has to be added first. The paper is quite descriptive and maybe I'm not interpreting the figures right, but it seems to me that this pre-filter is added before the sampler, at the analog stage. If so, good luck getting that right! The purely mathematical approach, with little to no consideration of how these kernels and pre-filters can actually be achieved in real life, is the bothersome part. Everything looks good... on paper!
Very perceptive. Just realize that we're doing the exact same thing in conventional systems today. The initial anti-alias filter is always analog, usually followed by digital-domain cleanup, thanks to oversampling. So conventional systems have a "prefilter" too. In fact, bi-orthogonal sampling theory includes conventional systems as a special case in which the encoding and decoding filters are constrained to be identical.

There's one situation in which you can skip the pre-filter: When the original signal is already in the representation space. For Nyquist-type systems, that means the anti-alias filter has nothing to remove. In general, it means that the original signal is fully representable using your shortest-available basis functions. Those basis functions are finite-support splines in the MQA technology. No surprise that Craven picked them because he's practically a Greek God in piecewise spline approximation theory. Unser developed that aspect of his theory because splines are popular in video processing and he does a lot of medical imaging research. But, probably with an eye towards audio, he also showed how to do similar things with exponential basis functions, and decaying exponentials are super easy to do in the analog domain. It should be possible to build an A/D converter which inherently has the proper sampling function, but nobody has done it yet.

Quote:
As I said before, there's many ways to skin a cat, and I'm certainly not one to not appreciate learning about a new mathematically sound method to do it (feasibility considerations aside). B-spline IRs are just another way. But they do come with their own problems. To summarize them, in my opinion:

1. Need of a pre-filter in the analog domain of very specific frequency response (very hard to get right to begin with, let alone keeping it stable as components drift).
You do it by oversampling, just as we do today because we can't build decent brickwall analog filters. It's just that you need less of it.
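Some back-of-the-envelope arithmetic on that trade-off (the corner frequency and rates below are my own assumptions, not anyone's spec): the loop prints how much a single-pole R-C anti-alias filter with its corner at 40 kHz attenuates the lowest frequency that folds back onto 20 kHz, for several oversampling factors of a 48 kHz base rate.

Code:
import math

f_pass = 20e3      # top of the band we want to protect
f_c = 40e3         # assumed corner of a single R-C pole, just above the audio band
base_fs = 48e3

for m in (1, 4, 16, 64):                  # oversampling factor
    fs = m * base_fs
    f_alias = fs - f_pass                 # lowest frequency that folds onto 20 kHz
    atten_db = 10 * math.log10(1 + (f_alias / f_c) ** 2)
    print(f"{m:3d}x ({fs / 1e3:6.0f} kHz): {atten_db:5.1f} dB of R-C attenuation "
          f"at {f_alias / 1e3:7.1f} kHz")

At 1x the single pole is hopeless (under 2 dB), which is why Nyquist-rate converters needed brickwall analog filters; at 64x it already buys nearly 40 dB before any digital filtering, whichever kernel family you then decimate with.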

Quote:
2. Need to use higher sampling rates to avoid sinc^n type of frequency response side lobes to show up as aliasing.
I covered this concern above. Choose basis functions that match the real world better than infinite sinc's, and you've got less to worry about than today.

Quote:
3. Stability and need to link the DAC to knowledge of the ADC.
Not sure what you mean by "stability" since the analog filtering is easier, not harder. The need to send side-chain information to enable proper decoding is there, but we already have to do the same thing with MP3, AAC, etc. Of course, once you require an auxiliary data channel, that opens the possibility to transmit other stuff that people might get upset about.

David
Old 2 weeks ago | Show parent
  #784
Lives for gear
 
IanBSC's Avatar
 
🎧 5 years
Quote:
Originally Posted by David Rick ➡️
Thanks for that, Ian. A discrete R2R that's good to 20 bits is really non-trivial; no wonder they were costly! It's really really hard to get resistors that hit the tolerance required to get the linearity spec they achieved and hold it over temperature. I think the Vishay precision metal foil ones could do it (they cost multiple dollars each) but I'd want them mounted on a common substrate for thermal tracking and laser-trimmed to match. That's an expensive custom part.

What this approach gets you is the ability to build a converter with no decimation filters. Or you could do a nominal amount of oversampling and do a low-ratio decimation with a full-on DSP chip, with 40 bit precision and proper dithering on the word-size reduction. Monolithic decimation chains are not that good.

It's true that the monolithic converter chip makers have thrown longer word lengths at their decimation filters in recent years, but they're still undithered and the high-rate stages of the decimation chain are still done with bog-simple CIC decimation stages, which are just a series of maximally-decimated "boxcar" filters. They're really lousy low-pass filters with a classic sinc(f)^n frequency response. They have crappy passband flatness and their sidelobes alias a lot of stopband energy into the passband. Then the passband droop gets cleaned up by a long (non-causal) FIR filter at the end of the chain, but the damage has already been done: There are aliased sidelobes lurking in the bottom bits of the passband that can't be removed (rough numbers in the sketch after this post). If you look at the plots, you can often see things that look like chopped-up pieces of sinc tails. But the PM2 converter plots have none of that, because it simply didn't happen.

To tie this back to the present discussion, I'll mention that spline filter sidelobes decay a lot more quickly; if you decimate using those, a lot less grunge is going to end up in the audio band.

David L. Rick
Very interesting. Thanks, David!

What I do know is that the Model 2 does have a decimation filter (two types, actually); at least that's how they describe it in the manual. My guess is they use some oversampling, probably 8x, and they have a dedicated DSP chip for the filters and HDCD encoding.

With the advent of Chinese tech there are a number of hifi DACs with R2R boards that perform up to 22 bits for much less money than the Model 2. Look at the Denafrips Terminator. If the financial incentive existed we could probably see this technology in future ADCs, although it is still more expensive than a chip. Maybe the AKM shortage will be a factor...
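To put rough numbers on the CIC behaviour David describes in the quoted post, here is a generic numpy sketch (the rates and ratio are assumptions of mine, not any vendor's datasheet). A length-R moving average has the sinc-shaped response he mentions, and after decimating by R the energy sitting near multiples of the output rate folds back toward DC; the least-attenuated survivor inside a 20 kHz passband lands at the passband edge. The print-out shows the attenuation one boxcar stage, and cascades of three and five stages (sinc^3, sinc^5), offer at those fold-back frequencies.

Code:
import numpy as np

fs_in = 6.144e6      # assumed modulator rate (128 x 48 kHz)
R = 16               # assumed CIC decimation ratio
f_edge = 20e3        # passband edge we want to keep clean
fs_out = fs_in / R   # 384 kHz after the CIC section

def boxcar_mag(f, R, fs):
    # |H(f)| of an R-tap moving average (one CIC stage), unity gain at DC
    w = np.pi * f / fs
    return abs(np.sin(R * w) / (R * np.sin(w)))

for k in (1, 2, 3):                      # first three images that fold onto the passband edge
    f_alias = k * fs_out - f_edge        # ends up at f_edge after decimation by R
    h1 = boxcar_mag(f_alias, R, fs_in)
    levels = ", ".join(f"{n} stage(s): {20 * np.log10(h1 ** n):7.1f} dB" for n in (1, 3, 5))
    print(f"image {k} ({f_alias / 1e3:7.1f} kHz -> {f_edge / 1e3:.0f} kHz): {levels}")

Whether those figures count as "a lot of stopband energy" depends on the rest of the chain, but they show why the sidelobes matter: nothing downstream can remove what has already folded into the passband.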
Old 2 weeks ago | Show parent
  #785
Gear Head
 
Quote:
Originally Posted by lucey ➡️
I have no idea what you're talking about in every word of this near-gibberish post

Bold print on the boldest gibberish

Consumers are able to hear the actual master from the mastering room. 24-bit masters are just fine, thank you

MQA is 16/44.1 or 16/48 plus artifacts
All I know is that most consumer recordings are strongly compressed. Sorry about my language skills -- I do have problems expressing things.
I have seen huge pushback from some groups of people, especially those in the industry. We are working on some *FREE* technology to undo the compression applied to consumer recordings, needing only a few adjustments per recording (mostly the stereo image and a choice between two threshold levels). The improvement is obvious, but perhaps accommodation has left some people unable to detect the compression.
Old 2 weeks ago | Show parent
  #786
Gear Guru
 
lucey's Avatar
 
Verified Member
1 Review written
🎧 15 years
Quote:
Originally Posted by John Dyson ➡️
All I know is that most consumer recordings are strongly compressed. Sorry about my language skills -- I do have problems expressing things.
I have seen huge pushback from some groups of people, especially those in the industry. We are working on some *FREE* technology to undo the compression applied to consumer recordings, needing only a few adjustments per recording (mostly the stereo image and a choice between two threshold levels). The improvement is obvious, but perhaps accommodation has left some people unable to detect the compression.
Are you upset with volume normalization done by Spotify, YouTube, etc.?

Or are you talking about the compression that is part of the production choices of the music makers?
Old 2 weeks ago | Show parent
  #787
Gear Guru
 
Brent Hahn's Avatar
 
1 Review written
🎧 15 years
Quote:
Originally Posted by John Dyson ➡️
The improvement is obvious, but perhaps accommodation has left some people unable to detect the compression.
If people can't detect the compression to begin with, how can the improvement be obvious?
Old 2 weeks ago | Show parent
  #788
Gear Addict
 
JLaPointe's Avatar
 
🎧 10 years
Quote:
Originally Posted by John Dyson ➡️
It makes me wonder if MQA is intended to be a format that diminishes quality in a technical sense, but allows the use of something closer to the 2trk master that was originally mixed. I doubt the consumer will EVER get the 'good stuff'.

<snip>

Bottom line: if MQA is yet another layer of obfuscation on top of what is normally distributed, then it is balderdash with today's technology. If it will be used to unleash something closer to a true 2trk master without further manipulation, then it might be helpful to the consumer.

Alas, I doubt that MQA will be used to free the less-processed original material; it will instead be added to the already existing layers of obfuscation. We already get recordings that are damaged enough.
It's hard to unwrap exactly what you're saying here, but I think your position is that music is overprocessed for your taste?

I think you need to understand that artists are releasing exactly what they want you to hear. 100% of my clients want you to hear the mastered material, not the mixes "without further manipulation", to use your words.

In fact, many many many mixes arrive at mastering with all the compression and limiting as part of the production. There is no less-processed material.

Re: MQA - it's my understanding that Apple will be streaming lossless imminently, with Spotify soon to follow. Amazon already does.

So what exactly is the point of MQA in the marketplace?
Old 2 weeks ago | Show parent
  #789
Gear Addict
 
Verified Member
Quote:
Originally Posted by David Rick ➡️
Those basis functions are finite-support splines in the MQA technology.
David, do we have any proof that finite-support splines, or any other post-Shannon approaches, are actually implemented in MQA?
Old 1 week ago | Show parent
  #790
Lives for gear
 
🎧 10 years
Quote:
Originally Posted by sax512 ➡️
Dude..
You keep getting hung up on variations of arguments about what is to be intended for "lossless".
There was no mention or talk about lossless in the post of mine you quoted, nor was it related to what I was responding to.

When I have been talking about the meaning of "lossless", it has been purely in the context of replying to one single post in the thread.

I am very sorry you have difficulty following the different, separate conversations in these forums.
Old 1 week ago | Show parent
  #791
Lives for gear
 
🎧 10 years
Quote:
Originally Posted by David Rick ➡️
Sloppy transfers, bad record-keeping, and lack of respect for artistic intent have been all-too-common record label behavior since the industry began. I don't know why people are suddenly inclined to lay it all at the door of MQA when it's been there all along. Perhaps it's more satisfying to point fingers at a small group of scapegoats who are readily identifiable. The same "logic" could be used to attack any proposed distribution technology. All of them, from CDs clear back to Edison cylinders, could be done well or done poorly depending on who was in charge of the process. Do people not remember how awful many early CD releases (cut from second or third generation vinyl masters) were? How did you like those 8-tracks with gaps in the middle of songs?

To me, the far more interesting question is not how bad MQA can be, but how good it can be. It's pretty disappointing that George Massenburg showed up here with actual experience in front-to-back MQA encoding of acoustic music and people simply carried on arguing about Tidal. Personally, I've been trying to come up with a crisp list of questions to ask, but I fear he's already wandered off in disgust.

BTW, I'm not discounting Brian Lucey's first-hand experience, but [off-target generalization deleted, with my apologies] I want to hear from others as well. I'm much more interested in what MQA, used in its best mode by a skilled practitioner, can bring to acoustic music. I really hope GM will have more to say about that.

David L. Rick
Seventh String Recording
The problems people seem to have with MQA are all ideological or about marketing. There doesn't seem to be much real chat about the technical side of MQA. Yet those folks are *so* ideologically opposed, they conflate the two.

It needs to be judged when correctly applied, not by some guy using Tidal and making Anonymous-style YouTube vids about it.

Labels like 2L wouldn't use it if it didn't have merit.

Last edited by nat8808; 1 week ago at 11:16 PM..
Old 1 week ago | Show parent
  #792
Lives for gear
 
gyraf's Avatar
 
🎧 15 years
No. You're wrong.

The problems people seem to have about MQA are about

1) being lied to about lossless
2) being lied to about approval from artists/mastering engineers
3) the blatant attempt to corner the delivery-stream market with a proprietary commercial algorithm, designed to squeeze extra revenue out of an already suffering economy, marketed with false claims.

..what's not to like?

/Jakob E.
Old 1 week ago | Show parent
  #793
Gear Guru
 
lucey's Avatar
 
Verified Member
1 Review written
🎧 15 years
Quote:
Originally Posted by nat8808 ➡️
The problems people seem to have with MQA are all ideological or about marketing. There doesn't seem to be much real chat about the technical side of MQA. Yet those folks are *so* ideologically opposed, they conflate the two.

It needs to be judged when correctly applied, not by some guy using Tidal and making Anonymous-style YouTube vids about it.

Labels like 2L wouldn't use it if it didn't have merit.
False.

Labels are trying to make money, wake up

I’ve heard it before and after. It’s distorted. Messes with eq and mid side balance

I’ve stated repeatedly that the only way to correct PCM, which is really an absurd idea, would be to also sell a converter, a very high-quality AD.

And this idea that they know what converter everyone's using is a lie… plus even if they did, each converter has its own sound; analog gear differs from unit to unit, particularly when you're talking about the microscopic artifacts that are an issue here… None of which really matters compared to the quality of the work

Am I “some guy”?

No, I didn’t think so
Old 1 week ago | Show parent
  #794
Lives for gear
 
sax512's Avatar
 
🎧 5 years
Quote:
Originally Posted by David Rick ➡️
No, you still need a guard band for a realizable anti-alias filter.
But didn't you say that R2R avoids the need to oversample? How are you going to get a guard band if not by oversampling or by using a brickwall filter (which it is clear at this point you have something against)?

Quote:
...which decays (n+1) times as fast. But you're actually onto something here, Marco, in that there's never a sharp frequency cutoff like in Nyquist-style PCM. The filters aren't required to do that anymore. In Nyquist-type systems, both the A/D and D/A filters are required to remove all of the energy beyond a certain bandwidth. If they fail, we call that aliasing. But in bi-orthogonal systems, the unwanted stuff doesn't get filtered, it gets cancelled by the combination of the encoding and decoding filters. It gets a little bit fuzzy just trying to specify what the bandwidth of the resulting system actually is because there's not a single point of delineation between passband and stopband. The corresponding question is "what rate of signal innovation can this system reproduce?"
Looking at the schematics in the paper (Figs. 13, 15 and 17), they still use a sampler that is a traditional multiplication of the input signal by a train of time-shifted Dirac deltas, so the sampler still creates a signal made up of spectrum replicas at multiples of the sample rate.
The only way you can avoid aliasing in the audio band is to oversample and count on the side lobes you can see in the pre-filter response in Fig. 16 being low enough by the time they wrap back down to 20 kHz and below.
Once you have spillage in the audio band, how would you be able to remove it by filtering with a B-spline type of kernel?
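A quick numpy illustration of that fold-back mechanism, with made-up numbers that have nothing to do with MQA's actual rates or filters: a 150 kHz tone captured at 384 kHz and then decimated to 48 kHz with no filtering at all reappears at |150 - 3 x 48| = 6 kHz, squarely inside the audio band.

Code:
import numpy as np

fs_hi, fs_lo, f_tone = 384_000, 48_000, 150_000
n = np.arange(1 << 14)
x = np.sin(2 * np.pi * f_tone * n / fs_hi)     # ultrasonic tone at the high rate

y = x[::fs_hi // fs_lo]                        # decimate by 8, deliberately unfiltered
spec = np.abs(np.fft.rfft(y * np.hanning(y.size)))
freqs = np.fft.rfftfreq(y.size, 1 / fs_lo)
print(f"strongest component after decimation: {freqs[np.argmax(spec)] / 1e3:.1f} kHz")
# prints 6.0 kHz: the ultrasonic energy has aliased into the audio band

So any scheme that keeps a plain Dirac-comb sampler has to guarantee, one way or another, that whatever survives the pre-filter at those image frequencies is negligible -- which is exactly the question being raised here.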

Quote:
I'm snipping some of your further musings about aliasing because they're still anchored in Nyquist-type thinking. It took me a long time to get over this mindset myself.
The Nyquist thinking still applies with respect to the centerpiece of the schematics, the sampler. See the response above.

Quote:
Perhaps it will help to realize that aliasing is basically a mismatch between the original signal and the subspace you're projecting into. But if the original signal is already in that subspace, then there can't be a mismatch. Can we assume that?
I would have to see a rigorous mathematical demonstration to understand how aliasing does not apply to this scheme, especially in light of the fact that it is still centered on a good old-fashioned sampler, the only difference being that instead of two brickwall filters you now have two other types of filters.
But the spectrum-replicating action of the sampler is still there.

Quote:
In a Nyquist-type system, we'd have to assume that the original signal was strictly band-limited. It really isn't, so we apply a pre-filter to make it so and from there on the process can be "lossless", but that's because we hid the loss at the front and forgot about it.
I know we disagree on the importance of the lost content.
It's actually not a loss at all! You lose content that nobody can hear, and you put the electronic and electro-mechanical devices in the audio chain in a condition to operate more linearly. It's good engineering.

Quote:
In a post-Nyquist system, we don't assume (or enforce) a hard bandwidth limit on the original signal. Instead we assume that the signal has a "finite rate of innovation" -- that it can change only so fast -- which is a much safer assumption for signals originating in the physical world. Then if we "sample" at a rate capable of representing that rate of innovation, then we haven't lost anything.
No matter how you look at it, you still have to make sure that the sampler at the center of the architecture doesn't cause aliasing in the audio band, right?
So what is the rate that's capable of representing the rate of innovation? If we used that rate with traditional sampling, would we not be able to design a filter that gently slopes down above the audio band (though, as I said before, I think there is no benefit at all in doing that)?

Quote:
Very perceptive. Just realize that we're doing the exact same thing in conventional systems today. The initial anti-alias filter is always analog, usually followed by digital-domain cleanup, thanks to oversampling. So conventional systems have a "prefilter" too. In fact, bi-orthogonal sampling theory includes conventional systems as a special case in which the encoding and decoding filters are constrained to be identical.
Yes, but the pre-filter in traditional sampling is a simple R-C low pass. Very easy to make and, thanks to the following oversampling strategy, of practically constant amplitude and linear phase within the audio band (with deviations that can be accounted for by the digital filter kernel, if one wants to be a perfectionist).
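For a sense of scale (the corner frequency below is my own illustrative choice, not from any converter's datasheet): with plenty of oversampling the single R-C pole can sit far above the audio band, and its residual droop and phase shift at 20 kHz are tiny -- and, being a known first-order response, easy to fold into the digital decimation filter if one cares.

Code:
import math

f_c = 400e3                  # assumed R-C corner, feasible when the ADC runs far above 48 kHz
for f in (1e3, 10e3, 20e3):
    droop_db = -10 * math.log10(1 + (f / f_c) ** 2)
    phase_deg = -math.degrees(math.atan(f / f_c))
    print(f"{f / 1e3:5.1f} kHz: {droop_db:+.4f} dB, {phase_deg:+.2f} deg")

(That works out to roughly -0.01 dB and -2.9 degrees at 20 kHz with this corner choice.)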

Quote:
There's one situation in which you can skip the pre-filter: When the original signal is already in the representation space. For Nyquist-type systems, that means the anti-alias filter has nothing to remove. In general, it means that the original signal is fully representable using your shortest-available basis functions. Those basis functions are finite-support splines in the MQA technology. No surprise that Craven picked them because he's practically a Greek God in piecewise spline approximation theory. Unser developed that aspect of his theory because splines are popular in video processing and he does a lot of medical imaging research.
They're also such that there's no way to reproduce the audio band accurately without oversampling (a trait shared with the sinc basis function) and without being forced to keep ultrasonic content in the audio chain, which is really not a good thing to do (and not necessary with traditional sampling -- a strong plus in my book).

Quote:
But, probably with an eye towards audio, he also showed how to do similar things with exponential basis functions, and decaying exponentials are super easy to do in the analog domain. It should be possible to build an A/D converter which inherently has the proper sampling function, but nobody has done it yet.
It's all a matter of what the frequency response looks like for the pre-filter, and how high you have to oversample in the center sampler to avoid aliasing.

Quote:
(1. Need of a pre-filter in the analog domain of very specific frequency response (very hard to get right to begin with, let alone keeping it stable as components drift).)
You do it by oversampling, just as we do today because we can't build decent brickwall analog filters. It's just that you need less of it.
But I want MORE of it (brickwall filtering)! I want all unnecessary, damaging ultrasonic content to be cleared off the audio chain as soon as possible (while still making sure the audio band response is constant in amplitude and linear in phase, of course).
You may need more oversampling to avoid aliasing with the traditional R-C low pass analog filter at the ADC input, vs another analog pre-filter which is designed to decay faster, but I don't see why this would be much of a problem.
In any case, oversampling doesn't address the challenges of designing a compensation analog pre-filter that has to specifically match the reconstruction filter at the DAC and that is made with components that have manufacturing tolerances and whose values drift in time.

Quote:
(2. Need to use higher sampling rates to avoid sinc^n type of frequency response side lobes to show up as aliasing.)
I covered this concern above. Choose basis functions that match the real world better than infinite sinc's, and you've got less to worry about than today.
Only if you operate at a higher sample rate than 44.1, but then you carry ultrasonic content with you throughout the whole audio chain, which is a much bigger worry (especially at the end of the chain) than designing a brickwall filter in the digital domain.
Let's not forget that not only do these modern sampling filters have this problem embedded in them, they also have to be designed and implemented in the analog domain. That's a big problem, since not every response can be achieved with analog components, while digital filters give you pretty much infinite design freedom in both amplitude and phase (if you can deal with the associated latency).
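As a concrete instance of that design freedom (tap count and band edges below are arbitrary choices of mine, purely for illustration): a linear-phase FIR "brickwall" designed in two lines of scipy holds the passband essentially flat at 20 kHz and is already far down by 24 kHz, and its response never drifts with temperature or age.

Code:
import numpy as np
from scipy.signal import firwin, freqz

fs = 48_000
taps = firwin(1023, cutoff=21_500, window=("kaiser", 12.0), fs=fs)  # linear-phase low-pass
w, h = freqz(taps, worN=1 << 14, fs=fs)
mag_db = 20 * np.log10(np.abs(h) + 1e-12)

print(f"response at 20 kHz: {mag_db[np.argmin(np.abs(w - 20_000))]:+.4f} dB")
print(f"response at 24 kHz: {mag_db[np.argmin(np.abs(w - 24_000))]:.1f} dB")

The price, as noted, is latency: about (1023 - 1) / 2 = 511 samples, or roughly 10.6 ms at 48 kHz, for this particular length.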


Quote:
(3. Stability and need to link the DAC to knowledge of the ADC.)
Not sure what you mean by "stability" since the analog filtering is easier, not harder. The need to send side-chain information to enable proper decoding is there, but we already have to do the same thing with MP3, AAC, etc. Of course, once you require an auxiliary data channel, that opens the possibility to transmit other stuff that people might get upset about.

David
By stability I mean that, even if you were able to design the filter exactly the way you wanted (not a trivial thing at all with analog filters), component values still drift over time, so the frequency response is not stable in the long run. As temperature changes, it can be unstable in the short run too, unless care is taken to avoid that through careful choice of components or circuit design (and you still can't get it 100% right). Digital filters, by contrast, are immutable until the end of time (or the next EMP).
Analog filters are definitely harder to get right than digital ones. Sometimes impossible. But even if they could be got right every time (which they can't), you would still have to deal with power dissipation. So at the very least, digital filters are preferable to analog ones for this practical reason alone.

There's also the issue that the DAC needs to 'know' the ADC, and since both filters are impossible to get exactly right in the analog domain, the two sets of variations can theoretically add up to even more deviation.

Last edited by sax512; 1 week ago at 12:34 PM.. Reason: Some clarifications
Old 2 days ago | Show parent
  #795
Gear Head
 
Quote:
Originally Posted by Brent Hahn ➡️
If people can't detect the compression to begin with, how can the improvement be obvious?
It is obvious because the uncompressed material is seldom available to consumers. Too bad that a clean, accurate representation of what was mixed is not normally available. (I'm speaking of pop and commodity jazz or classical -- NOT boutique material.)

Is relatively 'not good' good enough? Well, I have more respect for people than that. The prevailing consumer recording distribution is like saying McDonald's is good enough... For a hungry person with no other food available for one reason or another, McDonald's IS good enough. People should be able to hear what was mixed, or something reasonably close.
Old 1 day ago | Show parent
  #796
Gear Addict
 
🎧 10 years
Quote:
Originally Posted by John Dyson ➡️
It is obvious because the uncompressed material is seldom available to consumers. Too bad that a clean, accurate representation of what was mixed is not normally available. (I'm speaking of pop and commodity jazz or classical -- NOT boutique material.)

...

People should be able to hear what was mixed, or something reasonably close.
Why is the mix and not the master the thing you want a clean, accurate representation of? For that matter, why the final mix and not a rough board mix from the tracking session?

The master is the cleanest, most accurate representation of the artist's intention possible. This is a tautology; it is true by definition if you know the meaning of the words being used. What are you looking for a clean, accurate representation of?
Old 1 day ago | Show parent
  #797
Lives for gear
 
🎧 10 years
Quote:
Originally Posted by lucey ➡️
Am I “some guy”?

No, I didn’t think so


Well... yes, you are; we all are (I'm sure your ego won't let you countenance that for one second...). But I was talking about the GoldenOne figure whose YouTube video was being cited a lot previously.