The No.1 Website for Pro Audio
Does Apple's Core Audio resample AD/DA signal?
Old 9th May 2022 | Show parent
  #151
Deleted 9f46789
Guest
Quote:
Originally Posted by moostapha ➡️
In your professional experience and given these results and that deep null...what do you think is responsible for the apparently perceptible difference.
I don't know what is causing this, but it sounds like jitter to me, or perhaps rounding error. Core Audio has been shown to be bit-perfect (what goes in is what comes out), so I don't think it is word-length related.

Jitter, on the other hand, affects how DACs and ADCs perform even with a bit-perfect stream. Jitter is the silent killer in digital audio because it is easily induced but difficult to measure without a proper scope. Just like head lice.

Ever wonder why audiophiles these days use apps like HQPlayer to upsample 44.1 kHz WAVs to rates as high as 98.304 MHz before feeding their DACs?

Because the higher you upsample, the more the original 44.1 kHz jitter is ironed out into smaller and smaller segments, to the point where the samples become near-perfectly spaced (no jitter) and the DAC suddenly sounds better.

In pro audio we don't have that bandwidth luxury, so if our sample dispersion is altered by enough jitter, we will hear it at both recording and playback.

I could be way off, and it may have nothing to do with timing errors at all; maybe it's some sort of bug in one of the 32-bit AU DSP modules found in Core Audio. A bug that has remained in place for 20 years.

Hopefully someone with a DSP background and decent ears will finally isolate what's causing this degradation.
Old 10th May 2022 | Show parent
  #152
Gear Nut
 
thebeatless's Avatar
 
🎧 15 years
Quote:
Originally Posted by Robb Robinson ➡️
I don't know what is causing this, but it sounds like jitter to me, or perhaps rounding error. Core Audio has been shown to be bit-perfect (what goes in is what comes out), so I don't think it is word-length related.

Jitter, on the other hand, affects how DACs and ADCs perform even with a bit-perfect stream. Jitter is the silent killer in digital audio because it is easily induced but difficult to measure without a proper scope. Just like head lice.

Ever wonder why audiophiles these days use apps like HQPlayer to upsample 44.1 kHz WAVs to rates as high as 98.304 MHz before feeding their DACs?

Because the higher you upsample, the more the original 44.1 kHz jitter is ironed out into smaller and smaller segments, to the point where the samples become near-perfectly spaced (no jitter) and the DAC suddenly sounds better.

In pro audio we don't have that bandwidth luxury, so if our sample dispersion is altered by enough jitter, we will hear it at both recording and playback.

I could be way off, and it may have nothing to do with timing errors at all, but instead some sort of bug in one of the 32-bit DSP modules found in Core Audio. A bug that has remained in place for 20 years.

Hopefully someone with a DSP background and decent ears will finally isolate what's causing this degradation.
Maybe you already mentioned this somewhere in the thread, but if so I missed it. Did you record from the same source multiple times and get the same results and/or a null at a lower level?
Old 10th May 2022 | Show parent
  #153
Lives for gear
 
Jerry Tubb's Avatar
 
Verified Member
🎧 15 years
Quote:
Originally Posted by Deleted 9f46789 ➡️
I've done 10,000+ masters through Core Audio.
interesting research you're doing Robb!

brings the "if a tree falls in the forest" scenario to mind.

what solutions & alternatives do you propose?

switch to a Windows PC?

cheers, JT
Old 10th May 2022 | Show parent
  #154
Deleted 9f46789
Guest
Quote:
Originally Posted by thebeatless ➡️
Maybe you already mentioned this somewhere in this thread, but if so I missed it. Did you record from the same source multiple times and get the same results and/or null at a lower level?
Yes, I did do multiple passes of each loop and forgot to mention that their nulls were the same, at around -95.0 dBFS.

The DAC used is an R2R ladder DAC which, compared to delta-sigma DACs, is a little more nonlinear with noise. On Wednesday I will make new transfer loops using the Merging Premium DAC (a delta-sigma chip), which is more stable and should provide deeper nulls.
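For readers unfamiliar with the procedure, a null test at this depth can be sketched as follows (a toy illustration, not Robb's actual capture chain; the 1 kHz tone and the injected 60 Hz error are made up for the example):

```python
# Toy null test: invert one capture, sum it with the other, and report
# the peak of the residue in dBFS. A -100 dBFS null means the two
# captures differ by at most one part in 100,000 at any sample.
import numpy as np

fs = 48000
t = np.arange(fs) / fs
ref = 0.5 * np.sin(2 * np.pi * 1000 * t)             # first pass
second = ref + 1e-5 * np.sin(2 * np.pi * 60 * t)     # second pass + tiny error
residue = ref - second                               # polarity flip and sum
null_db = 20 * np.log10(np.max(np.abs(residue)))
print(f"null depth: {null_db:.1f} dBFS")             # null depth: -100.0 dBFS
```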

If anyone has material they want me to run and don't mind posting here, please PM me.

Quote:
Originally Posted by Jerry Tubb ➡️
what solutions & alternatives do you propose?
Thanks Jerry - you can tell I'm on a mission here.

As far as what I would propose to Apple:

1) Inside the System Preferences 'Sound' pane, add a checkbox for a new 'Pass Thru' mode for each audio interface listed through Core Audio. It doesn't matter whether it is a consumer-grade or professional-grade interface; they should all get user-specified access to this new mode.

2) Enabling Pass Thru mode would essentially do two things:
a) Make the interface exclusive to the first DAW (or app) that claims it as its I/O interface. From there, no other app can access the interface - it won't even show up as an interface option. So if Logic claims it first, Reaper won't even see it, and neither will the system audio.

b) Bypass Core Audio's SRC, word-length, summing, and gain modules and pass the 24-bit integer directly from the DAW to the hardware interface. Core Audio would remove itself as the 32-bit floating-point middleman the moment Pass Thru mode is enabled for that interface.

3) The Audio MIDI Setup app would no longer control or regulate any interface the user has designated as Pass Thru in System Preferences.

4) If there is a sample-rate mismatch between DAW and interface, Core Audio would pass the change request to the interface and do nothing further. Interface drivers are already coded to handle sample-rate change requests, so nothing new should be required here from third-party devs.

5) The goal is to prevent the hundreds, if not thousands, of interfaces already on the market from requiring driver updates to work with Pass Thru mode. Keep the third-party interface devs happy by not asking anything more from them beyond native Apple Silicon support.
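The exclusive-claim behavior in point 2a can be modeled with a toy sketch (hypothetical code, not Apple's API; the device and app names are just examples):

```python
# Toy model of "first claimant owns the device": the first client to
# claim an interface gets it; every later claim is refused.
class Device:
    def __init__(self, name):
        self.name = name
        self.owner = None          # no app has claimed the device yet

    def claim(self, app):
        if self.owner is None:
            self.owner = app       # first claimant takes exclusive ownership
            return True
        return False               # already hogged by another app

dev = Device("Example Interface")
assert dev.claim("Logic") is True      # Logic claims it first
assert dev.claim("Reaper") is False    # Reaper is locked out
```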
Old 10th May 2022 | Show parent
  #155
Lives for gear
Quote:
Originally Posted by Robb Robinson ➡️
Yes, I did do multiple passes of each loop and forgot to mention that their nulls were the same, at around -95.0 dBFS.

The DAC used is an R2R ladder DAC which, compared to delta-sigma DACs, is a little more nonlinear with noise. On Wednesday I will make new transfer loops using the Merging Premium DAC (a delta-sigma chip), which is more stable and should provide deeper nulls.

If anyone has material they want me to run and don't mind posting here, please PM me.

Thanks Jerry - you can tell I'm on a mission here.

As far as what I would propose to Apple:

1) Inside the System Preferences 'Sound' pane, add a checkbox for a new 'Pass Thru' mode for each audio interface listed through Core Audio. It doesn't matter whether it is a consumer-grade or professional-grade interface; they should all get user-specified access to this new mode.

2) Enabling Pass Thru mode would essentially do two things:
a) Make the interface exclusive to the first DAW that claims it as its I/O interface. From there, no other app can access the interface - it won't even show up as an interface option. So if Logic claims it first, Reaper won't even see it, and neither will the system audio.

b) Bypass Core Audio's SRC, word-length, summing, and gain modules and pass the 24-bit integer directly from the DAW to the hardware interface. Core Audio would remove itself as the 32-bit floating-point middleman the moment Pass Thru mode is enabled for that interface.

3) The Audio MIDI Setup app would no longer control or regulate any interface the user has designated as Pass Thru in System Preferences.

4) If there is a sample-rate mismatch, Core Audio would pass the request to the interface and do nothing further. Interface drivers are already coded to handle sample-rate change requests, so nothing new should be required here from third-party devs.

5) The goal is to prevent the hundreds, if not thousands, of interfaces already on the market from requiring driver updates to work with Pass Thru mode. Keep the third-party interface devs happy by not asking anything more from them beyond native Apple Silicon support.
So Core Audio is doing far more than rounding the 64-bit float DAW output to 32-bit floating-point PCM? Is it passing a 32-bit float output from the DAW through cleanly if no volume control is used and nothing else is running but the DAW?
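For scale, the rounding step the question refers to can be quantified with a quick sketch (my own illustration; the sample value is arbitrary). Rounding a 64-bit float to a 32-bit float introduces a relative error of at most about 2^-24, i.e. roughly -144 dB, far below the -80 to -95 dB nulls discussed in this thread:

```python
# Quantify the error of rounding a float64 sample down to float32.
import numpy as np

x64 = np.float64(0.1234567890123456)    # arbitrary 64-bit sample value
x32 = np.float32(x64)                   # round to 32-bit float
rel_err = abs(float(x32) - float(x64)) / abs(float(x64))
err_db = 20 * np.log10(rel_err)
# round-to-nearest bounds the relative error by 2**-24 (about -144 dB)
```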

Apple also disabled the hardware access layer for USB devices in recent macOS versions.
Old 10th May 2022 | Show parent
  #156
Lives for gear
 
Verified Member
Quote:
Originally Posted by Robb Robinson ➡️
you can tell I'm on a mission here.

As far as what I would propose to Apple:

1) Inside the System Preferences 'Sound' pane, add a checkbox for a new 'Pass Thru' mode for each audio interface listed through Core Audio. It doesn't matter whether it is a consumer-grade or professional-grade interface; they should all get user-specified access to this new mode.

2) Enabling Pass Thru mode would essentially do two things:
a) Make the interface exclusive to the first DAW that claims it as its I/O interface. From there, no other app can access the interface - it won't even show up as an interface option. So if Logic claims it first, Reaper won't even see it, and neither will the system audio.

b) Bypass Core Audio's SRC, word-length, summing, and gain modules and pass the 24-bit integer directly from the DAW to the hardware interface. Core Audio would remove itself as the 32-bit floating-point middleman the moment Pass Thru mode is enabled for that interface.

3) The Audio MIDI Setup app would no longer control or regulate any interface the user has designated as Pass Thru in System Preferences.

4) If there is a sample-rate mismatch, Core Audio would pass the request to the interface and do nothing further. Interface drivers are already coded to handle sample-rate change requests, so nothing new should be required here from third-party devs.

5) The goal is to prevent the hundreds, if not thousands, of interfaces already on the market from requiring driver updates to work with Pass Thru mode. Keep the third-party interface devs happy by not asking anything more from them beyond native Apple Silicon support.
I appreciate you being on this mission. Please do not give up even if a few naysayers discourage you. You're doing an important job here. Thanks.
Old 10th May 2022 | Show parent
  #157
Deleted 9f46789
Guest
Quote:
Originally Posted by To Mega Therion ➡️
So Core Audio is doing far more than rounding the 64-bit float DAW outputs to 32-bit floating point PCM? Is it passing a 32-bit float output from the DAW cleanly through if no volume control is used and nothing else is running but the DAW?
While DAWs may process at 64-bit float nowadays, they don't output at 64 bits. I could be wrong, but I think they output a fixed 24-bit integer. If true, then Core Audio re-encodes that 24-bit integer as 32-bit floating point. That is different from a DAW bouncing to 32/64-bit float, because the bounce is an internal app process while the other is a data-stream output already truncated to 24 bits.

Professional DAWs shouldn't expect Core Audio to dither and truncate for them, so fixed 24-bit DAW outputs are likely. Can anyone confirm this?
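One point worth separating out: if the stream really is a 24-bit integer, the int-to-float conversion itself is harmless, because a 32-bit float carries a 24-bit significand, so every 24-bit sample survives the round trip bit-exactly. A quick sketch (my own illustration, not Apple's code):

```python
# float32 has a 24-bit significand, so any 24-bit integer PCM sample
# converts to float32 and back without loss.
import numpy as np

# extreme and ordinary 24-bit sample values
samples = np.array([-(2**23), -1, 0, 1, 2**23 - 1], dtype=np.int64)
as_float = (samples / 2**23).astype(np.float32)    # normalize to [-1, 1)
back = np.round(as_float.astype(np.float64) * 2**23).astype(np.int64)
assert np.array_equal(samples, back)               # bit-exact round trip
```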

Audirvana and other hi-fi players (Roon, HQPlayer, Amarra, etc.) acknowledge that DACs can only accept 24-bit integers, so they don't disturb the original word length by going floating point.

Core Audio uses floating point because its summing module (two or more app sources summed into one output) and gain module (the system volume faders) perform better using FP math. Pro users do not want or need SRC, summing, or gain DSP applied to the DAW output, so the preference is to have the option of leaving the 24-bit integer alone.

Pro Tools HDX is different in that one company designed the DAW, the audio engine, and the physical interface, so the entire audio path is regulated under one set of rules that preserves the 24-bit integer from DAW to converter and converter to DAW.

But I still don't understand how this all relates to jitter, which is what I think I'm hearing as the degradation, and is where my limited DSP understanding and professional ear are not correlating.

Quote:
Originally Posted by dfghdhr ➡️
I appreciate you being on this mission. Please do not give up even if a few naysayers discourage you. You're doing an important job here. Thanks.
Thanks man. I think we'll be done soon. I've never typed so much in my life. Certainly not in college.
Old 10th May 2022
  #158
Gear Addict
 
maldenfilms's Avatar
 
Robb, thank you for all the work you're doing on this. Is there any pathway for you to present your findings to someone at Apple? I know the type of person you're looking to reach probably isn't easily accessible through their website or regular channels. I'd be curious whether anyone on here has a connection to a developer over there who might be helpful.
Old 10th May 2022 | Show parent
  #159
Gear Maniac
 
Aivaras's Avatar
 
🎧 10 years
Robb,

Have you looked into the hog mode of Apple's Core Audio?
Old 11th May 2022 | Show parent
  #160
Here for the gear
 
Quote:
Originally Posted by Robb Robinson ➡️
While DAWs may process at 64-bit float nowadays, they don't output at 64 bits. I could be wrong, but I think they output a fixed 24-bit integer. If true, then Core Audio re-encodes that 24-bit integer as 32-bit floating point. That is different from a DAW bouncing to 32/64-bit float, because the bounce is an internal app process while the other is a data-stream output already truncated to 24 bits.

Professional DAWs shouldn't expect Core Audio to dither and truncate for them, so fixed 24-bit DAW outputs are likely. Can anyone confirm this?

Audirvana and other hi-fi players (Roon, HQPlayer, Amarra, etc.) acknowledge that DACs can only accept 24-bit integers, so they don't disturb the original word length by going floating point.

Core Audio uses floating point because its summing module (two or more app sources summed into one output) and gain module (the system volume faders) perform better using FP math. Pro users do not want or need SRC, summing, or gain DSP applied to the DAW output, so the preference is to have the option of leaving the 24-bit integer alone.

Pro Tools HDX is different in that one company designed the DAW, the audio engine, and the physical interface, so the entire audio path is regulated under one set of rules that preserves the 24-bit integer from DAW to converter and converter to DAW.

But I still don't understand how this all relates to jitter, which is what I think I'm hearing as the degradation, and is where my limited DSP understanding and professional ear are not correlating.

Thanks man. I think we'll be done soon. I've never typed so much in my life. Certainly not in college.
Hi Robb

I've compared playback of masters I've done through Audirvana vs. from WaveLab. There is definitely a difference. A sweeter overall sound from Audirvana, a more effortless clarity; a bit more grainy/blurry from WaveLab. I'd LOVE to have this same quality on playback in WaveLab! I really don't want to switch to Pro Tools/an HDX card or a PC to get this when it's already there, in a way, at least through Audirvana.
This was even with both apps playing back through the same chain (MacBook Pro via Ravenna to a Hapi MkII to a Lavry Quintessence DAC for monitoring), with Exclusive Access enabled in Audirvana. Still there seems to be a difference in playback between the two apps.

Could turning on hog mode (it seems kind of complicated) for Core Audio be a way to get similar results in WaveLab? Not that I even know how that would be done.

Thanks
Old 11th May 2022 | Show parent
  #161
Lives for gear
 
Verified Member
Quote:
Originally Posted by Gsound ➡️
Hi Robb

I've compared playback of masters I've done through Audirvana vs. from WaveLab. There is definitely a difference. A sweeter overall sound from Audirvana, a more effortless clarity; a bit more grainy/blurry from WaveLab. I'd LOVE to have this same quality on playback in WaveLab! I really don't want to switch to Pro Tools/an HDX card or a PC to get this when it's already there, in a way, at least through Audirvana.
This was even with both apps playing back through the same chain (MacBook Pro via Ravenna to a Hapi MkII to a Lavry Quintessence DAC for monitoring), with Exclusive Access enabled in Audirvana. Still there seems to be a difference in playback between the two apps.

Could turning on hog mode (it seems kind of complicated) for Core Audio be a way to get similar results in WaveLab? Not that I even know how that would be done.

Thanks
Quote:
Originally Posted by Gsound ➡️
Could turning on hog mode (it seems kind of complicated) for Core Audio be a way to get similar results in WaveLab? Not that I even know how that would be done.
I'd like to know the answer too.
Old 11th May 2022 | Show parent
  #162
Deleted 9f46789
Guest
Quote:
Originally Posted by maldenfilms ➡️
Is there any pathway you have to present your findings to someone at Apple?
Justin Perkins recently forwarded this thread to his contact at Apple again, so now we just need to hope it gets escalated to the powers that be.

A software bug is easy for software engineers to isolate and reproduce, but the degradation we're talking about here is not easy to isolate and reproduce without the proper environment and training.

However, now that it's been captured, I think anyone with a hi-fi DAC and sufficiently resolving headphones has an opportunity to hear a difference. And whether the difference is interpreted as good or bad is beside the point when the difference shouldn't be there at all.

The strongest argument they could make is: if only 1% of music producers hear this degradation, then only 0.01% of all music listeners will, so why should we invest time and money into fixing something that such a small audience will appreciate?

Of course my response is: just because the majority of listeners never realized something was missing doesn't mean they won't appreciate it once it's found.
Old 11th May 2022 | Show parent
  #163
Deleted 9f46789
Guest
Quote:
Originally Posted by Aivaras ➡️
Robb,

Have you looked into the hog mode of Apple's core audio?
I have heard of hog mode, which I've understood is also called exclusive mode, where only one app can use an interface at a time, but I've never seen the steps to turn it on via Terminal or anywhere else.

Can you present the steps clearly here if you find them?

Is it too risky for those of us with stable systems we depend on?

If hog mode means no more summing or gain-changing AU DSP modules, then perhaps this is a step in the right direction, and a sign that Apple may already have some infrastructure in place for implementing my aforementioned Pass Thru mode, which takes hog mode a little further by also including integer mode.
Old 11th May 2022 | Show parent
  #164
Deleted 9f46789
Guest
Quote:
Originally Posted by Gsound ➡️
This was even with both apps playing back through the same chain (Macbook Pro - Ravenna to a Hapi mk 2 to Lavry Quintessence DAC for monitoring). With Exclusive Access enabled in Audionirvana. Still there seems to be a difference in playback in the two apps.
You are definitely hearing these differences more easily than most, because the Quintessence is the most unrelenting monitor DAC I've ever heard. No stone left unturned with that beast.

And I hear you, throwing HDX at this problem is not the solution. I just came to my breaking point and did it, and I must say, my first few mastering projects entirely through HDX have been revelatory.

Granted, there is a ton of expectation bias going on in my head (a shiny new toy!), but there is no doubt in my mind that my work has been elevated to some degree. HDX sounds more relaxed, less urgent than Core Audio, and with that come quite a few byproducts: better low-end definition, a taller image, more depth, more width, and, as you said, slightly less grain, especially through the upper mids.
Old 11th May 2022 | Show parent
  #165
Gear Maniac
 
Quote:
Originally Posted by Robb Robinson ➡️
I have heard of hog mode, which I've understood is also called exclusive mode, where only one app can use an interface at a time, but I've never seen the steps to turn it on via Terminal or anywhere else.
I'd never heard of hog mode before, but it looks like it can do exactly what you want.
It's nothing you can just turn on in Terminal, though!
You need to code an application to use it.
It seems to me that everything needed is in place, but developers don't seem to care about it.
Old 11th May 2022 | Show parent
  #166
Here for the gear
 
Quote:
Originally Posted by Robb Robinson ➡️
You are definitely hearing these differences more easily than most, because the Quintessence is the most unrelenting monitor DAC I've ever heard. No stone left unturned with that beast.

And I hear you, throwing HDX at this problem is not the solution. I just came to my breaking point and did it, and I must say, my first few mastering projects entirely through HDX have been revelatory.

Granted, there is a ton of expectation bias going on in my head (feels like a new car!), but there is no doubt in my mind that my work has been elevated to some degree. HDX sounds more relaxed, less urgent than Core Audio, and with that come quite a few byproducts: better low-end definition, a taller image, more depth, more width, and, as you said, slightly less grain, especially through the upper mids.

Suddenly I can't wait to get to work today.
Yes. The Quintessence really does reveal differences like these!

So can you use the HDX playback engine in WaveLab to do the analog captures there, or do you have to do this in Pro Tools? I know there's something like an Avid Core Audio driver or something of the sort, but that might not be the same as staying in Pro Tools?
Old 11th May 2022 | Show parent
  #167
Deleted 9f46789
Guest
Quote:
Originally Posted by Gsound ➡️
So can you use the HDX playback engine in Wavelab to do the analog captures there, or do you have to do this in Pro Tools? I know theres something like a Avid Core Audio driver or something of the sort, but that might not be the same as staying in Pro Tools?
Unfortunately no :(

To keep away from the Core Audio layers, you have to use the HDX playback engine found only in Pro Tools. The Core Audio driver for HDX and HD Native goes across the same layers as, say, an RME or MOTU interface does.

Audirvana and HQPlayer somehow bypass all of Core Audio, so they don't require the expensive PCIe card to get around it.
Old 11th May 2022 | Show parent
  #168
Here for the gear
 
Quote:
Originally Posted by Robb Robinson ➡️
Unfortunately no :(

To keep away from the Core Audio layers, you have to use the HDX playback engine found only in Pro Tools. The Core Audio driver for HDX and HD Native goes across the same layers as, say, an RME or MOTU interface does.

Audirvana and HQPlayer somehow bypass all of Core Audio, so they don't require the expensive PCIe card to get around it.
Damn it :( I really don't want to start doing analog captures in Pro Tools and assembly in WaveLab. I really like having it all happen in one DAW.
Old 11th May 2022 | Show parent
  #169
Gear Maniac
 
Aivaras's Avatar
 
🎧 10 years
Quote:
Originally Posted by Deleted 9f46789 ➡️
If hog mode means no more summing or gain-changing modules, then perhaps this is a step in the right direction, and a sign that Apple may already have some infrastructure in place for implementing my aforementioned Pass Thru mode, which takes hog mode a little further by also including integer mode.
I'm on Windows 10 for audio tasks (Cubase, WaveLab, ASIO) and am following this discussion out of general interest.

My educated guess is that audio streams from Apple's Core Audio in shared mode may not be bit-perfect (they may involve extraneous subroutines such as resampling, bit-depth conversion, and re-dithering), because shared mode is intended to mix multiple data streams (coming from different applications in the OS) into a single stream for simultaneous loudspeaker reproduction.

For bit-perfection, one may need to resort to hog mode, which, as far as I understand, allows an application to take exclusive control of the audio stream in macOS.

I don't know whether users can manually turn hog mode on in macOS, but applications (including audio players) can certainly be programmed to switch to hog mode. Perhaps talking to specific software developers about this would yield quicker answers.
Old 11th May 2022 | Show parent
  #170
Deleted 9f46789
Guest
Quote:
Originally Posted by Gsound ➡️
I really like having it all happen in one DAW.
Yeah, no kidding! My FOMO over Sequoia is at an all-time high. They get it all.

Last edited by Deleted 9f46789; 11th May 2022 at 11:01 PM.. Reason: removed inflammatory comment
Old 11th May 2022 | Show parent
  #171
Here for the gear
 
Quote:
Originally Posted by Deleted 9f46789 ➡️
Yeah, no kidding! My FOMO over Sequoia is at an all-time high. They get it all, except for a completely rubbish OS, but that's beside the point.
I've used Sequoia too. Really nice app, but I just can't stand working with Windows! My whole life is centered around Apple products.

I just made this post on the WaveLab forum. Maybe Philippe can implement an Exclusive Access option in WL?

https://forums.steinberg.net/t/future-request/785720
Old 11th May 2022 | Show parent
  #172
Gear Head
 
🎧 5 years
I'll share this link; maybe it has some useful information:

https://developer.apple.com/library/...003577-CH3-SW1
Old 11th May 2022 | Show parent
  #173
Tokyo Dawn Labs
 
FabienTDR's Avatar
 
Verified Member
🎧 10 years
https://developer.apple.com/library/...18-CH1-FORMATS

Quote:
What about the audio data format?

The AUHAL flattens audio data streams of a device into a single de-interleaved stream for both input and output. AUHALs have a built-in AudioConverter to do this transformation for you. The AUHAL determines what kind of AudioConverter is required by comparing the flattened device format with the client's desired format. Resetting either a device format or a client format will generally be a disruptive event, requiring the AUHAL to establish a new AudioConverter. If the channels of the device format and the desired format do not have a 1:1 ratio, the AUHAL unit can use channel maps to determine which channels to present to the user. Lastly, the device sample rate must match the desired sample rate.

The only relevant point is about keeping your sources and the "client's desired format" (the audio output format) equal, so that no converter is necessary. It would be silly to do it differently. Windows technically does the same.

Why do you guys assume that developers capable of building cutting-edge operating systems would fail at such banal things? Critically complex products such as OSes use rather strict test-driven development approaches (worth googling if you've never heard of it).

Many other aspects would have already exploded in plain sight; not a single pixel would light up on screen. To a developer used to directly "talking" with the CPU, memory, and peripherals, and getting them all to work with each other, audio is a ridiculously trivial effort.

Last edited by FabienTDR; 12th May 2022 at 12:45 AM..
Old 11th May 2022 | Show parent
  #174
Deleted 9f46789
Guest
Quote:
Originally Posted by FabienTDR ➡️
The only relevant point is about keeping your sources and the "client's desired format" (the audio output format) equal, so that no converter is necessary. It would be silly to do it differently. Windows technically does the same.
If I understand you correctly, then of course we are keeping things equal end to end, praying Core Audio doesn't apply any DSP in between. Yet the sound still changes.

The difference is not subtle, even though I tell others it is subtle so as to avoid insulting their hearing or their environment. 'Relatively subtle' is more accurate.
Old 12th May 2022 | Show parent
  #175
Gear Maniac
 
moostapha's Avatar
 
🎧 5 years
After listening to the files I was sent (originally in PM)....

I've been looking at them and doing some comparisons, and mostly have just weird observations. I've done a couple of ABX tests (but didn't save them... maybe you'll have to trust me, or not, whatever). One of the tests was 15/15 correct. I'm convinced that one is an anomaly. Maybe the tool was doing some incorrect normalization or something, because that one was super quick... like less than 3 seconds per decision. Usually, IME, that means the files weren't level-matched (or they're just completely different). Another comparison was literally 5/10, so... random.

I'm wondering if this could just be a problem with the files. They're around 1000 samples different in length (within pairs), and I had to align them manually to get them to null at all... some of the "missing" samples from the shorter files were at the beginning.
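That manual alignment step can be automated with cross-correlation; a minimal sketch (my own illustration with a synthetic signal and a made-up 250-sample offset, not moostapha's actual files):

```python
# Align two captures that differ by an unknown offset before nulling,
# by locating the cross-correlation peak.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4000)                     # reference capture
offset = 250
y = np.concatenate([np.zeros(offset), x])[:4000]  # same capture, delayed

# cross-correlate to find the lag of y relative to x
corr = np.correlate(y, x, mode="full")
lag = int(np.argmax(corr)) - (len(x) - 1)         # recovers 250

aligned = np.roll(y, -lag)                        # undo the delay
residue = x[: len(x) - lag] - aligned[: len(x) - lag]
# residue is all zeros: the aligned pair nulls perfectly
```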

I clearly hear a difference sighted and prefer HDX, but I don't actually put much weight on that... I've lost count of the number of times I've tricked myself with that kind of comparison.

I'm also not getting nulls down to -95 dBFS. When the peaks are actually up near 0, the nulls I'm measuring are more like -80 after converting to 64-bit FP and flipping the polarity of one copy. That should still be close to inaudible without playing really loud on my setup (my listening volume is in the 80s with a noise floor around 25).

The really weird thing is that if I turn the null up really loud (adding 75-90 dB of gain or so digitally, depending on where in the track I was listening), the signal kind of swells (cresc., decresc.) with a period of a few seconds. That's... weird. I have no idea what would cause that, but I think it does show that something is going on. It's also almost the whole signal that's perceptible in the null, not just "pieces", if that makes any sense. Some kind of long-term periodic change in jitter (edit: or some other timing issue) could maybe cause that. It almost sounds like a null with one signal showing wow and flutter.

(end PM)

I really don't know what's going on. I'm not going to do it without permission, but it might be interesting to post a bounce/render (not recorded, just rendered... which should avoid Core Audio even on a Mac) of the gained-up null, just to see if anybody has any ideas.

FWIW, I'm in the camp that believes a null down to -80ish should be inaudible, or close to it, on my setup.

Also, I use Windows, so there's no chance of another "layer" of Core Audio exaggerating problems in my case. And, frankly, I expected to hear nothing, and one of my ABX tests showed that I heard nothing. But the samples sent were long files and I didn't spend that long on it... so, IDK.

Maybe I just can't hear them blind and/or the problem isn't really there; maybe I set up the test wrong... the null is just weird to me.

And, to reiterate, I have basically no confidence in my sighted preference between them. I know I've fooled myself before.
Old 12th May 2022 | Show parent
  #176
Tokyo Dawn Labs
 
FabienTDR's Avatar
 
Verified Member
🎧 10 years
That's the price of function. There aren't many ways to do this correctly; in fact, there's typically only one. Mixing is always an addition, scaling is always a multiplication, and different formats asking to be played at once always means they first have to be converted to linear PCM at the same sample rate. There's nothing mysterious about it, just obvious necessity.

- No DSP if the input is lossless and the input and output sample rates are equal.
- You need multiplication when gain is changed.
- You need addition when multiple inputs meet fewer outputs.
- You need SRC when sample rates don't match (on desktop, a single low-pass filter beyond the audible range; on iOS, possibly more lossy methods).
- You need format conversion when formats don't match (e.g., playing a lossy Apple format).

Now, if equal source and output sample rates are played at unity gain, without any mixing, the OS will simply map the source block to the output block using a pointer, without even touching the bytes!
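The list above boils down to very little arithmetic; a sketch in Python (generic float math to illustrate the three cases, not Apple's actual code):

```python
# Mixing is addition, gain is multiplication, and the single-client
# unity-gain case needs no arithmetic at all.
import numpy as np

app_a = np.array([0.25, -0.5, 0.125], dtype=np.float32)  # client 1
app_b = np.array([0.1, 0.1, 0.1], dtype=np.float32)      # client 2

mixed = app_a + app_b                  # two clients -> one output: addition
attenuated = mixed * np.float32(0.5)   # system volume fader: multiplication
passthrough = app_a                    # single client at unity gain: a plain
                                       # reference, no per-sample math
```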

Energy saving has been a priority for every OS, motivated by green compliance and the necessary support for mobile platforms. A modern OS will pick the most efficient solution for whatever case - that's why:

Quote:
Resetting either a device format or a client format will generally be a disruptive event, requiring the AUHAL to establish a new AudioConverter.
The text quoted here and above answers the questions raised in this thread. If the case can be handled with a "bypass", the OS will disrupt playback, discard any previous AudioConverter, and create a new AudioConverter that does just that: map source to output.
Old 12th May 2022 | Show parent
  #177
Lives for gear
 
Verified Member
Quote:
Originally Posted by Gsound ➡️
I just made this post on the Wavelab forum. Maybe Philipe can implement an Exclusive Access option in WL?

https://forums.steinberg.net/t/future-request/785720
How about Cubase/Nuendo?
Old 12th May 2022 | Show parent
  #178
Deleted 9f46789
Guest
Quote:
Originally Posted by FabienTDR ➡️
This text quoted here and above answers the questions raised in this thread. If the case can be handled with "bypass", the OS will disrupt playback, trash any previous AudioConverter, and generate a new AudioConverter that does just that: Mapping source to output.
So where do we go from here?
Old 12th May 2022 | Show parent
  #179
Lives for gear
 
Verified Member
Quote:
Originally Posted by chrisj ➡️
I would point out that to a developer, the CoreAudio standard has been one of the best things about the Mac platform for a really long time, and far from being a scary negative is the backbone of the system. The only thing you could even gripe about as a rule is that it restricts you to a 32-bit floating point bus, and even then you could run something like Reaper and use VSTs capable of 64-bit audio buss depth, plus Logic has had the option for 64-bit calculations in its internal summing for a while as well, even if the native CoreAudio buss it runs on is 32-bit float.
Is this still true in May 2022? Does Core Audio still restrict you to a 32-bit floating point bus?
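Whatever the current answer, the size of the effect is easy to estimate yourself: sum the same tracks on a 32-bit float bus and in 64-bit, and null the two mixes. A sketch (a toy mix of random tracks, not any DAW's actual engine):

```python
import numpy as np

rng = np.random.default_rng(0)
n_tracks, n_samples = 64, 44100
tracks = rng.uniform(-0.1, 0.1, (n_tracks, n_samples))  # float64 tracks

mix64 = tracks.sum(axis=0)                # 64-bit reference summing

# 32-bit bus: every track and every running partial sum is float32
bus = np.zeros(n_samples, dtype=np.float32)
for trk in tracks.astype(np.float32):
    bus = bus + trk                       # each intermediate sum rounded

# null the two mixes and express the residual relative to the mix level
err = bus.astype(np.float64) - mix64
err_db = 20 * np.log10(np.sqrt(np.mean(err ** 2)) /
                       np.sqrt(np.mean(mix64 ** 2)))
print(f"32-bit bus null vs 64-bit mix: {err_db:.0f} dB")
```

With 64 tracks the bus error lands well below -100 dB relative to the mix — far under any converter's noise floor, which is why the 32-bit bus is a gripe rather than an audible defect.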
Old 12th May 2022 | Show parent
  #180
Lives for gear
Quote:
Originally Posted by Robb Robinson ➡️
If I understand you correctly, then of course we are keeping things equal end to end, praying CA doesn't DSP anything in between. Yet the sound still changes.

The difference is not subtle, even though I say it is subtle to others as to avoid insulting their hearing or their environment. 'Relatively subtle' is more accurate.
Do you have a 24-bit dither inserted as your final insert? Core Audio must still truncate it to send it to the DA, I believe. How do you have your Lavry DA hooked up?

ASIO is usually 24-bit fixed point PCM. The DAW in Windows truncates 64-bit floating point PCM to 24-bit fixed without dither. A dither as the final insert on your master track or monitoring inserts preserves detail in playback.

Most DAWs in Mac OS X take their 64-bit floating point PCM output and round it to 32-bit float for Core Audio. Then, I believe, most DAs over Core Audio can still only receive 24-bit fixed, so Core Audio truncates the 32-bit float to 24-bit fixed without dither. Why not bypass the system volume control, add dither as the last insert on your master outs, and see if the sound changes for the better? Why not break out the audio analyzer? People have done loopback tests to prove certain audio players are far from transparent.
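For anyone who wants to measure what that final stage does, here's a sketch of 24-bit quantization with and without TPDF dither (plain NumPy, not any DAW's or Core Audio's actual dither implementation). Undithered truncation correlates the error with the signal; TPDF dither of ±1 LSB turns it into benign noise:

```python
import numpy as np

def to_24bit(x, dither, rng):
    """Quantize float samples in [-1, 1) to 24-bit signed integers.

    With dither=True, TPDF dither (sum of two uniform +/-0.5 LSB
    sources) is added before rounding, decorrelating the
    quantization error from the signal."""
    scale = 2 ** 23
    y = np.asarray(x, dtype=np.float64) * scale
    if dither:
        y = y + rng.uniform(-0.5, 0.5, y.shape) + rng.uniform(-0.5, 0.5, y.shape)
    return np.clip(np.round(y), -scale, scale - 1).astype(np.int32)

# a very quiet 1 kHz tone: low-level material is where undithered
# truncation distortion is worst
t = np.arange(48000) / 48000
x = 1e-4 * np.sin(2 * np.pi * 1000 * t)

rng = np.random.default_rng(0)
q_plain = to_24bit(x, False, rng)
q_dither = to_24bit(x, True, rng)
```

Subtracting the re-scaled quantized versions from the original and looking at the spectra shows the difference: harmonic distortion products in the undithered case, a flat noise floor in the dithered one.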

Apple's SRC is pretty good, and so is Logic's now. It is better than Pro Tools' and other DAWs', but not as good as SoX (used by Cubase) or Voxengo's (used by Reaper). Apple's and Logic's just seem to have rounding to 32-bit float somewhere, hence the higher noise floor.
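SRC quality is easy to put a number on with a round-trip null test. A sketch of the measurement using a deliberately naive linear-interpolation resampler (NumPy only; real resamplers like SoX use proper polyphase filters and null vastly deeper — this just shows how the test works):

```python
import numpy as np

fs, fs_hi = 44100, 96000
t = np.arange(int(fs * 0.5)) / fs          # 0.5 s at 44.1 kHz
t_hi = np.arange(int(fs_hi * 0.5)) / fs_hi # 0.5 s at 96 kHz
x = np.sin(2 * np.pi * 1000 * t)           # 1 kHz test tone

# naive SRC: linear interpolation up to 96 kHz and back down
x_hi = np.interp(t_hi, t, x)
x_rt = np.interp(t, t_hi, x_hi)

# null test: subtract the round-trip from the original, trim the
# edges, and report the residual level
resid = (x_rt - x)[100:-100]
residual_db = 20 * np.log10(np.sqrt(np.mean(resid ** 2)))
print(f"round-trip null residual: {residual_db:.1f} dBFS")
```

Even on a single pure tone, linear interpolation only nulls to roughly -60 dBFS — which is the kind of number that separates a poor internal SRC from a good one, and the same test can be run against any converter or player loopback.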

What sample rate and bit depth is your Lavry DAC expecting? What interface is it hooked up to? Could your interface be doing something? Does the Lavry always run at one sample rate and let the DA chip convert itself (if it uses one like the DA 11 does, or is it resistor-based like the old DA92)? Does the DSP run at one sample rate, like the Crane Song? Maybe the internal SRC isn't great and it always upsamples to one sample rate? Many lauded interfaces and converters have poor internal SRC.

Is a loopback test possible? Many of the better audio players, like Audio Nirvana and JRiver, apply dither automatically; most DAWs do not.