Quote: Originally Posted by Mikael B
That was my first thought as well, but since latency is fixed by the audio buffer size and, in most DAWs, also by the delay compensation for any track in the project carrying non-zero-latency plug-ins***, I'd suggest that whichever computer lets you run at the lowest audio buffer without dropouts is in practice the best alternative.
The M4 may have the upper hand here, but that would depend on the typical project as a whole and what those other tracks do, and on how the DAW manages to treat a recording track, i.e. can it truly reserve a core for it, with other tracks mostly on other cores, or not?
Personally I have never had audio routed through busses/returns and the like cause dropouts for me in Ableton Live, only source tracks with virtual synths, so I'd assume the M4 might be better for that. But for other instruments, like sample-based ones, I wouldn't feel as sure.
*** It's true that in some DAWs you can bypass the delay compensation of other tracks with non-zero-latency plug-ins on the recording track, but at low latencies that's not necessarily what you want, since you won't necessarily hear your own playing exactly where it will sit on playback.
Yes that’s precisely what I was getting at.
The delay compensation will be based on the VST, or the chain on a single channel, that reports the most latency. All I know is that the overall latency of my projects, and what's being compensated for, largely depends on the longest pole in the tent.
The buffer size affects incoming audio latency for monitoring, but that path largely does not go through the VST synths. For playback response, which was the use case in the post I responded to, it's not just the interface buffer but also how long the synth takes to respond. So both the buffer and the latency compensation come into play: the latency compensation keeps everything in sync (from pre-recorded MIDI), but it doesn't tend to help much when playing synths in real time.
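To make the "longest pole in the tent" point concrete, here's a rough sketch of the arithmetic. The track names and plug-in latency figures are hypothetical, invented for illustration; real DAWs get these numbers from each plug-in's reported latency, and the driver/converter overhead on the monitoring path is ignored here.

```python
SAMPLE_RATE = 48_000  # Hz
BUFFER_SIZE = 64      # samples per audio buffer

# Hypothetical per-track plug-in chains; each number is a plug-in's
# reported latency in samples. Latencies in a serial chain add up.
tracks = {
    "synth": [0],          # latency-free instrument
    "drums": [0, 0],
    "2-bus": [1024, 512],  # e.g. look-ahead limiter + linear-phase EQ
}

def chain_latency(plugins):
    """Total latency of one channel's serial plug-in chain."""
    return sum(plugins)

# Delay compensation lines everything up to the slowest chain:
# the longest pole in the tent.
pdc_samples = max(chain_latency(p) for p in tracks.values())

# Monitoring round trip is roughly input buffer + output buffer.
buffer_ms = 2 * BUFFER_SIZE / SAMPLE_RATE * 1000

print(f"PDC: {pdc_samples} samples "
      f"({pdc_samples / SAMPLE_RATE * 1000:.1f} ms)")
print(f"Buffer round trip: {buffer_ms:.1f} ms")
```

The point the numbers make: even at a tiny 64-sample buffer (~2.7 ms round trip), one heavy 2-bus chain can pull the compensated playback latency out to tens of milliseconds, which is fine for pre-recorded MIDI staying in sync but no help when playing a synth live.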
Configs for recording, mixing and jamming are not always the same, which is why I run a machine for each job. It saves the pain of flip-flopping: instant response from a synth and being able to run all the plug-ins you want on the 2-bus are not always compatible in terms of resources.
My personal recipe for optimised latency in live play (synth/drums) is to not run the VST on the main production machine while I'm jamming. Instead I run the 2-bus from the main production rig over MADI/AVB into a 'performance' computer set up with the lowest buffer settings, with nothing else running on it other than the 2-bus feed and the VST I want to play. I jam along, then flip the MIDI I want to keep back into the main machine. Yes, it takes a few seconds to switch the monitor inputs and walk 3 m across the room, but I find it works a lot better than messing with interface settings and figuring out which plug-ins to disable on the production rig to lower latency. I only need to do this for the heavy-CPU plug-ins; most just work.