Quote:
Originally Posted by
Deleted User
The latency figures and experiences I shared aren't theoretical or based on any particular manufacturer's marketing figures; they are based on day-to-day real-world sessions.
Sorry, there is a misunderstanding: to me it's a theoretical discussion here, because Burl doesn't have (and likely won't have) a Ravenna motherboard. I haven't questioned your experience with their driver.
Quote:
Again, this isn't a "best case" scenario or some theoretical result based on someone else's marketing data.
Perhaps the Focusrite figures are "best case", but I haven't used their product, nor was I present during their testing to confirm their results to be accurate.
By "best case" figures, I've generally meant just simple one figure round-trip latency measurements typically with one looped track.
While that is sufficient when talking about hardware-layer transport latencies or the general ballpark of an interface, if we really want to compare audio interfaces, efficiency and stability under increased load are also very important: basically, how far you can push the system and the DAW project before you experience dropouts and xruns at a given buffer length or RTL.
That is very hard for vendors, or even for multiple different users, to measure, because it's time consuming and you have to use an exactly identical HW/SW environment and test project to do it.
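Just to illustrate what I mean by a single-number figure, here is a minimal sketch of the usual arithmetic; the buffer size and the extra converter/driver offset are made-up illustrative values, not any vendor's spec, and only a real loopback measurement tells you the true number:

```python
# Rough single-figure round-trip latency estimate (illustrative numbers only).
# Real interfaces add converter and driver "safety" offsets that only a
# loopback measurement reveals, which is why a looped-track test is the usual check.

def nominal_rtl_ms(buffer_frames, sample_rate, extra_samples=0):
    """Buffer is applied once on input and once on output; extra_samples
    stands in for converter + driver offsets (hypothetical value)."""
    total_samples = 2 * buffer_frames + extra_samples
    return 1000.0 * total_samples / sample_rate

# e.g. a 64-frame buffer at 48 kHz with an assumed ~100 samples of extra offset
print(round(nominal_rtl_ms(64, 48000, extra_samples=100), 2), "ms")
```

That number says nothing about how the interface behaves once the project gets heavy, which is the part that's hard to compare across setups.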
However, one may find quite significant differences among various interfaces and drivers (regardless of whether it's a virtual or a real interface). Although the particular implementation is always very important, in general PCIe or TB interfaces will rule in that respect, because hardware DMA into hardware buffers has almost no CPU overhead. Other ways of accessing the hardware (say via USB, or Layer 3 network audio in general) require passing the payload (samples) through several OS software layers, which takes more time and CPU cycles. And because the samples pass through those asynchronous layers, it is harder for a normal OS to keep the required tight scheduling (when you have short buffers) once the computer is under load.
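As a crude, purely user-space illustration of that scheduling point (not a real audio path, and the 64-sample/48 kHz period is just an assumed example): the loop below tries to wake up once per buffer period and records how late each wake-up was. Run it on an idle machine and then under heavy load, and the worst-case overshoot tells the story:

```python
# Crude illustration of the scheduling problem: wake up once per "buffer period"
# and record how late each wake-up is. On a loaded, non-tuned OS the worst-case
# overshoot grows, which is exactly what causes dropouts with short buffers.
import time

def wakeup_jitter(period_s=64 / 48000, iterations=2000):
    late = []
    deadline = time.perf_counter()
    for _ in range(iterations):
        deadline += period_s
        remaining = deadline - time.perf_counter()
        if remaining > 0:
            time.sleep(remaining)
        late.append(time.perf_counter() - deadline)  # how late we woke up (s)
    return max(late) * 1000.0, sum(late) / len(late) * 1000.0

worst_ms, avg_ms = wakeup_jitter()
print(f"worst wake-up overshoot: {worst_ms:.3f} ms, average: {avg_ms:.3f} ms")
```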
There are certainly use cases where this might not really be apparent. For example, the Horus rig I've seen was used at a big studio for orchestral recording and scoring: lots of simultaneous I/O, but very sparse and rather light DAW processing, so it runs stable with short buffers even during mixdowns. However, I can easily imagine that heavier bus effects or VIs, which put more strain on RT performance, might completely change the situation even with a few tracks.
Quote:
Not sure how this is relevant to the discussion, as I use neither Windows, nor do I employ Pyramix (nor do the colleagues I have had discussions with) where this may be relevant to the results mentioned.
It was relevant to my previously mentioned point and to our discussion about software endpoints, because this kind of dedication (hogging at the driver level) of one CPU core purely to processing or network audio handling helps to reduce the influence of OS scheduling and other processes on the time-critical parts. So no matter what you do in the DAW or the system, the audio work gets scheduled and executed in a timely fashion. I mentioned it in the context of DVS and its long buffers compared to hardware endpoints.
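For illustration only, here is a rough user-space analogue of that idea on Linux; the actual driver-level hogging happens lower in the stack, and the core number and priority here are arbitrary (SCHED_FIFO needs root or CAP_SYS_NICE):

```python
# Linux-only, user-space analogue of dedicating a core to the time-critical
# audio/network thread. This only sketches the idea; the real driver does the
# equivalent at a much lower level.
import os

def pin_to_core_and_go_realtime(core=3, rt_priority=80):
    os.sched_setaffinity(0, {core})        # run this process only on that core
    os.sched_setscheduler(0, os.SCHED_FIFO,
                          os.sched_param(rt_priority))  # pre-empt normal tasks

if __name__ == "__main__":
    pin_to_core_and_go_realtime()
    # ... time-critical packet handling / sample shuffling would live here ...
```

In practice you would also keep the general scheduler off that core (e.g. isolcpus or cpusets), which is roughly what the driver-level hogging achieves.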
Quote:
Point-to-point connections pretty much guarantee the signal will reach the destination on time and intact, and critically, these systems were originally designed to carry realtime audio and video signals. They do this quite well, and are trusted by a variety of organisations, in a variety of disciplines.
Ethernet, on the other hand, wasn't designed to carry real time audio and video. Ethernet also doesn't offer the same guarantees as point-to-point analogue or digital cabling with regard to timing, complete data arrival, et cetera. I've had this same discussion on another forum, pointing this out as people seem to have missed this very point as included in documentation provided by Audinate and ALC NetworX (which incidentally is a Lawo company, not a Merging one), as well as by AIMS, AMWA, SMPTE, VSF, et cetera. This shouldn't be news, yet it seems to be.
Much R&D, by major broadcasters and research institutes and industry alliances, is still being spent today on workarounds in order to build a reliable and secure IP-based system for realtime audio and video that matches point-to-point connections such as AES, MADI, SDI, et cetera.
Hence my suggestion, and it is the suggestion of these broadcasters and researchers as well, that redundant networks are necessary in IP-based systems.
I'm afraid you're mixing up a couple of different things.
First of all, redundancy for networked audio/video protocols has nothing to do with the fact that they are asynchronous or built on top of IP-based networks, nor with guaranteed delivery. That has to be handled in the underlying protocols themselves anyway (in contrast to common point-to-point audio protocols, you have checksums there), even without any path redundancy, which wouldn't help with it in any case. It already works well; that R&D has already been done.
Redundancy there has only one purpose: availability in case of a transport failure on the primary path. Nothing more, nothing less.
Essentially it will help you with a broken cable or a failed network switch, depending on the topology (you can connect both ports to a software-partitioned switch with VLANs, in which case it covers cable failure, or to a completely separate redundant network for more robust setups).
The audio payload and synchronization always travel over both paths up to the last redundant device, where both streams get buffered and aligned; thanks to that, failover takes place immediately, without any disruption or clicks.
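A toy sketch of that merge logic, not Merging's or Audinate's actual implementation; the sequence numbering, reorder window and path handling are heavily simplified for illustration:

```python
# Sketch of the "hitless" merge at the last redundant device: identical streams
# arrive on the primary and secondary path, and the receiver simply keeps the
# first copy of each sequence number. If one path dies, the other copy is
# already there, so playout never sees a gap. (Real devices also align the two
# paths in time and bound the reorder window.)
from collections import OrderedDict

class RedundantReceiver:
    def __init__(self, window=256):
        self.seen = OrderedDict()   # sequence numbers already forwarded
        self.window = window

    def on_packet(self, seq, payload, path):
        # "path" is informational only in this sketch
        if seq in self.seen:
            return None             # duplicate from the other path, drop it
        self.seen[seq] = True
        if len(self.seen) > self.window:
            self.seen.popitem(last=False)
        return payload              # forward exactly one copy to the playout buffer

rx = RedundantReceiver()
rx.on_packet(1, b"...", "primary")    # forwarded
rx.on_packet(1, b"...", "secondary")  # dropped as duplicate
rx.on_packet(2, b"...", "secondary")  # forwarded (primary copy lost or late)
```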
So the need for redundancy in bigger or critical setups (broadcast, installation, live) isn't related to some technical difference from the older protocols (which were also commonly run in a redundant way, for instance MADI or some proprietary optical SDI muxes), but mainly to the high impact of such a failure when you have a lot of channels multiplexed into one path.
Say you have a complex network topology and a lot of audio channels running at a TV station or a stadium, a backbone network between two routing switchers, etc. One failed network switch or a broken cable between buildings and rooms can be a total showstopper. When you previously had a bunch of individual cables, transmitters and receivers with some spare lines, a failure was disruptive, but after a relatively short downtime you could "survive" it. So there wasn't such high pressure for redundancy, unless a particular project really required it and there was an appropriate budget.
However, back to my original point: this isn't really anything small/medium studios should be concerned about.
From their perspective it's really just an interconnection between various boxes in one or two rooms, which replaces the previous point-to-point protocols with a couple of advantages. As they aren't broadcasters, it really doesn't make much sense to do all the cabling twice, purchase another switch, and get more expensive Dante gear aimed at a different segment just to have seamless failover, when there is a multitude of other things in the studio whose failure would stop their work or session.
Michal