Evaluating AD/DA loops by means of Audio Diffmaker
Old 9th August 2022 | Show parent
  #2581
Part 1

THE WHY?

The million-dollar question, at the end of the day, is: what does the Didier Brest test actually tell us about an audio interface?

Does it provide useful information that is relevant to daily use in music production?

And even if it does not have any relevance to daily use in music production, it may still have value from a purely enthusiast perspective, cos knowledge is knowledge, however abstract, and not all knowledge has to have a practical application for it to be interesting or amusing or entertaining.

THE WHAT?

This test aims to boil all the factors in a chain or loop of an audio input and output down into a set of four numbers - two for frequency variation and two for dynamic range, ostensibly. Which is a laudable aim. If only it were so simple to translate what is actually an art form - music, and audio - into a simple set of four numbers. If only.

There are obviously many factors that contribute to these final values, for each chain of input and output loops.

A non-exhaustive list would include:

a. Frequency response
b. Total Harmonic Distortion
c. Noise
d. Phase change (most manufacturers do not quote this in their specs, but some, like Prism and Merging, do include it).
e. Dynamic range.
f. Signal to Noise ratio.

One key aspect of this test is that it is not applicable to devices which only do digital-to-analog conversion. So if you had a mastering plugin chain where everything was done digitally in the box, with only a digital-to-analog conversion for monitoring, this Didier Brest test would not apply.

It typically assumes that you are using the same device for analog input and analog output. It does not really tell you anything about how your chain would measure if the analog inputs and outputs were on different devices. Testing across different input and output devices would be humongous - just too many variables. Nevertheless it could provide even more value, because it would reveal the optimal combinations of inputs and outputs. That would be a really interesting way to augment these tests, by creating a new section to store such results. In my mind's eye this would require a matrix rather than a list, with inputs on one axis, outputs on the other, and colour coding to help identify the leaders of the scoreboard (see the toy sketch below). Not easy to manage all that data though - just ask Julian Krause (link below), the YouTuber who does a pretty decent job of comparing the more budget and prosumer end of the audio interface market using a professional analyser.
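A toy illustration of that matrix idea, with entirely made-up difference figures for imaginary device pairings, assuming pandas (plus matplotlib for the colour gradient) is available:

Code:
import pandas as pd

# Hypothetical difference figures (dBFS) for imaginary device pairings:
# DA devices on one axis, AD devices on the other. More negative is better.
results = pd.DataFrame(
    {"AD: Device X": [-60, -48, -52],
     "AD: Device Y": [-55, -45, -50]},
    index=["DA: Device A", "DA: Device B", "DA: Device C"])

# Colour-code the scoreboard so the best pairings stand out (needs matplotlib).
results.style.background_gradient(cmap="RdYlGn_r", axis=None).to_html("ad_da_matrix.html")
print(results)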

EDIT - It does have a section with pairs of audio converters and loop back results through the pair.

https://www.youtube.com/c/JulianKrau...ts?app=desktop

Continued in Part 2, next post, so it's easier to read in chunks.
Old 9th August 2022 | Show parent
  #2582
Quote:
Originally Posted by OK1 ➡️
Part 1
Part 2, continued from Part 1

Unlike the similar bible on audio interface latencies - the other thread on Gearspace, where everything is tested on the same computer by the same person - the test in this thread has opportunities for user error, such as:

1. Using different cables (manufacturer, brand, cable composition - what kind of metal and diameter, shielding, etc.)
2. Using different cable lengths
3. Using different cable types (e.g balanced vs unbalanced)
4. Using different filters where converter filters have options - I do notice in some of the test results, an effort is made to accommodate this. Very well done.
5. Different level settings for what 0dBFS means. For example, on higher end devices you can set 0dBFS to different dBu values, such as on an RME audio interface. So setting 0dBFS to +24dBu may yield a different result from setting 0dBFS to another value.

Here is a list of the possible calibration settings of 0dBFS for one RME output :

DA - Stereo Monitor Output XLR (1-2)
As DA, but:

Output: XLR, balanced
Output level switchable 24 dBu, Hi Gain, +4 dBu, -10 dBV
Output level at 0 dBFS @ 24 dBu: +24 dBu
Output level at 0 dBFS @ Hi Gain: +19 dBu
Output level at 0 dBFS @ +4 dBu: +13 dBu
Output level at 0 dBFS @ -10 dBV: +2 dBV
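To put those settings in perspective - a minimal sketch, assuming the standard 0.7746 V RMS reference for dBu - here is how the same digital full-scale level maps to quite different analog voltages depending on the calibration:

Code:
import math

DBU_REF_VOLTS = 0.7746  # 0 dBu = 0.7746 V RMS (1 mW into 600 ohms)

def dbu_to_vrms(dbu: float) -> float:
    """Convert a level in dBu to RMS volts."""
    return DBU_REF_VOLTS * 10 ** (dbu / 20)

# The RME reference levels quoted above for 0 dBFS on the monitor output:
for setting, dbu_at_fs in [("24 dBu", 24), ("Hi Gain", 19), ("+4 dBu", 13)]:
    print(f"0 dBFS @ {setting:>7}: {dbu_to_vrms(dbu_at_fs):5.2f} V RMS")
# The -10 dBV setting is referenced to 1 V instead: +2 dBV is about 1.26 V RMS.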

EDIT - Some test results show the calibration level of what dBu corresponds to 0 dBFS.

The point is that there are a fair number of variables, all of which have an impact on the result of the DB (Didier Brest) evaluation, which boils everything down to a set of four numbers - two for frequency variation and two for dynamic range, ostensibly.

The question then becomes one of weighting: of all the variations of cable types, settings and criteria which may contribute to a DB Test score, which is more relevant, and which contributes more or less to the score? That is the challenge we have. Without extensive comparative testing and digging into all of these potential contributory factors, which would be extremely time consuming, it's impossible to know exactly what causes a good DB Test score or not.

It was such a surprise to see the TC Electronic device at the top of the leader board, but rather than challenge this outright, it raises the question: exactly what does this scoring system mean? It's impossible to know all that it means, cos we will never know which factors contribute to a good result or not - unless someone has a lot of time on their hands, some very good equipment, and the money to collect all these devices in one place. And even after all this, the comparison may not have any relevance to audio production, if the scores do not reflect anything that is audible.

To be continued in Part 3.
Old 9th August 2022 | Show parent
  #2583
Quote:
Originally Posted by OK1 ➡️
Part 2
Part 3 continued from Part 2

THE WHOM?

For many users this may have limited use as a test, because it is a loopback through the audio ins and outs of the same interface, and most of the devices out there - by numbers sold - are those with fewer ins and outs. I say this cos, if you only have a few channels to record, you are most likely using a single audio interface, and with only a few channels to record you are unlikely to be able to discern any difference from one device to another: the noise levels of most modern audio interfaces are already respectably low, as long as you are gain staging properly. The audio interface is unlikely to be the contributor to a poor recording or mix. Most of us are in this category.

The audio engineers who are recording and outputting the largest number of analog channels would be orchestral recording engineers, or the kinds of people recording sample libraries for virtual sample-based instruments.

While such bigger recording setups may use the same audio interface for recording and playback, it is unlikely that a single audio interface will handle both the analog-to-digital conversion and the digital-to-analog conversion for monitoring or for feeding other analog processing devices. For such large multichannel needs there is likely to be a chain of dedicated external converters for inputs and outputs, with the audio interface only responsible for the digital side of input and output to the computer. So yes, the audio interface using one of the higher channel count interconnects like MADI, or some kind of audio-over-IP protocol (Dante, AVB, etc.), is the "audio interface", but it is not likely to be the only device responsible for AD/DA conversion, so measuring its loopback audio quality becomes pointless.

Therefore those who might in principle benefit the most from this kind of testing - where noise from many channels may accumulate - are unlikely to be served by a test that uses the same device for analog input and output, which rarely applies to high channel count work.

-----------------------------------------
EDIT: The test results do show pairs of audio converter devices, device A clocked by device B, and in some cases also the 0dBFS calibration that was set for the pairing, which is informative. But it's not easy to deduce which device contributed what to the result, so for informational purposes it makes entertaining reading; it's hard to use this information to optimise the pairing - say, choose devices D and E and then decide whether to clock with D, clock with E, or clock with an external device which is neither D nor E. Informational nevertheless; maybe others will be able to derive decisions or optimisations from these results, but I can't see how any further information can be immediately gleaned in the current format.

Caveat: this is a 100% volunteer endeavour, so one must temper expectations. There is value in seeing the results, even if it is hard to interpret the paired tests well enough to do anything further with them.

------------------------------------------

The kind of engineers who do hybrid mixing, with the D/A and A/D on the same audio interface, may find some benefit.

But on the other hand, how many ins and outs do such hybrid engineers use? I.e. how many times does a single audio track or bus move between the digital and analog domains before it ends up in the signal that is finally monitored or recorded, and what is the typical maximum number of conversions in that chain?

The test I posted earlier, from the PresentDayProduction YouTube channel, showed a clear difference between an SSL and an Audient iD14 audio interface after 500 passes between analog and digital, and even in that test we have no clue how the gain staging was done, which may have had an important bearing on the validity of the test. By gain staging I also include the need to properly align the calibration of 0dBFS at inputs and outputs. Why? Cos on some devices, such as the RMEs, you can set this calibration individually, per input and per output, if my reading of their manuals is correct. A lot of the variation could be user error.

I cannot imagine that the maximum number of conversions - digital to analog and/or analog to digital - in any sensible hybrid workflow, from the initial recording or synthesis in digital to the final mix down, would exceed 10 conversions in plus 10 conversions out (and that 10 would include a mastering chain using some analog devices).

In 2006, Ethan Winer (you can google this) did a similar test using some pretty basic equipment - some of the worst-measuring devices of that era, easily outperformed by almost any budget audio interface today - and showed that up to about 10 round trips it was difficult to tell the difference, using 16-bit recordings on what were then consumer or budget prosumer audio interfaces, a bar which almost all modern audio interfaces would exceed.

At 24 bits, the difference would be even less.

https://ethanwiner.com/loop-back.htm
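A back-of-the-envelope way to see why repeated conversions are so hard to hear - a minimal sketch, assuming each pass adds independent, uncorrelated noise at a hypothetical per-pass floor:

Code:
import math

def noise_floor_after_passes(per_pass_floor_dbfs: float, passes: int) -> float:
    """Noise floor after N loopback passes, assuming each pass adds an equal
    amount of independent, uncorrelated noise (noise powers simply add)."""
    per_pass_power = 10 ** (per_pass_floor_dbfs / 10)
    return 10 * math.log10(per_pass_power * passes)

# Hypothetical -100 dBFS per-pass floor (modest by modern converter standards):
for n in (1, 10, 100, 500):
    print(f"{n:>3} passes -> noise floor of about {noise_floor_after_passes(-100, n):6.1f} dBFS")
# Ten passes raise the floor by only 10*log10(10) = 10 dB, to about -90 dBFS.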

Continued in Part 4
Old 9th August 2022 | Show parent
  #2584
fedor.tche
Quote:
Originally Posted by OK1 ➡️
As it stands now, this would have limited use, as a test cos it compared an audio loop through the inputs and outputs of a single device.
Just to note that if you check the thread properly there were quite a few combinations explored under "Separate Converters"

Also you could note that unless done completely wrong, cable choice doesn't really affect the result as you can find quite consistent results from multiple measurements, as any time and level mismatches are also dealt with...
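For anyone wondering what "dealt with" looks like in practice, here is a minimal sketch of the general idea (not the exact Matlab routine behind the list, and assuming NumPy/SciPy plus the soundfile package are available): align the loopback copy to the original, fit the best gain, subtract, and report the residual.

Code:
import numpy as np
import soundfile as sf                      # assumed available for reading the WAVs
from scipy.signal import correlate, correlation_lags

def difference_dbfs(original: np.ndarray, loopback: np.ndarray) -> float:
    """Rough single-channel null test: align, level-match, subtract, and
    return the RMS of the residual in dBFS (more negative is better)."""
    # Estimate the loopback delay by cross-correlation (whole samples only).
    corr = correlate(loopback, original, mode="full")
    lag = correlation_lags(len(loopback), len(original), mode="full")[np.argmax(corr)]
    aligned = loopback[lag:] if lag >= 0 else np.concatenate([np.zeros(-lag), loopback])
    n = min(len(original), len(aligned))
    a, b = original[:n], aligned[:n]
    # Least-squares gain that best maps the copy onto the original.
    gain = np.dot(a, b) / np.dot(b, b)
    residual = a - gain * b
    return 20 * np.log10(np.sqrt(np.mean(residual ** 2)) + 1e-12)

# Usage, one channel at a time (hypothetical file names):
# x, fs = sf.read("original.wav"); y, _ = sf.read("loopback.wav")
# print(difference_dbfs(x[:, 0], y[:, 0]), difference_dbfs(x[:, 1], y[:, 1]))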
Old 9th August 2022 | Show parent
  #2585
didier.brest
Quote:
Originally Posted by OK1 ➡️
what does the didier brest or more appropriately the diffmaker test
Audio DiffMaker has not been used for the past two years:
Quote:
Originally Posted by didier.brest ➡️
Some changes in this new issue of the list of the results:
- The correlated null depth ('Corr. Depth') figures from Audio DiffMaker have been removed because no precise definition of this parameter, which seems specific to ADM, can be found.
- The 'Difference' figures measured on the difference files provided by ADM were replaced by lower 'Difference*' figures computed in Matlab. A few tests with Difference figures were removed when their loopback files were not available for computation of the Difference* figures.

Loopback tests requested by forum members: Eventide H8000FW and Eventide H9000R (for confirming the ones at the top of the list of the results), Pacific Microsonics Model One and Model Two, Slate Digital VRS-8, SSL 2+, RME M-32 AD Pro --> M-32 DA Pro, RME M-1610 Pro, Audient iD14 MKII, Prism Dream DA-2 --> AD-2, BlackLion Audio Revolution 2×2, RME MADIface Pro, RME Fireface UCX II, SSL BiG SiX, Prism Sound Dream ADA-128, anything from Qes Lab, JCF, Burl and Linn, PreSonus Quantum 2626, PreSonus Studio 192. Of course any other one welcome!
Old 9th August 2022 | Show parent
  #2586
Quote:
Originally Posted by OK1 ➡️
Part 3
Part 4 continued from Part 3

CONCLUSION

Because testing tools are relatively more available, and magazine reviewers and YouTubers now have access to tools such as Audio Precision analyzers, the specs provided by manufacturers are by and large valid and representative of their products. It's easy to check, and manufacturers know that, especially for popular devices expected to sell in good numbers, these specs will be checked.

Of course that does not make things any easier, cos some of the specs are not equivalent - they look the same but are not the same. E.g. frequency response will vary with level. Some manufacturers will quote the frequency response at the point of least variance, while others, working to a higher spec, will quote it at the highest output levels, where distortion is likely to be greater - i.e. the worst case scenario. RME is one of these: their measured frequency response is usually better than what is in their specs. See Sound on Sound and Julian Krause's measurements, or AudioScienceReview.com, for validation.

I'd also like to comment on the test settings. The audio test files max out at 0dBFS, which is absolutely wonderful, because you test your audio interface at its worst - where the most demand is placed on it. Nice idea. But this also raises the issue of calibration: many audio interfaces are not calibrated identically between their outputs and their inputs, and some offer no opportunity for user-managed calibration.

In the attachments, the first is an example from a MOTU M4, where the inputs and outputs are not identically calibrated, and it's probably impossible to change this.

The 2nd attachment is from the SSL 2+

This raises the question: for those tests where separate converters were used for the input and the output, what was their calibration?

Below (link) are some specs from a Tascam audio interface which has equivalent maximum input and output levels.

https://www.tascam.eu/en/us-2x2hr#specs

So back to the issue of weighting: which criterion (frequency response, THD, etc.) makes the most significant contribution to a superior DB Test score?

So many things come to my mind.

Continued in Part 5
Attached thumbnails: tdmtoip.png, tfnq3da.png
Old 9th August 2022 | Show parent
  #2587
Quote:
Originally Posted by didier.brest ➡️
Audio DiffMaker is no longer used since two years ago:



Loopback tests requested by forum members: Eventide H8000FW and Eventide H9000R (for confirming the ones at the top of the list of the results), Pacific Microsonics Model One and Model Two, Slate Digital VRS-8, SSL 2+, RME M-32 AD Pro --> M-32 DA Pro, RME M-1610 Pro, Audient iD14 MKII, Prism Dream DA-2 --> AD-2, BlackLion Audio Revolution 2×2, RME MADIface Pro, RME Fireface UCX II, SSL BiG SiX, Prism Sound Dream ADA-128, anything from Qes Lab, JCF, Burl and Linn, PreSonus Quantum 2626, PreSonus Studio 192. Of course any other one welcome!
Noted, and thanks, I'll edit all the recent posts where I reference the diffmaker test, and revise these. I was taking the info from the thread title (which I can imagine would be difficult to change now !!).

Hope it's OK to call it the Didier Brest test (or DB Test - which is even nicer sounding, like some kind of a decibel test), since that username is not likely to change.
Old 9th August 2022 | Show parent
  #2588
Quote:
Originally Posted by OK1 ➡️
Part 4
Part 5 continued from Part 4

Logically - if one were able to measure inputs independently of outputs, then by simply combining the most transparent inputs with the most transparent outputs, we would have the most transparent loop, with the least difference in audio, and least audio quality loss.

Which reminds me: in the measurements with different audio converters for inputs and outputs, clocking is a third factor, and all manner of variances crop up, cos you can clock from the input device, clock from the output device, or clock both devices from a third, independent clock. So you have three measurements to make, which opens up another can of worms - which clock, and which external clock, improves the score? It becomes a bit of a wild goose chase with so many alleys to pursue.

One more: reclocking. Some devices seem to have superior ways of reclocking from an external clock - examples (in the hi-fi world) would be the digital-to-analog converters from Topping, which do this very well. RME claims to have superior reclocking. And the TC Electronic Konnekt, as I recall, had claims of good reclocking back in the day.

So from all this, even without access to expensive test devices or the gear itself, there does seem to be a trend which can be deduced from a visual inspection of the results, which suggests the test has value - albeit it's impossible to fully answer the question of why certain devices produce better results than others, since there are so many factors involved and user involvement is never going to be 100% consistent.

If I were a statistician, or some data warehousing / artificial intelligence specialist, then by simply plugging in the results and providing some elementary information about the known specifications of these devices, I could churn out the significant relationships between the criteria in the product specs and the result of the Didier Brest test - correlation, in other words.
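Something like this, as a toy sketch with entirely made-up spec figures and made-up DB Test scores for imaginary devices, just to show the mechanics:

Code:
import numpy as np

# Hypothetical spec figures and hypothetical loopback difference scores
# for five imaginary devices - illustration only, not real data.
thd_n_db     = np.array([-100, -105, -110, -112, -115])   # quoted THD+N, dB
dyn_range_db = np.array([110, 113, 117, 120, 123])        # quoted dynamic range, dB
db_test_dbfs = np.array([-40, -45, -52, -58, -63])        # DB Test difference, dBFS

for name, spec in [("THD+N", thd_n_db), ("Dynamic range", dyn_range_db)]:
    r = np.corrcoef(spec, db_test_dbfs)[0, 1]
    print(f"{name:>13} vs DB Test score: r = {r:+.2f}")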

Nevertheless :

1. Broadly, the more recent devices, in design and manufacture, tend to have better results. So from the same manufacturer, e.g. RME, the older Babyface has a worse score than the most recent version of the Babyface audio interface. Knowledge, competition and manufacturing excellence have improved over time.

2. The higher end devices, e.g. in the case of the MOTUs, tend to perform better than the budget interfaces from the same manufacturer. So cost is a factor: more money gets you better performance that can be measured.

3. Line inputs measure better than preamp inputs. This is objective, but from a creative standpoint I wonder how highly sought-after but deliberately coloured converters, like the very expensive Burls, would measure. Sometimes beauty is in the ear of the listener: I'm pretty sure the Burls would not measure well in this test, yet some engineers swear by them. So for audio purity, the way to go is dedicated line inputs, or inserts which bypass the preamps. The tests confirm this expectation, demonstrating that they are measuring something that makes valid sense.

4. External clocking degrades performance - which is expected; see the Sound on Sound contributor's opinion below. If you compare the results of the Eventide H9000R clocked internally and externally, this is confirmed.

"As I've explained above — and will prove below — today's converter designs generally work best on their own internal clocks, and most will deliver a slightly poorer performance when clocked externally. "

https://www.soundonsound.com/techniq...l-master-clock

This is a really interesting point cos, where you have large multichannel inputs and outputs, you may have no choice but to clock externally. So where possible, clock internally.

5. The better performing audio interfaces predominantly come from the usual suspects - RME, Metric Halo, Merging, MOTU, Lynx, Mytek - in their more recent designs and higher end products, and Apogee's Symphony is also one of these. Apologies for leaving out products like the Juli's, which I'm not sure are still available to purchase, or whether anyone will be interested in such devices in 2022, especially as they typically never had any preamps (always good to have one or two preamps just in case). I'm also not sure how competent their software drivers would be in terms of efficiency and latency compared to more modern designs.

The Eventides at the top of the list - hmmm, food for thought. I wonder what they are doing so right to sit at the top of this test. Somewhat of an outlier, cos they are not well known for making dedicated audio interfaces.

Continued in Part 6
Old 9th August 2022 | Show parent
  #2589
Quote:
Originally Posted by OK1 ➡️
Part 5
Part 6 continued from Part 5

FINALE

So generally there are no surprises: you get what you pay for, if the utmost audio accuracy is that important to you. There is a theory that human beings can only discern audio dynamic variations down to about 115 decibels, which of course means they would have to be listening at levels in the region of 115 dB SPL above the average noise floor to achieve this. If the average noise floor in a room is 35 dB SPL, then to discern a difference of 115dB of dynamic range you would have to be listening at an audio level of 150dB SPL (115 + 35) - which would either drive you insane or make you permanently deaf within a few seconds, from the utter destruction of your eardrums.

So on the one hand, for marketing purposes, these variances can be measured - dynamic ranges in excess of 120dB, and THD+N / signal-to-noise figures below -110dB on many devices, including budget ones - but can we truly claim to be able to hear these differences? In theory, no.

Furthermore, while testing an output at 0dBFS is theoretically wonderful, does it make any sense to be pushing analog circuits to the extreme, when it is so much easier to simply run your audio interface output to peak at no more than -10dBFS, or in some cases no more than -20dBFS? That -20dBFS corresponds to the nominal 0VU (+4dBu) of the days of "windscreen wiper" needle meters, when the maximum level before clipping in the analog circuitry of high end products would be about +24dBu, so +4dBu gave you 20dB of headroom to catch any peaks and ensure that nothing you heard was likely to be distorted (unless, of course, as in the days of tape, you were running such high audio levels for creative effect, pushing the circuits and tape into saturation). Better still if all peaks are no more than -20dBFS. By avoiding any "stress" on the analog circuits this way, you gain a flatter frequency response, might lose a bit on THD+N and signal to noise, and measurably reduce the measured dynamic range; but peaking at -20dBFS you are already so far above the discernible noise level that any reduction in dynamic range compared with measuring at 0dBFS is academic.

As long as speakers/monitors are also calibrated to the -20dBFS reference, there are no issues. This is pretty easy: especially for most nearfield monitoring, the speakers, within their lowest-distortion range, are already more than loud enough to accommodate the extra +20dB of gain required of them. A minor accommodation to obtain sterling performance from budget gear.

From a pure audio quality perspective, on outputs we should then expect the difference in frequency response accuracy to narrow between the budget and the boutique, stratospherically priced audio interfaces, cos we are no longer asking for unicorn performance from the analog components.

On the input side also, we should not be pushing components to peak at 0dBFS, where they begin to clip and stress the analog components. Clipping is not exactly a cliff edge where distortion suddenly sets in. Sensible, glitch-free recording should peak at no more than about -10dBFS, leaving plenty of room to avoid clipping distortion completely. Better still, peak at -15dBFS, even on a budget audio interface. These now typically have well over 110dB of dynamic range, so you still have 95 decibels of dynamic range above the noise floor if input peaks are set to no more than -15dBFS. Seriously, how many normal recordings need 95dB of dynamic range?
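To make the arithmetic explicit - a minimal sketch using the hypothetical figures above:

Code:
def usable_dynamic_range(device_dr_db: float, peak_dbfs: float) -> float:
    """Dynamic range left between the recording peak and the converter's
    noise floor when you deliberately peak below full scale."""
    return device_dr_db + peak_dbfs          # peak_dbfs is negative

def spl_needed_to_hear(dynamic_range_db: float, room_noise_db_spl: float) -> float:
    """Rough listening level needed for the quietest detail to clear the room noise."""
    return dynamic_range_db + room_noise_db_spl

print(usable_dynamic_range(110, -15))   # 95 dB left on a 110 dB converter
print(spl_needed_to_hear(115, 35))      # 150 dB SPL - far beyond safe listening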

With these adjustments to gain staging, because modern audio interfaces have truckloads of dynamic range and signal to noise ratio to spare, we can lower the "stress" on the analog components, and have more than acceptably clean recordings and playback, with excellent frequency response, from inexpensive audio interfaces.

While the Didier Brest test pushes the audio interfaces to their limit, and helps to confirm the assertions listed above - such as that buying more expensive gear does have value, cos these devices tend to measure better - the real practical question is that this test tests a scenario that is not valid in practice. Unless they do not understand their gear, no one should be pushing gear to 0dBFS on outputs or inputs; it just does not make sense. Fantastic test - highly appreciated - but in practice, set gain sensibly on inputs and outputs, calibrate, gain stage, and this proper use of gear could save you thousands in cost and give you discernibly identical performance to gear costing a lot more.

Please take this as a hypothesis - I have not done any testing myself, but the basis is pretty sound. If you look at the tests where Julian Krause (the YouTuber) drops his input test signal and no longer punishes the device at 0dBFS, most budget audio interfaces show a markedly improved frequency response, and in truth there is absolutely no need to be using the inputs of these devices at such extremes.

It would be interesting to perform this test across budget and expensive interfaces at -10dB and compare the results. I hope, and suspect, that this would be telling. Not sure if the esoteric manufacturers of high end expensive gear would like the truth exposed.

It reminds me of what occurs in the car industry. A car whose primary purpose is to get you from point A to point B safely, carrying the number of people you have in mind plus your luggage, is fitted with an expensive sound system - and I'm scratching my head, cos it does not make any sense. When road noise is already above 40 or 50dB in spite of all the noise dampening - tyre, engine and wind noise, plus the noise of other vehicles on the road, such as that noisy motorbike that just went past and the haulage truck that zoomed past on the other side of the highway - what is the point of a super expensive sound system that you can only enjoy properly when you are in the car park with the engine switched off? So yes, the car measures well - tick that box, great sound system - but you never get the full benefit of it; it sure makes you feel good and gives you something to brag about.

Or that car which can do 160 miles an hour. Just about any modern car will reach 120 miles an hour with ease, and not take that long to accelerate to that speed, but the speed limit is 70 miles per hour, like in the UK, or no more than 100 miles an hour in most sane countries, except on the autobahn in Germany. So what is the point of the 160-miles-an-hour car which you will never ever use to that limit?

For audio interfaces, I think we have reached the point where almost any audio interface, including the budget ones, will get the job done if gain staged sensibly. Unless you need the features of the more expensive products - such as DSP, exceptionally low latency on Thunderbolt or PCI devices, or lots of channels in and out - any further expense above a budget audio interface is for bragging rights, cosmetics and impressing your client. That's also important...! Any other reasons to splurge on more expensive gear? Resale value, maybe. Maybe to reduce tax liability. At least you'll be looking at a nicer, more inspiring, better looking, more solidly manufactured, shiny toy.

I recall doing a loopback test in REW (Room EQ Wizard), which gives you the opportunity to measure your audio interface using a loopback, similar to the test in this thread, but not at 0dB - if I recall rightly at something more sensible like -12dB - and the result was ruler flat. So flat that I told myself there was no point in applying any calibration of the audio interface to any subsequent measurements done with it - none. So now I never calibrate the audio interface for any speaker measurements; no point, it's already as accurate as it needs to be.

Here endeth the reading.
Old 9th August 2022 | Show parent
  #2590
Quote:
Originally Posted by fedor.tche ➡️
Just to note that if you check the thread properly there were quite a few combinations explored under "Separate Converters"

Also you could note that unless done completely wrong, cable choice doesn't really affect the result as you can find quite consistent results from multiple measurements, as any time and level mismatches are also dealt with...
Thanks, I have noted this and have revised accordingly. Most appreciated.
Old 9th August 2022 | Show parent
  #2591
Quote:
Originally Posted by OK1 ➡️
Here endeth the reading.
Thank God. Did I miss anything skipping over this?
Old 14th August 2022
  #2592
JoshuaK
Dear Didier,

I use my converters to capture my analog mastering chain; as a mastering engineer I like to do as little harm to the signal as possible. There are some experimental files in here (modded RME ADI-2). I hope you could do the analysis for me - it would be much appreciated. Here are my 5 captures:

https://www.dropbox.com/s/mhqlv5kkcz...%20BE.zip?dl=0

Motu 828Mk2 (out 3/4 -> in 5/6)
*** Motu 828Mk2 outboard (out 3/4 -> nulled DR-MQ5 EQ -> nulled SPL Kultube compressor -> in 5/6) IGNORE ***
RME ADI-2 Pro FS BE stock (SD Sharp -> SD Sharp, DC filters off)
RME ADI-2 Pro FS BE modded (SD Sharp -> SD Sharp, DC filters off)
RME ADI-2 Pro FS BE modded sharp (Sharp -> Sharp, DC filters off)

Modded = input DC coupling caps bypassed

TA, Joshua

Last edited by JoshuaK; 14th August 2022 at 09:30 PM.. Reason: Loopback should be loopback (noisy outboard gear will never be sample accurate)
Old 15th August 2022 | Show parent
  #2593
didier.brest
Quote:
Originally Posted by JoshuaK ➡️
Motu 828Mk2 (out 3/4 -> in 5/6)
22.616742 ms, 0.9603 dB (L), 1.5850 dB (R), -39.1606 dBFS (L), -37.9888 dBFS (R)
Something went wrong in this test, +20 dB with respect to previous 828 MkII tests in the list of the results. See attached graphs for the spectra of the differences of this test and the one with outboard gear, which is unexpectedly better:
Quote:
Originally Posted by JoshuaK ➡️
Motu 828Mk2 outboard (out 3/4 -> nulled DR-MQ5 EQ -> nulled SPL Kultube compressor -> in 5/6)
22.613056 ms, 3.0974 dB (L), 3.3795 dB (R), -45.9629 dBFS (L), -48.1079 dBFS (R)
Attached thumbnails: loopback-test-motu-828mk2.jpg, loopback-test-motu-828mk2-outboard.jpg
Old 15th August 2022 | Show parent
  #2594
JoshuaK
Quote:
Originally Posted by didier.brest ➡️
Something went wrong in this test, +20 dB with respect to previous 828 MkII tests)
Thank you Didier, very interesting, I used two patch cables going directly from 3/4 out to 5/6 in on my Neutrik patchbay for the shortest loopback. Let me recapture it without these patch cables and leave all outboard switched off in (true) bypass mode instead.
Old 15th August 2022 | Show parent
  #2595
didier.brest
Quote:
Originally Posted by JoshuaK ➡️
RME ADI-2 Pro FS BE stock (SD Sharp -> SD Sharp, DC filters off)
27.891647 ms, 4.2138 dB (L), 4.2108 dB (R), -37.4374 dBFS (L), -38.7189 dBFS (R)

Same issue as with the 828 MkII?
Old 4 weeks ago | Show parent
  #2596
mfic
Here's an Avid Carbon, I did both line out and the monitor out but suspect they're the same from my testing.

I noticed I got much better jitter performance analyzing one of my own files previously. Maybe bad power today? Definitely performed much better at higher sample rates with a much better null value as well.

https://drive.google.com/drive/folde...1H?usp=sharing
Old 4 weeks ago | Show parent
  #2597
didier.brest
Quote:
Originally Posted by didier.brest ➡️
Same issue like for the 828 MkII ?
No. See the attached difference spectrum graph, which is very different from the one for Joshua's 828 MkII test: there is an obvious noise issue there that does not occur in this ADI-2 test. Might the combination of the SD Sharp and DC filters in the tests done by LesC, jrasia and thomasjacquot reported in the list of the results perform better (-45 dBFS) than the SD Sharp filter alone in Joshua's test (-38 dBFS)?

Anyway removing the analog input caps does not provide significant improvement:
Quote:
Originally Posted by JoshuaK ➡️
RME ADI-2 Pro FS BE modded (SD Sharp -> SD Sharp, DC filters off)
27.891591 ms, 4.2161 dB (L), 4.2123 dB (R), -37.7646 dBFS (L), -39.0411 dBFS (R)

But combined with Sharp setting instead of SD sharp, it does:
Quote:
Originally Posted by JoshuaK ➡️
RME ADI-2 Pro FS BE modded sharp (Sharp -> Sharp, DC filters off)
27.038648 ms, 4.2279 dB (L), 4.2211 dB (R), -72.3055 dBFS (L), -71.1142 dBFS (R)

To be added to the next issue of the list of the results.
Attached thumbnail: rme-adi-2-stock.jpg

Last edited by didier.brest; 4 weeks ago at 10:17 AM.. Reason: Completion
Old 4 weeks ago
  #2598
didier.brest
Quote:
Originally Posted by mfic ➡️
Here's an Avid Carbon
line out to line in
-1.334 µs, 0.9330 dB (L), 0.9369 dB (R), -47.8993 dBFS (L), -48.8434 dBFS (R)

monitor to line in
-1.466 µs, 0.6068 dB (L), 0.5972 dB (R), -47.9024 dBFS (L), -48.8361 dBFS (R)

To be added to the next issue of the list of the results.

Last edited by didier.brest; 4 weeks ago at 09:46 PM..
Old 4 weeks ago | Show parent
  #2599
JoshuaK
Quote:
Originally Posted by didier.brest ➡️
Might the combination of SD sharp and DC filters in the tests done by LesC, jrasia and thomasjacquot
Thanks again Didier, the ADI-2 was captured on the bench with just two 40cm Canare/Neutrik XLR cables, should be clean as a whistle. I'll recapture the ADI-2 with DC filter on. And also without caps in combination with filter on Sharp.

Not sure what could be the issue with my old faithful 828mk2. Double checked if I mislabeled the files, labels are correct based on time created.
Old 4 weeks ago | Show parent
  #2600
mfic
I uploaded results from the MTRX Studio to the same folder. I ran at +24 dBu for 0dBFS and also at +18, both on the internal clock. It looked like the phase difference was a little better at +18. Curious if you get the same numbers as me; the results looked pretty good when I ran it myself (-60).

https://drive.google.com/drive/folde...1H?usp=sharing

I usually run this off a Big Ben with the rest of my interfaces....might upload that later as well if I have time.
Old 4 weeks ago | Show parent
  #2601
didier.brest
Quote:
Originally Posted by mfic ➡️
I uploaded results from the MTRX Studio to the same folder.
Avid Pro Tools | MTRX Studio +18 dBu
-1.284258 ms, 0.9466 dB (L), 0.9535 dB (R), -60.3390 dBFS (L), -61.3839 dBFS (R)


Avid Pro Tools | MTRX Studio +24 dBu
-1.284325 ms, 0.9476 dB (L), 0.9606 dB (R), -60.3263 dBFS (L), -61.3751 dBFS (R)

To be added to the next issue of the list of the results.

Last edited by didier.brest; 3 weeks ago at 02:09 PM..
Old 3 weeks ago | Show parent
  #2602
euggie2000
I've made another couple of tests for the Anubis Pre, since last time I did it wrong (the output level was insufficient).
These were made only for the Slow filter mode. If the measurements don't differ from the previous ones then you may ignore these.

1. aux out to line in, DA filter set to Slow
https://drive.google.com/file/d/1WlC...ew?usp=sharing

2. main out to preamp in, DA filter set to Slow
https://drive.google.com/file/d/1Lrr...ew?usp=sharing
Old 3 weeks ago | Show parent
  #2603
didier.brest
Quote:
Originally Posted by euggie2000 ➡️
I've made another couple tests for Anubis Pre
aux out to line in, DA filter set to Slow
-3.514911 ms, 0.2124 dB (L), 0.2067 dB (R), -51.3923 dBFS (L), -52.2519 dBFS (R)

main out to preamp in, DA filter set to Slow
-3.515180 ms, 0.1902 dB (L), 0.1968 dB (R), -50.5106 dBFS (L), -51.4753 dBFS (R)

Quote:
Originally Posted by euggie2000 ➡️
If the measures for these don't differ with previous then you may ignore these.
I will do so for the line in test. In the next issue of the list of the results I will remove all your previous preamp in tests and add this new one.

Last edited by didier.brest; 1 week ago at 01:29 PM..
Old 2 weeks ago
  #2604
euggie2000
Eventide H8000FW - analog 1/2 out -> 1/2 in, routed using the FireWire I/O

https://drive.google.com/file/d/1dXX...ew?usp=sharing

Double checked, by disconnecting the inputs, that the signal comes in via the analog domain.

In addition, one more recording - from Lynx Aurora (n) - analog 3/4 out to 3/4 in

https://drive.google.com/file/d/1u1f...ew?usp=sharing

Last edited by euggie2000; 2 weeks ago at 04:12 PM..
Old 2 weeks ago | Show parent
  #2605
demarcus_b
Antelope Galaxy64.

Analog out 1/2 to analog in 1/2
Monitor out L/R to analog in 1/2

All trims set to 22dB, monitor level set to 0dB. Internal clock.

Two files at this link, names should be clear which is which: https://we.tl/t-3377rJyKed
Old 1 week ago | Show parent
  #2606
didier.brest
Quote:
Originally Posted by euggie2000 ➡️
Eventide H8000FW - analog 1/2 out -> 1/2 in
There is a one-sample L vs. R shift, as in the test from jeamsler (see the list of the results). But the measurement result is quite different:
-2.245201 ms, -0.3719 dB (L), -0.3478 dB (R), -45.1124 dBFS (L), -45.9663 dBFS (R).

To be added to the next issue of the list of the results.

Last edited by didier.brest; 1 week ago at 07:09 PM..
Old 1 week ago | Show parent
  #2607
didier.brest
Quote:
Originally Posted by euggie2000 ➡️
one more recording - from Lynx Aurora (n) - analog 3/4 out to 3/4 in
21.994 µs, 0.0053 dB (L), 0.0024 dB (R), -58.9329 dBFS (L), -59.0429 dBFS (R)

To be added to the next issue of the list of the results.

Last edited by didier.brest; 1 week ago at 11:07 PM..
Old 1 week ago | Show parent
  #2608
didier.brest
Quote:
Originally Posted by demarcus_b ➡️
Antelope Galaxy64.
Analog out 1/2 to analog in 1/2
-23.941579 ms, 0.1807 dB (L), 0.1921 dB (R), -44.1543 dBFS (L), -45.1957 dBFS (R)

Quote:
Originally Posted by demarcus_b ➡️
Monitor out L/R to analog in 1/2
-23.982383 ms, -0.0821 dB (L), -0.0091 dB (R), -42.8521 dBFS (L), -42.8521 dBFS (R)
The equal L and R difference levels indicate that there is likely a time shift between the L and R channels. The loopback delay is compensated by means of the same time shift for both channels (a holdover from Audio DiffMaker), so minimising the larger of the two difference levels between the original and the copy can only be achieved by a trade-off between compensating the L and the R delays, which results in equal L and R difference levels. But unlike in previous tests, where the L vs. R shift was equal to one sample - revealing a purely digital issue that is easily compensated before joint measurement of both channels - here separate measurement of the L and R channels results in an L vs. R shift slightly smaller than half a sample, which is weird.
L channel: -23.986860 ms, -0.0789 dB, -44.1515 dBFS
R channel: -23.977158 ms, -0.0047 dB, -45.3444 dBFS
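For anyone curious how such a sub-sample figure can even be estimated, here is a minimal sketch of one common approach - cross-correlation with parabolic interpolation of the peak - not necessarily the exact routine behind the list:

Code:
import numpy as np
from scipy.signal import correlate, correlation_lags

def channel_delay_samples(original: np.ndarray, copy: np.ndarray) -> float:
    """Estimate the delay of `copy` relative to `original` in samples,
    refined to sub-sample precision by fitting a parabola through the
    cross-correlation peak (assumes the peak is not at the very edge)."""
    corr = correlate(copy, original, mode="full")
    lags = correlation_lags(len(copy), len(original), mode="full")
    k = int(np.argmax(corr))
    y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
    frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return lags[k] + frac

# Comparing each channel of a loopback capture against the original
# (hypothetical arrays) reveals an inter-channel shift such as the
# roughly 0.4 sample difference discussed above:
# shift = channel_delay_samples(orig_L, copy_L) - channel_delay_samples(orig_R, copy_R)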

Last edited by didier.brest; 1 week ago at 02:18 PM..
Old 1 week ago | Show parent
  #2609
demarcus_b
Thanks. Interesting result. The time shift may have something to do with the way I recorded it using Max/MSP, where I had a sound file playing straight into the DAC output, and then the ADC input going straight into a sound file recorder. It is odd though; you wouldn't think that such a large shift would result.

I'll try a different method, maybe just using something basic like GarageBand, as there isn't any other audio software installed on the computer running the interface, which I don't own, so I'm not able to install more programs on there.
Old 1 week ago | Show parent
  #2610
didier.brest
There is no issue with the time shift between the original and the loopback copy - it may be quite large, up to 1 s in some tests reported in the list of the results. The issue here is the time shift between the L and R channels of the copy in your second test (monitor out). A one-sample shift has been encountered in a few previous tests - equivalent to a 23 µs time shift - which likely results from a digital streaming defect. In your case it is 10 µs. Since your first test is OK (analog out 1/2), and one may exclude the possibility of different digital filters on the L and R monitor channels, I cannot see how this is possible unless the analog part of the monitor out of your unit is defective. Anyway, I will not report your second test in the list of the results, because measuring it by the method used for all previous tests (the same compensating time shift for both channels) would give the wrong idea that the monitor output of this Antelope model is not as good as the 1/2 output.

Last edited by didier.brest; 1 week ago at 02:20 PM..