Foobar 2000 ABX Test - Redbook vs 192/24
Old 2nd July 2013
  #1
Gear Nut
 
🎧 5 years
Foobar 2000 ABX Test - Redbook vs 192/24


[Image: foobar2000 ABX test results screenshot]

The picture shows foobar2000 with the two tracks loaded and 10/14 successful trials; the app reports a 9% chance that I was guessing, based on these statistics.

The track with just the title is Redbook Audio ripped straight to .wav by SoundForge 10. I upsampled to 192 kHz and increased word length to 24 bits, no other changes.

This is better than 90% confidence. When I do research in other domains I never go with a sample smaller than n=30, but this is all I have time for this evening, and over 90% should be pretty damned impressive.
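For anyone who wants to check the statistic: foobar2000's "probability you were guessing" figure is just the one-sided binomial tail--the chance of getting at least this many trials right by flipping a coin. A quick sketch in plain Python (standard library only) reproduces the 9%:

Code:
from math import comb

def abx_guess_probability(correct, trials):
    """One-sided binomial tail: the probability of getting at least
    `correct` hits in `trials` fair coin flips, i.e. by pure guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 10 correct out of 14 trials, as in the screenshot above
print(round(abx_guess_probability(10, 14), 3))   # 0.09 -> about a 9% chance of guessing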

These are trials from various portions of 'Groove Tube', off 'Room Girl' by MEG. Very distinctive vocal sounds are available, and the levels are not as dense and slammed as a lot of EDM. I expect to do at least as well with classical music--going to try string quartets and classical vocal music later on.

I did **not** train ahead of time on 44.1/16 vs 192/24. I did take plenty of time to make my decision on each trial, and I preferred either highly exposed vocal fragments or deep, complex textures with lots buried in the mix. There's a lot of background noise in my house, so I did all trials through DT 770 Pros fed by an RME Babyface. (My pending Asgard 2 headphone amp has yet to ship from the manufacturer, so I'm a bit crippled in this respect.)

Keeping my attention focused in a proper aural listening posture is brutal. It is VERY easy to drift into listening for frequency bands--which is usually the most productive approach when recording and mixing. Instead I try to focus on the depth of the soundstage, the sound picture I think I can hear. The more 3D it seems, the better.

These are 50+-year-old ears.
Old 2nd July 2013
  #2
Gear Nut
 
🎧 5 years
[Image: second foobar2000 ABX test results screenshot]



Took it past the 95% confidence level.
Old 2nd July 2013
  #3
Lives for gear
 
mainesthai's Avatar
 
🎧 5 years


But testing sample rates is one of the most difficult things to do right.
I explained it here: https://gearspace.com/board/9145259-post45.html
Old 2nd July 2013
  #4
Lives for gear
 
mainesthai's Avatar
 
🎧 5 years
In this thread I posted 2 tests on sample rates:
https://gearspace.com/board/8929106-post78.html

The first link describes how they had to put a software delay into the test to eliminate switching cues.

Bottom line: I could only find three scientific listening tests online about sampling rates. All proved negative.
Old 2nd July 2013
  #5
Gear Guru
 
Yoozer's Avatar
 
1 Review written
🎧 10 years
Quote:
Originally Posted by UltMusicSnob ➑️

Untitled: Photo

(There's an IMG tag here, but it's not appearing. Oh well, the link works.)
It's not working because you're linking to the tumblr page itself instead of the actual image file. Right-click, save as, upload to imgur.

Old 2nd July 2013
  #6
Gear Nut
 
🎧 5 years
Thanks for the tip, Yoozer--I was using Copy Link Location instead of Save As.
Old 2nd July 2013 | Show parent
  #7
Gear Nut
 
🎧 5 years
Quote:
Originally Posted by mainesthai ➑️


But testing sample rates is one of the most difficult things to do right.
I explained it here: https://gearspace.com/board/9145259-post45.html
Yes, thank you for the reference--I had read that beforehand, and I was wondering about it as I began. I haven't been able to reproduce the delay phenomenon on my system; it sounds instantaneous to me. If there were a delay to be noticed, I should be able to hit 100% correct just by cueing on that, but I'm nowhere near that. I do many, many listenings back and forth to make just one trial decision--no delay.
Old 2nd July 2013
  #8
Gear Nut
 
🎧 5 years
Obviously, I'm not planning to publish my results as a scientific result! This is just my home equipment, with many confounding factors. The quality of my sound reproduction monitoring, for example, is not near the pristine setups described in scientific tests. I don't have marvelous electrostatic speakers, nor a quiet listening environment (two computers humming away right next to me). This makes it even harder for me to get a result.
I took my cue from this helpful comment in the "Audio Converters" thread: "Doing listening test with only your ears is not difficult if you know how to do it. Here's a good instruction video:" (ABX Audio Testing with foobar2000 video). This took me from blind (getting my daughter to click the file while my eyes are closed) to functionally double-blind (the computer is not subject to interpretation-of-results bias).
What I have proven rigorously is that when I make the alterations to the file that I describe--upsampling, word length, nothing else--the result at my ears, on my signal path, is one I describe as "superior", and one I can reliably detect using the recommended blind tool, foobar2000.

Caveats--program material is crucial. On anything that did not pass through the air on the way to the recording medium, like ITB synth tracks, I'm completely unable to detect a difference; only live acoustic sources give me anything to work with. So for a lot of published material, sample rates really don't matter--and they surely don't matter to me for that material. However, this result is also strong support for the claim that I'm detecting a phenomenon of pure sample rate/word length difference, and not just incidental coloration introduced by the processing. The latter should be detectable on all program material with sufficient frequency content.
Also, these differences ARE small, and hard to detect. I did note that I was able to speed up my decision process as time went on, but only gradually. It's a difference analogous to the one between a picture just barely out of focus and one that's sharply focused throughout--a holistic impression. For casual purposes, a picture that's focused "enough" will do--in Marketing, that's 'satisficing'. But of course I always want more.
Old 3rd July 2013
  #9
Gear Nut
 
🎧 5 years
Replication. Different tracks, but still 44.1/16 vs 192/24. I have to get warmed up, practice listening for the depth of the soundstage.
Old 6th August 2013
  #10
Lives for gear
 
mainesthai's Avatar
 
🎧 5 years
Can you post the sound files you used, explain exactly what hearing cues you used to determine the audible difference between the files, and say what resampling software you used, with what settings?
I would like to try your tests myself.
Old 8th August 2013
  #11
Lives for gear
 
mainesthai's Avatar
 
🎧 5 years
I really would like to reproduce your results.
False negatives are no longer a problem, since you have proven that you can hear a difference.
Please post the files you used, describe the kind of artefact you listened for to get a positive result, and say what kind of sample rate converter you used, with what settings.

If you want you can PM me.

Thanks in advance.
Old 9th August 2013
  #12
Gear Nut
 
🎧 5 years
I didn't see these responses earlier--I was relying on Gearslutz's "My Participated Threads" to keep me up to date; I guess it doesn't go back as far as July 3rd.
I can't post my actual files here without copyright violation, but I'll give the info:
For the first two, I just used a track from a CD I purchased recently: "Groove Tube" by a Japanese artist who goes by "MEG", from her album 'Room Girl'. It's Redbook audio, of course. I used SoundForge 10, which comes with an iZotope resampler that I used to go to 192 kHz, and another iZotope tool that I used to go from 16 to 24 bits. There are some individual settings within those tools; I'll follow up with details. The program content was useful because it was 1) live miked and 2) complex, with many elements carefully placed within a large soundstage.
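For anyone who wants to prepare equivalent test files without Sound Forge, here's a rough sketch of the same conversion using the Python soundfile and scipy packages (assumed installed; the file names are made up, and the resampler quality will of course differ from iZotope's):

Code:
import soundfile as sf
from scipy.signal import resample_poly

SRC = "groove_tube_44k16.wav"    # hypothetical input: Redbook rip
DST = "groove_tube_192k24.wav"   # hypothetical output: 192 kHz / 24-bit

audio, rate = sf.read(SRC, dtype="float64")
assert rate == 44100

# 192000 / 44100 reduces to 640 / 147, so upsample by that rational factor.
upsampled = resample_poly(audio, up=640, down=147, axis=0)

# Writing with subtype PCM_24 handles the 16 -> 24 bit word-length change.
sf.write(DST, upsampled, 192000, subtype="PCM_24")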
In re "kind of artefact", I tried to listen for soundstage depth and accurate detail. It took a lot of training repetitions, and remains a holistic impression, not any single feature I can easily point to. It seems to me that the 192 files have the aural analogue of better focus. To train, I would try to hear *precisely* where in front of me particular sound features were located, in two dimensions: left-to-right, and closer-to-further away--the foobar tool would then allow me to match up which two were easier to precisely locate. I know it muddies the waters, but I also had a very holistic impression of sound (uhhhhhh) 'texture'??--in which the 192 file was smoother/silkier/richer. The 192 is easier on the ears (just slightly) over time; with good sound reproduction through quality headphones (DT 770) through quality interface (RME Babyface) I can listen for quite a while without ear fatigue, even on material that would normally be considered pretty harsh (capsule's 'Starry Sky', for example), and which *does* wear me out over time when heard via Redbook audio.
Old 10th August 2013
  #13
Gear Nut
 
🎧 5 years
Conversion tools used to prepare the test files are from SoundForge 10:

[Images: screenshots of the SoundForge 10 / iZotope conversion tool settings]
Old 10th August 2013
  #14
Lives for gear
 
adydub's Avatar
 
🎧 5 years
Interesting that you do seem to be able to reliably hear a difference using ABX testing.

I'd venture that changing the bit depth makes no difference, but upsampling MAY help jitter reduction or possibly help move some artefacts from the reconstruction filter outside the audible range. Obviously it's not possible to magically recreate any missing information by upsampling, but it may still help improve the conversion quality for the reasons mentioned. It's also possible that the upsampling algorithm is imparting some subtle euphonic artefacts, but I'd suspect this is less likely.

Of course, it's also possible you've screwed up the test in some subtle but detectable way.
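One cheap sanity check on the "euphonic artefacts from the upsampler" idea is to look at how much energy, if any, the converted file contains above the original 22.05 kHz Nyquist limit--a clean SRC should leave essentially nothing up there. A rough sketch, assuming Python with numpy, scipy and soundfile, and a made-up file name:

Code:
import numpy as np
import soundfile as sf
from scipy.signal import welch

audio, rate = sf.read("groove_tube_192k24.wav")   # hypothetical upsampled file
mono = audio.mean(axis=1) if audio.ndim > 1 else audio

freqs, psd = welch(mono, fs=rate, nperseg=8192)   # averaged power spectrum

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

audible = band_power(20, 20_000)          # the band we can actually hear
ultrasonic = band_power(22_050, 96_000)   # anything the SRC put above the old Nyquist

print(f"ultrasonic content: {10 * np.log10(ultrasonic / audible):.1f} dB "
      "relative to the audible band")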
Old 10th August 2013 | Show parent
  #15
Gear Nut
 
🎧 5 years
Re switching cues: I tried repeatedly, without success, to detect some sort of timing artifact that would identify the foobar files. Attempting to select files on that basis produced random results with very low significance numbers.
Old 10th August 2013 | Show parent
  #16
Gear Nut
 
🎧 5 years
Quote:
Originally Posted by adydub ➑️
Interesting that you do seem to be able to reliably hear a difference using ABX testing.

I'd venture that changing the bit depth makes no difference, but upsampling MAY help jitter reduction or possibly help move some artefacts from the reconstruction filter outside the audible range. Obviously it's not possible to magically recreate any missing information by upsampling, but it may still help improve the conversion quality for the reasons mentioned. It's also possible that the upsampling algorithm is imparting some subtle euphonic artefacts, but I'd suspect this is less likely.

Of course, it's also possible you've screwed up the test in some subtle but detectable way.
Yes, I had the same thought about "magically recreate"--the upsampling cannot add information that wasn't already in the Redbook file. Getting to the Redbook file I started with was just the first half of the process: from original sources to microphones to preamps, etc. The end result of that was the Redbook representation. If I'm getting a different result after upsampling, the difference must lie somewhere between the Redbook digital representation and the sound waves that arrive at my ear--essentially the upsampling plus the D/A and the subsequent amplification/speaker chain.

It's also possible that I simply put MEG on one file, and Led Zeppelin on the other (with a phony name on the file), and then ran some trials to fake up my superior ears. Hopefully everyone can just take my word for now that I didn't do that.

Since my only reliable marker is "soundstage detail", I'm looking for effects of interaural differentials that might be time-smeared by the D/A processes, which are necessarily different for different sample formats. I don't know of any tests in this area; citations welcome.
Old 10th August 2013
  #17
Lives for gear
 
mainesthai's Avatar
 
🎧 5 years
I see from your pictures of the SRC that you did not use the highest quality settings.
Does changing the settings of the SRC change the outcome of the ABX test?
Old 10th August 2013 | Show parent
  #18
Gear Nut
 
🎧 5 years
Quote:
Originally Posted by mainesthai ➑️
I see from your pictures of the SRC that you did not use the highest quality settings.
Does changing the settings of the SRC change the outcome of the ABX test?
No, this was the first time I had used the tool, so I accepted all the defaults as a first step. I haven't ABX'd with files using different settings yet.
Old 10th August 2013
  #19
Lives for gear
 
anigbrowl's Avatar
 
🎧 5 years
Quote:
Originally Posted by UltMusicSnob ➑️
The track with just the title is Redbook Audio ripped straight to .wav by SoundForge 10. I upsampled to 192 kHz and increased word length to 24 bits, no other changes.
Starting with a 44/16 recording does not make sense to me. Anything that you hear following upsampling (by which I mean to include the 16-to-24 conversion as well; I just don't want to type that out every time) is essentially an artifact of the upsampling process, which will be equivalent to adding a little noise and then applying a very gentle lowpass filter (when you interpolate from 16 to 24).

I mean, if you record some acoustic or analog material at 192/24 and then downsample, we can have a useful conversation about what's getting thrown away and to what degree it matters. (I'm in the middle on this myself; I think 192 kHz is laughable overkill, but I like the margin of 96 kHz so as not to worry about aliasing.) But if you take a (comparatively) low resolution recording and upsample, you're just training yourself to detect the sound of a particular upsampling algorithm.

Think about it in pictorial terms. If you started with a very low-resolution picture like this forum favorite:

...and brought it into Photoshop, you could scale it up to poster size and if you did enough smoothing and filtering and so on you could make it into a high resolution picture:



But it doesn't mean the high-res version was somehow encoded into the little Gearslutz icon waiting to be revealed, and it wouldn't be useful to argue about whether the head is sufficiently circular or whether the pixelation around the black/yellow internal border is optimal.

Another way to think about this would be to consider what would happen if you downsampled the 192/24 version back to Redbook quality, and then subtracted it from the original audio. You'd almost certainly not get a digital zero, but would instead get a faint spectral noise signature, which would be the accumulated artifacts of the up & downsampling processes.
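The subtraction experiment is easy to sketch in code. This isn't iZotope's resampler, so the residual it reports is only indicative, and the file names are made up, but the shape of the test looks like this (Python with numpy, scipy and soundfile assumed):

Code:
import numpy as np
import soundfile as sf
from scipy.signal import resample_poly

orig, _ = sf.read("groove_tube_44k16.wav", dtype="float64")    # hypothetical Redbook rip
hires, _ = sf.read("groove_tube_192k24.wav", dtype="float64")  # hypothetical upsampled file

back = resample_poly(hires, up=147, down=640, axis=0)   # 192 kHz back down to 44.1 kHz

n = min(len(orig), len(back))
residual = orig[:n] - back[:n]   # the accumulated up/down-sampling artifacts
# Note: any sample offset between the two resamplers' filter delays will
# inflate this figure; a careful null test would time-align the files first.

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

depth = 20 * np.log10(rms(orig[:n]) / rms(residual))
print(f"residual sits about {depth:.1f} dB below the original's level")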

I think it's great that you're exploring this and trying to take a rigorous approach to evaluating perceptual quality, but fear that you're trying to grade gold by comparing it to a known sample of brass.
Old 10th August 2013 | Show parent
  #20
Gear Nut
 
🎧 5 years
Quote:
Originally Posted by anigbrowl ➑️
Starting with a 44/16 recording does not make sense to me. Anything that you hear following upsampling (by which I mean to include the 16-to-24 conversion as well; I just don't want to type that out every time) is essentially an artifact of the upsampling process, which will be equivalent to adding a little noise and then applying a very gentle lowpass filter (when you interpolate from 16 to 24).

I mean, if you record some acoustic or analog material at 192/24 and then downsample, we can have a useful conversation about what's getting thrown away and to what degree it matters. (I'm in the middle on this myself; I think 192 kHz is laughable overkill, but I like the margin of 96 kHz so as not to worry about aliasing.) But if you take a (comparatively) low resolution recording and upsample, you're just training yourself to detect the sound of a particular upsampling algorithm.

Think about it in pictorial terms. If you started with a very low-resolution picture like this forum favorite:

...and brought it into Photoshop, you could scale it up to poster size and if you did enough smoothing and filtering and so on you could make it into a high resolution picture:



But it doesn't mean the high-res version was somehow encoded into the little Gearslutz icon waiting to be revealed. Another way to think about this would be to consider what would happen if you downsampled the 192/24 version back to Redbook quality, and then subtracted it from the original audio. You'd almost certainly not get a digital zero, but would instead get a faint spectral noise signature, which would be the accumulated artifacts of the up & downsampling processes.

I think it's great that you're exploring this and trying to take a rigorous approach to evaluating perceptual quality, but fear that you're trying to grade gold by comparing it to a known sample of brass.
Yes, it would of course be much more interesting and useful, in terms of the science of acoustics (and the business of Sony et al.), to compare something recorded at 192/24 to the same source material recorded at 44.1/16.

On the practical side, though,
1) I have something I do to my CD's which makes them sound better to me
2) I can prove that I can tell the two versions apart
3) um, I think that's sufficient for my purposes of personal enjoyment

If there's something in the algorithms which causes the result, I'm all for it--I like the way it makes my music sound. The scientific questions of 192/24 vs. 44.1/16 are fascinating, and I want to read about them as they are properly addressed, but of course I have nothing to contribute to that body of knowledge.

It seems to me that the visual example you provide helps me make this practical/personal argument. If I want to get a good clean look at that picture, then of course I prefer the scaled up/cleaned up version. That's a better experience. As you say, it doesn't mean the information was there already in the small icon--clearly it wasn't. But it's useful to be able to 'clean up' the file algorithmically and get something better than what we started with. It may very well be that this is what's going on in my upsampling experiments--in which case, hooray, it's a useful result.

I'm no acoustical or electrical engineer, but I suspect that the difference lies somewhere in the D/A chain. Related to your signal-differential idea above, it would be very interesting to pick up, with live microphones, the sound of the 44.1 kHz version played back and then the 192/24 version played back, and take the differential of those two. Of course, that would require far better equipment than I have to play with, but it would address the point at which I detect the difference, which is at my ears.

I realize that the ABX only reveals that *something* is detected that allows me to identify the proper pairs. No one need take my word for it that I'm listening for and hearing spatial detail--but that is in fact what I'm doing, so folks can take it or leave it in that respect.

I will note that IF it were the case that a consistent artifact/distortion is being added to the signal, then it would also have to be the case that this artifact would be detectable in all tested content. But this is not the case. If there's no soundstage depth present in a live-recorded signal on the disk, then I can't score above random guessing in foobar, period. It IS the case that I can detect the difference on some material, but not on others.
Old 11th August 2013
  #21
Lives for gear
 
anigbrowl's Avatar
 
🎧 5 years
Hey, if you've found a consistent method for improving your subjective CD-listening experience, then that's great.

I guess I just reacted to this thread based on some others here on GS that degenerate into arguments about who's more scientific. I'll just disagree on one small point...

Quote:
I will note that IF it were the case that a consistent artifact/distortion is being added to the signal, then it would also have to be the case that this artifact would be detectable in all tested content.
Bear in mind that it could be dynamic based on the program material. For example, converting things to mp3s definitely introduces artifacts, but (as a necessity of the perceptual coding algorithm employed) it isn't the same from track to track - although eventually you get used to knowing what to look for.

Now, you're going in the opposite direction to mp3 of course, and I'm quite happy to believe that it's resulting in something more listenable. I wouldn't want to suggest that artifacts = bad as a matter of subjective judgment, only if we were trying to make statements about the mathematical information content of the different versions. Some people here (not you) equate 'I like the sound of process X better than process Y' with 'Process X is better because, er, science.'
Old 11th August 2013 | Show parent
  #22
Gear Nut
 
🎧 5 years
Quote:
Originally Posted by anigbrowl ➑️
Hey, if you've found a consistent method for improving your subjective CD-listening experience, then that's great.

I guess I just reacted to this thread based on some others here on GS that degenerate into arguments about who's more scientific. I'll just disagree on one small point...

Bear in mind that it could be dynamic based on the program material. For example, converting things to mp3s definitely introduces artifacts, but (as a necessity of the perceptual coding algorithm employed) it isn't the same from track to track - although eventually you get used to knowing what to look for.

Now, you're going in the opposite direction to mp3 of course, and I'm quite happy to believe that it's resulting in something more listenable. I wouldn't want to suggest that artifacts = bad as a matter of subjective judgment, only if we were trying to make statements about the mathematical information content of the different versions. Some people here (not you) equate 'I like the sound of process X better than process Y' with 'Process X is better because, er, science.'
You're right--I was thinking only of artifacts that could serve as differentiating cues (thus "consistent") but would otherwise represent undesirable characteristics in some way. If those are present (they may be), I can't find them.

I will claim to have supporting empirical evidence for the usefulness of my procedure, but of course establishing the reliability of a scientific claim requires a LOT more work than that, by a lot more people.
Old 11th August 2013
  #23
Gear Nut
 
🎧 5 years
Differentiating 192
[Image: foobar2000 ABX test results screenshot]
Practice improves performance. To reach 99.8% statistical reliability, and to do so more quickly (this new one was done in about 1/3 the time required for the trials listed above in the thread), I mainly have to train my concentration.

It is *very* easy to get off on a tangent, listening for a certain brightness or darkness, or for the timbre balance in one part, several parts, or all of them--this immediately introduces errors, even though that type of listening is much more likely to be what I am doing, and need to be doing, when recording and mixing a new track.

Once I am able to repeatedly focus just on spatial focus/accuracy--4 times in a row, for X & Y and A & B--then I can hit the target. Get lazy even once, and I miss the target.
Old 12th August 2013 | Show parent
  #24
Lives for gear
 
mainesthai's Avatar
 
🎧 5 years
Quote:
Originally Posted by UltMusicSnob ➑️
No, this was the first time I had used the tool, so I accepted all the defaults as a first step. I haven't ABX'd with files using different settings yet.
Try it; if it changes things, then we'll know what the cause was.

I'll post my test results as soon as I can.
Old 12th August 2013
  #25
Lives for gear
 
mainesthai's Avatar
 
🎧 5 years
I need more training.

Can you be more specific than "listen into the reverb, listen for space and size"?
Old 12th August 2013
  #26
Gear Nut
 
🎧 5 years
It took me a **lot** of training. I listened for a dozen wrong things before I settled on the aspects below.

I try to visualize the point source of every single instrument in the mix--that's why I picked a complex mix for this trial. I pinpoint precisely where each instrument is, and especially its distance from the listener. Problem is, both versions already have *some* spatial depth and placement, it's only a matter of deciding which one is deeper, and more precise. I've tried making determinations off of a particular part, like a guitar vamp or hi-hat pattern, but can't get above about 2/3 correct that way.
The better approach is just to ask myself which version is easier to precisely visualize, as a holistic judgment of all the pieces together. Equally effective, or rather equally contributing to the choice, is asking which version holistically gives me a sense of a physically larger soundstage, especially in the dimension extending directly away from me--thus the idea of listening to reverb characteristics.
Having to listen to four playbacks (A/B, X/Y, for one choice) gives rise to the problem of desensitization. Neurons naturally give decreased response to repetitions, so I've found I can target my answer more easily if I pause 5-10 seconds between an A/B (or an X/Y). Otherwise, A/B is always easier than X/Y.
I have rather junky monitors, KRK Rokit 6's, so I'm kind of surprised I can get a result out of them. To get down into low single digits I shifted to my headphones pushed by a nice Schiit Asgard2 amp, which I just acquired--if your headphones are good, I'd recommend using them for the testing. This is more for isolation than anything else.

I chose this particular music clip because it's a complex mix, but also merely because it represents a genre I'm just now discovering and enjoy very much. So far, I've found that among my several peer groups, I'm pretty isolated in my tastes for this music. I've been meaning to do a good binaural string quartet example for testing, so I'll dig one up. Based on recording methods and the critical nature of space in string quartets, I hope it will be a good example. A true binaural orchestral recording could also help, if they didn't 'cheat' with a bunch of spot mikes on sections.
Hope this helps; I'll let you know when I zippyshare some classical excerpts.

In other news, I cranked the 'quality' setting in the iZotope SRC to the max on the slider. That's kind of a misnomer, since steepening the filter (max quality on the slider gives a filter steepness of 150 dB) causes other problems which have to be dealt with as a trade-off. On a first approximation I hit 4/4 last night using the "max-quality" 192 converted version, but then I had to stop, so that doesn't mean much.
There's a good discussion of settings I just found here: iZotope SRC
Looks like they all prefer a steepness *less* than the 32 I used as the default, for their best results. If I'm trading time-smearing for aliasing, then my experience above suggests I should try a *less* steep filter--I hear timing-based differences, but not tonal ones. If you have a preferred list of best-tradeoff settings, by all means post them, and I'll try it that way.
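To get a feel for the steepness/time-smearing trade-off without iZotope in the loop, one can compare two generic linear-phase FIR lowpass filters--a gentle one and a steep one--and see how long each keeps ringing. This is only an illustration with made-up tap counts, not a model of the iZotope filter (Python with numpy/scipy assumed):

Code:
import numpy as np
from scipy.signal import firwin

FS = 192_000      # playback rate after upsampling
CUTOFF = 21_000   # anti-image cutoff just above the audible band

gentle = firwin(numtaps=255, cutoff=CUTOFF, fs=FS)    # shallow transition band
steep = firwin(numtaps=4095, cutoff=CUTOFF, fs=FS)    # much steeper transition band

def ringing_ms(taps, threshold_db=-60.0):
    """Length of the stretch of impulse response above the threshold, in ms."""
    env_db = 20 * np.log10(np.abs(taps) / np.abs(taps).max() + 1e-12)
    idx = np.where(env_db > threshold_db)[0]
    return (idx[-1] - idx[0]) / FS * 1000

print(f"gentle filter rings for ~{ringing_ms(gentle):.2f} ms")
print(f"steep filter rings for ~{ringing_ms(steep):.2f} ms")

The steeper filter spreads its impulse response over a much longer window--that time-domain smearing is what gets traded against better aliasing/imaging rejection.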
Old 12th August 2013
  #27
Gear Nut
 
🎧 5 years
Just to get this on the record.

None of the aural differences discussed in this thread are *remotely* as important to musical quality as the following:
Quality of the musical artist, and quality of his/her take
Quality of the recording engineer's choices--mics, placement, etc. Moving one microphone two inches will make a larger difference in the final result than all of the above put together.
Quality of the recording engineer's equipment
etc etc all the way through the process, until I get a CD disk available for purchase on the shelf.

Here's the thing--I can't move the microphones. It's too late. What I've got is what's on the disk. All I can do is try to obtain the best possible result at my ears, with the information the product provides me (never bought an MP3, never will).

I *can* upsample, essentially for free. It takes HD space and CPU cycles--they would be sitting unused in any case, as I have far more than I need (until I load up the VSTs in the DAW, but that's a separate issue). I have no rational reason to forgo using the information the disk gives me, if it improves the sound in my own subjective experience.

I will say this, I'm not putting any money into upsampling gear--apparently audiophiles have gadgets they use for this. I *already* own SoundForge 10, I'm using what came with it. If I DID have money for an upsampling playback device---that would go into Neumann's, Moog's, monitors, heck PRO TOOLS, long before I dropped a dollar on an upsampling box. I'm never going to be in the market for this sort of thing: Digital to analogue converter | DACs | Cambridge Audio, because I need that money for this instead: FEURICH - 218 - Concert I
Old 13th August 2013
  #28
Gear Nut
 
🎧 5 years
Classical Repertoire


This is a recording of classical guitar and orchestra, with lots of reverb to listen for, which is useful. The close-up sound of the guitar and the liner-note photos show it's not a binaural recording, but even so, it's a good test. Christopher Parkening, with the London Symphony. Redbook vs. 192/24 again, using iZotope 64-bit SRC, default settings.
Old 13th August 2013
  #29
Lives for gear
 
mainesthai's Avatar
 
🎧 5 years
Try the highest simplified quality setting on your SRC.
If it changes the outcome of the test, then the SRC is the cause.

I still haven't been able to confirm the test results. The differences are very small. I'll keep trying.

One more thing: Can you post the text file that foobar creates after the ABX test is done instead of a picture of the screen?
Old 13th August 2013 | Show parent
  #30
Gear Nut
 
🎧 5 years
Quote:
Originally Posted by mainesthai ➑️
Try the highest simplified quality setting on your SRC.
If it changes the outcome of the test, then the SRC is the cause.

I still haven't been able to confirm the test results. The differences are very small. I'll keep trying.

One more thing: Can you post the text file that foobar creates after the ABX test is done instead of a picture of the screen?
I'll take these steps.
I have a couple of questions. By 'changes the outcome', which direction do you mean? Right now it's very difficult to get a positive result--I have to concentrate very hard, go slowly, rest my ears, and listen multiple times. So a change could mean "easier to distinguish", or (more likely?) you mean it becomes even more difficult to get a positive result--perhaps impossible to get good numbers at all?
The other question is conceptual. On the simplest interpretation, the SRC of course IS the cause: we use it to change the file, and the file sounds different. This is transparently the case, for example, if we use the SRC to downsample to 11 kHz.
I think you mean something more specific--something that is more like a side effect, perhaps--but I don't know what it is.
πŸ“ Reply

Similar Threads

Thread / Thread Starter Replies / Views Last Post
replies: 104 views: 37218
Avatar for sloper
sloper 9th September 2014
replies: 68 views: 27116
Avatar for Sweet Square
Sweet Square 7th December 2017
replies: 2436 views: 429920
Avatar for drlex
drlex 1 day ago
replies: 61 views: 12297
Avatar for Transistor
Transistor 21st February 2015
Topic:
Post Reply

Welcome to the Gearspace Pro Audio Community!

Registration benefits include:
  • The ability to reply to and create new discussions
  • Access to members-only giveaways & competitions
  • Interact with VIP industry experts in our guest Q&As
  • Access to members-only sub forum discussions
  • Access to members-only Chat Room
  • Get INSTANT ACCESS to the world's best private pro audio Classifieds for only USD $20/year
  • Promote your eBay auctions and Reverb.com listings for free
  • Remove this message!
You need an account to post a reply. Create a username and password below and an account will be created and your post entered.


 
 
Slide to join now Processing…

Forum Jump
Forum Jump