A potential flaw with ABX/Double-blind testing - Gearspace.com
A potential flaw with ABX/Double-blind testing
Old 21st January 2013
  #1
Gear Addict
 
🎧 10 years
A potential flaw with ABX/Double-blind testing

I recently had an experience which leads me to believe that ABX and double-blind tests may not be 100% reliable.

Having recently implemented console emulation in my mixes, I was really enjoying the new sound. But I wanted to make sure I wasn't deceiving myself. So I did an ABX test between the new tracks and the original, untreated tracks.

I was 100% sure that I could both hear and feel the difference when listening; however, my first ABX test was only 60% correct, and my second only 40%. Feeling a bit confused and dismayed, I listened closely and was able to zero in on a bit of high-mid harshness that was present on the guitar track of the untreated version but smoothed over in the treated version. After that, my next two ABX tests were 100%.

TL;DR: I had been able to hear and feel the difference all along; however, if this had been an administered double-blind test, it's possible the results would have said there was no discernible change.
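For what it's worth, scores like these can be sanity-checked with a quick exact binomial calculation. The sketch below is a back-of-the-envelope check in Python; the 10-trials-per-run figure is an assumption for illustration (the post doesn't say how many trials each run had):

```python
from math import comb

def tail_prob(correct, trials):
    """Chance of getting at least `correct` hits out of `trials`
    by pure guessing (one-sided exact binomial tail, p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Hypothetical 10-trial runs matching the post's percentages:
print(tail_prob(6, 10))   # "60%" run  -> ~0.377, well within chance
print(tail_prob(4, 10))   # "40%" run  -> ~0.828, no evidence at all
print(tail_prob(10, 10))  # "100%" run -> ~0.001, very unlikely to be luck
```

Under that assumption, the early 60% and 40% runs are statistically indistinguishable from guessing, while a later 100% run is strong evidence of a real, audible difference.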
Old 21st January 2013
  #2
Lives for gear
 
Bob Ross's Avatar
 
🎧 15 years
Quote:
Originally Posted by musichascolors ➡️
TL;DR: I had been able to hear and feel the difference all along; however, if this had been an administered double-blind test, it's possible the results would have said there was no discernible change.
Or, it's possible the results would have said there was a discernible change. The point is you don't know because you didn't do a true double-blind ABX test.
Old 21st January 2013 | Show parent
  #3
Gear Addict
 
🎧 10 years
Quote:
Originally Posted by Bob Ross ➡️
Or, it's possible the results would have said there was a discernible change. The point is you don't know because you didn't do a true double-blind ABX test.

Well, I used a program for ABXing. So I was pretty blind.
Old 21st January 2013
  #4
Lives for gear
 
JoaT's Avatar
 
🎧 10 years
And your post describes the potential flaw in ABX / Double blind test in what way?

Or was your intention to tell us that you did it wrong? If so, why did you want to do that?
Old 21st January 2013
  #5
Lives for gear
 
🎧 10 years
Quote:
Originally Posted by musichascolors ➡️
I recently had an experience which leads me to believe that ABX and double-blind tests may not be 100% reliable.

Having recently implemented console emulation in my mixes, I was really enjoying the new sound. But I wanted to make sure I wasn't deceiving myself. So I did an ABX test between the new tracks and the original, untreated tracks.

I was 100% sure that I could both hear and feel the difference when listening; however, my first ABX test was only 60% correct, and my second only 40%. Feeling a bit confused and dismayed, I listened closely and was able to zero in on a bit of high-mid harshness that was present on the guitar track of the untreated version but smoothed over in the treated version. After that, my next two ABX tests were 100%.

TL;DR: I had been able to hear and feel the difference all along; however, if this had been an administered double-blind test, it's possible the results would have said there was no discernible change.
It's a known fact that knowing the audio material well is a major advantage in blind tests. Another thing that helps a lot is to concentrate on small details, and identify the ones that change from one version to the other. I'd say the test had no flaws, it's more likely that with practice you understood what to listen for.

edit: Ultimately I think this test is good news for you, it proves that (amazingly enough) you can learn heh
Old 21st January 2013 | Show parent
  #6
Gear Addict
 
🎧 10 years
Quote:
Originally Posted by Ciozzi ➡️
It's a known fact that knowing the audio material well is a major advantage in blind tests. Another thing that helps a lot is to concentrate on small details, and identify the ones that change from one version to the other. I'd say the test had no flaws, it's more likely that with practice you understood what to listen for.

edit: Ultimately I think this test is good news for you, it proves that (amazingly enough) you can learn heh
Not everyone subject to double-blind tests has experience with the audio material beforehand. For example the tests where Ivor Tiefenbrun tried to tell the difference between straight analog and analog that had gone through digital conversion.

Or the Boston Audio Society tests that determined that people couldn't tell a difference between redbook and high-res audio.

I'm not saying that the results were incorrect, rather, I'm providing these as examples in which the test subjects didn't have prior experience with the material.

Perhaps a more accurate title would have been "A potential flaw concerning the reliability of ABX/Double-blind testing"
Old 21st January 2013
  #7
Lives for gear
 
sdelsolray's Avatar
 
🎧 15 years
Isn't familiarity with the program material a prerequisite for proper ABX testing?
Old 22nd January 2013 | Show parent
  #8
Lives for gear
 
Bob Ross's Avatar
 
🎧 15 years
Quote:
Originally Posted by musichascolors ➡️
Well, I used a program for ABXing. So I was pretty blind.
The official term is "single-blind": You administered the test to yourself, you knew in advance what the source materials were, you knew what you were supposed to be listening for... heck, technically that might not even qualify as a single-blind test, but it definitely wasn't a double-blind test!

If anything I think your experiment demonstrates why true double-blind ABX testing is important: Because otherwise your expectations corrupt the results.
Old 22nd January 2013
  #9
Gear Head
 
🎧 10 years
Before you start the blind testing in ABX software, you are allowed to train yourself with the two files, and you should!

You know which file is which and can listen for the critical parts that are telling.
Then, when you are certain you are hearing it, you can start the test.

(When you are not so certain, you can start it anyway; maybe the difference is unconsciously perceived, which will show up in the statistics as better than 50%.)

For example, when you put your attention on the upper highs, it's very likely that you are not registering changes in the bass, and vice versa. You have to know what to listen for.
Old 22nd January 2013
  #10
Gear Head
 
🎧 10 years
Blind testing means that you, the tested person, don't know which file is which.

Double-blind means that the tester who tests you also doesn't know which file is which (because the tester could unconsciously give away the result).

If you use ABX software alone, there is no need for double-blinding, because the software will certainly not give away the result in any way.
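As a rough illustration of why software alone is enough here, a self-administered ABX harness only needs to hide the A-or-B identity of X from the listener. A minimal sketch follows; the `listener` callback is a stand-in for actually auditioning the files, not a real audio API:

```python
import random

def run_abx(trials, listener, rng=random.Random(42)):
    """Minimal self-administered ABX loop: the software privately
    assigns X to A or B on each trial, so there is no human test-giver
    who could unconsciously leak the answer."""
    correct = 0
    for _ in range(trials):
        x = rng.choice("AB")   # hidden assignment, known only to the code
        guess = listener(x)    # listener "auditions" X and names A or B
        if guess == x:
            correct += 1
    return correct

# A listener who genuinely hears the difference scores perfectly:
print(run_abx(16, lambda x: x))  # 16
# One who can't just flips a coin and lands near half:
print(run_abx(16, lambda x: random.choice("AB")))
```

The deliberately silly `lambda x: x` "perfect listener" stands in for a person who always identifies X correctly; in a real harness the callback would play audio and collect the user's answer.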
Old 22nd January 2013
  #11
Lives for gear
 
Arksun's Avatar
 
1 Review written
🎧 10 years
Blind tests are as much about whether a difference is significant enough, as it is about spotting the difference itself. Looking at it that way, they are still very useful.
Old 23rd January 2013 | Show parent
  #12
Lives for gear
 
Bob Ross's Avatar
 
🎧 15 years
Quote:
Originally Posted by reflection ➡️
Before you start the blind testing in ABX software, you are allowed to train yourself with the two files, and you should!

You know which file is which

Clarification: you know which file is A and which file is B ...and you are indeed allowed to train yourself with the two files until you think you can differentiate A from B. (And then all you have to do is listen to X and say "Oh, that's A" or "Oh, that's B" and Bob's your uncle.*)

What you *can't* do in a strict double-blind test is know whether A (or B) is, say, the Lavry converter versus the Apogee converter. Or the vinyl record album versus the compact disc. As soon as you know what the sources are, the test ceases to be truly "blind".





*btw, one way to "cheat" (sic) at self-administered ABX tests...for whatever it's worth...is, instead of training yourself to differentiate A from B, simply listen to A ... a lot ...learn the **** out of A...and never listen to B.

Then when you listen to X, you don't have to worry about "Is that A or is that B?" ...you simply have to worry about "is that what I've been listening to all along, or something different?"
Old 23rd January 2013
  #13
Motown legend
 
Bob Olhsson's Avatar
 
🎧 15 years
ABX tests sound like a scientific idea but really aren't because you need to learn what details to listen for or you'll always get a meaningless random result. This is because of the physiology of hearing.
Old 23rd January 2013
  #14
Registered User
 
Rick Sutton's Avatar
 
🎧 15 years
I do what I think would be called self-administered blind tests to check gear applications... this thread has answered most of my questions about the term ABX/double-blind testing.
The one thing I'm still dim on is the "X".
Anyone want to clarify what X is and why it is necessary?
Old 23rd January 2013 | Show parent
  #15
Gear Guru
 
theblue1's Avatar
 
🎧 15 years
I don't think there's a problem with ABX or any other double-blind testing -- but there may well be problems with specific flawed designs or implementations, and certainly one danger is that one may still misinterpret the raw data.

ABX testing is simply one perceptual testing technique and you have to have a good experimental design, or you will get inconclusive or unusable results. And, of course, everything will rely on the tester's analytical and logical skills in both design and interpretation of results.

Quote:
Originally Posted by musichascolors ➡️
Not everyone subject to double-blind tests has experience with the audio material beforehand. For example the tests where Ivor Tiefenbrun tried to tell the difference between straight analog and analog that had gone through digital conversion.

Or the Boston Audio Society tests that determined that people couldn't tell a difference between redbook and high-res audio.

I'm not saying that the results were incorrect, rather, I'm providing these as examples in which the test subjects didn't have prior experience with the material.

Perhaps a more accurate title would have been "A potential flaw concerning the reliability of ABX/Double-blind testing"
One can certainly 'learn the material' -- but if the difference was not there and audible in the first place, one could not differentiate.

But, to be sure, in the design of experiments testing perceptual thresholds, care is typically supposed to be taken at the design stage to mitigate or minimize subjects' learning the test material.

So, there are valid considerations on both 'sides' of that issue.
Quote:
Originally Posted by sdelsolray ➡️
Isn't familiarity with the program material a prerequisite for proper ABX testing?
Some, for sure. It's the nature of the ABX comparative test paradigm.

Quote:
Originally Posted by Bob Ross ➡️
The official term is "single-blind": You administered the test to yourself, you knew in advance what the source materials were, you knew what you were supposed to be listening for... heck, technically that might not even qualify as a single-blind test, but it definitely wasn't a double-blind test!

If anything I think your experiment demonstrates why true double-blind ABX testing is important: Because otherwise your expectations corrupt the results.
I get what you're saying -- and those are valid considerations. You did put quotes around "single-blind" -- but I believe that in a single-blind trial, the test giver knows all the answers (that is, which is which), not just what the test materials will be. Still, for some testing, that might be a concern.

Quote:
Originally Posted by reflection ➡️
Blind testing means that you, the tested person, don't know which file is which.

Double-blind means that the tester who tests you also doesn't know which file is which (because the tester could unconsciously give away the result).

If you use ABX software alone, there is no need for double-blinding, because the software will certainly not give away the result in any way.
Definitions aside, I'm with you on that basic gist.

Quote:
Originally Posted by Bob Olhsson ➡️
ABX tests sound like a scientific idea but really aren't because you need to learn what details to listen for or you'll always get a meaningless random result. This is because of the physiology of hearing.
I can't say I fully agree.

Double blind testing is a norm in perceptual testing. ABX tests (at least when set up by a second party) are simply automated double blind perception tests.

It's a testing approach with limitations (like all of them), to be sure.

Again, the problem is not with the tests, but more often poor design or interpretation of results, trying to draw conclusions that the data don't support.

Now, I'm acutely aware of the variability of the human auditory system from moment to moment, as well as the extreme variations that may affect us from changes of position in acoustically untreated environments.

But that is how our bodies work, that is the world we live in.

It can be understandably frustrating when one has the impression of a significant difference in normal listening between two slight variants of program material and then decides to test that notion with ABX testing and hears none. I've been there.

Still, I've had the opposite experience, and not just once.

Now, were there differences, if subtle, in the former cases? Undoubtedly. Were the differences in the latter cases significant? Probably not, either.

As I was trying to suggest, the 'problem' isn't so much that the technique isn't useful at times, it's that we should be careful not to misinterpret or mis-weight the results.

BTW, to those who try to dismiss double blind perceptual testing out of hand or suggest that the 'problem' with double blind perceptual testing is that it puts a lot of pressure on the test taker or that the set of trials is too chronologically limited to be of value -- there is no rule that says one cannot design testing that goes on for an indefinite period -- indeed, it's not uncommon for certain tests to be administered over years, even decades.

And there's nothing to stop an individual from doing hundreds or thousands of trials over months or even years, either.

That said, one might want to refer to the prior discussions about the 'dangers' implicit for some sorts of testing in learning the material. Depending on what you're testing for, a little variety might not just be the spice of life, it might be a strategic precept that enhances the value of your data and the certainty of your conclusions.

That said, I never trust certainty. Ever.
Old 23rd January 2013
  #16
Gear Guru
 
🎧 15 years
double-blind ABX is to testing what democracy is to forms of government

it's the worst, except for everything else that has been tried

I myself have no issue with "practice", by all means, practice away, we should be seeking to widen the range of perception as much as possible. And as to foreknowledge of the elements and parameters being tested for - it totally depends on the goals of the testing.

if I am a researcher trying to learn something scientific about the ability of human beings to perceive this or that, then obviously the test subjects should not know what I am "looking for". For example, I might be researching "Order Bias" - I can't very well tell the subjects I am going to play them the same file twice, now can I?

However, if I am an engineer trying to decide if I should buy the Apogee or the Lavry converter, I obviously already know the parameters, but a double-blind ABX is still useful to me. If I can't reliably tell the difference, I can buy whichever costs less and feel just fine about it.

If I can reliably tell the difference - the most important information is that fact. If my preferences for which one is 'best' are biased, based on the name, or the cost, so be it. You never know which way your subconscious is going to cut, so there's no point trying to second-guess it.

Another plus of ABX is the 'forced choice' feature. Forced choice can sometimes reveal statistically significant differences even when the subject THINKS he is "probably only guessing". Instead of hiding the "fringe" perceptions, forced-choice ABX has the potential to discover fringe perceptions people did not even realize existed.
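That "fringe perception" point lends itself to a toy simulation. Below, a hypothetical listener is right on only 60% of trials, i.e. subjectively "mostly guessing", yet forced choice over enough trials still produces a score that pure guessing rarely reaches. All numbers here are illustrative assumptions, not measured data:

```python
import random
from math import comb

def weak_listener_score(trials, hit_rate, seed=1):
    """Simulate a forced-choice run for a listener whose per-trial
    accuracy is `hit_rate` (a made-up figure for illustration)."""
    rng = random.Random(seed)
    return sum(rng.random() < hit_rate for _ in range(trials))

def guess_tail(correct, trials):
    # chance of scoring at least this well by flipping a coin
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

score = weak_listener_score(200, 0.60)
print(score, guess_tail(score, 200))
```

With these assumptions, the simulated score typically lands near 120 of 200, a level a coin-flipper reaches only a fraction of a percent of the time, so the "barely perceiving" listener shows up clearly in the statistics.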
Old 23rd January 2013 | Show parent
  #17
Lives for gear
 
🎧 10 years
Quote:
Originally Posted by Rick Sutton ➡️
I do what I think would be called self administered blind tests to check gear applications......this thread has answered most of my questions as to the term ABX/Double-blind testing.
The one thing I'm still dim on is the "x".
Anyone want to clarify what X is and why it is necessary?
Just read about it:

ABX test - Wikipedia, the free encyclopedia

Quote:
Originally Posted by joeq ➡️
If I can't reliably tell the difference, I can buy whichever costs less and feel just fine about it.
To expand on this, perhaps for the benefit of some ... If you can't reliably hear a difference in a given test, it simply means that you cannot reject the null hypothesis based on that test ... doesn't necessarily mean that you couldn't learn to hear a difference later on for example. May seem like a small nitpick ... but it isn't really; ABX tests cannot prove there is no audible difference. The test results define a probability, not a definite yes or no.
Old 23rd January 2013 | Show parent
  #18
Lives for gear
 
🎧 10 years
Quote:
Originally Posted by Bob Olhsson ➡️
ABX tests sound like a scientific idea but really aren't because you need to learn what details to listen for or you'll always get a meaningless random result. This is because of the physiology of hearing.
I don't think you'd need much preparation if the tracks in the test were a flute and a fart. The reason one performs a blind test is to find out whether they can actually hear a difference between two similar-sounding tracks. The result is only meaningless if you randomly push buttons; if you try to guess without taking your time and get 50%, it means that without paying careful attention you're not reliably able to tell a difference.
Old 23rd January 2013 | Show parent
  #19
Lives for gear
 
🎧 10 years
Quote:
Originally Posted by -tc- ➡️
To expand on this, perhaps for the benefit of some ... If you can't reliably hear a difference in a given test, it simply means that you cannot reject the null hypothesis based on that test ... doesn't necessarily mean that you couldn't learn to hear a difference later on for example. May seem like a small nitpick ... but it isn't really; ABX tests cannot prove there is no audible difference. The test results define a probability, not a definite yes or no.
If you go by this principle, nothing in the world could be considered scientifically proven. Even the laws of physics are deduced from empirical observations and based on the probability of a certain event repeating itself, yet in real-world scenarios we consider them proven fact.
If you drop a stone, the stone falls. If you do it again, the stone falls again. If you do it a hundred times, the stone keeps falling.
Normal person's deduction: if I drop a stone, the stone falls.
Possible audio engineer's deductions:
a- This is a high-end stone, it can't possibly drop like that low-end stone; there must be something wrong with gravity today
b- That guy performing the test didn't really know how to let go of the stone... noobs...
c- This stone dropping here is useless; it tells me nothing about how my own stones will drop in my studio.
d- The test proves nothing; they just dropped it a hundred times. With a different moon and humidity things could have gone differently.

This is what I see every time I read this board

BTW no offence to anyone, just my opinion.
Old 23rd January 2013 | Show parent
  #20
Gear Guru
 
theblue1's Avatar
 
🎧 15 years
Quote:
Originally Posted by -tc- ➡️
Just read about it:

ABX test - Wikipedia, the free encyclopedia



To expand on this, perhaps for the benefit of some ... If you can't reliably hear a difference in a given test, it simply means that you cannot reject the null hypothesis based on that test ... doesn't necessarily mean that you couldn't learn to hear a difference later on for example. May seem like a small nitpick ... but it isn't really; ABX tests cannot prove there is no audible difference. The test results define a probability, not a definite yes or no.
You're much better (not to mention more concise) than I am at saying this part of what I was getting at.
Old 23rd January 2013 | Show parent
  #21
Gear Guru
 
theblue1's Avatar
 
🎧 15 years
Quote:
Originally Posted by Ciozzi ➡️
If you go by this principle, nothing in the world could be considered scientifically proven. Even the laws of physics are deduced from empirical observations and based on the probability of a certain event repeating itself, yet in real-world scenarios we consider them proven fact.
If you drop a stone, the stone falls. If you do it again, the stone falls again. If you do it a hundred times, the stone keeps falling.
Normal person's deduction: if I drop a stone, the stone falls.
Possible audio engineer's deductions:
a- This is a high-end stone, it can't possibly drop like that low-end stone; there must be something wrong with gravity today
b- That guy performing the test didn't really know how to let go of the stone... noobs...
c- This stone dropping here is useless; it tells me nothing about how my own stones will drop in my studio.
d- The test proves nothing; they just dropped it a hundred times. With a different moon and humidity things could have gone differently.

This is what I see every time I read this board

BTW no offence to anyone, just my opinion.
LOL

Well, we're talking about degrees of probability, after all. It's a continuum from doubt to certainty, and Science (capital-S Science, the over-arching sociocultural endeavor) can only move us along that continuum a little bit at a time, for the most part. It is the aggregate of scientific endeavor, when carefully collated and analyzed, that leads us to whatever degree of certainty we may attain.

But, for sure, when making practical decisions in the real world in a given time frame, you end up having to make decisions on available data.

And that's why having a good pool of available data is so important -- pure research isn't just some goofy, pie-in-the-sky dreamer going, Gee I wonder what would happen if... It's a foundation upon which we can base practical decisions today and for refining scientific understanding going forward.
Old 23rd January 2013 | Show parent
  #22
Gear Guru
 
🎧 15 years
Quote:
Originally Posted by -tc- ➡️
If you can't reliably hear a difference in a given test, it simply means that you cannot reject the null hypothesis based on that test ... doesn't necessarily mean that you couldn't learn to hear a difference later on for example.

If I try to lift a car, and I can't, that doesn't necessarily mean I couldn't bulk up and be able to lift a car "later on". Nevertheless the test has served its purpose well. If I went into the test thinking it was going to be easy, I have learned what I needed to learn about my own abilities. And I have learned a little something about cars!!

If you are trying to expand on the possible, then practice is a given. As a matter of scientific knowledge, I am willing to accept the "best performance by a human being" as a benchmark. But if after practice, I personally still don't hear the difference, then I have to admit there is no "audible" difference for me.

I saw a guy on TV pick up a car, but I decided not to throw away my jack.

Quote:
ABX tests cannot prove there is no audible difference. The test results define a probability, not a definite yes or no.
Science does not require us to prove a negative

In any case, it would be more precise to say ABX cannot prove there is "no difference whatsoever". Certainly a sensitive technological test might show a tiny residual that no human is going to be able to tell no matter how much practice he has.

But an "audible difference" is just that - a difference that is audible. I am more interested in these tests as a practical tool for myself. To distinguish between borderline perception and imagination for example. To make purchase decisions unbiased by price or brand name. To abandon onerous workflows that do not actually have sonic benefits. To kill the paranoia that "my gear is playing tricks on me".

We are so often making extremely subtle distinctions and often going on faith that these distinctions are correct. Even an imperfect test that you don't quite trust is a good reality check for where the line is drawn.
Old 23rd January 2013
  #23
Gear Guru
 
theblue1's Avatar
 
🎧 15 years
Can we agree that the procedural standards and criteria for making a practical, personal, one time decision may, at times, be more lax than those required to derive and evaluate experimental results that may potentially be added to the body of scientifically derived knowledge?

I just hate to see all us science-y types fussing.

Old 23rd January 2013
  #24
Lives for gear
 
WinnyP's Avatar
 
2 Reviews written
🎧 5 years
Quote:
Originally Posted by Ciozzi ➡️
If you go by this principle, nothing in the world could be considered scientifically proven. Even the laws of physics are deduced from empirical observations and based on the probability of a certain event repeating itself, yet in real-world scenarios we consider them proven fact.
If you drop a stone, the stone falls. If you do it again, the stone falls again. If you do it a hundred times, the stone keeps falling.
Normal person's deduction: if I drop a stone, the stone falls.
Possible audio engineer's deductions:
a- This is a high-end stone, it can't possibly drop like that low-end stone; there must be something wrong with gravity today
b- That guy performing the test didn't really know how to let go of the stone... noobs...
c- This stone dropping here is useless; it tells me nothing about how my own stones will drop in my studio.
d- The test proves nothing; they just dropped it a hundred times. With a different moon and humidity things could have gone differently.

This is what I see every time I read this board

BTW no offence to anyone, just my opinion.
Post of the week!
Old 23rd January 2013
  #25
Gear Guru
 
kennybro's Avatar
 
3 Reviews written
🎧 10 years
If the guitar harshness in the untreated version was being smoothed over in the treated version, wouldn't that just add an extra strong clue as to which track was which?
Old 23rd January 2013
  #26
Gear Guru
AB, AB-X, ABCD-Z tests are good at one thing, selecting a personal, euphonic, subjective preference.

Then next week your opinion changes so you do it all again just so you can disagree with yourself.

The best "double-blind" test I ever did was done over at Wonderland Studios in LA. Stevie and another blind guitarist were listening, they didn't agree on anything.
Old 23rd January 2013
  #27
Gear Guru
 
2 Reviews written
🎧 10 years
Maybe someone already said this, but part of the issue is what are you testing FOR? If the test is whether the average person can tell the difference between two audio files, then just letting average people listen without any prior training would be appropriate. If the test is whether trained listeners can, using all their learned tricks, reliably pick out one or the other, then that's a whole other test.

The two tests would be relevant to different people for different reasons. The former might be useful for picking a final delivery format, while the latter might be more appropriate for selecting production tools.
Old 23rd January 2013 | Show parent
  #28
Gear Guru
 
🎧 15 years
Quote:
Originally Posted by kennybro ➡️
If the guitar harshness in the untreated version was being smoothed over in the treated version, wouldn't that just add an extra strong clue as to which track was which?
I think he is saying that before he tuned in on what to "listen for" the ABX was not showing him the difference.

IMO, that is not a flaw in the concept of ABX testing itself. In fact, if it wasn't for the impetus of the test, musichascolors might not have done the "practice" part to Learn this difference. And if not for the confirmation of his subsequent "success" in the test (to prove he was NOW getting it) his practice might have been misdirected.

Of course none of it 'solves' the issue of Betterness. Maybe the 'smoothed' guitar is "insipid" and the 'harsh' guitar "cuts through the mix"? No test can tell you that, but at least now you can be confident that you are hearing the difference.
Old 23rd January 2013
  #29
Motown legend
 
Bob Olhsson's Avatar
 
🎧 15 years
The thing that offends me is that ABX tests are held up as representing some kind of scientific "proof." That's pseudo-science which in my book makes it an even bigger lie than claiming magical properties.

I'm not saying blind tests aren't a valuable tool or that expectation bias isn't always a real challenge but the fact that your lens is too long and fast to easily find a bee in the room doesn't mean that there aren't any bees around! Complete novices have found artifacts on their first listen that passed numerous panels of double blind tests. The very fact of being some kind of an "expert listener" disqualifies us out of the box on a certain level because we have the greatest expectations of anybody about audio.
Old 24th January 2013 | Show parent
  #30
Gear Addict
 
Amun Ra's Avatar
 
🎧 10 years
Quote:
Originally Posted by Bob Olhsson ➡️
The thing that offends me is that ABX tests are held up as representing some kind of scientific "proof." That's pseudo-science which in my book makes it an even bigger lie than claiming magical properties.

I'm not saying blind tests aren't a valuable tool or that expectation bias isn't always a real challenge but the fact that your lens is too long and fast to easily find a bee in the room doesn't mean that there aren't any bees around! Complete novices have found artifacts on their first listen that passed numerous panels of double blind tests. The very fact of being some kind of an "expert listener" disqualifies us out of the box on a certain level because we have the greatest expectations of anybody about audio.
You're really not making much sense here, Bob. The point of the blind or double-blind ABX test is to find out whether the participant can reliably differentiate A from B. It is a widely accepted scientific method, and it is immune to many biases. If a study performs enough ABX tests with a given result, the confidence level of that particular study increases. Of course one study is not enough to label ANYTHING as "proven", but as more significant studies are done, the consensus in the scientific community broadens - exactly as in any other field of cognitive science.

Why is this a problem?
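The "confidence grows with more trials" point is easy to see numerically. Under the usual null hypothesis (pure guessing), the same 60% hit rate means very different things at different trial counts; the trial counts below are arbitrary examples:

```python
from math import comb

def guess_tail(correct, trials):
    """P(at least `correct` hits in `trials` trials) for a pure guesser."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

for correct, trials in [(6, 10), (60, 100), (600, 1000)]:
    print(f"{correct}/{trials}: p = {guess_tail(correct, trials):.3g}")
# 6/10     -> p ~ 0.38  (indistinguishable from guessing)
# 60/100   -> p ~ 0.028 (unlikely to be luck)
# 600/1000 -> p vanishingly small (essentially conclusive)
```

Same hit rate, three very different conclusions, which is why a longer run of trials carries so much more evidential weight than a short one.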