Logic behind tube vs regular mic sound difference?
Old 2 weeks ago | Show parent
  #31
Lives for gear
 
jaddie's Avatar
 
🎧 10 years
Quote:
Originally Posted by bowzin ➡️
Sometimes yes, for example there are transistors that are meant to plug in to tube sockets.
No, there are no transistors that can plug into tube sockets. There are transistor circuits designed to plug in and substitute for specific tubes in certain applications.
Quote:
Originally Posted by bowzin ➡️
There's also the Nuvistor, but I don't think that qualifies as a true transistor, not sure.
A Nuvistor is a kind of tube. It's not a transistor at all.

Quote:
Originally Posted by bowzin ➡️
No not really, but yes with major modifications. For example see the Innertube Audio drop in mods for U87ai which replaces the guts with a tube circuit, and the IO Audio/"Max Mod" drop in mod that replaces the guts of a TLM67 or U87ai with a U67 tube mic circuit. There's also a U47 FET vs. U47. None of which sound the same. To make a solid state mic into a tube mic, you'd need to add the power supply for one thing, so dramatic changes.
Replacing the "guts", the entire circuit with a different one using a different type of active device implies something far more extensive that what the OP was asking for.
Old 2 weeks ago | Show parent
  #32
Quote:
Originally Posted by Bushman ➡️
This is a ridiculous thread.
Far from it. I'm trying to get objective data on the topic. Not subjective opinions. I.e. science vs. gear religion.


Quote:
Originally Posted by Bushman ➡️
Every electronic component and physical component contributes to the sound of a mic.
My point exactly. This is why I'm asking how much the tube TRULY affects the sound itself. There is currently no way to test this with existing mics that you can buy, because you don't know what kind of design choices the engineers made while developing the tube/SS mic. The engineers might have a conscious bias towards a different sound they want out of tube mics when designing them. Thus you can't compare two identical-looking mics advertised as one having SS and the other a tube inside; there could be much more going on with the sound-modifying factors inside the amp sections and the rest of the electrical circuitry.

This can't be tested without getting involved in the actual electronics and creating proper tests.


Quote:
Originally Posted by Bushman ➡️
What the OP proposes in his first post is a rigorous, fair blind comparison test of various mics. That is done a lot.
So... what is this thread trying to understand?
The exact amount and type of coloring the tube itself causes to the recorded signal. I.e. is the coloration even large enough to be perceived by a human? In other words: does the coloring of tube mics come from all the other design choices the engineers made, rather than from the tube itself? If not, then how much of the sound comes from the tube itself? This is a serious question which people seem reluctant to answer.
Old 2 weeks ago | Show parent
  #33
Quote:
Originally Posted by sdelsolray ➡️
...and these forums are filled with claims based on anecdotal evidence solely sourced from the opinions of one person.
Exactly! That's the exact reason I'm here: trying to find out the truth instead of opinions. I want scientific facts. Everything else is opinion, which can easily be influenced by psychological effects and biased views. Those don't have a place in this thread.
Old 2 weeks ago | Show parent
  #34
Quote:
Originally Posted by jaddie ➡️
The electronics are a factor, but the biggest contributor to any mic's sound is the acoustic and transducer factors.
This is my current view on the topic as well.


Quote:
Originally Posted by jaddie ➡️
Look up convolution and impulse response. Many devices have been modeled quite well. Mics are especially tricky because you can’t determine what part of the signal is affected by the arrival vector, but many classic mics have been successfully modeled on their primary axis.
I use convolution daily in the DSP algorithms I develop for music gear. I'm familiar with the algorithm/technique.

Convolution & impulse responses have the flaw that they don't really model any of the non-linearities. I.e. they only model the delays/filtering of the sound but they don't help simulating rest of the stuff, i.e. saturation/distortion of the signal.
Old 2 weeks ago | Show parent
  #35
Lives for gear
 
jaddie's Avatar
 
🎧 10 years
Quote:
Originally Posted by Aural Endeavors ➡️
Also, just because it can be measured doesn't mean it can be heard,
Absolutely correct! But people here very often invert the statement to "you can't measure everything that is heard", which is absolutely incorrect, and often stems from a personal or related preference for a device that has higher distortion than another. "It measures worse, but sounds better, so the measurements don't tell the story." That's not actually how it works; the trick is knowing how to measure, what to measure, and, the real key, how the data you get relates to audibility. You won't find that in a spec sheet. That doesn't mean the specs are meaningless.
Quote:
Originally Posted by Aural Endeavors ➡️
although many here often claim to outside of a properly set up scientific double-blind test. Sure, some things are obvious enough not to warrant such a test, but once the differences are subtle enough, it's worth questioning one's own human biases, unless of course they're too proud of their golden ears and empty wallet.
Agreed. Biases alone can create a perceived difference when none exists. Part of calibrating an ABX/DBT is to test with a known null result, all choices deliberately identical. It's been found that during that test configuration you can introduce an artificial expectation bias and skew the null results significantly. That type of test confirms that bias will skew non-blind testing.
Old 2 weeks ago | Show parent
  #36
Lives for gear
 
jaddie's Avatar
 
🎧 10 years
Quote:
Originally Posted by kraku ➡️
Far from it. I'm trying to get objective data on the topic. Not subjective opinions. I.e. science vs. gear religion.
Yes. Unfortunately, when we go for objective data, people get offended. It's part of science vs myth.

Quote:
Originally Posted by kraku ➡️
This is why I'm asking how much the tube TRULY affects the sound itself. There is currently no way to test this with existing mics that you can buy, because you don't know what kind of design choices the engineers made while developing the tube/SS mic. The engineers might have a conscious bias towards a different sound they want out of tube mics when designing them. Thus you can't compare two identical-looking mics advertised as one having SS and the other a tube inside; there could be much more going on with the sound-modifying factors inside the amp sections and the rest of the electrical circuitry.

This can't be tested without getting involved in the actual electronics and creating proper tests.
There are some problems here. As you are realizing, you can't have two mics, identical in every aspect, except for the active gain element. Tubes require very different surrounding support circuitry than SS devices. Designers always make compromises just to make their design successful and manufacturable. The type of active device clearly has an impact on that, and in a mic, perhaps more than in some other device.

There actually has been quite a bit of work on this, but not in mics, in power amps. The first problem is, there were too many generalizations being made. Things boiled down to "all tube amps sound better/different than all SS amps". That was shown not to be true fairly early on. Even David Manley in "The Vacuum Tube Logic Book" describes how different tube topologies can sound different from each other, and that some even sound very much like SS amps.

The next question (based on the assumption that a tube does something magical and unquantifiable) was, can a SS amp be made to sound like a tube amp? To answer that, there had to be some understanding of what the "tube sound" actually was, then an attempt to replicate it. Perhaps the earliest and most classic attempt was the Stereophile Bob Carver Challenge from back in 1985. You can read the article here. Or, to save some time: with no convolution available to him, and only a stock of parts and rather conventional analysis equipment, Carver replicated the expensive reference tube amp with one of his sub-$1000 SS amps. Now, if that was done (under quite a bit of pressure and not in ideal conditions) in 1985, what would make anyone think it couldn't be done now?
Quote:
Originally Posted by kraku ➡️

The exact amount and type of coloring the tube itself causes to the recorded signal. I.e. is the coloration even large enough to be perceived by a human?
The answer, unfortunately, is "it depends". Designs get very specific, and the compromises very individualized. Since you can't practically break a condenser mic into pieces to the point where you could ABX different active devices without swapping entire amplifier circuits, you may not be able to tell definitively. However...
Quote:
Originally Posted by kraku ➡️
In other words: does the coloring of tube mics come from all the other design choices the engineers made, rather than from the tube itself? If not, then how much of the sound comes from the tube itself?
In general, when it comes to the character or color of a microphone, the big factors are the acoustic design and the transducer itself. Electronics can be signal modifiers, but they don't have the same issues as an acoustic transducer in an acoustic baffle. Think of it in terms of modeling. You can model an amplifier by capturing its gain, noise floor, and (to use Carver's term) transfer function: a model of its frequency response, phase response and distortion under load. But to model a transducer in an acoustic baffle, you now have to go deeply into the time domain and look at what happens after the initial stimulus, how it changes and for how long, and what distortions are generated to what level. The possible signal modifiers are bigger, and more influential. You can see that in an actual mic response curve vs an amplifier response curve. An amplifier will generally be very smooth and ruler-flat over a large portion of the spectrum, even without post-processing for smoothing. For that to occur in a mic is extremely rare. Mostly, they plot as jagged lines with pretty significant peaks and dips. A mic response graph will always include smoothing just to make it more presentable. Then you look at response in the time domain, and things don't get better for the mic. There's quite a bit, in some mics, going on after the first arrival.

All of this points to the acoustic design and transducer design as the big factors in color, with the amplifier performing a supporting role. How much of either? How would you even scale that if you could measure it?
Quote:
Originally Posted by kraku ➡️
This is a serious question which people seem reluctant to answer.
Of course it is, and yes they are. It comes very close to an Emperor's New Clothes kind of investigation. People love their mics and myths, and whatever sound they get. Sadly, if you replicated the sound of someone's favorite tube mic with a brand new SS mic, the vintage mic lover would probably still hate it because of what it is, perhaps what it represents. It's probably why, even though there are convolution-generated models of quite a few classic mics (as plug-ins), they're not the hot items that the real vintage mics are. That, and of course, you can't simulate anything but the zero axis, even though that's most of it.

And in the end, why would you want to take their fun away? You wouldn't, but you might want to make a mic that has all the color of a vintage tube mic but has a price that is more affordable. And that would be a good cause.
Old 2 weeks ago | Show parent
  #37
Quote:
Originally Posted by jaddie ➡️
And in the end, why would you want to take their fun away? You wouldn't, but you might want to make a mic that has all the color of a vintage tube mic but has a price that is more affordable. And that would be a good cause.
This is what I'm usually aiming for with my technical investigations. With proper objective knowledge, one might be able to design gear with performance equal to well-respected cult classics, but at a fraction of the cost. I.e. use only those technical details in your design which truly matter for the results; the rest can be designed differently.
Old 2 weeks ago
  #38
One way to test whether the tube affects the sound enough that the signal change is audible to human perception:

1. Send digitally created test signal through the circuitry you want to test.
2. Take a long FFT of the original and resulting signals.
3. Calculate the difference of those frequency space signals.
4. Calculate the total amount of energy of the signal difference.

The total energy of the signal difference compared to the original signal is X decibels.
We can then compare that decibel value against scientific data on how low in level a signal has to be before humans can no longer perceive it.

To make this test better, we can take into account the fact that human hearing has different sensitivity at different frequencies:
We could scale the difference signal's frequency spectrum by the hearing-sensitivity curve before calculating its total energy.
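Roughly like this, as a numpy sketch (device_under_test is just a placeholder for a real loop through the hardware, and the A-weighting curve is only a crude stand-in for a proper hearing-sensitivity model):

Code:
# Sketch of steps 1-4. "device_under_test" is a placeholder for a real loop
# through the hardware; here it's just a soft saturator so the code runs.
import numpy as np

fs, n = 48000, 2**18
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 1000 * t)                 # 1. digitally created test signal

def device_under_test(sig):                      # placeholder for the real circuitry
    return np.tanh(1.5 * sig) / 1.5              # unity small-signal gain, soft saturation

y = device_under_test(x)

win = np.hanning(n)
X = np.fft.rfft(x * win)                         # 2. long FFTs of original and result
Y = np.fft.rfft(y * win)
freqs = np.fft.rfftfreq(n, 1 / fs)

diff = Y - X                                     # 3. difference in frequency space

def a_weight_db(f):                              # crude stand-in for a hearing-sensitivity
    f = np.maximum(f, 1.0)                       # curve (standard A-weighting formula)
    ra = (12194**2 * f**4) / ((f**2 + 20.6**2) *
          np.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2)) * (f**2 + 12194**2))
    return 20 * np.log10(ra) + 2.0

w = 10 ** (a_weight_db(freqs) / 20)              # optional frequency weighting

energy = lambda spec, wt=1.0: np.sum(np.abs(spec * wt) ** 2)   # 4. total energy

print("unweighted difference vs. original: %.1f dB"
      % (10 * np.log10(energy(diff) / energy(X))))
print("weighted difference vs. original:   %.1f dB"
      % (10 * np.log10(energy(diff, w) / energy(X, w))))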
Old 2 weeks ago | Show parent
  #39
Gear Maniac
Quote:
Originally Posted by kraku ➡️
science vs. gear religion.
It sounds like gearspace vs gearslutz
Old 2 weeks ago | Show parent
  #40
Gear Guru
 
kennybro's Avatar
 
3 Reviews written
🎧 10 years
Quote:
Originally Posted by Bushman ➡️
This is a ridiculous thread.
In one sense, it's a theoretical question that has no answer.

In another sense, gear manufacturers base their research, development and marketing on measurement; test after test while developing a new mic, tests for quality control in manufacture, and then the required line after line of "specs," in every mic description and advertisement. Then again, what choice do they have?

But I don't think I know anyone who ever purchased a mic based on specs. Mostly, purchases are made based on reviews, personal experience, a mic's reputation, or advice of people you trust who have used the mic. Does anyone here bother with specs when deciding a mic purchase?
Old 2 weeks ago
  #41
Gear Guru
 
Only Mr. Potato. But he's only a spectator.
Chris
Old 2 weeks ago | Show parent
  #42
Lives for gear
 
jaddie's Avatar
 
🎧 10 years
Quote:
Originally Posted by kraku ➡️
One way to test whether the tube affects the sound enough that the signal change is audible to human perception:

1. Send digitally created test signal through the circuitry you want to test.
2. Take a long FFT of the original and resulting signals.
3. Calculate the difference of those frequency space signals.
4. Calculate the total amount of energy of the signal difference.

The total energy of the signal difference compared to the original signal is X decibels.
Add 5. Repeat the test, stepping level over a 20dB or so range. There's your nonlinearity profile. Sort of.
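As a quick sketch of what that stepping looks like (the tanh stage is just a placeholder nonlinearity, not any particular device):

Code:
# Step 5: repeat the spectral-difference measurement while stepping the drive
# level, to expose level-dependent (nonlinear) behaviour.
import numpy as np

fs, n = 48000, 2**16
t = np.arange(n) / fs
dut = lambda s: np.tanh(1.5 * s) / 1.5           # placeholder nonlinearity, unity small-signal gain

for level_db in range(-20, 1, 4):                # -20 dB ... 0 dB drive in 4 dB steps
    x = 10 ** (level_db / 20) * np.sin(2 * np.pi * 1000 * t)
    X, Y = np.fft.rfft(x), np.fft.rfft(dut(x))
    diff_db = 10 * np.log10(np.sum(np.abs(Y - X) ** 2) / np.sum(np.abs(X) ** 2))
    print(f"drive {level_db:+3d} dB -> difference energy {diff_db:6.1f} dB")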

Quote:
Originally Posted by kraku ➡️
We can then compare that decibel value against scientific data on how low in level a signal has to be before humans can no longer perceive it.
Mmm....well...it's not quite that simple, you have to account for masking.
Quote:
Originally Posted by kraku ➡️
To make this test better, we can take into account the fact that human hearing has different sensitivity at different frequencies:
We could scale the difference signal's frequency spectrum by the hearing-sensitivity curve before calculating its total energy.
What you're trying to get at is an audibility factor. It's not that simple. The audibility of a change in frequency response is based on the total area involved in the change, with peaks slightly more audible than dips. A small change over a wide bandwidth is easily heard, but a large change over a small area is not.

When it comes to the audibility of distortion, though, it's very difficult to model human response. Simple harmonic distortion has the audibility characteristic that even-order harmonics are less obvious than odd-order, high order more audible than low order, and the specific nonlinearity that generates the distortion affects dynamic audibility. Hard clipping is more audible per dB above onset than soft clipping. And the whole mess is affected by the spectrum of the signal itself, with pure sine wave producing hyper-sensitive results. Then you can move into IMD, various mechanisms and spectrums, and the rules change again. Classic IMD tests that stimulate with a low frequency tone mixed at a ratio with a high frequency tone expose a particularly audible type of IMD. But various types of two-tone tests reveal other mechanisms. The grand-daddy of all IMD tests is the "spectral contamination" test proposed by Dean Jensen, with a hundred or so high frequency tones mixed equally, and then looking for products between the tones, and out of the spectrum of the test signal, which reveals yet another form of nonlinearity that's been elusive for decades.
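To show the general shape of that kind of test, here's a rough sketch of the multi-tone idea (not Jensen's actual procedure; the cubic term is a placeholder nonlinearity):

Code:
# Rough sketch of a multi-tone ("spectral contamination" style) test: excite
# with ~100 equal-amplitude high-frequency tones placed on exact FFT bins, then
# measure the energy that shows up on bins where no tone was placed.
import numpy as np

fs, n = 48000, 2**16
bin_hz = fs / n
t = np.arange(n) / fs
rng = np.random.default_rng(1)

tone_bins = np.arange(int(8000 / bin_hz), int(16000 / bin_hz), 109)   # ~100 tones, 8-16 kHz
phases = rng.uniform(0, 2 * np.pi, len(tone_bins))
x = sum(np.cos(2 * np.pi * b * bin_hz * t + p) for b, p in zip(tone_bins, phases))
x /= np.max(np.abs(x))

dut = lambda s: s + 0.02 * s**3                  # placeholder: a weak odd-order nonlinearity
Y = np.abs(np.fft.rfft(dut(x)))

mask = np.zeros(len(Y), dtype=bool)
mask[tone_bins] = True                           # bins carrying the stimulus tones

contamination_db = 10 * np.log10(np.sum(Y[~mask] ** 2) / np.sum(Y[mask] ** 2))
print("products outside the stimulus tones: %.1f dB re the tones" % contamination_db)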

So item 1 in your list becomes very, very important. For expediency, looking for spectral differences with a complex stimulus stepped over a level range would probably do the trick. But that's just me giving it 5 minutes of thought.
Old 2 weeks ago | Show parent
  #43
Gear Guru
 
kennybro's Avatar
 
3 Reviews written
🎧 10 years
Quote:
Originally Posted by chessparov2.0 ➡️
Only Mr. Potato. But he's only a spectator.
Chris
Who is this Mr Potato guy?
Old 2 weeks ago
  #44
Lives for gear
 
DougS's Avatar
 
🎧 5 years
Stupid thread.
Old 2 weeks ago | Show parent
  #45
Lives for gear
Quote:
Originally Posted by kraku ➡️
One way to test whether the tube affects the sound enough that the signal change is audible to human perception:

1. Send digitally created test signal through the circuitry you want to test.
2. Take a long FFT of the original and resulting signals.
3. Calculate the difference of those frequency space signals.
4. Calculate the total amount of energy of the signal difference.

The total energy of the signal difference compared to the original signal is X decibels.
We can then compare that decibel value against scientific data on how low in level a signal has to be before humans can no longer perceive it.

To make this test better, we can take into account the fact that human hearing has different sensitivity at different frequencies:
We could scale the difference signal's frequency spectrum by the hearing-sensitivity curve before calculating its total energy.

Zzzzzzz

Or you could buy some mics, experience them for yourself and get to actually recording something.

I would also suggest reading through the replies to your question here in this thread as there seem to be a number of experienced people here trying to point you in the right direction in spite of your resistance.

I'd say this thread is similar to "Mic's don't excite me" on Low End Theory and "How many licks does it take to get to the center of a cherry flavored tube mic" here on this forum.

The answers to the original question(s) have been addressed but you don't seem to like them. It's an all too frequent recurring theme here on emptyheadspace.com
Old 2 weeks ago | Show parent
  #46
Lives for gear
 
jaddie's Avatar
 
🎧 10 years
Quote:
Originally Posted by JLast ➡️
Zzzzzzz

Or you could buy some mics, experience them for yourself and get to actually recording something.
He could, but that won't get him the answer to his question.
Quote:
Originally Posted by JLast ➡️
I would also suggest reading through the replies to your question here in this thread as there seem to be a number of experienced people here trying to point you in the right direction in spite of your resistance.
I don't sense any resistance by the OP, but LOTS from people who don't understand what he's asking.
Quote:
Originally Posted by JLast ➡️

The answers to the original question(s) have been addressed but you don't seem to like them. It's an all too frequent recurring theme here on emptyheadspace.com
Wow. Brutal.

I guess you're in the group that didn't understand his question.
Old 2 weeks ago
  #47
Lives for gear
 
🎧 10 years
The quality of the very first amp stage of any piece of equipment is the most important. The smaller the signal, the more impact the amp has on the quality of the sound.

Based on that, a tube mic with a solid state preamp will sound better than a solid state mic with a tube preamp... if all other factors could be equal... which I don't think they can be.

I think at the very high end, tubes are better than solid state if you are looking for something pleasing to the ear. If you want accuracy vs. the source, this is where the debate gets fun.

Out of all the preamps I have, I consider a Millinia Forsell the most accurate. About the same level of accuracy as a Grace 801 but with a touch of tube noise.
Old 2 weeks ago | Show parent
  #48
Gear Addict
 
🎧 5 years
A microphone's <sound / timbre / personality> isn't simply a function of its measurable frequency response. One of the most important qualities of a microphone is how it handles dynamics. Amplitude domain, not frequency. This isn't represented in the mic's specs.

Tube mics absolutely handle dynamics differently than solid-state mics. They seem to soften transients and subtly compress the signal in ways that many find pleasing and familiar. I believe it's this quality, not the frequency response, that makes some folks willing to shell out $20k+ for a vintage C12.

Last edited by Honkermann; 2 weeks ago at 07:33 AM..
Old 2 weeks ago | Show parent
  #49
Lives for gear
 
jaddie's Avatar
 
🎧 10 years
Quote:
Originally Posted by Honkermann ➡️
A microphone's <sound / timbre / personality> isn't simply a function of its measurable frequency response. One of the most important qualities of a microphone is how it handles dynamics. Amplitude domain, not frequency. This isn't represented in the mic's specs.

Tube mics absolutely handle dynamics differently than solid-state mics. They seem to soften transients and subtly compress the signal in ways that many find pleasing and familiar.
Opinion? Or do you have data to support any of this?
Quote:
Originally Posted by Honkermann ➡️
I believe it's this quality, not the frequency response, that makes some folks willing to shell out $20k+ for a vintage C12.
Well, frequency response varies a lot in mics, between them and within them. That's data you can google. When it comes to changes that are audible, FR is the big one. Changes in frequency response actually are "amplitude domain" changes; they change the amplitude of certain sections of the spectrum. When discussing "domains" in audio, there are really two: amplitude and time. The "frequency" domain is a different window on the time domain, same data. The two domains must work together, though. For example, a mic will vary amplitude with frequency and time. Time, because there are always resonances in the transducer and its acoustic housing.

"Handle dynamics" isn't found in specifications because it's not actually something specifically quantified. Nonlinear amplitude response, however, is, and is reflected in distortion figures. The subjective impression that transients are softened is directly related to frequency response, and its joined-at-the-hip relative, transient response. Since there's no gain-variable element in a mic, and the thermal "dynamic compression" mechanism found in speakers doesn't exist in a mic, "compression" would be the result of nonlinear amplitude response, a direct cause of distortion of several kinds. A series of SPL vs distortion plots would reveal this easily, but won't be found in published specs. As to people finding any sort of distortion pleasing, that's generally found not to be true, but since a certain amount of distortion is accepted as part of a genre, it's conceivable that some who identify with that genre find it pleasing.

There are many reasons to shell out $20K for a C12, not the least of which is that it's a tool that is in demand, and part of a studio or engineer's value is the tools. If you could exactly replicate the sound of a C12 so well it couldn't be differentiated from the original, and did that for $100, would that be a good substitute? I doubt it, because it isn't the original. It just isn't, sonics be damned. But I suspect, and you can prove me wrong here, that there is zero data to support that nonlinear amplitude response is what people buy the mic for.

There is no evidence to support the supposition that a particular class of active gain device has any more or less ability to pass, without modification, an audio signal, or for that matter, modify an audio signal in any specific characteristic way only achievable through the use of that type of active device. Put simply, there are good and bad active devices of all types, the application of any device, the circuit it is used in, along with the device characteristics, create the final performance. There are low and high distortion circuits, wide band and narrow band circuits, wide dynamic and limited dynamic circuits using all active device types.

I'm frankly surprised that we got to this point and nobody has once pointed a finger at the rather special impedance matching transformer that must be included in a tube mic. Interesting. You have this rather innocuous active gain device with one problem: a significantly high output impedance. But you want to drive a mic input impedance that's fairly low. What the heck do you do? Transformer. Look at a U67 schematic, it's the biggest component on the drawing. But even with that thing in there, we're pointing at the tube? OK then.
Old 2 weeks ago | Show parent
  #50
Quote:
Originally Posted by JLast ➡️
Zzzzzzz

Or you could buy some mics, experience them for yourself and get to actually recording something.

I would also suggest reading through the replies to your question here in this thread as there seem to be a number of experienced people here trying to point you in the right direction in spite of your resistance.

I'd say this thread is similar to "Mic's don't excite me" on Low End Theory and "How many licks does it take to get to the center of a cherry flavored tube mic" here on this forum.

The answers to the original question(s) have been addressed but you don't seem to like them. It's an all too frequent recurring theme here on emptyheadspace.com
You seem to have missed the essence of my questions here.
The thing I want an answer to cannot be tested with the method you're suggesting.
Old 2 weeks ago | Show parent
  #51
Quote:
Originally Posted by jaddie ➡️
Mmm....well...it's not quite that simple, you have to account for masking.
Just to be sure we're talking about the same thing here:

Do you mean how well a human hears a signal underneath another signal that is playing at a much higher volume? I.e. you have signal A around frequency X and a second signal B at a much lower volume around the same frequency X: how well is signal B perceived underneath signal A?

Won't that be taken into account at the step where the difference of the signals (original vs. the one which went through the circuitry, calculated in frequency space) is compensated by the human hearing system's sensitivity at different frequencies?

In other words, the human hearing system's sensitivity changes with frequency, but the masking itself depends on the relative levels of signals A and B around any given frequency. So if the difference signal (in frequency space) is adjusted by the human hearing sensitivity curve at all those frequencies, wouldn't that result in a graph which shows how well a human perceives the difference at those frequencies? Now if you add all the energies of those frequencies together, you should (in theory?) get one number which tells how audible the whole difference signal is underneath the original signal, i.e. how well a human can hear the difference between the original and the processed signal.


Quote:
Originally Posted by jaddie ➡️
What you're trying to get at is an audibility factor. It's not that simple. The audibility of a change in frequency response is based on the total area involved in the change, with peaks slightly more audible than dips. A small change over a wide bandwidth is easily heard, but a large change over a small area is not.
Hmm, I'm not 100% sure that's accurate. The mechanism could be somewhat different. What comes to mind is how dithering of digital signals works:

When you convert digital audio to a lower bit depth, you get small extra peaks (quantization distortion) in your signal. You can make this much less audible by dithering, which transforms those peaks into noise spread across a wide range of the frequency spectrum. This noise is much less audible to human hearing.

The difference here is that with dithering we're talking about noise, vs. boosting/lowering the original signal in your example; i.e. the change in signal type there isn't as radical. That might affect how a human perceives those signal types.

But otherwise, the perceivability of peaks vs. dips could be an issue for this test. I.e. we seem to be moving into the realm of psychoacoustics. I know very little about the topic, but enough to know it could potentially get really complex really fast.


Quote:
Originally Posted by jaddie ➡️
When it comes to the audibility of distortion, though, it's very difficult to model human response. Simple harmonic distortion has the audibility characteristic that even-order harmonics are less obvious than odd-order, high order more audible than low order, and the specific nonlinearity that generates the distortion affects dynamic audibility.
Hmm. The even vs. odd harmonics could be taken into account fairly easily in the test, if the signal used in the test was a sine wave. Then it's easy to pick even/odd harmonics and give them different weights when calculating the total "audibility" of the difference signal.
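Something like this, maybe (a quick sketch; the device and the 2:1 odd/even weighting are arbitrary placeholders):

Code:
# Separate even- and odd-harmonic energy from a sine-wave test and apply an
# arbitrary 2:1 odd/even weighting. The device is a placeholder.
import numpy as np

fs, n = 48000, 2**16
k0 = 1365                                        # fundamental on an exact FFT bin (~1 kHz)
t = np.arange(n) / fs
x = np.sin(2 * np.pi * k0 * (fs / n) * t)

y = np.tanh(2 * x) + 0.1 * x**2                  # placeholder device: odd + even distortion
Y = np.abs(np.fft.rfft(y)) / (n / 2)

fund = Y[k0] ** 2
even = sum(Y[k0 * h] ** 2 for h in range(2, 21, 2))
odd = sum(Y[k0 * h] ** 2 for h in range(3, 21, 2))

print("even-harmonic energy: %.1f dB re fundamental" % (10 * np.log10(even / fund)))
print("odd-harmonic energy:  %.1f dB re fundamental" % (10 * np.log10(odd / fund)))
print("weighted score (odd counted 2x):", even + 2 * odd)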

The audibility differences when going up the harmonics should probably be already taken into account by my original idea: adjust the difference signal's frequencies according to human hearing system's sensitivity.

I'm not sure what the "specific nonlinearity" vs dynamic audibility means in this context, though.


Quote:
Originally Posted by jaddie ➡️
Hard clipping is more audible per dB above onset than soft clipping.
I haven't given this much thought (I'd have to do some research and testing on the subject), but with hard clipping the signal changes abruptly, creating fairly large amplitude (i.e. loud) content at higher frequencies. Soft clipping changes the signal gradually, so the new content it introduces sits more at the lower frequencies.

If you play two signals, a low frequency one and a high frequency one, both at amplitude X, the high frequency one is much more audible, or at least more jarring to the ear. Regular sounds/music have an approximate spectral shape of high amplitude at lower frequencies, gradually leveling off toward higher frequencies. This is what the human hearing system has developed to receive. That could explain why the unnaturally large high-frequency content from hard clipping is so much more audible than soft clipping.
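A quick way to check would be something like this (illustrative only; a ~1 kHz sine driven 1.5x into each clipper):

Code:
# Compare the harmonic spectra of hard vs. soft clipping of the same sine.
import numpy as np

fs, n = 48000, 2**16
k0 = 1365                                        # fundamental on an exact FFT bin (~1 kHz)
t = np.arange(n) / fs
x = 1.5 * np.sin(2 * np.pi * k0 * (fs / n) * t)

clippers = {"hard": np.clip(x, -1.0, 1.0), "soft": np.tanh(x)}
for name, sig in clippers.items():
    Y = np.abs(np.fft.rfft(sig))
    harm_db = [20 * np.log10(Y[k0 * h] / Y[k0]) for h in range(3, 12, 2)]
    print(name, "clip, odd harmonics 3..11 (dB re fundamental):",
          ["%.1f" % d for d in harm_db])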


Quote:
Originally Posted by jaddie ➡️
And the whole mess is affected by the spectrum of the signal itself, with pure sine wave producing hyper-sensitive results. Then you can move into IMD, various mechanisms and spectrums, and the rules change again. Classic IMD tests that stimulate with a low frequency tone mixed at a ratio with a high frequency tone expose a particularly audible type of IMD. But various types of two-tone tests reveal other mechanisms. The grand-daddy of all IMD tests is the "spectral contamination" test proposed by Dean Jensen, with a hundred or so high frequency tones mixed equally, and then looking for products between the tones, and out of the spectrum of the test signal, which reveals yet another form of nonlinearity that's been elusive for decades.

So item 1 in your list becomes very, very important. For expediency, looking for spectral differences with a complex stimulus stepped over a level range would probably do the trick. But that's just me giving it 5 minutes of thought.
If there is a difference in the perception of IMD when there are more than a couple of sine waves in the test signal, we're entering deep into psychoacoustic territory and I have no idea (yet) how to take any of that into account. Sounds complicated to test definitively.
Old 2 weeks ago | Show parent
  #52
Lives for gear
 
jaddie's Avatar
 
🎧 10 years
Quote:
Originally Posted by kraku ➡️
Just to be sure we're talking about the same thing here:

Do you mean how well human hears a signal from under another signal, which is playing in much higher volume? I.e. you have signal A around frequency X and a second signal B at much lower volume around the same frequency X: how well signal B is perceived from under signal A by a human.
Yes.
Quote:
Originally Posted by kraku ➡️
Won't that be taken into account at the step where the difference of the signals (original vs. the one which went through the circuitry, calculated in frequency space) is compensated by the human hearing system's sensitivity at different frequencies?

In other words, the human hearing system's sensitivity changes with frequency, but the masking itself depends on the relative levels of signals A and B around any given frequency. So if the difference signal (in frequency space) is adjusted by the human hearing sensitivity curve at all those frequencies, wouldn't that result in a graph which shows how well a human perceives the difference at those frequencies? Now if you add all the energies of those frequencies together, you should (in theory?) get one number which tells how audible the whole difference signal is underneath the original signal, i.e. how well a human can hear the difference between the original and the processed signal.
Well, that's a sizable task, scaling a difference based on a dynamic model of human hearing. You're into the world of perceptual coding there, and that's a really complicated world. The precision and quality of lossy codecs is still evolving.

Quote:
Originally Posted by kraku ➡️

Hmm, I'm not 100% sure that's accurate. The mechanism could be somewhat different. What comes to mind is how dithering of digital signals works:

When you convert digital audio to a lower bit depth, you get small extra peaks (quantization distortion) in your signal. You can make this much less audible by dithering, which transforms those peaks into noise spread across a wide range of the frequency spectrum. This noise is much less audible to human hearing.

The difference here is that with dithering we're talking about noise, vs. boosting/lowering the original signal in your example; i.e. the change in signal type there isn't as radical. That might affect how a human perceives those signal types.

But otherwise, the perceivability of peaks vs. dips could be an issue for this test. I.e. we seem to be moving into the realm of psychoacoustics. I know very little about the topic, but enough to know it could potentially get really complex really fast.
What I said is 100% correct, and it's not a new concept. It's why you can make a 1dB change in a filter tuned to 1kHz with a very low Q and it's easily heard, but you can put a 40dB deep notch at 1kHz with a Q of 10000 and it's not heard at all. It's easiest to think of response-change audibility as the area below or above the curve that changed. That's not quite exact, because gain is easier to hear than loss, but the concept holds. My first real experimentation with this goes back to a UREI 565T filter set used to notch out single frequency tones in broadband audio, though I played with deep gyrator-based notch filters years before that.
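To put toy numbers on the "area" idea (the Gaussian shapes below are idealized stand-ins, not real filter designs; only the relative areas matter):

Code:
# A broad ~1 dB bump vs. a very narrow 40 dB notch, both centred on 1 kHz.
import numpy as np

octaves = np.linspace(np.log2(20 / 1000), np.log2(20000 / 1000), 1_000_000)
d_oct = octaves[1] - octaves[0]                  # grid spacing, in octaves

broad_bump = 1.0 * np.exp(-0.5 * (octaves / 1.0) ** 2)          # 1 dB, roughly an octave wide
narrow_notch = -40.0 * np.exp(-0.5 * (octaves / 0.00015) ** 2)  # 40 dB deep, Q ~ 10000-ish

area = lambda change_db: np.sum(np.abs(change_db)) * d_oct      # |dB| integrated over octaves

print("broad 1 dB bump:    %.2f dB*octaves" % area(broad_bump))
print("narrow 40 dB notch: %.4f dB*octaves" % area(narrow_notch))
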
Quote:
Originally Posted by kraku ➡️
Hmm. The even vs. odd harmonics could be taken into account fairly easily in the test, if the signal used in the test was a sine wave. Then it's easy to pick even/odd harmonics and give them different weights when calculating the total "audibility" of the difference signal.
But harmonic audibility is more complex than that. It's not only the distribution, which could be weighted; it's also about masking, because while harmonic distortion, even or odd, is audible with pure tones, it's not nearly as audible with complex waveforms. For example, a device with 3% THD, even order, will still sound pretty clean, but the same level of odd harmonics will not. But when you put music through that device, 3% even-order becomes pretty much inaudible, whereas 3% odd is starting to sound pretty bad. You also have to consider that the analog mechanisms that generate harmonic distortion are different, and can co-exist to varying degrees. I highly recommend reading up on analog systems and distortion mechanisms, non-linearities, etc. Just way too much for me to be writing up here.
Quote:
Originally Posted by kraku ➡️
The audibility differences when going up the harmonics should probably be already taken into account by my original idea: adjust the difference signal's frequencies according to human hearing system's sensitivity.
Human hearing can't be reduced to a 2D model as simple as that. Spectral sensitivity changes with SPL and frequency, and is directly affected by masking effects and harmonic energy distribution. Not simple.
Quote:
Originally Posted by kraku ➡️
I'm not sure what the "specific nonlinearity" vs dynamic audibility means in this context, though.
If you consider just two radically different transfer functions, I think you'll see my point. One might be a type of nonlinear response that changes slowly and evenly over a wide dynamic range. The other might be a function that is nearly perfectly linear up to a point, then goes radically nonlinear above that. Both can generate harmonic distortion, and they would sound radically different from each other. Now, take that first transfer function into the other quadrant and make it a different curve there. You've changed the balance of the even/odd harmonic distribution, and changed the audibility again.

The only simple way to say it is that distortion audibility is affected by the order of the harmonics generated, the spectral energy distribution of all harmonics, and the presence of other masking signals, combined with the specific SPL. Remember, the ear is also nonlinear!
Quote:
Originally Posted by kraku ➡️
I haven't given this much thought (I'd have to do some research and testing on the subject), but with hard clipping the signal changes abruptly, thus creating large quantities of fairly large amplitude (i.e. loud) higher frequencies. Soft clipping does gradual changes to the signal, thus introducing new signals more into the lower frequencies.
The harmonics generated by either one always have higher energy at lower frequencies, with each successive harmonic's energy falling off as it gets further removed from the fundamental.

But this is just simple harmonic distortion. Intermodulation distortion is in many cases more audible, more objectionable, and comes in many different styles. It is nearly impossible to have low THD and high IMD, but certain devices with dynamic gain control can actually measure that way.
Quote:
Originally Posted by kraku ➡️
If you play two signals, low frequency and high frequency one, both at amplitude X, the high frequency one is much more audible or at least jarring to the ear. Regular sounds/music has the approximate frequency curve of being high amplitude at lower frequencies and gradually leveling down when going up in frequency space. This is what human hearing system has developed into receiving. This could explain why the unnaturally large high frequency signals with hard clipping can be so audible vs. soft clipping.
You have to be a little careful when considering the human hearing response characteristic. Yes, it's a very non-flat curve, but it is also a constant "mask" applied to all hearing to a greater or lesser degree. I would agree that a high level of 3rd harmonic of 1kHz would be more audible than the same level of 3rd harmonic of 6kHz, or of 20Hz, but again, the basic hearing sensitivity curve is just one part of the story.

Quote:
Originally Posted by kraku ➡️

If there is a difference in perception of IMD when there are more than couple of sine waves in the test signal, we're entering deep into the psychoacoustic area and I've no idea (yet) how to take into account any of that. Sounds complicated to test that definitively.
The test isn't all that complicated anymore because we have computers, software and really good audio interfaces. The reference paper is "Spectral Contamination Measurement" by Deane Jensen and Gary Sokolich, Nov. 1988. He had to use a rather cumbersome test setup, but it revealed a lot of audibility information that was not available before. He did not continue on to scale the data to audibility. Multi-tone generation and analysis is now built into REW; you can generate a multi-tone test signal similar to Jensen's setup, and now that audio interfaces and FFTs are of better resolution, you can get back the kind of data Jensen did with minimal effort. At least one commercial audio test product company has adopted some of this technology, but since their market is automated industrial testing, and the products are financially out of my world, I haven't followed them much.

BTW, if you're not a member of the AES, I recommend that just for access to the papers. SO much information there. Not all is definitive, of course, but it's where most of the cutting edge audio stuff is published.
Old 2 weeks ago | Show parent
  #53
Lives for gear
 
nosebleedaudio's Avatar
 
🎧 15 years
As for tube's "Sound", several years ago I was working on a Tubetech compressor for the 3rd time and for some reason tested the distortion, it was VERY low, far lower than I was expecting from a all tube & transformer based unit..In the .02% if I recall..
Old 2 weeks ago
  #54
Lives for gear
 
Jaddie - this is a really interesting read - can you suggest some "standard texts" ... books, chapters, classic or current review articles ... even YT links ... ?
Old 2 weeks ago | Show parent
  #55
Lives for gear
 
haysonics's Avatar
 
🎧 5 years
Quote:
Originally Posted by Bushman ➡️
Every electronic component and physical component contributes to the sound of a mic.
Therefore, every change to a mic or differences between mics can be described as “coloring” the sound in some way and to some degree.
^This

Quote:
Originally Posted by kraku ➡️
I don't doubt that u87 and u67 sound objectively different. I'm interested in how much of that sound difference is due to other design choices than just adding tube into the microphone?
IMO the U87 vs U67 comparison was a (non-intentional) red herring, as on top of the many component differences between the circuits there is often a notable audible difference between one capsule and the next on the production line. It's only relatively recently that some manufacturers have been able to reduce capsule variance so that all capsules coming off a production line can be considered matched.
Old 2 weeks ago
  #56
Gear Guru
 
I still have a high respect for the U87, even the AI.
(I kid, I kid.)
I just prefer a mic that needs minimal (or no) processing.
Chris
Old 2 weeks ago | Show parent
  #57
Lives for gear
 
jaddie's Avatar
 
🎧 10 years
Quote:
Originally Posted by TobyB ➡️
Jaddie - this is a really interesting read - can you suggest some "standard texts" ... books, chapters, classic or current review articles ... even YT links ... ?
Distortion audibility doesn't appear in classic texts. The information is found in AES papers and the like, and papers typically reference each other creating this wonderful chain of research you can wander through backwards.

I am collecting a short list, just can't work on it today.

No YT links I'm aware of. Not often authoritative anyway.
Old 2 weeks ago | Show parent
  #58
Quote:
Originally Posted by jaddie ➡️
...

The test isn't all that complicated anymore because we have computers, software and really good audio interfaces. The reference paper is "Spectral Contamination Measurement" by Deane Jensen and Gary Sokolich, Nov. 1988. He had to use a rather cumbersome test setup, but it revealed a lot of audibility information that was not available before. He did not continue on to scale the data to audibility. Multi-tone generation and analysis is now built into REW; you can generate a multi-tone test signal similar to Jensen's setup, and now that audio interfaces and FFTs are of better resolution, you can get back the kind of data Jensen did with minimal effort. At least one commercial audio test product company has adopted some of this technology, but since their market is automated industrial testing, and the products are financially out of my world, I haven't followed them much.

BTW, if you're not a member of the AES, I recommend that just for access to the papers. SO much information there. Not all is definitive, of course, but it's where most of the cutting edge audio stuff is published.
Awesome! Thank you!

I've got to read through that paper and learn from it. And thank you for the tip about joining the AES. I might actually do it. I'm constantly trying to educate myself about all the interesting areas of audio technology design, etc.
Old 2 weeks ago | Show parent
  #59
Gear Guru
 
Quote:
Originally Posted by DougS ➡️
Stupid thread.
Hey, it's the "Thread That Could Not Die!"

Chris
Old 2 weeks ago | Show parent
  #60
Lives for gear
 
DougS's Avatar
 
🎧 5 years
Quote:
Originally Posted by chessparov2.0 ➡️
Hey, it's the "Thread That Could Not Die!"

Chris
I'm definitely entertained. But to be honest, I'm not sure which is more interesting: jaddie's enlightened trip down the fascinating rabbit hole of the science of hearing and perception, or the sheer strength of the Dunning-Kruger effect in the OP.