Intersample peaks - Massey's opinion - Page 3 - Gearspace.com
Old 26th May 2013 | Show parent
  #61
Tokyo Dawn Labs
 
FabienTDR's Avatar
 
Verified Member
🎧 5 years
Quote:
Originally Posted by nms ➡️
Nice cheap shot, but he was right and you were confused & very rude. In the digital realm there is nothing between two digital samples.
Ugh..

1. I saw tpad's sweeping claim and had to react (#35). He said: "In any given digital stream, there are NO intersample peaks, because that runs counter to the definition of a digital signal." This is completely wrong, and several screenshots posted in this thread clearly show that the opposite is true! The video I posted shows it too, with fantastic clarity. You just repeated the same nonsense without any explanation. Prove it if you can! Just measure the analogue waveform and you'll see what I mean. It's obvious on the scope.

2. I provided reasonable backup references and clear information to support my claim. I am definitely not confused. Why so personal?! You're not here for arguments, right?!

3. The wisdom and certainty of math always feel offensive to the insecure. That's not "rude", just very solid facts. Believe it or not, I've implemented over a dozen AD/DA converters, sample rate converters and sampler interpolation engines in my life. They wouldn't work if my understanding of the theory was wrong. Very simple.

Judging by your claim, you have no idea about digital audio. For the third time: of course there is something between the samples; this area is just totally undefined in the encoded signal. Undefined does not mean "nothing", it only means it could be "anything". The definition of "the between" is done by the Nyquist filter (i.e. the band-limitation Nyquist mentions). However, the digital storage representation has no idea of the between (this is the detail that makes digital sampling possible), and this clearly implies that the between can be higher (or lower) than the nearest discrete sample point.

Nyquist defined a digital sampling system as follows:

Analogue input -> [Nyquist filter] -> [Discrete representation] -> [Nyquist filter] -> Analogue output

Misinformed people tend to ignore the implications of the Nyquist filters (a.k.a. "band-limiting") and only look at the discrete storage module. But it gets really messy as soon as the same people begin to draw conclusions from their half-knowledge and post them in public. The Nyquist filters are absolutely essential; you cannot ignore them and then call the digital theory flawed. Nyquist is much more complicated than it seems, and extremely counterintuitive for the uneducated.
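The chain above can be sketched numerically. This is a toy model in pure Python, assuming an ideal sinc reconstruction filter (real converters use practical approximations of it): the stored samples of a full-scale sine never exceed ~0.707, yet the reconstructed waveform between them reaches 1.0.

```python
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

# A sine at fs/4 with a 45-degree phase offset: every stored sample lands at
# +/-0.7071, yet the band-limited waveform between the samples peaks at 1.0.
N = 512
samples = [math.sin(math.pi * n / 2 + math.pi / 4) for n in range(N)]
peak_sample = max(abs(s) for s in samples)  # ~0.7071

# The output Nyquist filter, idealized: x(t) = sum_n x[n] * sinc(t - n)
def reconstruct(t):
    return sum(samples[n] * sinc(t - n) for n in range(N))

# Probe between two samples near the middle of the buffer
true_peak = max(abs(reconstruct(N / 2 + k / 16)) for k in range(17))
print(round(true_peak / peak_sample, 2))  # ~1.41, i.e. about +3 dB
```

For a single sine this is the classic fs/4 case, about +3 dB over the highest sample; heavily clipped material can overshoot by different amounts.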

It's all explained above: in my posts, in the video, in the Wikipedia articles and everywhere else.

Ignoring my perfectly verifiable arguments and calling me "rude" and "confused" is hilarious. Calling the very few with arguments and references "confused" is rude, and seriously questions your credibility, nms. The Dunning-Kruger link was well chosen, I realize.

Feel free to falsify my claims, but there's no need to get personal. Facts, wisdom and maths hurt, eh?! But it's Gearslutz, I know... ...watch the video at least, then come back for discussion.
Old 26th May 2013 | Show parent
  #62
Lives for gear
 
🎧 10 years
I had better qualify some of my following statements as facetious ( as this is GS after all !!)



I may be a simple country audio bummmmpkin, but why is it that when some 44,000 or more discrete samples are taken in a second, we're all freaking out about "all that missing information"??? (Try to count to 44,000 in a second; I dare you!!)

It's true that in nature these phenomena are continuous, and we are using a method that breaks the continuous into a broken-up bunch of encoded samples. But why is it that when we have put a man on the moon (using as much processing power as a $5 child's toy possesses today), people don't seem to believe that science and mathematics can come up with a method that fills in "all that space" between 44,000 samples per second???




I should also take this opportunity to remind folks that there would be no overs if we simply left a little headroom...



O.K. back to the regularly scheduled pedantry...



.
Old 26th May 2013
  #63
nms
Lives for gear
 
nms's Avatar
 
2 Reviews written
🎧 10 years
Quote:
Originally Posted by FabienTDR ➡️
You just repeated the same nonsense without any explanation. Prove it if you can! Just measure the analogue waveform, and you'll see what I mean. It's obvious on the scope.
Good god man! lol. Read what you just typed. If I go measuring the analog waveform, I am not measuring the DIGITAL signal in the DIGITAL realm anymore, am I! I would be measuring the reconstructed ANALOG waveform in the ANALOG realm.

I don't understand how it was easier for you to throw out insults instead of realizing that everyone in this thread knows the basic concept of how ISPs can happen once a digital signal is reconstructed to analog or resampled to a higher sample rate. ISPs don't exist in any digital file, wav or MP3. They don't exist until the file has been resampled, decoded, or converted to analog.

Quote:
Originally Posted by sanddigger1 ➡️
Why should it be a relationship between a 'good' mix and ISPs ? It sounds as silly as when people ask for what a spectrum analysis of a 'good' mix should look...
In my experience over 0 ISPs (the problematic ones) depend on how loud it was pushed and specially on the way loudness was achieved.
I define a good mix as one that wasn't limited and clipped to hell. The worse it was clipped, the worse those ISPs will be.
Old 26th May 2013
  #64
Tokyo Dawn Labs
 
FabienTDR's Avatar
 
Verified Member
🎧 5 years
nms, that's kind of the point. The digital "realm", as you call it, doesn't matter at all. This is not where the music is! Even a digital system has analogue I/O. It's perfectly continuous. This is exactly what Nyquist means when he mentions "band-limited". This makes the whole system able to "read" and output a continuous signal. A digital sampling system is practically analogue! Only the storage isn't (which is very similar to how tape and vinyl work on a microscopic scale).

IMHO, the main problem relates to the confusing and simplified way most audio editors draw audio waveforms and meters. They intentionally "forget" the second half of Nyquist's theorem (the output band-limiting), often without further mention in the manual. It's also the main reason why the first generation of digital dynamics processors sounded like sh!t: they had no idea how the actual waveform looked above ~1-2 kHz and answered with the horrible, unmusical sound they became famous for. They too made the mistake of ignoring the second part of Nyquist's system.
Old 26th May 2013 | Show parent
  #65
Gear Guru
 
UnderTow's Avatar
 
Verified Member
🎧 15 years
Quote:
Originally Posted by nms ➡️
Good god man! lol. Read what you just typed. If I go measuring the analog waveform I am not measuring the DIGITAL signal in the DIGITAL realm anymore am I! I would be measuring the reconstructed ANALOG waveform in the ANALOG realm.
The sample points are not the audio signal. They are merely the encoding format.

Don't confuse the two!

Alistair
Old 28th May 2013 | Show parent
  #66
Gear Addict
 
vladg's Avatar
 
🎧 10 years
Such a great topic! I had a lot of fun reading it (very interesting opinions, really)

As for the "what is between digital samples" holy war, my opinion is that there are zeros between them, because the mathematical representation of a sampled signal uses the Dirac delta function:
File:Dirac distribution PDF.svg - Wikipedia, the free encyclopedia
It's really useless information for mastering engineers, so forget it :-)

I want to add some more thoughts about ISPs related to built-in processing inside modern OSes and cheap gear. For example, the Windows 7 and 8 audio engine has internal upsampling (set by default to 96 kHz AFAIK). Also, for AC97 chips the device driver resamples everything to 48 kHz. So for your 44.1 kHz audio file you may have digital peaks above 0 dBFS after software upsampling but before the DAC (and they clip, of course). These peaks are the ISPs we are talking about. Thus, monitoring and controlling ISPs is necessary not only for CD players.
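A toy sketch of what such an upsampling stage "sees" (pure Python; the idealized sinc-sum interpolator here is an assumption standing in for the OS resampler, not Microsoft's actual filter): a file whose samples peak exactly at 0 dBFS measures roughly +3 dBFS once the reconstruction is evaluated at four points per sample period.

```python
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

# A toy 4x "oversampled" true-peak check: evaluate the band-limited
# reconstruction at four points per sample period, report the max in dBFS.
def true_peak_db(samples, factor=4):
    n = len(samples)
    # probe only the middle half of the buffer to avoid edge truncation error
    points = range(n * factor // 4, 3 * n * factor // 4)
    tp = max(abs(sum(samples[k] * sinc(i / factor - k) for k in range(n)))
             for i in points)
    return 20 * math.log10(tp)

# A file whose samples peak exactly at full scale, but whose crests fall
# between the sample points:
samples = [math.sin(math.pi * k / 2 + math.pi / 4) / math.sin(math.pi / 4)
           for k in range(128)]
print(round(true_peak_db(samples), 1))  # ~3.0 dB over full scale
```

This is the same basic idea as the 4x-oversampled true-peak meters standardized in ITU-R BS.1770.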

About Mac "hidden" audio processing: I don't know much about the Mac audio core, but I know that sound played on the internal notebook/iMac speakers is processed with a built-in pre-defined EQ (to sweeten the audio response). If this EQ is minimum-phase (as I suspect), signal peaks are raised and an output limiter can be really helpful, but that's another story. :-)

Last edited by vladg; 28th May 2013 at 04:42 PM.. Reason: typos
Old 28th May 2013 | Show parent
  #67
Lives for gear
 
🎧 15 years
Quote:
Originally Posted by vladg ➡️
I want to add some more thoughts about ISPs related to built-in processing inside modern OSes and cheap gear. For example, the Windows 7 and 8 audio engine has internal upsampling (set by default to 96 kHz AFAIK). Also, for AC97 chips the device driver resamples everything to 48 kHz. So for your 44.1 kHz audio file you may have digital peaks above 0 dBFS after software upsampling but before the DAC (and they clip, of course). These peaks are the ISPs we are talking about. Thus, monitoring and controlling ISPs is necessary not only for CD players.

About Mac "hidden" audio processing: I don't know much about the Mac audio core, but I know that sound played on the internal notebook/iMac speakers is processed with a built-in pre-defined EQ (to sweeten the audio response). If this EQ is minimum-phase (as I suspect), signal peaks are raised and an output limiter can be really helpful, but that's another story. :-)
Hey,
do you have any technical papers or links to resources from Microsoft and/or Apple regarding this?

Very curious about this!

Thank you.
Old 29th May 2013
  #68
Lives for gear
 
🎧 10 years
Quick test:

Set up a mastering chain with two limiter settings: one with the input limited to taste and the max output set to 0 dBFS, and the other brought down 0.4 dB at both input and output (so in L2 or L2007 it could be one at -9/0 and one at -9.4/-0.4). Now A/B between the two, back and forth, with your eyes closed or something so you forget which one you are on.

which sounds better?

On most crap hardware, such as an Mbox Pro, I will always prefer the -0.4 version. It sounds like you wanted it to sound, without the dying DAC amplifier sound on top of that.

New Alliance East Mastering | Professional Audio Mastering
Old 30th May 2013 | Show parent
  #69
Gear Addict
 
vladg's Avatar
 
🎧 10 years
Quote:
Originally Posted by kosmokrator ➡️
Hey,
do you have any technical papers or links to resources from Microsoft and/or Apple regarding this?

Very curious about this!

Thank you.
Hmm, I didn't read any official papers about Microsoft Core Audio resampling. There are some threads on hydrogenaudio.org claiming that "Windows 7 resampling sucks". It isn't really as bad as AC97 resampling IMHO, but it does have that "lack of highs, lack of transients, distant" sound :-) I did a quick Google search on this topic again and found this:

An audiophile’s look at the audio stack in Windows Vista and 7 | Trying To Be Helpful

You can safely ignore all this stuff (because it can damage your brain :-) except the last paragraph:

----------

I couldn’t find the answer to this question anywhere, so I wrote to Larry Osterman, who developed the Vista and Win7 audio stacks at Microsoft. His answer was that the sample rate that the engine uses is the one that the user specifies in the Properties window. The default sample rate is chosen by the audio driver (44.1 kHz on most devices). So if your music has a sample rate of 44.1 kHz, you can choose that setting and no sample rate conversion will take place. (Of course, any 48 kHz and higher samples will then be downsampled to 44.1 kHz.)

Larry Osterman's WebLog - Site Home - MSDN Blogs

There is some interesting technical information on the Windows Vista audio stack in this Channel9 video.

Vista Audio Stack and API | Going Deep | Channel 9

---------

I can't confirm that the default sample rate for Windows 7 is 44.1 kHz, and thus that it resamples everything else to 44.1 and leaves your 44.1 audio untouched; I should look into several default Win7 installations. I think it's very important to know, because back when AC97 chips dominated consumer PCs (they had a fixed 48 kHz rate and very poor software resampling), mp3s with a 48 kHz sample rate had much better sound!

About Macs: some time ago I found an article about installing Windows as a second OS on a Mac. That article stated that Windows has just terrible sound on the built-in Mac speakers due to the lack of EQ compensation, and there was a link to a plugin (or driver) that fixes Windows' frequency response to match Mac OS X's. But the link to that article is lost and I don't remember how I found it.
Old 30th May 2013
  #70
Deleted User #43636
Guest
I changed my computer and OS last year, and yes, Win7 resamples all audio. The default sample rate was 96 kHz. By using a sound card with ASIO drivers, the Win7 engine is bypassed and audio is routed at its original sampling rate. That's fine for audio software, but general-public video players, web browsers and so on do not recognize ASIO; they use WDM drivers and have their output resampled.
I heard Vista worked the same way, but XP didn't resample anything.
Old 30th May 2013 | Show parent
  #71
Lives for gear
 
Mr. Lau's Avatar
 
🎧 5 years
Quote:
Originally Posted by nms ➡️
ISP's don't exist in any digital file.. wav or MP3. They don't exist until the file had been resampled, decoded, or converted to analog.
This is a question of perspective. We can also say ISPs exist in the file because it has certain values, which, when interpolated, will produce the ISPs.

Not audible of course.


Hmmm... haven't we discussed this in another thread? deja vu
Old 30th May 2013
  #72
Lives for gear
 
Trakworx's Avatar
 
Verified Member
3 Reviews written
🎧 10 years
FWIW this is from Apple:

"A less obvious issue when setting gain for digital masters can occur on playback. Whether it's a compressed file like an AAC file or an uncompressed file such as a CD, digital data goes through several processes to be converted to an analog signal for playback.

One common process is called oversampling. This upsamples the digital data at four times the original sample rate to improve the quality of the digital audio signal being converted to analog. If the original digital audio data is at 0dBFS, oversampling can result in undesirable clipping. And if the original was already clipped, oversampling can make it worse. A growing consensus is emerging that digital masters should have a small amount of headroom (roughly 1dB) in order to avoid such clipping."

http://images.apple.com/itunes/maste...for_itunes.pdf

.
Old 30th May 2013 | Show parent
  #73
Lives for gear
 
🎧 10 years
Quote:
Originally Posted by Trakworx ➡️
A growing consensus is emerging that digital masters should have a small amount of headroom (roughly 1dB) in order to avoid such clipping."
Not using every dam bit ????


Blasphemy !!!!!:
Old 30th May 2013 | Show parent
  #74
Lives for gear
 
Mr. Lau's Avatar
 
🎧 5 years
Quote:
Originally Posted by flatfinger ➡️
Not using every dam bit ????


Blasphemy !!!!!:
Old 30th May 2013 | Show parent
  #75
Gear Maniac
 
Aivaras's Avatar
 
🎧 10 years
Regarding the battle in this thread triggered by the ontological status of inter-samples:

The word "sample" as it is used in the context of AD/DA conversion is somewhat of an analogy, or a figure of speech, isn't it? It is not like we're taking a smaller quantity of something to generalize about it as a whole. A sample must have the same physical constitution as the reality it is a sample of, no?

That is clearly not the case with AD/DA conversion. The digital data stream resulting from AD conversion contains no samples of electro/acoustic events commonly referred to as signal/sound. It is just a data stream with a set of guidelines for interpreting it and acting upon it. And when it is acted upon by way of DA conversion it is done so not to "reveal" what is in that digital stream, let alone what is in-between it, but to use it as a set of conventionalized instructions for the generation ("reconstruction") of an electro-acoustic signal identical or nearly identical to the one fed into the AD converter at an earlier time.

Regarding lowering the peak value of linear audio data files:

It is quite ironic that instead of fixing what is in fact broken, i.e. certain converters and certain audio codecs, the industry chooses to manipulate that part of its technology which is flawless. A WAV file with peaks at 0dBFS has no defects whatsoever. If a device or a program cannot deal with it, perhaps there is something wrong with the device or the program, no?
Old 31st May 2013 | Show parent
  #76
Gear Maniac
 
Aivaras's Avatar
 
🎧 10 years
Quote:
Originally Posted by flatfinger ➡️
Not using every dam bit ????


Blasphemy !!!!!:
They're actually asking us to give up part of the most significant one!
Old 31st May 2013 | Show parent
  #77
Gear Guru
 
Muser's Avatar
 
1 Review written
🎧 10 years
I should have thought a sample is really just akin to a voltage reading taken at a certain rate. Those readings can then be represented as numerical values held in binary form.
Old 31st May 2013 | Show parent
  #78
Lives for gear
 
🎧 5 years
Quote:
Originally Posted by Trakworx ➡️
FWIW this is from Apple:

"A less obvious issue when setting gain for digital masters can occur on playback. Whether it's a compressed file like an AAC file or an uncompressed file such as a CD, digital data goes through several processes to be converted to an analog signal for playback.

One common process is called oversampling. This upsamples the digital data at four times the original sample rate to improve the quality of the digital audio signal being converted to analog. If the original digital audio data is at 0dBFS, oversampling can result in undesirable clipping. And if the original was already clipped, oversampling can make it worse. A growing consensus is emerging that digital masters should have a small amount of headroom (roughly 1dB) in order to avoid such clipping."

http://images.apple.com/itunes/maste...for_itunes.pdf

.

I leave 4dB headroom. Don't even know what an ISP is!
Old 31st May 2013 | Show parent
  #79
Lives for gear
 
Alexey Lukin's Avatar
 
Verified Member
🎧 10 years
Quote:
Originally Posted by Aivaras ➡️
That is clearly not the case with AD/DA conversion. The digital data stream resulting from AD conversion contains no samples of electro/acoustic events commonly referred to as signal/sound.
Why not? Digital samples from the A/D converter are literally samples of the analog signal, i.e. measurements of instantaneous analog voltages at uniformly spaced moments of time. At least they can be considered this way (with the fine print that the actual measurement process may be more complex and involve downsampling, that any energy above Fs/2 is filtered out prior to measurement, etc.).
Old 31st May 2013 | Show parent
  #80
Lives for gear
 
Mr. Lau's Avatar
 
🎧 5 years
Quote:
Originally Posted by The_K_Man ➡️
I leave 4dB headroom. Don't even know what an ISP is!
ISPs occur when you have many consecutive samples at a top value, not just at 0 dBFS. A typical result of limiting.

Audible? No.

Something to worry about too much? No.

And the master will definitely not be at -4 dBFS, but close to 0.
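That "consecutive samples at a top value" case is easy to picture numerically (a toy sketch in pure Python, assuming ideal sinc reconstruction): three full-scale samples in a row, and the band-limited waveform between them overshoots full scale.

```python
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

# Three consecutive full-scale samples (a limiter's flat top) surrounded
# by silence: the band-limited reconstruction rings above 1.0 between them.
samples = [0.0] * 10 + [1.0] * 3 + [0.0] * 10

def reconstruct(t):
    return sum(s * sinc(t - n) for n, s in enumerate(samples))

peak = max(abs(reconstruct(10 + k / 100)) for k in range(201))
print(round(peak, 2))  # ~1.07, i.e. about +0.6 dB over full scale
```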

Old 31st May 2013 | Show parent
  #81
Gear Guru
 
UnderTow's Avatar
 
Verified Member
🎧 15 years
Quote:
Originally Posted by Aivaras ➡️
Regarding lowering the peak value of linear audio data files:

It is quite ironic that instead of fixing what is in fact broken, i.e. certain converters and certain audio codecs, the industry chooses to manipulate that part of its technology which is flawless. A WAV file with peaks at 0dBFS has no defects whatsoever. If a device or a program cannot deal with it, perhaps there is something wrong with the device or the program, no?
No no. The signal is definitely too loud!

Take the case of a straight AD/DA loop. The only way to create ISPs over 0 dB FS (or the equivalent thereof) is to feed it a signal that had those peaks to start with. This means that you only have the ISPs because the peak was lucky enough to fall between two sample points. If the signal's timing had been slightly earlier or slightly later, the peak would have fallen on a sampling point and would have clipped the AD quantizer. This makes the behaviour of the AD/DA loop arbitrary, depending on arrival timing. This breaks Shannon-Nyquist!
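That timing dependence is easy to demonstrate (a toy sketch in pure Python): the same full-scale analogue sine at fs/4, sampled with two different arrival timings, yields peak samples about 3 dB apart.

```python
import math

# The same full-scale analog sine at fs/4, sampled with two arrival timings:
# in one case a sample lands exactly on the crest, in the other every sample
# misses it by an eighth of a cycle.
def peak_sample(phase):
    return max(abs(math.sin(math.pi * n / 2 + phase)) for n in range(64))

on_crest = peak_sample(0.0)           # 1.0 -> would clip a 0 dBFS quantizer
between = peak_sample(math.pi / 4)    # ~0.707 -> sails through at "-3 dB"
print(round(on_crest, 3), round(between, 3))  # 1.0 0.707
```

Same analogue signal, same level: one arrival timing clips the quantizer, the other looks 3 dB "safe".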

In the same vein, if any processing done in the digital domain takes the actual signal into account rather than simply the sample points, you shouldn't ever have ISPs over 0 dB FS (or the equivalent thereof). The meters in your DAW would warn you that your signal is too loud. A limiter used in mastering would kick in slightly earlier. Etc, etc.

So any signal causing ISPs over 0 dB FS (or the equivalent thereof) is an illegal signal and is most certainly broken!

(Whether this is audible or not or whether manufacturers should make allowances for the huge number of broken signals is another discussion entirely).

Alistair
Old 31st May 2013 | Show parent
  #82
Gear Maniac
 
Aivaras's Avatar
 
🎧 10 years
Quote:
Originally Posted by Alexey Lukin ➡️
Digital samples from the A/D converter are literally samples of the analog signal, i.e. measurements of instantaneous analog voltages at uniformly spaced moments of time.
Strictly speaking, a sample should refer to what is measured (a portion of the original thing) not the measurement itself, nor the description of that measurement in some language.

Quote:
Originally Posted by Alexey Lukin ➡️
At least they can be considered this way.
Indeed they can, I'm not suggesting we should subvert the concept, just drawing attention to the fact that it is somewhat of a figurative concept when it is used to describe a digital data stream.
Old 31st May 2013 | Show parent
  #83
Gear Addict
 
vladg's Avatar
 
🎧 10 years
Quote:
Originally Posted by flatfinger ➡️
Not using every dam bit ????
You would have to be at -6 dBFS to lose the most significant bit. So at -1 dBFS you lose only a fraction of it.
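A quick check of the arithmetic (each bit corresponds to about 6.02 dB):

```python
import math

# One bit of resolution corresponds to a factor of 2 in amplitude,
# i.e. 20*log10(2) ~ 6.02 dB.
db_per_bit = 20 * math.log10(2)
print(round(db_per_bit, 2))        # 6.02
print(round(10 ** (-1 / 20), 3))   # 0.891 of full scale left at -1 dBFS
print(round(10 ** (-6 / 20), 3))   # 0.501 of full scale left at -6 dBFS
```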

To pour more oil on the fire of this discussion: IMHO the reason for ISPs with fast limiting and clipping is the Gibbs phenomenon.

Gibbs phenomenon - Wikipedia, the free encyclopedia

The limiting (and clipping) process extends the signal bandwidth via intermodulation (and harmonic) distortion. To return the signal to its initial bandwidth (in the ADC, or before downsampling in oversampled limiters) you have to low-pass it, and as a result you'll have peaks above the limiting threshold (see the Gibbs phenomenon). That's what we're talking about.

Now imagine you have a non-oversampled digital limiter or clipper. You have digital samples peaking at 0 dBFS, but the "true" peaks of the reconstructed high-resolution or continuous signal (in the DAC or an upsampling engine) are above that. Why? Because the signal is band-limited to half the sample rate.

So you should leave a small headroom for that Gibbs guy :-)
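The overshoot itself is easy to reproduce (a minimal sketch in pure Python: an ideal square wave band-limited by keeping only its first 25 odd harmonics, a crude stand-in for the low-pass step):

```python
import math

# Partial Fourier sum of a unit square wave: keeping only the first
# `harmonics` odd harmonics band-limits the signal.
def bandlimited_square(t, harmonics=25):
    return sum(4 / (math.pi * (2 * k + 1)) * math.sin(2 * math.pi * (2 * k + 1) * t)
               for k in range(harmonics))

# The flat top sits at 1.0, but the band-limited version rings above it
# near each edge: the Gibbs overshoot, ~18% (about +1.4 dB) in the limit.
peak = max(abs(bandlimited_square(i / 10000)) for i in range(10000))
print(peak > 1.15)  # True: well above the clipped level of 1.0
```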

Okay, you can ignore him, but the argument that most people won't hear the distortion that can occur... it frightens me a bit to hear that on a mastering forum :-)

Last edited by vladg; 31st May 2013 at 11:31 AM.. Reason: typos
Old 31st May 2013 | Show parent
  #84
Lives for gear
 
🎧 5 years
Quote:
Originally Posted by Mr. Lau ➡️
ISPs occur when you have many consecutive samples at a top value, not just at 0 dBFS. A typical result of limiting.

Audible? No.

Something to worry about too much? No.

And the master will definitely not be at -4 dBFS, but close to 0.

"I don't know what ISPs are" = my way of saying I never encounter them because I master @ -4dBfs.
Old 31st May 2013 | Show parent
  #85
Gear Maniac
 
Aivaras's Avatar
 
🎧 10 years
Quote:
Originally Posted by UnderTow ➡️
Take the case of a straight AD/DA loop. The only way to create ISPs over 0 dB FS (or the equivalent thereof) is to feed it a signal that had those peaks to start with.
An analog signal one feeds to an AD converter cannot contain ISPs (“to start with”) as these are digital values, not the physical properties of the analog signal’s amplitude. As with any analog signal transmission, proper calibration of analog levels is a necessary precondition for the proper functioning of AD conversion. Given the AD converter’s proper design and calibration plus the fact that its analog inputs are not over-driven, the converter should be able to measure and quantize the incoming analog signal so that an identical or nearly identical analog signal can be generated by a DA converter from the AD digital data stream.

Quote:
Originally Posted by UnderTow ➡️
This means that you only have the ISPs because the peak was lucky enough to fall between two sample points. If the signal's timing would have been slightly earlier or slightly later, the peak would have fallen on a sampling point and would have clipped the AD quantizer. This then makes the behaviour of the AD/DA loop arbitrary depending on arrival timing. This breaks Shanon-Nyquist!
And I was told it was all deterministic (a playful and friendly reference to one of your utterances in this thread)!

In the initial (pre-decimation) phases of AD conversion the analog signal is measured at a rate that for all practical purposes makes the issue of “falling between sample points” moot if the conditions of proper design and operation of an AD converter are met.

Furthermore, AD/DA conversion is not about describing the “peaks and dips” of a signal (nor about the arrival schedule), it’s about the minimum quantity of measurement points that are necessary to produce a set of digital values based on which an identical analog signal can be generated. Moving the incoming signal in time should not change a thing because the measurement of the incoming analog signal is conducted both discretely (quantitative description) and structurally (temporal/relative description).

The Nyquist theorem is an exercise in conceptual frugality, an economy of thought for doing a perfect scientific measurement/explanation/reproduction of a physical phenomenon with the least needed resources for such a measurement/explanation/reproduction. That implies that the set of digital values (PCM stream) that is produced by AD conversion in accordance with the Nyquist Theorem is not meant to be the “whole story” of the incoming analog signal (to capture the "whole story" would have involved too much resources and, from a scientific point of view, would have been redundant), it is instead meant to be a “selective story” of that signal just enough to allow a deterministic reconstruction of the “whole story” at a later point in time.

Granted, the "whole story" is likely to include bits and pieces that the "selective story" doesn't, but it is the task of the reconstruction/generation process to accommodate (have the necessary headroom for) those bits and pieces, because a "selective story" is actually perfect for what it is. If inter-sample peaks can be derived from, say, a 0dBFS data stream when it is re-sampled, or if higher amplitude peaks can be derived from the same data stream when it is converted to an analog signal, then hasn't it thereby been shown that this data stream, incomplete and selective as it is by design, in fact contains all the necessary elements for the later reconstruction of the "whole story", nothing less, nothing more?

As things stand now, the headroom is being expanded by pushing the head down, not by raising the ceiling. A nice example of a Procrustean bed, one could append.

Quote:
Originally Posted by UnderTow ➡️
So any signal causing ISPs over 0 dB FS (or the equivalent thereof) is an illegal signal and is most certainly broken!
How about a restatement like this: any digital processing such as re-sampling, or any analog processing such as DA conversion, that produces but cannot accommodate ISPs and/or the respective amplitude values, thereby distorting the analog reproduction circuitry, is broken by design?

Just to give an example: why not require a re-sampling algorithm to normalize the positive peak values (“overs”) to the value of the source (say, 0dBFS) if such values actually occur when re-sampling? Let the procedure which adds extra information do the accommodating and corrective work, not the source which is valid and sound as it is.

Please, excuse me for the long post. All these loose considerations of mine are more in the form of relaxed questioning for the pleasure of it. Thank you for your replies, it is a learning experience.
Old 31st May 2013 | Show parent
  #86
Lives for gear
 
Alexey Lukin's Avatar
 
Verified Member
🎧 10 years
Quote:
Originally Posted by Aivaras ➡️
Strictly speaking, a sample should refer to what is measured (a portion of the original thing) not the measurement itself, nor the description of that measurement in some language.
That's what I meant: they are measured values.
Old 1st June 2013 | Show parent
  #87
Lives for gear
 
scraggs's Avatar
 
4 Reviews written
🎧 15 years
Quote:
Originally Posted by The_K_Man ➡️
"I don't know what ISPs are" = my way of saying I never encounter them because I master @ -4dBfs.
and once again i will ask you for a link to one of these records you've mastered.
Old 1st June 2013 | Show parent
  #88
Gear Guru
 
UnderTow's Avatar
 
Verified Member
🎧 15 years
Quote:
Originally Posted by Aivaras ➡️
An analog signal one feeds to an AD converter cannot contain ISPs (“to start with”) as these are digital values, not the physical properties of the analog signal’s amplitude.
The ISPs are most certainly a property of the audio signal, not of the digital samples. The digital samples are limited to 0 dB FS (assuming a fixed-point format). Again, we should not confuse the digital encoding format with the actual signal we have encoded. If you picture the actual signal, you will see that there are inter-sample peaks all over the place.

Or you could do the opposite exercise and imagine the sample points in the analogue domain. For instance, if you used a plotter to draw the analogue waveform on graph paper and then meticulously put a little dot on the waveform every 1/44100th of a second (or whatever sample rate you want to represent), you would see that the little dots often do not fall on the peaks of the waveform. All the peaks above the sample dots are inter-sample peaks, even if in practice we only refer to them as such when they pass over 0 dB FS.

I'll keep repeating this until it sinks in: the sample points are not the signal. They are merely the encoding format. This confusion of the two (due mainly to the waveform representation in DAWs) is at the root of this whole discussion.

Quote:
As with any analog signal transmission, proper calibration of analog levels is a necessary precondition for the proper functioning of AD conversion. Given the AD converter’s proper design and calibration plus the fact that its analog inputs are not over-driven, the converter should be able to measure and quantize the incoming analog signal so that an identical or nearly identical analog signal can be generated by a DA converter from the AD digital data stream.
Exactly. And if you feed the ADC a well-calibrated (band-limited) signal, assuming no processing, you wouldn't need any headroom in the DAC, because the DAC, after reconstructing the well-calibrated signal, will never produce anything that goes over the equivalent of 0 dB FS.

If you assume processing, then if that processing takes the actual signal into account and doesn't just look at the sample values of the encoding format, you again don't really need any headroom in the DAC, because the processing would make sure the signal never goes over 0 dB FS. Of course this means much more computationally heavy processing, so in practice we compromise and good DACs have headroom.

I think it is important to realize that this really is a compromise and not a strict application of the sampling theorem.
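A sketch of what "taking the actual signal into account" can look like: a true-peak estimate that oversamples before measuring, loosely in the spirit of ITU-R BS.1770 true-peak meters (which specify a polyphase FIR at 4x; the FFT interpolation and function names here are my own simplifications, not a reference implementation).

```python
import numpy as np

def sample_peak_dbfs(sig):
    """Naive meter: looks only at the stored sample values."""
    return 20 * np.log10(np.max(np.abs(sig)))

def true_peak_dbfs(sig, oversample=4):
    """Oversample first, then measure -- so inter-sample peaks of the
    reconstructed signal are (approximately) seen. FFT interpolation is
    used only to keep the sketch short; it assumes a periodic signal
    with no significant content near Nyquist."""
    X = np.fft.rfft(sig)
    X_up = np.zeros(len(sig) * oversample // 2 + 1, dtype=complex)
    X_up[: len(X)] = X
    y = np.fft.irfft(X_up, n=len(sig) * oversample) * oversample
    return 20 * np.log10(np.max(np.abs(y)))

# A worst-case fs/4 tone whose samples all land at +/-1.0:
n = np.arange(256)
tone = np.sqrt(2) * np.sin(2 * np.pi * 0.25 * n + np.pi / 4)
print(sample_peak_dbfs(tone))   # ~0.0 dBFS
print(true_peak_dbfs(tone))     # ~+3.0 dBFS
```

A sample-peak meter reads 0 dBFS and is happy; the oversampling meter sees the +3 dB overload the DAC will actually have to reconstruct.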

Quote:
And I was told it was all deterministic (a playful and friendly reference to one of your utterances in this thread)!
And indeed it is unless you break the system by overloading it!

Quote:
In the initial (pre-decimation) phases of AD conversion the analog signal is measured at a rate that for all practical purposes makes the issue of “falling between sample points” moot if the conditions of proper design and operation of an AD converter are met.
Indeed, in modern oversampling converters the signal would just clip. This just confirms my point that such a signal is too high!

Quote:
Furthermore, AD/DA conversion is not about describing the “peaks and dips” of a signal (nor about the arrival schedule), it’s about the minimum quantity of measurement points that are necessary to produce a set of digital values based on which an identical analog signal can be generated. Moving the incoming signal in time should not change a thing because the measurement of the incoming analog signal is conducted both discretely (quantitative description) and structurally (temporal/relative description).
Indeed. The system should never be dependent on the arrival timing and in my little conceptual exercise it does. That just confirms my point that the signal is overloading the system!

Quote:
The Nyquist theorem is an exercise in conceptual frugality, an economy of thought for doing a perfect scientific measurement/explanation/reproduction of a physical phenomenon with the least needed resources for such a measurement/explanation/reproduction. That implies that the set of digital values (PCM stream) that is produced by AD conversion in accordance with the Nyquist Theorem is not meant to be the “whole story” of the incoming analog signal (to capture the "whole story" would have involved too much resources and, from a scientific point of view, would have been redundant), it is instead meant to be a “selective story” of that signal just enough to allow a deterministic reconstruction of the “whole story” at a later point in time.
Indeed. Or rather, not quite: assuming a correctly band-limited signal, there is no "selective story". The whole story is always there, ready to be decoded by the DAC. In a well-implemented system, the whole story is never lost. The information, and that is what this is all about, is always there, albeit in encoded form. And thus we should not confuse the encoding format with the actual information being encoded: the signal.

Quote:
If inter-sample peaks can be derived from, say, an 0dBFS data stream when it is re-sampled, or if higher amplitude peaks can be derived from the same data stream when it is converted to an analog signal, then haven’t it been thereby said that this data stream, incomplete and selective as it is by design, in fact contains all the necessary elements for the later reconstruction of the "whole story," nothing less, nothing more?
Exactly, unless we break the system by feeding it signals that overload it (or by manipulating the signal internally into such an overload). These are out-of-bounds, or so-called "illegal", signals. If we work within the bounds of the system, it is fully deterministic and no information is lost.

Quote:
As things stand now, the headroom is being expanded by pushing the head down, not by raising the ceiling. A nice example of a Procrustean bed, one could append.
Procrustean bed. I had to look that one up.

Anyway, you can't freely raise the ceiling, as that also raises the noise floor. You can only choose a new arbitrary calibration for the equivalent of 0 dBFS. When you add headroom to the DAC, you are actually just pushing the signal down towards the system's noise floor, and the dynamic range of the system suffers. (I think that is also what you are saying.) It is always a compromise. If the system never produced any output over the equivalent of 0 dBFS, no headroom would be needed and the maximum possible dynamic range of the system could be utilized.

Assume an idealized DA/AD loop: by adding headroom at the top, every time a signal passes through this loop the noise floor is raised relative to the signal (above and beyond any noise added by dither or analogue self-noise). Without this headroom, the noise floor would remain constant (still ignoring dither etc.).
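The trade-off can be put in back-of-envelope numbers, using the ideal quantizer SNR figure of 6.02·bits + 1.76 dB for a full-scale sine (dither and analogue self-noise ignored, as above; the function name is mine):

```python
# Each dB of headroom comes straight off the usable dynamic range:
# the ceiling moves down while the quantization noise floor stays put.
def ideal_dynamic_range_db(bits, headroom_db=0.0):
    """Ideal full-scale-sine SNR of an N-bit quantizer, minus headroom."""
    return 6.02 * bits + 1.76 - headroom_db

for headroom in (0, 3, 6):
    print(f"24-bit converter, {headroom} dB headroom: "
          f"{ideal_dynamic_range_db(24, headroom):.1f} dB usable")
# 0 dB -> 146.2 dB, 3 dB -> 143.2 dB, 6 dB -> 140.2 dB
```

In a 24-bit system the loss is academic; in a 16-bit delivery chain, or through repeated DA/AD passes, it is not.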

Quote:
How about a restatement like this: any digital processing such as re-sampling or any analog processing such as DA conversion that produces but is not able to accommodate ISPs and/or respective amplitude values thereby distorting analog reproduction circuitry is broken by design?
I don't quite agree. I would say that adding headroom is a compromise by design, made to accommodate illegal signals. Of course, in practice this is the way to do things, but in my eyes it is most certainly a compromise, even if it is a good one.

Quote:
Just to give an example: why not require a re-sampling algorithm to normalize the positive peak values (“overs”) to the value of the source (say, 0dBFS) if such values actually occur when re-sampling? Let the procedure which adds extra information do the accommodating and corrective work, not the source which is valid and sound as it is.
The procedure is not adding extra information at all! It is just revealing the information that was always there in encoded form! The signal, the actual information rather than the sample values, was always overloading the system; it just happened that those peaks fell between the sample values of the encoding format. A bit of luck for the truant signal, who needs a good dosage of the digital slapping you propose.

In other words, I think your suggestion is a good one. Basically, it is a method to re-integrate those rogue illegal signals into the society of good signals.
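For illustration, the "normalize the overs after resampling" idea could be sketched like this (function names and the FFT-based interpolation are mine, a toy under the usual periodic/band-limited assumptions, not a production resampler):

```python
import numpy as np

def upsample(sig, factor):
    """FFT-based ideal interpolation (assumes a periodic, band-limited
    signal with no significant content near Nyquist)."""
    X = np.fft.rfft(sig)
    X_up = np.zeros(len(sig) * factor // 2 + 1, dtype=complex)
    X_up[: len(X)] = X
    return np.fft.irfft(X_up, n=len(sig) * factor) * factor

def upsample_and_normalize(sig, factor, ceiling=1.0):
    """Upsample, then pull any revealed inter-sample overs back under
    the ceiling -- the 'let the procedure do the corrective work' idea."""
    y = upsample(sig, factor)
    peak = np.max(np.abs(y))
    return y * (ceiling / peak) if peak > ceiling else y

n = np.arange(256)
hot = np.sqrt(2) * np.sin(2 * np.pi * 0.25 * n + np.pi / 4)  # ISPs at ~+3 dBFS
safe = upsample_and_normalize(hot, 4)
print(np.max(np.abs(safe)))   # ~1.0: back within bounds
```

Note this is a plain gain reduction, so the whole programme comes down with the overs; that is the "slapping" in action.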

Quote:
Please, excuse me for the long post. All these loose considerations of mine are more in the form of relaxed questioning for the pleasure of it. Thank you for your replies, it is a learning experience.
And the same applies to my posts on the subject.

Have a wonderful Saturday!

Alistair
Old 9th November 2013 | Show parent
  #89
Lives for gear
 
dcollins's Avatar
 
Verified Member
🎧 15 years
Quote:
Originally Posted by Lagerfeldt ➡️
There's no information available on this anywhere on the net (as far as I can find). The only thing that pops up in Google is my thread about the subject. Until now... ta da daaa!

I thought I could find it using Activity Monitor, but no. I asked one of my friends (who's a coder) and he found this, using a utility that's part of XCode.

Turns out it isn't a regular limiter, but a multiband compressor. This pops up whenever you're using the internal speakers on a MacBook (and iMac).
I can’t say that the speakers in my MBP sound like there’s an MBC in the path, though. How the actual compressor works is TBD.

HTH!


DC
Old 10th November 2013
  #90
Audio Alchemist
 
Lagerfeldt's Avatar
 
Verified Member
3 Reviews written
🎧 15 years
It's easiest to hear when sub frequencies are making it pump. Also, try firing up a tone generator with square waves and go near full scale.

I assure you it's there and active with the internal analog output. You'll see it pop up in the Xcode utility when you use the internal output, and not when you use any digital output or external sound card.
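For anyone who wants to try this, here is one way to generate such a test tone (the file name, 50 Hz frequency and -0.3 dBFS level are arbitrary choices of mine, not from the thread):

```python
import wave
import numpy as np

# A 50 Hz square wave just below full scale, written as a 16-bit mono WAV.
# Low-frequency, near-full-scale material is exactly the kind of signal
# described above for making the OS X speaker compressor pump.
fs, freq, seconds = 44100, 50, 5
amp = 10 ** (-0.3 / 20)                       # -0.3 dBFS linear amplitude
t = np.arange(fs * seconds) / fs
square = amp * np.sign(np.sin(2 * np.pi * freq * t))
pcm = (square * 32767).astype(np.int16)

with wave.open("square_test.wav", "wb") as w:
    w.setnchannels(1)                         # mono
    w.setsampwidth(2)                         # 16-bit
    w.setframerate(fs)
    w.writeframes(pcm.tobytes())
```

As a bonus, a band-limited square wave also overshoots its sample values (Gibbs ringing), so this doubles as an intersample-peak torture test.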