Paul Frindle - Is This Truth Or Myth? - Page 58 - Gearspace.com
Old 31st July 2017
  #1711
Lives for gear
Quote:
Originally Posted by andy3 ➡️
Thank you Paul, I'll re-elaborate my first question, please.

Is it true that the important thing, in order to avoid deterioration of the file (sorry, I am a noob), is that you must not exceed 0 on the master bus? And that you can exceed 0 on single channels if you like the sound?
Host DAWs (running natively on the machine itself) process in floating-point math. This means that signals above flat-out 0dBFS are preserved and not clipped - so it's not a problem in the channels or on the master bus as long as it remains internal to the DAW.

However, because the final destination (CD, DAC, AES, SPDIF etc.) cannot accept signal greater than 0dBFS (it will simply clip it off), you have to make sure your final mix output does not exceed 0dBFS peak sample value.

To take it one step further; while digital transfers (CD, AES and SPDIF) can accurately deliver full sample values up to 0dBFS, it's not a given that the customer's destination machine will treat it well when converting it to analogue. The reason for this is that a succession of near full-level sample values (due to compression, limiting and such like) can produce a converted result which is bigger than full level in the equipment's DAC or sample rate converter. The destination devices may clip, or do even worse things, when your mix gets played out. Some people call this 'inter-sample peaking'.

A rough rule of thumb to avoid the possibility of this is to aim for a max of around -3dB peak sample values on your final mix. This should be safe for any 'normal' music signal. This is generally known as leaving some 'headroom' - i.e. space above the max peak sample value to accommodate overs caused by stuff down the line.

Mastering engineers who are hell-bent on getting the maximum possible loudness at any cost employ 'real level' reconstruction metering to monitor decoded overloads, and try to prevent them manually, or by further processing the signal, in order to avoid the peak loudness loss which reducing the overall level would cause. I have a home-made VST plugin we made to do this myself.

Some even find themselves outboard DACs which 'apparently' don't sound too awful when overdriven; use them to convert the mix to analogue - and then reconvert back to digital again with an ADC, to provide a 'quasi-legal' output signal for the final master. In other words, they are clipping their own DACs so that the customer's play-out machine doesn't have to...
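A minimal numpy/scipy sketch of the kind of check described above (not Paul's plugin; the clipped 997 Hz sine is just a stand-in for heavily limited programme, and 8x oversampling approximates a DAC's reconstruction filter):

import numpy as np
from scipy.signal import resample_poly

fs = 44100
t = np.arange(fs) / fs
# Stand-in for heavily limited programme: a sine pushed into hard clipping,
# so no sample value ever exceeds 0 dBFS (1.0).
mix = np.clip(1.4 * np.sin(2 * np.pi * 997 * t), -1.0, 1.0)

sample_peak = np.max(np.abs(mix))                      # exactly 1.0 (0 dBFS)
true_peak = np.max(np.abs(resample_poly(mix, 8, 1)))   # 8x ~ reconstruction
print(f"sample peak: {20 * np.log10(sample_peak):.2f} dBFS")
print(f"true peak:   {20 * np.log10(true_peak):+.2f} dBFS")  # above 0 dBFS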
Old 31st July 2017
  #1712
Lives for gear
so 'inter-sample peaking' is a misnomer?
Old 31st July 2017
  #1713
Lives for gear
Quote:
Originally Posted by Dave.R ➡️
so 'inter-sample peaking' is a misnomer?
Well yes, like so many things in the audio industry. No one can apparently get beyond the idea that the samples are somehow the 'signal' - when of course they are only a stream of PCM code values which create the signal when decoded.

So the expression is based (like many other misunderstandings) on this false notion of 'samples' which are somehow 'missed' from the original signal (due to sampling) and 'reappear' magically when the signal is decoded.

The peaks do not occur because of phantom samples that may (or may not) have been 'missed' due to finite sampling rates.

They occur because the PCM data coming out of the DAW (for all sorts of reasons, including limiting) gives rise to actual decoded signal which is above the max sample values. In other words, because of the processing people have done on it, the data which is coming out could not have been produced by a real ADC on a real signal - and is therefore illegal.

The only way to accurately accommodate this overloaded PCM data is by reducing it before decoding it, to the point where the final decoded signal remains within legal bounds.
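A minimal sketch of that 'reduce before decoding' step, under the same numpy/scipy assumptions; the function name, the -0.3 dB ceiling and the 8x factor are arbitrary illustrative choices:

import numpy as np
from scipy.signal import resample_poly

def legalise(pcm, ceiling_db=-0.3, oversample=8):
    """Attenuate a PCM stream until its reconstructed peak sits below the
    ceiling. A plain gain change: it creates no new overs of its own."""
    ceiling = 10 ** (ceiling_db / 20)
    tp = np.max(np.abs(resample_poly(pcm, oversample, 1)))
    return pcm if tp <= ceiling else pcm * (ceiling / tp)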
Old 31st July 2017
  #1714
theblue1
Gear Guru
Do you not think it reasonable to say that the sample values within the context of the specific format do, indeed, comprise signal as it tends to be defined in information theory?
Old 31st July 2017
  #1715
Gear Guru
Quote:
Originally Posted by Paul Frindle ➡️
Well yes, like so many things in the audio industry. No one can apparently get beyond the idea that the samples are somehow the 'signal' - when of course they are only a stream of PCM code values which create the signal when decoded.

So the expression is based (like many other misunderstandings) on this false notion of 'samples' which are somehow 'missed' from the original signal (due to sampling) and 'reappear' magically when the signal is decoded.
Are you saying that there are problems also at places where only "legal" samples existed (in time), and not only "between samples"?
Old 31st July 2017
  #1716
UnderTow
Gear Guru
Quote:
Originally Posted by mattiasnyc ➡️
I don't think that's true.

If you have a stream of samples that are all at full level, you wouldn't get what we call "inter-sample peaks", correct? Because as far as I understand it, "the PCM data coming out of the DAW (for all sorts of reasons, including limiting) gives rise to actual decoded signal which" in that case would not be "above the max sample values."

Instead, when we have a problem "the data which is coming out" and "could not have been produced by a real ADC on a real signal - and is therefore illegal" exists not where the samples are, but between them.

So since the reconstructed signal doesn't only exist where the samples "were", it's actually perfectly reasonable to refer to it as "inter-sample" peaks, because it correctly illustrates the problem.

When I heard the term the first time I actually understood it exactly in the context you point out, which you use to say that the term is misused. So I would in contrast say that it's highly useful, because it in a sense visualizes the deception that our reconstructed signal is only the samples, or is the samples (which isn't true). I think the term actually implies exactly what you say: that we're in a sense "decoding" or reconstructing a signal based on the data, and we can end up with problems in places where legal samples are not, even though "illegal" samples aren't present.
I have mixed feelings about the term. As you say, it makes it clear what is happening in a visual manner. On the other hand, the term implies that it is an exception, an unusual case, when in reality, the actual signal "between samples" can go above or below those sample values even if they do not pass 0 dB FS. Or to put it differently, the signal has inter-sample peaks all over the place but they usually don't pass 0 dB FS as the sample values are lower.

For anyone who doesn't know what we are talking about, here is a picture I created for another thread. The squares are the sample values. The green line is the actual signal, and if you look at the scale on the right you will see that the signal passes over 0 dB FS many times even though the sample values don't. The signal goes over +6 dB FS at its highest peak.

[image: sample values (squares) with the reconstructed signal (green line) passing above 0 dB FS]
Alistair
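For readers who want to reproduce that kind of picture themselves, a rough sketch using truncated-sinc interpolation as a stand-in for ideal reconstruction (numpy/matplotlib assumed; edge effects at the ends of the block are ignored):

import numpy as np
import matplotlib.pyplot as plt

fs = 44100
n = np.arange(64)
# Sample values that never exceed 0 dBFS: a sine pushed into hard clipping.
x = np.clip(1.5 * np.sin(2 * np.pi * 5000 * n / fs), -1.0, 1.0)

# Ideal band-limited reconstruction: a sum of sinc pulses, one per sample.
t = np.linspace(0, len(n) - 1, 20 * len(n))        # 20 points per period
y = np.array([np.sum(x * np.sinc(ti - n)) for ti in t])

plt.plot(t, y, 'g-', label='reconstructed signal')
plt.plot(n, x, 'ks', label='sample values')
plt.axhline(1.0, color='r', ls='--', label='0 dBFS')
plt.legend()
plt.show()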
Old 31st July 2017
  #1717
Lives for gear
I totally understand mixed feelings about this. I knew this would cause controversy, because it always does, because it's about meaning attached to concepts. Everyone sits in front of a DAW which displays sample values, and everyone is encouraged to think of them as 'signal' - when in fact they are only a stream of numbers following the Pulse Code Modulation format.

It depends on what people attribute to the language they use - what is actually meant by it - and crucially if that meaning is accurate and universal. I suppose if people and the industry rename samples as 'signal' leaving the 'digital' part out, then that's what everyone accepts they are? But it does not mean the understanding is correct. An AES connection is a data connection. Call it a signal if you like, but it sure as hell is not an audio signal in essence.

But in reality the only digital audio signal you can ever experience is the decoded one - obtained by filtering out the above-Nyquist artefacts of sampling and returning a continuous analogue signal.

We must never forget that whatever process we actually run in the digital domain - it is not running on decoded real world signal, it's simply changing the data stream to produce the required effect when the signal is eventually decoded. All signal processing designers must live with this hugely important concept permanently :-)

This is what I mean by signal - i.e. the one that we will hear and experience in the real world.

The example you present is not an accurate one. In fact the peaks are produced over a large number of 'samples', depending on what the decoded signal will reach. They are not 'inter-sample' - they can appear anywhere in the decoded signal - and you cannot accurately recreate the real process with simple graphics.

But I realise all too well that it is impossible to convince anyone of this fact.. I have been trying for more than 20 years..


Quote:
Originally Posted by UnderTow ➡️
I have mixed feelings about the term. As you say, it makes it clear what is happening in a visual manner. On the other hand, the term implies that it is an exception, an unusual case, when in reality, the actual signal "between samples" can go above or below those sample values even if they do not pass 0 dB FS. Or to put it differently, the signal has inter-sample peaks all over the place but they usually don't pass 0 dB FS as the sample values are lower.

For anyone who doesn't know what we are talking about, here is a picture I created for another thread. The squares are the sample values. The green line is the actual signal, and if you look at the scale on the right you will see that the signal passes over 0 dB FS many times even though the sample values don't. The signal goes over +6 dB FS at its highest peak.




Alistair
Old 31st July 2017
  #1718
Gear Guru
Quote:
Originally Posted by Paul Frindle ➡️
I totally understand mixed feelings about this. I knew this would cause controversy, because it always does, because it's about meaning attached to concepts.
There's no controversy here Paul, we were just asking a question.

Quote:
Originally Posted by Paul Frindle ➡️
the peaks are produced over a large number of 'samples' depending on what the decoded signal will reach. They are not 'inter-sample' - they can appear anywhere in the decoded signal - and you cannot accurately recreate the real process with simple graphics..

But I realise all too well that it is impossible to convince anyone of this fact.. I have been trying for more than 20 years..
Why are you telling us this? It's a bit presumptuous to assume that we will refuse to agree with you. Completely unnecessary.

I asked you if these peaks can occur also "over" samples, and your answer is "yes". So, the term isn't really valid because "inter" doesn't necessarily apply. That's all you had to say.

Why you're arguing the word "signal" I have no idea. The question was about "peaks" that are "inter-sample", not what a signal is. We know what a signal is.
Old 31st July 2017
  #1719
UnderTow
Gear Guru
Quote:
Originally Posted by Paul Frindle ➡️
I totally understand mixed feelings about this. I knew this would cause controversy, because it always does, because it's about meaning attached to concepts. Everyone sits in front of a DAW which displays sample values, and everyone is encouraged to think of them as 'signal' - when in fact they are only a stream of numbers following the Pulse Code Modulation format.

It depends on what people attribute to the language they use - what is actually meant by it - and crucially if that meaning is accurate and universal. I suppose if people and the industry rename samples as 'signal' leaving the 'digital' part out, then that's what everyone accepts they are? But it does not mean the understanding is correct. An AES connection is a data connection. Call it a signal if you like, but it sure as hell is not an audio signal in essence.

But in reality the only digital audio signal you can ever experience is the decoded one - obtained by filtering out the above-Nyquist artefacts of sampling and returning a continuous analogue signal.

We must never forget that whatever process we actually run in the digital domain - it is not running on decoded real world signal, it's simply changing the data stream to produce the required effect when the signal is eventually decoded. All signal processing designers must live with this hugely important concept permanently :-)

This is what I mean by signal - i.e. the one that we will hear and experience in the real world.

The example you present is not an accurate one. In fact the peaks are produced over a large number of 'samples' depending on what the decoded signal will reach. They are not 'inter-sample' - they can appear anywhere in the decoded signal - and you cannot accurately recreate the real process with simple graphics..

But I realise all too well that it is impossible to convince anyone of this fact.. I have been trying for more than 20 years..
I'm sure many are aware of the difference between the sample values and the actual signal. Not enough people obviously but still plenty.

Anyway, the example may not be 100% accurate but it is a good illustration of what is meant IMO. The signal line is created by interpolation of the sample values. Similar, albeit not identical, to what a DAC does. It isn't 100% accurate but it beats all the editors and DAW representations that just "join the dots" with straight lines between the sample values.

As for language and meaning, I read the "inter sample" part of "inter sample peak" as a location in time. So we have a peak in the actual signal level after reconstruction at a point in time that corresponds with a point in time between two samples of the encoded data stream before reconstruction. Or to put it another way, an imagined decoded signal overlaying the sample points.

Alistair
Old 31st July 2017
  #1720
Lives for gear
Quote:
Originally Posted by UnderTow ➡️

As for language and meaning, I read the "inter sample" part of "inter sample peak" as a location in time. So we have a peak in the actual signal level after reconstruction at a point in time that corresponds with a point in time between two samples of the encoded data stream before reconstruction. Or to put it another way, an imagined decoded signal overlaying the sample points.

Alistair
But this isn't true. The peaks do not occur 'between samples' in sync with sample boundary times, as you suggest. They appear anywhere and at any time - because the decoded output is not sampled in discrete time - it is continuous time. The sampling period itself does not exist in the decoded signal and has no meaning.

Also, your drawn-in peak, being situated between 2 samples, would never reach the output in that form, because it implies frequency content beyond the Nyquist point, where no output content (should) exist. It could only ever contribute to the signal which exists below the Nyquist point, in the legal frequency band.

You have to think of this thing the other way around. Namely; what could possibly enter the system via an ADC which is properly filtered below nyquist - and what could possibly make it out of the system via a DAC which is properly reconstructed.

So; a properly constructed ADC could only ever produce a valid PCM stream whatever signal is fed to it. And that data would be accurately decoded back into signal by a properly implemented DAC.

But very crucially... stuff you do to the data along the way, in trying to change the nature of the eventual decoded signal, may not be valid - even if you can apparently see it on a DAW sample value display.

Why? Because the sample values you see on your DAW are not signal - they are intermediate data only :-)
Old 31st July 2017
  #1721
BadYodeler
Gear Addict
Quote:
Originally Posted by diggo ➡️
The latest problem I'm having with DSM v2.5.1 is the parameter knobs flipping between OFF and FULL. The knobs don't work, in other words. I have to manually enter the parameters, which also can't be tweaked by dragging the cursor over the numeral slot. This nearly makes the plugin unusable for me.
Are you using the VST or VST3 version? I ran into the knob problem too with the VST3 version. Since then I just use the VST version.
Old 31st July 2017
  #1722
Lives for gear
Quote:
Originally Posted by mattiasnyc ➡️
There's no controversy here Paul, we were just asking a question.



Why are you telling us this? It's a bit presumptuous to assume that we will refuse to agree with you. Completely unnecessary.

I asked you if these peaks can occur also "over" samples, and your answer is "yes". So, the term isn't really valid because "inter" doesn't necessarily apply. That's all you had to say.

Why you're arguing the word "signal" I have no idea. The question was about "peaks" that are "inter-sample", not what a signal is. We know what a signal is.
Sorry - I'm not in the least accusing anyone or trying to cause an argument. That's the very last thing I want to do.

The issue of the meaning of signal versus data is an important one, because it's the fundamental reason why the term 'inter-sample peak' was ever coined. The peaks are not 'inter-sample'. That was why someone originally asked whether it was a misnomer? I hope I've answered them :-)

I know it sounds pedantic - but believe me, an understanding of this is important to people using their gear in real life. A post after yours confirms this?

But I'm very happy to shut up if people think it's unhelpful and would rather I did. It's a very old conversation I've had many times - but the misunderstandings persist.

Sorry - I'll just shut up :-)
Old 31st July 2017
  #1723
Gear Guru
Quote:
Originally Posted by Paul Frindle ➡️
But this isn't true. The peaks do not occur 'between samples' in sync with sample boundary times, as you suggest. They appear anywhere and in any time
So there's zero correlation between the source sample stream and the reconstructed signal? None?
Old 31st July 2017
  #1724
Gear Guru
Quote:
Originally Posted by Paul Frindle ➡️
But I'm very happy to shut up if people think it's unhelpful and would rather I did. It's a very old conversation I've had many times -but the misunderstandings persist.

Sorry
Oh come on! My only point was that you're expressing yourself as if nobody except you and a handful of experts understands anything about this.

Your contribution is helpful, but you don't need to imply such things.
Old 31st July 2017
  #1725
Lives for gear
Quote:
Originally Posted by mattiasnyc ➡️
So there's zero correlation between the source sample stream and the reconstructed signal? None?
My last post on this subject, because I'm upsetting people who know this stuff already and feel I'm being condescending. I apologise for this..

There is obviously tight correlation between the sample stream and the reconstructed signal - after all, the data directly ends up AS the signal we get from it. No data - no signal.

However, there should be no correlation between the actual sampling boundary times themselves and the reconstructed analogue signal - because the resulting analogue signal is not quantised in time. Thankfully we don't get new parts of the signal in 20 µs bursts at the output - it is continuous.

Sampling does not constrain the decoded output to boundaries of the sampling rate.
Old 1st August 2017
  #1726
Lives for gear
sampling rate indicates the number of times a continuous signal is sampled during the given time period. so there is some space/time between the sampling periods which remains unsampled.

'inter-sample peaking' implies that this unsampled signal period can somehow be recreated when in fact it doesn't even exist within the sampling data

it's just not there and therefore cannot be recreated. ergo 'inter-sample peaking' is a misnomer

have I got it right Paul?
Old 1st August 2017
  #1727
Gear Guru
Quote:
Originally Posted by Dave.R ➡️
sampling rate indicates the number of times a continuous signal is sampled during the given time period. so there is some space/time between the sampling periods which remains unsampled.

'inter-sample peaking' implies that this unsampled signal period can somehow be recreated when in fact it doesn't even exist within the sampling data

it's just not there and therefore cannot be recreated. ergo 'inter-sample peaking' is a misnomer

have I got it right Paul?
No.
Old 1st August 2017
  #1728
UnderTow
Gear Guru
Quote:
Originally Posted by Paul Frindle ➡️
But this isn't true. The peaks do not occur 'between samples' in sync with sample boundary times, as you suggest. They appear anywhere and at any time - because the decoded output is not sampled in discrete time - it is continuous time. The sampling period itself does not exist in the decoded signal and has no meaning.
You are not understanding me. Let me explain in a different way:

Take the output of a DAC and feed it to an analogue oscilloscope. You can now picture the signal as it is after reconstruction. At the same time, generate another image stream that shows the sampling points of the encoded signal before it hits the DAC and before reconstruction. Adjust the timing of this created image stream to take into account the latency of the DAC. Feed this image to the same oscilloscope screen as an overlay over the oscilloscope image.

If you set the refresh rate of the oscilloscope screen and the image of the sampling points to the sampling rate of the digital data stream before reconstruction, the sampling points will appear to be static. You now have an overlay of the sampling points over the image of the reconstructed signal. That is basically what my image is but generated synthetically.

You can now observe a point on the screen that falls between two sampling points and, depending on the signal you are using, you could see a peak in the signal that passes above the two adjacent sampling points. An inter sample peak.

Quote:
Also your drawn-in peak, being situated between 2 samples, would never reach the output in that form, because it is implying a freq content which is beyond the nyquist point where no output content (should) exist. It could only ever contribute to the signal which exists below the nyquist point in the legal freq band.
If you observe the image carefully you will see that there are no frequencies above Nyquist. The highest peak, whatever frequency content it contains, is less than half a cycle and fits between two sampling points. In other words, less than half the sampling rate and thus below the Nyquist frequency. But pretty close to the Nyquist frequency or it wouldn't have such a high peak. If the frequency content was not so close to the Nyquist frequency, the curve of the peak would be shallower and thus peak less high as the signal has to pass through the two adjacent sample points.

I generated another image with a shallower inter-sample peak that makes it more obvious that the signal doesn't contain anything above the Nyquist frequency (but very close to the limit):

[image: shallower inter-sample peak, frequency content just below Nyquist]
Quote:
You have to think of this thing the other way around. Namely; what could possibly enter the system via an ADC which is properly filtered below nyquist - and what could possibly make it out of the system via a DAC which is properly reconstructed.
I understand how sampling works. The thing is though that even with a naturally occurring signal that you record, you can end up with inter sample peaks as long as there is headroom in the analogue stages before the quantizer in the ADC. As long as the signal is not clipped on the way in, the waveform, the actual signal, can momentarily pass the highest sampling value possible.

You can create a signal that will cause inter sample peaks or clip depending on timing. If that peak happens at a time when the signal gets sampled, it clips the quantizer as it is above its maximum value. If it happens to hit between two sampling points, it goes through the system unclipped as the signal isn't even sampled at that point but on reconstruction, the peak is recreated.

Quote:
But very crucially........... stuff you do to the data along the way in trying to change the nature of the eventual decoded signal, may not, even if you can apparently see it on a DAW sample value display.

Why? Because the sample values you see on your DAW are not signal - they are intermediate data only :-)
Yes, this I fully understand, but you can show the illegal signal, and thus inter-sample peaks, by generating a waveform that represents the decoded signal. You just interpolate the sample values in the same way that a DAC does (a.k.a. a low-pass filter). And trust me, my image IS accurate in that sense, because that is all it is: an interpolation of sample point values. You could feed that data stream to a DAC and observe the exact same peak on an analogue oscilloscope.

Alistair

Last edited by UnderTow; 1st August 2017 at 02:09 AM..
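A small sketch of that timing argument, assuming numpy: the same over-full-scale 'analogue' peak (an fs/4 sine of amplitude 1.4, about +2.9 dBFS) either lands on a sample and clips the quantizer, or falls between two samples and passes unclipped:

import numpy as np

n = np.arange(32)
A = 1.4                                   # analogue peak ~ +2.9 dBFS

for phase, where in [(0.0, "peak lands on a sample"),
                     (np.pi / 4, "peak falls between samples")]:
    s = A * np.sin(np.pi / 2 * n + phase)           # fs/4 sine, sampled
    over = np.any(np.abs(s) > 1.0)
    print(f"{where}: max sample {np.max(np.abs(s)):.3f} -> "
          f"{'clips the quantizer' if over else 'passes unclipped'}")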
Old 1st August 2017
  #1729
UnderTow
Gear Guru
Quote:
Originally Posted by Paul Frindle ➡️
However, there should be no correlation between the actual sampling boundary times themselves and the reconstructed analogue signal - because the resulting analogue signal is not quantised in time. Thankfully we don't get new parts of the signal in 20 µs bursts at the output - it is continuous.

Sampling does not constrain the decoded output to boundaries of the sampling rate.
No of course not, but nor does it stop us from imagining the sampling points that existed in the encoded signal before reconstruction. THAT is why they are called inter-sample peaks. Because we can imagine the sampling points and the reconstructed waveform together, one as an overlay over the other, we can also talk about signal peaks that are interstitial to the sampling points.

Alistair
Old 1st August 2017
  #1730
Lives for gear
Quote:
Originally Posted by UnderTow ➡️
You are not understanding me. Let me explain in a different way:

Take the output of a DAC and feed it to an analogue oscilloscope. You can now picture the signal as it is after reconstruction. At the same time, generate another image stream that shows the sampling points of the encoded signal before it hits the DAC and before reconstruction. Adjust the timing of this created image stream to take into account the latency of the DAC. Feed this image to the same oscilloscope screen as an overlay over the oscilloscope image.

If you set the refresh rate of the oscilloscope screen and the image of the sampling points to the sampling rate of the digital data stream before reconstruction, the sampling points will appear to be static. You now have an overlay of the sampling points over the image of the reconstructed signal. That is basically what my image is but generated synthetically.

You can now observe a point on the screen that falls between two sampling points and, depending on the signal you are using, you could see a peak in the signal that passes above the two adjacent sampling points. An inter sample peak.
Yes, your plot shows a reconstructed peak that happens between 2 sample points. But you can just as easily show that they can happen anywhere in the timing field, depending on the data itself. Such peaks are not constrained to the timings between samples - unless of course your DAC has no reconstruction filter.


Quote:
If you observe the image carefully you will see that there are no frequencies above Nyquist. The highest peak, whatever frequency content it contains, is less than half a cycle between two sampling points. In other words, less than half the sampling rate and thus below the Nyquist frequency. (But pretty close to the Nyquist frequency or it wouldn't have such a high peak).
The waveform of your original peak, which happens solely between 2 adjacent samples, must at least rest at the Nyquist point, where there should be no output? What am I missing?

Your new waveform shows a sine which happens to have its peak between 2 samples (it's in sync). If you now change the phase of that with respect to the sampling times, it will still have a peak, but not at sample boundaries.

If those samples on your picture correspond with full level values - then the waveform is an illegal signal that will get clipped.


Quote:
I understand how sampling works. The thing is though that even with a naturally occurring signal that you record, you can end up with inter sample peaks as long as there is headroom in the analogue stages before the quantizer in the ADC. As long as the signal is not clipped on the way in, the waveform, the actual signal, can momentarily pass the highest sampling value possible.


You can create a signal that will cause inter sample peaks or clip depending on timing. If that peak happens at a time when the signal gets sampled, it clips the quantizer as it is above its maximum value. If it happens to hit between two sampling points, it goes through the system unclipped as the signal isn't even sampled at that point but on reconstruction, the peak is recreated.
Yes, if you overdrive your ADC it can indeed produce peaking in the data which cannot be sensibly decoded. This is particularly so since the filtering in the ADC (which, especially these days, comes after the initial quantiser) can produce overshoots. But I contest the notion that these only happen between samples - they can spread over several. They do not necessarily coincide with the space between samples. For instance, you can easily create an offending waveform at lower frequencies which will cause an overshoot that lasts over many samples and does not have its centre between any 2 samples.

Quote:
Yes this I fully understand this but you can show the illegal signal and thus inter sample peaks by generating a waveform that represents the decoded signal. You just interpolate the sample values in the same way that a DAC does. (A.K.A. a low-pass filter). And trust me, my image IS accurate in that sense because that is all it is: An interpolation of sample point values. You could feed that data stream to a DAC and observe the exact same peak on an analogue oscilloscope.
Alistair
Ok - I'm assuming it is a very highly oversampled digital filter outputting a vastly higher rate (like a DAC) - so that you are not simply creating a new filtered signal at the original sampling rate?

Ok now do the same thing with a different signal and show that the peaks can coincide anywhere within the timing frame, not necessarily between samples at all :-)

Remember that the original question was whether the term 'inter-sample peak' was a misnomer? In other words, are they actually inter-sample?


And yes I did promise to shut up arghhh :-)
Old 1st August 2017
  #1731
UnderTow
Gear Guru
Quote:
Originally Posted by Paul Frindle ➡️
Yes your plot shows a reconstruction signal that happens between 2 sample points. But you can just as easily show that they can happen anywhere in the timing field, depending on the data itself. Such peaks are not constrained to the timings between samples - unless of course your DAC has no reconstruction filter.
I didn't say they were constrained to the sampling points. The image is just one example of a signal that upon reconstruction will peak above the equivalent of 0 dB FS.

Quote:
The waveform of your peak which happens solely between 2 adjacent samples must at least rest at the nyquist point where there should be no output? What am I missing?
Close to Nyquist but not above. Here is another, clearer example. The signal is the sum of 5505 Hz and 11010 Hz sine waves (at a sample rate of 44.1 kHz):

[image: sum of 5505 Hz and 11010 Hz sines, with a peak between two sampling points]
The signal is perfectly valid. The frequency content is below the Nyquist frequency and the level remains below 0 dB FS but you can still see a peak that goes above the two nearest sampling points. (This is usually not referred to as an inter sample peak though as it doesn't pass 0 dB FS. Maybe it should be as it is a peak in between two sampling points ).

Quote:
Yes if you overdrive your ADC it can indeed produce peaking in the data which cannot be sensibly decoded.
Sure, but what I am saying is that you can theoretically have momentary peaks that go through the whole system untouched if they by chance happen to occur between two sampling points of the encoded data stream. It is unlikely but not impossible.

For this to be possible you of course need headroom at all stages except the quantizer: the analogue stages before the ADC and after the DAC, and headroom in the decimator and reconstruction filter DSPs. Only the quantizer has no extra headroom, and if the signal hit with slightly different timing, it would clip the quantizer. So technically it is an illegal signal, but in theory it can still get through unclipped if the timing is lucky.

The only point of me mentioning this is to explain the theory of how sampling works (not for you, but for other readers) and what is usually referred to as inter-sample peaks. Just a mind exercise. In practice, with most ADCs and DACs and with most recorded signals, things would either be below clipping or just clip (not least due to oversampling at the ADC and DAC).

Quote:
This is particularly so since the filtering on the ADC (especially these days is after the initial quantiser) can produce overshoots. But I contest the notion that these only happen between samples - they can spread over several.
I don't think anyone here claimed that these only happen between sampling points. Having said that, I'm having a hard time picturing a signal that would reconstruct with a peak that is higher than full scale and spans multiple sampling points. That signal would be clipping in the digital domain before even hitting the DAC, and thus the resultant reconstructed waveform would only have peaks above the equivalent of 0 dB FS at the edges of the clipped area (due to the Gibbs effect), with ripples between sampling points that could also pass the equivalent of 0 dB FS. (Basically, a band-limited square wave.)


It would look something like this:

[image: band-limited square wave with Gibbs overshoot and ripples passing 0 dB FS]

Quote:
They do not necessarily coincide with the space between samples. For instance you can easily create an offending wave form at lower freqs which will cause an overshoot that lasts over many samples and does not have its centre between any 2 samples.
But wouldn't such a signal simply clip? I mean before you can even feed it to a DAC. In other words, could you even create such a signal digitally?

Quote:
Ok - I'm assuming it is a very highly oversampled digital filter outputting a vastly higher rate (like a DAC) - so that you are not simply creating a new filtered signal at the original sampling rate?
It is a band limited signal with no content above Nyquist. Obviously the graph has more points than just the sample points so in that sense it is oversampled, yes. (About 108x)

Quote:
Ok now do the same thing with a different signal and show that the peaks can coincide anywhere within the timing frame, not necessarily between samples at all :-)
I'm not sure that that is actually possible but if you can create such a signal, I am more than willing to be corrected!

Quote:
And yes I did promise to shut up arghhh :-)
Why? This is interesting!

Alistair

Last edited by UnderTow; 1st August 2017 at 04:59 AM..
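Anyone curious can check the two-sine example numerically. A rough sketch, with scipy's resample_poly standing in for reconstruction; the comparison is done per sample interval, since the point here is local peaks above the two neighbouring sample values rather than a global over:

import numpy as np
from scipy.signal import resample_poly

fs, up = 44100, 32
n = np.arange(1024)
x = (0.5 * np.sin(2 * np.pi * 5505 * n / fs)
     + 0.5 * np.sin(2 * np.pi * 11010 * n / fs))

y = resample_poly(x, up, 1)              # approximate reconstruction

# Count the sample intervals whose reconstructed curve rises above BOTH
# bracketing sample values somewhere inside the interval.
count = sum(y[k * up:(k + 1) * up + 1].max() > max(x[k], x[k + 1])
            for k in range(8, len(n) - 8))
print(count, "intervals hold a peak above both neighbouring samples")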
Old 1st August 2017
  #1732
Lives for gear
Quote:
Originally Posted by UnderTow ➡️
I didn't say they were constrained to the sampling points. The image is just one example of a signal that upon reconstruction will peak above the equivalent of 0 dB FS.



Close to Nyquist but not above. Here is another, clearer example. The signal is the sum of 5505 Hz and 11010 Hz sine waves (at a sample rate of 44.1 kHz):



The signal is perfectly valid. The frequency content is below the Nyquist frequency and the level remains below 0 dB FS but you can still see a peak that goes above the two nearest sampling points. (This is usually not referred to as an inter sample peak though as it doesn't pass 0 dB FS. Maybe it should be as it is a peak in between two sampling points ).


Alistair
Ah finally I'm getting it! :-)

You are saying that, if we ignored the DAC and all the rest, the above example situation could luckily get through the digital domain unscathed, because it was synchronous with the sample rate, with its peak exactly between 2 samples - which were both flat out - thereby causing a signal greater than normal flat out?

So what you are calling 'inter-sample peaks' are in fact potentially illegal data which 'happen' by chance to get through unscathed, which might therefore mess up down line when played out?
And if it were just off phase the data itself would clip it anyway and it would be damaged at this level anyway?

Yes of course I agree, this is perfectly correct :-) The maximum legal signal level of the system spec is somewhat less than the maximum possible signal that might pass under some very specific conditions - which is why setting digital operating levels at 0dBFS sample value was such a dumb idea :-(

So by 'inter-sample peaks' you refer to 'illegally' high inferred PCM data, which by chance doesn't get clipped off naturally by the data limitation, and therefore forms overly large reconstructed signals, whilst other PCM data might not - kind of in the sense of a negative, negative meaning?

I get it :-)

So by the same token, do you not think that overly high levels of PCM data for other waveforms which are actually clipped (by common limiting) would also cause the reconstruction to peak and overload, even though no sample goes beyond flat out? The common excessive programme limiting problem people face very often.

But even then, this situation might also be called 'inter-sample peaking' too, because the clipping is causing excess HF content, which is more or less like the above situation? Meaning that the term is in fact valid after all?

But for instance if you try to pass an artificially generated square wave at significant level and freq in the digital domain it will most certainly cause a reconstruction over, because the loss of harmonics due to the band limit reconstruction filtering will create a much larger peak signal. Could that situation be called inter-sample peaking too?

Hmm... not sure, but I'm surely thinking LOL :-)

Last edited by Paul Frindle; 1st August 2017 at 05:58 AM..
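The square-wave case is easy to demonstrate numerically; a rough numpy/scipy sketch (a naively generated 1 kHz square at full scale has no sample above 0dBFS, yet the band-limited reconstruction overshoots at every edge - the Gibbs effect mentioned above):

import numpy as np
from scipy.signal import resample_poly

fs = 44100
n = np.arange(fs // 10)
square = np.where(np.sin(2 * np.pi * 1000 * n / fs) >= 0, 1.0, -1.0)

y = resample_poly(square, 8, 1)          # 8x ~ reconstruction filter
print(f"sample peak       : {np.max(np.abs(square)):.2f}")   # 1.00
print(f"reconstructed peak: {np.max(np.abs(y)):.2f}")        # above 1.0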
Old 1st August 2017
  #1733
UnderTow
Gear Guru
Quote:
Originally Posted by Paul Frindle ➡️
Ah finally I'm getting it! :-)

You are saying that, if we ignored the DAC and all the rest, the above example situation could luckily get through the digital domain unscathed, because it was synchronous with the sample rate, with its peak exactly between 2 samples - which were both flat out - thereby causing a signal greater than normal flat out?

So what you are calling 'inter-sample peaks' are in fact potentially illegal data which 'happen' by chance to get through unscathed, which might therefore mess up down line when played out?
And if it were just off phase the data itself would clip it anyway and it would be damaged at this level?

Yes of course I agree, this is perfectly correct :-)
Great, we are on the same page.

Quote:
So by 'inter-sample peaks' you refer to 'illegally' high inferred PCM data, which by chance doesn't get clipped off naturally by the data limitation, and therefore forms overly large reconstructed signals, whilst other PCM data might not - kind of in the sense of a negative, negative meaning?

I get it :-)

So by the same token, do you not think that overly high levels of PCM data for other wave forms which do actually clip (either by data limitation or by common limiting) would not also cause the reconstruction to peak and overload too?

But even then, this situation might also be called 'inter-sample peaking' too, because the clipping is causing excess HF content, which is more or less like the above situation? Meaning that the term is in fact valid?
I've always understood the term to be used in reference to signals that would clip any conventional DAC. Usually people assume the signal clips the analogue stages but I think in most cases it would clip the digital anti-imaging filter of the DAC(1). So yes, these are illegal signals but they are easily created so they received a name.

(1) I've read of at least one DAC design that shifts the entire signal down in level before hitting the digital reconstruction filter, to allow extra headroom to cater for these "inter-sample peaks" - well, most of the peaks at least - but this is not the case with all DACs.

It was an audiophile DAC. Not something I would use for mastering purposes, as this would hide potential problems when material that sounds fine on this DAC is played back on a DAC that simply clips at 0 dB FS.

Alistair
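A toy model of the headroom trick in that footnote: shift the signal down before a (clipping, fixed-point) anti-imaging filter and make the gain up afterwards. The numbers and the clipping model are entirely hypothetical; real DAC architectures are more involved:

import numpy as np
from scipy.signal import resample_poly

def dac_with_headroom(pcm, headroom_db):
    g = 10 ** (-headroom_db / 20)
    y = resample_poly(pcm * g, 8, 1)     # digital anti-imaging filter
    y = np.clip(y, -1.0, 1.0)            # fixed-point stage clips at FS
    return y / g                         # gain made up in the analogue domain

n = np.arange(1024)
pcm = 1.4 * np.sin(np.pi / 2 * n + np.pi / 4)   # samples ~0.99, true peak ~1.4

for hr in (0.0, 3.0):
    peak = np.max(np.abs(dac_with_headroom(pcm, hr)))
    print(f"{hr:.0f} dB headroom -> analogue peak {peak:.2f} "
          f"({'clipped' if peak < 1.1 else 'intact'})")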
Old 1st August 2017
  #1734
UnderTow
Gear Guru
Quote:
Originally Posted by Paul Frindle ➡️
But for instance if you try to pass an artificially generated square wave at significant level and freq in the digital domain it will most certainly cause a reconstruction over, because the loss of harmonics due to the band limit reconstruction filtering will create a much larger peak signal. Could that situation be called inter-sample peaking too?

Hmm... not sure, but I'm surely thinking LOL :-)
Yes, what you describe above is a perfect example of what most people refer to as inter-sample peaks: illegal signals that, when reconstructed, will clip your DAC, with the peaks falling between the sampling points (if you overlay the reconstructed signal on the originating sample points).

The peaks will always be between the sampling points because, although the peaks are not constrained to the samples (as my examples clearly show), the signal itself is most certainly constrained to the sampling points. The signal always passes through every sample. That is the very nature of sampling. What happens in between the sampling points is (re)created by filtering the digital stream of sample values, and that can, with specific signals, create peaks "in between samples".

Alistair
Old 1st August 2017
  #1735
Lives for gear
Quote:
Originally Posted by UnderTow ➡️
Great, we are on the same page.



I've always understood the term to be used in reference to signals that would clip any conventional DAC. Usually people assume the signal clips the analogue stages but I think in most cases it would clip the digital anti-imaging filter of the DAC(1). So yes, these are illegal signals but they are easily created so they received a name.

(1) I've read of at least one DAC design that shifts the entire signal down in level before hitting the digital reconstruction filter, to allow extra headroom to cater for these "inter-sample peaks" - well, most of the peaks at least - but this is not the case with all DACs.

It was an Audiophile DAC. Not something I would use for mastering purposes as this would hide potential problems when material that sounds fine on this DAC is played back on a DAC that simply clips at 0 dB FS.

Alistair
The converter system I designed in the 90s for the OXF-R3 console did indeed leave 3dB of headroom to allow for this. In other words, the DACs were driven 3dB below the normal operating max level so that they would not crack up. This of course cost us 3dB worse DAC SNR than would otherwise have been available. It is something no commercial DAC manufacturer would contemplate these days, simply because it would make their published specs look a few dB worse than anyone who didn't bother.

The reason we had to do this is that the Sony digital tape machine at that time had meters with an overload detection which came on only when 5 successive full scale samples were detected! Which meant that every signal the console handled in mix contained such errors :-(

We argued relentlessly with them for more than a year about it - but nothing changed. It was heartbreaking :-( When we then tried to set the max operating level on the console metering some distance down from full level, to discourage people from actually recording such errors - that too was overruled, because it was contrary to their notions of operating level 'standards'. It was awful :-( Now of course, as we all know, the problems are everywhere.. These early digital audio people have lots to be ashamed of..

But still - I find it hard to call the fact that the digital part of the system can apparently pass data which will produce higher than max reconstructed modulation an 'error'? In concept the 'error' is in the programme being sent.

The fact that some of it (when viewed on a data display) resides in attempted programme which by chance occupies the space between samples, kind of misses the point of the concept?

The reason the erroneous stuff escapes detection is the simple fact that the metering is not metering signal - it's only metering sample data values.

BTW it is possible to create and send programme to the DAC which will cause direct overs and clipping, which does not rely on inter-sample transfer. Not all such errors (mostly caused by programme limiters) are inter-sample related. When I get time I'll post an illustration :-)

Last edited by Paul Frindle; 1st August 2017 at 02:00 PM..
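A sketch of why that five-consecutive-full-scale-samples rule was hopeless: a hypothetical reimplementation of such a detector next to a crude oversampled true-peak check, fed a stream whose samples never touch full scale at all:

import numpy as np
from scipy.signal import resample_poly

def legacy_over(x, n_consecutive=5, fs_value=1.0):
    """Flags an over only after N successive full-scale samples."""
    run = 0
    for at_fs in np.abs(x) >= fs_value:
        run = run + 1 if at_fs else 0
        if run >= n_consecutive:
            return True
    return False

n = np.arange(1024)
x = 1.4 * np.sin(np.pi / 2 * n + np.pi / 4)    # samples ~0.99, never full scale
print("legacy meter flags over:", legacy_over(x))                      # False
print("reconstructed peak     :",
      round(float(np.max(np.abs(resample_poly(x, 8, 1)))), 2))         # ~1.4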
Old 1st August 2017
  #1736
jazbina
Gear Maniac
Hi Paul,

could you tell me, when clipping an ADC, which stage is actually clipping: the analogue stage (the power supply cannot deliver the voltage), the digital stage (the ADC chip itself), or both?
Sorry for the noob question!
Old 1st August 2017
  #1737
UnderTow
Gear Guru
Quote:
Originally Posted by Paul Frindle ➡️
The converter system I designed in the 90s for the OXF-R3 console did indeed leave 3dB of headroom to allow for this. In other words, the DACs were driven 3dB below the normal operating max level so that they would not crack up. This of course cost us 3dB worse DAC SNR than would otherwise have been available. It is something no commercial DAC manufacturer would contemplate these days, simply because it would make their published specs look a few dB worse than anyone who didn't bother.

The reason we had to do this is that the Sony digital tape machine at that time had meters with an overload detection which came on only when 5 successive full scale samples were detected! Which meant that every signal the console handled in mix contained such errors :-(

We argued relentlessly with them for more than a year about it - but nothing changed. It was heartbreaking :-( When we then tried to set the max operating level on the console metering some distance down from full level, to discourage people from actually recording such errors - that too was overruled, because it was contrary to their notions of operating level 'standards'. It was awful :-( Now of course, as we all know, the problems are everywhere.. These early digital audio people have lots to be ashamed of..
Aah, the irony! These days I would avoid such a DAC with extra headroom for mastering purposes, because so much gear down the line might not behave so gracefully. If, as you say, people had understood early on the difference between the encoded signal and the data points, we probably wouldn't be in this situation today.

That said, things are slowly catching on. Most modern mastering limiters have a True Peak limiting mode.

Quote:
But still - I find it hard to call the fact that the digital part of the system can apparently pass data which will produce higher than max reconstructed modulation an 'error'? In concept the 'error' is in the programme being sent.

The fact that some of it (when viewed on a data display) resides in attempted programme which by chance occupies the space between samples, kind of misses the point of the concept?

The reason the erroneous stuff escapes detection is the simple fact that the metering is not metering signal - it's only metering sample values.
Indeed, although things have improved with True Peak meters, even there the specs are not entirely clear on the amount of oversampling required, resulting in different meters showing different "True Peaks".

Quote:
BTW it is possible to create and send programme to the DAC which will cause direct overs and clipping, which does not rely on inter-sample transfer. Not all such errors (mostly caused by programme limiters) are inter-sample related. When I get time I'll post an illustration :-)
Great. Looking forward to it!


Alistair
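The oversampling point is easy to see in numbers. A rough sketch comparing peak readings at different factors on the same limited programme (ITU-R BS.1770 specifies 4x; none of this is any particular meter's algorithm):

import numpy as np
from scipy.signal import resample_poly

fs = 44100
t = np.arange(fs) / fs
x = np.clip(1.4 * np.sin(2 * np.pi * 997 * t), -1.0, 1.0)  # limited programme

for up in (1, 2, 4, 8, 16):
    peak = np.max(np.abs(resample_poly(x, up, 1))) if up > 1 else np.max(np.abs(x))
    print(f"{up:2d}x oversampling -> {20 * np.log10(peak):+.2f} dBTP")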
Old 1st August 2017
  #1738
Gear Guru
Quote:
Originally Posted by Paul Frindle ➡️
Ah finally I'm getting it! :-)
:-)
Old 1st August 2017
  #1739
Lives for gear
Quote:
Originally Posted by UnderTow ➡️
Yes, what you describe above is a perfect example of what most people refer to as inter-sample peaks: illegal signals that, when reconstructed, will clip your DAC, with the peaks falling between the sampling points (if you overlay the reconstructed signal on the originating sample points).

The peaks will always be between the sampling points because, although the peaks are not constrained to the samples (as my examples clearly show), the signal itself is most certainly constrained to the sampling points. The signal always passes through every sample. That is the very nature of sampling. What happens in between the sampling points is (re)created by filtering the digital stream of sample values, and that can, with specific signals, create peaks "in between samples".

Alistair
Sorry to go on about it again - as you can see from distant history this is something I get 'emotional' about. Converter design in such an environment is a royal pain in the neck - and has been since the 90's. And the problem does not exist only within DACs, it can also plague sample rate converters - and even some plugin processors, which may themselves produce what is essentially yet more illegal audio data from being fed illegal audio data up-line.

I'm not at home and can't whip up a graphic example yet. But again we need to be careful of what we call 'signal'?

For instance a square wave 'function' created in the digital domain (either by generation or by processing of other programme) is actually an error - since the wave form itself cannot exist in that form outside the system.

It could not have been gathered in that form from the 'real' environment by an audio ADC (because the ADC is band limited) - and it cannot be output from an audio system in that form either for the same reasons. The error doesn't have to exist only in the form of 'inter-samples' (even though they may be involved too if one looks at it) and it can cause clipping and nasties in an audio play out system, even if peak data sample values are significantly below 0dBFS.

The data system can indeed create and pass data-legal stuff which is not valid audio signal at all.

This is why, a few years back (in some very, very long threads on this forum), I was advocating target operating levels as much as -6 to -10dBFS below max metering levels. Many people took this advice and noticed welcome improvements in overall mixing results. The problem amounts to unintended abuse of the data system - which is not visible to the engineers doing the job.

Last edited by Paul Frindle; 1st August 2017 at 03:29 PM..
Old 1st August 2017
  #1740
Lives for gear
Quote:
Originally Posted by jazbina ➡️
Hi Paul,

could you tell me, when clipping an ADC, which stage is actually clipping: the analogue stage (the power supply cannot deliver the voltage), the digital stage (the ADC chip itself), or both?
Sorry for the noob question!
This is complex and very dependent on the ADC and the architecture it uses - and of course on what else you have in the ADC box (mic amps, artificial saturation and such like).

In devices in use these days - used directly without other stuff in the box, the actual converter is much more likely to clip (or do bad stuff) than the simpler analogue stages which precede it. At the limit the data coming out of it should clip significantly before the analogue stages - unless the design of the unit is very bad indeed.

However - some converter internal architectures may produce partially inferior results when driven close to their limits - which does not necessarily produce actual clipping.

But be careful with combined ADC and Mic amp boxes. There was a PT Mbox where some settings of the controls would cause the analogue sections to clip before the internal converter got a chance to convert it. This is simply bad design.