Quote:
Originally Posted by Deleted 844579b
you don't listen to the audio data, you listen to the interpolation of the data
Well, it's a semantically blurred area, but not in the way many people envision it.
It's an understandable misconception, because it's completely unintuitive when you look at the sampled waveform; but that also makes it a nice example of how you can draw the wrong conclusions by making uninformed extrapolations from visual data.
The interpolation is not necessary to recreate the original signal below the Nyquist frequency; that content is already there. So if Nyquist is above your audible threshold, and if we could rely on the rest of the system being linear, we could simply omit the interpolation.
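To make that concrete, here is a minimal numerical sketch, entirely my own illustration and not anything from the thread: a 1 kHz tone sampled at 8 kHz, rebuilt on a much finer time grid from nothing but the samples, using the textbook Whittaker-Shannon sinc sum (assuming Python with numpy; the tone and sample-rate figures are arbitrary example choices).

Code:
import numpy as np

# Purely illustrative numbers: a 1 kHz tone sampled at 8 kHz (Nyquist = 4 kHz),
# rebuilt on a much finer time grid from the samples alone.
fs = 8000.0
f0 = 1000.0
n = np.arange(64)                        # 64 samples of the tone
x = np.sin(2 * np.pi * f0 * n / fs)

# Whittaker-Shannon sum: x(t) = sum_n x[n] * sinc(fs*t - n)
t = np.linspace(16 / fs, 47 / fs, 1000)  # stay away from the edges of the finite block
x_rec = np.array([np.dot(x, np.sinc(fs * ti - n)) for ti in t])
x_true = np.sin(2 * np.pi * f0 * t)

# Small compared to full scale; the residual comes purely from truncating the
# infinite sum to a 64-sample block.
print(np.max(np.abs(x_rec - x_true)))

The point is only that the samples fully determine the below-Nyquist waveform; the sinc sum adds no information of its own.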
I don't like the term interpolation in this context because although it describes what you see visually, it doesn't describe what is going on in terms that actually matter for audio.
The "interpolation" is actually a REMOVAL of content ABOVE the Nyquist frequency when the stream of samples is converted to a series of voltages.
What is below, what we actually listen to, is untouched; it was there all the time.
When Nyquist is above the threshold of hearing, we don't remove the stuff above it because it would be directly audible, but because it might cause distortion further down the chain, which would create audible components.
That's why I much prefer the term "anti-imaging filter" to either "interpolating filter" or "reconstruction filter".
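Here is a second sketch of that "removal" view, again with made-up illustrative numbers and numpy rather than anything from the thread: write the 8 kHz sample stream out as an impulse train on a 48 kHz grid, look at its spectrum, and you see the 1 kHz baseband we listen to plus images around every multiple of 8 kHz; the low-pass only takes the images away.

Code:
import numpy as np

# Again purely illustrative: the same 1 kHz tone sampled at 8 kHz, written out as an
# impulse train on a 48 kHz grid - a crude stand-in for "the stream of samples as a
# series of voltages".
fs, up, f0 = 8000, 6, 1000.0
n = np.arange(256)
x = np.sin(2 * np.pi * f0 * n / fs)

y = np.zeros(len(x) * up)
y[::up] = x                              # the samples, with zeros in between

# Spectrum of that stream: the 1 kHz baseband plus images around every multiple of 8 kHz.
freqs = np.fft.rfftfreq(len(y), d=1.0 / (fs * up))
spec = 2 * np.abs(np.fft.rfft(y)) / len(x)
print(freqs[spec > 0.5])                 # 1000, 7000, 9000, 15000, 17000, 23000 Hz

# Anti-imaging: remove everything above the original Nyquist (4 kHz). Only the
# images go; the 1 kHz component we actually listen to is left exactly as it was.
Y = np.fft.rfft(y)
Y[freqs > fs / 2] = 0
clean = np.fft.irfft(Y) * up             # factor 'up' compensates for the stuffed zeros
ideal = np.sin(2 * np.pi * f0 * np.arange(len(y)) / (fs * up))
print(np.max(np.abs(clean - ideal)))     # down at floating-point noise level

Nothing below 4 kHz is altered by that step; the filter's only job is to take out the images.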
(all of which is of course a further deviation from the main focus of this thread, but I think that if more people understood this, there would be less misunderstanding of sampling in general and digital audio in particular)