Quote:
Originally Posted by
Bob Olhsson
A lot of assumptions about dither from the 1990s turn out to apply mostly to 1990s 18 and 20 bit converters and not more recent 24 bit ones. As I wrote above, with modern converters the theoretical flat TPDF turns out to sound consistently better provided the signal is clean.
I was thinking about this while riding my bike today, Bob. (Some people walk and chew gum at the same time. I pedal and do mathematics at the same time. It's a geek thing.) I have a mathematical theory for why TPDF might be more useful for high-resolution files today than it was in 1990.
Recall how optimal dither eliminates distortion: the error function for a uniform quantizer looks like a sawtooth wave. Its Fourier transform is an equally-spaced set of impulses whose spacing is inversely proportional to the converter's LSB size. Dither is a random variable whose amplitude is described by a probability density function. The Fourier transform (i.e. the characteristic function) of rectangular-PDF dither has a "sinc" or sin(f)/f shape with periodic nulls. When you work out the expected value of the quantization error, it turns out to be independent of the input signal, because the nulls in that sinc function cancel the impulses from the transformed error curve. (All but one, anyway.) For this cancellation to be perfect, the rectangular dither has to have exactly the right peak-to-peak amplitude (1 LSB), and the quantizer has to be perfectly uniform: all LSB's the same size.
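For anyone who wants to see this cancellation numerically rather than take the Fourier argument on faith, here's a quick sketch (my own illustration, not from any converter datasheet, assuming an ideal uniform quantizer): with 1-LSB peak-to-peak rectangular dither, the *mean* quantization error is zero at every input level, whereas without dither the error tracks the signal, which is distortion.

```python
# Sketch: mean quantization error vs. DC input level, with and without
# 1-LSB rectangular (RPDF) dither, for an IDEAL uniform quantizer.
import numpy as np

rng = np.random.default_rng(0)
lsb = 1.0                       # quantizer step size
trials = 200_000

def quantize(v):
    # Ideal mid-tread uniform quantizer
    return np.round(v / lsb) * lsb

for level in np.linspace(0.0, 1.0, 11):   # DC inputs spanning one LSB
    d = rng.uniform(-0.5 * lsb, 0.5 * lsb, trials)  # RPDF, 1 LSB p-p
    err_dithered = quantize(level + d) - level
    err_plain = quantize(level) - level
    # With RPDF dither the mean error is ~0 regardless of level;
    # without it, the error is a deterministic function of the signal.
    print(f"level {level:4.2f}: plain {err_plain:+.3f}  "
          f"dithered mean {err_dithered.mean():+.5f}")
```

The dithered column hovers near zero at every level, which is exactly the "expected value independent of the input" claim, and it only works because the dither width exactly matches the (here perfectly uniform) LSB.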
What we had in the 1990's were single-bit sigma-delta converters. A single-bit quantizer yields perfect linearity, so all LSB's are the same size. But single-bit converters had other problems like overloads and limit cycles, which led converter designers to switch over to the multi-bit architectures that predominate today. Multi-bit converters don't have perfect linearity. Not only can the LSB steps be different from one another, they even change size from instant to instant. Consequently, what were once zero-width impulses in the error characteristic get spread out and leak out of the dither nulls.
Now consider optimal triangular PDF dither. It's made by adding two optimal rectangular dither signals together, so their PDF's get convolved (yielding a triangle) and their characteristic functions get multiplied together, yielding a sinc-squared function. The nulls are still in the same places, but they are wider than in the rectangular case. This means that triangular dither will do a better job at cancelling signal-correlated errors from non-ideal quantizers.
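The "wider nulls" point checks out numerically. A small sketch (again my own, assuming the usual normalized-sinc convention where the CF of 1-LSB uniform dither is sinc(f·LSB)): build TPDF as the sum of two independent RPDF signals and measure its empirical characteristic function near the first null, where it should follow sinc² rather than sinc.

```python
# Sketch: TPDF dither = sum of two independent 1-LSB RPDF signals.
# Its characteristic function is sinc^2, so the nulls at multiples of
# 1/LSB are second-order (flatter/wider) compared to plain RPDF.
import numpy as np

rng = np.random.default_rng(1)
lsb = 1.0
n = 1_000_000
d1 = rng.uniform(-0.5 * lsb, 0.5 * lsb, n)
d2 = rng.uniform(-0.5 * lsb, 0.5 * lsb, n)
tpdf = d1 + d2                   # triangular PDF, 2 LSB peak-to-peak

def char_fn(samples, u):
    # Empirical characteristic function E[exp(j*2*pi*u*d)]
    return np.mean(np.exp(2j * np.pi * u * samples))

for u in (0.9, 1.0, 1.1):        # around the first null at u = 1/LSB
    rect = abs(np.sinc(u * lsb))         # |CF| of 1-LSB RPDF
    tri = np.sinc(u * lsb) ** 2          # CF of TPDF (product of sincs)
    emp = abs(char_fn(tpdf, u))
    print(f"u={u}: RPDF {rect:.4f}  TPDF theory {tri:.4f}  measured {emp:.4f}")
```

Near the null, the TPDF curve sits roughly an order of magnitude below the RPDF one, which is the extra margin that absorbs quantizer steps that aren't exactly where they should be.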
It used to be that we liked triangular dither because its sinc-squared characteristic function decorrelated the second moment (power) of the quantization error from the signal. (That is, it prevented noise modulation.) But in a 24-bit system, the quantization noise is way too low to hear, so noise modulation shouldn't matter any more. Distortion still does, though. I think what matters today is that triangular dither is better than rectangular dither at coping with the non-uniform quantization steps that come with multi-bit delta-sigma converters.
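To make the noise-modulation point concrete, here's one more sketch (mine, for an ideal uniform quantizer): the *power* of the total error (quantizer error plus dither) wanders with signal level under RPDF dither, but with TPDF it pins to the textbook constant of LSB²/4.

```python
# Sketch: noise modulation. Total error power vs. DC level for RPDF
# and TPDF dither through an ideal uniform quantizer. RPDF power varies
# with the signal; TPDF power stays constant at lsb**2 / 4.
import numpy as np

rng = np.random.default_rng(2)
lsb, trials = 1.0, 500_000

def total_error(level, dither):
    # total error = quantized output minus the undithered input
    return np.round((level + dither) / lsb) * lsb - level

for level in (0.0, 0.25, 0.5):
    r = rng.uniform(-0.5 * lsb, 0.5 * lsb, trials)                 # RPDF
    t = r + rng.uniform(-0.5 * lsb, 0.5 * lsb, trials)             # TPDF
    p_rpdf = np.mean(total_error(level, r) ** 2)
    p_tpdf = np.mean(total_error(level, t) ** 2)
    print(f"level {level:4.2f}: RPDF power {p_rpdf:.3f}  TPDF power {p_tpdf:.3f}")
```

At 24 bits that constant noise floor is inaudible either way, which is the argument above: the second-moment benefit no longer matters, but the wider nulls still do.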
Hopefully Alexey is still watching this thread, because I'd like to have his thoughts on this.
David L. Rick
P.S. to Steve Berson: I'm fortunate to have one client who can distinguish between different dithers and noise shapers (better than I can), but I recognize that he's highly unusual.