Quote:
Originally Posted by elambo
Help me understand this better. Let's say Lexicon or Eventide wants to move the algorithms from their hardware into a plugin and their goal is to make them sound as close to identical as possible. I don't think it's just a matter of taking the original algo and copy/pasting it over to the AAX/AU/VST/etc. form, right? At least not in most cases. And I know this procedure varies depending upon the architecture of the hardware vs. the architecture of the software, but in *most* cases, aren't you required to emulate the behavior of the hardware in a different way within the confines of the software? I'm using general terms so that the specifics don't cloud the question, which, I know, will be tough to answer simply, and also ignoring the effects of any D/A converters or even the sound of digital outs.
I'm trying to understand how someone like Weiss can claim a 1:1 port when other companies seem to spend a lot of time trying to figure out how to recreate their products in software form.
Modern computers tend to run floating point DSP processes, either 32-bit (single precision) or 64-bit (double precision). There are some quantization and distortion issues you can run into with single precision floating point and certain recursive filters. But, for the most part, if your algorithm is theoretically stable, it will be stable and clean in floating point.
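To make that concrete, here's a tiny C sketch (my own toy example, not anyone's product code) of one of those single precision issues: a one-pole smoother with a deliberately tiny coefficient stalls in float, because the per-sample update eventually rounds to zero, while the double version keeps converging.

#include <stdio.h>

int main(void) {
    const float  af = 1.0e-7f;   /* deliberately tiny smoothing coefficient */
    const double ad = 1.0e-7;
    float  yf = 0.0f;
    double yd = 0.0;
    for (long n = 0; n < 20000000; n++) {
        yf += af * (1.0f - yf);  /* one-pole smoother, step input x = 1 */
        yd += ad * (1.0  - yd);
    }
    /* yd heads toward 1 - exp(-2), about 0.865; yf stalls around 0.70,
       because the float increment rounds to zero once 1 - yf gets small */
    printf("float:  %f\n", yf);
    printf("double: %f\n", yd);
    return 0;
}

This is an extreme coefficient to make the stall obvious in a few seconds of runtime, but the same mechanism adds quantization error to any deep recursive filter in single precision.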
Older digital effects, for the most part, use fixed point processing, with much less precision than 32-bit floating point. The H3000 discussed above originally used 16-bit DSPs, 16-bit converters, and wrote the delay memory to 16-bit integer buffers. This sort of processing will introduce a TON of quantization noise and error, as well as clipping (in the BEST case) or wraparound distortion (if saturation math isn't enabled).
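Here's roughly what those two failure modes look like in C (an illustrative sketch; the function names are mine, not from any DSP's instruction set):

#include <stdint.h>
#include <stdio.h>

/* Saturating add: overflow clips at the rails, like an analog stage. */
static int16_t add_sat(int16_t a, int16_t b) {
    int32_t s = (int32_t)a + (int32_t)b;
    if (s >  32767) s =  32767;
    if (s < -32768) s = -32768;
    return (int16_t)s;
}

/* Wrapping add: overflow flips sign, the harsh "wraparound" buzz.
   (Behavior shown for typical two's complement targets.) */
static int16_t add_wrap(int16_t a, int16_t b) {
    return (int16_t)((uint16_t)a + (uint16_t)b);
}

int main(void) {
    int16_t a = 30000, b = 10000;  /* sum exceeds the 16-bit range */
    printf("saturated: %d\n", add_sat(a, b));   /* prints  32767 */
    printf("wrapped:   %d\n", add_wrap(a, b));  /* prints -25536 */
    return 0;
}

A clipped peak at least resembles the original waveform; a wrapped one jumps from full positive to full negative, which is why unsaturated overflow sounds so nasty.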
In order to get anything remotely decent out of older fixed point hardware, all sorts of clever tricks had to be used. The order of operations was critical, as some sorts of filters and feedback operations will clip if executed in one way versus another (see: direct form I filters versus direct form II versus transposed direct form II). In order to avoid overloads or wraparounds, the gain often had to be reduced before an operation, which will result in less precision for fixed point signals.
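A toy example of that headroom trade-off (my code, nothing from an actual box): halving both operands before an add guarantees the sum fits in 16 bits, but each shift throws away a bit of precision.

#include <stdint.h>
#include <stdio.h>

/* Halve before adding: can't overflow, but each >>1 discards an LSB.
   (Kept to positive samples here; real code needs care with rounding
   and negative values.) */
static int16_t add_half(int16_t a, int16_t b) {
    return (int16_t)((a >> 1) + (b >> 1));
}

int main(void) {
    int16_t a = 30001, b = 10001;
    /* Half of the exact sum 40002 would be 20001, but the two
       discarded LSBs give 20000: quantization error, every sample. */
    printf("%d\n", add_half(a, b));
    return 0;
}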
Analog processing was often used in conjunction with the digital processing. It was very common to have pre-emphasis filters on the input (to boost the high frequencies and/or cut the low frequencies) and de-emphasis on the output (the complementary filter of the pre-emphasis) in order to avoid distortion while reducing noise. Other weirder analog tricks were sometimes used, such as instantaneously tracking the input signal level and setting a "gain bit," in order to use a cheaper 12-bit converter and increase the perceived signal-to-noise ratio. This can be viewed as floating point math of sorts.
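A generic first-order pre-/de-emphasis pair looks something like this in C. The structure and the 0.95 coefficient are assumptions for illustration, not the circuit from any specific unit; the point is that the two filters are exact complements, so the signal comes back intact while hiss added between them gets cut.

#include <stdio.h>

#define A 0.95f  /* assumed emphasis coefficient */

/* Pre-emphasis H(z) = 1 - A*z^-1: boosts highs before the noisy path. */
static float pre_emph(float x, float *x1) {
    float y = x - A * (*x1);
    *x1 = x;
    return y;
}

/* De-emphasis 1/H(z) = 1/(1 - A*z^-1): exactly undoes the boost. */
static float de_emph(float x, float *y1) {
    float y = x + A * (*y1);
    *y1 = y;
    return y;
}

int main(void) {
    float xs[5] = { 1.0f, 0.5f, -0.25f, 0.0f, 0.75f };
    float x1 = 0.0f, y1 = 0.0f;
    for (int n = 0; n < 5; n++) {
        float out = de_emph(pre_emph(xs[n], &x1), &y1);
        printf("%f -> %f\n", xs[n], out);  /* round trip recovers the input */
    }
    return 0;
}

Any quantization noise injected between the two stages gets its high end rolled off by the de-emphasis, which is exactly where that noise is most audible.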
Older digital boxes had almost NO signal processing horsepower compared to today. The idea that older digital boxes were more powerful than modern computers is just straight up wrong. The algorithms used in those boxes therefore had to use the less expensive computational options. The H3000 used linear interpolation in the modulated delay algorithms that worked at a fixed sampling rate, which caused a noticeable high frequency cut when the modulation depth was turned up from zero. The swept filters in the H3000 were most likely a variant of the Chamberlin state variable filters, which are computationally efficient and sound great, but will blow up if the cutoff is set above 1/6th the sampling rate. Which is probably why the filters in the H3000 max out at 7 kHz or 8 kHz (I forget which). In some older Lexicon boxes, the modulation was generated by an 8-bit Z80 processor, which resulted in a lot of noise, as the modulation waveforms had 8-bit stair steps in them.
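For the curious, here's the generic linearly interpolated delay read being described (a sketch of the standard technique, not Eventide's code). At a fractional position of 0.5 the read collapses to a two-point average, whose frequency response has a null at Nyquist; that's the high frequency loss you hear when modulation sweeps the read pointer between sample positions.

#include <stdio.h>

#define DLEN 1024

static float dline[DLEN];

/* Read the delay line at a fractional position (in samples). */
static float read_lerp(float pos) {
    int   i    = (int)pos;
    float frac = pos - (float)i;
    float a = dline[i % DLEN];
    float b = dline[(i + 1) % DLEN];
    return a + frac * (b - a);   /* linear interpolation */
}

int main(void) {
    /* Fill the line with a Nyquist-rate tone: +1, -1, +1, ... */
    for (int n = 0; n < DLEN; n++)
        dline[n] = (n & 1) ? -1.0f : 1.0f;

    printf("frac 0.0: %f\n", read_lerp(100.0f)); /* full level        */
    printf("frac 0.5: %f\n", read_lerp(100.5f)); /* cancels to zero   */
    return 0;
}

Higher-order interpolation (allpass, Lagrange, windowed sinc) reduces that rolloff, but it costs multiplies that an 80s-era DSP simply didn't have to spare.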
So, the algorithm designer for older digital hardware was always trying to balance computational efficiency with sonic clarity. There were no "perfect" solutions, only compromises. The end results were as clean as they could be, given the circumstances, but they were far from pristine.
It turns out that a lot of people LIKE sounds that aren't pristine. The older digital boxes had some dirt, noise, reduced high end, stuff like that. What some people perceive as flaws, other people view as character.
So, long story short, a port of a basic algorithm topology from an older digital box to a modern plugin needs to take into account the importance of the artifacts of older fixed point processing. A floating point port might avoid most or all of those artifacts. Either the plugin needs to be created in fixed point (which is a real pain, as modern CPUs don't have the saturation features of older/custom DSPs used in audio signal processing hardware), or those artifacts need to be emulated, or the people porting the algorithms can decide "you know what? I always HATED that noise/clipping, and always wanted things to sound THIS way."
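One common way to approach that emulation is to run the algorithm in floating point but push key signals (delay line writes, say) through a quantize-and-saturate stage. A minimal sketch of the general idea, my illustration rather than any vendor's actual emulation:

#include <math.h>
#include <stdio.h>

/* Impose 16-bit fixed point artifacts on a float signal:
   snap to the 16-bit grid, saturate at the rails, return to float. */
static float emulate_16bit(float x) {
    float q = roundf(x * 32768.0f);   /* quantize to the 16-bit grid */
    if (q >  32767.0f) q =  32767.0f; /* saturate like the hardware  */
    if (q < -32768.0f) q = -32768.0f;
    return q / 32768.0f;
}

int main(void) {
    printf("%f\n", emulate_16bit(0.1234567f)); /* quantization error  */
    printf("%f\n", emulate_16bit(1.5f));       /* clips to ~0.999969  */
    return 0;
}

The hard part isn't this stage itself, it's knowing *where* in the signal flow the original hardware quantized, saturated, and reduced gain, which is why these ports take real reverse-engineering effort.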
The PCM96 is a totally different story. It was programmed on a very high speed 32-bit floating point processor (TigerSHARC). Any differences between the hardware and the PCM96 plugins were either a difference in features/parameters, an error, or a bug fix when porting to the plugin. There is literally no magic mojo in a TigerSHARC - it's just a high speed floating point number cruncher.
P.S. If you want to learn more about what programming fixed point DSP was like in the 1980s, there's a great paper from the folks that programmed the H3000:
https://www.aes.org/e-lib/browse.cfm?elib=5449