<A HREF="http://phonophunk.phreakin.com/p4denormal.html" target="_new">Clickable 1</A>
<A HREF="http://www.digitalfishphones.com/main.php?item=2&subItem=6" target="_new">Clickable 2</A>
Does anyone know anything about denormalization ("denormalisation") and the CPU spikes it can cause when using audio applications on a computer, apparently observed especially on Pentium 4 systems?
Does anyone know how the resulting problems can be avoided?
I'd never actually heard of it before, but after reading just the first link it all sounded pretty straightforward. So I went to Google to dig up more on it from a programmer's perspective and came up with <A HREF="http://www.cs.berkeley.edu/~aiken/cs264/lectures/kahan.ps" target="_new">this</A>. It's a .ps file, though, so you'll need a reader for it.
Anywho, it sounds like a simple case of bad programming. The x87 FPU calculates at its highest precision no matter what precision is actually needed, which is nice if you ask me; that much I knew. The problem is that the algorithms being used must be causing a lot of underflows even at that precision. Each masked underflow leaves behind a denormal result, and every subsequent operation that touches a denormal drops the FPU into a painfully slow assist path, and WHAM, 100% CPU usage as the algorithm just keeps plugging away, generating denormal after denormal.
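If you want to see the effect for yourself, here's a minimal sketch (mine, not from either link) of a one-pole lowpass feedback loop, the kind of recursion a filter or reverb runs on every sample. Feed it near-silence and its state settles into the denormal range; on a Pentium 4 the second timing should come out dramatically slower (modern CPUs still show a penalty, just a smaller one):
<pre>
#include &lt;chrono&gt;
#include &lt;cstdio&gt;

// One iteration of y = input + coeff*y per sample: the classic one-pole
// lowpass recursion. Fed a subnormal input, the state converges to a
// subnormal value, so every multiply-add touches a denormal.
static double time_loop(float input) {
    volatile float y = 0.0f;             // volatile defeats constant folding
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < 10000000; ++i)
        y = input + y * 0.9f;
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}

int main() {
    std::printf("normal input:   %f s\n", time_loop(0.1f));    // state settles near 1.0
    std::printf("denormal input: %f s\n", time_loop(1e-39f));  // state ~1e-38, subnormal
}
</pre>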
The questions are, why are these algorithms:
1) <i>NOT</i> checking for an underflow themselves?
2) Forcing the FPU to calculate at that level of accuracy in the first place when they don't even <i>need</i> it?
The answer to both of those questions is simple: BAD PROGRAMMING. And this is coming from me, an x86 computer programmer.
Personally, I haven't seen it happen in any of my code. But if it did, I'd certainly notice it while testing, and I'd certainly have to ask myself WTF I was doing forcing that poor FPU to chew on near-zero accuracy I don't even need. And I'd have to smack myself for not including my own underflow handling to keep the algorithm from happily looping once it hit that wall of impossible accuracy. So in other words, it's simply bad software, through and through.
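For the curious, the kind of self-policing I mean is a one-liner. Here's a hypothetical one-pole lowpass (the function name and the 1e-15 threshold are my own illustrative choices, not anything from the linked articles) that flushes its feedback state to a true zero the moment it drops below anything the algorithm could possibly care about:
<pre>
#include &lt;cmath&gt;

// Hypothetical one-pole lowpass that checks for its own underflow: once
// the feedback state falls below any magnitude the algorithm actually
// cares about, flush it to a true zero, which the FPU handles at full
// speed. The 1e-15f threshold (~ -300 dBFS) is an illustrative choice:
// far below anything audible, far above FLT_MIN (~1.18e-38).
void lowpass(float* buf, int n, float& state, float coeff) {
    for (int i = 0; i < n; ++i) {
        state = buf[i] + coeff * (state - buf[i]);
        if (std::fabs(state) < 1e-15f)   // the underflow check from question 1
            state = 0.0f;                // flush before denormals ever appear
        buf[i] = state;
    }
}
</pre>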
And I certainly can't say that it's undocumented when I can find documents pertaining to this situation dating back to as early as 1984. So there's really no excuse other than laziness as to why any code would be causing this problem in the first place.
That aside, it seems to be an inherent situation (<i>you can't really call it a flaw, because the hardware handles it exactly as specified; it's the software that is lacking in this case</i>) in <i>ALL</i> x86 processors with an x87 FPU, and likewise in 680x0 systems with their 6888x FPUs. That's a heck of a lot of CPUs.
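One hardware-side escape hatch worth mentioning: it doesn't help x87 code at all, but audio code compiled to use the P4's SSE/SSE2 math can simply tell the CPU to flush denormals to zero. A sketch (note the MXCSR flags are per-thread, so a real app would call this on its audio thread):
<pre>
#include &lt;xmmintrin.h&gt;   // _MM_SET_FLUSH_ZERO_MODE (SSE)
#include &lt;pmmintrin.h&gt;   // _MM_SET_DENORMALS_ZERO_MODE (DAZ bit)

// FTZ flushes denormal *results* to zero; DAZ treats denormal *inputs*
// as zero. Both live in the per-thread MXCSR register and only affect
// SSE math, never the x87 FPU. DAZ arrived a little later than FTZ, so
// check CPU support before relying on it.
void disable_denormals_on_this_thread() {
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
    _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);
}
</pre>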
In the case of the audio software, the problem sounds incredibly easy to avoid. If a piece of audio software is prone to this denormalization problem:
1) Report the bug to the software vendor.
2) Simply don't allow that audio software to render effects on a perfectly (or nearly perfectly) silent feed. Add just a smidgen of low-level background noise to give the algorithms something calculably above zero to crunch that is still below the human range of hearing. (See the sketch below.)
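Here's what that workaround looks like in code, assuming a plain float buffer and an offset value of my own choosing (1e-20, roughly -400 dBFS, so utterly inaudible yet enormous compared to where denormals start):
<pre>
// Mix a tiny constant offset into the buffer before any feedback-based
// effect runs, so the signal never actually decays into the denormal
// range near zero. The 1e-20f value and the function name are
// illustrative, not from the linked articles.
void add_anti_denormal_offset(float* buf, int n) {
    const float kOffset = 1e-20f;   // ~ -400 dBFS, vs FLT_MIN at ~1.18e-38
    for (int i = 0; i < n; ++i)
        buf[i] += kOffset;
}
</pre>
One design note: a constant offset is pure DC, so any highpass or DC-blocking stage downstream will strip it right back out. Implementations that have to survive such stages use a dab of tiny noise or an alternating-sign offset instead of a constant.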