Out of curiosity, what is the meaning of "denormal flushing" in the context of floating point arithmetic? Do you know a good reference on this topic?
I have a few books, but I can't find anything about it in them.
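For concreteness, here is what I suspect it refers to: a mode in which subnormal (denormal) results are replaced by zero instead of being represented with gradual underflow. A minimal sketch of the behaviour I have in mind, assuming x86-64 with SSE code generation and the _MM_SET_FLUSH_ZERO_MODE intrinsic (picked purely for illustration, not taken from any reference):

```c
#include <stdio.h>
#include <float.h>
#include <xmmintrin.h>   /* _MM_SET_FLUSH_ZERO_MODE, _MM_FLUSH_ZERO_ON */

int main(void)
{
    /* volatile so the multiply happens at run time, not at compile time */
    volatile float tiny = FLT_MIN;       /* smallest normal float, ~1.18e-38 */

    /* Default IEEE behaviour: the product underflows gradually
       to a subnormal value, ~5.88e-39.                          */
    float a = tiny * 0.5f;
    printf("gradual underflow: %g\n", a);

    /* With flush-to-zero enabled, the same product comes out as exactly 0. */
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
    float b = tiny * 0.5f;
    printf("flush-to-zero:     %g\n", b);

    return 0;
}
```

Is that roughly the right picture, or does the term mean something else?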
I need to understand how this concept is usually handled when implementing a math library (i.e., function approximations), and how it influences the ulp error of the implemented function compared with the "real" (exact) function.
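To make the ulp part of the question concrete, this is roughly how I measure the error at a single point, just as a sketch: the double-precision exp stands in for the "real" function, and the helper ulpf is my own, for illustration only.

```c
#include <stdio.h>
#include <math.h>

/* ulp of a float x: distance from |x| to the next representable float above it */
static double ulpf(float x)
{
    float ax = fabsf(x);
    return (double)(nextafterf(ax, INFINITY) - ax);
}

int main(void)
{
    float x = 0.1f;
    float approx = expf(x);              /* the implementation under test      */
    double reference = exp((double)x);   /* stand-in for the "real" function   */

    /* error expressed in units of ulp of the computed result */
    double err_ulp = fabs((double)approx - reference) / ulpf(approx);
    printf("error = %.3f ulp\n", err_ulp);
    return 0;
}
```

What I don't see is how denormal flushing enters this picture, for example when the computed result falls in (or near) the subnormal range.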