I've always heard that you should use a money class due to floating point inaccuracy. However, it is astonishingly hard to find any example where floating point inaccuracy actually leads to a wrong result.
My programming language of choice is Python. To test whether a result differs from the expected value, I use:
expected = '1.23'
result = '{:0.2f}'.format(result)  # result holds the float produced by the computation under test
assert expected == result
So while the following is a good example of floating point inaccuracy, it is NOT an example of the need for a money class backed by a rational-number type (like Python's fractions) for most use cases:
a = 10.0
b = 1.2
assert a + b - a == b  # fails: a + b - a evaluates to 1.1999999999999993, yet '{:0.2f}' still formats it as '1.20'
The best thing I could come up with is:
result = (a + b - a) * 10**14 - b * 10**14
expected = 0  # exact arithmetic would give 0; the float computation does not
but multiplying something money-related by 10**14 seems really made-up.
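For contrast, here is a minimal sketch of what the rational-number approach buys: with Python's fractions module the same contrived computation comes out exactly zero, no scaling trick required.

from fractions import Fraction

a = Fraction(10)
b = Fraction(12, 10)                            # 1.2 as an exact rational
result = (a + b - a) * 10**14 - b * 10**14
assert result == 0                              # exact arithmetic: the error really is zero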
Now I wonder if there are any realistic examples showing the need for a money class or if everything is "captured" by simply rounding to two digits.
I would not say it is astonishingly hard. A famous real-world example, albeit not involving money, was that the Patriot missile system code accumulated a floating point rounding error of 0.000000095 seconds per second; if the system was not rebooted every five days, it would be off by a fraction of a second. Since the missiles it intercepts can move several thousand meters per second, it would miss.
At least 28 people died as a result of this floating point error.
We can demonstrate the Patriot error without putting more lives at risk. Here's a little C# program. Suppose we are adding up dimes; how many do we have to add before we get a significant error?
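The experiment amounts to adding 0.1 to a floating-point total over and over and comparing it with the exact amount. A sketch of the same thing in Python (whose float is the same IEEE-754 double as C#'s double, so the behavior is identical) might look like this:

total = 0.0
for dimes in range(1, 10_000_000_001):          # ten billion dimes; pure Python will take a while
    total += 0.1                                # one dime, as a binary double
    if dimes % 100_000_000 == 0:                # report every hundred million dimes
        exact = dimes // 10                     # the exact dollar amount: dimes * $0.10
        print(f"{dimes:,} dimes: float says ${total:,.2f}, error ${total - exact:+.4f}")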
Let it run as long as you like. Here is what happened on my machine.
After only a hundred million computations -- so, $10M -- we are already off by two cents. By ten billion computations we are off by $163.12. Sure, that's a tiny error per transaction, and maybe $163.12 is not a lot of money in the grand scheme of things compared to a billion dollars, but if we cannot correctly compute 100 million times 0.1 then we have no reason to have confidence in any computation that comes out of this system.
The error could be guaranteed to be zero; why would you not want the error to be zero?
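As a sketch of one way to get that guaranteed zero: do the bookkeeping in decimal (or in integer cents) instead of binary floating point. Python's decimal module, for instance, adds 0.10 exactly, so the running total never drifts.

from decimal import Decimal

total = Decimal("0.00")
dime = Decimal("0.10")
for _ in range(1_000_000):                      # a million dimes is enough to make the point
    total += dime                               # decimal addition of 0.10 is exact
assert total == Decimal("100000.00")            # exactly $100,000.00 -- the error is zero

Counting integer cents in a plain int gives the same guarantee.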
Exercise: You imply that you know where to put the roundings in to ensure that this error is eliminated. So: where do they go?
Some additional thoughts, inspired by your comment:
If what you want is real-world examples of money errors where units of measure were not tracked by the type system, there are many, many such examples.
I used to work at a company which writes software that detects software defects. One of the most magical defect checkers is the "cut and paste error" detector, and it found a defect in real-world code where a block of money arithmetic appeared once, and then later on in the code an almost-identical pasted copy appeared that had not been correctly updated for its new context.
Oops.
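To give a flavor of that class of bug, here is a purely made-up sketch (not the actual customer code, and in Python rather than whatever the trading house used): a currency-conversion helper gets pasted for a second currency and one occurrence is never updated.

# Hypothetical illustration only; the names and rates are invented.
USD_PER_EUR = 1.08
USD_PER_CHF = 1.12

def eur_position_in_usd(eur_amount):
    return eur_amount * USD_PER_EUR

def chf_position_in_usd(chf_amount):
    # Pasted from the EUR version above; the conversion factor was never changed.
    return chf_amount * USD_PER_EUR             # BUG: should be USD_PER_CHF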
The major international trading house that had that defect called us up and said that the beer was on them next time we were in Switzerland.
Examples like these show why financial houses are so interested in languages like F# that make it super easy to track properties in the type system.
I did a series on my blog a few years ago about using the ML type system to find bugs when implementing virtual machines, where an integer could mean the address of any of a dozen different data structures, or an offset into one of those structures. The technique finds bugs fast, and the runtime overhead is minimal. Units-of-measure types are awesome, even for simple problems like making sure you don't mix up dollars with yen.
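Python has nothing like F#'s units of measure built in, but a rough sketch of the idea -- separate wrapper types that refuse to mix currencies -- looks like this (Dollars and Yen are illustrative names, not a real library):

from dataclasses import dataclass

@dataclass(frozen=True)
class Dollars:
    cents: int                                  # amounts kept in integer minor units

    def __add__(self, other):
        if not isinstance(other, Dollars):
            return NotImplemented               # refuse to mix currencies
        return Dollars(self.cents + other.cents)

@dataclass(frozen=True)
class Yen:
    yen: int

    def __add__(self, other):
        if not isinstance(other, Yen):
            return NotImplemented
        return Yen(self.yen + other.yen)

print(Dollars(500) + Dollars(250))              # Dollars(cents=750)
# Dollars(500) + Yen(250) raises TypeError at runtime; F# or ML would reject it at compile time.

The runtime check is much cruder than a real type system, but it already stops dollars and yen from being silently added together.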