Here is an example:
>>> "%.2f" % 0.355
'0.35'
>>> "%.2f" % (float('0.00355') *100)
'0.36'
Why do they give different results?
Because, as with all floating point "inaccuracy" questions, not every real number can be represented in a limited number of bits.
Even if we were to go nuts and have 65536-bit floating point formats, the number of numbers between 0 and 1 is still, ... well, infinite :-)
What's almost certainly happening is that the first one is slightly below 0.355 (say, 0.3549999999999) while the second is slightly above (say, 0.3550000001).
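You can confirm this yourself by looking at the exact values the two expressions produce. A minimal sketch (uses `decimal.Decimal(float)`, which prints the full stored binary value; available in Python 2.7+ and 3):

```python
from decimal import Decimal

# Decimal(float) shows the exact value stored in the double,
# with no rounding applied by the display machinery.
print(Decimal(0.355))                    # slightly below 0.355
print(Decimal(float('0.00355') * 100))   # slightly above 0.355
```

The first prints a value just under 0.355 and the second a value just over it, which is exactly why `"%.2f"` rounds them in opposite directions.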
See here for some further reading on the subject.
A good tool to play with to see how floating point numbers work is Harald Schmidt's excellent on-line converter. This was so handy, I actually implemented my own C# one as well, capable of handling both IEEE754 single and double precision.
Arithmetic with floating point numbers is often inaccurate.
http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
This isn't a formatting bug. This is just floating point arithmetic. Look at the values underlying your format commands:
The two expressions create different values.
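For example, `repr()` exposes more digits than the default display, and a direct comparison shows the two doubles are not equal (a small illustration, not part of the original answer):

```python
# repr() reveals the stored value more precisely than str()
print(repr(0.355))
print(repr(float('0.00355') * 100))

# The two expressions yield genuinely different doubles:
print(0.355 == float('0.00355') * 100)  # False
```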
I don't know if it's available in 2.4, but you can use the decimal module to make this work:
The decimal module constructs values from strings, so numbers like 0.355 are represented exactly rather than as binary approximations.
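A minimal sketch of what that looks like (assumes the module's default round-half-even mode; both expressions now give the same two-decimal result):

```python
from decimal import Decimal

# Built from strings, so both values are exact decimals
a = Decimal('0.355')
b = Decimal('0.00355') * 100   # exactly Decimal('0.35500')

# quantize() rounds to a fixed number of decimal places
print(a.quantize(Decimal('0.01')))  # 0.36
print(b.quantize(Decimal('0.01')))  # 0.36
```

Because both values are exact, the two computations can no longer round in different directions.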