I have an application in which, it seems to me, fractional types are essential. And I think the use case is quite common.
Yet although they are implemented in several languages, I have not seen any guidelines that state 'under the following circumstances, you must use fractional types to get good results'.
On the contrary, fractional types seem to be deprecated, for example here.
My use case is commercial accounting, in which proportions of an expense are allocated to different budget heads. Suppose 1/3 of the expense is allocated to customer A and 2/3 to customer B, and suppose that, to the limit of floating-point accuracy in the given application, A repeatedly gets just under 1/3. Then over, say, 1,000,000,000,000 operations A will lose out to B by an appreciable amount.
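For concreteness, here is a minimal sketch in Python contrasting a floating-point accumulator with the exact rationals in the standard fractions module (the expense amount, the loop count and the variable names are my own illustrative choices, not taken from any real accounting system):

    from fractions import Fraction

    # Hypothetical illustration: an expense of 100.00, of which 1/3 goes to
    # customer A, repeated many times; accumulated once in binary floating
    # point and once in exact rational arithmetic.
    a_float = 0.0
    a_exact = Fraction(0)

    for _ in range(1_000_000):           # far fewer than 10**12, but enough to see drift
        a_float += 100.0 / 3             # no binary float is exactly one third of 100
        a_exact += Fraction(100, 3)      # exact rational arithmetic

    print(a_exact)                       # exactly 100000000/3
    print(a_float)                       # close to 33333333.33..., but carries rounding error
    print(float(a_exact) - a_float)      # the accumulated drift, usually small but nonzero

Run over enough transactions, the drift in the floating-point accumulator is exactly the sort of shortfall to customer A described above, while the Fraction accumulator stays exact.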
I give this only as one instance of a general mathematical problem, namely that it is not rational to try to approximate one rational fraction by another unless the two are identical after reduction to lowest terms. At the end of the day, a floating-point number is a rational number of a quite particular type, namely an integer multiple of the smallest number that the implementation can represent.
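That last point is easy to see in Python, where Fraction can recover the exact rational value hiding behind a float (this is just an illustration of the general claim, not part of my application):

    from fractions import Fraction

    # Every finite binary float is an exact rational whose denominator is a power of two.
    # Constructing a Fraction from a float recovers that exact value, with no extra rounding.
    print(Fraction(0.1))            # 3602879701896397/36028797018963968, not 1/10
    print(Fraction(1, 3) == 1/3)    # False: no binary float is exactly one third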
But it is wrong to say that if

    x = 0.00000000000000001

and

    y = 0.00000000000000001 * 41/43

then the expression

    x == y

is true 'more or less'. It is simply not true, and no amount of floating-point accuracy can make it true.
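A quick check, using Python's fractions module so that the comparison is exact (the names simply mirror the x and y above):

    from fractions import Fraction

    x = Fraction(1, 10**17)          # exactly 0.00000000000000001
    y = x * Fraction(41, 43)

    print(x == y)                    # False: the two values really are different
    print(y / x)                     # 41/43, not 1

    # With binary floats the two values are also unequal; they are merely both tiny.
    xf = 0.00000000000000001
    yf = xf * 41 / 43
    print(xf == yf)                  # False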
Maybe this question is too general for this forum, in which case I apologise and it can be deleted. But there is a principle involved which I expected to see explored in a discussion of fractional types, but (so far) have not. I'm sure there is such a discussion, so I am just hoping to be directed to it, with apologies for not finding it myself.
That said, I can see that the responses to the above question led to its closure on the grounds that it was opinion-based.
However, in the case above it is not an opinion that x != y; it is a fact.