When I write a function that takes an argument representing a length or count, I always use uint, since it makes no sense for the value to be negative.
But I very often see the opposite in the .NET Framework. For example: http://msdn.microsoft.com/en-us/library/xsa4321w.aspx
Initializes a new instance of the String class to the value indicated by a specified Unicode character repeated a specified number of times.
public String(
char c,
int count
)
It also states that an ArgumentOutOfRangeException is thrown when "count is less than zero." Why not make the count argument a uint, then?
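For illustration, this is roughly what a signed count forces the implementation to do; the Repeat helper below is just a made-up stand-in for that constructor:

    // Hypothetical helper mirroring the String(char, int) pattern:
    // the signed parameter means a runtime check is needed instead of
    // the compile-time guarantee a uint parameter would give.
    public static string Repeat(char c, int count)
    {
        if (count < 0)
            throw new ArgumentOutOfRangeException(nameof(count), "count is less than zero.");
        return new string(c, count);
    }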
I had a similar question ages ago, although more for my own interfaces, not the core ones.
The short answer (given by none other than Eric Lippert, and repeated in answers to other questions) is: the Common Language Specification rules state that you should not use uint in public interfaces, because not all languages designed to work with .NET actually have such a type. Java derivatives like J# come to mind; Java only has signed integral types.