Why do functions take an int as an argument when its value logically cannot be < 0?


When I am writing functions that take an argument that determines a certain length, I always use uint, as it makes no sense for the value to be a negative number.

But I see the opposite (very often) in the .NET Framework. For example: http://msdn.microsoft.com/en-us/library/xsa4321w.aspx

Initializes a new instance of the String class to the value indicated by a specified Unicode character repeated a specified number of times.

public String(
    char c,
    int count
)

It also states that an ArgumentOutOfRangeException is thrown when "count is less than zero."
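
In other words (my own minimal repro, not from the documentation), a negative count is only caught at run time rather than being ruled out by the type system:

    using System;

    Console.WriteLine(new String('x', 3));   // "xxx"
    Console.WriteLine(new String('x', -1));  // throws ArgumentOutOfRangeException at run time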

Why not make the count argument a uint then?


There are 2 answers

Joey (Best Answer)

I had a similar question ages ago, although more for my own interfaces, not the core ones.

The short answer (given by none other than Eric Lippert, and repeated in answers to other questions) is: the Common Language Specification rules state that you should not use uint in public interfaces, because not all languages designed to work with .NET actually have such a type. Java derivatives like J# come to mind, since Java only has signed integral types.
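
For illustration, a small sketch (type and method names are mine, purely hypothetical): marking an assembly CLS-compliant makes the C# compiler warn about unsigned types in public signatures (warning CS3001, if memory serves), which is roughly the constraint the BCL designers were working under.

    using System;

    // With this attribute, the compiler flags non-CLS-compliant public surface area.
    [assembly: CLSCompliant(true)]

    public class LengthDemo
    {
        // Fine: int is CLS-compliant.
        public void SetLength(int length) { }

        // Compiler warning: "Argument type 'uint' is not CLS-compliant".
        public void SetLengthUnsigned(uint length) { }
    }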

supercat

Unsigned types in C have two primary usages:

  1. In some cases, it is useful to have values that "wrap" when computations exceed the range of a type. For example, when computing many kinds of hash or checksum it is much more convenient to simply perform a series of additions or multiplications while ignoring overflow than to use an over-sized variable and/or conditional logic to prevent overflow (see the sketch after this list).

  2. It is sometimes useful to have a two-byte variable be able to hold a value up to 65,535 rather than just 32,767; occasionally it is useful to have four-byte variables go up to 4,294,967,295, but that's much less common.
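
A minimal C# sketch of the first usage (the names and the hashing scheme are mine, purely illustrative): a toy checksum whose correctness relies on uint arithmetic wrapping around instead of trapping on overflow.

    using System;
    using System.Text;

    class WrapAroundChecksum
    {
        // Toy multiplicative checksum: it depends on uint arithmetic
        // silently wrapping rather than throwing on overflow.
        static uint Checksum(byte[] data)
        {
            uint sum = 0;
            foreach (byte b in data)
            {
                // unchecked is already the default here, but spelling it out
                // documents that wrap-around is intended.
                unchecked { sum = sum * 31u + b; }
            }
            return sum;
        }

        static void Main()
        {
            Console.WriteLine(Checksum(Encoding.UTF8.GetBytes("hello")));
        }
    }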

In C, an attempt to store -1 into an unsigned variable was required to store (without any sort of error or squawk) the value which, when added to +1, would yield zero. This was very useful for the first usage scenario; it wasn't desirable for the second, but since C never had any sort of overflow trapping on signed integers either, it could be considered an extension of the principle that bad things will happen when computations on numbers (rather than on algebraic rings) go out of range.
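
C# mirrors that behaviour when the conversion happens in an unchecked context; a two-line illustration (my own, not part of the answer):

    using System;

    uint wrapped = unchecked((uint)-1);  // stores 4,294,967,295 without any squawk
    Console.WriteLine(wrapped + 1u);     // prints 0: the value that, added to +1, yields zero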

C#, unlike C, supports numeric overflow detection, and could thus apply it to the second style of use while still allowing the first. Unfortunately, the determination is made based upon checked or unchecked numerical contexts rather than upon the types of variables, parameters, or values. Thus, if a method were to accept a parameter of type UInt32 and code in an unchecked context tried to pass in a value of -1, the method would see that as a value of 4,294,967,295. There is no way to mark a parameter to say "this should be a value between 0 and 4,294,967,295; squawk if it's anything else, regardless of checked/unchecked status." It's thus safer to have code accept an Int32 if an upper limit of 2,147,483,647 is sufficient, or an Int64 if not.
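
To make that concrete, here is a sketch (the method names are hypothetical) contrasting a uint parameter, which silently receives the wrapped value, with the int-plus-validation pattern recommended above:

    using System;

    class ParameterChoiceDemo
    {
        // A uint parameter cannot tell that the caller really meant -1.
        static void TakesUInt(uint count)
        {
            Console.WriteLine(count);            // prints 4294967295
        }

        // Accepting Int32 lets the method reject out-of-range values itself.
        static void TakesInt(int count)
        {
            if (count < 0)
                throw new ArgumentOutOfRangeException(nameof(count));
            Console.WriteLine(count);
        }

        static void Main()
        {
            int negative = -1;

            // In an unchecked context the negative value wraps and the call succeeds.
            TakesUInt(unchecked((uint)negative));

            // With an int parameter the bad value survives the call and can be caught.
            try { TakesInt(negative); }
            catch (ArgumentOutOfRangeException e) { Console.WriteLine(e.Message); }
        }
    }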