Time as a Signed Integer


I've been reading up on the Y2038 problem, and I understand that time_t will eventually wrap around to the lowest representable negative number when the counter tries to "increment" into the sign bit.

According to that Wikipedia page, changing time_t to an unsigned integer cannot be done because it would break programs that handle early dates. (Which makes sense.)

However, I don't understand why it wasn't made an unsigned integer in the first place. Why not just store January 1, 1970 as zero rather than some ridiculous negative number?


There are 3 answers

Femaref (best answer)

Because letting it start at signed −2,147,483,648 is equivalent to letting it start at unsigned 0. It doesn't change the range of values a 32-bit integer can hold - a 32-bit integer can hold 4,294,967,296 different states. The problem isn't the starting point, the problem is the maximum value the integer can hold. The only way to mitigate the problem is to upgrade to 64-bit integers.
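To make that concrete, here's a minimal C sketch (nothing time_t-specific) showing that both representations cover exactly the same number of states, just over shifted ranges:

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void) {
        /* Signed: 2^32 states, from -2,147,483,648 to 2,147,483,647. */
        printf("signed:   %" PRId32 " .. %" PRId32 "\n", INT32_MIN, INT32_MAX);
        /* Unsigned: the same 2^32 states, shifted to 0 .. 4,294,967,295. */
        printf("unsigned: %" PRIu32 " .. %" PRIu32 "\n", (uint32_t)0, UINT32_MAX);
        /* Either way, the count of representable seconds is identical. */
        printf("states:   %" PRIu64 "\n", (uint64_t)UINT32_MAX + 1);
        return 0;
    }

Shifting the starting point moves where the overflow lands on the calendar, but it can't make the window any wider.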

Also (as I just realized): 1970 was set as 0 so that we could reach back in time as well (reaching back to 1901 seemed to be sufficient at the time). If they had gone unsigned and still wanted that reach into the past, the epoch would have had to begin at 1901, and the counter would still run out around 2038 - the same problem all over again.
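You can check the exact endpoints of the signed 32-bit window with a short sketch (assuming a POSIX-ish platform whose gmtime accepts negative values):

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    static void print_utc(time_t t) {
        char buf[32];
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", gmtime(&t));
        printf("%lld -> %s UTC\n", (long long)t, buf);
    }

    int main(void) {
        print_utc((time_t)INT32_MIN); /* 1901-12-13 20:45:52 */
        print_utc((time_t)0);         /* 1970-01-01 00:00:00, the epoch */
        print_utc((time_t)INT32_MAX); /* 2038-01-19 03:14:07 */
        return 0;
    }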

Marc B

Because not all systems have to deal purely with dates in the "future". Even in the 70s, when Unix was created and the time system defined, they had to deal with dates back in the 60s or earlier. So a signed integer made sense.
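For example, a date before the epoch simply comes out as a negative count of seconds (a quick sketch; gmtime's handling of negative values is technically implementation-defined, but works on common platforms):

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        time_t day_before_epoch = -86400; /* one day's worth of seconds before 1970 */
        char buf[16];
        strftime(buf, sizeof buf, "%Y-%m-%d", gmtime(&day_before_epoch));
        printf("%s\n", buf); /* prints 1969-12-31 */
        return 0;
    }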

Once everyone switches to 64-bit time_t, we won't have to worry about a Y2038-type problem for another 2 billion or so 136-year periods.
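A quick back-of-the-envelope check of that figure:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        const double seconds_per_year = 365.2425 * 24 * 3600;
        double years = (double)INT64_MAX / seconds_per_year;  /* ~2.92e11 years */
        printf("signed 64-bit span forward from 1970: ~%.3g years\n", years);
        printf("that's ~%.1f billion 136-year periods\n", years / 136 / 1e9);
        return 0;
    }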

templatetypedef

There's a more fundamental problem here than using unsigned values. If we used unsigned values, we'd only get one more bit of timekeeping. That would certainly help - it would double the amount of time we could represent - but then we'd have the same problem much later in the future. More generally, any fixed-precision integer value runs into a problem along these lines eventually.

When UNIX was being developed in the 1970s, having a roughly 68-year clock sounded fine, though clearly a 136-year clock would have been better. If they had used more bits, we'd have a much longer clock - say 1,000 years - but after that much time elapsed we'd be right back in the same bind and would probably look back and say "why didn't they use more bits?"
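Each extra bit only doubles the span, which is easy to see with a little sketch:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        const double seconds_per_year = 365.2425 * 24 * 3600;
        for (int bits = 32; bits <= 64; bits += 8) {
            /* A signed counter of 'bits' bits reaches 2^(bits-1) seconds each way. */
            double span_years = pow(2.0, bits - 1) / seconds_per_year;
            printf("%2d-bit signed counter: ~%.3g years each way\n", bits, span_years);
        }
        return 0;
    }

No matter where you stop, a fixed width always has an edge; the only question is how far away it is.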