I am trying to find the difference between two dates in seconds.
I have used two struct tm variables, tm1 and tm2, to store the dates:
tm1 = 1900-01-01 00:00:00
tm2 = 2000-01-01 20:38:40
tm1.tm_year = 0
tm1.tm_mon = 0
tm1.tm_mday = 1
tm2.tm_year = 100
tm2.tm_mon = 0
tm2.tm_mday = 1
tm2.tm_hour = 20
tm2.tm_min = 38
tm2.tm_sec = 40
Now to the code:
time_t t1, t2;
t1 = mktime(&tm1);
t2 = mktime(&tm2);
/* difftime() returns double; an int overflows for a span this large */
double diff = difftime(t2, t1);
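For reference, here is a self-contained version of what I am running (a sketch; it zero-initializes the remaining tm fields, which the snippets above leave out, and sets tm_isdst explicitly; note that mktime() interprets the fields as local time):

#include <stdio.h>
#include <string.h>
#include <time.h>

int main(void) {
    struct tm tm1, tm2;

    /* Zero all fields first; mktime() reads every member,
       so uninitialized fields give unpredictable results. */
    memset(&tm1, 0, sizeof tm1);
    memset(&tm2, 0, sizeof tm2);

    tm1.tm_year = 0;      /* years since 1900 -> 1900 */
    tm1.tm_mon  = 0;      /* January */
    tm1.tm_mday = 1;
    tm1.tm_isdst = -1;    /* let mktime() decide on DST */

    tm2.tm_year = 100;    /* 2000 */
    tm2.tm_mon  = 0;
    tm2.tm_mday = 1;
    tm2.tm_hour = 20;
    tm2.tm_min  = 38;
    tm2.tm_sec  = 40;
    tm2.tm_isdst = -1;

    time_t t1 = mktime(&tm1);
    time_t t2 = mktime(&tm2);

    /* difftime() returns double; the span here (~3.1e9 s)
       does not fit in a 32-bit int. */
    printf("%.0f\n", difftime(t2, t1));
    return 0;
}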
The actual value of t1 in this case is 2209010000 and that of t2 is 946739320.
Now I executed a similar calculation using PostgreSQL:
select EXTRACT(EPOCH FROM ('2000-01-01 20:38:40' - CAST('1900-01-01' AS TIMESTAMP)));
But the result from the difftime() function is 3155749320, while the result from the query is 3155747920, a difference of 1400 seconds.
Why is there a difference between the results of the two methods? What could have caused it? Do I need to handle time zones?
Unix timestamps count seconds from 1970-01-01 00:00:00 UTC, so the first thing that stands out in your numbers is that the timestamp for 1/1/1900 should be negative. If we compute the exact Unix timestamp for the date you used, it should be
-2208988800
(for 1/1/1900 00:00:00 UTC), which is larger in absolute value than the most negative integer representable in 32-bit two's complement, which is -2147483648. If you want to do exact calculations, you have to switch to 64-bit integers, and you'll get the correct result:
2000-01-01 20:38:40 -> 946759120
1900-01-01 00:00:00 -> -2208988800
946759120 - (-2208988800) = 3155747920 s.
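In C, a minimal sketch of the same subtraction done in 64-bit arithmetic (the two constants are the UTC timestamps listed above):

#include <inttypes.h>
#include <stdio.h>

int main(void) {
    /* UTC timestamps from the lines above; int64_t cannot
       overflow on a span of ~3.1e9 seconds. */
    int64_t t1 = -2208988800LL; /* 1900-01-01 00:00:00 UTC */
    int64_t t2 =   946759120LL; /* 2000-01-01 20:38:40 UTC */
    printf("%" PRId64 " s\n", t2 - t1); /* prints 3155747920 s */
    return 0;
}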
The timestamps you obtained are the 32 least significant bits of the overflowed result. The earliest date representable with a signed 32-bit timestamp in two's complement is
-2147483648 -> 13/Dec/1901, 20:45:52 UTC.
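A quick way to verify that limit (a sketch; it assumes a platform with a 64-bit time_t and a gmtime() that accepts negative values, as glibc's does):

#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void) {
    time_t min32 = (time_t)INT32_MIN;   /* -2147483648 */
    struct tm *utc = gmtime(&min32);    /* convert in UTC */
    char buf[32];
    strftime(buf, sizeof buf, "%d/%b/%Y %H:%M:%S", utc);
    printf("%s\n", buf);                /* 13/Dec/1901 20:45:52 */
    return 0;
}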