I need to get the full nanosecond-precision modified timestamp for each file in a Python 2 program that walks the filesystem tree. I want to do this in Python itself, because spawning a new subprocess for every file will be slow.
From the C library on Linux, you can get nanosecond-precision timestamps from the st_mtim.tv_nsec field of a struct stat. For example:
#include <sys/stat.h>
#include <stdio.h>

int main(void) {
    struct stat stat_result;
    if (!lstat("/", &stat_result)) {
        /* %09ld zero-pads the nanoseconds to 9 digits */
        printf("mtime = %ld.%09ld\n",
               (long)stat_result.st_mtim.tv_sec,
               stat_result.st_mtim.tv_nsec);
    } else {
        printf("error\n");
        return 1;
    }
    return 0;
}
prints mtime = 1380667414.213703287 (/ is on an ext4 filesystem, which supports nanosecond timestamps, and the clock is UTC). Similarly, date --rfc-3339=ns --reference=/ prints 2013-10-01 22:43:34.213703287+00:00.
Python (2.7.3)'s os.path.getmtime(filename) and os.lstat(filename).st_mtime give the mtime as a float. However, the result is wrong:
In [1]: import os
In [2]: os.path.getmtime('/') % 1
Out[2]: 0.21370339393615723
In [3]: os.lstat('/').st_mtime % 1
Out[3]: 0.21370339393615723
Only the first 6 digits are correct, because a C double carries only about 15-16 significant decimal digits and the integer seconds already consume 10 of them.
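The imprecision is inherent to storing the timestamp in a double, not to the stat call itself; a quick check with the timestamp from above shows the same digits falling out of a plain Python float:

```python
# A Python float is a C double: a 53-bit mantissa, i.e. ~15-16 significant
# decimal digits.  The integer seconds use 10 of them, so only about 6
# fractional digits survive the round-trip through the float.
t = 1380667414.213703287   # full nanosecond timestamp from the C program
print("%.9f" % t)          # digits past the microseconds are rounding noise
print(repr(t % 1))         # the same mangled fractional part os.lstat() gives
```
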
Alternatively, you could use the cffi library, which works with Python 2, with the following code (tested on Linux):
This is identical in behavior to the C program in the question, and its output has the same nine-digit (nanosecond) precision.