We're writing a scientific tool with MySQL support. The problem is that we need microsecond precision for our datetime fields, which MySQL does not currently support. I see at least two workarounds:
- Using a DECIMAL column, with the integer part holding seconds since some point in time (I doubt that the UNIX epoch will do, since we have to store measurements taken in the 1950s and 1960s).
- Using two integer columns, one for seconds and one for microseconds.
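To make the first option concrete, here is a minimal sketch of encoding and decoding such a DECIMAL value in Python. The epoch of 1900-01-01 is an assumption chosen only so that 1950s measurements still encode as positive numbers; `Decimal` keeps the microsecond fraction exact, which a `float` would not.

```python
from datetime import datetime, timedelta, timezone
from decimal import Decimal

# Hypothetical application epoch (not from the original post); chosen
# early enough that measurements from the 1950s stay positive.
EPOCH = datetime(1900, 1, 1, tzinfo=timezone.utc)

def to_decimal_seconds(dt):
    """Encode dt as exact DECIMAL seconds (with microsecond fraction) since EPOCH."""
    delta = dt - EPOCH
    whole = delta.days * 86400 + delta.seconds
    return Decimal(whole) + Decimal(delta.microseconds) / Decimal(1_000_000)

def from_decimal_seconds(value):
    """Decode a DECIMAL value back into a timezone-aware datetime."""
    whole = int(value)
    micros = int((value - whole) * 1_000_000)
    return EPOCH + timedelta(seconds=whole, microseconds=micros)
```

A round trip through these helpers preserves the full microsecond precision, so the column can serve as the single source of truth for the timestamp.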
The most common query selects rows falling within a time interval (i.e. dt_record > time1 AND dt_record < time2).
Which of these methods (or perhaps another one) is likely to provide better performance for large tables (millions of rows)?
If the most common queries are time-based, I would recommend going with a single column that stores the time, as in your first option.
You could pick your own epoch for the application, and work from there.
This should simplify the queries that need to be written when searching for time intervals.
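To illustrate why the single column simplifies interval queries, here is a hedged sketch: the table name `measurements`, the column `dt_record DECIMAL(17,6)`, and the 1900-01-01 epoch are all illustrative assumptions, not from the original post.

```python
from datetime import datetime, timezone
from decimal import Decimal

# Assumed custom epoch and schema: measurements(dt_record DECIMAL(17,6)).
EPOCH = datetime(1900, 1, 1, tzinfo=timezone.utc)

def to_decimal_seconds(dt):
    """Seconds-plus-microseconds since EPOCH as an exact Decimal."""
    delta = dt - EPOCH
    return Decimal(delta.days * 86400 + delta.seconds) + \
        Decimal(delta.microseconds) / Decimal(1_000_000)

time1 = datetime(1962, 3, 1, tzinfo=timezone.utc)
time2 = datetime(1962, 3, 2, tzinfo=timezone.utc)

# Single-column variant: one range predicate that an index can serve directly.
query = "SELECT * FROM measurements WHERE dt_record > %s AND dt_record < %s"
params = (to_decimal_seconds(time1), to_decimal_seconds(time2))

# For contrast, the two-column (sec, usec) variant needs a compound predicate:
#   (sec > s1 OR (sec = s1 AND usec > u1))
#   AND (sec < s2 OR (sec = s2 AND usec < u2))
# which is both easier to get wrong and harder for the optimizer to index.
```

The two bound parameters differ by exactly 86400 seconds here, and the whole comparison stays a simple one-column range scan.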
Also have a look at the MySQL manual, section 10.3.1, "The DATETIME, DATE, and TIMESTAMP Types".