Normalizing Time Series Data


I'm creating a database to store a large number of events, each with an associated time that is precise to the second. As an example, something like this:

Event
-----
Timestamp
ActionType (FK)
Source (FK)
Target (FK)

Actions, Sources, and Targets are all in 6NF. I'd like to keep the Event table normalized, but all of the approaches I could think of have problems. To be clear about my expectations for the data, the vast majority (99.9%) of events will be unique with just the above four fields (so I can use the whole row as a PK), but the few exceptions can't be ignored.
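
For concreteness, the whole-row-as-key version might be declared roughly like this (a sketch only; the Postgres syntax and the integer FK types are my assumptions, and the dimension tables are omitted):

CREATE TABLE Event (
    Timestamp   timestamp(0) NOT NULL,  -- second precision, matching the sources
    ActionType  integer      NOT NULL,  -- FK to ActionType (dimension table not shown)
    Source      integer      NOT NULL,  -- FK to Source (not shown)
    Target      integer      NOT NULL,  -- FK to Target (not shown)
    PRIMARY KEY (Timestamp, ActionType, Source, Target)  -- the whole row is the key
);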

  1. Use a Surrogate Key: This is possible with a four-byte integer, but it seems like inflating the table for no reason. Additionally, I'm concerned that over a long period of use I could exhaust the key space.

  2. Add a Count Column to Event: Since I expect small counts, I could use a smaller datatype and this would have a smaller effect on database size, but it would require upserts or pooling the data outside the database before insertion. Either of those would add complexity and influence my choice of database software (I was thinking of going with Postgres, which does upserts, but not gladly); a sketch of such an upsert appears after this list.

  3. Break Events into Small Groups: For example, all events in the same second could be part of a Bundle, which would have a surrogate key for the group and another for each event inside it. This adds another layer of abstraction, and more size, to the database. It would be a good idea if otherwise-duplicate events became common, but otherwise it seems like overkill.
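
With a Postgres version that has native upsert (INSERT ... ON CONFLICT, 9.5 or later), option 2's insert-or-increment might be sketched like this (the EventCounted name, the smallint type, and the literal values are placeholders):

-- Count-column variant: the natural key plus a tally of duplicates.
CREATE TABLE EventCounted (
    Timestamp   timestamp(0) NOT NULL,
    ActionType  integer      NOT NULL,
    Source      integer      NOT NULL,
    Target      integer      NOT NULL,
    Count       smallint     NOT NULL DEFAULT 1,
    PRIMARY KEY (Timestamp, ActionType, Source, Target)
);

-- Insert a new event, or bump the count if the same row already exists.
INSERT INTO EventCounted (Timestamp, ActionType, Source, Target)
VALUES ('2013-02-02 08:00:01', 1, 2, 3)
ON CONFLICT (Timestamp, ActionType, Source, Target)
DO UPDATE SET Count = EventCounted.Count + 1;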

While all of these are doable, they feel like a poor fit for my data. I was thinking of just using a typical snowflake schema and not enforcing a uniqueness constraint on the main Event table, but after reading PerformanceDBA answers like this one I thought maybe there was a better way.

So, what is the right way to keep time-series data with a small number of repeated events normalized?

Edit: Clarification: the sources for the data are logs, mostly flat files but some in various databases. One goal of this database is to unify them. None of the sources have time resolution more precise than one second. The data will be used for questions like "How many different Sources executed Action on Target over Interval?", where Interval will never be less than an hour.
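
For what it's worth, that kind of question could be answered with a simple aggregate against the sketch above (the literal ids and the time range are placeholders):

-- How many different Sources executed ActionType 1 on Target 3 during a given hour?
SELECT COUNT(DISTINCT Source) AS distinct_sources
FROM   Event
WHERE  ActionType = 1
  AND  Target     = 3
  AND  Timestamp >= '2013-02-02 08:00:00'
  AND  Timestamp <  '2013-02-02 09:00:00';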

Accepted answer, by Mike Sherrill 'Cat Recall':

The simplest answers seem to be

  • store the timestamp with greater precision, or
  • store the timestamp to the second and retry (with a slightly later timestamp) if the INSERT fails because of a duplicate key; a sketch of this retry appears below.
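
A minimal sketch of the retry option, assuming Postgres and the natural-key Event table from the question (the function name and the one-second step are assumptions):

-- Retry with a slightly later timestamp whenever the natural key collides.
CREATE FUNCTION insert_event(p_ts timestamp, p_action integer,
                             p_source integer, p_target integer)
RETURNS void
LANGUAGE plpgsql
AS $$
DECLARE
    ts timestamp := p_ts;
BEGIN
    LOOP
        BEGIN
            INSERT INTO Event (Timestamp, ActionType, Source, Target)
            VALUES (ts, p_action, p_source, p_target);
            RETURN;                          -- success
        EXCEPTION WHEN unique_violation THEN
            ts := ts + interval '1 second';  -- nudge forward and try again
        END;
    END LOOP;
END;
$$;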

None of the three ideas you mention have anything to do with normalization. These are decisions about what to store; at the conceptual level, you normalize after you decide what to store. What the row means (so, what each column means) is significant; these meanings make up the table's predicate. The predicate lets you derive new true facts from older true facts.

Using an integer as a surrogate key, you're unlikely to exhaust the key space. But you still have to declare the natural key, so a surrogate in this case doesn't do anything useful for you.
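
In other words, a surrogate would sit alongside the natural key rather than replace it. A sketch of what that declaration might look like (the EventId name and serial type are assumptions):

-- Alternative declaration of the same table: surrogate plus natural key.
CREATE TABLE Event (
    EventId     serial PRIMARY KEY,   -- the surrogate
    Timestamp   timestamp(0) NOT NULL,
    ActionType  integer      NOT NULL,
    Source      integer      NOT NULL,
    Target      integer      NOT NULL,
    UNIQUE (Timestamp, ActionType, Source, Target)  -- the natural key, still declared
);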

Adding a "count" colummn makes sense if it makes sense to count things; otherwise it doesn't. Look at these two examples.

Timestamp            ActionType  Source  Target
--
2013-02-02 08:00:01  Wibble      SysA    SysB
2013-02-02 08:00:02  Wibble      SysA    SysB

Timestamp            ActionType  Source  Target  Count
--
2013-02-02 08:00:01  Wibble      SysA    SysB    2

What's the difference in meaning here? The meaning of "Timestamp" is particularly important. Normalization is based on semantics; what you need to do depends on what the data means, not on what the columns are named.
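
One concrete consequence of that difference in meaning is that the same question has to be asked differently of the two designs (a sketch, reusing the placeholder tables from the question):

-- Events stored one row each: count rows.
SELECT COUNT(*)
FROM   Event
WHERE  ActionType = 1 AND Source = 2 AND Target = 3;

-- Events rolled up with a Count column: sum the tallies.
SELECT COALESCE(SUM(Count), 0)
FROM   EventCounted
WHERE  ActionType = 1 AND Source = 2 AND Target = 3;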

Breaking events into small groups might make sense (like adding a "count" column might make sense) if groups of events have meaning in your system.
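
For what it's worth, that grouped variant might be declared roughly like this (a sketch; the Bundle and BundledEvent names and the per-bundle ordinal are one possible reading of the question's option 3):

-- One Bundle per second (or per whatever grouping carries meaning).
CREATE TABLE Bundle (
    BundleId   serial PRIMARY KEY,
    Timestamp  timestamp(0) NOT NULL UNIQUE
);

-- The individual events inside each bundle.
CREATE TABLE BundledEvent (
    BundleId    integer NOT NULL REFERENCES Bundle,
    EventSeq    integer NOT NULL,   -- ordinal within the bundle
    ActionType  integer NOT NULL,
    Source      integer NOT NULL,
    Target      integer NOT NULL,
    PRIMARY KEY (BundleId, EventSeq)
);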