I want to run a query on a 70 GB table in PostgreSQL + TimescaleDB and copy the result into another table. The problem is that Postgres appears to build the entire result set in memory before writing it to disk, which predictably ends in an OUT OF MEMORY error.

The table I want to copy contains time series data with one-second precision. I want to create copies of this table at lower precisions, to make queries over large time ranges - where such precision is not necessary - faster. This works for precisions of 1 week, 1 day and 1 hour; the problem only occurs with the 1 minute precision.

The query I am using to create the new table is:

CREATE TABLE downsampling_1m AS
SELECT time_bucket('1 minute', time) AS one_minute_bucket, name, avg(value) AS avg_value,
min(value) AS min_value, max(value) AS max_value, stddev(value) AS stddev_value
FROM original_table
GROUP BY name, one_minute_bucket
ORDER BY one_minute_bucket;

I would like Postgres not to fill up memory like an idiot and instead write the data to disk on the fly. I could write a script to split this query into several queries over shorter time ranges, but it would really make my life easier if there were a built-in solution to my problem.
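For reference, the batching workaround mentioned above could look something like this. This is only a sketch: the table and column names come from the query above, but the date bounds are placeholders to be filled in (e.g. by a script looping over months):

```sql
-- Create the target table once, empty, by reusing the query's structure.
CREATE TABLE downsampling_1m AS
SELECT time_bucket('1 minute', time) AS one_minute_bucket, name, avg(value) AS avg_value,
       min(value) AS min_value, max(value) AS max_value, stddev(value) AS stddev_value
FROM original_table
WHERE false
GROUP BY name, one_minute_bucket;

-- Then fill it one time slice at a time, so each query's working set stays small.
INSERT INTO downsampling_1m
SELECT time_bucket('1 minute', time) AS one_minute_bucket, name, avg(value) AS avg_value,
       min(value) AS min_value, max(value) AS max_value, stddev(value) AS stddev_value
FROM original_table
WHERE time >= '2019-01-01' AND time < '2019-02-01'  -- placeholder bounds, one batch per range
GROUP BY name, one_minute_bucket;
```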

1 Answer

thebuleon29

I figured it out, so I'm posting the answer in case someone needs it in the future. TimescaleDB works with what it calls hypertables, which are collections of tables, called chunks, each one covering a specific time range. To the user, this collection of tables appears as a single table, the hypertable. This way, when a user runs a query over a specific time range, only the chunks covering that range are queried. It is one of the tricks TimescaleDB uses to make it possible for Postgres to handle time series.
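You can inspect the chunk layout yourself. A quick sketch, assuming the table name from the question (`show_chunks` is a standard TimescaleDB function):

```sql
-- List the chunks backing the hypertable.
SELECT show_chunks('original_table');

-- Restrict the listing to chunks older than a given age.
SELECT show_chunks('original_table', older_than => INTERVAL '1 month');
```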

The thing is that when the user queries an entire hypertable, all the chunks are queried one by one. So, in the case of a copy, if a chunk is larger than main memory, it will cause the error.

The solution is to define the chunk size when creating the hypertable:

SELECT create_hypertable('tableName', 'timeColumn', chunk_time_interval => interval);

With chunks smaller than main memory, it works.
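Putting it together, a minimal sketch with the names from the question; the 1-day interval is an illustrative choice, not a prescription (pick whatever keeps a chunk comfortably below your RAM):

```sql
-- At creation time: choose a chunk interval that fits in memory.
SELECT create_hypertable('original_table', 'time',
                         chunk_time_interval => INTERVAL '1 day');

-- On an already-existing hypertable, the interval can be changed,
-- which applies to chunks created from then on:
SELECT set_chunk_time_interval('original_table', INTERVAL '1 day');
```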