I want to run a query on a 70 GB table in PostgreSQL + TimescaleDB and copy the result into another table. The problem is that Postgres appears to build the entire result in memory before writing it to disk, which causes an out-of-memory error.
The table I want to copy contains time series data at one-second precision. I want to create copies of this table at lower precisions, so that queries over large time ranges, where second-level precision is unnecessary, run faster. This works for precisions of 1 week, 1 day and 1 hour; the problem only occurs at 1 minute precision.
The query I am using to create the new table is:
CREATE TABLE downsampling_1m AS
SELECT
    time_bucket('1 minute', time) AS one_minute_bucket,
    name,
    avg(value) AS avg_value,
    min(value) AS min_value,
    max(value) AS max_value,
    stddev(value) AS stddev_value
FROM original_table
GROUP BY name, one_minute_bucket
ORDER BY one_minute_bucket;
I would like Postgres to write the data to disk as it goes rather than accumulating everything in memory. I could write a script that splits this query into several queries over shorter time ranges, but it would make my life much easier if there were a built-in solution to my problem.
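For reference, a minimal sketch of the splitting approach mentioned above, assuming the table's time range is known in advance (the dates, chunk size, and `build_insert` helper here are placeholders for illustration; in practice the bounds would come from `SELECT min(time), max(time) FROM original_table`):

```python
from datetime import datetime, timedelta

# Hypothetical overall time range of original_table.
start = datetime(2020, 1, 1)
end = datetime(2020, 1, 8)
step = timedelta(days=1)

def chunk_ranges(start, end, step):
    """Yield (lo, hi) half-open intervals covering [start, end)."""
    lo = start
    while lo < end:
        hi = min(lo + step, end)
        yield lo, hi
        lo = hi

def build_insert(lo, hi):
    # One INSERT ... SELECT per chunk keeps each aggregation small;
    # table and column names match the query in the question.
    return (
        "INSERT INTO downsampling_1m "
        "SELECT time_bucket('1 minute', time) AS one_minute_bucket, "
        "name, avg(value), min(value), max(value), stddev(value) "
        "FROM original_table "
        f"WHERE time >= '{lo.isoformat()}' AND time < '{hi.isoformat()}' "
        "GROUP BY name, one_minute_bucket"
    )

statements = [build_insert(lo, hi) for lo, hi in chunk_ranges(start, end, step)]
# Each statement would then be executed one at a time with a driver
# such as psycopg2, committing between chunks.
```

This assumes the target table has been created empty first (e.g. with `CREATE TABLE ... AS ... WITH NO DATA`), so that each chunk is appended with `INSERT ... SELECT`.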