Not able to take a backup of a TimescaleDB hypertable database using pg_dump (PostgreSQL)


Command used to take the backup:

C:\Program Files\PostgreSQL\12\bin>pg_dump -h localhost -U postgres -p 5432 -Fc -f "D:\Database Backup\temp_10.bak" GESEMS_Performace_Test

Error:

pg_dump: NOTICE: hypertable data are in the chunks, no data will be copied.

DETAIL: Data for hypertables are stored in the chunks of a hypertable so COPY TO of a hypertable will not copy any data.


Any suggestions on how to take a backup of TimescaleDB hypertables?


There are 2 answers

Mats Kindahl (best answer)

In TimescaleDB, the hypertable is an empty table and the data is stored in child tables called chunks. You can see the structure of the hypertable using the \d+ command in psql:

postgres=# \d+ devices
                                          Table "public.devices"
 Column |           Type           | Collation | Nullable | Default | Storage | Stats target | Description 
--------+--------------------------+-----------+----------+---------+---------+--------------+-------------
 time   | timestamp with time zone |           | not null |         | plain   |              | 
 device | integer                  |           | not null |         | plain   |              | 
 temp   | double precision         |           |          |         | plain   |              | 
Indexes:
    "devices_pkey" PRIMARY KEY, btree ("time", device)
    "devices_device_time_idx" btree (device, "time" DESC)
    "devices_time_idx" btree ("time" DESC)
Triggers:
    ts_insert_blocker BEFORE INSERT ON devices FOR EACH ROW EXECUTE FUNCTION _timescaledb_internal.insert_blocker()
Child tables: _timescaledb_internal._dist_hyper_1_10_chunk,
              _timescaledb_internal._dist_hyper_1_11_chunk,
              _timescaledb_internal._dist_hyper_1_12_chunk,
              _timescaledb_internal._dist_hyper_1_13_chunk,
              _timescaledb_internal._dist_hyper_1_14_chunk,
              _timescaledb_internal._dist_hyper_1_15_chunk,
              _timescaledb_internal._dist_hyper_1_1_chunk,
              _timescaledb_internal._dist_hyper_1_2_chunk,
              _timescaledb_internal._dist_hyper_1_3_chunk,
              _timescaledb_internal._dist_hyper_1_4_chunk,
              _timescaledb_internal._dist_hyper_1_5_chunk,
              _timescaledb_internal._dist_hyper_1_6_chunk,
              _timescaledb_internal._dist_hyper_1_7_chunk,
              _timescaledb_internal._dist_hyper_1_8_chunk,
              _timescaledb_internal._dist_hyper_1_9_chunk

When you dump a table using PostgreSQL's pg_dump, it will dump the contents of the parent table and the child tables separately. When you restore the dump, it will in turn fill both the hypertable (parent table) and the chunks (child tables).
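
For completeness, a full dump-and-restore round trip with a custom-format archive might look roughly like the sketch below. The target database name gesems_restore is just a placeholder, and the timescaledb_pre_restore()/timescaledb_post_restore() calls follow the general TimescaleDB backup/restore guidance; check the documentation for your TimescaleDB version before relying on this.

    # Dump the whole database; chunks are included automatically as child tables.
    pg_dump -h localhost -U postgres -p 5432 -Fc -f "D:\Database Backup\temp_10.bak" GESEMS_Performace_Test

    # Restore into a fresh database (the name is a placeholder).
    createdb -h localhost -U postgres -p 5432 gesems_restore
    psql -h localhost -U postgres -p 5432 -d gesems_restore -c "CREATE EXTENSION timescaledb;"
    psql -h localhost -U postgres -p 5432 -d gesems_restore -c "SELECT timescaledb_pre_restore();"
    pg_restore -h localhost -U postgres -p 5432 -d gesems_restore "D:\Database Backup\temp_10.bak"
    psql -h localhost -U postgres -p 5432 -d gesems_restore -c "SELECT timescaledb_post_restore();"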

Since pg_dump uses the standard COPY command to extract the contents of tables when dumping, TimescaleDB prints a notice that you're trying to dump a hypertable, which is empty.

The reason for this is that if you do a direct dump of the hypertable alone using COPY, it will not dump any data at all, which is useful to know since it's easy to make that mistake otherwise. Since it is not possible to distinguish between running COPY TO on a single table directly and running it through pg_dump (which dumps all tables), the notice is also printed when you use pg_dump, but there it is harmless.
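
If you want to see where the notice comes from, you can reproduce it against the devices example above (assuming that database and table exist): a direct COPY of the hypertable prints the notice and returns no rows, while a COPY over a SELECT reads through the chunks as usual.

    # Direct COPY of the hypertable: prints the notice and copies nothing.
    psql -d postgres -c "COPY devices TO STDOUT"

    # COPY of a query over the hypertable: returns the rows stored in the chunks.
    psql -d postgres -c "COPY (SELECT * FROM devices) TO STDOUT"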

You can check the dump output to verify that the child tables were actually dumped.
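
With a custom-format archive (-Fc) you can inspect the table of contents without restoring anything; the chunk tables should appear as entries under the _timescaledb_internal schema. The path below is the one from the question, used purely for illustration.

    # List the archive's table of contents and filter for chunk entries
    # (use findstr instead of grep from a Windows command prompt).
    pg_restore -l "D:\Database Backup\temp_10.bak" | grep _timescaledb_internal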

Ilya Dorfman

In order to duplicate the schema, I used the following flow:

  1. pg_dump source schema into sql
  2. sed to replace source to target in the dump
  3. psql to target schema

The above three steps can be performed as a one-liner: pg_dump | sed | psql
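
As a rough sketch, in a Unix-style shell the pipeline could look like this; source_schema, target_schema and the database name mydb are placeholders, and the sed expression is a blunt text substitution, so make sure the schema name cannot collide with data values or other identifiers in the dump.

    # Dump one schema, rewrite the schema name, and load it into the target schema.
    pg_dump -h localhost -U postgres -n source_schema mydb \
        | sed 's/source_schema/target_schema/g' \
        | psql -h localhost -U postgres mydb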

These steps don't create hypertables, since the underlying chunk tables actually live in TimescaleDB's internal schema.

To create the hypertables and load the data, generate the following steps from the timescaledb_information.dimensions view:

  1. drop trigger ts_insert_blocker on <target>.<hypertable_name>;
  2. SELECT ts_config.create_hypertable('<target>.<hypertable_name>', 'timestamp', chunk_time_interval => 3600000000);
  3. psql -c "\copy (select * from <source>.<hypertable_name>) to STDOUT "| psql -c "\copy <target>.<hypertable_name> from STDIN "
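
Put together, the generated steps for a single hypertable might look like the sketch below, where mydb, source_schema, target_schema, the my_metrics table and the timestamp column are all placeholders. The answer shows create_hypertable schema-qualified as ts_config.create_hypertable; in a default installation the function is usually callable without a schema prefix.

    # 1. Drop the insert blocker trigger that was copied along with the table definition.
    psql -d mydb -c "DROP TRIGGER ts_insert_blocker ON target_schema.my_metrics;"

    # 2. Turn the plain target table into a hypertable, using the chunk interval
    #    reported by timescaledb_information.dimensions for the source table.
    psql -d mydb -c "SELECT create_hypertable('target_schema.my_metrics', 'timestamp', chunk_time_interval => 3600000000);"

    # 3. Stream the rows from the source hypertable into the new one.
    psql -d mydb -c "\copy (SELECT * FROM source_schema.my_metrics) TO STDOUT" \
        | psql -d mydb -c "\copy target_schema.my_metrics FROM STDIN"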