Failed to migrate CnosDB data from v2.3.3 to v2.4.0


I'm running CnosDB in singleton mode and trying to upgrade my v2.3.3 instance to v2.4.0, but the data migration fails.

Both v2.3.3 and v2.4.0 are installed on the same host (my laptop). I want to export the data from v2.3.3 and import it into v2.4.0, following this document.

  1. Start the v2.3.3 instance and export its metadata using curl -XGET http://ip:port/debug --o ./meta_dump.data; curl reported an error about the --o option.
# http to v2.3.3
❯ curl -XGET 'http://127.0.0.1:8901/debug' --o ./meta_dump.data
curl: option --o: is ambiguous
curl: try 'curl --help' or 'curl --manual' for more information

After changing --o to -o I got a file meta_dump.data. Then I need to edit that file: remove the first line, the last line, and the leading * character on each remaining line (see the sketch below).
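A minimal sketch of that cleanup, assuming a standard sed and that each remaining line carries exactly one leading * (verify the result before restoring):

# clean meta_dump.data: drop the first and last lines, strip the leading '*' from each line
❯ sed '1d;$d' ./meta_dump.data | sed 's/^\*//' > ./meta_dump.clean.data
❯ mv ./meta_dump.clean.data ./meta_dump.data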

  2. Stop the v2.3.3 instance and start the v2.4.0 instance (the two CnosDB instances listen on the same port). Import the metadata into the v2.4.0 instance using curl -XPOST http://ip:port/restore --data-binary "@./meta_dump.data"
# http to v2.4.0
❯ curl -XPOST 'http://127.0.0.1:8901/restore' --data-binary "@./meta_dump.data"
Restore Data Success, Total: 18 %
  3. Stop the v2.4.0 instance and start the v2.3.3 instance again. Export the data from the v2.3.3 instance:
# sql on v2.3.3
public ❯ COPY INTO 'file:///tmp/data_dump.data' FROM table_1 FILE_FORMAT = (TYPE = 'PARQUET');
+--------+
| rows   |
+--------+
| 406727 |
+--------+
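For reference, the import I planned to run against v2.4.0 afterwards would look roughly like the sketch below; it assumes the COPY direction is simply reversed, that table_1 already exists on the new instance, and that the SQL HTTP API is reachable on the default port 8902 with the default root user (none of this is from the steps above):

# http to v2.4.0 (planned import; never reached because of the crash below)
❯ curl -XPOST 'http://127.0.0.1:8902/api/v1/sql?db=public' -u 'root:' \
    -d "COPY INTO table_1 FROM 'file:///tmp/data_dump.data' FILE_FORMAT = (TYPE = 'PARQUET');"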

But when I restart the v2.4.0 instance to import the data file /tmp/data_dump.data, it reports an error on startup:

2023-10-30T00:03:33.305641000+08:00  INFO meta::service::single: single meta http server start addr: 127.0.0.1:8901
2023-10-30T00:03:33.307614000+08:00  INFO meta::service::single: watch all  args: client-id: watch.1001, cluster: cluster_xxx, tenants: {}, version: 54
2023-10-30T00:03:33.307647000+08:00  INFO meta::store::storage: METADATA WRITE(ver: 55): /cluster_xxx/data_nodes/1001 :{"id":1001,"grpc_addr":"localhost:8903","http_addr":"localhost:8902","attribute":"Hot"}
2023-10-30T00:03:33.308157000+08:00  INFO meta::store::storage: METADATA WRITE(ver: 56): /cluster_xxx/data_nodes_metrics/1001 :{"id":1001,"disk_free":475465846784,"time":1698595413,"status":"Healthy"}
2023-10-30T00:03:33.308574000+08:00  INFO meta::service::single: watch all  args: client-id: watch.1001, cluster: cluster_xxx, tenants: {}, version: 56
2023-10-30T00:03:33.308839000+08:00  INFO tskv::kvcore: Summary task handler started
2023-10-30T00:03:33.308865000+08:00  INFO tskv::compaction::job: Compaction: start merge compact task job
2023-10-30T00:03:33.308871000+08:00  INFO tskv::compaction::job: Compaction: start vnode compaction job
2023-10-30T00:03:33.308875000+08:00  INFO tskv::compaction::job: Compaction: enable_compaction is false, later to start vnode compaction job
2023-10-30T00:03:33.308881000+08:00  INFO tskv::compaction::job: Flush task handler started
2023-10-30T00:03:33.308884000+08:00  INFO tskv::kvcore: Job 'WAL' starting.
2023-10-30T00:03:33.308902000+08:00  INFO tskv::kvcore: Job 'WAL' started.
2023-10-30T00:03:33.309962000+08:00  INFO meta::store::storage: METADATA WRITE(ver: 57): /cluster_xxx/data_nodes_metrics/1001 :{"id":1001,"disk_free":475465846784,"time":1698595413,"status":"Healthy"}
2023-10-30T00:03:33.310037000+08:00  INFO meta::service::single: watch notify watch.1001 56.56
2023-10-30T00:03:33.310321000+08:00  INFO meta::service::single: watch all  args: client-id: watch.1001, cluster: cluster_xxx, tenants: {}, version: 57
2023-10-30T00:03:33.310336000+08:00  WARN models::runtime::executor: DedicatedExecutor dropped without waiting for worker termination
The application panicked (crashed).
Message:  make dbms: Meta { source: TenantNotFound { tenant: "cnosdb" } }
Location: main/src/server.rs:262

So the migration cannot proceed.


1 Answer

Answer by Baker X (Best Answer)

I tested it according to your description, and the panic does occur here. By comparing the debug output of 2.3 and 2.4, I found that the tenant meta-information differs between the two versions, as follows:

/cluster_xxx/tenants/cnosdb: {"id":34967873693446849787034438314352008249,"name":"cnosdb","options":{"comment":"system tenant","limiter_config":null}}
/cluster_xxx/tenants/cnosdb: {"id":78322384368497284380257291774744000001,"name":"cnosdb","options":{"comment":"system tenant","limiter_config":null,"tenant_is_hidden":false}}

So, when restoring the meta-information, add "tenant_is_hidden":false to the tenant's configuration information before importing it (a sketch is shown below). Hope it helps.
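A minimal sketch of that edit on the dump from step 1, assuming the tenant entry in meta_dump.data matches the 2.3 line above (check that the substitution only touches the tenant options before re-running the restore):

# patch the dump: add "tenant_is_hidden":false inside the tenant's options object
❯ sed -i.bak 's/"limiter_config":null}}/"limiter_config":null,"tenant_is_hidden":false}}/' ./meta_dump.data
# then re-import it into v2.4.0
❯ curl -XPOST 'http://127.0.0.1:8901/restore' --data-binary "@./meta_dump.data"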