Keep PostgreSQL (FDW) Foreign Schema In-Sync

Using Postgres 9.6 with the postgres_fdw extension, is there any means of keeping the tables in the local schema in sync with the tables on the remote database? I often need to add new tables to the wrapped schema on the remote database and would like to access them locally via FDW without having to drop and re-import my foreign schema or individual tables as they come and go.

I'm looking for a command such as REFRESH FOREIGN SCHEMA schema_name.

There are 5 answers

Answer by user1487861 (best answer)

I don't think there is a refresh, but the drop and import should take less than a second:

DROP SCHEMA IF EXISTS local_schema_name CASCADE;
CREATE SCHEMA local_schema_name;
IMPORT FOREIGN SCHEMA foreign_schema_name
    FROM SERVER foreign_server_name INTO local_schema_name;
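
Since DDL in Postgres is transactional, one possible refinement (a minimal sketch using the same placeholder names as above) is to wrap the drop and re-import in a single transaction, so other sessions never see a window in which the schema is missing:

BEGIN;
DROP SCHEMA IF EXISTS local_schema_name CASCADE;
CREATE SCHEMA local_schema_name;
IMPORT FOREIGN SCHEMA foreign_schema_name
    FROM SERVER foreign_server_name INTO local_schema_name;
COMMIT;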

Answer by sgrefen

With a recent Postgres (I'm using 13), the following works like a refresh from psql. The table names are quoted so that tables whose names resemble SQL keywords don't confuse the parser.

SELECT 'IMPORT FOREIGN SCHEMA <foreign_schema> EXCEPT ('|| 
   (SELECT string_agg('"'||table_name||'"',',') 
   FROM information_schema.tables 
   WHERE table_schema='<local_schema>') ||') 
FROM SERVER <foreign_server> INTO <local_schema>'\gexec

It should be straightforward to roll this into a function using EXECUTE with format() instead of SELECT and string concatenation.
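
As a minimal sketch of that idea (the function name refresh_foreign_schema and its parameters are made up for illustration), using EXECUTE with format() and the same information_schema lookup as above:

CREATE OR REPLACE FUNCTION refresh_foreign_schema(
    foreign_schema text, foreign_server text, local_schema text)
RETURNS void LANGUAGE plpgsql AS $$
DECLARE
    existing text;
BEGIN
    -- Tables already imported locally, quoted to survive keyword-like names.
    SELECT string_agg(quote_ident(table_name), ',')
      INTO existing
      FROM information_schema.tables
     WHERE table_schema = local_schema;

    IF existing IS NULL THEN
        -- Nothing imported yet: import the whole remote schema.
        EXECUTE format('IMPORT FOREIGN SCHEMA %I FROM SERVER %I INTO %I',
                       foreign_schema, foreign_server, local_schema);
    ELSE
        -- Import only the tables we do not have yet.
        EXECUTE format('IMPORT FOREIGN SCHEMA %I EXCEPT (%s) FROM SERVER %I INTO %I',
                       foreign_schema, existing, foreign_server, local_schema);
    END IF;
END;
$$;

A refresh is then just SELECT refresh_foreign_schema('<foreign_schema>', '<foreign_server>', '<local_schema>');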

Answer by T P

If you don't have any dependencies, the easiest thing to do is to DROP the schema and IMPORT the foreign schema again:

DROP SCHEMA IF EXISTS <local_schema_name> CASCADE;
CREATE SCHEMA <local_schema_name>;
IMPORT FOREIGN SCHEMA <foreign_schema_name> FROM SERVER <foreign_server_name> INTO <local_schema_name>;

In case you have dependencies (a view or something similar), you can ADD/ALTER the columns that have not been synced.

ADD COLUMN is mostly used when your remote table gains a new column and your local foreign table does not pick it up automatically:

ALTER FOREIGN TABLE <schema_name>.<table_name> ADD COLUMN <column_name> <column_type>;

For example:

ALTER FOREIGN TABLE library.book ADD COLUMN co_author text;

In case you want to change the type of a column on an existing foreign table:

ALTER FOREIGN TABLE <schema_name>.<table_name> ALTER COLUMN <column_name> TYPE <column_type>;

ALTER FOREIGN TABLE library.book ALTER COLUMN pages TYPE int;

You can read more here: Postgres Foreign Table Doc.

Answer by Luke Olson

Dropping and recreating definitely works, but I don't like it because I often have views that depend on my local foreign tables (which reference the foreign schema), so dropping the schema also removes all of those views. To get around this, you can re-import the foreign schema but limit it to just the new tables:

IMPORT FOREIGN SCHEMA <foreign_schema> 
    LIMIT TO (<new_table1>, <new_table2>)
    FROM SERVER <foreign_server>
    INTO <local_schema>;
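
To figure out what to put in LIMIT TO, one option (a sketch against the standard information_schema, so nothing beyond core Postgres is assumed) is to list the foreign tables already imported into the local schema and compare that list with the tables on the remote side:

SELECT foreign_table_name
  FROM information_schema.foreign_tables
 WHERE foreign_table_schema = '<local_schema>'
 ORDER BY foreign_table_name;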

Answer by Roberto

I found a process that lets you keep the remote schema in sync. It only solves the views problem, but the concept could probably be applied to other objects as well.

Once you've got your schema imported and have created several local views that use the remote tables, the process to refresh the schema looks like this.

First of all, create a new schema, other_schema (it's only temporary), and import the remote schema into it:

CREATE SCHEMA IF NOT EXISTS other_schema;
IMPORT FOREIGN SCHEMA public
    FROM SERVER remote_server INTO other_schema;

Now we should change all of our views to use the tables in the new schema, other_schema, replacing the references to the original remote_schema. From psql we can execute:

SELECT 'CREATE OR REPLACE VIEW '|| schemaname || '.' || viewname || ' AS ' ||
  replace(definition, 'remote_schema.', 'other_schema.') AS view_stmt
  FROM pg_views WHERE definition LIKE '%remote_schema.%' \gexec

Now we have our schema and local views updated, so we can remove the obsolete schema and rename the new one so the system looks as it did before the refresh:

DROP SCHEMA remote_schema CASCADE;
ALTER SCHEMA other_schema RENAME TO remote_schema;

That's all. Now, all content in the remote schema is updated and your views are using it.

This approach has some caveats: for a short period of time you keep two similar schemas locally (you'll need enough disk to support this), the temporary schema name must not clash with anything that already exists, and this only solves the problem for views. To apply it to other objects you'll need to read the proper catalog table/view and generate the SQL commands to update them.
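
Purely to illustrate that last point, here is a sketch for materialized views (assuming they reference remote_schema in the same way; indexes and privileges on the rebuilt objects would have to be recreated separately). It reads pg_matviews and generates rebuild statements, to be run before the old schema is dropped:

SELECT 'DROP MATERIALIZED VIEW ' || schemaname || '.' || matviewname || '; ' ||
       'CREATE MATERIALIZED VIEW ' || schemaname || '.' || matviewname || ' AS ' ||
       replace(definition, 'remote_schema.', 'other_schema.') AS mv_stmt
  FROM pg_matviews WHERE definition LIKE '%remote_schema.%' \gexec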