pg_dump out of memory error


I am trying to back up a database named products with pg_dump.

The total size of the database is 1.6 GB. One of the tables in the database, product_image, is 1 GB in size.

When I run pg_dump on the database, the backup fails with this error:

pg_dump: Dumping the contents of table "product_image" failed: PQgetCopyData() failed.
pg_dump: Error message from server: lost synchronization with server: got message type "d", length 6036499
pg_dump: The command was: COPY public.product_image (id, username, projectid, session, filename, filetype, filesize, filedata, uploadedon, "timestamp") T

If I back up the database while excluding the product_image table, the backup succeeds.
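For reference, the exclusion run looks roughly like this (illustrative only; the user, host, and output file name are placeholders, not from the original post):

pg_dump -U postgres --exclude-table=product_image -f products_without_images.dump products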

I tried increasing shared_buffers in postgresql.conf from 128MB to 1.5GB, but the issue still persists. How can this issue be resolved?
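For context, that change amounts to editing postgresql.conf roughly as follows (note that shared_buffers only takes effect after a server restart, not a reload):

shared_buffers = 1536MB    # was 128MB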


1 answer

Answer from adamba:

I ran into the same error, and it was due to a buggy patch from Red Hat for OpenSSL in early June 2015. There is related discussion on the PostgreSQL mailing list.

If you use SSL connections and the transferred data crosses a size threshold, which depends on your PostgreSQL version (default 512MB for PG < 9.4), the connection attempts to renegotiate the SSL session keys and dies with the errors you posted.

The fix that worked for me was setting ssl_renegotiation_limit to 0 (which disables renegotiation) in postgresql.conf, followed by a reload.
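Concretely, the change is a single line in postgresql.conf, followed by a configuration reload (a minimal sketch; how you reload depends on your setup, e.g. pg_ctl reload, your service manager, or pg_reload_conf() from psql):

# in postgresql.conf
ssl_renegotiation_limit = 0

-- then reload the configuration, e.g. from a psql session:
SELECT pg_reload_conf();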

ssl_renegotiation_limit (integer)

Specifies how much data can flow over an SSL-encrypted connection before renegotiation of the session keys will take place. Renegotiation decreases an attacker's chances of doing cryptanalysis when large amounts of traffic can be examined, but it also carries a large performance penalty. The sum of sent and received traffic is used to check the limit. If this parameter is set to 0, renegotiation is disabled. The default is 512MB.
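If you cannot edit the server configuration, a per-session override is another possible workaround, since the parameter is user-settable on the affected versions (a sketch, assuming a Unix shell and a client that passes PGOPTIONS through; the parameter was removed entirely in later PostgreSQL releases, so this only applies to versions that still have it):

PGOPTIONS="-c ssl_renegotiation_limit=0" pg_dump -U postgres -f products.dump products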