dumping a mysql table to CSV (stdout) and then tunneling the output to another server


I'm trying to move a database table to another server; the complication is that the machine currently hosting the table has little to no disk space left, so I'm looking for a solution that can work over the network.

I have tried mysqldumping the database from the src machine and piping it into mysql on the dest, but the table has 48m rows and, even with autocommit turned off and innodb_flush_log_at_trx_commit set to 2, I am getting some dog-slow times.

mysqldump -uuser -ppass --opt dbname dbtable  | mysql -h remove.server  -uuser -ppass dbname
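
For reference, by those settings I mean roughly the following on the dest server (a sketch; innodb_flush_log_at_trx_commit is a global variable, and setting it to 2 trades a little durability for speed):

-- on the dest server, before starting the import
SET GLOBAL innodb_flush_log_at_trx_commit = 2;  -- flush the InnoDB log ~once a second instead of on every commit

-- inside the importing session
SET autocommit = 0;
-- ... run the piped dump / INSERTs here ...
COMMIT;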

I then tried to mysqldump the rows a million at a time, scp them to the dest machine and do a mysql < file.sql, but this seemed to get progressively slower. I reached the 7th file (7,000,000 rows), and the following million-row import took 240 minutes.

I did a little bit of reading around and MySQL suggests that CSV-style LOAD DATA INFILE imports are ~20x faster than inserts. So now I'm stuck.

I can work out how to export as CSV using the standard SQL syntax:

SELECT *
INTO OUTFILE '/tmp/tmpfile'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
ESCAPED BY '\\'
LINES TERMINATED BY '\n'
FROM table;
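
I assume the matching import on the dest side would then be something along these lines (a sketch; the path and table name are placeholders, and LOCAL can be dropped if the file already sits on the dest server):

LOAD DATA LOCAL INFILE '/tmp/tmpfile'
INTO TABLE dbtable
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
ESCAPED BY '\\'
LINES TERMINATED BY '\n';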

But dumping to a local file obviously doesn't work, as it will quickly chew up my already low disk space. So I was looking for a switch that lets mysqldump dump CSVs to stdout. From what I have read, it doesn't appear possible. The only way I can think of doing it is creating a FIFO and pointing mysql to dump there, then writing a script that reads the FIFO at the same time and sends it to the dest server. I'm not really sure of the syntax for getting it to the other server though, which brings me to my next problem.

Assuming I can get MySQL to dump CSVs to stdout rather than a file, how do I then pipe that output to the dest server? I'm happy if I can simply get a single CSV file onto the dest server, as it has more space, because then I can simply use mysqlimport on the file.
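
The closest thing I can sketch out is below (untested, and note it sidesteps SELECT ... INTO OUTFILE entirely by using the mysql client's batch output, which is tab-separated and handles NULLs differently, so the import options have to match):

# on the src box: stream the table as tab-separated rows straight into a file on the dest
# (--quick avoids buffering the whole 48m-row result set, -N drops the column-name header row)
mysql -uuser -ppass --quick --batch -N -e "SELECT * FROM dbtable" dbname \
  | ssh user@remove.server 'cat > /data/dbtable.txt'

# on the dest box: mysqlimport derives the table name from the file name (dbtable.txt -> dbtable)
mysqlimport --local --fields-terminated-by='\t' -uuser -ppass dbname /data/dbtable.txt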

Which brings me to my next point... I would love to be able to do this:

mysqldump -uuser -ppass --opt dbname --tab /dev/stdout dbtable  | mysqlimport -h remove.server  -uuser -ppass dbname

But it looks like mysqlimport doesn't support piping to it; you have to pass it a file.

Just had a thought while typing this:

Would it be possible to use the FIFO method listed above, then get mysqlimport to read from the FIFO and insert into the dest server? I guess the only problem there would be that mysql can dump quicker than mysqlimport can insert on the dest server, eventually filling up the src server.
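
Roughly what I'm picturing (completely untested; it assumes mysqlimport is happy reading from a named pipe):

# on the dest box: create a named pipe and start mysqlimport reading from it;
# it will sit and wait until a writer opens the pipe
mkfifo /data/dbtable.txt
mysqlimport --local --fields-terminated-by='\t' -uuser -ppass dbname /data/dbtable.txt &

# then the streaming command from the sketch above writes into the pipe instead of a plain file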

I'm a bit lost on how to do a MySQL CSV dump to stdout and transfer it over the net to a dest server (preferably importing at the same time, but I'm happy to just dump it to a file on the dest).

Any help would be greatly appreciated!

Cheers, Ben


UPDATE: I'm using InnoDB tables, and I can't shut the src box down for any period longer than 10 minutes.


UPDATE: I am now using sshfs to mount a dir from the dest onto the src and getting mysql to dump a CSV into that folder - seems to work perfectly. Then it's just a matter of using mysqlimport to load it into the database at the dest.
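
In case it helps anyone, the setup looks roughly like this (paths are placeholders, and allow_other needs user_allow_other enabled in /etc/fuse.conf so the mysqld user can write into the mount):

# on the src box: mount a directory from the dest over ssh
sshfs -o allow_other user@remove.server:/data /mnt/dest-data

# then point the SELECT ... INTO OUTFILE from above at '/mnt/dest-data/dbtable.csv'

# on the dest box, once the dump has finished:
mysqlimport --fields-terminated-by=',' --fields-enclosed-by='"' --fields-escaped-by='\\' \
  -uuser -ppass dbname /data/dbtable.csv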


UPDATE: So now I have managed to get the data onto the dest box, but the import is still as slow as if it were done with INSERTs: 9m rows imported in 12 hours. Something isn't right here. Any ideas?
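
The only other knobs I can think of trying are the usual bulk-load ones, wrapped around the LOAD DATA statement in a single mysql session on the dest (just a guess that they will help):

SET unique_checks = 0;       -- skip uniqueness checks on secondary indexes during the load
SET foreign_key_checks = 0;  -- skip foreign key checks during the load
SET autocommit = 0;
-- LOAD DATA INFILE ... goes here
COMMIT;
SET unique_checks = 1;
SET foreign_key_checks = 1;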


UPDATE: For those interested... This doesn't work either: http://forums.mysql.com/read.php?22,154964


There are 2 answers

Ben Novakovic (best answer)

Turns out the problem was with the host I was inserting into. Not enough RAM + slow machine caused the queries to back up.

ajreal

You would need mysqlhotcopy, which supports scp; of course, the physical distance between the servers is going to cause problems with traffic.