I am trying to achieve the following using Python and the MySQLdb interface:
- Read the contents of a table that has a few million rows.
- Process and modify the output of every row.
- Put the modified rows into another table.
It seems sensible to me to iterate over each row, process it on the fly, and insert each modified row into the new table as I go.
This works:
import MySQLdb
import MySQLdb.cursors

conn = MySQLdb.connect(
    host="somehost", user="someuser",
    passwd="somepassword", db="somedb")

# Client-side cursor for the SELECT
cursor1 = conn.cursor(MySQLdb.cursors.Cursor)
query1 = "SELECT * FROM table1"
cursor1.execute(query1)

# Second cursor on the same connection for the INSERTs
cursor2 = conn.cursor(MySQLdb.cursors.Cursor)
query2 = "INSERT INTO table2 VALUES (%s, %s, %s)"
for row in cursor1:
    values = some_function(row)
    cursor2.execute(query2, values)

cursor2.close()
cursor1.close()
conn.commit()
conn.close()
But this is slow and memory-hungry, since the default client-side cursor buffers the entire result set before iteration starts. If I instead use a server-side cursor for the SELECT query:
cursor1 = conn.cursor(MySQLdb.cursors.SSCursor)
Then I get a 2014 error:
Exception _mysql_exceptions.ProgrammingError: (2014, "Commands out of sync; you can't run this command now") in <bound method SSCursor.__del__ of <MySQLdb.cursors.SSCursor object at 0x925d6ec>> ignored
So MySQL doesn't seem to like running another command while a server-side cursor is still streaming its result set, which seems to leave me stuck with the very slow client-side cursor.
Any suggestions?
You need a separate connection to the database: the first connection is busy streaming the result set, so you can't run the INSERT query on it.
Try this:
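(A sketch along those lines, adapted from the code in the question; some_function and the three-column table2 are carried over from there.)

import MySQLdb
import MySQLdb.cursors

# Connection 1 does nothing but stream the SELECT through a
# server-side cursor, so it never buffers the whole table.
read_conn = MySQLdb.connect(
    host="somehost", user="someuser",
    passwd="somepassword", db="somedb")

# Connection 2 is free to run the INSERTs while connection 1
# is still streaming rows.
write_conn = MySQLdb.connect(
    host="somehost", user="someuser",
    passwd="somepassword", db="somedb")

cursor1 = read_conn.cursor(MySQLdb.cursors.SSCursor)
cursor1.execute("SELECT * FROM table1")

cursor2 = write_conn.cursor()
query2 = "INSERT INTO table2 VALUES (%s, %s, %s)"

for row in cursor1:
    values = some_function(row)
    cursor2.execute(query2, values)

cursor2.close()
cursor1.close()
write_conn.commit()
read_conn.close()
write_conn.close()

A MySQL connection can only have one statement in flight at a time, which is why the 2014 error appears when the INSERT shares a connection with a streaming SSCursor; the second connection gives the INSERTs their own channel. If row-by-row INSERTs are still too slow, collecting rows into batches and passing them to cursor2.executemany(query2, batch) may help, though that's an optimization beyond the fix above.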