We are migrating data from a Firebird database using Python FDB to Azure SQL using pyodbc. There are many tables, and we could generate a PolyBase workflow for each one, but that is more work, albeit with many benefits.
However, I would like to see if we can write the data to Azure SQL in 20 MB segments through pyodbc.
Is there a way to detect the size of the result set that comes back from FDB, to make sure it is below 20 MB?
Other than writing each result set to a file (guessing at the number of records that would come to roughly 20 MB) and measuring that, could I somehow measure the allocated memory instead and refetch until I get the right size?
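For what it's worth, a minimal sketch of one way to approach this: sample a few rows, estimate the average in-memory size per row with sys.getsizeof (a rough proxy only, since Python object overhead is not the same as wire or storage size), derive a batch size that approximates 20 MB, and then stream batches with fetchmany and pyodbc's executemany. The table and column names, connection strings, and the 100-row sample size below are all placeholders, not anything from your setup.

```python
import sys
import fdb
import pyodbc

TARGET_BYTES = 20 * 1024 * 1024  # aim for ~20 MB per segment

# Hypothetical connection details -- replace with your own.
src = fdb.connect(dsn='/data/source.fdb', user='sysdba', password='masterkey')
dst = pyodbc.connect(
    'DRIVER={ODBC Driver 18 for SQL Server};SERVER=myserver.database.windows.net;'
    'DATABASE=mydb;UID=myuser;PWD=mypassword'
)

src_cur = src.cursor()
dst_cur = dst.cursor()
dst_cur.fast_executemany = True  # speeds up batched INSERTs in pyodbc

# Hypothetical table/columns for illustration.
src_cur.execute('SELECT ID, NAME, AMOUNT FROM SRC_TABLE')
insert_sql = 'INSERT INTO DST_TABLE (ID, NAME, AMOUNT) VALUES (?, ?, ?)'

# Estimate bytes per row from a small sample; getsizeof on the tuple plus its
# values is only an approximation of the real payload size.
sample = src_cur.fetchmany(100)
if sample:
    avg_row_bytes = sum(
        sys.getsizeof(row) + sum(sys.getsizeof(v) for v in row) for row in sample
    ) / len(sample)
    batch_size = max(1, int(TARGET_BYTES / avg_row_bytes))

    # The first batch is just the sample; subsequent batches use the computed size.
    batch = sample
    while batch:
        dst_cur.executemany(insert_sql, batch)
        dst.commit()
        batch = src_cur.fetchmany(batch_size)

src.close()
dst.close()
```

If the row sizes vary a lot (long VARCHARs, BLOBs), the average from a small sample can be misleading, so you may want to re-estimate periodically or cap the batch size conservatively.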
Congratulations, you found a solution in the end: