I have a table with millions of records, and when I pull it into a dataframe in Jupyter it takes a lot of memory. I am unable to load it because the server crashes, since there are millions of records in the database.
I got to know about the Dask package, which helps with loading huge dataframes in Python, but I am new to Dask and not sure how to set up a connection between Dask and a MySQL server.
I usually connect to the MySQL server from Jupyter in the following way. I would really appreciate it if someone could show me how to connect to the same table and server using the Dask framework.
import pyodbc
import pandas as pd

# Current approach: pull the whole table into pandas in one query
sql_conn = pyodbc.connect("DSN=CNVDED")
query = "SELECT * FROM Abc table"
df_training = pd.read_sql(query, sql_conn)
data = df_training
I would really appreciate any help with this. I can't export to CSV and then load it with Dask; I need a proper connection to the MySQL server.
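For reference, this is roughly what I have pieced together so far from the Dask documentation: dask.dataframe.read_sql_table takes a SQLAlchemy-style connection URI rather than a pyodbc connection object, and needs an index column to split the table into partitions. The user, password, host, database, and index column below are just placeholders, not my real connection details, and I am not sure this is the right approach or how to map my DSN-based pyodbc connection onto such a URI.

import dask.dataframe as dd

# Hypothetical SQLAlchemy-style URI for MySQL (placeholder credentials)
uri = "mysql+pymysql://user:password@host:3306/mydb"

# read_sql_table needs an indexed column to partition the table;
# "id" is only a stand-in for whatever the table's index column is
df_training = dd.read_sql_table("Abc", uri, index_col="id", npartitions=20)

Is this the right direction, and how do I get the connection details for my existing DSN into that form?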