I have set up a Spark Magic environment in my Jupyter Notebook. My aim is to read an Excel file and generate a Hadoop table from it. But when I follow the procedure of creating a pandas DataFrame by reading the Excel file, then creating a Spark DataFrame from it, and finally a Hadoop table from the Spark DataFrame, it throws an error: "Unable to call saveAsTable". While investigating further, I realised I cannot perform any actions (say .count() or .show()) on this Spark DataFrame. However, if I read an existing Hadoop table into a Spark DataFrame and then write it back into a Hadoop table, it works perfectly fine.

The same code works fine with a normal Spark initialisation, but not with Spark Magic.

Code:

    import os
    import pandas as pd

    pandas_df = pd.read_excel(os.path.join(os.getcwd(), 'pragya.xlsx'))
    spark_df = spark.createDataFrame(pandas_df)
    spark_df.write.mode("overwrite").saveAsTable("myDB.pragya_test")

Py4JJavaError: An error occurred while calling o937.saveAsTable. : org.apache.spark.SparkException: Job aborted.
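For comparison, the round trip that does work for me looks roughly like this (`myDB.existing_table` and the copy name are placeholders for one of my existing Hadoop tables):

    # Reading an existing Hadoop table and writing it back works fine
    existing_df = spark.sql("SELECT * FROM myDB.existing_table")
    existing_df.count()  # actions work on this DataFrame
    existing_df.write.mode("overwrite").saveAsTable("myDB.existing_table_copy")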
