Inside Fabric, I want to write a CSV file to a lakehouse.
I have tried the following:
df_sales.write.format("csv").mode("overwrite").save("Files/Sales/PerLoadDate/sales_20210101.csv")
But although I explicitly specify that I want the dataframe written as CSV, the output is not a single CSV file: instead I get a FOLDER named sales_20210101.csv containing two files, which looks to me like the Delta format.
Does anybody have any solution to this?
I tried your code, and the result is a CSV file with a random name inside the sales_20210101.csv folder.
Spark will always save to a folder, not a single file, because it writes output in parallel, one part file per partition (just 1 partition in this case). The part files inside the folder are still plain CSV, not Delta.
If you need a specific file name, consider using Data Factory pipelines or Dataflow Gen2 in Fabric instead.
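If you'd rather stay in the notebook, one common workaround (a sketch, not Fabric-specific API) is to coalesce the dataframe to a single partition, let Spark write its folder, and then rename the lone part file to the name you want. The helper below is a hypothetical example using only the Python standard library; it assumes the lakehouse Files area is reachable as a local path from the notebook (e.g. via the lakehouse mount), which you should verify in your environment.

```python
# Hypothetical workaround, assuming a notebook write like:
#   df_sales.coalesce(1).write.format("csv").mode("overwrite") \
#       .option("header", "true").save("Files/Sales/PerLoadDate/sales_tmp")
# followed by promoting the single part-*.csv to a stable file name.
import glob
import os
import shutil


def promote_single_csv(output_dir: str, target_path: str) -> str:
    """Move the lone part-*.csv Spark wrote in output_dir to target_path,
    then remove the Spark output folder (including _SUCCESS markers)."""
    parts = glob.glob(os.path.join(output_dir, "part-*.csv"))
    if len(parts) != 1:
        raise RuntimeError(f"expected exactly one part file, found {len(parts)}")
    shutil.move(parts[0], target_path)
    shutil.rmtree(output_dir)  # drop the now-redundant Spark output folder
    return target_path
```

After the Spark write, a call such as `promote_single_csv(".../sales_tmp", ".../sales_20210101.csv")` leaves a single CSV with the desired name. Note that coalescing to one partition funnels all data through a single task, so this only makes sense for modestly sized outputs.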