I am currently working with a CSV file generated from a SQL query. This file contains all the fields and calculations necessary for my analysis. My current task is to transform this data into a JSON file following a specific hierarchical structure (shown below).
I'm using Synapse Spark notebooks to try to shape the data into the JSON format required by a custom Function App. Data1 and Data2 are separate arrays, linked only by a single foreign key relationship.
The output for Data1 is formatted correctly, but I'm having an issue with the formatting for Data2. Unfortunately, I can't provide the source dataset.
The Python I have so far:
from pyspark.sql.functions import collect_set

# Read the flat CSV export produced by the SQL query
df = (
    spark.read.option('header', 'true')
         .option('delimiter', ',')
         .csv(read_path)
)

# Collect the previous Data1 values for each Data1Id
df2 = (
    df.groupBy("Data1Id")
      .agg(collect_set('PreviousData1').alias('PreviousData1'))
)
JSON format needed:
{
  "Data1": [
    {
      "Data1Id": "id1",
      "PreviousData1": [
        {
          "Id": "PreviousId",
          "PreviousShifts": []
        }
      ],
      "FutureData1": [
        {
          "Id": "futureId",
          "FutureData1": [],
          "PreviousData1": []
        }
      ]
    }
  ],
  "Data2": [
    {
      "Data2Id": "id2",
      "Function2": [
        {
          "FunctionName": "function",
          "Value": "3"
        }
      ]
    }
  ]
}
As explained above, I am trying to format my source data into JSON and am after some assistance with the best way to do this using Python. Thanks.
You can try something along the lines of the sketch below. Since no sample data was provided, the column names Data2Id, FunctionName and Value are assumptions, and df2 is taken to be the Data1 dataframe you already have working; adjust the names to match your CSV.
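from pyspark.sql import functions as F
import json

# Assumed column names (not given in the question): Data2Id,
# FunctionName and Value come straight from the CSV, and df2 is
# the already-working Data1 dataframe from the question.

# Build the Data2 array: one element per Data2Id, with its
# Function2 name/value pairs collected into a nested array of structs.
data2_df = (
    df.groupBy('Data2Id')
      .agg(
          F.collect_list(
              F.struct(
                  F.col('FunctionName').alias('FunctionName'),
                  F.col('Value').alias('Value')
              )
          ).alias('Function2')
      )
)

# Assemble the single JSON document on the driver. This is fine for a
# payload-sized document destined for a Function App, but avoid
# collect() if the dataset is very large.
payload = {
    'Data1': [row.asDict(recursive=True) for row in df2.collect()],
    'Data2': [row.asDict(recursive=True) for row in data2_df.collect()],
}

json_output = json.dumps(payload, indent=2)
print(json_output)

# To land the document in storage for the Function App (write_path is
# an assumption), Synapse's mssparkutils can write a single file:
# mssparkutils.fs.put(write_path, json_output, overwrite=True)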
Output:
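With hypothetical rows such as Data2Id = "id2", FunctionName = "function", Value = "3" (values chosen only to match the placeholders in your target format), the Data2 portion of the document comes out like this:

"Data2": [
  {
    "Data2Id": "id2",
    "Function2": [
      {
        "FunctionName": "function",
        "Value": "3"
      }
    ]
  }
]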
You may need to adapt the code to your actual column names and data. If you can share a schema or a few sample rows, it will be easier to reproduce the issue and refine the answer.