I am trying to decode some Avro records into a Spark DataFrame. The records were written with an old schema, so they don't contain the test column defined below:
{
  "default": false,
  "name": "test",
  "type": {
    "connect.default": false,
    "type": "boolean"
  }
},
I'm calling the from_avro function from the spark-avro package, but it keeps treating the Avro records as malformed. My question is: shouldn't the "default" in this schema cause a value of false to be assigned when the test field is absent from the encoded data, or am I misunderstanding something? (I've verified that this new field is what breaks the decoding, since decoding works fine with the old schema.)
My follow-up question is: what is the difference between the schema above and this one?
{
  "default": false,
  "name": "test",
  "type": [
    {
      "connect.default": false,
      "type": "boolean"
    },
    "null"
  ]
},