In PySpark, is it possible to ignore errors for specific rows?


I'm running a query on a table with PySpark, using some Apache Sedona functions in the query. The query fails on certain rows and the whole job is interrupted, but the error doesn't tell me which rows are problematic.
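For reference, this is a minimal sketch of the kind of query I'm running (the table name, column name, and function choices here are illustrative, not my actual schema):

```python
from sedona.spark import SedonaContext

# Illustrative setup; the real table and geometry column differ.
config = SedonaContext.builder().getOrCreate()
sedona = SedonaContext.create(config)

df = sedona.sql("""
    SELECT id, ST_Area(ST_GeomFromWKT(wkt_geom)) AS area
    FROM my_table
""")
df.show()  # the job aborts partway through when a row can't be processed
```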

Is there a way to implement a try/except mechanism that would let me skip the problematic rows and continue processing the rest?
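Something like the sketch below is what I have in mind. It moves the fragile step into a Python UDF so a try/except can turn a bad row into null instead of failing the job; Shapely stands in for the Sedona function here, and the table and column names are made up, since I don't know whether Sedona's SQL functions can be wrapped per-row:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, udf
from pyspark.sql.types import DoubleType
from shapely import wkt

spark = SparkSession.builder.getOrCreate()

# Catch per-row failures inside the UDF so the job keeps running.
@udf(returnType=DoubleType())
def safe_area(geom_wkt):
    try:
        return wkt.loads(geom_wkt).area
    except Exception:
        return None  # mark the problematic row instead of raising

df = spark.table("my_table")
result = df.withColumn("area", safe_area(col("geom_wkt")))

result.filter(col("area").isNull()).show()      # inspect the bad rows
clean = result.filter(col("area").isNotNull())  # continue with the rest
```

Or, if the rows parse but contain invalid geometries, would pre-filtering with Sedona's ST_IsValid be the more idiomatic route?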
