I'm trying to implement an upsert with AWS Glue and the Databricks Redshift connector, using preactions and postactions. Here is the code:

sample_dataframe.write.format("com.databricks.spark.redshift")\
  .option("url", "jdbc:redshift://staging-db.asdf.ap-southeast-1.redshift.amazonaws.com:5439/stagingdb?user=sample&password=pwd")\
  .option("preactions", PRE_ACTION)\
  .option("postactions", POST_ACTION)\
  .option("dbtable", temporary_table)\
  .option("extracopyoptions", "region 'ap-southeast-1'")\
  .option("aws_iam_role", "arn:aws:iam::1234:role/AWSService-Role-ForRedshift-etl-s3")\
  .option("tempdir", args["TempDir"])\
  .mode("append")\
  .save()
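For reference, PRE_ACTION and POST_ACTION are plain SQL strings implementing the usual staged upsert (delete matching rows, then insert). The table and key names below are placeholders, not my real ones:

```python
# Placeholder table/key names; my real ones differ.
temporary_table = "staging_schema.sample_temp"
target_table = "staging_schema.sample_target"

# Before the write: recreate an empty staging table shaped like the target.
PRE_ACTION = (
    f"DROP TABLE IF EXISTS {temporary_table}; "
    f"CREATE TABLE {temporary_table} (LIKE {target_table});"
)

# After the write: delete rows that will be replaced, insert the new
# rows from staging, then drop the staging table, all in one transaction.
POST_ACTION = (
    "BEGIN; "
    f"DELETE FROM {target_table} USING {temporary_table} "
    f"WHERE {target_table}.id = {temporary_table}.id; "
    f"INSERT INTO {target_table} SELECT * FROM {temporary_table}; "
    f"DROP TABLE {temporary_table}; "
    "END;"
)
```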

I'm getting the following error:

py4j.protocol.Py4JJavaError: An error occurred while calling o90.save.
: java.lang.UnsupportedOperationException: CSV data source does not support binary data type.
at org.apache.spark.sql.execution.datasources.csv.CSVUtils$.org$apache$spark$sql$execution$datasources$csv$CSVUtils$$verifyType$1(CSVUtils.scala:127)
at org.apache.spark.sql.execution.datasources.csv.CSVUtils$$anonfun$verifySchema$1.apply(CSVUtils.scala:131)
at org.apache.spark.sql.execution.datasources.csv.CSVUtils$$anonfun$verifySchema$1.apply(CSVUtils.scala:131)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)

Maybe I've missed something. Please help. TIA.

I've also tried passing preactions and postactions in connection_options (below), but that doesn't work either:

redshift_datasink = glueContext.write_dynamic_frame_from_jdbc_conf(
    frame=sample_dyn_frame,
    catalog_connection='Staging',
    connection_options=connect_options,
    redshift_tmp_dir=args["TempDir"],
    transformation_ctx="redshift_datasink")
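Here connect_options is a plain dict; roughly this shape (the dbtable value is a placeholder, and the SQL strings are the same kind of preaction/postaction statements as in the first attempt):

```python
# Placeholder SQL standing in for my real pre/post statements.
PRE_ACTION = "DROP TABLE IF EXISTS staging_schema.sample_temp;"
POST_ACTION = "DROP TABLE staging_schema.sample_temp;"

connect_options = {
    "dbtable": "staging_schema.sample_temp",  # placeholder staging table
    "database": "stagingdb",
    "preactions": PRE_ACTION,
    "postactions": POST_ACTION,
}
```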
