I am currently using Apache Spark 2.3.2 to build a pipeline that read-streams CSV files from a file system and then write-streams them to IBM Cloud Object Storage.
I am using the Stocator connector for this. Regular (non-streaming) reads and writes to IBM COS work fine with the configuration below, but the read stream and write stream operations throw the following error:
com.ibm.stocator.fs.common.exception.ConfigurationParseException: Configuration parse exception: Access KEY is empty. Please provide valid access key
The Stocator config:
sc.hadoopConfiguration.set("fs.cos.impl","com.ibm.stocator.fs.ObjectStoreFileSystem")
sc.hadoopConfiguration.set("fs.stocator.scheme.list","cos")
sc.hadoopConfiguration.set("fs.stocator.cos.impl","com.ibm.stocator.fs.cos.COSAPIClient")
sc.hadoopConfiguration.set("fs.stocator.cos.scheme", "cos")
sc.hadoopConfiguration.set("fs.cos.Cloud Object Storage-POCDL.endpoint", "{url}")
sc.hadoopConfiguration.set("fs.cos.Cloud Object Storage-POCDL.access.key", "{access_key}")
sc.hadoopConfiguration.set("fs.cos.Cloud Object Storage-POCDL.secret.key", {secret_key})
readstream:
val csvDF = sqlContext
.readStream
.option("sep", ",")
.schema(fschema)
.csv({path})
writestream:
val query = csvDF
.writeStream
.outputMode(OutputMode.Append())
.format("parquet")
.option("checkpointLocation", "cos://stream-csv.Cloud Object Storage-POCDL/")
.option("path", "cos://stream-csv.Cloud Object Storage-POCDL/")
.start()
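(For completeness, after start() I would block on the query with the standard Structured Streaming call below; this assumes the query value from the snippet above.)

// block the driver until the streaming query terminates or fails
query.awaitTermination()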
Error logs:
"2018-12-17 16:51:14 WARN FileStreamSinkLog:66 - Could not use FileContext API for managing metadata log files at path cos://stream-csv.Cloud Object Storage-POCDL/_spark_metadata. Using FileSystem API instead for managing log files. The log may be inconsistent under failures.
2018-12-17 16:51:14 INFO ObjectStoreVisitor:110 - Stocator registered as cos for cos://stream-csv.Cloud Object Storage-POCDL/_spark_metadata
2018-12-17 16:51:14 INFO COSAPIClient:251 - Init : cos://stream-csv.Cloud Object Storage-POCDL/_spark_metadata
Exception in thread "main" com.ibm.stocator.fs.common.exception.ConfigurationParseException: Configuration parse exception: Access KEY is empty. Please provide valid access key
Is there any way to resolve this error, or is there an alternative approach to arrive at a solution?
Updated with more logs:
scala> val csvDF = spark.readStream.option("sep", ",").schema(fschema).csv("C:\\Users\\abc\\Desktop\\stream")
csvDF: org.apache.spark.sql.DataFrame = [EMP_NO: string, EMP_SALARY: string ... 2 more fields]
scala> val query = csvDF.writeStream.outputMode(OutputMode.Append()).format("csv").option("checkpointLocation", "cos://stream-csv.Cloud Object Storage-POCDL/").option("path", "cos://stream-csv.Cloud Object Storage-POCDL/").start()
18/12/18 10:47:40 WARN FileStreamSinkLog: Could not use FileContext API for managing metadata log files at path cos://stream-csv.Cloud%20Object%20Storage-POCDL/_spark_metadata. Using FileSystem API instead for managing log files. The log may be inconsistent under failures.
18/12/18 10:47:40 DEBUG ObjectStoreVisitor: Stocator schema space : cos, provided cos. Implementation com.ibm.stocator.fs.cos.COSAPIClient
18/12/18 10:47:40 INFO ObjectStoreVisitor: Stocator registered as cos for cos://stream-csv.Cloud%2520Object%2520Storage-POCDL/_spark_metadata
18/12/18 10:47:40 DEBUG ObjectStoreVisitor: Load implementation class com.ibm.stocator.fs.cos.COSAPIClient
18/12/18 10:47:40 DEBUG ObjectStoreVisitor: Load direct init for COSAPIClient. Overwrite com.ibm.stocator.fs.cos.COSAPIClient
18/12/18 10:47:40 INFO COSAPIClient: Init : cos://stream-csv.Cloud%2520Object%2520Storage-POCDL/_spark_metadata
18/12/18 10:47:40 DEBUG ConfigurationHandler: COS driver: initialize start for cos://stream-csv.Cloud%2520Object%2520Storage-POCDL/_spark_metadata
18/12/18 10:47:40 DEBUG ConfigurationHandler: extracted host name from cos://stream-csv.Cloud%2520Object%2520Storage-POCDL/_spark_metadata is stream-csv.Cloud%20Object%20Storage-POCDL
18/12/18 10:47:40 DEBUG ConfigurationHandler: Initiaize for bucket: stream-csv, service: Cloud%20Object%20Storage-POCDL
18/12/18 10:47:40 DEBUG ConfigurationHandler: Filesystem cos://stream-csv.Cloud%2520Object%2520Storage-POCDL/_spark_metadata, using conf keys for fs.cos.Cloud%20Object%20Storage-POCDL. Alternative list [fs.s3a.Cloud%20Object%20Storage-POCDL, fs.s3d.Cloud%20Object%20Storage-POCDL]
18/12/18 10:47:40 DEBUG ConfigurationHandler: Initialize completed successfully for bucket stream-csv service Cloud%20Object%20Storage-POCDL
18/12/18 10:47:40 DEBUG MemoryCache: Guava initiated with size 2000 expiration 30 seconds
18/12/18 10:47:40 ERROR ObjectStoreVisitor: Configuration parse exception: Access KEY is empty. Please provide valid access key
com.ibm.stocator.fs.common.exception.ConfigurationParseException: Configuration parse exception: Access KEY is empty. Please provide valid access key
at com.ibm.stocator.fs.cos.COSAPIClient.initiate(COSAPIClient.java:276)
at com.ibm.stocator.fs.ObjectStoreVisitor.getStoreClient(ObjectStoreVisitor.java:130)
at com.ibm.stocator.fs.ObjectStoreFileSystem.initialize(ObjectStoreFileSystem.java:105)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.spark.sql.execution.streaming.HDFSMetadataLog$FileSystemManager.<init>(HDFSMetadataLog.scala:409)
at org.apache.spark.sql.execution.streaming.HDFSMetadataLog.createFileManager(HDFSMetadataLog.scala:292)
at org.apache.spark.sql.execution.streaming.HDFSMetadataLog.<init>(HDFSMetadataLog.scala:63)
at org.apache.spark.sql.execution.streaming.CompactibleFileStreamLog.<init>(CompactibleFileStreamLog.scala:46)
at org.apache.spark.sql.execution.streaming.FileStreamSinkLog.<init>(FileStreamSinkLog.scala:85)
at org.apache.spark.sql.execution.streaming.FileStreamSink.<init>(FileStreamSink.scala:98)
at org.apache.spark.sql.execution.datasources.DataSource.createSink(DataSource.scala:317)
at org.apache.spark.sql.streaming.DataStreamWriter.start(DataStreamWriter.scala:293)
... 49 elided
I just tested streaming and it seems to work for me; I tested somewhat similar code.
What Stocator version are you using? You can see it in the log, in the USER AGENT header.
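If it does not show up in your logs, one rough way to check from the spark-shell (a sketch; it assumes the version is embedded in the Stocator jar file name, e.g. stocator-1.0.x.jar):

// print classpath entries that look like the Stocator jar; the file name usually carries the version
System.getProperty("java.class.path")
  .split(java.io.File.pathSeparator)
  .filter(_.toLowerCase.contains("stocator"))
  .foreach(println)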