Read Excel files with Apache Spark


(new to Apache Spark)

I tried to create a small Scala Spark app which reads Excel files and inserts the data into a database, but I get some errors which I think are caused by mismatched library versions.

Scala v2.12 
Spark v3.0 
Spark-Excel v0.13.1

Maven configuration is:

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.12</artifactId>
            <version>3.0.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.12</artifactId>
            <version>3.0.0</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/com.crealytics/spark-excel -->
        <dependency>
            <groupId>com.crealytics</groupId>
            <artifactId>spark-excel_2.12</artifactId>
            <version>0.13.1</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/com.fasterxml.jackson.core/jackson-core -->
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-core</artifactId>
            <version>2.11.1</version>
        </dependency>
    </dependencies>

Main.scala

    val spark = SparkSession
      .builder
      .appName("SparkApp")
      .master("local[*]")
      .config("spark.sql.warehouse.dir", "file:///C:/temp") // Necessary to work around a Windows bug in Spark 2.0.0; omit if you're not on Windows.
      .getOrCreate()

    val path = "file_path"
    val excel = spark.read
      .format("com.crealytics.spark.excel")
      .option("useHeader", "true")
      .option("treatEmptyValuesAsNulls", "false")
      .option("inferSchema", "false")
      .option("location", path)
      .option("addColorColumns", "false")
      .load()

    println(s"excel count is ${excel.count}")

Error is:

Exception in thread "main" scala.MatchError: Map(treatemptyvaluesasnulls -> false, location -> file_path, useheader -> true, inferschema -> false, addcolorcolumns -> false) (of class org.apache.spark.sql.catalyst.util.CaseInsensitiveMap) 
    at com.crealytics.spark.excel.WorkbookReader$.apply(WorkbookReader.scala:38) 
    at com.crealytics.spark.excel.DefaultSource.createRelation(DefaultSource.scala:28) 
    at com.crealytics.spark.excel.DefaultSource.createRelation(DefaultSource.scala:18) 
    at com.crealytics.spark.excel.DefaultSource.createRelation(DefaultSource.scala:12) 
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:339) 
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:279) 
    at org.apache.spark.sql.DataFrameReader.$anonfun$load$2(DataFrameReader.scala:268) 
    at scala.Option.getOrElse(Option.scala:189) 
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:268) 
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:203) 
    at main.scala.Main$.main(Main.scala:42) 
    at main.scala.Main.main(Main.scala)

This happens only when I try to read Excel files via the spark-excel library; CSV and TSV files load fine.


There are 2 answers

Som (BEST ANSWER)

I think you forgot to specify the Excel file in load, like spark.read....load("Worktime.xlsx").

For example:

val df = spark.read
    .format("com.crealytics.spark.excel")
    .option("dataAddress", "'My Sheet'!B3:C35") // Optional, default: "A1"
    .option("header", "true") // Required
    .option("treatEmptyValuesAsNulls", "false") // Optional, default: true
    .option("inferSchema", "false") // Optional, default: false
    .option("addColorColumns", "true") // Optional, default: false
    .option("timestampFormat", "MM-dd-yyyy HH:mm:ss") // Optional, default: yyyy-mm-dd hh:mm:ss[.fffffffff]
    .option("maxRowsInMemory", 20) // Optional, default None. If set, uses a streaming reader which can help with big files
    .option("excerptSize", 10) // Optional, default: 10. If set and if schema inferred, number of rows to infer schema from
    .option("workbookPassword", "pass") // Optional, default None. Requires unlimited strength JCE for older JVMs
    .schema(myCustomSchema) // Optional, default: Either inferred schema, or all columns are Strings
    .load("Worktime.xlsx")

Ref: the spark-excel README.
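
Applied to the snippet from the question, the fix is to pass the path to load() rather than setting a "location" option (a minimal sketch, assuming spark-excel 0.13.1, which also expects "header" rather than "useHeader", as in the sample above):

    val excel = spark.read
      .format("com.crealytics.spark.excel")
      .option("header", "true") // was "useHeader" in older versions
      .option("treatEmptyValuesAsNulls", "false")
      .option("inferSchema", "false")
      .option("addColorColumns", "false")
      .load(path) // the file path goes here, not into an option

The MatchError in WorkbookReader suggests the options map never received a path it could match on, which is consistent with this fix.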

Jonathanpro

I know this doesn't directly answer your question, but it may still help you solve your issue.

  1. Read in the Excel file with pandas in Python.
  2. Convert the pandas DataFrame to a Spark DataFrame.
  3. Save it with PySpark as a Parquet file or Hive table.
  4. Load the data with Scala and Spark (see the sketch after this list).
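
A minimal Scala sketch of the last step, assuming the Python side has already written the Excel contents to Parquet at a hypothetical path worktime.parquet (e.g. pd.read_excel("Worktime.xlsx"), then spark.createDataFrame(pdf).write.parquet("worktime.parquet")):

    import org.apache.spark.sql.SparkSession

    object LoadViaParquet {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder
          .appName("LoadExcelViaParquet")
          .master("local[*]")
          .getOrCreate()

        // Parquet files are self-describing, so no Excel-specific options
        // or explicit schema are needed when reading them back.
        val df = spark.read.parquet("worktime.parquet") // hypothetical path
        df.show(5)

        spark.stop()
      }
    }

This sidesteps the spark-excel versioning problem entirely, at the cost of a two-step pipeline.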