How to read a CSV file with commas within a field using PySpark?


I have a CSV file containing commas within a column value. For example:

Column1,Column2,Column3    
123,"45,6",789  

The values are wrapped in double quotes when they contain commas. In the above example, the values are Column1=123, Column2=45,6, and Column3=789. But when I try to read the data, I get four values because of the extra comma in the Column2 field.

How can I get the right values when reading this data in PySpark? I am using Spark 1.6.3.

I am currently doing the following to create an RDD and then a DataFrame from it.

# Naive split: breaks quoted fields that contain commas
rdd = sc.textFile(input_file).map(lambda line: line.split(','))
df = sqlContext.createDataFrame(rdd)
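For illustration, running the naive split on the sample line in plain Python shows where the fourth value comes from; the quoted field is torn in two:

>>> '123,"45,6",789'.split(',')
['123', '"45', '6"', '789']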

1 Answer

Answered by Tobi:

You can read it directly into a DataFrame using an SQLContext:

from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)

df = sqlContext.read.format('com.databricks.spark.csv') \
    .options(header='true', inferschema='true', quote='"', delimiter=',') \
    .load(input_file)

Since delimiter ',' and quote '"' are the defaults, you can also omit them. Commas inside quotes are ignored by default. A description of the parameters can be found here: https://github.com/databricks/spark-csv
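As a quick sanity check against the sample file above (a sketch; the u'' prefix assumes Python 2, which Spark 1.6 typically runs on), the quoted field should come back as a single column, with the unquoted columns inferred as integers:

df.collect()
# expected something like: [Row(Column1=123, Column2=u'45,6', Column3=789)]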

Edit:

Without relying on Databricks, I can only think of a trickier solution, and it might not be the best approach:

  1. Replace the commas inside quoted numbers with decimal points
  2. Split on the remaining commas

So you could keep your original code and add a regex replacement:

import re

# Turn "45,6" into 45.6 so the remaining commas are safe to split on
rdd = sc.textFile(input_file).map(
    lambda line: re.sub(r'\"(\d+),(\d+)\"', r'\1.\2', line).split(','))
df = sqlContext.createDataFrame(rdd)

The supplied regex also gets rid of the double quotes. Note that it only covers numeric fields with a single embedded comma, and it changes the stored value from 45,6 to 45.6.
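You can verify the substitution on the sample line in plain Python, without Spark:

>>> import re
>>> re.sub(r'\"(\d+),(\d+)\"', r'\1.\2', '123,"45,6",789')
'123,45.6,789'

Splitting that result on ',' now yields exactly three values.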