I'm using Spark with Scala to read a specific Hive partition. The table is partitioned by year, month, day, a and b:

scala> spark.sql("select * from db.table where year=2019 and month=2 and day=28 and a='y' and b='z'").show

But I get this error:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 236 in stage 0.0 failed 4 times, most recent failure: Lost task 236.3 in stage 0.0 (TID 287, server, executor 17): org.apache.hadoop.security.AccessControlException: Permission denied: user=user, access=READ, inode="/path-to-table/table/year=2019/month=2/day=27/a=w/b=x/part-00002":user:group:-rw-rw----

As you can see, Spark is trying to read a different partition, one I don't have permissions on.

This shouldn't happen, because my filter is on the partition columns, so only the matching partition should be read.

I tried the same query in Hive and it works perfectly (no access problems):

Hive> select * from db.table where year=2019 and month=2 and day=28 and a='y' and b='z';

Why does Spark try to read this partition when Hive doesn't?

Is there a Spark configuration that I'm missing?

Edit: More information

Some files were created with Hive; others were copied from another server with different permissions (we cannot change the permissions), and then the data should have been refreshed.

We are using: Cloudera, Hive 1.1.0, Spark 2.3.0, Hadoop 2.6.0, Scala 2.11.8, Java 1.8.0_144

Show create table (relevant excerpt):

PARTITIONED BY (`year` int COMMENT '*', `month` int COMMENT '*', `day` int COMMENT '*', `a` string COMMENT '*', `b` string COMMENT '*')
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
WITH SERDEPROPERTIES ('serialization.format' = '1')
STORED AS
  INPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION 'hdfs://path'
TBLPROPERTIES ('transient_lastDdlTime' = '1559029332')

3 Answers

prakharjain (Best Solution)

A Parquet Hive table in Spark can be read through one of the following two flows:

  1. Hive flow - used when spark.sql.hive.convertMetastoreParquet is set to false. For partition pruning to work in this case, you have to set spark.sql.hive.metastorePartitionPruning=true.

    spark.sql.hive.metastorePartitionPruning: When true, some predicates will be pushed down into the Hive metastore so that unmatching partitions can be eliminated earlier. This only affects Hive tables not converted to filesource relations (see HiveUtils.CONVERT_METASTORE_PARQUET and HiveUtils.CONVERT_METASTORE_ORC for more information).

  2. Datasource flow - the default; this flow has partition pruning turned on.
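The two settings above can be applied per session. As a sketch (assuming a spark-shell with Hive support, and the table name from the question), forcing the Hive read flow with metastore-side partition pruning might look like:

```scala
// Sketch: force the Hive read flow instead of Spark's built-in Parquet reader.
spark.conf.set("spark.sql.hive.convertMetastoreParquet", "false")
// Push partition predicates down to the Hive metastore, so only the
// matching partitions are listed (and only their files are touched).
spark.conf.set("spark.sql.hive.metastorePartitionPruning", "true")

spark.sql("""
  select * from db.table
  where year = 2019 and month = 2 and day = 28 and a = 'y' and b = 'z'
""").show()
```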

DaRkMaN

This can happen when the metastore does not have the partition values registered for the partition columns. Can we run the following from Spark:


And then rerun the same query.
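The command the answer refers to is missing above. A plausible candidate (an assumption on my part, not confirmed by the answer) is a metastore repair that registers partitions found on disk:

```scala
// Assumption: the missing snippet re-syncs partition metadata.
// MSCK REPAIR TABLE scans the table's HDFS location and adds any
// partitions present on disk but missing from the metastore.
spark.sql("MSCK REPAIR TABLE db.table")
```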

Moustafa Mahmoud

Using the Spark-Hive API, you will not be able to read a specific partition of a table unless you have access to all of its partitions. Spark uses Hive's table-level access permissions, and in Hive you need full access to the table.

The reason is that you can't treat Spark-Hive access like Unix file access. If you need that, read the data as files with spark.read (in whatever format the data is stored).

You can simply use spark.read.csv("/path-to-table/table/year=2019/month=2/day=27/a=w/b=x/part-")

If you want to verify my answer, ignore Spark and try to run the same query in the Hive shell; it will not work, due to the Hive configuration.
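Since the table's DDL above shows Parquet storage, a file-based read of just the accessible partition directory could look like the sketch below (paths come from the question; note that when you read a leaf directory directly, the partition-column values live in the directory names rather than in the Parquet files, so they must be re-added by hand if you need them):

```scala
// Sketch: read one partition's files directly, bypassing the Hive metastore,
// so Spark never lists the sibling partitions we lack permissions on.
import org.apache.spark.sql.functions.lit

val df = spark.read
  .parquet("/path-to-table/table/year=2019/month=2/day=28/a=y/b=z")
  // Re-attach the partition columns as literals, since they are encoded
  // in the directory path rather than stored inside the Parquet files.
  .withColumn("year", lit(2019))
  .withColumn("month", lit(2))
  .withColumn("day", lit(28))
  .withColumn("a", lit("y"))
  .withColumn("b", lit("z"))

df.show()
```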