Read Impala table with SparkSQL


I was trying to execute a query that uses window functions (LEAD ... OVER ... PARTITION BY) and UNION. The query works fine when I run it in Impala, but it fails in Hive.

I need to write a Spark job that performs this query. It fails in SparkSQL as well, and my assumption is that this is because Spark 1.6 internally uses HiveQL for the task above.
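
For context, the job submits the query string to PySpark's SQLContext, roughly as below (a minimal sketch: the context setup and app name are my assumptions, but the sql() call is the one that appears in the traceback):

    # Minimal PySpark 1.6 sketch of how the query is submitted.
    # The app name is hypothetical; sqlContext.sql() is the call that fails.
    from pyspark import SparkContext
    from pyspark.sql import SQLContext

    sc = SparkContext(appName="impala-query-job")
    sqlContext = SQLContext(sc)

    query = """SELECT issue_id, ..."""  # the full query is shown below
    df = sqlContext.sql(query)          # raises the parse error on the OVER clause
    df.show()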

Is there some different way to read Impala tables from SparkSQL? The basic queries that work in Hive also run fine in SparkSQL.

For reference, the query I am trying to run:

SELECT issue_id,
  CASE WHEN COALESCE(LEAD(created, 1) OVER (PARTITION BY issue_id ORDER BY created ASC, field_sequence ASC), '') = ''
       THEN 'to'
       ELSE LEAD('from', 1) OVER (PARTITION BY issue_id ORDER BY created ASC, field_sequence ASC)
  END Status,
  created StartDate,
  LEAD(created, 1) OVER (PARTITION BY issue_id ORDER BY created ASC, field_sequence ASC) EndDate
FROM (
    SELECT issue_id, created, field, 'from', 'to', field_sequence FROM tab1 WHERE COALESCE(LOWER(field), '') = 'status'
    UNION
    SELECT issue_id, updated_date created, '' field, '' 'from', '' 'to', 0 field_sequence FROM tab2
) hc WHERE hc.issue_id = '123'

And the error message:

Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/opt/cloudera/parcels/<CDHVersion>/lib/spark/python/pyspark/sql/context.py", line 580, in sql
        return DataFrame(self._ssql_ctx.sql(sqlQuery), self)
      File "/opt/cloudera/parcels/<CDHVersion>/lib/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
      File "/opt/cloudera/parcels/<CDHVersion>/lib/spark/python/pyspark/sql/utils.py", line 45, in deco
        return f(*a, **kw)
      File "/opt/cloudera/parcels/<CDHVersion>/lib/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
    py4j.protocol.Py4JJavaError: An error occurred while calling o83.sql.
    : java.lang.RuntimeException: [1.55] failure: ``)'' expected but identifier OVER found

    at scala.sys.package$.error(package.scala:27)
    at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:36)
    at org.apache.spark.sql.catalyst.DefaultParserDialect.parse(ParserDialect.scala:67)
    at org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:211)
    at org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:211)
    at org.apache.spark.sql.execution.SparkSQLParser$$anonfun$org$apache$spark$sql$execution$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:114)
    at org.apache.spark.sql.execution.SparkSQLParser$$anonfun$org$apache$spark$sql$execution$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:113)
    at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
    at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
    at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
    at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
    at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
    at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
    at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
    at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
    at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
    at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:34)
    at org.apache.spark.sql.SQLContext$$anonfun$1.apply(SQLContext.scala:208)
    at org.apache.spark.sql.SQLContext$$anonfun$1.apply(SQLContext.scala:208)
    at org.apache.spark.sql.execution.datasources.DDLParser.parse(DDLParser.scala:43)
    at org.apache.spark.sql.SQLContext.parseSql(SQLContext.scala:231)
    at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
    at py4j.Gateway.invoke(Gateway.java:259)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:209)
    at java.lang.Thread.run(Thread.java:745)

1 Answer

Answered by MaFF:

You're missing an AS when defining Status, and you're missing several commas in the last SELECT statement. Moreover, the COALESCE is useless; you can use IF ... ELSE since there is only one case.

You should also break down your computation so that you don't have nested SELECT clauses; they are inefficient.
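
A sketch of what that restructuring could look like (untested; it assumes `from`/`to` are meant as column names, which need backtick-quoting since they are reserved words, and that created is never an empty string, so an IS NULL check can replace the COALESCE trick). Note also that in Spark 1.6 window functions are only available through a HiveContext, not the plain SQLContext whose parser produced the error above:

    # Hypothetical two-step version: materialize the UNION as a temp table,
    # then apply the window functions over it, instead of one nested SELECT.
    from pyspark.sql import HiveContext

    hiveContext = HiveContext(sc)  # sc: the existing SparkContext

    hiveContext.sql("""
        SELECT issue_id, created, field, `from`, `to`, field_sequence
        FROM tab1
        WHERE LOWER(field) = 'status'  -- the COALESCE filtered out NULLs anyway
        UNION
        SELECT issue_id, updated_date AS created, '' AS field,
               '' AS `from`, '' AS `to`, 0 AS field_sequence
        FROM tab2
    """).registerTempTable("hc")

    result = hiveContext.sql("""
        SELECT issue_id,
               IF(LEAD(created, 1) OVER (PARTITION BY issue_id ORDER BY created, field_sequence) IS NULL,
                  'to',
                  LEAD(`from`, 1) OVER (PARTITION BY issue_id ORDER BY created, field_sequence)) AS Status,
               created AS StartDate,
               LEAD(created, 1) OVER (PARTITION BY issue_id ORDER BY created, field_sequence) AS EndDate
        FROM hc
        WHERE issue_id = '123'
    """)
    result.show()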