GeoMesa + SparkSQL integration issue

My setup is a 3-node cluster running in AWS. I have already ingested my data (30 million rows) and have no problems running queries from a Jupyter notebook. Now I am trying to run a query with Spark and Java, as shown in the following snippet.

import java.util.HashMap;
import java.util.Map;

import org.apache.log4j.Logger;
import org.apache.spark.SparkConf;
import org.apache.spark.SparkContext;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.geotools.data.DataStore;
import org.geotools.data.DataStoreFinder;

public class SparkSqlTest {

    private static final Logger log = Logger.getLogger(SparkSqlTest.class);

    public static void main(String[] args) {
        // Connection parameters for the GeoMesa Accumulo data store
        Map<String, String> dsParams = new HashMap<>();
        dsParams.put("instanceId", "gis");
        dsParams.put("zookeepers", "server ip");
        dsParams.put("user", "root");
        dsParams.put("password", "secret");
        dsParams.put("tableName", "posiciones");

        try {
            // Check that the data store is reachable before starting Spark
            DataStore ds = DataStoreFinder.getDataStore(dsParams);
            if (ds == null) {
                throw new IllegalStateException("Could not connect to the data store");
            }

            SparkConf conf = new SparkConf();
            conf.setAppName("testSpark");
            conf.setMaster("yarn");
            SparkContext sc = SparkContext.getOrCreate(conf);
            SparkSession ss = SparkSession.builder().config(conf).getOrCreate();

            // Load the "posicion" feature type as a DataFrame through the GeoMesa source
            Dataset<Row> df = ss.read()
                .format("geomesa")
                .options(dsParams)
                .option("geomesa.feature", "posicion")
                .load();
            df.createOrReplaceTempView("posiciones");

            long t1 = System.currentTimeMillis();
            Dataset<Row> rows = ss.sql("select count(*) from posiciones where id_equipo = 148 and fecha_hora >= '2015-04-01' and fecha_hora <= '2015-04-30'");
            rows.show(); // show() is the action that actually triggers the query
            long t2 = System.currentTimeMillis();

            log.info("Query time: " + ((t2 - t1) / 1000) + " seconds.");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

I upload the jar to my master EC2 box (inside the Jupyter notebook container) and run it using the following commands:

docker cp myjar-0.1.0.jar jupyter:myjar-0.1.0.jar
docker exec jupyter sh -c '$SPARK_HOME/bin/spark-submit --master yarn --class mypackage.SparkSqlTest file:///myjar-0.1.0.jar --jars $GEOMESA_SPARK_JARS'
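
Note that spark-submit treats everything after the application jar as arguments to the main class, so the --jars option above probably never reaches spark-submit itself. Moving it before the jar path should ensure the GeoMesa jars are actually distributed, for example:

docker exec jupyter sh -c '$SPARK_HOME/bin/spark-submit --master yarn --class mypackage.SparkSqlTest --jars $GEOMESA_SPARK_JARS file:///myjar-0.1.0.jar'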

But I got the following error:

17/09/15 19:45:01 INFO HSQLDB4AD417742A.ENGINE: dataFileCache open start
17/09/15 19:45:02 INFO execution.SparkSqlParser: Parsing command: posiciones
17/09/15 19:45:02 INFO execution.SparkSqlParser: Parsing command: select count(*) from posiciones where id_equipo = 148 and fecha_hora >= '2015-04-01' and fecha_hora <= '2015-04-30'
java.lang.RuntimeException: Could not find a SpatialRDDProvider
at org.locationtech.geomesa.spark.GeoMesaSpark$$anonfun$apply$2.apply(GeoMesaSpark.scala:33)
at org.locationtech.geomesa.spark.GeoMesaSpark$$anonfun$apply$2.apply(GeoMesaSpark.scala:33)

Any ideas why this happens?

1 Answer

Answer by jramirez:

I finally sorted it out; my problem was that I had not included the following dependencies in my pom.xml. These modules are what provide the SpatialRDDProvider implementation that the error reports as missing:

    <dependency>
        <groupId>org.locationtech.geomesa</groupId>
        <artifactId>geomesa-accumulo-spark_2.11</artifactId>
        <version>${geomesa.version}</version>
    </dependency>

    <dependency>
        <groupId>org.locationtech.geomesa</groupId>
        <artifactId>geomesa-spark-converter_2.11</artifactId>
        <version>${geomesa.version}</version>
    </dependency>