So I'm trying to create a local instance of spark-jobserver to test jobs on, and I can't even get it to run.
The first thing I do when I get into my Vagrant instance is start Spark. I know this works because I can submit jobs to Spark with the spark-submit script it provides. I then go to my local spark-jobserver clone and run
vagrant@cassandra-spark:~/spark-jobserver$ sudo sbt
[info] Loading project definition from /home/vagrant/spark-jobserver/project
Missing bintray credentials /root/.bintray/.credentials. Some bintray features depend on this.
Missing bintray credentials /root/.bintray/.credentials. Some bintray features depend on this.
Missing bintray credentials /root/.bintray/.credentials. Some bintray features depend on this.
Missing bintray credentials /root/.bintray/.credentials. Some bintray features depend on this.
[info] Set current project to root (in build file:/home/vagrant/spark-jobserver/)
> reStart /home/vagrant/spark-jobserver/config/local.conf
[info] scalastyle using config /home/vagrant/spark-jobserver/scalastyle-config.xml
[info] Processed 21 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 35 ms
[success] created output: /home/vagrant/spark-jobserver/job-server/target
[info] scalastyle using config /home/vagrant/spark-jobserver/scalastyle-config.xml
[info] Processed 6 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 6 ms
[success] created output: /home/vagrant/spark-jobserver/job-server-extras/target
[warn] Credentials file /root/.bintray/.credentials does not exist
[warn] Credentials file /root/.bintray/.credentials does not exist
[warn] Credentials file /root/.bintray/.credentials does not exist
[warn] Credentials file /root/.bintray/.credentials does not exist
[warn] Credentials file /root/.bintray/.credentials does not exist
[warn] Credentials file /root/.bintray/.credentials does not exist
[warn] Credentials file /root/.bintray/.credentials does not exist
[info] scalastyle using config /home/vagrant/spark-jobserver/scalastyle-config.xml
[info] Processed 3 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 8 ms
[success] created output: /home/vagrant/spark-jobserver/job-server-api/target
[info] scalastyle using config /home/vagrant/spark-jobserver/scalastyle-config.xml
[info] Processed 11 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 7 ms
[success] created output: /home/vagrant/spark-jobserver/akka-app/target
[info] scalastyle using config /home/vagrant/spark-jobserver/scalastyle-config.xml
[info] Processed 3 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 9 ms
[success] created output: /home/vagrant/spark-jobserver/job-server-api/target
[info] scalastyle using config /home/vagrant/spark-jobserver/scalastyle-config.xml
[info] Processed 11 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 6 ms
[success] created output: /home/vagrant/spark-jobserver/akka-app/target
[info] scalastyle using config /home/vagrant/spark-jobserver/scalastyle-config.xml
[info] Processed 21 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 2 ms
[success] created output: /home/vagrant/spark-jobserver/job-server/target
[info] Application job-server not yet started
[info] Starting application job-server in the background ...
job-server Starting spark.jobserver.JobServer.main(/home/vagrant/spark-jobserver/config/local.conf)
job-server[ERROR] Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
[warn] No main class detected
[info] Application job-server-extras not yet started
[info] Starting application job-server-extras in the background ...
job-server-extras Starting spark.jobserver.JobServer.main(/home/vagrant/spark-jobserver/config/local.conf)
job-server-extras[ERROR] Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
[success] Total time: 6 s, completed Jun 12, 2015 2:28:32 PM
> job-server-extras[ERROR] log4j:WARN No appenders could be found for logger (spark.jobserver.JobServer$).
job-server-extras[ERROR] log4j:WARN Please initialize the log4j system properly.
job-server-extras[ERROR] log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
>
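Side note on the log4j:WARN lines above: they just mean no log4j configuration was found on the classpath, which is presumably why the server isn't logging properly. A minimal log4j.properties along these lines, placed somewhere on the job server's classpath, should give readable output (this is just the standard log4j 1.2 console appender, nothing spark-jobserver specific):

log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{HH:mm:ss} %-5p %c{1} - %m%n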
In another terminal I SSH into the Vagrant instance and run
vagrant@cassandra-spark:~$ curl --data-binary @/home/vagrant/SQLJob/target/scala-2.10/CassSparkTest-assembly-1.0.jar localhost:8090/jars
which only returns
The requested resource could not be found.
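As a sanity check that the job server itself is up and serving its REST API, the spark-jobserver README documents a listing route, GET /jars, so hitting that should at least come back with the (currently empty) list of uploaded jars rather than a 404:

curl localhost:8090/jars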
This is what is in my config/local.conf:
# Template for a Spark Job Server configuration file
# When deployed these settings are loaded when job server starts
#
# Spark Cluster / Job Server configuration
spark {
  # spark.master will be passed to each job's JobContext
  master = "spark://192.168.10.11:7077"
  # master = "mesos://vm28-hulk-pub:5050"
  # master = "yarn-client"

  # Default # of CPUs for jobs to use for Spark standalone cluster
  job-number-cpus = 1

  # predefined Spark contexts
  # contexts {
  #   my-low-latency-context {
  #     num-cpu-cores = 1          # Number of cores to allocate. Required.
  #     memory-per-node = 512m     # Executor memory per node, -Xmx style eg 512m, 1G, etc.
  #   }
  #   # define additional contexts here
  # }

  # universal context configuration. These settings can be overridden, see README.md
  context-settings {
    num-cpu-cores = 1          # Number of cores to allocate. Required.
    memory-per-node = 512m     # Executor memory per node, -Xmx style eg 512m, 1G, etc.

    spark.cassandra.connection.host = "127.0.0.1"

    # in case spark distribution should be accessed from HDFS (as opposed to being installed on every mesos slave)
    # spark.executor.uri = "hdfs://namenode:8020/apps/spark/spark.tgz"

    # uris of jars to be loaded into the classpath for this context. Uris is a string list, or a string separated by commas ','
    dependent-jar-uris = ["file:///home/vagrant/lib/spark-cassandra-connector-assembly-1.3.0-M2-SNAPSHOT.jar"]

    # If you wish to pass any settings directly to the sparkConf as-is, add them here in passthrough,
    # such as hadoop connection settings that don't use the "spark." prefix
    passthrough {
      #es.nodes = "192.1.1.1"
    }
  }

  # This needs to match SPARK_HOME for cluster SparkContexts to be created successfully
  home = "/home/vagrant/spark"
}

# Note that you can use this file to define settings not only for job server,
# but for your Spark jobs as well. Spark job configuration merges with this configuration file as defaults.
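For what it's worth, the context-settings block above only provides defaults; the spark-jobserver README also documents creating a context explicitly over the REST API with its own settings, along these lines (test-context is just a placeholder name):

curl -d "" 'localhost:8090/contexts/test-context?num-cpu-cores=1&memory-per-node=512m'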
Figured out what the problem was: the server was starting correctly (although not logging correctly). The problem was that I didn't have a "/" at the end of the path passed to curl, so to fix it, change the curl statement to this:
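In other words, the path has to continue past /jars: the spark-jobserver README documents the upload route as POST /jars/<appName>, so with an assumed application name of CassSparkTest the working command would look something like:

curl --data-binary @/home/vagrant/SQLJob/target/scala-2.10/CassSparkTest-assembly-1.0.jar localhost:8090/jars/CassSparkTest

Once the upload succeeds, a job from that jar can be started against the same app name; the classPath value below is a placeholder for whatever job class actually lives inside the CassSparkTest assembly:

curl -d "" 'localhost:8090/jobs?appName=CassSparkTest&classPath=some.package.SQLJob'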